| Commit message | Author | Age | Files | Lines |
* Fix an exception that was raising a tuple
* Remove unused imports
* Remove unused exceptions
* Remove extra blank lines
Change-Id: I9127be991e9081dc173525c9b57ea297f389d16d
Bug 1134802
Change-Id: I9cc3c9d9324314d293f01f047882eb6be06e02dd
There are cases where an exception raised by an rpc call is gracefully
handled by the caller. The rpc drivers should just let the caller deal
with it and decide whether it is an error worth logging a traceback
over. Otherwise, we unnecessarily raise alarm by leaving a mess in the
log file.
Fix bug 1137994.
Change-Id: I0e831ddcc43ffea78aae1fb5e46c5037c461b2a1
Reverts part of a94b9b4 which added an extra LOG.error
statement when max_depth is hit.
This is causing spurious errors to get logged in some of our
projects which have adopted this change.
Related to: https://bugs.launchpad.net/nova/+bug/1140133
Change-Id: Ie7939e41797da000dd8b269f905f351df0b7116d
Nova's simple in-memory cache replicates the memcache interface,
so clients can cache things in memcache or in memory using
the same commands. Using memcached requires having the client
library installed and the memcached_servers config option set.
Callers can also pass in a list of memcached servers when they
initialize the client.
Change-Id: I831142a36797b04006cba4792df803e09f6fd69b
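As a rough illustration of the interface being mimicked, an in-memory client
only needs get/set with optional expiry. This is a hypothetical stand-in, not
the nova code itself; the class name is made up for the example:

    import time as _time

    class InMemoryCacheClient(object):
        """Hypothetical stand-in mimicking the basic memcache client interface."""

        def __init__(self):
            self._store = {}  # key -> (expiry timestamp or None, value)

        def set(self, key, value, time=0):
            # memcache semantics: time=0 means the entry never expires.
            expiry = _time.time() + time if time else None
            self._store[key] = (expiry, value)
            return True

        def get(self, key):
            expiry, value = self._store.get(key, (None, None))
            if expiry is not None and _time.time() > expiry:
                self._store.pop(key, None)
                return None
            return value

A real deployment would swap this for memcache.Client(server_list) when the
memcached_servers option is set.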
Introduces a reference implementation
of a matchmaker (based on redis) that
supports dynamic host/topic registrations,
host expiration, and hooks for consuming
applications to acknowledge or neg-acknowledge
topic.host service availability.
Implements blueprint advanced-matchmaking
Change-Id: I8608d2089fca118b0e369f2eb5c6aedacf6821fe
The notify parameter was expected as
a kwarg, although __init__.py expects it
to be a standard (positional) argument.
Tests do not yet cover this, but
are forthcoming.
Change-Id: Id6a0a81ef250e43c7ab3dc9d5392f89752d0f313
The ZeroMQ driver needs to manipulate
the topic for notifications because
the period is used as a delimiter internally.
The code was already trying to perform
this modification via topic.replace,
but was not storing the result.
Change-Id: I02a174dd96ff9181f6d7460fd41434ea05fb39d4
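The underlying Python behaviour is simply that str.replace() returns a new
string rather than mutating in place, so the result has to be assigned back.
A minimal illustration of this class of bug:

    topic = 'notifications.info'

    topic.replace('.', '-')          # buggy: the return value is discarded
    assert topic == 'notifications.info'

    topic = topic.replace('.', '-')  # fixed: store the result
    assert topic == 'notifications-info'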
Prevent attacks through xml entity expansion etc.
Fixes LP# 1100282
Change-Id: I391531deac122697556c282184c8f8890ea66489
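The patch itself is not reproduced here. One commonly used safeguard against
entity-expansion payloads is to parse with the third-party defusedxml package,
which rejects entity declarations outright; this is shown as an illustrative
alternative, not the approach taken by this change:

    from defusedxml import ElementTree as safe_etree
    from defusedxml import EntitiesForbidden

    payload = ('<?xml version="1.0"?>'
               '<!DOCTYPE bomb [<!ENTITY a "aaaaaaaaaa">]>'
               '<bomb>&a;</bomb>')

    try:
        root = safe_etree.fromstring(payload)
    except EntitiesForbidden:
        # Refuse the document instead of expanding attacker-controlled entities.
        root = None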
Change-Id: I6085bb4a0b990985c8f7a013c89b7d5acafdf312
quantum run_tests.py fails because
openstack.common.setup._get_version_from_git fails. This is because
quantum unit tests run under quantum/tests/unit instead of the git root
dir, so the function should check parent dirs for .git.
The cinder folks seem to have hit this bug as well (1125416).
ERROR: test_network_gateway_update (quantum.tests.unit.nicira.test_networkgw.NetworkGatewayExtensionTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "quantum/quantum/tests/unit/nicira/test_networkgw.py", line 70, in setUp
config.parse(args=args)
File "quantum/quantum/common/config.py", line 99, in parse
version='%%prog %s' % quantum_version.release_string())
File "quantum/quantum/openstack/common/version.py", line 63, in release_string
self.release = self._get_version_from_pkg_resources()
File "quantum/quantum/openstack/common/version.py", line 56, in _get_version_from_pkg_resources
return setup.get_version(self.package)
File "quantum/quantum/openstack/common/setup.py", line 334, in get_version
raise Exception("Versioning for this project requires either an sdist"
Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository.
Change-Id: I2e24c00b5ba8f35381cac081ff72d86ea0d75d19
Fixes: bug #1131162 and bug #1125416
Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
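The parent-directory search the fix calls for amounts to something like the
following sketch (illustrative only, not the verbatim setup.py change; the
function name is made up):

    import os

    def find_git_top_dir(start='.'):
        """Walk upwards from start and return the directory containing .git."""
        current = os.path.abspath(start)
        while True:
            if os.path.exists(os.path.join(current, '.git')):
                return current
            parent = os.path.dirname(current)
            if parent == current:  # reached the filesystem root without finding .git
                return None
            current = parent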
Currently some clients lack support for non-ASCII characters. This patch
introduces 2 functions (strutils.py) that will help clients and servers
to "safely" encode and decode strings.
About the ensure_(str|unicode) functions:
They both first try the encoding used by stdin (or python's default
encoding if that is None) and fall back to utf-8 if that encoding fails
to decode a given text.
Neither of them will try to encode / decode non-basestring objects
and will raise a TypeError if one is passed.
Use case:
This is currently being used in glanceclient. I5c3ea93a716edfe284d19f6291d4e36028f91eb2
Needed For:
* Bug 1061156
* Bug 1130572
Change-Id: I78960dfdb6159fd600a6f5e5551ab5d5a3366ab5
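A rough sketch of the decode half of that behaviour, written for current
Python; the real helpers differ in naming and detail:

    import sys

    def ensure_unicode(text, errors='strict'):
        """Decode bytes to text, trying the stdin encoding and then utf-8."""
        if not isinstance(text, (bytes, str)):
            raise TypeError('%r is not a string or bytes object' % (text,))
        if isinstance(text, str):
            return text
        encoding = sys.stdin.encoding or sys.getdefaultencoding()
        try:
            return text.decode(encoding, errors)
        except UnicodeDecodeError:
            # Fall back to utf-8 when the primary encoding cannot decode the text.
            return text.decode('utf-8', errors)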
Moves DB exceptions that can be shared between DB implementations into
their own module.
Adds DBDeadlock() exception wrapping. Nova has its own code for
detecting deadlocks, and it's better to consolidate it with the
DBDuplicateKey checking.
Change-Id: I108bd0da2a14d62e460a997b1472f0b65bfc9b95
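The shared exception hierarchy can be pictured roughly as follows (a sketch
under the assumption that the module simply wraps driver-level errors):

    class DBError(Exception):
        """Base wrapper for errors raised by the underlying DB driver."""

        def __init__(self, inner_exception=None):
            self.inner_exception = inner_exception
            super(DBError, self).__init__(str(inner_exception))


    class DBDuplicateEntry(DBError):
        """A unique or primary key constraint was violated."""

        def __init__(self, columns=None, inner_exception=None):
            self.columns = columns or []
            super(DBDuplicateEntry, self).__init__(inner_exception)


    class DBDeadlock(DBError):
        """The backend detected a deadlock and rolled the transaction back."""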
When using rabbit's mirrored queues or qpid's replicated queues, there
are conditions under which you can receive the same message twice.
One such condition is where a message has been sent to a consumer but,
before the ack reaches the broker, the master fails over to a slave and
the slave resends the message. Note that the consumer may have sent the
ack, but it was lost as the master went down.
Dispatching the same message twice is obviously something we want to
avoid. In order to do so, we add a unique_id to each message sent and
have consumers maintain a fixed length queue of recently seen unique
message IDs. Before dispatching any received message, the queue is
checked and the message is skipped if it is a duplicate.
Fixes bug 1107064.
Change-Id: I5bfacadbdf7de8b34d6370b9aa869c271957692d
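The mechanism can be sketched as follows (a simplified illustration of the
idea, not the driver code; names are made up for the example):

    import collections
    import uuid

    def stamp_unique_id(msg):
        # Sender side: tag every outgoing message before it hits the broker.
        msg['_unique_id'] = uuid.uuid4().hex
        return msg


    class DuplicateMessageFilter(object):
        """Remember the last N unique ids seen and flag anything repeated."""

        def __init__(self, size=16):
            self._recent = collections.deque(maxlen=size)

        def is_duplicate(self, msg):
            unique_id = msg.get('_unique_id')
            if unique_id is None:          # down-level sender, nothing to check
                return False
            if unique_id in self._recent:
                return True
            self._recent.append(unique_id)
            return False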
This reverts Ib0260a0c62e3d312d2e3448a125bed64d861319e (commit a603678)
The issue we're trying to fix here is bug #1107064 - when using mirrored
queues with AMQP, acks can be lost while a master is failing over to a
slave, causing the new slave to re-send messages which had previously
been acked.
The "replay detection" code applies to more than just amqp and also has
the appearance of a security measure (e.g. the use of the term 'nonce')
when clearly it serves no security purpose until we actually have
message signing.
Revert the "replay detection" approach in favour of the more targeted
amqp bugfix.
Change-Id: I8b8d15835c8b4c85cd388f5df08b60ff4c74e38d
This reverts commit 22c497097b0d0bd461be40a7a03290aa0b4179f2.
I'm not convinced that this isn't just a micro-optimization when
compared to the rest of the work that a given OpenStack service performs
as the result of these messages.
The implementation is also problematic. It depends on using a '\0' byte
as a separator and assumes that '\0' will not exist anywhere else in the
message. This seems to be asking for trouble. If future data could
ever have a '\0' in it, this will be broken. Further, if a user could
get a '\0' in a message directly with user-supplied input, this could
result in a security vulnerability.
Lastly, this has a significant impact on consumers of notifications that
are outside of OpenStack code, which have been the primary use case of
notifications (Ceilometer is changing that to a degree). I don't think
consumers of notifications should have to implement this deserialization
method.
Change-Id: Ib3163ca98f568bf9f789d4b64bcc6d72e0fcb459
Fixes bug 1128605
The dbpool code in sqlalchemy session is the wrong place to implement
thread pooling as it wraps each individual SQL call to run in its own
thread. When combined with SQL server locking, all threads can be eaten
waiting on locks with none available to run a 'COMMIT'.
The correct place to do thread pooling is around each DB API call.
This patch removes dbpool from sqlalchemy and creates a common DB API
loader for all openstack projects which implements the following
configuration options:
db_backend: Full path to DB API backend module (or a known short name if
a project chooses to implement a mapping)
dbapi_use_tpool: True or False whether to use thread pooling around all
DB API calls.
DB backend modules must implement a 'get_backend()' method.
Example usage for nova/db/api.py would be:
"""
from nova.openstack.common.db import api as db_api
_KNOWN_BACKENDS = {'sqlalchemy': 'nova.db.sqlalchemy.api'}
IMPL = db_api.DBAPI(backend_mapping=_KNOWN_BACKENDS)
"""
NOTE: Enabling thread pooling will be broken until this issue is
resolved in eventlet _OR_ until we modify our eventlet.monkey_patch()
calls to include 'thread=False':
https://bitbucket.org/eventlet/eventlet/issue/137/
Change-Id: Idf14563ea07cf8ccf2a77b3f53659d8528927fc7
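The loader itself boils down to something like the sketch below; as a
simplification it takes the backend name directly instead of reading the
db_backend option, and it omits the thread-pool wrapping:

    import importlib

    class DBAPI(object):
        """Load a DB backend module and proxy attribute access to it."""

        def __init__(self, backend_name, backend_mapping=None):
            backend_mapping = backend_mapping or {}
            # Allow short names such as 'sqlalchemy' to map to a full module path.
            module_path = backend_mapping.get(backend_name, backend_name)
            self._backend = importlib.import_module(module_path).get_backend()

        def __getattr__(self, name):
            # db_api.instance_get(...) becomes backend.instance_get(...)
            return getattr(self._backend, name)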
Direct messages were being stripped of the host
value when performing IPC forwarding. This caused
direct topics to be round-robined to all
services running on the system consuming from
the same base topic name.
i.e. if 'scheduler.host1' and 'scheduler.host2'
were running on the SAME machine, messages to
'scheduler.host1' may have been routed to
'scheduler.host2'.
Now, multiple processes specifying different
rpc_zmq_host parameters will consume on
separate direct topics and will not
round-robin to other processes.
Adds a zmq-specific test to ensure that messages to
directed topics are not consumed by other
consumers of direct topics sharing a bare
topic on the same host.
Fixes bug 1123715
Change-Id: I939c24397e58492fc16561666aed3ca891325e9c
We currently set up the exc handler before we configure logging.
This means that an error during setup will raise an exception
which will then fail to log properly because we haven't set up
handlers yet. Therefore, simply rely on the default exception
handler during setup and install our exception handler
immediately after logging is configured.
Fixes bug 1130464
Change-Id: I4b80c646a7d7d5048c8fbadc67dbb9f607d2af69
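In outline, the ordering change amounts to this minimal sketch using the
stock logging module (not the oslo log code itself):

    import logging
    import sys

    def setup(product_name):
        # Configure logging first; until then the default excepthook applies.
        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger(product_name)

        def _log_uncaught(exc_type, value, tb):
            log.critical('Uncaught exception',
                         exc_info=(exc_type, value, tb))

        # Only now divert uncaught exceptions into the configured logger.
        sys.excepthook = _log_uncaught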
Currently TopicConsumer and FanoutConsumer can be used with mirrored
queue support, but not DirectConsumer.
This patch fixes this issue.
Fixes bug 1124162
Change-Id: I68ae23467ae810ce0ec917a4cb34a488283f401c
By flattening the dictionary and turning
it into a fast-serialized string, we save
ourselves from Kombu's very slow JSON serialization
routines.
Change-Id: I64796265c7cc89a05406faabd8d7b253fe3b8acb
Bumps the envelope revision to 2.1
Change-Id: Ib0260a0c62e3d312d2e3448a125bed64d861319e
The eventlet backdoor has a 'pgt' function for listing green
threads and their stack traces. This adds a new 'pnt' function
for doing the same with native threads.
Change-Id: If6dcfd8dde61c96adfc6e052e5ec7db82914cd55
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
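Dumping native threads can be done along these lines (an illustrative sketch,
not the backdoor module's exact code):

    import sys
    import threading
    import traceback

    def print_native_threads():
        """Print the current stack of every OS-level thread."""
        frames = sys._current_frames()  # maps thread id -> topmost frame
        for thread in threading.enumerate():
            print('Thread: %s (%s)' % (thread.name, thread.ident))
            frame = frames.get(thread.ident)
            if frame is not None:
                traceback.print_stack(frame)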
This patch began as a set
of tests verifying the functionality of
sending and receiving RPC envelopes when
using impl_zmq. It was discovered that
when enabled, RPC envelopes were not
actually working.
The ZeroMQ driver includes its own envelopes.
This patch introduces versioning to that
envelope, eliminating the previously reserved
'style' field.
A new iteration of the zeromq-envelope is
introduced, 'impl_zmq_v2'. It specifies
that the zeromq-envelope should be followed
by an unpacked array representing key value
pairs of the standard RPC Envelope.
Because the key-values of the RPC Envelope
can be successfully transformed with bytes(),
this prevents the need to double-serialize
the content traversing the message bus.
Also removes some unused imports.
Closes bug 1123709
Closes bug 1055446
Change-Id: Ib04e3d092c9596146f1048d3502ac248496d313b
Implements blueprint cfg-filter-view
At the moment, if a module requires a configuration option from another
module, we do:
CONF.import_opt('option_name', 'source.module')
but, in fact, all options from the imported module are available for
use.
The new ConfigFilter class makes it possible to enforce which options
are available within a module e.g. with
CONF = cfgfilter.ConfigFilter(cfg.CONF)
CONF.import_opt('foo', 'source.module')
CONF.register_opt(StrOpt('bar'))
then the foo and bar options would be the only options available via
this CONF object while still being available via the global cfg.CONF
object.
Change-Id: Ie3aa2cd090a626da8afd27ecb78853cbf279bc8b
For AMQP based RPC, specifically RabbitMQ and Qpid, this change replaces the
dynamically created RPC call reply queue with a single queue that is created
on the first RPC call and used on all subsequent calls. It provides backward
compatibility on the callee side by recognizing downlevel callers and on the
caller side by adding a config option to revert to the old dynamically
created queue based upon the msg_id.
Change-Id: Idb09a71472866bd3950f58d4f7f45a3181eb40fc
The cfg API is now available via the oslo-config library, so switch to
it and remove the copied-and-pasted version.
Add the 2013.1b3 tarball to tools/pip-requires - this will be changed
to 'oslo-config>=2013.1' when oslo-config is published to pypi. This
will happen in time for grizzly final.
Remove the 'deps = pep8==1.3.3' and 'deps = pyflakes' from tox.ini as
it means all the other deps get installed with easy_install which can't
install oslo-config from the URL.
Change-Id: I4815aeb8a9341a31a250e920157f15ee15cfc5bc
In common setup the check for the .git directory is too
restrictive. Instead of checking that it is a directory just
check to see if it exists. That way if the project is part
of a submodule it will continue to work correctly.
Change-Id: If6b6531ab5778ac17537e3f18bde1844620c8316
Fixes: bug 1126416
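The distinction matters because inside a git submodule '.git' is a plain file
(containing a 'gitdir:' pointer) rather than a directory, so the relaxed check
is simply the following (sketch; the function name is made up):

    import os

    def looks_like_git_checkout(path):
        # A normal clone has a .git directory; a submodule has a .git file.
        # os.path.exists() accepts either, os.path.isdir() only the former.
        return os.path.exists(os.path.join(path, '.git'))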
Found when testing a bug in cinder (1125416): code
relying on throw_on_error won't work because returncode
is None if it is checked before the communicate() method
is called.
Change-Id: I8c9dd00396346ec3ad7bbe1dc17643c385da8d6f
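The Popen behaviour being described is easy to reproduce: returncode stays
None until the process has been waited on, e.g. via communicate():

    import subprocess
    import sys

    proc = subprocess.Popen([sys.executable, '-c', 'pass'])
    print(proc.returncode)   # None -- the child has not been reaped yet

    proc.communicate()       # waits for the child and sets returncode
    print(proc.returncode)   # 0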
When using the mirrored queue feature in impl_kombu.py, there is a case
where messages are lost because the amqp client does not handle exceptions
properly when rabbitmq dies while waiting for the return value of call().
This patch fixes this and enables the amqp client to reconnect to a slave
rabbitmq.
Fixes bug 1102051
Change-Id: Ia7a1b9067f7ea4639195a1548de29e0364368e51
blueprint move-listener-framework-oslo
bug 1047015
bug 1111632
Ceilometer and Quantum use private methods of the RPC connection
object to configure themselves to listen to a queue shared among a
group of workers. This change adds a public method to the RPC
connection to support this use case, without resorting to using
private API calls.
Change-Id: I3a89f1dfdcf8accca70cf305f7a31315bea093d8
Signed-off-by: Doug Hellmann <doug.hellmann@dreamhost.com>
Explicitly sort options when adding them to argparse.
It's a bit silly to print them in dict iteration order.
Change-Id: Id508331d7ee3b24e76be7fa958d27d29905bd3d2
Signed-off-by: YAMAMOTO Takashi <yamamoto@valinux.co.jp>
to_primitive currently has a maximum depth of 3, but there is at least
one case in Nova (Security Group Rules) which requires a depth beyond
this.
https://bugs.launchpad.net/nova/+bug/1118608
Specifically security_group_rule_get_by_security_group
returns a set of rules which have the structure:
rule -> grantee_group -> Instance -> Instance_type
Rather than just bumping the depth limit, which might break some
other user of to_primitive, we make it a specific parameter that
defaults to the current value but can be overridden when required,
and log a warning when the depth is exceeded.
Change-Id: I1eaebd484e20cb2eae09a693289709973de9943c
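The shape of the resulting API can be sketched like this (illustrative only;
the real to_primitive handles many more types):

    import logging

    LOG = logging.getLogger(__name__)

    def to_primitive(value, level=0, max_depth=3):
        """Recursively convert value to primitives, bounded by max_depth."""
        if level > max_depth:
            LOG.warning('Maximum serialization depth %d exceeded', max_depth)
            return '?'
        if isinstance(value, dict):
            return dict((k, to_primitive(v, level + 1, max_depth))
                        for k, v in value.items())
        if isinstance(value, (list, tuple, set)):
            return [to_primitive(v, level + 1, max_depth) for v in value]
        return value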