Fixes bug #1160475.
Positional arguments were being dropped during the deserialization of
valid remote exceptions, while keyword arguments were supplied
correctly.
Change-Id: I7b95fc4ed3fb9e5c75f5711ed6aace7aa5593727
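A minimal sketch of the intended behaviour, assuming the serialized
failure carries 'args' and 'kwargs' fields (the field names and helper
below are illustrative, not the actual oslo rpc wire format):

    def rebuild_remote_exception(klass, failure):
        # Pass positional arguments through as well as keyword arguments
        # when reconstructing the remote exception class.
        args = failure.get('args', [])
        kwargs = failure.get('kwargs', {})
        return klass(*args, **kwargs)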
One of the problems with fakes is that sometimes you can fake away
a bug and have happily passing tests and broken code.
Change-Id: Ib544739699b63d3f5e80edc16e19377d45782334
The test changes the current directory to a temporary one, and the
ChangeLog file is created there; that is why the file does not reside
in the project's root directory.
The correct behavior is to look for ChangeLog in the cwd, as
test_generate_authors does for the AUTHORS file.
Use openstack.common.setup.__file__ to determine the
root directory of the openstack package.
Change-Id: Ic7c9b01485a08b8921c88ac624e0aecd9409fa7c
Fixes: bug #1161362
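For illustration, one way to derive that directory from the module
path; the number of dirname() calls depends on the package layout, so
treat this purely as a sketch:

    import os
    from openstack.common import setup

    # openstack/common/setup.py -> openstack/common -> openstack
    package_root = os.path.dirname(
        os.path.dirname(os.path.abspath(setup.__file__)))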
Now that Havana has started, remove deprecated Grizzly features:
* deprecated alias list_notifier_drivers (for notification_driver)
* openstack/common/notifier/rabbit_notifier.py (use rpc_notifier instead)
Change-Id: I0dbb997ba774f58766bddf950049ec1e2d5b79de
Change-Id: Idf2235300fa2c1c73563ddcff6ce9e84bd11ea0e
flake8 is pluggable and handles pep8 and pyflakes, as well as configuration
through tox.ini. It also removes the need for flakes.py.
Change-Id: If5f7d8ad348b4fb8119fa4ec7b5e9d17bdc72a39
Co-authored-by lines are the way we've decided to indicate shared
authorship of a patch, so content from them should be included in
the generated AUTHORS file.
Fixes bug 1158319.
Change-Id: I9dacf78c01f3ad74e696f16a7aa39edb98e8d185
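A rough sketch of harvesting those trailers from git history (the
helper below is illustrative; the real AUTHORS generation lives in
openstack.common.setup):

    import subprocess

    def co_authors():
        # Collect unique "Co-authored-by:" trailers from all commit bodies.
        log = subprocess.check_output(['git', 'log', '--format=%b'])
        authors = set()
        for line in log.decode('utf-8', 'replace').splitlines():
            if line.lower().startswith('co-authored-by:'):
                authors.add(line.split(':', 1)[1].strip())
        return sorted(authors)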
Fixes bug 1158179
If the lock_path does not exist and a resource is contested, the
process that was waiting on the lock ends up not holding it, because
the lock directory was deleted.
Change-Id: I75d720d4df499e85386d3e2cc86b927b017e12ac
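A minimal sketch of the behaviour the fix needs, assuming POSIX file
locks (names are illustrative, not the actual lockutils code):
re-create the lock directory before opening the lock file, so a waiter
is never left without its lock just because another process removed
the directory.

    import errno
    import fcntl
    import os

    def acquire_external_lock(lock_path, name):
        # Ensure the directory exists every time; another process may
        # have deleted it while we were waiting for our turn.
        try:
            os.makedirs(lock_path)
        except OSError as exc:
            if exc.errno != errno.EEXIST:
                raise
        handle = open(os.path.join(lock_path, name), 'w')
        fcntl.lockf(handle, fcntl.LOCK_EX)
        return handle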
Add a unit test which fails because of #1107950 - in this case,
lockutils is deleting the lock directory every time the lock is dropped
even though another process might have acquired a lock using that
directory.
Change-Id: Ic82409f9462e570bc102ab469334c53aafc6d7ac
This patch moves the traceback for an rpc timeout from inside an
iterator, which gave a useless traceback, into the main flow of the
program.
It also adds the rpc method being called and the topic used to the
exception's message.
When the caller logs the message higher up the stack, the log
information and traceback will be more useful.
Finally it removes the timeout logging in the amqp.py module, in the
spirit of bug #1137994 and https://review.openstack.org/#/c/23295/
Works towards: bug #1148516
Change-Id: I29a3b1b97c6114c4479e2b71c1257c4d72131535
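The pattern, roughly (illustrative names only, not the actual oslo rpc
code): catch the timeout in the caller's flow and re-raise it with the
method and topic attached, so the logged traceback points at the call
site rather than inside an iterator.

    class RPCTimeout(Exception):
        pass

    def call(waiter, method, topic, timeout):
        try:
            return waiter.wait(timeout=timeout)
        except RPCTimeout:
            raise RPCTimeout('Timed out waiting for a reply to message '
                             'for method %r on topic %r' % (method, topic))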
Grizzly had the ability to receive messages with an envelope, but did
not send them. Now update the code to send them.
Change-Id: I73aad7697cf83ad4aabb3c2058b7cc4f53f783c2
Use a separate directory for the lockutils external locks and for the
test's flocks, which are used to check that serialization is actually
occurring. If the same directory is used for both, we can't test the
lockutils code that auto-creates this directory, because lockutils
cleanup in one process deletes the directory while another process is
still using it for flocks.
Also, assume that the handles directory has been created in the parent
rather than having each child attempt to create it.
Relatedly, add a try/finally block so that when a child process throws
an exception it immediately exits rather than deleting the temporary
directories created by the parent.
Change-Id: I32d7e8e05fb3f22cf38fa586f8bc97646c83f182
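A sketch of the child-process guard described above (illustrative):

    import os

    def child_main(work):
        status = 1
        try:
            work()
            status = 0
        finally:
            # os._exit() ends the child immediately, skipping any cleanup
            # inherited from the parent (such as removal of the parent's
            # temporary directories).
            os._exit(status)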
Fixes bug #1154245
If the parameter to the --log-config option cannot be loaded by the
logging.config.fileConfig() function, an exception like this is
raised:
NoSectionError: No section: 'formatters'
which doesn't do much to help users understand the error.
Improve the situation by wrapping the error in a custom exception type
giving this:
LogConfigError: Error loading logging config /etc/logging.conf: No section: 'formatters'
Also add some tests to check we raise this error under the most common
failure conditions.
Change-Id: I6ad2eb4867213b6356ce43d75da50218d1b498f6
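A minimal sketch of that wrapping, assuming a LogConfigError along
these lines (the real option handling and message text may differ):

    import logging.config

    class LogConfigError(Exception):

        message = 'Error loading logging config %(log_config)s: %(err_msg)s'

        def __init__(self, log_config, err_msg):
            self.log_config = log_config
            self.err_msg = err_msg

        def __str__(self):
            return self.message % dict(log_config=self.log_config,
                                       err_msg=self.err_msg)

    def _load_log_config(log_config):
        try:
            logging.config.fileConfig(log_config)
        except Exception as exc:
            raise LogConfigError(log_config, str(exc))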
This fix allows someone using openstack.common.wsgi to pass
kwargs down to eventlet.wsgi.server when calling wsgi.run_server.
Fixes bug 1156930
Change-Id: Id35232f68ee40c5435e157a834fc94d4bbd04970
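A sketch of the pass-through (the real openstack.common.wsgi module has
more plumbing; names here are illustrative):

    import eventlet
    import eventlet.wsgi

    def run_server(application, port, **kwargs):
        # Forward any extra keyword arguments straight to
        # eventlet.wsgi.server, e.g. log or max_size.
        sock = eventlet.listen(('0.0.0.0', port))
        eventlet.wsgi.server(sock, application, **kwargs)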
Fixes bug #1157596
Change-Id: I36d5484eaa2f0e21188eed6e70cc1ad785233d6a
fixes bug 1154745
The previous update (https://review.openstack.org/#/c/24103/) missed
header files that contained "OpenStack, LLC". This change corrects the
missed files to reflect the OpenStack Foundation.
Change-Id: I9c6de265267485ef2c82ea7e6d8643e82134d102
Sockets are created by the zeromq driver
for the topic specified by each incoming message.
Because the topic is supplied arbitrarily by the sender,
path separators in the topic must be rejected as illegal.
Fixes bug 1122763
Change-Id: Iccdb9b69e646bfe7665ee34c367fd4019db25f17
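An illustrative check only (not the actual driver code): reject any
topic that could escape the per-topic socket directory.

    import os

    def validate_topic(topic):
        if os.sep in topic or (os.altsep and os.altsep in topic):
            raise ValueError('Illegal path separator in topic: %r' % topic)
        return topic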
One code change, rest are in headers
Change-Id: I73f59681358629e1ad74e49d3d3ca13fcb5c2eb1
Boolean values for capabilities don't work because extra_specs are
all converted to unicode. The scheduler will then check, for example,
if the boolean 'True' is equal to the unicode string 'True', and will
always return False. This patch allows admins to specify '<is> True'
in extra_specs, which will compare successfully to boolean True.
Fixes bug: 1146306
Change-Id: Id0e6dcfb71eb0943a16bba551ec23c4d57206550
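Roughly how such an operator can be evaluated, sketched under the
assumption that extra_specs values arrive as unicode strings (the
helper name is made up):

    def match_capability(required, actual):
        # required is e.g. u'<is> True'; actual is the capability value.
        words = required.split()
        if len(words) == 2 and words[0] == '<is>':
            return bool(actual) == (words[1].lower() == 'true')
        # Plain values fall back to string comparison, which is where the
        # unicode-vs-boolean mismatch described above comes from.
        return str(actual) == required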
* Fix an exception that was raising a tuple
* Remove unused imports
* Remove unused exceptions
* Remove extra blank lines
Change-Id: I9127be991e9081dc173525c9b57ea297f389d16d
Bug 1134802
Change-Id: I9cc3c9d9324314d293f01f047882eb6be06e02dd
Nova's simple in-memory cache replicates the memcache interface,
so clients can cache things in memcache or in memory using
the same commands. Using memcached requires having the client
library installed and the memcached_servers config option set.
Callers can also pass in a list of memcached servers when they
initialize the client.
Change-Id: I831142a36797b04006cba4792df803e09f6fd69b
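A usage sketch under the interface described above; get_client() and
the memcache-style methods are assumptions drawn from the commit
message rather than a documented API:

    from nova.openstack.common import memorycache

    # In-memory unless memcached_servers is configured or passed in.
    cache = memorycache.get_client()
    cache.set('instance-count', 4, time=60)   # memcache-style set with a TTL
    count = cache.get('instance-count')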
Introduces a reference implementation
of a matchmaker (based on redis) that
supports dynamic host/topic registrations,
host expiration, and hooks for consuming
applications to acknowledge or neg-acknowledge
topic.host service availability.
Implements blueprint advanced-matchmaking
Change-Id: I8608d2089fca118b0e369f2eb5c6aedacf6821fe
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Prevent attacks through XML entity expansion and similar vectors.
Fixes LP# 1100282
Change-Id: I391531deac122697556c282184c8f8890ea66489
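One common defence, sketched below: refuse DTDs, entity declarations
and external entity references during parsing. Treat this as
illustrative rather than the exact module added here.

    from xml.dom import minidom
    from xml.sax import expatreader

    class ProtectedExpatParser(expatreader.ExpatParser):
        # Expat parser that refuses DTDs, entity declarations and
        # external entity references.

        def start_doctype_decl(self, name, sysid, pubid, has_internal_subset):
            raise ValueError('Inline DTD forbidden')

        def entity_decl(self, *args, **kwargs):
            raise ValueError('Entity declaration forbidden')

        def external_entity_ref(self, *args, **kwargs):
            raise ValueError('External entity reference forbidden')

        def reset(self):
            expatreader.ExpatParser.reset(self)
            self._parser.StartDoctypeDeclHandler = self.start_doctype_decl
            self._parser.EntityDeclHandler = self.entity_decl
            self._parser.ExternalEntityRefHandler = self.external_entity_ref

    def safe_parse_string(xml_string):
        return minidom.parseString(xml_string, parser=ProtectedExpatParser())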
quantum's run_tests.py fails because
openstack.common.setup._get_version_from_git fails. This happens because
the quantum unit tests run under quantum/tests/unit instead of the git
root directory, so the function should check parent directories for .git.
The cinder folks appear to have hit this bug as well (1125416).
ERROR: test_network_gateway_update (quantum.tests.unit.nicira.test_networkgw.NetworkGatewayExtensionTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "quantum/quantum/tests/unit/nicira/test_networkgw.py", line 70, in setUp
config.parse(args=args)
File "quantum/quantum/common/config.py", line 99, in parse
version='%%prog %s' % quantum_version.release_string())
File "quantum/quantum/openstack/common/version.py", line 63, in release_string
self.release = self._get_version_from_pkg_resources()
File "quantum/quantum/openstack/common/version.py", line 56, in _get_version_from_pkg_resources
return setup.get_version(self.package)
File "quantum/quantum/openstack/common/setup.py", line 334, in get_version
raise Exception("Versioning for this project requires either an sdist"
Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository.
Change-Id: I2e24c00b5ba8f35381cac081ff72d86ea0d75d19
Fixes: bug #1131162 and bug #1125416
Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
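The parent-directory walk, sketched (illustrative; the actual change
lives in openstack/common/setup.py):

    import os

    def find_git_root(start='.'):
        current = os.path.abspath(start)
        while True:
            if os.path.exists(os.path.join(current, '.git')):
                return current
            parent = os.path.dirname(current)
            if parent == current:   # reached the filesystem root: no .git
                return None
            current = parent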
Currently some clients lack support for non-ASCII characters. This patch
introduces two functions (in strutils.py) that help clients and servers
"safely" encode and decode strings.
About the ensure_(str|unicode) functions:
They both first try the encoding used by stdin (or Python's default
encoding if that is None) and fall back to utf-8 if that encoding fails
to decode the given text.
Neither of them will try to encode or decode non-basestring objects;
a TypeError is raised if one is passed.
Use case:
This is currently being used in glanceclient. I5c3ea93a716edfe284d19f6291d4e36028f91eb2
Needed For:
* Bug 1061156
* Bug 1130572
Change-Id: I78960dfdb6159fd600a6f5e5551ab5d5a3366ab5
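A rough Python 3 rendering of the decode half of that behaviour (the
original targets Python 2 str/unicode; the name and details below are
illustrative):

    import sys

    def ensure_unicode(text, errors='strict'):
        if not isinstance(text, (str, bytes)):
            raise TypeError('%r is not a string or bytes object' % (text,))
        if isinstance(text, str):
            return text
        encoding = sys.stdin.encoding or sys.getdefaultencoding()
        try:
            return text.decode(encoding, errors)
        except UnicodeDecodeError:
            # Fall back to utf-8 when the stdin/default encoding fails.
            return text.decode('utf-8', errors)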
Moves DB exceptions that can be shared between DB implementations into
their own module.
Adds DBDeadlock() exception wrapping. Nova has its own code for
determining Deadlock and it's better to consolidate it with
DBDuplicateKey checking.
Change-Id: I108bd0da2a14d62e460a997b1472f0b65bfc9b95
|
|/ / /
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
When using rabbit's mirrored queues or qpid's replicated queues, there
are conditions under which you can receive the same message twice.
One such condition is where a message has been sent to a consumer but
before an ack is received from the consumer, the master fails over to a
slave and the slave resends the message. Note that the consumer may have
sent the ack, but it was lost as the master went down.
Dispatching the same message twice is obviously something we want to
avoid. In order to do so, we add a unique_id to each message sent and
have consumers maintain a fixed length queue of recently seen unique
message IDs. Before dispatching any received message, the queue is
checked and the message is skipped if it is a duplicate.
Fixes bug 1107064.
Change-Id: I5bfacadbdf7de8b34d6370b9aa869c271957692d
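A minimal sketch of that scheme (the class and function names are
illustrative, not the actual rpc amqp code):

    import collections
    import uuid

    class DuplicateMessageFilter(object):
        # Remember the last N unique_ids and flag any repeats.

        def __init__(self, size=16):
            self._recent = collections.deque([], maxlen=size)

        def is_duplicate(self, unique_id):
            if unique_id in self._recent:
                return True
            self._recent.append(unique_id)
            return False

    def add_unique_id(msg):
        # Called on the sending side before the message goes on the wire.
        msg['_unique_id'] = uuid.uuid4().hex
        return msg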
This reverts Ib0260a0c62e3d312d2e3448a125bed64d861319e (commit a603678)
The issue we're trying to fix here is bug #1107064 - when using mirrored
queues with AMQP, acks can be lost while a master is failing over to a
slave, causing the new slave to re-send messages which had previously
been acked.
The "replay detection" code applies to more than just amqp and also has
the appearance of a security measure (e.g. the use of the term 'nonce')
when clearly it serves no security purpose until we actually have
message signing.
Revert the "replay detection" approach in favour of the more targeted
amqp bugfix.
Change-Id: I8b8d15835c8b4c85cd388f5df08b60ff4c74e38d
Fixes bug 1128605
The dbpool code in sqlalchemy session is the wrong place to implement
thread pooling as it wraps each individual SQL call to run in its own
thread. When combined with SQL server locking, all threads can be eaten
waiting on locks with none available to run a 'COMMIT'.
The correct place to do thread pooling is around each DB API call.
This patch removes dbpool from sqlalchemy and creates a common DB API
loader for all openstack projects which implements the following
configuration options:
db_backend: Full path to DB API backend module (or a known short name if
a project chooses to implement a mapping)
dbapi_use_tpool: True or False whether to use thread pooling around all
DB API calls.
DB backend modules must implement a 'get_backend()' method.
Example usage for nova/db/api.py would be:
"""
from nova.openstack.common.db import api as db_api
_KNOWN_BACKENDS = {'sqlalchemy': 'nova.db.sqlalchemy.api'}
IMPL = db_api.DBAPI(backend_mapping=_KNOWN_BACKENDS)
"""
NOTE: Enabling thread pooling will be broken until this issue is
resolved in eventlet _OR_ until we modify our eventlet.monkey_patch()
calls to include 'thread=False':
https://bitbucket.org/eventlet/eventlet/issue/137/
Change-Id: Idf14563ea07cf8ccf2a77b3f53659d8528927fc7
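A stripped-down sketch of the loader itself (illustrative only; the
real class also reads db_backend/dbapi_use_tpool from the config and
can wrap calls with eventlet's tpool):

    import importlib

    class DBAPI(object):
        def __init__(self, backend_name, backend_mapping=None):
            backend_mapping = backend_mapping or {}
            # Allow a known short name, otherwise treat backend_name as a
            # full module path; the module must expose get_backend().
            module_path = backend_mapping.get(backend_name, backend_name)
            self._backend = importlib.import_module(module_path).get_backend()

        def __getattr__(self, key):
            # Delegate every DB API call to the loaded backend.
            return getattr(self._backend, key)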
| | | |
Direct messages were being stripped of the host
value when performing IPC forwarding. This caused
direct topics to be round-robined to all
services running on the system consuming from
the same base topic name.
i.e. if 'scheduler.host1' and 'scheduler.host2'
were running on the SAME machine, messages to
'scheduler.host1' may have been routed to
'scheduler.host2'.
Now, multiple processes specifying different
rpc_zmq_host parameters will consume on
separate direct topics and will not
round-robin to other processes.
Adds a zmq-specific test to ensure that messages to
direct topics are not consumed by other
consumers of direct topics sharing a bare
topic on the same host.
Fixes bug 1123715
Change-Id: I939c24397e58492fc16561666aed3ca891325e9c