Commit message
|
Convert Nova to use the Oslo versions of:
* Service
* Launchers
Also add Service timers to the Service ThreadGroup.
blueprint use-oslo-services
Change-Id: Id8ab017f4525afd69fed322311f2d5cc3b6d6f98
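As an illustration of the pattern, a minimal sketch using today's
standalone oslo.service library (the commit itself used the
oslo-incubator copy vendored under nova.openstack.common):

    from oslo_config import cfg
    from oslo_service import service


    class HeartbeatService(service.Service):
        def start(self):
            super(HeartbeatService, self).start()
            # The timer lives in the service's ThreadGroup, so stop()
            # tears it down along with the service's other threads.
            self.tg.add_timer(10, self._report_state)

        def _report_state(self):
            pass  # e.g. touch this service's liveness record

    # The launcher handles signals and waits for the service to exit.
    launcher = service.launch(cfg.CONF, HeartbeatService())
    launcher.wait()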
|
Import zookeeper.membership rather than zookeeper.membersip.
Also fixed some issues with setting up the tests for the zookeeper
servicegroup driver. Config options were not being set before
initializing the driver, leading to failures.
There is no added test for this because the bug is indistinguishable
from not having the zookeeper python modules installed, which leads to
skipping these servicegroup tests.
Bug 1177776
Change-Id: Idd6dca2e03169399b930cc1fc1a85636497cb0b5
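The test fix is essentially one of ordering: the driver reads its
options in __init__, so the overrides must be in place first. A rough
sketch (option name and group are illustrative, not the driver's exact
ones):

    # In the test's setUp(), *before* constructing the driver:
    CONF.set_override('address', 'localhost:2181', group='zookeeper')
    driver = zk_driver.ZooKeeperDriver()  # __init__ now sees valid options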
|
Import the oslo looping call implementation (which is a copy of
nova's), delete nova's local copy, convert all users to the new
location.
It should be noted that the oslo implementation of
FixedIntervalLoopingCall measures time from the start of the
periodic task, not the end, so periodic tasks will run with a
constant frequency instead of the frequency changing depending on
how long the periodic task takes to run.
Change-Id: Ia62ce1988f5373c09146efa6b3b1d1dc094d50c4
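A small usage sketch, written against today's oslo.service (the commit
used the same code vendored as nova.openstack.common.loopingcall):

    from oslo_service import loopingcall


    def _beat():
        print('tick')

    timer = loopingcall.FixedIntervalLoopingCall(_beat)
    # The interval is measured from the *start* of each run, so the task
    # fires at a constant frequency no matter how long _beat() takes
    # (provided it finishes within the interval).
    timer.start(interval=10)
    timer.wait()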
|
Change-Id: I11ee70b36f06bc4a45b5ff207e53a331891a6bfa
|
Update IBM copyright strings to one consistent format
Change-Id: If4aef948bac27fe337991f36ea1bf02d4078e8bf
|
The zookeeper python module isn't widely available (e.g. not available
on Fedora), so allow the zk servicegroup driver to be imported even if
the module isn't found.
This allows the generate_sample.sh script to generate a sample config
file even without zookeeper installed.
Change-Id: Ic3b3bb75bcd2003150909ebd2c5724fa5093c346
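The fix boils down to the usual guarded-import pattern, roughly:

    try:
        import evzookeeper
    except ImportError:
        evzookeeper = None  # sample-config generation can still import us


    class ZooKeeperDriver(object):  # really extends the driver base class
        def __init__(self, *args, **kwargs):
            if evzookeeper is None:
                # Fail at instantiation time instead of import time.
                raise ImportError('zookeeper module not found')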
|
The zookeeper driver group name is 'zookeeper', not 'zk'.
Change-Id: I346cb2740ea835d17918f46be7ca142957537626
|
The cfg API is now available via the oslo-config library, so switch to
it and remove the copied-and-pasted version.
Add the 2013.1b4 tarball to tools/pip-requires - this will be changed
to 'oslo-config>=2013.1' when oslo-config is published to pypi. This
will happen in time for grizzly final.
Add dependency_links to setup.py so that oslo-config can be installed
from the tarball URL specified in pip-requires.
Remove the 'deps = pep8==1.3.3' from tox.ini as it means all the other
deps get installed with easy_install, which can't install oslo-config
from the URL.
Make tools/hacking.py include oslo in IMPORT_EXCEPTIONS like it already
does for paste. It turns out imp.find_module() doesn't correctly handle
namespace packages.
Retain dummy cfg.py file until keystoneclient middleware has been
updated (I18c450174277c8e2d15ed93879da6cd92074c27a).
Change-Id: I4815aeb8a9341a31a250e920157f15ee15cfc5bc
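Consuming the library looks the same as the vendored copy did, just
with a new import path. A minimal example (the option shown is
illustrative):

    from oslo.config import cfg  # grizzly-era namespace-package import

    opts = [
        cfg.StrOpt('servicegroup_driver', default='db',
                   help='The driver for servicegroup service.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(opts)
    CONF(['--config-file', 'nova.conf'], project='nova')
    print(CONF.servicegroup_driver)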
|
Today the heartbeat information for Nova services/nodes is maintained
in the DB: each service periodically updates its record in the Service
table (by default, every 10 seconds) with the timestamp of the last
update. This mechanism is highly inefficient and does not scale; e.g.,
maintaining the heartbeat information for 1,000 nodes/services would
require 100 DB updates per second, just for the heartbeat.
This patch adds nova.servicegroup.drivers.memcached, a service
heartbeat driver backed by Memcached that eliminates those DB updates.
blueprint memcached-service-heartbeat
Change-Id: I60bdb1cfbce1fea051f276ebfd6ccc4ad8fe6d2b
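The core idea, sketched with python-memcached (the key naming and
timeout here are assumptions, not the driver's exact scheme):

    import time

    import memcache  # python-memcached

    mc = memcache.Client(['127.0.0.1:11211'])
    SERVICE_DOWN_TIME = 60  # seconds of silence before a service is down


    def report_heartbeat(host, binary):
        # The key expires on its own; a live service keeps refreshing
        # it, so no database row is touched at all.
        mc.set('%s:%s' % (binary, host), time.time(),
               time=SERVICE_DOWN_TIME)


    def is_up(host, binary):
        return mc.get('%s:%s' % (binary, host)) is not None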
|
Using abbreviated config group names just makes the config file harder
to understand, so use the full name.
zk => zookeeper
Change-Id: Ia8b3ff9201365477003535600e419540deae7341
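In nova.conf the section header now matches the driver name (the
option shown is illustrative):

    [zookeeper]
    # formerly [zk]
    address = localhost:2181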
|
The lock-based synchronization is not re-entrant, which would
introduce a deadlock with the new conductor patch. Because a race
condition is not possible in the eventlet threading model with the
GIL, we simply remove the synchronized decorator.
Change-Id: I03e52227385dafd2b2b66bca18cab8445c91f3be
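A self-contained illustration of the hazard; a plain threading.Lock,
like the lockutils semaphore, is not re-entrant:

    import threading

    lock = threading.Lock()  # not re-entrant


    def outer():
        with lock:
            inner()  # deadlock: this thread waits on the lock it holds


    def inner():
        with lock:
            pass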
|
The ZooKeeper driver uses ephemeral nodes in ZooKeeper to keep track
of node liveness in a service group. The implementation is based on
the evzookeeper library, which combines ZooKeeper and eventlet.
Part of blueprint zk-service-heartbeat
DocImpact: new driver
Change-Id: Ia20519de2b4964007f8b91ea5d56d1875510d40f
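The mechanism, sketched here with kazoo rather than evzookeeper (the
APIs differ, but the idea is the same): membership is an ephemeral
znode that ZooKeeper deletes when the owning session dies:

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()

    # join: an ephemeral node marks this member as alive
    zk.create('/servicegroups/nova-compute/host1', b'',
              ephemeral=True, makepath=True)

    # get_all: the live members are exactly the children still present
    members = zk.get_children('/servicegroups/nova-compute')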
|
Remove all currently unused imports
Prevent future unused imports
Change-Id: I6ac26d5c71b79952a7732db300355a00310c712e
|
Fix all N302 issues and re-enable the check.
Change-Id: Ic94d144c915b228b7ff2fd9c5951875e159ffcdd
|
* Includes some general tools/hacking cleanup
* Fix several N302 cases
* Disable N302 until all cases are fixed
Change-Id: Iddba07ff13e10dc41a6930749044bb8c0572d279
|
This patch updates the servicegroup db driver to use the conductor API
in the cases where it was accessing the db directly before. If the
service is allowed to do direct db access, the conductor API will
optimize the call to go straight to the db. Otherwise, it will invoke
rpc to a remote conductor service to perform the operations.
Part of bp no-db-compute.
Change-Id: I96adffc6f80288c829d84f170414613a35b5c840
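Roughly the shape of the change (simplified; names abbreviated from
the real driver):

    from nova import conductor
    from nova import context


    class DbDriver(object):  # the servicegroup db driver, simplified
        def __init__(self):
            # Returns a direct-DB LocalAPI or an RPC-based API,
            # depending on whether this service may touch the database.
            self.conductor_api = conductor.API()

        def _report_state(self, service):
            state = {'report_count':
                     service.service_ref['report_count'] + 1}
            self.conductor_api.service_update(
                context.get_admin_context(), service.service_ref, state)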
|
The base service code already ensures that a service record is created.
Store it in the service object; then, in the servicegroup db driver,
access it instead of looking it up again.
Part of bp no-db-compute.
Change-Id: Ife3c98c29c6f654585e14ded5a9b4c7e3bec226d
|
Right now there is only one servicegroup driver, the db-backed driver.
Create a directory for drivers to reside in to keep things a bit more
tidy as we add additional drivers.
Part of blueprint rpc-based-servicegroup-driver.
Change-Id: Ib563e1a8d184cef838e5730b2fc6904940d04c21
|
blueprint: scope-config-opts
Change-Id: I5fddb3768348c43a38b72dbf738b0c7fa2967691
|
Fix N402 (single-line docstrings should end in a period) for the
rest of the nova files.
Change-Id: I57d0d9ab01345dd83e544e476d79d2c2ca68ee51
|
This is the final step in enabling availability_zones using aggregate
metadata. Previously all services had an availability_zone, but the
availability_zone is only used for nova-compute. Services such as
nova-scheduler, nova-network, nova-conductor have always spanned all
availability_zones.
After this change, only compute nodes (nova-compute) will have an
availability_zone. In order to preserve current APIs, when running:
* nova host-list (os-hosts)
* euca-describe-availability-zones verbose
* nova-manage service list
internal services will appear in their own internal availability_zone
(CONF.internal_service_availability_zone). The internal zone is hidden
in euca-describe-availability-zones (non-verbose).
CONF.node_availability_zone has been renamed to
CONF.default_availability_zone and is only used by nova-api and
nova-scheduler. CONF.node_availability_zone still works but is
deprecated.
DocImpact
Completes blueprint aggregate-based-availability-zones
Change-Id: Ib772df5f9ac2865f20df479f8ddce575a9ce3aff
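In nova.conf terms (the values shown are the defaults):

    [DEFAULT]
    # formerly node_availability_zone (still accepted, but deprecated)
    default_availability_zone = nova
    internal_service_availability_zone = internal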
|
We had previously been ignoring all our custom N4xx hacking.py
errors. This fixes all the N401 errors "doc strings
should not start with a space" and reduces the ignore set down
to N402 only "single line docstrings should end with period".
It also fixes the N401 parser to catch only docstrings, and
not triple-quoted string blocks used later on in a function.
Clean up a few of the more crazy uses of """ in our code.
Clean up additional funky comments to make indents a bit more
consistent, and pull in lines when possible.
Change-Id: I9040a1d2ca7efda83bd5e425b95d1408b5b63577
|
This review allows periodic tasks to be enabled or disabled in the
decorator, as well as by specifying a negative interval.
The spacing between runs of a periodic task is now specified in
seconds, with zero meaning the default spacing which is currently 60
seconds.
There is also a new argument to the decorator which indicates if a
periodic task _needs_ to be run in the nova-compute process. There is
also a flag (run_external_periodic_tasks) which can be used to move
these periodic tasks out of the nova-compute process.
I also remove the periodic_interval flag to services, as the interval
between runs is now dynamic based on the number of seconds that a
periodic task wants to wait for its next run. For callers who want to
twiddle the sleep period (for example, unit tests), there is a
create() argument, periodic_interval_max, which lets the period that
periodic_tasks() specifies be overridden. This is not exposed as a
flag because I cannot see a use case for that, but it is needed for
unit testing.
DocImpact. Resolves bug 939087.
Change-Id: I7f245a88b8d229a481c1b65a4c0f1e2769bf3901
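Approximate use of the reworked decorator (argument and module names
as in the grizzly-era code; treat them as illustrative):

    from nova import manager  # the decorator later moved to oslo code


    class ExampleManager(manager.Manager):
        @manager.periodic_task(spacing=10)
        def _every_ten_seconds(self, context):
            """Runs every 10 seconds, measured between run starts."""

        @manager.periodic_task  # spacing defaults to 0: the 60s default
        def _default_spacing(self, context):
            pass

        @manager.periodic_task(enabled=False)
        def _registered_but_never_run(self, context):
            pass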
|
Summary:
* provide a pluggable ServiceGroup monitoring API
* refactor the old DB-based implementation to the new API
Currently nova compute nodes periodically write to the database (every
10 seconds by default) to report their liveness. This implementation
factors this functionality out into a set of abstract internal APIs
with a pluggable backend implementation. Currently it is named the
ServiceGroup API.
With this effort, we hope to see the following benefits:
* We expect to see more backend implementations in addition to the
default database-based one, such as ZooKeeper (as described in
blueprint zk-service-heartbeat) or one based on RabbitMQ heartbeats.
* We expect the code to live in openstack-common so projects other
than Nova can take advantage of the internal APIs.
* Lay the foundations to use lower overhead heartbeat mechanisms
which scale better.
* Other than reporting whether a node in a service group is up or
down, the code may also be used to query for members. Other parts of
the code could also take advantage of the new APIs. One notable
example is the MatchMaker in the rpc library, which may even become
redundant. We have been working with Eric at Cloudscaling to see how
this fits with the matchmaker. It is likely that this code will need
to be used, at least by the peer-to-peer based RPC mechanisms, to
implement the new create_worker method.
DocImpact: new config options
Co-authored-by: Pavel Kravchenco <kpavel@il.ibm.com>
Co-authored-by: Alexey Roytman <roytman@il.ibm.com>
Change-Id: I51645687249c75e7776a684f19529a1e78f33a41
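The resulting surface, roughly as it appears under nova/servicegroup
(call sites paraphrased):

    from nova.servicegroup import api as servicegroup

    sg_api = servicegroup.API()  # backend set by CONF.servicegroup_driver

    # A service joins its group; the chosen driver owns the heartbeat.
    # service_obj/service_ref come from the running nova.service.Service.
    sg_api.join(member_id='host1', group_id='nova-compute',
                service=service_obj)

    # Liveness and membership queries look the same for any backend.
    alive = sg_api.service_is_up(service_ref)
    members = sg_api.get_all('nova-compute')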