author     Yun Mao <yunmao@gmail.com>    2012-08-06 12:37:12 -0400
committer  Yun Mao <yunmao@gmail.com>    2012-11-26 17:10:40 -0500
commit     64e167eb62bd3483b2947ec8de218453c116bd93 (patch)
tree       c34fa896109d8bb33e5d44792b634644ec9517d4 /nova/scheduler/driver.py
parent     ab77c4e1b8e4c500a9372e290e658952a2441627 (diff)
Add pluggable ServiceGroup monitoring APIs
Summary:
* provide a pluggable ServiceGroup monitoring API
* refactor the old DB-based implementation to the new API
Currently, nova compute nodes periodically write to the database (every
10 seconds by default) to report their liveness. This change factors
that functionality out into a set of abstract internal APIs with a
pluggable backend implementation, currently named the ServiceGroup
API.
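The pluggable design can be pictured with a minimal sketch. Only the
service_is_up() name comes from this commit; the driver classes, registry,
and timeout policy below are illustrative assumptions, not Nova's actual
interface:

```python
# Minimal sketch of a pluggable service-group API. The driver
# classes, registry, and timeout policy are hypothetical.
import abc
import time


class ServiceGroupDriver(abc.ABC):
    """Abstract backend: database, ZooKeeper, RabbitMQ heartbeats, ..."""

    @abc.abstractmethod
    def is_up(self, service):
        """Return True if the service's last heartbeat is recent enough."""


class DbDriver(ServiceGroupDriver):
    """DB-style backend: liveness inferred from a periodic timestamp."""

    def __init__(self, report_interval=10, missed_reports=2):
        # Consider a service down after it misses this many
        # consecutive report intervals (illustrative policy).
        self.timeout = report_interval * missed_reports

    def is_up(self, service):
        return time.time() - service["updated_at"] <= self.timeout


class API(object):
    """Front-end that dispatches to a configured backend driver."""

    _drivers = {"db": DbDriver}

    def __init__(self, backend="db"):
        self.driver = self._drivers[backend]()

    def service_is_up(self, service):
        return self.driver.is_up(service)
```

A caller such as the scheduler would then ask the servicegroup API for
liveness instead of reading the database timestamp directly, and swapping
the backend would not touch any call sites.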
With this effort, we hope to see the following benefits:
* We expect to see more backend implementations in addition to the
default database-based one, such as ZooKeeper (as described in
blueprint zk-service-heartbeat) or one based on RabbitMQ heartbeats.
* We expect the code to live in openstack-common so projects other
than Nova can take advantage of the internal APIs.
* Lay the foundations to use lower overhead heartbeat mechanisms
which scale better.
* Beyond reporting whether a node in a service group is up or down,
the code can also be used to query for group members, so other parts
of the code could take advantage of the new APIs. One notable
example is the MatchMaker in the rpc library, which may even become
redundant. We have been working with Eric at Cloudscaling to see how
this fits with the matchmaker; it is likely that this code will need
to be used, at least by the peer-to-peer based RPC mechanisms, to
implement the new create_worker method.
DocImpact: new config options
Co-authored-by: Pavel Kravchenco <kpavel@il.ibm.com>
Co-authored-by: Alexey Roytman <roytman@il.ibm.com>
Change-Id: I51645687249c75e7776a684f19529a1e78f33a41
Diffstat (limited to 'nova/scheduler/driver.py')
-rw-r--r--  nova/scheduler/driver.py  9
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/nova/scheduler/driver.py b/nova/scheduler/driver.py
index 54c467833..c5523f229 100644
--- a/nova/scheduler/driver.py
+++ b/nova/scheduler/driver.py
@@ -37,9 +37,9 @@
 from nova.openstack.common import log as logging
 from nova.openstack.common.notifier import api as notifier
 from nova.openstack.common import rpc
 from nova.openstack.common import timeutils
+from nova import servicegroup
 from nova import utils
-
 LOG = logging.getLogger(__name__)

 scheduler_driver_opts = [
@@ -151,6 +151,7 @@ class Scheduler(object):
             CONF.scheduler_host_manager)
         self.compute_api = compute_api.API()
         self.compute_rpcapi = compute_rpcapi.ComputeAPI()
+        self.servicegroup_api = servicegroup.API()

     def update_service_capabilities(self, service_name, host, capabilities):
         """Process a capability update from a service node."""
@@ -163,7 +164,7 @@
         services = db.service_get_all_by_topic(context, topic)
         return [service['host']
                 for service in services
-                if utils.service_is_up(service)]
+                if self.servicegroup_api.service_is_up(service)]

     def schedule_prep_resize(self, context, image, request_spec,
                              filter_properties, instance, instance_type,
@@ -230,7 +231,7 @@
             raise exception.ComputeServiceUnavailable(host=src)

         # Checking src host is alive.
-        if not utils.service_is_up(services[0]):
+        if not self.servicegroup_api.service_is_up(services[0]):
             raise exception.ComputeServiceUnavailable(host=src)

     def _live_migration_dest_check(self, context, instance_ref, dest):
@@ -246,7 +247,7 @@
         dservice_ref = dservice_refs[0]

         # Checking dest host is alive.
-        if not utils.service_is_up(dservice_ref):
+        if not self.servicegroup_api.service_is_up(dservice_ref):
             raise exception.ComputeServiceUnavailable(host=dest)

         # Checking whether The host where instance is running
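The scheduler-side pattern in these hunks (filtering hosts through the
servicegroup API instead of calling utils.service_is_up directly) can be
sketched with a hypothetical fake backend; only the service_is_up() call
mirrors the diff, everything else is illustrative:

```python
# Illustrative stand-in for the scheduler's hosts_up() filtering.
# FakeServiceGroupAPI and the service dicts are hypothetical fakes.
class FakeServiceGroupAPI:
    def service_is_up(self, service):
        return service["alive"]


def hosts_up(services, servicegroup_api):
    # Same shape as the rewritten list comprehension in the diff:
    # keep only hosts whose service the backend reports as alive.
    return [service["host"]
            for service in services
            if servicegroup_api.service_is_up(service)]


services = [{"host": "node1", "alive": True},
            {"host": "node2", "alive": False}]
print(hosts_up(services, FakeServiceGroupAPI()))  # ['node1']
```

Because the scheduler holds the API object as self.servicegroup_api, the
liveness policy is chosen once at construction time rather than baked into
each call site.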