
Chapter 4. Nova (Compute)

So far we have installed Keystone for identity management and Glance for storage of virtual machine images. Now we will install Nova, which is responsible for the life cycle of virtual machines. The first step is to install the openstack-nova package.
$ sudo yum install openstack-nova
Nova makes use of MySQL just like Keystone and Glance. Use the openstack-db utility to initialize the database for Nova.
$ sudo openstack-db --init --service nova
You must explicitly configure Nova to make use of Keystone for authentication. To do so, run the following commands to update the Nova configuration files:
$ sudo openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
$ sudo openstack-config --set /etc/nova/api-paste.ini \
  filter:authtoken admin_token $(cat /tmp/ks_admin_token)
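If you want to confirm that these settings took effect, openstack-config can also read values back; this is optional and assumes the same file paths used above:
$ sudo openstack-config --get /etc/nova/nova.conf DEFAULT auth_strategy
keystone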
You must also configure Nova to be aware of which network interfaces to use:
$ sudo openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface em1
$ sudo openstack-config --set /etc/nova/nova.conf DEFAULT public_interface em1
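These commands assume the host's primary network interface is named em1; if your machine uses a different interface name, substitute it in both commands above. You can quickly check that the interface exists:
$ ip link show em1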
Nova is made up of multiple services. As an OpenStack deployment is scaled, these services run on multiple machines. The Nova services use AMQP (Advanced Message Queuing Protocol) to communicate among themselves. We will use Qpid to provide AMQP for OpenStack. Run the following commands to install, configure, and start Qpid:
$ sudo yum install qpid-cpp-server
$ sudo sed -i -e 's/auth=.*/auth=no/g' /etc/qpidd.conf
$ sudo service qpidd start 
$ sudo chkconfig qpidd on
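To confirm that the broker is up, check the service status and verify that something is listening on the default AMQP port, 5672 (the port number is the Qpid default and assumes you have not changed it elsewhere in /etc/qpidd.conf):
$ sudo service qpidd status
$ sudo netstat -lnt | grep 5672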
When Nova needs to create a virtual machine, it uses libvirt to do so. Run the following commands to start libvirtd and enable it at boot:
$ sudo service libvirtd start
$ sudo chkconfig libvirtd on
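To verify that libvirtd is responding, you can list the currently defined domains (the list will be empty at this point); virsh is part of the libvirt client tools:
$ sudo virsh list --all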
One of the Nova services is nova-volume, which is responsible for providing persistent storage for virtual machines. nova-volume supports multiple backends; the one we will use carves storage out of an LVM volume group called nova-volumes on demand and exports it to instances over iSCSI, which is why the tgtd iSCSI target daemon must also be running. For testing purposes, we will create a simple file-backed volume group:
$ sudo service tgtd start
$ sudo chkconfig tgtd on
$ sudo truncate --size 20G /srv/rhsummit/nova-volumes
$ sudo losetup -fv /srv/rhsummit/nova-volumes
$ sudo vgcreate nova-volumes /dev/loop0
$ sudo vgdisplay nova-volumes  
--- Volume group ---
  VG Name               nova-volumes
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.00 GiB
  PE Size               4.00 MiB
  Total PE              5119
  Alloc PE / Size       0 / 0   
  Free  PE / Size       5119 / 20.00 GiB
  VG UUID               9ivjbZ-wmwI-xXQ4-YSon-mnrC-MMwZ-dU53L5
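If you want to double-check the storage setup, losetup shows the loop device backing the file and vgs gives a one-line summary of the volume group. The /dev/loop0 name above assumes it was the first free loop device, which is what losetup -fv reports:
$ sudo losetup -a
$ sudo vgs nova-volumes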
We have now completed the prerequisites for starting the Nova services. Run the following commands to start all of the Nova services and enable them at boot:
$ for srv in api cert network objectstore scheduler volume compute ; do \
   sudo service openstack-nova-$srv start ; \
  done
$ for srv in api cert network objectstore scheduler volume compute ; do \
   sudo chkconfig openstack-nova-$srv on ; \
  done
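You can reuse the same loop with the status action to confirm that each service started cleanly:
$ for srv in api cert network objectstore scheduler volume compute ; do \
   sudo service openstack-nova-$srv status ; \
  done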
Make sure that the Nova log files do not contain any errors:
$ grep -i ERROR /var/log/nova/*
Now that the Nova services are running, you may find it beneficial to monitor the Nova logs during this lab. To do so, open another terminal and tail the files in /var/log/nova:
$ tail -f /var/log/nova/*.log
Register the Nova compute API as an endpoint with Keystone. Note that the service_id passed to the endpoint-create command comes from the output of the service-create command.
$ . ~/keystonerc_admin
$ keystone service-create --name=nova --type=compute --description="Nova Compute Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description | Nova Compute Service             |
| id          | 9f004f52a97e469b9983d5adefe9f6d0 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
$ keystone endpoint-create --service_id 9f004f52a97e469b9983d5adefe9f6d0 \
  --publicurl "http://127.0.0.1:8774/v1.1/\$(tenant_id)s" \
  --adminurl "http://127.0.0.1:8774/v1.1/\$(tenant_id)s" \
  --internalurl "http://127.0.0.1:8774/v1.1/\$(tenant_id)s"
+-------------+------------------------------------------+
|   Property  |                  Value                   |
+-------------+------------------------------------------+
| adminurl    | http://127.0.0.1:8774/v1.1/$(tenant_id)s |
| id          | 247a56ec4fa94afca231aa0c304c7049         |
| internalurl | http://127.0.0.1:8774/v1.1/$(tenant_id)s |
| publicurl   | http://127.0.0.1:8774/v1.1/$(tenant_id)s |
| region      | regionOne                                |
| service_id  | 9f004f52a97e469b9983d5adefe9f6d0         |
+-------------+------------------------------------------+
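If you prefer not to copy the service ID by hand, a small shell sketch like the following can capture it for use with endpoint-create; it assumes the default tabular output of the keystone client, and NOVA_SERVICE_ID is just an example variable name:
$ NOVA_SERVICE_ID=$(keystone service-list | awk '/ nova / {print $2}')
$ echo $NOVA_SERVICE_ID
9f004f52a97e469b9983d5adefe9f6d0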
We must also register the Nova volumes API with Keystone. Again, get the service_id used in the endpoint-create command from the output of the service-create command.
$ . ~/keystonerc_admin
$ keystone service-create --name=volume --type=volume --description="Nova Volume Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description | Nova Volume Service              |
| id          | 9347a8dc271c44da9f9991b5c70a6361 |
| name        | volume                           |
| type        | volume                           |
+-------------+----------------------------------+
$ keystone endpoint-create --service_id 9347a8dc271c44da9f9991b5c70a6361 \
  --publicurl 'http://127.0.0.1:8776/v1/%(tenant_id)s' \
  --internalurl 'http://127.0.0.1:8776/v1/%(tenant_id)s' \
  --adminurl 'http://127.0.0.1:8776/v1/%(tenant_id)s'
+-------------+----------------------------------------+
|   Property  |                 Value                  |
+-------------+----------------------------------------+
| adminurl    | http://127.0.0.1:8776/v1/%(tenant_id)s |
| id          | b2a128758a274b09bb09337b99faa42a       |
| internalurl | http://127.0.0.1:8776/v1/%(tenant_id)s |
| publicurl   | http://127.0.0.1:8776/v1/%(tenant_id)s |
| region      | regionOne                              |
| service_id  | 9347a8dc271c44da9f9991b5c70a6361       |
+-------------+----------------------------------------+
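Both services and their endpoints should now be registered. You can list them with the keystone client to verify:
$ keystone service-list
$ keystone endpoint-list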
Set up your environment to use the credentials for your regular user. Then test out the nova client application. The list command shows running instances. Since no instances have been started yet, the output should be an empty table. The image-list command should show that the image we added to Glance is available for use.
$ . ~/keystonerc_username
$ nova list
+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
$ nova image-list
+--------------------------------------+-----------+--------+--------+
|                  ID                  |    Name   | Status | Server |
+--------------------------------------+-----------+--------+--------+
| 17a34b8e-c573-48d6-920c-b4b450172b41 | RHEL 6.2  | ACTIVE |        |
+--------------------------------------+-----------+--------+--------+
Run the following command to create a network that Nova can use to allocate IP addresses for instances:
$ sudo nova-manage network create demonet 10.0.0.0/24 1 256 --bridge=demonetbr0
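To verify that the network was created, nova-manage can list the networks it knows about:
$ sudo nova-manage network list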
You can ask Nova to create an SSH keypair. When you boot an instance, you can specify the name of this keypair, and the public key will be placed in the instance so that you can log in over SSH. The command prints the private key, which must be saved to a file for later use:
$ nova keypair-add oskey > oskey.priv
$ chmod 600 oskey.priv
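Nova stores the public half of the keypair; you can confirm it was registered with:
$ nova keypair-list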