authorArmando Migliaccio <armando.migliaccio@citrix.com>2010-11-17 18:33:47 +0000
committerArmando Migliaccio <armando.migliaccio@citrix.com>2010-11-17 18:33:47 +0000
commite0ad4e8dd9f73c3c1e775f3deebe5a08f2321ac6 (patch)
treead3a30ad0068dab59c09f1da2799266bb859ed2e /doc/source
parent0c19386f7c4ca063edbf8c10ffb86b399884e457 (diff)
parent551fd309fcbfedb99555a81fac6a40f003598fd6 (diff)
merged with trunk
Diffstat (limited to 'doc/source')
-rw-r--r-- doc/source/adminguide/distros/others.rst | 88
-rw-r--r-- doc/source/adminguide/distros/ubuntu.10.04.rst | 41
-rw-r--r-- doc/source/adminguide/distros/ubuntu.10.10.rst | 41
-rw-r--r-- doc/source/adminguide/index.rst | 2
-rw-r--r-- doc/source/adminguide/managing.networks.rst | 71
-rw-r--r-- doc/source/adminguide/multi.node.install.rst | 63
-rw-r--r-- doc/source/adminguide/network.flat.rst | 60
-rw-r--r-- doc/source/adminguide/network.vlan.rst | 179
-rw-r--r-- doc/source/adminguide/single.node.install.rst | 344
-rw-r--r-- doc/source/community.rst | 3
-rw-r--r-- doc/source/conf.py | 7
-rw-r--r-- doc/source/nova.concepts.rst | 7
-rw-r--r-- doc/source/quickstart.rst | 24
-rw-r--r-- doc/source/service.architecture.rst | 6
14 files changed, 857 insertions, 79 deletions
diff --git a/doc/source/adminguide/distros/others.rst b/doc/source/adminguide/distros/others.rst
new file mode 100644
index 000000000..ec14a9abb
--- /dev/null
+++ b/doc/source/adminguide/distros/others.rst
@@ -0,0 +1,88 @@
+Installation on other distros (like Debian, Fedora or CentOS)
+==============================================================
+
+Feel free to add additional notes for additional distributions.
+
+Nova installation on CentOS 5.5
+-------------------------------
+
+These notes for installing OpenStack Compute on CentOS 5.5 are a work in progress and NOT final. Please test for accuracy and edit as you see fit.
+
+The principal bottleneck for running Nova on CentOS is Python 2.6: Nova is written in Python 2.6, while CentOS 5.5 comes with Python 2.4. We cannot update Python system-wide, as some core utilities (like yum) depend on Python 2.4. Also, very few Python 2.6 modules are available in the CentOS/EPEL repos.
+
+Pre-reqs
+--------
+
+Add euca2ools and EPEL repo first.::
+
+ cat >/etc/yum.repos.d/euca2ools.repo << EUCA_REPO_CONF_EOF
+ [eucalyptus]
+ name=euca2ools
+ baseurl=http://www.eucalyptussoftware.com/downloads/repo/euca2ools/1.3.1/yum/centos/
+ enabled=1
+ gpgcheck=0
+
+ EUCA_REPO_CONF_EOF
+
+::
+
+ rpm -Uvh 'http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm'
+
+Now install python2.6, kvm and a few other libraries through yum::
+
+ yum -y install dnsmasq vblade kpartx kvm gawk iptables ebtables bzr screen euca2ools curl rabbitmq-server gcc gcc-c++ autoconf automake swig openldap openldap-servers nginx python26 python26-devel python26-distribute git openssl-devel python26-tools mysql-server qemu kmod-kvm libxml2 libxslt libxslt-devel mysql-devel
+
+Then download the latest aoetools and build (and install) it. Check for the latest version on SourceForge; the exact URL will change if there's a new release::
+
+ wget -c http://sourceforge.net/projects/aoetools/files/aoetools/32/aoetools-32.tar.gz/download
+ tar -zxvf aoetools-32.tar.gz
+ cd aoetools-32
+ make
+ make install
+
+Add the udev rules for aoetools::
+
+ cat > /etc/udev/rules.d/60-aoe.rules << AOE_RULES_EOF
+ SUBSYSTEM=="aoe", KERNEL=="discover", NAME="etherd/%k", GROUP="disk", MODE="0220"
+ SUBSYSTEM=="aoe", KERNEL=="err", NAME="etherd/%k", GROUP="disk", MODE="0440"
+ SUBSYSTEM=="aoe", KERNEL=="interfaces", NAME="etherd/%k", GROUP="disk", MODE="0220"
+ SUBSYSTEM=="aoe", KERNEL=="revalidate", NAME="etherd/%k", GROUP="disk", MODE="0220"
+ # aoe block devices
+ KERNEL=="etherd*", NAME="%k", GROUP="disk"
+ AOE_RULES_EOF
+
+Load the kernel modules::
+
+ modprobe aoe
+
+::
+
+ modprobe kvm
+
+Now, install the python modules using easy_install-2.6; this ensures the installations are done against python 2.6::
+
+ easy_install-2.6 twisted sqlalchemy mox greenlet carrot daemon eventlet tornado IPy routes lxml MySQL-python
+
+python-gflags needs to be downloaded and installed manually. Use these commands (check the exact url for newer releases):
+
+::
+
+ wget -c "http://python-gflags.googlecode.com/files/python-gflags-1.4.tar.gz"
+ tar -zxvf python-gflags-1.4.tar.gz
+ cd python-gflags-1.4
+ python2.6 setup.py install
+ cd ..
+
+The same goes for the python2.6-libxml2 module; notice the --with-python and --prefix flags. --with-python ensures we are building it against python2.6 (otherwise it would build against python2.4, which is the default)::
+
+ wget -c "ftp://xmlsoft.org/libxml2/libxml2-2.7.3.tar.gz"
+ tar -zxvf libxml2-2.7.3.tar.gz
+ cd libxml2-2.7.3
+ ./configure --with-python=/usr/bin/python26 --prefix=/usr
+ make all
+ make install
+ cd python
+ python2.6 setup.py install
+ cd ..
+
+Once you've done this, continue at Step 3 here: :doc:`../single.node.install`
diff --git a/doc/source/adminguide/distros/ubuntu.10.04.rst b/doc/source/adminguide/distros/ubuntu.10.04.rst
new file mode 100644
index 000000000..ce368fab8
--- /dev/null
+++ b/doc/source/adminguide/distros/ubuntu.10.04.rst
@@ -0,0 +1,41 @@
+Installing on Ubuntu 10.04 (Lucid)
+==================================
+
+Step 1: Get the latest code
+----------------------------
+Grab the latest code from launchpad:
+
+::
+
+ bzr clone lp:nova
+
+Here's a script you can use to install (and then run) Nova on Ubuntu or Debian (when using Debian, edit nova.sh to set USE_PPA=0).
+
+.. todo:: give a link to a stable releases page
+
+Step 2: Install dependencies
+----------------------------
+
+Nova requires rabbitmq for messaging and, optionally, redis for storing state, so install these first.
+
+*Note:* You must have sudo installed to run these commands as shown here.
+
+::
+
+ sudo apt-get install rabbitmq-server redis-server
+
+
+You'll see messages starting with "Reading package lists... Done" and you must confirm by typing Y that you want to continue.
+
+If you're running on Ubuntu 10.04, you'll need to install Twisted and python-gflags, which are included in the OpenStack PPA.
+
+::
+
+ sudo apt-get install python-twisted
+
+ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 95C71FE2
+ sudo sh -c 'echo "deb http://ppa.launchpad.net/openstack/openstack-ppa/ubuntu lucid main" > /etc/apt/sources.list.d/openstackppa.list'
+ sudo apt-get update && sudo apt-get install python-gflags
+
+
+Once you've done this, continue at Step 3 here: :doc:`../single.node.install`
diff --git a/doc/source/adminguide/distros/ubuntu.10.10.rst b/doc/source/adminguide/distros/ubuntu.10.10.rst
new file mode 100644
index 000000000..a3fa2def1
--- /dev/null
+++ b/doc/source/adminguide/distros/ubuntu.10.10.rst
@@ -0,0 +1,41 @@
+Installing on Ubuntu 10.10 (Maverick)
+=====================================
+
+This is a single-machine installation. While we wouldn't expect you to put OpenStack Compute into production on a non-LTS version of Ubuntu, these instructions are up-to-date with the latest version of Ubuntu.
+
+Make sure you are running Ubuntu 10.10 so that the packages will be available. This install requires more than 70 MB of free disk space.
+
+These instructions are based on Soren Hansen's blog entry, Openstack on Maverick. A script is in progress as well.
+
+Step 1: Install required prerequisites
+--------------------------------------
+Nova requires rabbitmq for messaging and redis for storing state (for now), so we'll install these first.::
+
+ sudo apt-get install rabbitmq-server redis-server
+
+You'll see messages starting with "Reading package lists... Done" and you must confirm by typing Y that you want to continue.
+
+Step 2: Install Nova packages available in Maverick Meerkat
+-----------------------------------------------------------
+Type or copy/paste the following lines to get the packages that you use to run OpenStack Compute.::
+
+ sudo apt-get install python-nova
+ sudo apt-get install nova-api nova-objectstore nova-compute nova-scheduler nova-network euca2ools unzip
+
+You'll see messages starting with "Reading package lists... Done" and you must confirm by typing Y that you want to continue. This operation may take a while as many dependent packages will be installed. Note: there is a dependency problem with python-nova, which can be worked around by installing python-nova first as shown above.
+
+When the installation is complete, you'll see the following lines confirming::
+
+ Adding system user `nova' (UID 106) ...
+ Adding new user `nova' (UID 106) with group `nogroup' ...
+ Not creating home directory `/var/lib/nova'.
+ Setting up nova-scheduler (0.9.1~bzr331-0ubuntu2) ...
+ * Starting nova scheduler nova-scheduler
+ WARNING:root:Starting scheduler node
+ ...done.
+ Processing triggers for libc-bin ...
+ ldconfig deferred processing now taking place
+ Processing triggers for python-support ...
+
+Once you've done this, continue at Step 3 here: :doc:`../single.node.install`
diff --git a/doc/source/adminguide/index.rst b/doc/source/adminguide/index.rst
index 9a6a70d45..51228b319 100644
--- a/doc/source/adminguide/index.rst
+++ b/doc/source/adminguide/index.rst
@@ -75,6 +75,8 @@ Networking
:maxdepth: 1
multi.node.install
+ network.vlan.rst
+ network.flat.rst
Advanced Topics
diff --git a/doc/source/adminguide/managing.networks.rst b/doc/source/adminguide/managing.networks.rst
index 3d6b9b7d7..c8df471e8 100644
--- a/doc/source/adminguide/managing.networks.rst
+++ b/doc/source/adminguide/managing.networks.rst
@@ -16,74 +16,43 @@
License for the specific language governing permissions and limitations
under the License.
-OpenStack Network Overview
-==========================
+Networking Overview
+===================
+In Nova, users organize their cloud resources in projects. A Nova project consists of a number of VM instances created by a user. For each VM instance, Nova assigns to it a private IP address. (Currently, Nova only supports Linux bridge networking that allows the virtual interfaces to connect to the outside network through the physical interface. Other virtual network technologies, such as Open vSwitch, could be supported in the future.) The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network.
-Introduction
-------------
+..
+ (perhaps some of this should be moved elsewhere)
+ Introduction
+ ------------
-Nova consists of seven main components, with the Cloud Controller component representing the global state and interacting with all other components. API Server acts as the Web services front end for the cloud controller. Compute Controller provides compute server resources, and the Object Store component provides storage services. Auth Manager provides authentication and authorization services. Volume Controller provides fast and permanent block-level storage for the comput servers. Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. Scheduler selects the most suitable compute controller to host an instance.
+ Nova consists of seven main components, with the Cloud Controller component representing the global state and interacting with all other components. API Server acts as the Web services front end for the cloud controller. Compute Controller provides compute server resources, and the Object Store component provides storage services. Auth Manager provides authentication and authorization services. Volume Controller provides fast and permanent block-level storage for the compute servers. Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. Scheduler selects the most suitable compute controller to host an instance.
-.. todo:: Insert Figure 1 image from "An OpenStack Network Overview" contributed by Citrix
+ .. todo:: Insert Figure 1 image from "An OpenStack Network Overview" contributed by Citrix
-Nova is built on a shared-nothing, messaging-based architecture. All of the major components, that is Compute Controller, Volume Controller, Network Controller, and Object Store can be run on multiple servers. Cloud Controller communicates with Object Store via HTTP (Hyper Text Transfer Protocol), but it communicates with Scheduler, Network Controller, and Volume Controller via AMQP (Advanced Message Queue Protocol). To avoid blocking each component while waiting for a response, Nova uses asynchronous calls, with a call-back that gets triggered when a response is received.
+ Nova is built on a shared-nothing, messaging-based architecture. All of the major components, that is Compute Controller, Volume Controller, Network Controller, and Object Store can be run on multiple servers. Cloud Controller communicates with Object Store via HTTP (Hyper Text Transfer Protocol), but it communicates with Scheduler, Network Controller, and Volume Controller via AMQP (Advanced Message Queue Protocol). To avoid blocking each component while waiting for a response, Nova uses asynchronous calls, with a call-back that gets triggered when a response is received.
-To achieve the shared-nothing property with multiple copies of the same component, Nova keeps all the cloud system state in a distributed data store. Updates to system state are written into this store, using atomic transactions when required. Requests for system state are read out of this store. In limited cases, the read results are cached within controllers for short periods of time (for example, the current list of system users.)
+ To achieve the shared-nothing property with multiple copies of the same component, Nova keeps all the cloud system state in a distributed data store. Updates to system state are written into this store, using atomic transactions when required. Requests for system state are read out of this store. In limited cases, the read results are cached within controllers for short periods of time (for example, the current list of system users.)
-.. note:: The database schema is available on the `OpenStack Wiki <http://wiki.openstack.org/NovaDatabaseSchema>_`.
+ .. note:: The database schema is available on the `OpenStack Wiki <http://wiki.openstack.org/NovaDatabaseSchema>`_.
Nova Network Strategies
-----------------------
-In Nova, users organize their cloud resources in projects. A Nova project consists of a number of VM instances created by a user. For each VM instance, Nova assigns to it a private IP address. (Currently, Nova only supports Linux bridge networking that allows the virtual interfaces to connect to the outside network through the physical interface. Other virtual network technologies, such as Open vSwitch, could be supported in the future.)
-
Currently, Nova supports three kinds of networks, implemented in three "Network Manager" types respectively: Flat Network Manager, Flat DHCP Network Manager, and VLAN Network Manager. The three kinds of networks can co-exist in a cloud system. However, the scheduler for selecting the type of network for a given project is not yet implemented. Here is a brief description of each of the different network strategies, with a focus on the VLAN Manager in a separate section.
-Flat Network
-++++++++++++
-
-IP addresses for VM instances are grabbed from a subnet specified by the network administrator, and injected into the image on launch. All instances of the system are attached to the same Linux networking bridge, configured manually by the network administrator both on the network controller hosting the network and on the computer controllers hosting the instances.
-
-Flat Network with DHCP
-++++++++++++++++++++++
-
-IP addresses for VM instances are grabbed from a subnet specified by the network administrator. Similar to the flat network, a single Linux networking bridge is created and configured manually by the network administrator and used for all instances. A DHCP server is started to pass out IP addresses to VM instances from the specified subnet.
-
-VLAN Network
-++++++++++++
-
-Each project gets its own VLAN, Linux networking bridge, and subnet. The subnets are specified by the network administrator, and are assigned dynamically to a project when required. A DHCP Server is started for each VLAN to pass out IP addresses to VM instances from the subnet assigned to the project. All instances belonging to one project are bridged into the same VLAN for that project. The Linux networking bridges and VLANs are created by Nova when required, described in more detail in Nova VLAN Network Management Implementation.
-
-Nova VLAN Networks
-------------------
-
-Because the flat network and flat DhCP network are simple to understand and yet do not scale well enough for real-world cloud systems, this section focuses on the VLAN network implementation by the VLAN Network Manager.
-
-In the VLAN network mode, all the VM instances of a project are connected together in a VLAN with the specified private subnet. Each running VM instance is assigned an IP address within the given private subnet.
-
-.. todo:: Insert Figure 2 from "An OpenStack Network Overview" contributed by Citrix
-
-While network traffic between VM instances belonging to the same VLAN is always open, Nova can enforce isolation of network traffic between different projects by enforcing one VLAN per project.
-
-In addition, the network administrator can specify a pool of public IP addresses that users may allocate and then assign to VMs, either at boot or dynamically at run-time. This capability is similar to Amazon's 'elastic IPs'. A public IP address may be associated with a running instances, allowing the VM instance to be accessed from the public network. The public IP addresses are accessible from the network host and NATed to the private IP address of the project.
-
-.. todo: Describe how a public IP address could be associated with a project (a VLAN)
-
-Nova VLAN Network Management Implementation
--------------------------------------------
-
-This section describes the current (November 2010) implementation of the network structure of Nova.
-
-The network assignment to a project, and IP address assignment to a VM instance, are triggered when a user starts to run a VM instance. When running a VM instance, a user needs to specify a project for the instances, and the security groups (described in Security Groups) when the instance wants to join. If this is the first instance to be created for the project, then Nova (the cloud controller) needs to find a network controller to be the network host for the project; it then sets up a private network by finding an unused VLAN id, an unused subnet, and then the controller assigns them to the project, it also assigns a name to the project's Linux bridge, and allocating a private IP within the project's subnet for the new instance.
+Read more about Nova network strategies here:
-If the instance the user wants to start is not the project's first, a subnet and a VLAN must have already been assigned to the project; therefore the system needs only to find an available IP address within the subnet and assign it to the new starting instance. If there is no private IP available within the subnet, an exception will be raised to the cloud controller, and the VM creation cannot proceed.
+.. toctree::
+ :maxdepth: 1
+ network.flat.rst
+ network.vlan.rst
-.. todo: insert the name of the Linux bridge, is it always named bridge?
+Network Management Commands
+---------------------------
-Managing Networks
-=================
+Admins and Network Administrators can use the 'nova-manage' command to manage network resources:
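+
+For example, creating the pool of project networks might look like this (the arguments shown are illustrative assumptions; run nova-manage without arguments to list the current subcommands for your release)::
+
+ nova-manage network create 10.0.0.0/8 3 16
+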
VPN Management
~~~~~~~~~~~~~~
diff --git a/doc/source/adminguide/multi.node.install.rst b/doc/source/adminguide/multi.node.install.rst
index d2afb6212..fa0652bc8 100644
--- a/doc/source/adminguide/multi.node.install.rst
+++ b/doc/source/adminguide/multi.node.install.rst
@@ -15,8 +15,8 @@
License for the specific language governing permissions and limitations
under the License.
-Installing Nova Development Snapshot on Multiple Servers
-========================================================
+Installing Nova on Multiple Servers
+===================================
When you move beyond evaluating the technology and into building an actual
production environment, you will need to know how to configure your datacenter
@@ -48,16 +48,23 @@ Step 1 Use apt-get to get the latest code
-----------------------------------------
1. Setup Nova PPA with https://launchpad.net/~nova-core/+archive/ppa.
+
+::
+
+ sudo apt-get install python-software-properties
+ sudo add-apt-repository ppa:nova-core/ppa
+
2. Run update.
::
- update
+ sudo apt-get update
3. Install nova-pkgs (dependencies should be automatically installed).
::
+ sudo apt-get install python-greenlet
sudo apt-get install nova-common nova-doc python-nova nova-api nova-network nova-objectstore nova-scheduler
It is highly likely that there will be errors when the nova services come up since they are not yet configured. Don't worry, you're only at step 1!
@@ -103,18 +110,49 @@ Note: CC_ADDR=<the external IP address of your cloud controller>
--FAKE_subdomain=ec2 # workaround for ec2/euca api
+5. Create a nova group
-5. nova-objectstore specific flags < no specific config needed >
+::
+
+ sudo addgroup nova
+
+6. nova-objectstore specific flags (no specific config needed)
Config files should have their owner set to root:nova, and mode set to 0640, since they contain your MySQL server's root password.
+::
+
+ cd /etc/nova
+ chown -R root:nova .
+
Step 3 Setup the sql db
-----------------------
-1. First you 'preseed' (using vishy's directions here). Run this as root.
+1. First you 'preseed' the MySQL install (using vishy's :doc:`../quickstart`). Run this as root.
+
+::
+
+ sudo apt-get install bzr git-core
+ sudo bash
+ export MYSQL_PASS=nova
+
+
+::
+
+ cat <<MYSQL_PRESEED | debconf-set-selections
+ mysql-server-5.1 mysql-server/root_password password $MYSQL_PASS
+ mysql-server-5.1 mysql-server/root_password_again password $MYSQL_PASS
+ mysql-server-5.1 mysql-server/start_on_boot boolean true
+ MYSQL_PRESEED
+
2. Install mysql
-3. Configure mysql so that external users can access it, and setup nova db.
+
+::
+
+ sudo apt-get install -y mysql-server
+
4. Edit /etc/mysql/my.cnf, set this line: bind-address=0.0.0.0, and then send MySQL a SIGHUP or restart it
+
5. create nova's db
::
@@ -130,6 +168,19 @@ Step 3 Setup the sql db
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
SET PASSWORD FOR 'root'@'%' = PASSWORD('nova');
+7. branch and install Nova
+
+::
+
+ sudo -i
+ cd ~
+ export USE_MYSQL=1
+ export MYSQL_PASS=nova
+ git clone https://github.com/vishvananda/novascript.git
+ cd novascript
+ ./nova.sh branch
+ ./nova.sh install
+ ./nova.sh run
Step 4 Setup Nova environment
-----------------------------
diff --git a/doc/source/adminguide/network.flat.rst b/doc/source/adminguide/network.flat.rst
new file mode 100644
index 000000000..1b8661a40
--- /dev/null
+++ b/doc/source/adminguide/network.flat.rst
@@ -0,0 +1,60 @@
+..
+ Copyright 2010 United States Government as represented by the
+ Administrator of the National Aeronautics and Space Administration.
+ All Rights Reserved.
+
+ Licensed under the Apache License, Version 2.0 (the "License"); you may
+ not use this file except in compliance with the License. You may obtain
+ a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ License for the specific language governing permissions and limitations
+ under the License.
+
+
+Flat Network Mode (Original and Flat DHCP)
+==========================================
+
+Flat network mode removes most of the complexity of VLAN mode by simply
+bridging all instance interfaces onto a single network.
+
+There are two variations of flat mode that differ mostly in how IP addresses
+are given to instances.
+
+
+Original Flat Mode
+------------------
+IP addresses for VM instances are grabbed from a subnet specified by the network administrator, and injected into the image on launch. All instances of the system are attached to the same Linux networking bridge, configured manually by the network administrator both on the network controller hosting the network and on the compute controllers hosting the instances. To recap:
+
+* Each compute host creates a single bridge for all instances to use to attach to the external network.
+* The networking configuration is injected into the instance before it is booted or it is obtained by a guest agent installed in the instance.
+
+Note that the configuration injection currently only works on linux-style systems that keep networking
+configuration in /etc/network/interfaces.
+
+
+Flat DHCP Mode
+--------------
+IP addresses for VM instances are grabbed from a subnet specified by the network administrator. Similar to the flat network, a single Linux networking bridge is created and configured manually by the network administrator and used for all instances. A DHCP server is started to pass out IP addresses to VM instances from the specified subnet. To recap:
+
+* Like flat mode, all instances are attached to a single bridge on the compute node.
+* In addition a DHCP server is running to configure instances.
+
+Implementation
+--------------
+
+The network nodes do not act as a default gateway in flat mode. Instances
+are given public IP addresses.
+
+Compute nodes have iptables/ebtables entries created per project and
+instance to protect against IP/MAC address spoofing and ARP poisoning.
+
+
+Examples
+--------
+
+.. todo:: add flat network mode configuration examples
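+
+As a rough starting point, a flat-mode flagfile might contain lines like these (the flag names and values are assumptions drawn from the current code and may change between releases, so verify them against your install)::
+
+ --network_manager=nova.network.manager.FlatManager
+ --fixed_range=192.168.0.0/24
+ --flat_network_bridge=br100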
diff --git a/doc/source/adminguide/network.vlan.rst b/doc/source/adminguide/network.vlan.rst
new file mode 100644
index 000000000..a7cccc098
--- /dev/null
+++ b/doc/source/adminguide/network.vlan.rst
@@ -0,0 +1,179 @@
+..
+ Copyright 2010 United States Government as represented by the
+ Administrator of the National Aeronautics and Space Administration.
+ All Rights Reserved.
+
+ Licensed under the Apache License, Version 2.0 (the "License"); you may
+ not use this file except in compliance with the License. You may obtain
+ a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ License for the specific language governing permissions and limitations
+ under the License.
+
+
+VLAN Network Mode
+=================
+VLAN Network Mode is the default mode for Nova. It provides a private network
+segment for each project's instances that can be accessed via a dedicated
+VPN connection from the Internet.
+
+In this mode, each project gets its own VLAN, Linux networking bridge, and subnet. The subnets are specified by the network administrator, and are assigned dynamically to a project when required. A DHCP Server is started for each VLAN to pass out IP addresses to VM instances from the subnet assigned to the project. All instances belonging to one project are bridged into the same VLAN for that project. The Linux networking bridges and VLANs are created by Nova when required, described in more detail in Nova VLAN Network Management Implementation.
+
+..
+ (this text revised above)
+ Because the flat network and flat DHCP network are simple to understand and yet do not scale well enough for real-world cloud systems, this section focuses on the VLAN network implementation by the VLAN Network Manager.
+
+
+ In the VLAN network mode, all the VM instances of a project are connected together in a VLAN with the specified private subnet. Each running VM instance is assigned an IP address within the given private subnet.
+
+.. todo:: Insert Figure 2 from "An OpenStack Network Overview" contributed by Citrix
+
+While network traffic between VM instances belonging to the same VLAN is always open, Nova can enforce isolation of network traffic between different projects by enforcing one VLAN per project.
+
+In addition, the network administrator can specify a pool of public IP addresses that users may allocate and then assign to VMs, either at boot or dynamically at run-time. This capability is similar to Amazon's 'elastic IPs'. A public IP address may be associated with a running instance, allowing the VM instance to be accessed from the public network. The public IP addresses are accessible from the network host and NATed to the private IP address of the project.
+
+.. todo:: Describe how a public IP address could be associated with a project (a VLAN)
+
+This is the default networking mode and supports the most features. For a multiple-machine installation, it requires a switch that supports host-managed vlan tagging. In this mode, nova will create a vlan and bridge for each project. The project gets a range of private ips that are only accessible from inside the vlan. In order for a user to access the instances in their project, a special vpn instance (code-named :ref:`cloudpipe <cloudpipe>`) needs to be created. Nova generates a certificate and key for the user to access the vpn and starts the vpn automatically. More information on cloudpipe can be found :ref:`here <cloudpipe>`.
+
+The following diagram illustrates the communication that occurs between the vlan (the dashed box) and the public internet (represented by the two clouds):
+
+.. image:: /images/cloudpipe.png
+ :width: 100%
+
+Goals
+-----
+
+* each project is in a protected network segment
+
+ * RFC-1918 IP space
+ * public IP via NAT
+ * no default inbound Internet access without public NAT
+ * limited (project-admin controllable) outbound Internet access
+ * limited (project-admin controllable) access to other project segments
+ * all connectivity to instance and cloud API is via VPN into the project segment
+
+* common DMZ segment for support services (only visible from project segment)
+
+ * metadata
+ * dashboard
+
+
+Limitations
+-----------
+
+* Projects / cluster limited to available VLANs in switching infrastructure
+* Requires VPN for access to project segment
+
+
+Implementation
+--------------
+Currently Nova segregates project VLANs using 802.1q VLAN tagging in the
+switching layer. Compute hosts create VLAN-specific interfaces and bridges
+as required.
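+
+In practice this boils down to a few standard Linux commands, roughly like the following sketch (the names eth1, vlan100 and br100 are assumptions for illustration)::
+
+ vconfig set_name_type VLAN_PLUS_VID_NO_PAD
+ vconfig add eth1 100
+ ifconfig vlan100 up
+ brctl addbr br100
+ brctl addif br100 vlan100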
+
+The network nodes act as default gateway for project networks and contain
+all of the routing and firewall rules implementing security groups. The
+network node also handles DHCP to provide instance IPs for each project.
+
+VPN access is provided by running a small instance called CloudPipe
+on the IP immediately following the gateway IP for each project. The
+network node maps a dedicated public IP/port to the CloudPipe instance.
+
+Compute nodes have per-VLAN interfaces and bridges created as required.
+These do NOT have IP addresses in the host to protect host access.
+Compute nodes have iptables/ebtables entries created per project and
+instance to protect against IP/MAC address spoofing and ARP poisoning.
+
+The network assignment to a project, and IP address assignment to a VM instance, are triggered when a user starts to run a VM instance. When running a VM instance, a user needs to specify a project for the instance, and the security groups (described in Security Groups) that the instance wants to join. If this is the first instance to be created for the project, then Nova (the cloud controller) needs to find a network controller to be the network host for the project; it then sets up a private network by finding an unused VLAN id and an unused subnet, assigning them to the project, naming the project's Linux bridge, and allocating a private IP within the project's subnet for the new instance.
+
+If the instance the user wants to start is not the project's first, a subnet and a VLAN have already been assigned to the project, so the system only needs to find an available IP address within the subnet and assign it to the new instance. If no private IP is available within the subnet, an exception is raised to the cloud controller and the VM creation cannot proceed.
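
The allocation logic described above can be sketched in Python. This is a simplified illustration under assumed defaults (a starting VLAN of 100, /24 project networks, and reserving the network, gateway, and CloudPipe addresses); it is not Nova's actual code:

```python
# Simplified sketch of project network and instance IP allocation.
import ipaddress

class NoMoreAddresses(Exception):
    pass

class Project:
    def __init__(self, name):
        self.name = name
        self.vlan = None        # assigned when the first instance starts
        self.subnet = None      # assigned when the first instance starts
        self.allocated = set()  # IPs already handed out

free_vlans = list(range(100, 4095))
free_subnets = [ipaddress.ip_network(f"10.0.{i}.0/24") for i in range(256)]

def allocate_ip(project):
    """Assign network resources for a new instance in `project`."""
    if project.vlan is None:
        # First instance: grab an unused VLAN id and subnet for the project.
        project.vlan = free_vlans.pop(0)
        project.subnet = free_subnets.pop(0)
    # Skip the gateway (.1) and the CloudPipe VPN address (.2).
    for ip in list(project.subnet.hosts())[2:]:
        if ip not in project.allocated:
            project.allocated.add(ip)
            return str(ip)
    raise NoMoreAddresses(project.name)
```

Under these assumptions the first project receives VLAN 100, subnet 10.0.0.0/24, and a first instance IP of 10.0.0.3, mirroring the addresses used in the examples later in this guide.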
+
+.. todo:: insert the name of the Linux bridge, is it always named bridge?
+
+External Infrastructure
+-----------------------
+
+Nova assumes the following is available:
+
+* DNS
+* NTP
+* Internet connectivity
+
+
+Example
+-------
+
+This example network configuration demonstrates most of the capabilities
+of VLAN Mode. It splits administrative access to the nodes onto a dedicated
+management network and uses dedicated network nodes to handle all
+routing and gateway functions.
+
+It uses a 10Gb network for instance traffic and a 1Gb network for management.
+
+
+Hardware
+~~~~~~~~
+
+* All nodes have a minimum of two NICs for management and production.
+
+ * management is 1Gb
+ * production is 10Gb
+ * add additional NICs for bonding or HA/performance
+
+* network nodes should have an additional NIC dedicated to public Internet traffic
+* switch needs to support enough simultaneous VLANs for the number of projects
+* production network configured as 802.1q trunk on switch
+
+
+Operation
+~~~~~~~~~
+
+The network node controls the project network configuration:
+
+* assigns each project a VLAN and private IP range
+* starts dnsmasq on project VLAN to serve private IP range
+* configures iptables on network node for default project access
+* launches CloudPipe instance and configures iptables access
+
+When starting an instance, the network node:
+
+* sets up a VLAN interface and bridge on each host as required when an
+ instance is started on that host
+* assigns a private IP to the instance
+* generates a MAC address for the instance
+* updates dnsmasq with the instance's IP/MAC
+
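The per-instance record handed to dnsmasq above is just a MAC/IP/hostname tuple. A sketch of generating one (the `02:16:3e` prefix and the hostsfile field order here are illustrative assumptions, not necessarily what Nova emits):

```python
# Sketch: generate a locally-administered MAC and a dnsmasq hostsfile entry.
import random

def generate_mac(prefix="02:16:3e"):
    # A leading 02 sets the locally-administered bit, avoiding vendor OUIs.
    tail = ":".join("%02x" % random.randint(0, 255) for _ in range(3))
    return f"{prefix}:{tail}"

def hostsfile_entry(mac, ip, hostname):
    # dnsmasq --dhcp-hostsfile accepts comma-separated mac,hostname,ip lines.
    return f"{mac},{hostname},{ip}"

mac = generate_mac()
print(hostsfile_entry(mac, "10.0.0.3", "i-1b0bh8n"))
```
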
+When starting an instance, the compute node:
+
+* sets up a VLAN interface and bridge on each host as required when an
+ instance is started on that host
+
+
+Setup
+~~~~~
+
+* Assign VLANs in the switch:
+
+ * public Internet segment
+ * production network
+ * management network
+ * cluster DMZ
+
+* Assign a contiguous range of VLANs to Nova for project use.
+* Configure management NIC ports as management VLAN access ports.
+* Configure the management VLAN with Internet access as required.
+* Configure production NIC ports as 802.1q trunk ports.
+* Configure Nova (need to add specifics here)
+
+ * public IPs
+ * instance IPs
+ * project network size
+ * DMZ network
+
+.. todo:: need specific Nova configuration added
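
The todo above is still open; as a hedged placeholder, flags along these lines are typically involved (flag names are assumptions based on contemporary Nova defaults; verify them against your release before relying on them):

```
--public_interface=eth0
--vlan_interface=eth1
--vlan_start=100
--fixed_range=10.0.0.0/12
--network_size=256
```

Here `public_interface` is the NIC carrying public traffic, `vlan_interface` the trunked production NIC, `vlan_start` the first VLAN id handed to projects, `fixed_range` the pool carved into project subnets, and `network_size` the number of addresses per project network.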
diff --git a/doc/source/adminguide/single.node.install.rst b/doc/source/adminguide/single.node.install.rst
index 9ecb6d49a..27597962a 100644
--- a/doc/source/adminguide/single.node.install.rst
+++ b/doc/source/adminguide/single.node.install.rst
@@ -1,12 +1,344 @@
-Single Node Installation
-========================
-
-.. todo:: need extended notes on running a single machine
+Installing Nova on a Single Host
+================================
Nova can be run on a single machine, and it is recommended that new users practice managing this type of installation before graduating to multi-node systems.
-The fastest way to get a test cloud running is through our :doc:`../quickstart`.
+The fastest way to get a test cloud running is through our :doc:`../quickstart`, but for more detail on installing the system, read on.
+
+
+Steps 1 and 2: Get the latest Nova code and system software
+------------------------------------------------------------
+
+Depending on your system, the method for accomplishing this varies.
+
+.. toctree::
+ :maxdepth: 1
+
+ distros/ubuntu.10.04
+ distros/ubuntu.10.10
+ distros/others
+
+
+Step 3: Build and install Nova services
+---------------------------------------
+
+Switch to the base nova source directory.
+
+Then type or copy/paste the following lines to build and install the Python code for OpenStack Compute.
+
+::
+
+ sudo python setup.py build
+ sudo python setup.py install
+
+
+When the installation is complete, you'll see the following lines:
+
+::
+
+ Installing nova-network script to /usr/local/bin
+ Installing nova-volume script to /usr/local/bin
+ Installing nova-objectstore script to /usr/local/bin
+ Installing nova-manage script to /usr/local/bin
+ Installing nova-scheduler script to /usr/local/bin
+ Installing nova-dhcpbridge script to /usr/local/bin
+ Installing nova-compute script to /usr/local/bin
+ Installing nova-instancemonitor script to /usr/local/bin
+ Installing nova-api script to /usr/local/bin
+ Installing nova-import-canonical-imagestore script to /usr/local/bin
+
+ Installed /usr/local/lib/python2.6/dist-packages/nova-2010.1-py2.6.egg
+ Processing dependencies for nova==2010.1
+ Finished processing dependencies for nova==2010.1
+
+
+Step 4: Create a Nova administrator
+-----------------------------------
+Type or copy/paste the following line to create a user named "anne"::
+
+ sudo nova-manage user admin anne
+
+You will see an access key and a secret key exported, such as these made-up ones::
+
+ export EC2_ACCESS_KEY=4e6498a2-blah-blah-blah-17d1333t97fd
+ export EC2_SECRET_KEY=0a520304-blah-blah-blah-340sp34k05bbe9a7
+
+
+Step 5: Create a project with the user you created
+--------------------------------------------------
+Type or copy/paste the following line to create a project named IRT (for Ice Road Truckers, of course) with the newly created user anne.
+
+::
+
+ sudo nova-manage project create IRT anne
+
+You should see output like this::
+
+ Generating RSA private key, 1024 bit long modulus
+ .....++++++
+ ..++++++
+ e is 65537 (0x10001)
+ Using configuration from ./openssl.cnf
+ Check that the request matches the signature
+ Signature ok
+ The Subject's Distinguished Name is as follows
+ countryName :PRINTABLE:'US'
+ stateOrProvinceName :PRINTABLE:'California'
+ localityName :PRINTABLE:'MountainView'
+ organizationName :PRINTABLE:'AnsoLabs'
+ organizationalUnitName:PRINTABLE:'NovaDev'
+ commonName :PRINTABLE:'anne-2010-10-12T21:12:35Z'
+ Certificate is to be certified until Oct 12 21:12:35 2011 GMT (365 days)
+
+ Write out database with 1 new entries
+ Data Base Updated
+
+
+Step 6: Unzip the nova.zip
+--------------------------
+You should have a nova.zip file in your current working directory. Unzip it with this command:
-Install Dependencies
+::
+
+ unzip nova.zip
+
+
+You'll see these files extracted.
+
+::
+
+ Archive: nova.zip
+ extracting: novarc
+ extracting: pk.pem
+ extracting: cert.pem
+ extracting: nova-vpn.conf
+ extracting: cacert.pem
+
+
+Step 7: Source the rc file
+--------------------------
+Type or copy/paste the following to source the novarc file in your current working directory.
+
+::
+
+ . novarc
+
+
+Step 8: Pat yourself on the back :)
+-----------------------------------
+Congratulations! Your cloud is up and running: you've created an admin user, retrieved the user's credentials, and put them in your environment.
+
+Now you need an image.
+
+
+Step 9: Get an image
--------------------
+To make things easier, we've provided a small image on the Rackspace CDN. Use this command to get it on your server.
+
+::
+
+ wget http://c2477062.cdn.cloudfiles.rackspacecloud.com/images.tgz
+
+
+::
+
+ --2010-10-12 21:40:55-- http://c2477062.cdn.cloudfiles.rackspacecloud.com/images.tgz
+ Resolving cblah2.cdn.cloudfiles.rackspacecloud.com... 208.111.196.6, 208.111.196.7
+ Connecting to cblah2.cdn.cloudfiles.rackspacecloud.com|208.111.196.6|:80... connected.
+ HTTP request sent, awaiting response... 200 OK
+ Length: 58520278 (56M) [application/x-gzip]
+ Saving to: `images.tgz'
+
+ 100%[======================================>] 58,520,278 14.1M/s in 3.9s
+
+ 2010-10-12 21:40:59 (14.1 MB/s) - `images.tgz' saved [58520278/58520278]
+
+
+
+Step 10: Decompress the image file
+----------------------------------
+Use this command to extract the image files::
+
+ tar xvzf images.tgz
+
+You get a directory listing like so::
+
+ images
+ |-- aki-lucid
+ | |-- image
+ | `-- info.json
+ |-- ami-tiny
+ | |-- image
+ | `-- info.json
+ `-- ari-lucid
+ |-- image
+ `-- info.json
+
+Step 11: Send commands to upload sample image to the cloud
+----------------------------------------------------------
+
+Type or copy/paste the following command to create a manifest for the kernel::
+
+ euca-bundle-image -i images/aki-lucid/image -p kernel --kernel true
+
+You should see this in response::
+
+ Checking image
+ Tarring image
+ Encrypting image
+ Splitting image...
+ Part: kernel.part.0
+ Generating manifest /tmp/kernel.manifest.xml
+
+Type or copy/paste the following command to create a manifest for the ramdisk::
+
+ euca-bundle-image -i images/ari-lucid/image -p ramdisk --ramdisk true
+
+You should see this in response::
+
+ Checking image
+ Tarring image
+ Encrypting image
+ Splitting image...
+ Part: ramdisk.part.0
+ Generating manifest /tmp/ramdisk.manifest.xml
+
+Type or copy/paste the following command to upload the kernel bundle::
+
+ euca-upload-bundle -m /tmp/kernel.manifest.xml -b mybucket
+
+You should see this in response::
+
+ Checking bucket: mybucket
+ Creating bucket: mybucket
+ Uploading manifest file
+ Uploading part: kernel.part.0
+ Uploaded image as mybucket/kernel.manifest.xml
+
+Type or copy/paste the following command to upload the ramdisk bundle::
+
+ euca-upload-bundle -m /tmp/ramdisk.manifest.xml -b mybucket
+
+You should see this in response::
+
+ Checking bucket: mybucket
+ Uploading manifest file
+ Uploading part: ramdisk.part.0
+ Uploaded image as mybucket/ramdisk.manifest.xml
+
+Type or copy/paste the following command to register the kernel and get its ID::
+
+ euca-register mybucket/kernel.manifest.xml
+
+You should see this in response::
+
+ IMAGE ami-fcbj2non
+
+Type or copy/paste the following command to register the ramdisk and get its ID::
+
+ euca-register mybucket/ramdisk.manifest.xml
+
+You should see this in response::
+
+ IMAGE ami-orukptrc
+
+Type or copy/paste the following command to create a manifest for the machine image, using the kernel and ramdisk IDs returned by the previous commands::
+
+ euca-bundle-image -i images/ami-tiny/image -p machine --kernel ami-fcbj2non --ramdisk ami-orukptrc
+
+You should see this in response::
+
+ Checking image
+ Tarring image
+ Encrypting image
+ Splitting image...
+ Part: machine.part.0
+ Part: machine.part.1
+ Part: machine.part.2
+ Part: machine.part.3
+ Part: machine.part.4
+ Generating manifest /tmp/machine.manifest.xml
+
+Type or copy/paste the following command to upload the machine image bundle::
+
+ euca-upload-bundle -m /tmp/machine.manifest.xml -b mybucket
+
+You should see this in response::
+
+ Checking bucket: mybucket
+ Uploading manifest file
+ Uploading part: machine.part.0
+ Uploading part: machine.part.1
+ Uploading part: machine.part.2
+ Uploading part: machine.part.3
+ Uploading part: machine.part.4
+ Uploaded image as mybucket/machine.manifest.xml
+
+Type or copy/paste the following command to register the machine image and get its ID::
+
+ euca-register mybucket/machine.manifest.xml
+
+You should see this in response::
+
+ IMAGE ami-g06qbntt
+
+Type or copy/paste the following commands to register an SSH keypair for use in starting and accessing the instances::
+
+ euca-add-keypair mykey > mykey.priv
+ chmod 600 mykey.priv
+
+Type or copy/paste the following command to run an instance using the keypair and IDs that we previously created::
+
+ euca-run-instances ami-g06qbntt --kernel ami-fcbj2non --ramdisk ami-orukptrc -k mykey
+
+You should see this in response::
+
+ RESERVATION r-0at28z12 IRT
+ INSTANCE i-1b0bh8n ami-g06qbntt 10.0.0.3 10.0.0.3 scheduling mykey (IRT, None) m1.small 2010-10-18 19:02:10.443599
+
+Type or copy/paste the following command to watch as the scheduler launches and boots your instance::
+
+ euca-describe-instances
+
+You should see this in response::
+
+ RESERVATION r-0at28z12 IRT
+ INSTANCE i-1b0bh8n ami-g06qbntt 10.0.0.3 10.0.0.3 launching mykey (IRT, cloud02) m1.small 2010-10-18 19:02:10.443599
+
+Type or copy/paste the following command to see when loading is complete and the instance is running::
+
+ euca-describe-instances
+
+You should see this in response::
+
+ RESERVATION r-0at28z12 IRT
+ INSTANCE i-1b0bh8n ami-g06qbntt 10.0.0.3 10.0.0.3 running mykey (IRT, cloud02) 0 m1.small 2010-10-18 19:02:10.443599
+
+Type or copy/paste the following command to check that the virtual machine is running::
+
+ virsh list
+
+You should see this in response::
+
+ Id Name State
+ ----------------------------------
+ 1 2842445831 running
+
+Type or copy/paste the following command to ssh to the instance using your private key::
+
+ ssh -i mykey.priv root@10.0.0.3
+
+
+Troubleshooting Installation
+----------------------------
+
+If you see an "error loading the config file './openssl.cnf'" error, you can copy the openssl.cnf file to the location where Nova expects it and reboot, then try the command again.
+
+::
+
+ cp /etc/ssl/openssl.cnf ~
+ sudo reboot
+
+
+
diff --git a/doc/source/community.rst b/doc/source/community.rst
index 61e2536c2..bfb93414c 100644
--- a/doc/source/community.rst
+++ b/doc/source/community.rst
@@ -61,7 +61,8 @@ Nova on Launchpad
Launchpad is a code hosting service that hosts the Nova source code. From
Launchpad you can report bugs, ask questions, and register blueprints (feature requests).
-`Launchpad Nova Page <http://launchpad.net/nova>`_
+* `Learn about how to use bzr with launchpad <http://wiki.openstack.org/LifeWithBzrAndLaunchpad>`_
+* `Launchpad Nova Page <http://launchpad.net/nova>`_
OpenStack Blog
--------------
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 2f2d97c44..ef447ca81 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -24,9 +24,14 @@ sys.path.insert(0, os.path.abspath('./'))
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-# extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'ext.nova_todo', 'sphinx.ext.coverage', 'sphinx.ext.pngmath', 'sphinx.ext.ifconfig','sphinx.ext.graphviz', 'ext.nova_autodoc']
+
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'ext.nova_todo', 'sphinx.ext.coverage', 'sphinx.ext.pngmath', 'sphinx.ext.ifconfig','sphinx.ext.graphviz']
+# autodoc generation is a bit aggressive and a nuisance when doing heavy text edit cycles.
+# execute "export SPHINX_DEBUG=1" in your terminal to disable
+if not os.getenv('SPHINX_DEBUG'):
+ extensions += ['ext.nova_autodoc']
+
todo_include_todos = True
# Add any paths that contain templates here, relative to this directory.
diff --git a/doc/source/nova.concepts.rst b/doc/source/nova.concepts.rst
index ce251dd14..ddf0f1b82 100644
--- a/doc/source/nova.concepts.rst
+++ b/doc/source/nova.concepts.rst
@@ -31,7 +31,7 @@ run on your host operating system, and exposes functionality over a web API.
This document does not attempt to explain fundamental concepts of cloud
computing, IaaS, virtualization, or other related technologies. Instead, it
-focues on describing how Nova's implementation of those concepts is achieved.
+focuses on describing how Nova's implementation of those concepts is achieved.
This page outlines concepts that you will need to understand as a user or
administrator of an OpenStack installation. Each section links to more
@@ -121,7 +121,7 @@ This is similar to the flat mode, in that all instances are attached to the same
VLAN DHCP Mode
~~~~~~~~~~~~~~
-This is the default networking mode and supports the most features. For multiple machine installation, it requires a switch that supports host-managed vlan tagging. In this mode, nova will create a vlan and bridge for each project. The project gets a range of private ips that are only accessible from inside the vlan. In order for a user to access the instances in their project, a special vpn instance (code named :ref:`cloudpipe <cloudpipe>`) needs to be created. Nova generates a certificate and key for the userto access the vpn and starts the vpn automatically. More information on cloudpipe can be found :ref:`here <cloudpipe>`.
+This is the default networking mode and supports the most features. For multiple machine installation, it requires a switch that supports host-managed vlan tagging. In this mode, nova will create a vlan and bridge for each project. The project gets a range of private ips that are only accessible from inside the vlan. In order for a user to access the instances in their project, a special vpn instance (code named :ref:`cloudpipe <cloudpipe>`) needs to be created. Nova generates a certificate and key for the user to access the vpn and starts the vpn automatically. More information on cloudpipe can be found :ref:`here <cloudpipe>`.
The following diagram illustrates how the communication that occurs between the vlan (the dashed box) and the public internet (represented by the two clouds)
@@ -168,8 +168,7 @@ Concept: Plugins
Concept: IPC/RPC
----------------
-Rabbit!
-
+Nova utilizes the RabbitMQ implementation of the AMQP messaging standard for performing communication between the various nova services. This message queuing service is used for both local and remote communication because Nova is designed so that there is no requirement that any of the services exist on the same physical machine. RabbitMQ in particular is very robust and provides the efficiency and reliability that Nova needs. More information about RabbitMQ can be found at http://www.rabbitmq.com/.
Concept: Fakes
--------------
diff --git a/doc/source/quickstart.rst b/doc/source/quickstart.rst
index acf303f91..ae2b64d8a 100644
--- a/doc/source/quickstart.rst
+++ b/doc/source/quickstart.rst
@@ -18,8 +18,8 @@
Nova Quickstart
===============
-.. todo::
-
+.. todo::
+ P1 (this is one example of how to use priority syntax)
* Document the assumptions about pluggable interfaces (sqlite3 instead of
mysql, etc) (todd)
* Document env vars that can change things (USE_MYSQL, HOST_IP) (todd)
@@ -56,11 +56,9 @@ By tweaking the environment that nova.sh run in, you can build slightly
different configurations (though for more complex setups you should see
:doc:`/adminguide/getting.started` and :doc:`/adminguide/multi.node.install`).
-HOST_IP
-~~~~~~~
-
-**Default**: address of first interface from the ifconfig command
-**Values**: 127.0.0.1, or any other valid address
+* HOST_IP
+ * Default: address of first interface from the ifconfig command
+ * Values: 127.0.0.1, or any other valid address
TEST
~~~~
@@ -166,3 +164,15 @@ Then you can destroy the screen:
If things get particularly messed up, you might need to do some more intense
cleanup. Be careful, the following command will manually destroy all running
virsh instances and attempt to delete all vlans and bridges.
+
+::
+
+ ./nova.sh scrub
+
+You can edit files in the install directory or do a bzr pull to pick up new versions. You only need to do
+
+::
+
+ ./nova.sh run
+
+to run nova after the first install. The database should be cleaned up on each run. \ No newline at end of file
diff --git a/doc/source/service.architecture.rst b/doc/source/service.architecture.rst
index b621dcfa5..28a32bec6 100644
--- a/doc/source/service.architecture.rst
+++ b/doc/source/service.architecture.rst
@@ -17,7 +17,7 @@ Nova’s Cloud Fabric is composed of the following major components:
API Server
--------------------------------------------------
-At the heart of the cloud framework is an API Server. This API Server makes command and control [#f80]_ of the hypervisor, storage, and networking programmatically available to users in realization of the definition of cloud computing.
+At the heart of the cloud framework is an API Server. This API Server makes command and control of the hypervisor, storage, and networking programmatically available to users in realization of the definition of cloud computing.
The API endpoints are basic http web services which handle authentication, authorization, and basic command and control functions using various API interfaces under the Amazon, Rackspace, and related models. This enables API compatibility with multiple existing tool sets created for interaction with offerings from other vendors. This broad compatibility prevents vendor lock-in.
@@ -48,7 +48,7 @@ The Network Controller manages the networking resources on host machines. The A
Volume Workers
--------------------------------------------------
-Volume Workers interact with iSCSI storage to manage LVM-based [#f89]_ instance volumes. Specific functions include:
+Volume Workers interact with iSCSI storage to manage LVM-based instance volumes. Specific functions include:
* Create Volumes
* Delete Volumes
@@ -57,4 +57,4 @@ Volume Workers interact with iSCSI storage to manage LVM-based [#f89]_ instance
Volumes may easily be transferred between instances, but may be attached to only a single instance at a time.
-.. todo:: image store description
+.. todo:: P2: image store description