Standalone Containers based Deployment

Warning

This is currently only supported in Rocky or newer versions.

This documentation explains how the underlying framework used by the Containerized Undercloud deployment mechanism can be reused to deploy a single node capable of running OpenStack services for development.

Deploying a Standalone OpenStack node

  1. Log into the machine (baremetal or VM) where you want to install the standalone services, as a non-root user:

    ssh <non-root-user>@<machine>
    
  2. Enable needed repositories:

    RHEL

    Enable optional repo:

    sudo yum install -y yum-utils
    sudo yum-config-manager --enable rhelosp-rhel-7-server-opt
    

    Download and install the python2-tripleo-repos RPM from the current RDO repository. For example:

    sudo yum install -y https://trunk.rdoproject.org/centos7/current/python2-tripleo-repos-<version>.el7.centos.noarch.rpm
    

    Note

    tripleo-repos removes any repositories that it manages before each run. This means all repositories must be specified in a single tripleo-repos call. As an example, the correct way to install the current and ceph repos is to run tripleo-repos current ceph, not two separate calls.
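
    For example (illustrative; use the variant that matches your release):

    # correct: one call, both repos end up enabled
    sudo -E tripleo-repos current ceph
    # incorrect: the second call would remove the 'current' repo
    #   sudo -E tripleo-repos current
    #   sudo -E tripleo-repos ceph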

    Stable Branch

    Enable the appropriate repos for the desired release, as indicated below. Do not enable any other repos not explicitly marked for that release.

    Newton

    Enable the current Newton repositories:

    sudo -E tripleo-repos -b newton current
    

    Ceph

    Include the Ceph repo in the tripleo-repos call:

    sudo -E tripleo-repos -b newton current ceph
    

    Ocata

    Enable the current Ocata repositories:

    sudo -E tripleo-repos -b ocata current
    

    Ceph

    Include the Ceph repo in the tripleo-repos call:

    sudo -E tripleo-repos -b ocata current ceph
    

    Pike

    Enable the current Pike repositories:

    sudo -E tripleo-repos -b pike current
    

    Ceph

    Include the Ceph repo in the tripleo-repos call:

    sudo -E tripleo-repos -b pike current ceph
    

    Queens

    Enable the current Queens repositories:

    sudo -E tripleo-repos -b queens current
    

    Ceph

    Include the Ceph repo in the tripleo-repos call:

    sudo -E tripleo-repos -b queens current ceph
    

    Warning

    The remaining repository configuration steps below should not be done for stable releases!

    Run tripleo-repos to install the appropriate repositories. The option below will enable the latest master TripleO packages and the latest promoted packages for all other OpenStack services and dependencies. There are other repository configurations available in tripleo-repos; see its --help output for details.

    sudo -E tripleo-repos current-tripleo-dev
    

    Ceph

    Include the Ceph repository in the tripleo-repos command:

    sudo -E tripleo-repos current-tripleo-dev ceph
    
  3. Install the TripleO CLI, which will pull in all other necessary packages as dependencies:

    sudo yum install -y python-tripleoclient
    

    Ceph

    Install the ceph-ansible and util-linux packages.

    sudo yum install -y ceph-ansible util-linux
    
  4. Generate a file with the default ContainerImagePrepare value:

    openstack tripleo container image prepare default \
      --output-env-file $HOME/containers-prepare-parameters.yaml
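
    If needed, the generated file can be edited before deploying, for example to pin a different registry namespace or tag. The keys below are illustrative of the master defaults at the time of writing; use the values actually produced for your release:

    parameter_defaults:
      ContainerImagePrepare:
      - set:
          namespace: docker.io/tripleomaster
          tag: current-tripleo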
    

    Ceph

    Create a block device to be used as an OSD.

    sudo dd if=/dev/zero of=/var/lib/ceph-osd.img bs=1 count=0 seek=7G
    sudo losetup /dev/loop3 /var/lib/ceph-osd.img
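    # optional check that the loop device is attached (standard util-linux):
    sudo losetup -a | grep ceph-osd.img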
    

    Create a directory to back up the ceph-ansible fetch directory.

    mkdir /root/ceph_ansible_fetch
    
  5. Configure the basic standalone parameters, which include network configuration and some deployment options.

    The following configuration can be used for a system with 2 network interfaces. This configuration assumes the first interface is used for management and we will only configure the second interface. The deployment assumes the second interface has a “public” /24 network which will be used for the cloud endpoints and public VM connectivity.

    # EXAMPLE: 2 interfaces
    # NIC1 - management NIC (any address, left untouched)
    # NIC2 - OpenStack & Provider network NIC ($INTERFACE configured with $IP, $NETMASK)
    export IP=192.168.24.2
    export NETMASK=24
    export INTERFACE=eth1
    
    cat <<EOF > $HOME/standalone_parameters.yaml
    parameter_defaults:
      CloudName: $IP
      ControlPlaneStaticRoutes: []
      Debug: true
      DeploymentUser: $USER
      DnsServers:
        - 1.1.1.1
        - 8.8.8.8
      DockerInsecureRegistryAddress:
        - $IP:8787
      NeutronPublicInterface: $INTERFACE
      # domain name used by the host
      NeutronDnsDomain: localdomain
      # re-use ctlplane bridge for public net, defined in the standalone
      # net config (do not change unless you know what you're doing)
      NeutronBridgeMappings: datacentre:br-ctlplane
      NeutronPhysicalBridge: br-ctlplane
      # enable to force metadata for public net
      #NeutronEnableForceMetadata: true
      StandaloneEnableRoutedNetworks: false
      StandaloneHomeDir: $HOME
      StandaloneLocalMtu: 1500
      # Needed if running in a VM, not needed if on baremetal
      NovaComputeLibvirtType: qemu
    EOF
    

    The following configuration can be used for a system with a single network interface. This configuration assumes that the interface is shared for management and cloud functions. It requires at least 3 IP addresses to be available: 1 IP for the cloud endpoints, 1 for an internal router, and 1 for a floating IP.

    # EXAMPLE: 1 interface
    # NIC1 - management, OpenStack, & Provider network ($INTERFACE reconfigured using $IP, $NETMASK, $GATEWAY)
    export IP=192.168.24.2
    export NETMASK=24
    # We need the gateway as we'll be reconfiguring the eth0 interface
    export GATEWAY=192.168.24.1
    export INTERFACE=eth0
    
    cat <<EOF > $HOME/standalone_parameters.yaml
    parameter_defaults:
      CloudName: $IP
      # default gateway
      ControlPlaneStaticRoutes:
        - ip_netmask: 0.0.0.0/0
          next_hop: $GATEWAY
          default: true
      Debug: true
      DeploymentUser: $USER
      DnsServers:
        - 1.1.1.1
        - 8.8.8.8
      # needed for vip & pacemaker
      KernelIpNonLocalBind: 1
      DockerInsecureRegistryAddress:
        - $IP:8787
      NeutronPublicInterface: $INTERFACE
      # domain name used by the host
      NeutronDnsDomain: localdomain
      # re-use ctlplane bridge for public net, defined in the standalone
      # net config (do not change unless you know what you're doing)
      NeutronBridgeMappings: datacentre:br-ctlplane
      NeutronPhysicalBridge: br-ctlplane
      # enable to force metadata for public net
      #NeutronEnableForceMetadata: true
      StandaloneEnableRoutedNetworks: false
      StandaloneHomeDir: $HOME
      StandaloneLocalMtu: 1500
      # Needed if running in a VM, not needed if on baremetal
      NovaComputeLibvirtType: qemu
    EOF
    

    Ceph

    Create an additional environment file which directs ceph-ansible to use the block device and the fetch directory backup created earlier. In the same file, pass additional Ceph parameters for the OSD scenario and the Ceph networks. Set the placement group and replica counts to values which fit the number of OSDs being used; e.g. 32 and 1 are used for testing with only one OSD.

    cat <<EOF > $HOME/ceph_parameters.yaml
    parameter_defaults:
      CephAnsibleDisksConfig:
        devices:
          - /dev/loop3
        journal_size: 1024
      LocalCephAnsibleFetchDirectoryBackup: /root/ceph_ansible_fetch
      CephAnsibleExtraConfig:
        osd_scenario: collocated
        osd_objectstore: filestore
        cluster_network: 192.168.24.0/24
        public_network: 192.168.24.0/24
      CephPoolDefaultPgNum: 32
      CephPoolDefaultSize: 1
    EOF
    
  6. Run the deploy command:

    sudo openstack tripleo deploy \
      --templates \
      --local-ip=$IP/$NETMASK \
      -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml \
      -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml \
      -e $HOME/containers-prepare-parameters.yaml \
      -e $HOME/standalone_parameters.yaml \
      --output-dir $HOME \
      --standalone
    

    Ceph

    Include the Ceph environment files in the deploy command:

    sudo openstack tripleo deploy \
      --templates \
      --local-ip=$IP/$NETMASK \
      -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
      -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml \
      -e $HOME/containers-prepare-parameters.yaml \
      -e $HOME/standalone_parameters.yaml \
      -e $HOME/ceph_parameters.yaml \
      --output-dir $HOME \
      --standalone
    
  7. Check the deployed OpenStack services

    At the end of the deployment, a clouds.yaml configuration file is placed in the /root/.config/openstack folder. This can be used with the openstack client to query the OpenStack services.

    export OS_CLOUD=standalone
    openstack endpoint list
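    # a couple of extra sanity checks via standard client queries
    openstack catalog list
    openstack compute service list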
    

Manual deployments with Ansible

With the --output-only option enabled, the installation stops before the Ansible playbooks would normally be executed. Instead, it only creates the Heat stack, then downloads the Ansible deployment data and playbooks to --output-dir for manual execution.
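
For example, a manual run might look like the following. This is a hedged sketch: the ansible directory and file names (inventory.yaml, deploy_steps_playbook.yaml) are typical of config-download output, but they may vary by release.

    sudo openstack tripleo deploy \
      --templates \
      --local-ip=$IP/$NETMASK \
      -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml \
      -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml \
      -e $HOME/containers-prepare-parameters.yaml \
      -e $HOME/standalone_parameters.yaml \
      --output-dir $HOME \
      --standalone \
      --output-only

    # then run the downloaded playbooks manually; the playbooks land in a
    # directory created under --output-dir, whose name varies by release
    cd $HOME/standalone-ansible-*
    sudo ansible-playbook -i inventory.yaml deploy_steps_playbook.yaml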

Note

When updating an existing standalone installation, keep in mind the special cases described in Understanding undercloud/standalone stack updates. There is an additional case where the --force-stack-update flag might need to be used in --output-only mode: since the playbooks are not executed, you cannot know the results of the actual deployment before Ansible has run.
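
In that case the flag is appended to the --output-only command, for example:

    sudo openstack tripleo deploy ... --standalone --output-only --force-stack-update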

Example: 1 NIC, Using Compute with Tenant and Provider Networks

The following example is based on the single NIC configuration and assumes that the environment has at least 3 IP addresses available to it. The IPs are used for the following:

  • 1 IP address for the OpenStack services (this is the --local-ip from the deploy command)
  • 1 IP address used as a virtual router to provide connectivity to the tenant network (automatically assigned in this example)
  • The remaining IP addresses (at least 1) are used for Floating IPs on the provider network.

The following is an example of launching a VM post-deployment, using the private tenant network and the provider network.

  1. Create helper variables for the configuration:

    # standalone with tenant networking and provider networking
    export OS_CLOUD=standalone
    export GATEWAY=192.168.24.1
    export STANDALONE_HOST=192.168.24.2
    export PUBLIC_NETWORK_CIDR=192.168.24.0/24
    export PRIVATE_NETWORK_CIDR=192.168.100.0/24
    export PUBLIC_NET_START=192.168.24.4
    export PUBLIC_NET_END=192.168.24.5
    export DNS_SERVER=1.1.1.1
    
  2. Initial Nova and Glance setup:

    # nova flavor
    openstack flavor create --ram 512 --disk 1 --vcpus 1 --public tiny
    # basic cirros image
    wget https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    openstack image create cirros --container-format bare --disk-format qcow2 --public --file cirros-0.4.0-x86_64-disk.img
    # nova keypair for ssh
    ssh-keygen
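    # (non-interactive alternative, using standard ssh-keygen flags:
    #  ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa)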
    openstack keypair create --public-key ~/.ssh/id_rsa.pub default
    
  3. Set up a simple network security group:

    # create basic security group to allow ssh/ping/dns
    openstack security group create basic
    # allow ssh
    openstack security group rule create basic --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0
    # allow ping
    openstack security group rule create --protocol icmp basic
    # allow DNS
    openstack security group rule create --protocol udp --dst-port 53:53 basic
    
  4. Create Neutron Networks:

    openstack network create --external --provider-physical-network datacentre --provider-network-type flat public
    openstack network create --internal private
    openstack subnet create public-net \
        --subnet-range $PUBLIC_NETWORK_CIDR \
        --no-dhcp \
        --gateway $GATEWAY \
        --allocation-pool start=$PUBLIC_NET_START,end=$PUBLIC_NET_END \
        --network public
    openstack subnet create private-net \
        --subnet-range $PRIVATE_NETWORK_CIDR \
        --network private
    
  5. Create Virtual Router:

    # create router
    # NOTE(aschultz): In this case an IP will be automatically assigned
    # out of the allocation pool for the subnet.
    openstack router create vrouter
    openstack router set vrouter --external-gateway public
    openstack router add subnet vrouter private-net
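    # hedged check: the router should now report an external gateway IP
    # allocated from the public-net pool
    openstack router show vrouter -c external_gateway_info -f value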
    
  6. Create floating IP:

    # create floating ip
    openstack floating ip create public
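    # alternatively, capture the allocated address in a shell variable
    # (-f/-c are standard openstackclient output flags):
    #   FLOATING_IP=$(openstack floating ip create public -f value -c floating_ip_address)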
    
  7. Launch Instance:

    # launch instance
    openstack server create --flavor tiny --image cirros --key-name default --network private --security-group basic myserver
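    # hedged check: wait for the instance to reach ACTIVE before
    # assigning the floating IP
    openstack server show myserver -c status -f value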
    
  8. Assign Floating IP:

    openstack server add floating ip myserver <FLOATING_IP>
    
  9. Test SSH:

    # login to vm
    ssh cirros@<FLOATING_IP>
    

Networking Details

Here’s a basic diagram of where the connections occur in the system for this example:

+-------------------------------------------------------+
|Standalone Host                                        |
|                                                       |
|              +----------------------------+           |
|              |          vrouter           |           |
|              |                            |           |
|              +------------+ +-------------+           |
|              |192.168.24.4| |             |           |
|              |192.168.24.3| |192.168.100.1|           |
|              +---------+------+-----------+           |
|      +-------------+   |      |                       |
|      |  myserver   |   |      |                       |
|      |192.168.100.2|   |      |                       |
|      +-------+-----+   |    +-+                       |
|              |         |    |                         |
|              |         |    |                         |
|             ++---------+----+-+   +-----------------+ |
|             |     br-int      +---+   br-ctlplane   | |
|             |                 |   |  192.168.24.2   | |
|             +------+----------+   +--------+--------+ |
|                    |                       |          |
|             +------+----------+            |          |
|             |     br-tun      |            |          |
|             |                 |            |          |
|             +-----------------+       +----+---+      |
|                                       |  eth0  |      |
+---------------------------------------+----+---+------+
                                             |
                                             |
                                     +-------+-----+
                                     |   switch    |
                                     +-------------+

Example: 1 NIC, Using Compute with Provider Network

The following example is based on the single NIC configuration and assumes that the environment has at least 4 IP addresses available to it. The IPs are used for the following:

  • 1 IP address for the OpenStack services (this is the --local-ip from the deploy command)
  • 1 IP address used as a virtual router to provide the metadata route on the provider network (assigned the fixed $VROUTER_IP in this example)
  • 1 IP used for DHCP on the provider network
  • The remaining IP addresses (at least 1) are used for Floating IPs on the provider network.

The following is an example of launching a VM post-deployment, using the provider network.

  1. Create helper variables for the configuration:

    # standalone with provider networking
    export OS_CLOUD=standalone
    export GATEWAY=192.168.24.1
    export STANDALONE_HOST=192.168.24.2
    export VROUTER_IP=192.168.24.3
    export PUBLIC_NETWORK_CIDR=192.168.24.0/24
    export PUBLIC_NET_START=192.168.24.4
    export PUBLIC_NET_END=192.168.24.5
    export DNS_SERVER=1.1.1.1
    
  2. Initial Nova and Glance setup:

    # nova flavor
    openstack flavor create --ram 512 --disk 1 --vcpus 1 --public tiny
    # basic cirros image
    wget https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    openstack image create cirros --container-format bare --disk-format qcow2 --public --file cirros-0.4.0-x86_64-disk.img
    # nova keypair for ssh
    ssh-keygen
    openstack keypair create --public-key ~/.ssh/id_rsa.pub default
    
  3. Set up a simple network security group:

    # create basic security group to allow ssh/ping/dns
    openstack security group create basic
    # allow ssh
    openstack security group rule create basic --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0
    # allow ping
    openstack security group rule create --protocol icmp basic
    # allow DNS
    openstack security group rule create --protocol udp --dst-port 53:53 basic
    
  4. Create Neutron Networks:

    openstack network create --external --provider-physical-network datacentre --provider-network-type flat public
    openstack subnet create public-net \
        --subnet-range $PUBLIC_NETWORK_CIDR \
        --gateway $GATEWAY \
        --allocation-pool start=$PUBLIC_NET_START,end=$PUBLIC_NET_END \
        --network public \
        --host-route destination=169.254.169.254/32,gateway=$VROUTER_IP \
        --host-route destination=0.0.0.0/0,gateway=$GATEWAY \
        --dns-nameserver $DNS_SERVER
    
  5. Create Virtual Router:

    # vrouter needed for metadata route
    # NOTE(aschultz): In this case we're creating a fixed IP because we need
    # to create a manual route in the subnet for the metadata service
    openstack router create vrouter
    openstack port create --network public --fixed-ip subnet=public-net,ip-address=$VROUTER_IP vrouter-port
    openstack router add port vrouter vrouter-port
    
  6. Launch Instance:

    # launch instance
    openstack server create --flavor tiny --image cirros --key-name default --network public --security-group basic myserver
    
  7. Test SSH:

    # login to vm
    ssh cirros@<VM_IP>
    

Networking Details

Here’s a basic diagram of where the connections occur in the system for this example:

+----------------------------------------------------+
|Standalone Host                                     |
|                                                    |
|    +------------+   +------------+                 |
|    |  myserver  |   |  vrouter   |                 |
|    |192.168.24.4|   |192.168.24.3|                 |
|    +---------+--+   +-+----------+                 |
|              |        |                            |
|          +---+--------+----+   +-----------------+ |
|          |     br-int      +---+   br-ctlplane   | |
|          |                 |   |  192.168.24.2   | |
|          +------+----------+   +--------+--------+ |
|                 |                       |          |
|          +------+----------+            |          |
|          |     br-tun      |            |          |
|          |                 |            |          |
|          +-----------------+       +----+---+      |
|                                    |  eth0  |      |
+------------------------------------+----+---+------+
                                          |
                                          |
                                  +-------+-----+
                                  |   switch    |
                                  +-------------+

Example: 2 NIC, Using Compute with Tenant and Provider Networks

The following example is based on the dual NIC configuration and assumes that the environment has an entire IP range available to it on the provider network. The following addresses are assumed to be reserved on the provider network:

  • 1 IP address for a gateway on the provider network
  • 1 IP address for OpenStack Endpoints
  • 1 IP address used as a virtual router to provide connectivity to the tenant network (automatically assigned in this example)
  • The remaining IP addresses (at least 1) are used for Floating IPs on the provider network.

The following is an example of launching a VM post-deployment, using the private tenant network and the provider network.

  1. Create helper variables for the configuration:

    # standalone with tenant networking and provider networking
    export OS_CLOUD=standalone
    export GATEWAY=192.168.24.1
    export STANDALONE_HOST=192.168.24.2
    export PUBLIC_NETWORK_CIDR=192.168.24.0/24
    export PRIVATE_NETWORK_CIDR=192.168.100.0/24
    export PUBLIC_NET_START=192.168.24.3
    export PUBLIC_NET_END=192.168.24.254
    export DNS_SERVER=1.1.1.1
    
  2. Initial Nova and Glance setup:

    # nova flavor
    openstack flavor create --ram 512 --disk 1 --vcpus 1 --public tiny
    # basic cirros image
    wget https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    openstack image create cirros --container-format bare --disk-format qcow2 --public --file cirros-0.4.0-x86_64-disk.img
    # nova keypair for ssh
    ssh-keygen
    openstack keypair create --public-key ~/.ssh/id_rsa.pub default
    
  3. Set up a simple network security group:

    # create basic security group to allow ssh/ping/dns
    openstack security group create basic
    # allow ssh
    openstack security group rule create basic --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0
    # allow ping
    openstack security group rule create --protocol icmp basic
    # allow DNS
    openstack security group rule create --protocol udp --dst-port 53:53 basic
    
  4. Create Neutron Networks:

    openstack network create --external --provider-physical-network datacentre --provider-network-type flat public
    openstack network create --internal private
    openstack subnet create public-net \
        --subnet-range $PUBLIC_NETWORK_CIDR \
        --no-dhcp \
        --gateway $GATEWAY \
        --allocation-pool start=$PUBLIC_NET_START,end=$PUBLIC_NET_END \
        --network public
    openstack subnet create private-net \
        --subnet-range $PRIVATE_NETWORK_CIDR \
        --network private
    
  5. Create Virtual Router:

    # create router
    # NOTE(aschultz): In this case an IP will be automatically assigned
    # out of the allocation pool for the subnet.
    openstack router create vrouter
    openstack router set vrouter --external-gateway public
    openstack router add subnet vrouter private-net
    
  6. Create floating IP:

    # create floating ip
    openstack floating ip create public
    
  7. Launch Instance:

    # launch instance
    openstack server create --flavor tiny --image cirros --key-name default --network private --security-group basic myserver
    
  8. Assign Floating IP:

    openstack server add floating ip myserver <FLOATING_IP>
    
  9. Test SSH:

    # login to vm
    ssh cirros@<FLOATING_IP>
    

Networking Details

Here’s a basic diagram of where the connections occur in the system for this example:

+---------------------------------------------------------------------+
|Standalone Host                                                      |
|                                                                     |
|            +----------------------------+                           |
|            |          vrouter           |                           |
|            |                            |                           |
|            +------------+ +-------------+                           |
|            |192.168.24.4| |             |                           |
|            |192.168.24.3| |192.168.100.1|                           |
|            +---------+------+-----------+                           |
|    +-------------+   |      |                                       |
|    |  myserver   |   |      |                                       |
|    |192.168.100.2|   |      |                                       |
|    +-------+-----+   |    +-+                                       |
|            |         |    |                                         |
|           ++---------+----+-+   +-----------------+                 |
|           |     br-int      +---+   br-ctlplane   |                 |
|           |                 |   |  192.168.24.2   |                 |
|           +------+----------+   +------------+----+                 |
|                  |                           |                      |
|           +------+----------+                |                      |
|           |     br-tun      |                |                      |
|           |                 |                |                      |
|           +-----------------+                |       +----------+   |
|                                        +-----+---+   |   eth0   |   |
|                                        |  eth1   |   | 10.0.1.4 |   |
+----------------------------------------+-----+---+---+-----+----+---+
                                               |             |
                                               |             |
                                        +------+------+      |
                                        |   switch    +------+
                                        +-------------+