author     Martin Schwenke <martin@meltin.net>  2014-06-27 11:46:18 +1000
committer  Martin Schwenke <martin@meltin.net>  2014-07-02 14:17:17 +1000
commit     346ec14d2a7bf014c81280960d86d90583c34ecd (patch)
tree       7ad8f902853ed01407f7a6d997ba1a51051d5980 /README
parent     ef72de0899d54d341ad8b9cb81e27ea27aa29bf8 (diff)
Update README to reflect latest development and testing
Signed-off-by: Martin Schwenke <martin@meltin.net>
Diffstat (limited to 'README')
 README | 135
 1 file changed, 72 insertions(+), 63 deletions(-)
diff --git a/README b/README
index 142d9a0..926598c 100644
--- a/README
+++ b/README
@@ -11,7 +11,9 @@ quickly. You can create a cluster from scratch in less than 30
minutes. Once you have a base image you can then recreate a cluster
or create new virtual clusters in minutes.
-The current implementation creates virtual clusters of RHEL5/6 nodes.
+Autocluster has recently been tested to create virtual clusters of
+RHEL 6/7 nodes. Older versions were tested with RHEL 5 and some
+versions of CentOS.
CONTENTS
@@ -67,11 +69,10 @@ clusters generated by autocluster.
* RHEL/CentOS
- Autocluster should work with the standard RHEL6 qemu-kvm and
- libvirt packages. However, you'll need to tell autocluster
- where the KVM executable is:
-
- KVM=/usr/libexec/qemu-kvm
+ Autocluster should work with the standard RHEL qemu-kvm and
+ libvirt packages. It will try to find the qemu-kvm binary. If
+ you've done something unusual then you'll need to set the KVM
+ configuration variable.
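+
+ For example, in your autocluster configuration file (the path below
+ is the old RHEL6 location mentioned in earlier versions of this
+ README and may not match your system):
+
+ KVM=/usr/libexec/qemu-kvm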
For RHEL5/CentOS5, useful packages for both kvm and libvirt used
to be found here:
@@ -87,8 +88,9 @@ clusters generated by autocluster.
be able to use the settings recommended above for RHEL6.
If you're still running RHEL5.4, you have lots of time, you have
- lots of disk space and you like complexity then see the sections
- below on "iSCSI shared disks" and "Raw IDE system disks".
+ lots of disk space, and you like complexity, then see the
+ sections below on "iSCSI shared disks" and "Raw IDE system
+ disks". :-)
* Fedora
@@ -115,6 +117,9 @@ clusters generated by autocluster.
b) Install guestfish or qemu-nbd and nbd-client.
+ Autocluster needs a method of updating files in the disk image for
+ each node.
+
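+ On RHEL-like systems, guestfish and guestmount can usually be
+ installed with something like the following (the package name is an
+ assumption, so check your distribution):
+
+ yum install libguestfs-tools
+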
Recent Linux distributions, including RHEL since 6.0, contain
guestfish. Guestfish (see http://libguestfs.org/ - there are
binary packages for several distros here) is a CLI for
@@ -124,7 +129,8 @@ clusters generated by autocluster.
Autocluster attempts to use the best available method (guestmount
-> guestfish -> loopback) for accessing disk images. If it chooses
- a suboptimal method, you can force the method:
+ a suboptimal method (e.g. nodes created with guestmount sometimes
+ won't boot), you can force the method:
SYSTEM_DISK_ACCESS_METHOD=guestfish
@@ -178,7 +184,9 @@ clusters generated by autocluster.
If you're using a network setup different to the default then pass
your autocluster configuration filename, which should set the
- NETWORKS variable.
+ NETWORKS variable. If you're using a variety of networks for
+ different clusters then you can probably run this script multiple
+ times.
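+
+ For example, a configuration file might contain something like this
+ (the value is purely illustrative; see the CONFIGURATION section
+ below for the real syntax and defaults):
+
+ NETWORKS="10.0.1.0/24"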
You might also need to set:
@@ -186,7 +194,10 @@ clusters generated by autocluster.
in your environment so that virsh does KVM/QEMU things by default.
- 2) If your install server is far away then you may need a caching web
+ 2) Configure a local web/install server to provide required YUM
+ repositories
+
+ If your install server is far away then you may need a caching web
proxy on your local network.
If you don't have one, then you can install a squid proxy on your
@@ -213,9 +224,9 @@ clusters generated by autocluster.
3) Setup a DNS server on your host. See host_setup/etc/bind/ for a
sample config that is suitable. It needs to redirect DNS queries
- for your virtual domain to your windows domain controller
+ for your virtual domain to your windows domain controller.
- 4) Download a RHEL install ISO.
+ 4) Download a RHEL (or CentOS) install ISO.
CREATING A CLUSTER
@@ -228,11 +239,18 @@ save a lot of disk space on the host machine because they each use the
base disk image - without them the disk image for each cluster node
would need to contain the entire RHEL install.
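+
+If you want to confirm that a node's disk image is backed by the base
+image, qemu-img can show the backing file (the path below is
+illustrative):
+
+ qemu-img info /path/to/node-disk.qcow2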
-The cluster creation process can be broken down into 2 mains steps:
+The cluster creation process can be broken down into several main
+steps:
+
+ 1) Create a base disk image.
+
+ 2) Create per-node disk images and corresponding XML files.
- 1) Creating the base disk image.
+ 3) Update /etc/hosts to include cluster nodes.
- 2) Create the per-node disk images and corresponding XML files.
+ 4) Boot virtual machines for the nodes.
+
+ 5) Post-boot configuration.
However, before you do this you will need to create a configuration
file. See the "CONFIGURATION" section below for more details.
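+
+As a rough sketch, a configuration file is a set of variable
+assignments like the ones shown elsewhere in this README, for example
+(these particular settings are illustrative and optional):
+
+ POSTINSTALL_TEMPLATE=""
+ SYSTEM_DISK_ACCESS_METHOD=guestfish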
@@ -243,7 +261,7 @@ all of this as root.
1) Create the base disk image using:
- ./autocluster create base
+ ./autocluster base create
The first thing this step does is to check that it can connect to
the YUM server. If this fails make sure that there are no
@@ -254,15 +272,14 @@ all of this as root.
The installation process uses kickstart. The choice of
postinstall script is set using the POSTINSTALL_TEMPLATE variable.
- An example is provided in
- base/all/root/scripts/gpfs-nas-postinstall.sh.
-
- It makes sense to install packages that will be common to all
+ This can be used to install packages that will be common to all
nodes into the base image. This saves time later when you're
- setting up the cluster nodes. However, you don't have to do this
- - you can set POSTINSTALL_TEMPLATE to "" instead - but then you
- will lose the quick cluster creation/setup that is a major feature
- of autocluster.
+ setting up the cluster nodes. However, current usage (given that
+ we test many versions of CTDB) is to default POSTINSTALL_TEMPLATE
+ to "" and install packages post-boot. This seems to be a
+ reasonable compromise between flexibility (the base image can be,
+ for example, a pristine RHEL7.0-base.qcow2, with CTDB/Samba
+ packages selected after base creation) and speed of cluster
+ creation.
When that has finished you should mark that base image immutable
like this:
@@ -273,10 +290,11 @@ all of this as root.
image will be used as a basis file for the per-node images, and if
it changes your cluster will become corrupt
- 2) Now run "autocluster create cluster" specifying a cluster
- name. For example:
+ 2-5)
+ Now run "autocluster cluster build", specifying a configuration
+ file. For example:
- autocluster create cluster c1
+ autocluster -c m1.autocluster cluster build
This will create and install the XML node descriptions and the
disk images for your cluster nodes, and any other nodes you have
@@ -285,28 +303,34 @@ all of this as root.
images are then attached to using guestfish or
loopback-nbd-mounted, and populated with system configuration
files and other potentially useful things (such as scripts).
+ /etc/hosts is updated, the cluster is booted and post-boot
+ setup is done.
+ Instead of doing all of steps 2-5 with one command, you can do:
-BOOTING A CLUSTER
-=================
+ 2) autocluster -c m1.autocluster cluster create
+
+ 3) autocluster -c m1.autocluster cluster update_hosts
+
+ 4) autocluster -c m1.autocluster cluster boot
+
+ 5) autocluster -c m1.autocluster cluster configure
+
+BOOTING/DESTROYING A CLUSTER
+============================
-At this point the cluster has been created but isn't yet running.
Autocluster provides a command called "vircmd", which is a thin
wrapper around libvirt's virsh command. vircmd takes a cluster name
instead of a node/domain name and runs the requested command on all
nodes in the cluster.
- 1) Now boot your cluster nodes like this:
-
- vircmd start c1
-
The most useful vircmd commands are:
start : boot a node
shutdown : graceful shutdown of a node
destroy : power off a node immediately
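+
+ For example, to boot all of the nodes in a cluster called "c1":
+
+ vircmd start c1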
- 2) You can watch boot progress like this:
+ You can watch boot progress like this:
tail -f /var/log/kvm/serial.c1*
@@ -314,36 +338,21 @@ nodes in the cluster.
kernel panic messages and watch the nodes via ssh
-POST-CREATION SETUP
-===================
-
-Now you have a cluster of nodes, which might have a variety of
-packages installed and configured in a common way. Now that the
-cluster is up and running you might need to configure specialised
-subsystems like GPFS or Samba. You can do this by hand or use the
-sample scripts/configurations that are provided.
-
-Now you can ssh into your nodes. You may like to look at the small set
-of scripts in /root/scripts on the nodes for some scripts. In
-particular:
-
- mknsd.sh : sets up the local shared disks as GPFS NSDs
- setup_gpfs.sh : sets up GPFS, creates a filesystem etc
- setup_cluster.sh : sets up clustered Samba and other NAS services
- setup_tsm_server.sh: run this on the TSM node to setup the TSM server
- setup_tsm_client.sh: run this on the GPFS nodes to setup HSM
- setup_ad_server.sh : run this on a node to setup a Samba4 AD
+POST-BOOT SETUP
+===============
+Autocluster copies some scripts to cluster nodes to enable post-boot
+configuration. These are used to configure specialised subsystems
+like GPFS or Samba and are installed in /root/scripts/ on each node.
+The two main entry points are install_packages.sh and setup_cluster.sh.
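+
+For example, to list the available scripts on a node (the node name
+below is only an illustration):
+
+ ssh root@c1n1 ls /root/scripts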
To setup a clustered NAS system you will normally need to run
-setup_gpfs.sh and setup_cluster.sh on one of the nodes.
-
-
-AUTOMATED CLUSTER CREATION
-==========================
-
-The last 2 steps can be automated. An example script for doing this
-can be found in examples/create_cluster.sh.
+setup_gpfs.sh and setup_cluster.sh on one of the nodes. If you want
+to run these manually, see autocluster's cluster_configure() function
+for example usage.
+There are also some older scripts that haven't been used for a while
+and have probably bit-rotted, such as setup_tsm_client.sh and
+setup_tsm_server.sh. However, they are still provided as examples.
CONFIGURATION
=============