authorMartin Schwenke <martin@meltin.net>2009-09-23 11:10:22 +1000
committerMartin Schwenke <martin@meltin.net>2009-09-23 11:10:22 +1000
commit93ea81fce9e728e0561d398c41df2347d4cd523a (patch)
treec2fbcceeab1eefb2c1aa412fd0e78dd0b11c7d81 /README
parent55b68f7bd1f0f3bebc8851a474f713f3d4a170ce (diff)
Make README more useful.
Divide README into more logical sections, explain more things, provide
more examples, ...

Signed-off-by: Martin Schwenke <martin@meltin.net>
Diffstat (limited to 'README')
-rw-r--r--  README  |  283 ++++++++++++++++++++++++++++++++++++----------
1 file changed, 230 insertions(+), 53 deletions(-)
diff --git a/README b/README
index 2f59b36..27b0b54 100644
--- a/README
+++ b/README
@@ -5,17 +5,35 @@ Autocluster is set of scripts for building virtual clusters to test
clustered Samba. It uses Linux's libvirt and KVM virtualisation
engine.
+Autocluster is a collection of scripts, templates and configuration
+files that allow you to create a cluster of virtual nodes very
+quickly. You can create a cluster from scratch in less than 30
+minutes. Once you have a base image you can then recreate a cluster
+or create new virtual clusters in minutes.
+
+The current implementation creates virtual clusters of RHEL5 nodes.
+
+
CONTENTS
========
-* Basic Setup
+* INSTALLING AUTOCLUSTER
+
+* HOST MACHINE SETUP
+
+* CREATING A CLUSTER
+
+* BOOTING A CLUSTER
+
+* POST-CREATION SETUP
+
+* CONFIGURATION
-* Configuration
+* DEVELOPMENT HINTS
-* Development Hints
-BASIC SETUP
-===========
+INSTALLING AUTOCLUSTER
+======================
Before you start, make sure you have the latest version of
autocluster. To download autocluster do this:
@@ -24,11 +42,25 @@ autocluster. To download autocluster do this:
Or to update it, run "git pull" in the autocluster directory
-To setup a virtual cluster for SoFS with autocluster follow these steps:
+You probably want to add the directory where autocluster is installed
+to your PATH, otherwise things may quickly become tedious.
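
For example, assuming the checkout lives in $HOME/autocluster (the path
is illustrative), a minimal sketch:

```shell
# Add the autocluster checkout to the shell's search path
# (adjust the path to wherever you cloned the repository)
export PATH="$PATH:$HOME/autocluster"
```

Putting the same line in ~/.bashrc (or your shell's equivalent) makes
it permanent.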
+
+
+HOST MACHINE SETUP
+==================
+
+This section explains how to set up a host machine to run virtual
+clusters generated by autocluster.
1) Install kvm, libvirt, qemu-nbd and nbd-client.
+ Autocluster creates virtual machines that use libvirt to run under
+ KVM. This means that you will need to install both KVM and
+ libvirt on your host machine. You will also need the qemu-nbd and
+ nbd-client programs, which autocluster uses to loopback-nbd-mount
+ the disk images when configuring each node.
+
For various distros:
* RHEL/CentOS
@@ -41,6 +73,12 @@ To setup a virtual cluster for SoFS with autocluster follow these steps:
You will need to install a matching kmod-kvm package to get the
kernel module.
+ RHEL5.4 ships with KVM but it doesn't have the SCSI disk
+ emulation that autocluster uses by default. There are also
+ problems when autocluster uses virtio on RHEL5.4's KVM. You
+ should use a version from lfarkas.org instead. Hopefully this
+ will change!
+
qemu-nbd is in the kvm package.
Unless you can find an RPM for nbd-client then you need to
@@ -62,7 +100,7 @@ To setup a virtual cluster for SoFS with autocluster follow these steps:
Useful packages ship with Ubuntu 8.10 (Intrepid Ibex) and later.
- qemu-ndb is in the kvm package but is called kvm-nbd, so you
+ qemu-nbd is in the kvm package but is called kvm-nbd, so you
need to set the QEMU_NBD configuration variable.
nbd-client is in the nbd-client package.
@@ -115,43 +153,94 @@ To setup a virtual cluster for SoFS with autocluster follow these steps:
3) Setup a DNS server on your host. See host_setup/etc/bind/ for a
sample config that is suitable. It needs to redirect DNS queries
- for your SOFS virtual domain to your windows domain controller
+ for your virtual domain to your windows domain controller
4) Download a RHEL install ISO.
- 5) Create a 'config' file in the autocluster directory. See the
- "CONFIGURATION" section below for more details.
- 6) Use "./autocluster create base" to create the base install image.
+CREATING A CLUSTER
+==================
+
+A cluster comprises a single base disk image, a copy-on-write disk
+image for each node and some XML files that tell libvirt about each
+node's virtual hardware configuration. The copy-on-write disk images
+save a lot of disk space on the host machine because they each use the
+base disk image - without them the disk image for each cluster node
+would need to contain the entire RHEL install.
+
+The cluster creation process can be broken down into 2 main steps:
+
+ 1) Creating the base disk image.
+
+ 2) Creating the per-node disk images and corresponding XML files.
+
+However, before you do this you will need to create a configuration
+file. See the "CONFIGURATION" section below for more details.
+
+Here are more details on the "create cluster" process. Note that
+unless you have done something extra special you'll need to run
+all of this as root.
+
+ 1) Create the base disk image using:
+
+ ./autocluster create base
+
+ The first thing this step does is to check that it can connect to
+ the YUM server. If this fails make sure that there are no
+ firewalls blocking your access to the server.
+
The install will take about 10 to 15 minutes and you will see the
packages installing in your terminal
- Before you start create base make sure your web proxy cache is
- authenticated with the Mainz BSO (eg. connect to
- https://9.155.61.11 with a web browser)
+ The installation process uses kickstart. If your configuration
+ uses a SoFS release then the last stage of the kickstart
+ configuration will be a postinstall script that installs and
+ configures packages related to SoFS. The choice of postinstall
+ script is set using the POSTINSTALL_TEMPLATE variable, allowing you
+ to adapt the installation process for different types of clusters.
+ It makes sense to install packages that will be common to all
+ nodes into the base image. This saves time later when you're
+ setting up the cluster nodes. However, you don't have to do this
+ - you can set POSTINSTALL_TEMPLATE to "" instead - but then you
+ will lose the quick cluster creation/setup that is a major feature
+ of autocluster.
- 7) When that has finished I recommend you mark that base image
- immutable like this:
+ When that has finished you should mark that base image immutable
+ like this:
- chattr +i /virtual/SoFS-1.5-base.img
+ chattr +i /virtual/ac-base.img
That will ensure it won't change. This is a precaution as the
image will be used as a basis file for the per-node images, and if
it changes your cluster will become corrupt
-
- 8) Now run "./autocluster create cluster" specifying a cluster
+ 2) Now run "autocluster create cluster" specifying a cluster
name. For example:
- ./autocluster create cluster c1
+ autocluster create cluster c1
- That will create your cluster nodes and the TSM server node
+ This will create and install the XML node descriptions and the
+ disk images for your cluster nodes, and any other nodes you have
+ configured. Each disk image is initially created as an "empty"
+ copy-on-write image, which is linked to the base image. Those
+ images are then loopback-nbd-mounted and populated with system
+ configuration files and other potentially useful things (such as
+ scripts).
- 9) Now boot your cluster nodes like this:
+BOOTING A CLUSTER
+=================
+
+At this point the cluster has been created but isn't yet running.
+Autocluster provides a command called "vircmd", which is a thin
+wrapper around libvirt's virsh command. vircmd takes a cluster name
+instead of a node/domain name and runs the requested command on all
+nodes in the cluster.
- ./vircmd start c1
+ 1) Now boot your cluster nodes like this:
+
+ vircmd start c1
The most useful vircmd commands are:
@@ -159,60 +248,119 @@ To setup a virtual cluster for SoFS with autocluster follow these steps:
shutdown : graceful shutdown of a node
destroy : power off a node immediately
-
- 10) You can watch boot progress like this:
+ 2) You can watch boot progress like this:
tail -f /var/log/kvm/serial.c1*
- All the nodes have serial consoles, making it easier to capture
- kernel panic messages and watch the nodes via ssh
+ All the nodes have serial consoles, making it easier to capture
+ kernel panic messages and watch the nodes via ssh
+
+POST-CREATION SETUP
+===================
- 11) Now you can ssh into your nodes. You may like to look at the
+Now you have a cluster of nodes, which might have a variety of
+packages installed and configured in a common way. With the cluster
+up and running, you might need to configure specialised subsystems
+like GPFS or Samba. You can do this by hand or use the sample
+scripts/configurations that are provided.
+
+ 1) Now you can ssh into your nodes. You may like to look at the
small set of scripts in /root/scripts on the nodes. In
particular:
+ mknsd.sh : sets up the local shared disks as GPFS NSDs
+ setup_gpfs.sh : sets up GPFS, creates a filesystem, etc.
+ setup_samba.sh : sets up Samba and many other system components
setup_tsm_server.sh: run this on the TSM node to setup the TSM server
setup_tsm_client.sh: run this on the GPFS nodes to setup HSM
- mknsd.sh : this sets up the local shared disks as GPFS NSDs
- setup_gpfs.sh : this sets GPFS, creates a filesystem etc,
- byppassing the SoFS GUI. Useful for quick tests.
+ To set up a SoFS system you will normally need to run
+ setup_gpfs.sh and setup_samba.sh.
- 12) If using the SoFS GUI, then you may want to lower the memory it
+ 2) If using the SoFS GUI, then you may want to lower the memory it
uses so that it fits easily on the first node. Just edit this
file on the first node:
/opt/IBM/sofs/conf/overrides/sofs.javaopt
-
- 13) For automating the SoFS GUI, you may wish to install the iMacros
+ 3) For automating the SoFS GUI, you may wish to install the iMacros
extension to firefox, and look at some sample macros I have put
in the imacros/ directory of autocluster. They will need editing
for your environment, but they should give you some hints on how
to automate the final GUI stage of the installation of a SoFS
cluster.
+
CONFIGURATION
=============
-* See config.sample for an example of a configuration file. Note that
- all items in the sample file are commented out by default
+Basics
+======
+
+Autocluster uses configuration files containing Unix shell style
+variables. For example,
+
+ FIRSTIP=30
+
+indicates that the last octet of the first IP address in the cluster
+will be 30. If an option contains multiple words then they will be
+separated by underscores ('_'), as in:
+
+ ISO_DIR=/data/ISOs
+
+All options have an equivalent command-line option, such
+as:
+
+ --firstip=30
+
+Command-line options are lowercase. Words are separated by dashes
+('-'), as in:
+
+ --iso-dir=/data/ISOs
+
+Normally you would use a configuration file with variables so that you
+can repeat steps easily. The command-line equivalents are useful for
+trying things out without resorting to an editor. You can specify a
+configuration file to use on the autocluster command-line using the -c
+option. For example:
+
+ autocluster -c config-foo create base
+
+If you don't specify a configuration file then autocluster will
+look for a file called "config" in the current directory.
+
+You can also use environment variables to override the default values
+of configuration variables. However, both command-line options and
+configuration file entries will override environment variables.
-* Configuration options are defined in config.d/*.defconf. All
- configuration options have an equivalent command-line option.
+Potentially useful information:
* Use "autocluster --help" to list all available command-line options
- all the items listed under "configuration options:" are the
- equivalents of the settings for config files.
+ equivalents of the settings for config files. This output also
+ shows descriptions of the options.
-* Run "autocluster --dump > config.defaults" (or similar) to create a
- file containing the default values for all options that you can set.
- I don't recommend that you use this as a configuration file but it
- can be handy as a reference.
+* You can use the --dump option to check the current value of
+ configuration variables. This is most useful when used in
+ combination with grep:
- I recommend that you aim for the smallest possible configuration
- file. Perhaps start with:
+ autocluster --dump | grep ISO_DIR
+
+ In the past we recommended using --dump to create an initial
+ configuration file. Don't do this - it is a bad idea! There are a
+ lot of options and you'll create a huge file that you don't
+ understand and can't debug!
+
+* Configuration options are defined in config.d/*.defconf. You
+ shouldn't need to look in these files... but sometimes they contain
+ comments about options that are too long to fit into help strings.
+
+Keep it simple
+==============
+
+* I recommend that you aim for the smallest possible configuration file.
+ Perhaps start with:
FIRSTIP=<whatever>
@@ -233,8 +381,13 @@ CONFIGURATION
with_release "SoFS-1.5.3"
So the smallest possible config file would have something like this
- as the first line and would then set FIRSTIP. Add other options as
- you need them.
+ as the first line and would then set FIRSTIP:
+
+ with_release "SoFS-1.5.3"
+
+ FIRSTIP=<whatever>
+
+ Add other options as you need them.
The release definitions are stored in releases/*.release. The
available releases are listed in the output of "autocluster --help".
@@ -247,9 +400,11 @@ CONFIGURATION
appear before with_release so that they can be used within a release
definition - the most obvious one is the (rarely used) RHEL_ARCH
option, which is used in the default ISO setting for each release.
+ If things don't work as expected use --dump to confirm that
+ configuration variables have the values that you expect.
-* The NODES configuration variable control the types of nodes that are
- created. At the time of writing, the default value is:
+* The NODES configuration variable controls the types of nodes that
+ are created. At the time of writing, the default value is:
NODES="rhel_base:0-3"
@@ -262,7 +417,7 @@ CONFIGURATION
NODES="tsm_server:0 sofs_gui:1 sofs_front:2-4"
- which should produce a set of nodes the same as the previous SoFS
+ which should produce a set of nodes the same as the old SoFS
default. You can add extra rhel_base nodes if you need them for
test clients or some other purpose:
@@ -277,9 +432,12 @@ CONFIGURATION
However, these options can't be used to create nodes without
specifying IP offsets - except WITH_TSM_NODE, which checks to see if
IP offset 0 is vacant. Therefore, for many uses you can ignore the
- NODES variable. However, NODES is very useful for specifying
- alternative mixes of node types, especially with the addition of new
- node types.
+ NODES variable.
+
+ However, NODES is the recommended mechanism for specifying the nodes
+ that you want in your cluster. It is powerful, easy to read and
+ centralises the information in a single line of your configuration
+ file.
DEVELOPMENT HINTS
=================
@@ -296,3 +454,22 @@ This prints templates/node.xml with all appropriate substitutions
done. Some internal variables (e.g. CLUSTER, DISK, UUID, NAME) are
given fairly arbitrary values but the various MAC address strings are
set using the function set_macaddrs().
+
+The -e option is also useful when writing scripts that use
+autocluster. Given the complexities of the configuration system you
+probably don't want to parse configuration files yourself to determine
+the current settings. Instead, you can ask autocluster to tell you
+useful pieces of information. For example, say you want to script
+creating a base disk image and you want to ensure the image is
+marked immutable:
+
+ base_image=$(autocluster -c $CONFIG -e 'echo $VIRTBASE/$BASENAME.img')
+ chattr -V -i "$base_image"
+
+ if autocluster -c $CONFIG create base ; then
+ chattr -V +i "$base_image"
+ ...
+
+Note that the command that autocluster should run is enclosed in
+single quotes. This means that $VIRTBASE and $BASENAME will be expanded
+within autocluster after the configuration file has been loaded.
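
The difference is ordinary shell quoting, sketched here with
illustrative values for the two variables:

```shell
VIRTBASE=/virtual BASENAME=ac-base          # illustrative values

# Single quotes: no expansion here; the literal string is passed on,
# to be expanded later (by autocluster, after loading its config)
deferred='echo $VIRTBASE/$BASENAME.img'

# Double quotes: expanded immediately, in the calling shell
immediate="echo $VIRTBASE/$BASENAME.img"

echo "$deferred"    # echo $VIRTBASE/$BASENAME.img
echo "$immediate"   # echo /virtual/ac-base.img
```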