author     Martin Schwenke <martin@meltin.net>  2012-05-22 14:17:43 +1000
committer  Martin Schwenke <martin@meltin.net>  2012-05-22 14:29:14 +1000
commit     e056fd365d31c5bcbea4348ea3dcf57ea5fd34e9 (patch)
tree       52c0a029a334f8afc86f6bf3a16b57eaf008309c
parent     864ba6508fe561949e04065a962eaceb59a4e93e (diff)
Update README
Signed-off-by: Martin Schwenke <martin@meltin.net>
-rw-r--r--  README  |  151
1 file changed, 44 insertions(+), 107 deletions(-)
diff --git a/README b/README
index d42ce5e..1a59fd2 100644
--- a/README
+++ b/README
@@ -11,7 +11,7 @@ quickly. You can create a cluster from scratch in less than 30
minutes. Once you have a base image you can then recreate a cluster
or create new virtual clusters in minutes.
-The current implementation creates virtual clusters of RHEL5 nodes.
+The current implementation creates virtual clusters of RHEL5/6 nodes.
CONTENTS
@@ -68,11 +68,9 @@ clusters generated by autocluster.
* RHEL/CentOS
Autocluster should work with the standard RHEL6 qemu-kvm and
- libvirt packages. However, RHEL's KVM doesn't support the SCSI
- emulation, so you will need these settings:
+ libvirt packages. However, you'll need to tell autocluster
+ where the KVM executable is:
- SYSTEM_DISK_TYPE=ide
- SHARED_DISK_TYPE=virtio
KVM=/usr/libexec/qemu-kvm
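For instance, the corresponding lines in an autocluster configuration file would be just the assignment shown above (configuration files appear to be shell fragments, so `#` comments work):

```shell
# Host-specific setting for RHEL6/CentOS6 in an autocluster config
# file.  RHEL ships KVM as qemu-kvm under /usr/libexec rather than on
# $PATH, so point autocluster at the executable explicitly.
KVM=/usr/libexec/qemu-kvm
```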
For RHEL5/CentOS5, useful packages for both kvm and libvirt used
@@ -92,11 +90,10 @@ clusters generated by autocluster.
lots of disk space and you like complexity then see the sections
below on "iSCSI shared disks" and "Raw IDE system disks".
- * Fedora Core
+ * Fedora
 Useful packages ship with Fedora 10 (Cambridge) and later.
- Some of the above notes on RHEL might apply to Fedora Core's
- KVM.
+ Some of the above notes on RHEL might apply to Fedora's KVM.
* Ubuntu
@@ -118,23 +115,19 @@ clusters generated by autocluster.
b) Install guestfish or qemu-nbd and nbd-client.
- Recent Linux distributions, including RHEL6.0, contain guestfish.
- Guestfish (see http://libguestfs.org/ - there are binary packages
- for several distros here) is a CLI for manipulating KVM/QEMU disk
- images. Autocluster supports guestfish, so if guestfish is
- available then you should use it. It should be more reliable than
- NBD.
+ Recent Linux distributions, including RHEL since 6.0, contain
+ guestfish. Guestfish (see http://libguestfs.org/, which has binary
+ packages for several distros) is a CLI for
+ manipulating KVM/QEMU disk images. Autocluster supports
+ guestfish, so if guestfish is available then you should use it.
+ It should be more reliable than NBD.
- Guestfish isn't yet the default autocluster method for disk image
- manipulation. To use it put this in your configuration file:
+ Autocluster attempts to use the best available method (guestmount
+ -> guestfish -> loopback) for accessing disk images.  If it chooses
+ a suboptimal method, you can force a different one:
SYSTEM_DISK_ACCESS_METHOD=guestfish
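Putting that together, a configuration file that pins the access method might contain the following sketch; the three method names are the ones listed above:

```shell
# Force a particular disk image access method instead of letting
# autocluster probe for the best available one.  Per the text above,
# the methods tried are guestmount, guestfish and loopback.
SYSTEM_DISK_ACCESS_METHOD=guestfish
```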
- Note that autocluster's guestfish support is new and was written
- to work around some bugs in RHEL6.0's version of guestfish... so
- might not work well with newer, non-buggy versions. If so, please
- report bugs!
-
If you can't use guestfish then you'll have to use NBD. For this
you will need the qemu-nbd and nbd-client programs, which
autocluster uses to loopback-nbd-mount the disk images when
@@ -260,12 +253,10 @@ all of this as root.
The install will take about 10 to 15 minutes and you will see the
 packages installing in your terminal.
- The installation process uses kickstart. If your configuration
- uses a SoFS release then the last stage of the kickstart
- configuration will be a postinstall script that installs and
- configures packages related to SoFS. The choice of postinstall
- script is set using the POSTINSTALL_TEMPLATE variable, allowing you
- to adapt the installation process for different types of clusters.
+ The installation process uses kickstart. The choice of
+ postinstall script is set using the POSTINSTALL_TEMPLATE variable.
+ An example is provided in
+ base/all/root/scripts/gpfs-nas-postinstall.sh.
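As a sketch, selecting that example postinstall script in a configuration file might look like the following; the assumption that POSTINSTALL_TEMPLATE takes a path like this is mine, not stated above:

```shell
# Choose the postinstall script that runs at the end of kickstart.
# The path below is the example script mentioned above; substitute
# your own script to adapt the install for other cluster types.
# (Assumption: the variable takes a path within the autocluster tree.)
POSTINSTALL_TEMPLATE=base/all/root/scripts/gpfs-nas-postinstall.sh
```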
It makes sense to install packages that will be common to all
 nodes into the base image. This saves time later when you're
@@ -331,33 +322,28 @@ Now you have a cluster of nodes, which might have a variety of
packages installed and configured in a common way. Now that the
cluster is up and running you might need to configure specialised
subsystems like GPFS or Samba. You can do this by hand or use the
-sample scripts/configurations that are provided
+sample scripts/configurations that are provided.
- 1) Now you can ssh into your nodes. You may like to look at the
- small set of scripts in /root/scripts on the nodes for
- some scripts. In particular:
+Now you can ssh into your nodes.  You may like to look at the small
+set of scripts in /root/scripts on the nodes.  In particular:
- mknsd.sh : sets up the local shared disks as GPFS NSDs
- setup_gpfs.sh : sets up GPFS, creates a filesystem etc
- setup_samba.sh : sets up Samba and many other system compoents
- setup_tsm_server.sh: run this on the TSM node to setup the TSM server
- setup_tsm_client.sh: run this on the GPFS nodes to setup HSM
+  mknsd.sh           : sets up the local shared disks as GPFS NSDs
+  setup_gpfs.sh      : sets up GPFS, creates a filesystem etc.
+  setup_cluster.sh   : sets up clustered Samba and other NAS services
+  setup_tsm_server.sh: run this on the TSM node to set up the TSM server
+  setup_tsm_client.sh: run this on the GPFS nodes to set up HSM
+  setup_ad_server.sh : run this on a node to set up a Samba4 AD
- To setup a SoFS system you will normally need to run
- setup_gpfs.sh and setup_samba.sh.
+To set up a clustered NAS system you will normally need to run
+setup_gpfs.sh and setup_cluster.sh on one of the nodes.
- 2) If using the SoFS GUI, then you may want to lower the memory it
- uses so that it fits easily on the first node. Just edit this
- file on the first node:
- /opt/IBM/sofs/conf/overrides/sofs.javaopt
+AUTOMATED CLUSTER CREATION
+==========================
- 3) For automating the SoFS GUI, you may wish to install the iMacros
- extension to firefox, and look at some sample macros I have put
- in the imacros/ directory of autocluster. They will need editing
- for your environment, but they should give you some hints on how
- to automate the final GUI stage of the installation of a SoFS
- cluster.
+These setup steps can be automated.  An example script for doing this
+can be found in examples/create_cluster.sh.
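A hand-rolled version of that automation could look roughly like this sketch; the node name and passwordless root ssh are assumptions, and the shipped examples/create_cluster.sh is the authoritative version:

```shell
# Sketch of automating the final setup: run the setup scripts from
# /root/scripts on one of the clustered nodes over ssh.
# Assumptions: root ssh access to the node was set up during install.

# run_setup NODE: run setup_gpfs.sh then setup_cluster.sh on NODE.
run_setup() {
    node="$1"
    ssh "root@$node" /root/scripts/setup_gpfs.sh &&
        ssh "root@$node" /root/scripts/setup_cluster.sh
}

# Example (node name is hypothetical): run_setup foon1
```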
CONFIGURATION
@@ -434,66 +420,16 @@ Keep it simple
and move on from there.
-* Use the --with-release option on the command-line or the
- with_release function in a configuration file to get default values
- for building virtual clusters for releases of particular "products".
- Currently there are only release definitions for SoFS.
-
- For example, you can setup default values for SoFS-1.5.3 by running:
-
- autocluster --with-release=SoFS-1.5.3 ...
-
- Equivalently you can use the following syntax in a configuration
- file:
-
- with_release "SoFS-1.5.3"
-
- So the smallest possible config file would have something like this
- as the first line and would then set FIRSTIP:
-
- with_release "SoFS-1.5.3"
-
- FIRSTIP=<whatever>
-
- Add other options as you need them.
-
- The release definitions are stored in releases/*.release. The
- available releases are listed in the output of "autocluster --help".
-
- NOTE: Occasionally you will need to consider the position of
- with_release in your configuration. If you want to override options
- handled by a release definition then you will obviously need to set
- them later in your configuration. This will be the case for most
- options you will want to set. However, some options will need to
- appear before with_release so that they can be used within a release
- definition - the most obvious one is the (rarely used) RHEL_ARCH
- option, which is used in the default ISO setting for each release.
- If things don't work as expected use --dump to confirm that
- configuration variables have the values that you expect.
-
* The NODES configuration variable controls the types of nodes that
are created. At the time of writing, the default value is:
- NODES="rhel_base:0-3"
-
- This means that you get 4 nodes, at IP offsets 0, 1, 2, & 3 from
- FIRSTIP, all part of the CTDB cluster. That is, with standard
- settings and FIRSTIP=35, 4 nodes will be created in the IP range
- 10.0.0.35 to 10.0.0.38.
+ NODES="sofs_front:0-3 rhel_base:4"
- The SoFS releases use a default of:
-
- NODES="tsm_server:0 sofs_gui:1 sofs_front:2-4"
-
- which should produce a set of nodes the same as the old SoFS
- default. You can add extra rhel_base nodes if you need them for
- test clients or some other purpose:
-
- NODES="$NODES rhel_base:7,8"
-
- This produces an additional 2 base RHEL nodes at IP offsets 7 & 8
- from FIRSTIP. Since sofs_* nodes are present, these base nodes will
- not be part of the CTDB cluster - they're just extra.
+ This means that you get 4 clustered NAS nodes, at IP offsets 0, 1,
+ 2, & 3 from FIRSTIP, all part of the CTDB cluster. You also get an
+ additional utility node at IP offset 4 that can be used, for
+ example, as a test client. Since sofs_* nodes are present, the base
+ node will not be part of the CTDB cluster - it is just extra.
For many standard use cases the nodes specified by NODES can be
modified by setting NUMNODES, WITH_SOFS_GUI and WITH_TSM_NODE.
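The offset arithmetic above can be sketched in shell; the 10.0.0 network prefix and FIRSTIP=35 come from the example earlier in this README and are not fixed values:

```shell
# With NODES="sofs_front:0-3 rhel_base:4" and FIRSTIP=35, each node's
# final octet is FIRSTIP plus its offset, giving 10.0.0.35 .. 10.0.0.39.
FIRSTIP=35
for offset in 0 1 2 3 4; do
    echo "10.0.0.$((FIRSTIP + offset))"
done
```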
@@ -543,8 +479,9 @@ cluster will need to have a different setting for ISCSI_TID.
Raw IDE system disks
====================
-The RHEL5 version of KVM does not support the SCSI block device
-emulation. Therefore, you can use virtio or ide system disks.
+RHEL versions of KVM do not support SCSI block device emulation, so
+autocluster now defaults to using an IDE system disk instead of a
+SCSI one.  You can use either virtio or IDE system disks.
However, writeback caching, qcow2 and virtio are incompatible and
result in I/O corruption. So, you can use either virtio system disks
without any caching, accepting reduced performance, or you can use IDE
@@ -586,7 +523,7 @@ This is useful for testing and debugging.
One good use of this option is to test template substitution using the
function substitute_vars(). For example:
- ./autocluster --with-release=SoFS-1.5.3 -e 'CLUSTER=foo; DISK=foo.qcow2; UUID=abcdef; NAME=foon1; set_macaddrs; substitute_vars templates/node.xml'
+ ./autocluster -c example.autocluster -e 'CLUSTER=foo; DISK=foo.qcow2; UUID=abcdef; NAME=foon1; set_macaddrs; substitute_vars templates/node.xml'
This prints templates/node.xml with all appropriate substitutions
done. Some internal variables (e.g. CLUSTER, DISK, UUID, NAME) are