Diffstat (limited to 'ctdb')
-rw-r--r--  ctdb/web/configuring.html | 116
-rw-r--r--  ctdb/web/testing.html     |  60
2 files changed, 124 insertions(+), 52 deletions(-)
diff --git a/ctdb/web/configuring.html b/ctdb/web/configuring.html
index 6aa6ebeed9..c592f0e8f5 100644
--- a/ctdb/web/configuring.html
+++ b/ctdb/web/configuring.html
@@ -60,19 +60,22 @@ There is no default for this parameter.
<h3>CTDB_NODES</h3>
-This file needs to be created and should contain a list of the private IP addresses that the CTDB daemons will use in your cluster. One ip address for each node in the cluster.<br>
+This file needs to be created and should contain a list of the private
+IP addresses that the CTDB daemons will use in your cluster. One IP
+address for each node in the cluster.<p>
-This should be a private non-routable subnet which is only used for internal cluster traffic.<br>
+This should be a private non-routable subnet which is only used for
+internal cluster traffic. This file must be the same on all nodes in
+the cluster.<p>
-This file must be the same on all nodes in the cluster.<br><br>
-
-Make sure that these ip addresses are automatically started when the linux host boots and that each node can ping each other node.<br><br>
+Make sure that these IP addresses are automatically started when the
+cluster node boots and that each node can ping each other node.<p>
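+
+As a quick connectivity check (a sketch, assuming the onnode helper
+shipped with ctdb and the /etc/ctdb/nodes contents shown below), you
+can ping every private address from every node:
+<pre>
+ onnode all 'for ip in $(cat /etc/ctdb/nodes); do ping -c 1 $ip; done'
+</pre>
+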
Example 4 node cluster:
<pre>
CTDB_NODES=/etc/ctdb/nodes
</pre>
-Content of /etc/ctdb/nodes :
+Content of /etc/ctdb/nodes:
<pre>
10.1.1.1
10.1.1.2
@@ -85,26 +88,37 @@ The default for this file is /etc/ctdb/nodes.
<h3>CTDB_PUBLIC_INTERFACE</h3>
-This parameter is used to tell CTDB which network interface is used to hold the public ip addresses when CTDB is used to manage IP takeover.<br>
-This can be the same network interface as is used for the private addresses in the CTDB_NODES list but it is recommended that you use a different interface.<br><br>
+This parameter is used to tell CTDB which network interface is used to
+hold the public IP addresses when CTDB is used to manage IP
+takeover.<p>
+This can be the same network interface as is used for the private
+addresses in the CTDB_NODES list but it is recommended that you use a
+different interface.<p>
Example using eth0 for the public interface:
<pre>
CTDB_PUBLIC_INTERFACE=eth0
</pre>
-It is strongly recommended that you use CTDB with IP takeover.<br>
-When you use this parameter you must also specify the CTDB_PUBLIC_ADDRESSES parameter.<br>
+It is strongly recommended that you use CTDB with IP takeover.<p>
+When you use this parameter you must also specify the
+CTDB_PUBLIC_ADDRESSES parameter.
<h3>CTDB_PUBLIC_ADDRESSES</h3>
-In order to use IP takeover you must specify a file containing a list of public IP addresses. One IP address for each node.<br><br>
+
+In order to use IP takeover you must specify a file containing a list
+of public IP addresses. One IP address for each node.<p>
-This file contains a list of public cluster addresses.<br>
-These are the addresses that the SMBD daemons and other services will bind to and which clients will use to connect to the cluster.<br>
-This file must contain one address for each node, i.e. it must have the same number of entries as the nodes file. This file must also be the same for all nodes in the cluster.<br><br>
+This file contains a list of public cluster addresses.<p>
+
+These are the addresses that the SMBD daemons and other services will
+bind to and which clients will use to connect to the cluster. This
+file must contain one address for each node, i.e. it must have the
+same number of entries as the nodes file. This file must also be the
+same for all nodes in the cluster.<p>
Example 4 node cluster:
<pre>
@@ -118,47 +132,81 @@ Content of /etc/ctdb/public_addresses:
192.168.2.2/24
</pre>
-These are the IP addresses that you should configure in DNS for the name of the clustered samba server and are the addresses that CIFS clients will connect to.<br>
-Configure it as one DNS A record (==name) with multiple ip addresses and let round-robin DNS distribute the clients across the nodes of the cluster.<br><br>
+These are the IP addresses that you should configure in DNS for the
+name of the clustered samba server and are the addresses that CIFS
+clients will connect to.<p>
+
+Configure it as one DNS A record (i.e. one name) with multiple IP
+addresses and let round-robin DNS distribute the clients across the
+nodes of the cluster.<p>
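+
+For example (a hypothetical BIND zone fragment, using the public
+addresses from the example above and an assumed name "smbcluster"),
+the round-robin A record could look like:
+<pre>
+ smbcluster  IN A 192.168.1.1
+ smbcluster  IN A 192.168.1.2
+ smbcluster  IN A 192.168.2.1
+ smbcluster  IN A 192.168.2.2
+</pre>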
-The CTDB cluster utilizes IP takeover techniques to ensure that as long as at least one node in the cluster is available, all the public IP addresses will always be available to clients.<br>
-This means that if one physical node fails, the public address of that node will be taken over by a different node in the cluster. This provides a guarantee that all ip addresses exposed to clients will always be reachable by clients even if a node has been powered off or has crashed.<br><br>
+The CTDB cluster utilizes IP takeover techniques to ensure that as
+long as at least one node in the cluster is available, all the public
+IP addresses will always be available to clients.<p>
-CTDB nodes will only take over IP addresses that are inside the same subnet as its own public IP address.<br>
-In the example above, nodes 0 and 1 would be able to take over each others public ip and analog for nodes 2 and 3, but node 0 and 1 would NOT be able to take over the IP addresses for nodes 2 or 3 since they are on a different subnet.<br><br>
+This means that if one physical node fails, the public address of that
+node will be taken over by a different node in the cluster. This
+provides a guarantee that all IP addresses exposed to clients will
+always be reachable by clients even if a node has been powered off or
+has crashed.<p>
-Do not assign these addresses to any of the interfaces on the host. CTDB will add and remove these addresses automatically at runtime.<br>
+CTDB nodes will only take over IP addresses that are inside the same
+subnet as their own public IP address. In the example above, nodes 0
+and 1 would be able to take over each other's public IP, and likewise
+for nodes 2 and 3, but nodes 0 and 1 would NOT be able to take over
+the IP addresses of nodes 2 or 3 since they are on a different
+subnet.<p>
-This parameter is used when CTDB operated in takeover ip mode.<br><br>
+Do not assign these addresses to any of the interfaces on the
+host. CTDB will add and remove these addresses automatically at
+runtime.<p>
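+
+To see which public addresses a node is currently hosting, you can use
+the standard ip tool (shown here with the eth0 interface from the
+example above):
+<pre>
+ ip addr show dev eth0
+</pre>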
+This parameter is used when CTDB operates in IP takeover mode.<p>
-The default for this file is /etc/ctdb/public_addresses .<br>
-If you use this you <strong>must</strong> also specify the CTDB_PUBLIC_INTERFACE parameter.<br>
+The usual location for this file is /etc/ctdb/public_addresses. If you
+use this you <strong>must</strong> also specify the
+CTDB_PUBLIC_INTERFACE parameter.<p>
<h2>Event scripts</h2>
-CTDB comes with a number of application specific event scripts that are used to do service specific tasks when the cluster has been reconfigured.<br>
-These scripts are stored in /etc/ctdb/events.d/ .<br><br>
-You do not need to modify these scripts if you just want to use cluster samba or nfs but they serve as examples in case you want to add clustering support for other application servers we do not yet proivide event scripts for.<br><br>
-Please see the service scripts that installed by ctdb in /etc/ctdb/events.d for examples of how to configure other services to be aware of the HA features of CTDB.
+
+CTDB comes with a number of application specific event scripts that
+are used to do service specific tasks when the cluster has been
+reconfigured. These scripts are stored in /etc/ctdb/events.d/<p>
+
+You do not need to modify these scripts if you just want to use
+clustered Samba or NFS but they serve as examples in case you want to
+add clustering support for other application servers we do not yet
+provide event scripts for.<p>
+
+Please see the service scripts that are installed by ctdb in
+/etc/ctdb/events.d for examples of how to configure other services to
+be aware of the HA features of CTDB.
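+
+As a minimal sketch (with a hypothetical script and service name; see
+the shipped scripts for the full set of events), an event script is an
+executable shell script that receives the event name as its first
+argument:
+<pre>
+ #!/bin/sh
+ # /etc/ctdb/events.d/99.myservice -- hypothetical example
+ case "$1" in
+     startup)  service myservice start ;;
+     shutdown) service myservice stop ;;
+     *)        exit 0 ;;
+ esac
+</pre>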
<h2>TCP port to use for CTDB</h2>
-CTDB defaults to use TCP port 9001 for its traffic.<br>
-Configuring a different port to use for CTDB traffic is done by adding a ctdb entry to the /etc/services file.<br><br>
+CTDB defaults to using TCP port 9001 for its traffic.<p>
+
+Configuring a different port to use for CTDB traffic is done by adding
+a ctdb entry to the /etc/services file.<p>
Example: to change CTDB to use port 9999, add the following line to /etc/services
<pre>
ctdb 9999/tcp
</pre>
-Note: all nodes in the cluster MUST use the same port or else CTDB will not start correctly.
+Note: all nodes in the cluster MUST use the same port or else CTDB
+will not start correctly.
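+
+A quick way to confirm the setting is consistent across the cluster
+(a sketch, assuming the onnode helper) is:
+<pre>
+ onnode all grep ctdb /etc/services
+</pre>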
<h2>Name resolution</h2>
-You need to setup some method for your Windows and NFS clients to find the nodes of the cluster, and automatically balance the load between the nodes.<br><br>
-We recommend that you use public ip addresses using CTDB_PUBLIC_INTERFACE/CTDB_PUBLIC_ADDRESSES and that you setup a round-robin DNS entry for your cluster, listing all the public IP addresses that CTDB will be managing as a single DNS A record.<br><br>
+You need to set up some method for your Windows and NFS clients to find
+the nodes of the cluster, and automatically balance the load between
+the nodes.<p>
-You may also wish to setup a static WINS server entry listing all of your cluster nodes IP addresses.
+We recommend that you use public IP addresses using
+CTDB_PUBLIC_INTERFACE/CTDB_PUBLIC_ADDRESSES and that you set up a
+round-robin DNS entry for your cluster, listing all the public IP
+addresses that CTDB will be managing as a single DNS A record.<p>
+You may also wish to set up a static WINS server entry listing all of
+your cluster nodes' IP addresses.
<!--#include virtual="footer.html" -->
diff --git a/ctdb/web/testing.html b/ctdb/web/testing.html
index 8c7eb6ad55..a766a6f9b5 100644
--- a/ctdb/web/testing.html
+++ b/ctdb/web/testing.html
@@ -7,26 +7,33 @@
<H2 align="center">Starting and testing CTDB</h2>
-The CTDB log is in /var/log/log.ctdb so look in this file if something diud not start correctly.<br><br>
+The CTDB log is in /var/log/log.ctdb so look in this file if something
+did not start correctly.<p>
-Log in to all of the nodes in the cluster and start the ctdb service using
+You can ensure that ctdb is running on all nodes using
<pre>
- service ctdb start
+ onnode all service ctdb start
</pre>
Verify that the CTDB daemon started properly. There should normally be at least 2 processes started for CTDB, one for the main daemon and one for the recovery daemon.
<pre>
- pidof ctdbd
+ onnode all pidof ctdbd
</pre>
-Once all CTDB nodes have started, verify that they are correctly talking to eachothers.<br>
-There should be one TCP connection from the private ip address on each node to TCP port 9001 on each of the other nodes in the cluster.
+Once all CTDB nodes have started, verify that they are correctly
+talking to each other.<p>
+
+There should be one TCP connection from the private IP address on each
+node to TCP port 9001 on each of the other nodes in the cluster.
<pre>
- netstat -a -n | grep 9001
+ onnode all netstat -tn | grep 9001
</pre>
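+
+You should see ESTABLISHED connections between the private addresses,
+for example (abbreviated, hypothetical output for the 4 node example
+cluster):
+<pre>
+ tcp        0      0 10.1.1.2:38371      10.1.1.1:9001       ESTABLISHED
+</pre>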
<h2>Automatically restarting CTDB</h2>
-If you wish to cope with software faults in ctdb, or want ctdb to automatically restart when an administration kills it, then you may wish to add a cron entry for root like this:
+
+If you wish to cope with software faults in ctdb, or want ctdb to
+automatically restart when an administrator kills it, then you may
+wish to add a cron entry for root like this:
<pre>
* * * * * /etc/init.d/ctdb cron > /dev/null 2>&1
@@ -39,7 +46,9 @@ Once your cluster is up and running, you may wish to know how to test that it is
<h3>The ctdb tool</h3>
-The ctdb package comes with a utility called ctdb that can be used to view the behaviour of the ctdb cluster.<br>
+The ctdb package comes with a utility called ctdb that can be used to
+view the behaviour of the ctdb cluster.<p>
+
If you run it with no options it will provide some terse usage information. The most commonly used commands are:
<pre>
ctdb status
@@ -48,7 +57,9 @@ If you run it with no options it will provide some terse usage information. The
</pre>
<h3>ctdb status</h3>
-The status command provides basic information about the cluster and the status of the nodes. when you run it you will get some output like :
+
+The status command provides basic information about the cluster and
+the status of the nodes. When you run it you will get some output
+like:
+
<pre>
<strong>Number of nodes:4
vnn:0 10.1.1.1 OK (THIS NODE)
@@ -65,13 +76,24 @@ hash:3 lmaster:3
Recovery master:0
</pre>
-The important parts are in bold. This tells us that all 4 nodes are in a healthy state.<br>
-It also tells us that recovery mode is normal, which means that the cluster has finished a recovery and is running in a normal fully operational state.<br>
-Recovery state will briefly change to "RECOVERY" when there ahs been a node failure or something is wrong with the cluster.<br>
-If the cluster remains in RECOVERY state for very long (many seconds) there might be something wrong with the configuration. See /var/log/log.ctdb
+The important parts are in bold. This tells us that all 4 nodes are in
+a healthy state.<p>
+
+It also tells us that recovery mode is normal, which means that the
+cluster has finished a recovery and is running in a normal fully
+operational state.<p>
+
+Recovery state will briefly change to "RECOVERY" when there has been a
+node failure or something is wrong with the cluster.<p>
+
+If the cluster remains in RECOVERY state for a long time (many seconds)
+there might be something wrong with the configuration. See
+/var/log/log.ctdb.
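+
+For example (a sketch, assuming the onnode helper), you can inspect
+the tail of the log on every node with:
+<pre>
+ onnode all tail -n 20 /var/log/log.ctdb
+</pre>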
<h3>ctdb ip</h3>
+
This command prints the current status of the public IP addresses and which physical node is currently serving that IP.
+
<pre>
Number of nodes:4
192.168.1.1 0
@@ -83,10 +105,12 @@ Number of nodes:4
<h3>ctdb ping</h3>
This command tries to "ping" each of the CTDB daemons in the cluster.
<pre>
-response from 0 time=0.000050 sec (13 clients)
-response from 1 time=0.000154 sec (27 clients)
-response from 2 time=0.000114 sec (17 clients)
-response from 3 time=0.000115 sec (59 clients)
+ ctdb ping -n all
+
+ response from 0 time=0.000050 sec (13 clients)
+ response from 1 time=0.000154 sec (27 clients)
+ response from 2 time=0.000114 sec (17 clients)
+ response from 3 time=0.000115 sec (59 clients)
</pre>
<!--#include virtual="footer.html" -->