From 1bf776bb52d66b9243b6948ddb5930351663ea4b Mon Sep 17 00:00:00 2001
From: Andrew Tridgell
Date: Tue, 12 Jun 2007 12:27:45 +1000
Subject: minor doc updates

(This used to be ctdb commit 20c824dbce877575c423cb08943c5b9ff6d0c4a1)
---
 ctdb/web/testing.html | 60 +++++++++++++++++++++++++++++++++++----------------
 1 file changed, 42 insertions(+), 18 deletions(-)

diff --git a/ctdb/web/testing.html b/ctdb/web/testing.html
index 8c7eb6ad55..a766a6f9b5 100644
--- a/ctdb/web/testing.html
+++ b/ctdb/web/testing.html
@@ -7,26 +7,33 @@

Starting and testing CTDB

-The CTDB log is in /var/log/log.ctdb so look in this file if something diud not start correctly.

+The CTDB log is in /var/log/log.ctdb so look in this file if something
+did not start correctly.

-Log in to all of the nodes in the cluster and start the ctdb service using
+You can ensure that ctdb is running on all nodes using

-  service ctdb start
+  onnode all service ctdb start
 
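+
+The same onnode pattern works for any shell command you want to run
+cluster-wide. For example, to check the service status on every node
+(a sketch; the exact output depends on your distribution):
+
+  onnode all service ctdb status
+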
Verify that the CTDB daemon started properly. There should normally be at least 2 processes started for CTDB, one for the main daemon and one for the recovery daemon.
-  pidof ctdbd
+  onnode all pidof ctdbd
 
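+
+Each node should normally report two process ids, for example
+(illustrative output only; the actual pids will differ):
+
+  3425 3426
+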
-Once all CTDB nodes have started, verify that they are correctly talking to eachothers.
-There should be one TCP connection from the private ip address on each node to TCP port 9001 on each of the other nodes in the cluster.
+Once all CTDB nodes have started, verify that they are correctly
+talking to each other.

+
+There should be one TCP connection from the private ip address on each
+node to TCP port 9001 on each of the other nodes in the cluster.

-  netstat -a -n | grep 9001
+  onnode all netstat -tn | grep 9001
 
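+
+On a healthy cluster each node should show established connections to
+all the other nodes, along these lines (the addresses and ports shown
+here are only an example):
+
+  tcp        0      0 10.1.1.1:38640      10.1.1.2:9001       ESTABLISHED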

Automatically restarting CTDB

-If you wish to cope with software faults in ctdb, or want ctdb to automatically restart when an administration kills it, then you may wish to add a cron entry for root like this:
+
+If you wish to cope with software faults in ctdb, or want ctdb to
+automatically restart when an administrator kills it, then you may
+wish to add a cron entry for root like this:
  * * * * * /etc/init.d/ctdb cron > /dev/null 2>&1
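+
+One way to install this entry for root is the usual crontab idiom (a
+sketch; verify the result with "crontab -l" afterwards):
+
+  (crontab -l 2>/dev/null; echo "* * * * * /etc/init.d/ctdb cron > /dev/null 2>&1") | crontab -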
@@ -39,7 +46,9 @@ Once your cluster is up and running, you may wish to know how to test that it is
 
 

The ctdb tool

-The ctdb package comes with a utility called ctdb that can be used to view the behaviour of the ctdb cluster.
+The ctdb package comes with a utility called ctdb that can be used to
+view the behaviour of the ctdb cluster.

+
 If you run it with no options it will provide some terse usage information. The most commonly used commands are:

  ctdb status
@@ -48,7 +57,9 @@ If you run it with no options it will provide some terse usage information. The
 

ctdb status

-The status command provides basic information about the cluster and the status of the nodes. when you run it you will get some output like :
+
+The status command provides basic information about the cluster and the status of the nodes. When you run it you will get some output like:
+
 Number of nodes:4
 vnn:0 10.1.1.1       OK (THIS NODE)
@@ -65,13 +76,24 @@ hash:3 lmaster:3
 Recovery master:0
 
-The important parts are in bold. This tells us that all 4 nodes are in a healthy state.
-It also tells us that recovery mode is normal, which means that the cluster has finished a recovery and is running in a normal fully operational state.
-Recovery state will briefly change to "RECOVERY" when there ahs been a node failure or something is wrong with the cluster.
-If the cluster remains in RECOVERY state for very long (many seconds) there might be something wrong with the configuration. See /var/log/log.ctdb
+The important parts are in bold. This tells us that all 4 nodes are in
+a healthy state.

+
+It also tells us that recovery mode is normal, which means that the
+cluster has finished a recovery and is running in a normal fully
+operational state.

+
+Recovery state will briefly change to "RECOVERY" when there has been a
+node failure or something is wrong with the cluster.

+
+If the cluster remains in RECOVERY state for very long (many seconds)
+there might be something wrong with the configuration. See
+/var/log/log.ctdb.
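+
+A quick way to inspect recent log activity across the whole cluster in
+that case is something like:
+
+  onnode all tail -n 20 /var/log/log.ctdb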

ctdb ip

+
 This command prints the current status of the public ip addresses and which physical node is currently serving that ip.
+
 Number of nodes:4
 192.168.1.1         0
@@ -83,10 +105,12 @@ Number of nodes:4
 

ctdb ping

this command tries to "ping" each of the CTDB daemons in the cluster.
-response from 0 time=0.000050 sec  (13 clients)
-response from 1 time=0.000154 sec  (27 clients)
-response from 2 time=0.000114 sec  (17 clients)
-response from 3 time=0.000115 sec  (59 clients)
+  ctdb ping -n all
+
+  response from 0 time=0.000050 sec  (13 clients)
+  response from 1 time=0.000154 sec  (27 clients)
+  response from 2 time=0.000114 sec  (17 clients)
+  response from 3 time=0.000115 sec  (59 clients)
 
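+
+You can also ping a single node by giving its node number instead of
+"all" (illustrative output; times and client counts will differ):
+
+  ctdb ping -n 2
+  response from 2 time=0.000114 sec  (17 clients)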