author     Andrew Tridgell <tridge@samba.org>  2007-09-14 15:23:23 +1000
committer  Andrew Tridgell <tridge@samba.org>  2007-09-14 15:23:23 +1000
commit     ed75f988d5e811531bc15becc04acecf917dfd6a (patch)
tree       39f1d4ff6358307368086997acd806ffc9f9e180
parent     c62490569b81408a7577234434663a072059c798 (diff)
parent     2d0261afeb3946d4ab46cabc7f99ea04427cab40 (diff)
merge from ronnie
(This used to be ctdb commit 913c33a7d2f67570548fecc568dba874e5f72dd2)
-rw-r--r--  ctdb/doc/ctdb.1            18
-rw-r--r--  ctdb/doc/ctdb.1.html       76
-rw-r--r--  ctdb/doc/ctdb.1.xml        16
-rw-r--r--  ctdb/doc/ctdbd.1           58
-rw-r--r--  ctdb/doc/ctdbd.1.html      92
-rw-r--r--  ctdb/doc/ctdbd.1.xml       90
-rw-r--r--  ctdb/tools/ctdb.c          62
-rw-r--r--  ctdb/web/configuring.html  45
8 files changed, 264 insertions, 193 deletions
diff --git a/ctdb/doc/ctdb.1 b/ctdb/doc/ctdb.1
index 127e940780..f20780c3b1 100644
--- a/ctdb/doc/ctdb.1
+++ b/ctdb/doc/ctdb.1
@@ -1,11 +1,11 @@
.\" Title: ctdb
.\" Author:
.\" Generator: DocBook XSL Stylesheets v1.71.0 <http://docbook.sf.net/>
-.\" Date: 09/03/2007
+.\" Date: 09/14/2007
.\" Manual:
.\" Source:
.\"
-.TH "CTDB" "1" "09/03/2007" "" ""
+.TH "CTDB" "1" "09/14/2007" "" ""
.\" disable hyphenation
.nh
.\" disable justification (adjust text to left margin only)
@@ -22,11 +22,11 @@ ctdb \- clustered tdb database management utility
ctdb is a utility to view and manage a ctdb cluster.
.SH "OPTIONS"
.PP
-\-n <vnn>
+\-n <pnn>
.RS 3n
-This specifies the virtual node number on which to execute the command. Default is to run the command on the deamon running on the local host.
+This specifies the physical node number on which to execute the command. Default is to run the command on the daemon running on the local host.
.sp
-The virtual node number is an integer that describes the node in the cluster. The first node has virtual node number 0.
+The physical node number is an integer that describes the node in the cluster. The first node has physical node number 0.
.RE
.PP
\-Y
@@ -138,10 +138,10 @@ Example output:
.RS 3n
.nf
Number of nodes:4
-vnn:0 11.1.2.200 OK (THIS NODE)
-vnn:1 11.1.2.201 OK
-vnn:2 11.1.2.202 OK
-vnn:3 11.1.2.203 OK
+pnn:0 11.1.2.200 OK (THIS NODE)
+pnn:1 11.1.2.201 OK
+pnn:2 11.1.2.202 OK
+pnn:3 11.1.2.203 OK
Generation:1362079228
Size:4
hash:0 lmaster:0
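The renamed pnn: lines in the status output above are easy to post-process. A minimal shell sketch (using the sample output from this man page; in practice the text would come from running `ctdb status`) that lists the pnn of every node reported OK:

```shell
# Sample `ctdb status` output as shown in the man page above.
status_output='Number of nodes:4
pnn:0 11.1.2.200 OK (THIS NODE)
pnn:1 11.1.2.201 OK
pnn:2 11.1.2.202 OK
pnn:3 11.1.2.203 OK
Generation:1362079228'

# Split on ':' and ' ' so $2 is the pnn and $4 the node state,
# then print the pnn of every node in state OK.
printf '%s\n' "$status_output" | awk -F'[: ]' '/^pnn:/ && $4 == "OK" { print $2 }'
```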
diff --git a/ctdb/doc/ctdb.1.html b/ctdb/doc/ctdb.1.html
index e3c665f6a5..4e0d056815 100644
--- a/ctdb/doc/ctdb.1.html
+++ b/ctdb/doc/ctdb.1.html
@@ -1,12 +1,12 @@
<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><title>ctdb</title><meta name="generator" content="DocBook XSL Stylesheets V1.71.0"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="refentry" lang="en"><a name="ctdb.1"></a><div class="titlepage"></div><div class="refnamediv"><h2>Name</h2><p>ctdb &#8212; clustered tdb database management utility</p></div><div class="refsynopsisdiv"><h2>Synopsis</h2><div class="cmdsynopsis"><p><code class="command">ctdb [ OPTIONS ] COMMAND ...</code> </p></div><div class="cmdsynopsis"><p><code class="command">ctdb</code> [-n &lt;node&gt;] [-Y] [-t &lt;timeout&gt;] [-? --help] [--usage] [-d --debug=&lt;INTEGER&gt;] [--socket=&lt;filename&gt;]</p></div></div><div class="refsect1" lang="en"><a name="id2480829"></a><h2>DESCRIPTION</h2><p>
ctdb is a utility to view and manage a ctdb cluster.
- </p></div><div class="refsect1" lang="en"><a name="id2480839"></a><h2>OPTIONS</h2><div class="variablelist"><dl><dt><span class="term">-n &lt;vnn&gt;</span></dt><dd><p>
- This specifies the virtual node number on which to execute the
+ </p></div><div class="refsect1" lang="en"><a name="id2480839"></a><h2>OPTIONS</h2><div class="variablelist"><dl><dt><span class="term">-n &lt;pnn&gt;</span></dt><dd><p>
+ This specifies the physical node number on which to execute the
          command. Default is to run the command on the daemon running on
the local host.
</p><p>
- The virtual node number is an integer that describes the node in the
- cluster. The first node has virtual node number 0.
+ The physical node number is an integer that describes the node in the
+ cluster. The first node has physical node number 0.
</p></dd><dt><span class="term">-Y</span></dt><dd><p>
Produce output in machine readable form for easier parsing by scripts. Not all commands support this option.
</p></dd><dt><span class="term">-t &lt;timeout&gt;</span></dt><dd><p>
@@ -40,7 +40,7 @@
UNHEALTHY - A service provided by this node is malfunctioning and should be investigated. The CTDB daemon itself is operational and participates in the cluster. Its public IP address has been taken over by a different node and no services are currently being hosted. All unhealthy nodes should be investigated and require an administrative action to rectify.
      </p><p>
BANNED - This node failed too many recovery attempts and has been banned from participating in the cluster for a period of RecoveryBanPeriod seconds. Any public IP address has been taken over by other nodes. This node does not provide any services. All banned nodes should be investigated and require an administrative action to rectify. This node does not participate in the CTDB cluster but can still be communicated with. I.e. ctdb commands can be sent to it.
- </p></div><div class="refsect3" lang="en"><a name="id2481204"></a><h4>generation</h4><p>
+ </p></div><div class="refsect3" lang="en"><a name="id2481203"></a><h4>generation</h4><p>
The generation id is a number that indicates the current generation
of a cluster instance. Each time a cluster goes through a
reconfiguration or a recovery its generation id will be changed.
@@ -59,10 +59,10 @@
Example: ctdb status
</p><p>Example output:</p><pre class="screen">
Number of nodes:4
-vnn:0 11.1.2.200 OK (THIS NODE)
-vnn:1 11.1.2.201 OK
-vnn:2 11.1.2.202 OK
-vnn:3 11.1.2.203 OK
+pnn:0 11.1.2.200 OK (THIS NODE)
+pnn:1 11.1.2.201 OK
+pnn:2 11.1.2.202 OK
+pnn:3 11.1.2.203 OK
Generation:1362079228
Size:4
hash:0 lmaster:0
@@ -71,7 +71,7 @@ hash:2 lmaster:2
hash:3 lmaster:3
Recovery mode:NORMAL (0)
Recovery master:0
- </pre></div><div class="refsect2" lang="en"><a name="id2481284"></a><h3>ping</h3><p>
+ </pre></div><div class="refsect2" lang="en"><a name="id2481285"></a><h3>ping</h3><p>
This command will "ping" all CTDB daemons in the cluster to verify that they are processing commands correctly.
</p><p>
Example: ctdb ping
@@ -82,7 +82,7 @@ response from 0 time=0.000054 sec (3 clients)
response from 1 time=0.000144 sec (2 clients)
response from 2 time=0.000105 sec (2 clients)
response from 3 time=0.000114 sec (2 clients)
- </pre></div><div class="refsect2" lang="en"><a name="id2481310"></a><h3>ip</h3><p>
+ </pre></div><div class="refsect2" lang="en"><a name="id2481311"></a><h3>ip</h3><p>
This command will display the list of public addresses that are provided by the cluster and which physical node is currently serving this ip.
</p><p>
Example: ctdb ip
@@ -102,11 +102,11 @@ Number of addresses:4
Example output:
</p><pre class="screen">
MaxRedirectCount = 3
- </pre></div><div class="refsect2" lang="en"><a name="id2528417"></a><h3>setvar &lt;name&gt; &lt;value&gt;</h3><p>
+ </pre></div><div class="refsect2" lang="en"><a name="id2528419"></a><h3>setvar &lt;name&gt; &lt;value&gt;</h3><p>
Set the runtime value of a tuneable variable.
</p><p>
Example: ctdb setvar MaxRedirectCount 5
- </p></div><div class="refsect2" lang="en"><a name="id2528432"></a><h3>listvars</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528434"></a><h3>listvars</h3><p>
List all tuneable variables.
</p><p>
Example: ctdb listvars
@@ -128,7 +128,7 @@ MonitorInterval = 15
EventScriptTimeout = 20
RecoveryGracePeriod = 60
RecoveryBanPeriod = 300
- </pre></div><div class="refsect2" lang="en"><a name="id2528460"></a><h3>statistics</h3><p>
+ </pre></div><div class="refsect2" lang="en"><a name="id2528462"></a><h3>statistics</h3><p>
Collect statistics from the CTDB daemon about how many calls it has served.
</p><p>
Example: ctdb statistics
@@ -170,43 +170,43 @@ CTDB version 1
max_hop_count 0
max_call_latency 4.948321 sec
max_lockwait_latency 0.000000 sec
- </pre></div><div class="refsect2" lang="en"><a name="id2528504"></a><h3>statisticsreset</h3><p>
+ </pre></div><div class="refsect2" lang="en"><a name="id2528505"></a><h3>statisticsreset</h3><p>
This command is used to clear all statistics counters in a node.
</p><p>
Example: ctdb statisticsreset
- </p></div><div class="refsect2" lang="en"><a name="id2528518"></a><h3>getdebug</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528519"></a><h3>getdebug</h3><p>
          Get the current debug level for the node. The debug level controls what information is written to the log file.
- </p></div><div class="refsect2" lang="en"><a name="id2528529"></a><h3>setdebug &lt;debuglevel&gt;</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528530"></a><h3>setdebug &lt;debuglevel&gt;</h3><p>
Set the debug level of a node. This is a number between 0 and 9 and controls what information will be written to the logfile.
- </p></div><div class="refsect2" lang="en"><a name="id2528541"></a><h3>getpid</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528542"></a><h3>getpid</h3><p>
This command will return the process id of the ctdb daemon.
- </p></div><div class="refsect2" lang="en"><a name="id2528551"></a><h3>disable</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528552"></a><h3>disable</h3><p>
This command is used to administratively disable a node in the cluster.
A disabled node will still participate in the cluster and host
clustered TDB records but its public ip address has been taken over by
a different node and it no longer hosts any services.
- </p></div><div class="refsect2" lang="en"><a name="id2528565"></a><h3>enable</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528566"></a><h3>enable</h3><p>
Re-enable a node that has been administratively disabled.
- </p></div><div class="refsect2" lang="en"><a name="id2528575"></a><h3>ban &lt;bantime|0&gt;</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528576"></a><h3>ban &lt;bantime|0&gt;</h3><p>
Administratively ban a node for bantime seconds. A bantime of 0 means that the node should be permanently banned.
</p><p>
          A banned node does not participate in the cluster and does not host any records for the clustered TDB. Its ip address has been taken over by another node and no services are hosted.
</p><p>
Nodes are automatically banned if they are the cause of too many
cluster recoveries.
- </p></div><div class="refsect2" lang="en"><a name="id2528598"></a><h3>unban</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528599"></a><h3>unban</h3><p>
This command is used to unban a node that has either been
administratively banned using the ban command or has been automatically
banned by the recovery daemon.
- </p></div><div class="refsect2" lang="en"><a name="id2528610"></a><h3>shutdown</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528611"></a><h3>shutdown</h3><p>
This command will shutdown a specific CTDB daemon.
- </p></div><div class="refsect2" lang="en"><a name="id2528620"></a><h3>recover</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528621"></a><h3>recover</h3><p>
This command will trigger the recovery daemon to do a cluster
recovery.
- </p></div><div class="refsect2" lang="en"><a name="id2528631"></a><h3>killtcp &lt;srcip:port&gt; &lt;dstip:port&gt;</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528632"></a><h3>killtcp &lt;srcip:port&gt; &lt;dstip:port&gt;</h3><p>
This command will kill the specified TCP connection by issuing a
TCP RST to the srcip:port endpoint.
- </p></div><div class="refsect2" lang="en"><a name="id2528642"></a><h3>tickle &lt;srcip:port&gt; &lt;dstip:port&gt;</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528643"></a><h3>tickle &lt;srcip:port&gt; &lt;dstip:port&gt;</h3><p>
          This command will send a TCP tickle to the source host for the
specified TCP connection.
A TCP tickle is a TCP ACK packet with an invalid sequence and
@@ -218,12 +218,12 @@ CTDB version 1
TCP connection has been disrupted and that the client will need
to reestablish. This greatly speeds up the time it takes for a client
to detect and reestablish after an IP failover in the ctdb cluster.
- </p></div></div><div class="refsect1" lang="en"><a name="id2528668"></a><h2>Debugging Commands</h2><p>
+ </p></div></div><div class="refsect1" lang="en"><a name="id2528669"></a><h2>Debugging Commands</h2><p>
These commands are primarily used for CTDB development and testing and
should not be used for normal administration.
- </p><div class="refsect2" lang="en"><a name="id2528679"></a><h3>process-exists &lt;pid&gt;</h3><p>
+ </p><div class="refsect2" lang="en"><a name="id2528680"></a><h3>process-exists &lt;pid&gt;</h3><p>
This command checks if a specific process exists on the CTDB host. This is mainly used by Samba to check if remote instances of samba are still running or not.
- </p></div><div class="refsect2" lang="en"><a name="id2528691"></a><h3>getdbmap</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528692"></a><h3>getdbmap</h3><p>
          This command lists all clustered TDB databases that the CTDB daemon has attached to.
</p><p>
Example: ctdb getdbmap
@@ -235,21 +235,21 @@ dbid:0x42fe72c5 name:locking.tdb path:/var/ctdb/locking.tdb.0
dbid:0x1421fb78 name:brlock.tdb path:/var/ctdb/brlock.tdb.0
dbid:0x17055d90 name:connections.tdb path:/var/ctdb/connections.tdb.0
dbid:0xc0bdde6a name:sessionid.tdb path:/var/ctdb/sessionid.tdb.0
- </pre></div><div class="refsect2" lang="en"><a name="id2528718"></a><h3>catdb &lt;dbname&gt;</h3><p>
+ </pre></div><div class="refsect2" lang="en"><a name="id2528719"></a><h3>catdb &lt;dbname&gt;</h3><p>
This command will dump a clustered TDB database to the screen. This is a debugging command.
- </p></div><div class="refsect2" lang="en"><a name="id2528729"></a><h3>getmonmode</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528730"></a><h3>getmonmode</h3><p>
          This command returns the monitoring mode of a node. The monitoring mode is either ACTIVE or DISABLED. Normally a node will continuously monitor that all other nodes that are expected are in fact connected and that they respond to commands.
</p><p>
ACTIVE - This is the normal mode. The node is actively monitoring all other nodes, both that the transport is connected and also that the node responds to commands. If a node becomes unavailable, it will be marked as DISCONNECTED and a recovery is initiated to restore the cluster.
</p><p>
          DISABLED - This node is not monitoring that other nodes are available. In this mode a node failure will not be detected and no recovery will be performed. This mode is useful when for debugging purposes one wants to attach GDB to a ctdb process but wants to prevent the rest of the cluster from marking this node as DISCONNECTED and doing a recovery.
- </p></div><div class="refsect2" lang="en"><a name="id2528760"></a><h3>setmonmode &lt;0|1&gt;</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528761"></a><h3>setmonmode &lt;0|1&gt;</h3><p>
          This command can be used to explicitly disable/enable monitoring mode on a node. The main purpose is if one wants to attach GDB to a running ctdb daemon but wants to prevent the other nodes from marking it as DISCONNECTED and issuing a recovery. To do this, set monitoring mode to 0 on all nodes before attaching with GDB. Remember to set monitoring mode back to 1 afterwards.
- </p></div><div class="refsect2" lang="en"><a name="id2528776"></a><h3>attach &lt;dbname&gt;</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528777"></a><h3>attach &lt;dbname&gt;</h3><p>
This is a debugging command. This command will make the CTDB daemon create a new CTDB database and attach to it.
- </p></div><div class="refsect2" lang="en"><a name="id2528787"></a><h3>dumpmemory</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528788"></a><h3>dumpmemory</h3><p>
          This is a debugging command. This command will make the ctdb daemon write a full memory allocation map to the log file.
- </p></div><div class="refsect2" lang="en"><a name="id2528798"></a><h3>freeze</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528799"></a><h3>freeze</h3><p>
This command will lock all the local TDB databases causing clients
that are accessing these TDBs such as samba3 to block until the
databases are thawed.
@@ -257,12 +257,12 @@ dbid:0xc0bdde6a name:sessionid.tdb path:/var/ctdb/sessionid.tdb.0
This is primarily used by the recovery daemon to stop all samba
daemons from accessing any databases while the database is recovered
and rebuilt.
- </p></div><div class="refsect2" lang="en"><a name="id2528816"></a><h3>thaw</h3><p>
+ </p></div><div class="refsect2" lang="en"><a name="id2528817"></a><h3>thaw</h3><p>
Thaw a previously frozen node.
- </p></div></div><div class="refsect1" lang="en"><a name="id2528827"></a><h2>SEE ALSO</h2><p>
+ </p></div></div><div class="refsect1" lang="en"><a name="id2528828"></a><h2>SEE ALSO</h2><p>
ctdbd(1), onnode(1)
<a href="http://ctdb.samba.org/" target="_top">http://ctdb.samba.org/</a>
- </p></div><div class="refsect1" lang="en"><a name="id2528840"></a><h2>COPYRIGHT/LICENSE</h2><div class="literallayout"><p><br>
+ </p></div><div class="refsect1" lang="en"><a name="id2528841"></a><h2>COPYRIGHT/LICENSE</h2><div class="literallayout"><p><br>
Copyright (C) Andrew Tridgell 2007<br>
Copyright (C) Ronnie sahlberg 2007<br>
<br>
diff --git a/ctdb/doc/ctdb.1.xml b/ctdb/doc/ctdb.1.xml
index bcb6646d68..26d9ab9e6e 100644
--- a/ctdb/doc/ctdb.1.xml
+++ b/ctdb/doc/ctdb.1.xml
@@ -42,16 +42,16 @@
<title>OPTIONS</title>
<variablelist>
- <varlistentry><term>-n &lt;vnn&gt;</term>
+ <varlistentry><term>-n &lt;pnn&gt;</term>
<listitem>
<para>
- This specifies the virtual node number on which to execute the
+ This specifies the physical node number on which to execute the
        command. Default is to run the command on the daemon running on
the local host.
</para>
<para>
- The virtual node number is an integer that describes the node in the
- cluster. The first node has virtual node number 0.
+ The physical node number is an integer that describes the node in the
+ cluster. The first node has physical node number 0.
</para>
</listitem>
</varlistentry>
@@ -184,10 +184,10 @@
<para>Example output:</para>
<screen format="linespecific">
Number of nodes:4
-vnn:0 11.1.2.200 OK (THIS NODE)
-vnn:1 11.1.2.201 OK
-vnn:2 11.1.2.202 OK
-vnn:3 11.1.2.203 OK
+pnn:0 11.1.2.200 OK (THIS NODE)
+pnn:1 11.1.2.201 OK
+pnn:2 11.1.2.202 OK
+pnn:3 11.1.2.203 OK
Generation:1362079228
Size:4
hash:0 lmaster:0
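The `-n <pnn>` numbering described above follows the order of the nodes file: the node on the first line has pnn 0. A small sketch, assuming a hypothetical four-node nodes file written to a temp file (standing in for /etc/ctdb/nodes), that maps a private address back to its pnn:

```shell
# Illustrative nodes file; on a real cluster this is /etc/ctdb/nodes.
nodes_file=$(mktemp)
cat > "$nodes_file" <<'EOF'
10.1.1.1
10.1.1.2
10.1.1.3
10.1.1.4
EOF

pnn_of() {
    # The first line gets pnn 0, matching the man page's numbering.
    awk -v addr="$1" '$1 == addr { print NR - 1; exit }' "$nodes_file"
}

pnn_of 10.1.1.3   # prints 2
```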
diff --git a/ctdb/doc/ctdbd.1 b/ctdb/doc/ctdbd.1
index ac83b3944a..5d05635582 100644
--- a/ctdb/doc/ctdbd.1
+++ b/ctdb/doc/ctdbd.1
@@ -1,11 +1,11 @@
.\" Title: ctdbd
.\" Author:
.\" Generator: DocBook XSL Stylesheets v1.71.0 <http://docbook.sf.net/>
-.\" Date: 09/03/2007
+.\" Date: 09/14/2007
.\" Manual:
.\" Source:
.\"
-.TH "CTDBD" "1" "09/03/2007" "" ""
+.TH "CTDBD" "1" "09/14/2007" "" ""
.\" disable hyphenation
.nh
.\" disable justification (adjust text to left margin only)
@@ -16,7 +16,7 @@ ctdbd \- The CTDB cluster daemon
.HP 6
\fBctdbd\fR
.HP 6
-\fBctdbd\fR {\-\-reclock=<filename>} {\-\-nlist=<filename>} {\-\-dbdir=<directory>} [\-?\ \-\-help] [\-\-usage] [\-i\ \-\-interactive] [\-\-public\-addresses=<filename>] [\-\-event\-script=<filename>] [\-\-logfile=<filename>] [\-\-listen=<address>] [\-\-transport=<STRING>] [\-\-socket=<filename>] [\-d\ \-\-debug=<INTEGER>] [\-\-torture]
+\fBctdbd\fR {\-\-reclock=<filename>} {\-\-nlist=<filename>} {\-\-dbdir=<directory>} [\-?\ \-\-help] [\-\-usage] [\-i\ \-\-interactive] [\-\-public\-addresses=<filename>] [\-\-event\-script\-dir=<directory>] [\-\-logfile=<filename>] [\-\-listen=<address>] [\-\-transport=<STRING>] [\-\-socket=<filename>] [\-d\ \-\-debug=<INTEGER>] [\-\-torture]
.SH "DESCRIPTION"
.PP
ctdbd is the main ctdb daemon.
@@ -66,16 +66,16 @@ By default ctdbd will detach itself from the shell and run in the background as
.PP
\-\-public_addresses=<filename>
.RS 3n
-When used with IP takeover this specifies a file containing the public ip addresses to use on the cluster. This file contains a list of ip addresses netmasks and interfaces. When ctdb is operational it iwll distribute these public ip addresses evenly across the availabel nodes.
+When used with IP takeover this specifies a file containing the public ip addresses to use on the cluster. This file contains a list of ip addresses netmasks and interfaces. When ctdb is operational it will distribute these public ip addresses evenly across the available nodes.
.sp
This is usually the file /etc/ctdb/public_addresses
.RE
.PP
-\-\-event\-script=<filename>
+\-\-event\-script\-dir=<directory>
.RS 3n
-This option is used to specify which events script that ctdbd will use to manage services when the cluster configuration changes.
+This option is used to specify the directory where the CTDB event scripts are stored.
.sp
-This will normally be /etc/ctdb/events which is part of the ctdb distribution.
+This will normally be /etc/ctdb/events.d which is part of the ctdb distribution.
.RE
.PP
\-\-logfile=<filename>
@@ -122,11 +122,11 @@ When used for ip takeover in a HA environment, each node in a ctdb cluster has m
.PP
This is the physical ip address of the node which is configured in linux and attached to a physical interface. This address uniquely identifies a physical node in the cluster and is the ip address that ctdbd will use to communicate with the ctdbd daemons on the other nodes in the cluster.
.PP
-The private addresses are configured in /etc/ctdb/nodes (unless the \-\-nlist option is used) and contain one line for each node in the cluster. Each line contains the private ip address for one node in the cluster.
-.PP
-Each node is assigned an internal node number which corresponds to which line in the nodes file that has the local private address of the node.
+The private addresses are configured in /etc/ctdb/nodes (unless the \-\-nlist option is used) and contain one line for each node in the cluster. Each line contains the private ip address for one node in the cluster. This file must be the same on all nodes in the cluster.
.PP
Since the private addresses are only available to the network when the corresponding node is up and running you should not use these addresses for clients to connect to services provided by the cluster. Instead client applications should only attach to the public addresses since these are guaranteed to always be available.
+.PP
+When using ip takeover, it is strongly recommended that the private addresses are configured on a private network physically separated from the rest of the network and that this private network is dedicated to CTDB traffic.
Example /etc/ctdb/nodes for a four node cluster:
@@ -142,15 +142,15 @@ Since the private addresses are only available to the network when the correspon
.RE
.SS "Public address"
.PP
-A public address on the other hand is not attached to an interface. This address is managed by ctdbd itself and is attached/detached to a physical node at runtime. You should NOT have this address configured to an interface in linux. Let ctdbd manage these addresses.
+A public address on the other hand is not attached to an interface. This address is managed by ctdbd itself and is attached/detached to a physical node at runtime.
.PP
-The ctdb cluster will assign/reassign these public addresses across the available healthy nodes in the cluster. When one node fails, its public address will be migrated to and taken over by a different node in the cluster to ensure that all public addresses are always available to clients.
+The ctdb cluster will assign/reassign these public addresses across the available healthy nodes in the cluster. When one node fails, its public address will be migrated to and taken over by a different node in the cluster to ensure that all public addresses are always available to clients as long as there are still nodes available capable of hosting this address.
.PP
These addresses are not physically attached to a specific node. The 'ctdb ip' command can be used to view the current assignment of public addresses and which physical node is currently serving it.
.PP
-The list of public addresses also contain the netmask and the interface where this address should be attached.
+On each node this file contains a list of the public addresses that this node is capable of hosting. The list also contains the netmask and the interface where this address should be attached, for the case where you may want to serve data out through multiple different interfaces.
- Example /etc/ctdb/public_addresses for a four node cluster:
+ Example /etc/ctdb/public_addresses for a node that can host 4 public addresses:
.sp
.RS 3n
@@ -163,9 +163,35 @@ The list of public addresses also contain the netmask and the interface where th
.fi
.RE
.PP
-In this example, two nodes in the cluster will serve 11.1.1.1 and 11.1.1.2 through interface eth0 and two (possibly other) nodes will serve 11.1.2.1 and 11.1.2.2 through eth1.
+In most cases this file would be the same on all nodes in a cluster but there are exceptions when one may want to use different files on different nodes.
+
+ Example: 4 nodes partitioned into two subgroups:
+
+.sp
+.RS 3n
+.nf
+ Node 0:/etc/ctdb/public_addresses
+ 10.1.1.1/24 eth0
+ 10.1.1.2/24 eth0
+
+ Node 1:/etc/ctdb/public_addresses
+ 10.1.1.1/24 eth0
+ 10.1.1.2/24 eth0
+
+ Node 2:/etc/ctdb/public_addresses
+ 10.2.1.1/24 eth0
+ 10.2.1.2/24 eth0
+
+ Node 3:/etc/ctdb/public_addresses
+ 10.2.1.1/24 eth0
+ 10.2.1.2/24 eth0
+
+.fi
+.RE
+.PP
+In this example nodes 0 and 1 host two public addresses on the 10.1.1.x network while nodes 2 and 3 host two public addresses for the 10.2.1.x network.
.PP
-The public address file must be the same on all nodes. Since this file also specifies which interface the address should be attached to it is imporant that all nodes use the same naming convention for interfaces.
+Ip address 10.1.1.1 can be hosted by either of nodes 0 or 1 and will be available to clients as long as at least one of these two nodes is available. If both node 0 and node 1 become unavailable, 10.1.1.1 also becomes unavailable. 10.1.1.1 cannot be failed over to node 2 or node 3 since these nodes do not have this ip address listed in their public addresses file.
.SH "NODE STATUS"
.PP
The current status of each node in the cluster can be viewed by the 'ctdb status' command.
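The partitioned public_addresses example above determines failover eligibility: an address can only move to a node whose file lists it. A minimal sketch reproducing that layout with illustrative temp files (standing in for each node's /etc/ctdb/public_addresses):

```shell
# Recreate the four per-node files from the example: nodes 0/1 share the
# 10.1.1.x addresses, nodes 2/3 the 10.2.1.x addresses. The temp dir is a
# stand-in for /etc/ctdb/public_addresses on each node.
dir=$(mktemp -d)
printf '10.1.1.1/24 eth0\n10.1.1.2/24 eth0\n' > "$dir/node0"
cp "$dir/node0" "$dir/node1"
printf '10.2.1.1/24 eth0\n10.2.1.2/24 eth0\n' > "$dir/node2"
cp "$dir/node2" "$dir/node3"

# An address can only be hosted (and failed over to) by nodes listing it.
hosts_for() {
    grep -l "^$1/" "$dir"/node* | xargs -n1 basename
}

hosts_for 10.1.1.1   # node0 and node1 only
```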
diff --git a/ctdb/doc/ctdbd.1.html b/ctdb/doc/ctdbd.1.html
index b600785f89..8a5059e730 100644
--- a/ctdb/doc/ctdbd.1.html
+++ b/ctdb/doc/ctdbd.1.html
@@ -1,4 +1,4 @@
-<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><title>ctdbd</title><meta name="generator" content="DocBook XSL Stylesheets V1.71.0"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="refentry" lang="en"><a name="ctdbd.1"></a><div class="titlepage"></div><div class="refnamediv"><h2>Name</h2><p>ctdbd &#8212; The CTDB cluster daemon</p></div><div class="refsynopsisdiv"><h2>Synopsis</h2><div class="cmdsynopsis"><p><code class="command">ctdbd</code> </p></div><div class="cmdsynopsis"><p><code class="command">ctdbd</code> {--reclock=&lt;filename&gt;} {--nlist=&lt;filename&gt;} {--dbdir=&lt;directory&gt;} [-? --help] [--usage] [-i --interactive] [--public-addresses=&lt;filename&gt;] [--event-script=&lt;filename&gt;] [--logfile=&lt;filename&gt;] [--listen=&lt;address&gt;] [--transport=&lt;STRING&gt;] [--socket=&lt;filename&gt;] [-d --debug=&lt;INTEGER&gt;] [--torture]</p></div></div><div class="refsect1" lang="en"><a name="id2480886"></a><h2>DESCRIPTION</h2><p>
+<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><title>ctdbd</title><meta name="generator" content="DocBook XSL Stylesheets V1.71.0"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="refentry" lang="en"><a name="ctdbd.1"></a><div class="titlepage"></div><div class="refnamediv"><h2>Name</h2><p>ctdbd &#8212; The CTDB cluster daemon</p></div><div class="refsynopsisdiv"><h2>Synopsis</h2><div class="cmdsynopsis"><p><code class="command">ctdbd</code> </p></div><div class="cmdsynopsis"><p><code class="command">ctdbd</code> {--reclock=&lt;filename&gt;} {--nlist=&lt;filename&gt;} {--dbdir=&lt;directory&gt;} [-? --help] [--usage] [-i --interactive] [--public-addresses=&lt;filename&gt;] [--event-script-dir=&lt;directory&gt;] [--logfile=&lt;filename&gt;] [--listen=&lt;address&gt;] [--transport=&lt;STRING&gt;] [--socket=&lt;filename&gt;] [-d --debug=&lt;INTEGER&gt;] [--torture]</p></div></div><div class="refsect1" lang="en"><a name="id2480886"></a><h2>DESCRIPTION</h2><p>
ctdbd is the main ctdb daemon.
</p><p>
          ctdbd provides a clustered version of the TDB database with automatic rebuild/recovery of the databases upon node failures.
@@ -28,14 +28,14 @@
By default ctdbd will detach itself from the shell and run in
          the background as a daemon. This option makes ctdbd start in interactive mode.
</p></dd><dt><span class="term">--public_addresses=&lt;filename&gt;</span></dt><dd><p>
- When used with IP takeover this specifies a file containing the public ip addresses to use on the cluster. This file contains a list of ip addresses netmasks and interfaces. When ctdb is operational it iwll distribute these public ip addresses evenly across the availabel nodes.
+ When used with IP takeover this specifies a file containing the public ip addresses to use on the cluster. This file contains a list of ip addresses netmasks and interfaces. When ctdb is operational it will distribute these public ip addresses evenly across the available nodes.
</p><p>
This is usually the file /etc/ctdb/public_addresses
- </p></dd><dt><span class="term">--event-script=&lt;filename&gt;</span></dt><dd><p>
- This option is used to specify which events script that ctdbd will
- use to manage services when the cluster configuration changes.
+ </p></dd><dt><span class="term">--event-script-dir=&lt;directory&gt;</span></dt><dd><p>
+ This option is used to specify the directory where the CTDB event
+ scripts are stored.
</p><p>
- This will normally be /etc/ctdb/events which is part of the ctdb distribution.
+ This will normally be /etc/ctdb/events.d which is part of the ctdb distribution.
</p></dd><dt><span class="term">--logfile=&lt;filename&gt;</span></dt><dd><p>
This is the file where ctdbd will write its log. This is usually /var/log/log.ctdb .
</p></dd><dt><span class="term">--listen=&lt;address&gt;</span></dt><dd><p>
@@ -56,10 +56,10 @@
          This option is only used for development and testing of ctdbd. It adds artificial errors and failures to the common codepaths in ctdbd to verify that ctdbd can recover correctly from failures.
</p><p>
You do NOT want to use this option unless you are developing and testing new functionality in ctdbd.
- </p></dd></dl></div></div><div class="refsect1" lang="en"><a name="id2528418"></a><h2>Private vs Public addresses</h2><p>
+ </p></dd></dl></div></div><div class="refsect1" lang="en"><a name="id2528417"></a><h2>Private vs Public addresses</h2><p>
When used for ip takeover in a HA environment, each node in a ctdb
cluster has multiple ip addresses assigned to it. One private and one or more public.
- </p><div class="refsect2" lang="en"><a name="id2528428"></a><h3>Private address</h3><p>
+ </p><div class="refsect2" lang="en"><a name="id2528427"></a><h3>Private address</h3><p>
This is the physical ip address of the node which is configured in
linux and attached to a physical interface. This address uniquely
          identifies a physical node in the cluster and is the ip address
@@ -69,17 +69,19 @@
The private addresses are configured in /etc/ctdb/nodes
(unless the --nlist option is used) and contain one line for each
node in the cluster. Each line contains the private ip address for one
- node in the cluster.
- </p><p>
- Each node is assigned an internal node number which corresponds to
- which line in the nodes file that has the local private address
- of the node.
+ node in the cluster. This file must be the same on all nodes in the
+ cluster.
</p><p>
Since the private addresses are only available to the network when the
corresponding node is up and running you should not use these addresses
for clients to connect to services provided by the cluster. Instead
client applications should only attach to the public addresses since
these are guaranteed to always be available.
+ </p><p>
+ When using ip takeover, it is strongly recommended that the private
+ addresses are configured on a private network physically separated
+ from the rest of the network and that this private network is dedicated
+ to CTDB traffic.
</p>
Example /etc/ctdb/nodes for a four node cluster:
<pre class="screen">
@@ -87,40 +89,68 @@
10.1.1.2
10.1.1.3
10.1.1.4
- </pre></div><div class="refsect2" lang="en"><a name="id2528475"></a><h3>Public address</h3><p>
+ </pre></div><div class="refsect2" lang="en"><a name="id2528476"></a><h3>Public address</h3><p>
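The mapping from the nodes file above to physical node numbers can be sketched as follows (a minimal illustration, not ctdb source: the PNN of a node is simply the zero-based index of its private address in /etc/ctdb/nodes, which is why the file must be identical on every node):

```python
# Sketch: derive physical node numbers (PNNs) from the contents of
# /etc/ctdb/nodes.  The first line gets PNN 0, the second PNN 1, etc.

NODES_FILE = """\
10.1.1.1
10.1.1.2
10.1.1.3
10.1.1.4
"""

def pnn_map(nodes_text):
    """Map each private address to its PNN (its zero-based line index)."""
    addrs = [line.strip() for line in nodes_text.splitlines() if line.strip()]
    return {addr: pnn for pnn, addr in enumerate(addrs)}

print(pnn_map(NODES_FILE)["10.1.1.1"])  # → 0
```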
A public address on the other hand is not attached to an interface.
This address is managed by ctdbd itself and is attached/detached to
- a physical node at runtime. You should NOT have this address configured
- to an interface in linux. Let ctdbd manage these addresses.
+ a physical node at runtime.
</p><p>
The ctdb cluster will assign/reassign these public addresses across the
available healthy nodes in the cluster. When one node fails, its public address
will be migrated to and taken over by a different node in the cluster
- to ensure that all public addresses are always available to clients.
+ to ensure that all public addresses are always available to clients as
+ long as there are still nodes available capable of hosting this address.
</p><p>
These addresses are not physically attached to a specific node.
The 'ctdb ip' command can be used to view the current assignment of
public addresses and which physical node is currently serving it.
</p><p>
- The list of public addresses also contain the netmask and the
- interface where this address should be attached.
+	On each node the /etc/ctdb/public_addresses file contains a list of
+	the public addresses that this node is capable of hosting.
+	The list also contains the netmask and the
+ interface where this address should be attached for the case where you
+ may want to serve data out through multiple different interfaces.
</p>
- Example /etc/ctdb/public_addresses for a four node cluster:
+ Example /etc/ctdb/public_addresses for a node that can host 4 public addresses:
<pre class="screen">
11.1.1.1/24 eth0
11.1.1.2/24 eth0
11.1.2.1/24 eth1
11.1.2.2/24 eth1
</pre><p>
- In this example, two nodes in the cluster will serve 11.1.1.1 and
- 11.1.1.2 through interface eth0 and two (possibly other) nodes will
- serve 11.1.2.1 and 11.1.2.2 through eth1.
- </p><p>
- The public address file must be the same on all nodes.
- Since this file also specifies which interface the address should be
- attached to it is imporant that all nodes use the same naming convention
- for interfaces.
- </p></div></div><div class="refsect1" lang="en"><a name="id2528534"></a><h2>Node status</h2><p>
+ In most cases this file would be the same on all nodes in a cluster but
+ there are exceptions when one may want to use different files
+ on different nodes.
+ </p>
+	Example: 4 nodes partitioned into two subgroups:
+ <pre class="screen">
+ Node 0:/etc/ctdb/public_addresses
+ 10.1.1.1/24 eth0
+ 10.1.1.2/24 eth0
+
+ Node 1:/etc/ctdb/public_addresses
+ 10.1.1.1/24 eth0
+ 10.1.1.2/24 eth0
+
+ Node 2:/etc/ctdb/public_addresses
+ 10.2.1.1/24 eth0
+ 10.2.1.2/24 eth0
+
+ Node 3:/etc/ctdb/public_addresses
+ 10.2.1.1/24 eth0
+ 10.2.1.2/24 eth0
+ </pre><p>
+ In this example nodes 0 and 1 host two public addresses on the
+ 10.1.1.x network while nodes 2 and 3 host two public addresses for the
+ 10.2.1.x network.
+ </p><p>
+	The IP address 10.1.1.1 can be hosted by either node 0 or node 1 and
+	will be available to clients as long as at least one of these two
+	nodes is available.
+	If both node 0 and node 1 become unavailable, 10.1.1.1 also becomes
+	unavailable. 10.1.1.1 cannot be failed over to node 2 or node 3 since
+	these nodes do not have this ip address listed in their public
+	addresses files.
+ </p></div></div><div class="refsect1" lang="en"><a name="id2528564"></a><h2>Node status</h2><p>
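The eligibility rule in the partitioned example above can be sketched as a short check (an illustration only, assuming the documented `address/mask interface` line format; the dictionary of per-node file contents mirrors the four-node example):

```python
# Sketch: a public address can only fail over to nodes that list it in
# their own public_addresses file, so it stays reachable only while at
# least one such node is healthy.

PUBLIC_ADDRESSES = {
    0: "10.1.1.1/24 eth0\n10.1.1.2/24 eth0",
    1: "10.1.1.1/24 eth0\n10.1.1.2/24 eth0",
    2: "10.2.1.1/24 eth0\n10.2.1.2/24 eth0",
    3: "10.2.1.1/24 eth0\n10.2.1.2/24 eth0",
}

def eligible_nodes(files):
    """Return {address: set of PNNs that can host it}."""
    hosts = {}
    for pnn, text in files.items():
        for line in text.splitlines():
            addr = line.split()[0].split("/")[0]   # strip mask and interface
            hosts.setdefault(addr, set()).add(pnn)
    return hosts

def is_available(addr, healthy_pnns, files):
    """An address is reachable iff some healthy node lists it."""
    return bool(eligible_nodes(files).get(addr, set()) & healthy_pnns)

# 10.1.1.1 survives as long as node 0 or node 1 is healthy ...
print(is_available("10.1.1.1", {1, 2, 3}, PUBLIC_ADDRESSES))  # → True
# ... but cannot move to nodes 2 or 3 once both are down.
print(is_available("10.1.1.1", {2, 3}, PUBLIC_ADDRESSES))     # → False
```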
The current status of each node in the cluster can be viewed by the
'ctdb status' command.
</p><p>
@@ -151,10 +181,10 @@
investigated and require an administrative action to rectify. This node
    does not participate in the CTDB cluster but can still be communicated
with. I.e. ctdb commands can be sent to it.
- </p></div><div class="refsect1" lang="en"><a name="id2528591"></a><h2>SEE ALSO</h2><p>
+ </p></div><div class="refsect1" lang="en"><a name="id2528621"></a><h2>SEE ALSO</h2><p>
ctdb(1), onnode(1)
<a href="http://ctdb.samba.org/" target="_top">http://ctdb.samba.org/</a>
- </p></div><div class="refsect1" lang="en"><a name="id2528604"></a><h2>COPYRIGHT/LICENSE</h2><div class="literallayout"><p><br>
+ </p></div><div class="refsect1" lang="en"><a name="id2528634"></a><h2>COPYRIGHT/LICENSE</h2><div class="literallayout"><p><br>
Copyright (C) Andrew Tridgell 2007<br>
Copyright (C) Ronnie sahlberg 2007<br>
<br>
diff --git a/ctdb/doc/ctdbd.1.xml b/ctdb/doc/ctdbd.1.xml
index fdda489a57..1f052940a6 100644
--- a/ctdb/doc/ctdbd.1.xml
+++ b/ctdb/doc/ctdbd.1.xml
@@ -27,7 +27,7 @@
<arg choice="opt">--usage</arg>
<arg choice="opt">-i --interactive</arg>
<arg choice="opt">--public-addresses=&lt;filename&gt;</arg>
- <arg choice="opt">--event-script=&lt;filename&gt;</arg>
+ <arg choice="opt">--event-script-dir=&lt;directory&gt;</arg>
<arg choice="opt">--logfile=&lt;filename&gt;</arg>
<arg choice="opt">--listen=&lt;address&gt;</arg>
<arg choice="opt">--transport=&lt;STRING&gt;</arg>
@@ -121,7 +121,7 @@
<varlistentry><term>--public_addresses=&lt;filename&gt;</term>
<listitem>
<para>
- When used with IP takeover this specifies a file containing the public ip addresses to use on the cluster. This file contains a list of ip addresses netmasks and interfaces. When ctdb is operational it iwll distribute these public ip addresses evenly across the availabel nodes.
+	When used with IP takeover, this specifies a file containing the public ip addresses to use on the cluster. This file contains a list of ip addresses, netmasks and interfaces. When ctdb is operational it will distribute these public ip addresses evenly across the available nodes.
</para>
<para>
This is usually the file /etc/ctdb/public_addresses
@@ -129,14 +129,14 @@
</listitem>
</varlistentry>
- <varlistentry><term>--event-script=&lt;filename&gt;</term>
+ <varlistentry><term>--event-script-dir=&lt;directory&gt;</term>
<listitem>
<para>
- This option is used to specify which events script that ctdbd will
- use to manage services when the cluster configuration changes.
+ This option is used to specify the directory where the CTDB event
+ scripts are stored.
</para>
<para>
- This will normally be /etc/ctdb/events which is part of the ctdb distribution.
+ This will normally be /etc/ctdb/events.d which is part of the ctdb distribution.
</para>
</listitem>
</varlistentry>
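The events.d-style directory introduced above can be illustrated with a small sketch. How ctdbd actually selects and orders the scripts is not specified on this page, so the rule below (executable entries run in lexical order, the common *.d convention) is an assumption, and the script names are only examples:

```python
# Hypothetical illustration of an events.d-style directory such as
# /etc/ctdb/events.d: executable entries are picked up and ordered by
# name, so numeric prefixes control run order.  NOT ctdbd's actual code.

def pick_event_scripts(entries):
    """entries: {filename: is_executable} -> ordered list of scripts to run."""
    return sorted(name for name, is_exec in entries.items() if is_exec)

print(pick_event_scripts({"10.interface": True,
                          "50.samba": True,
                          "README": False}))  # → ['10.interface', '50.samba']
```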
@@ -222,12 +222,8 @@
The private addresses are configured in /etc/ctdb/nodes
(unless the --nlist option is used) and contain one line for each
node in the cluster. Each line contains the private ip address for one
- node in the cluster.
- </para>
- <para>
- Each node is assigned an internal node number which corresponds to
- which line in the nodes file that has the local private address
- of the node.
+ node in the cluster. This file must be the same on all nodes in the
+ cluster.
</para>
<para>
Since the private addresses are only available to the network when the
@@ -236,6 +232,12 @@
client applications should only attach to the public addresses since
these are guaranteed to always be available.
</para>
+ <para>
+ When using ip takeover, it is strongly recommended that the private
+ addresses are configured on a private network physically separated
+ from the rest of the network and that this private network is dedicated
+ to CTDB traffic.
+ </para>
Example /etc/ctdb/nodes for a four node cluster:
<screen format="linespecific">
10.1.1.1
@@ -248,14 +250,14 @@
<para>
A public address on the other hand is not attached to an interface.
This address is managed by ctdbd itself and is attached/detached to
- a physical node at runtime. You should NOT have this address configured
- to an interface in linux. Let ctdbd manage these addresses.
+ a physical node at runtime.
</para>
<para>
The ctdb cluster will assign/reassign these public addresses across the
available healthy nodes in the cluster. When one node fails, its public address
will be migrated to and taken over by a different node in the cluster
- to ensure that all public addresses are always available to clients.
+ to ensure that all public addresses are always available to clients as
+ long as there are still nodes available capable of hosting this address.
</para>
<para>
These addresses are not physically attached to a specific node.
@@ -263,27 +265,57 @@
public addresses and which physical node is currently serving it.
</para>
<para>
- The list of public addresses also contain the netmask and the
- interface where this address should be attached.
+	On each node the /etc/ctdb/public_addresses file contains a list of
+	the public addresses that this node is capable of hosting.
+	The list also contains the netmask and the
+ interface where this address should be attached for the case where you
+ may want to serve data out through multiple different interfaces.
</para>
- Example /etc/ctdb/public_addresses for a four node cluster:
+ Example /etc/ctdb/public_addresses for a node that can host 4 public addresses:
<screen format="linespecific">
11.1.1.1/24 eth0
11.1.1.2/24 eth0
11.1.2.1/24 eth1
11.1.2.2/24 eth1
</screen>
- <para>
- In this example, two nodes in the cluster will serve 11.1.1.1 and
- 11.1.1.2 through interface eth0 and two (possibly other) nodes will
- serve 11.1.2.1 and 11.1.2.2 through eth1.
- </para>
- <para>
- The public address file must be the same on all nodes.
- Since this file also specifies which interface the address should be
- attached to it is imporant that all nodes use the same naming convention
- for interfaces.
- </para>
+
+ <para>
+ In most cases this file would be the same on all nodes in a cluster but
+ there are exceptions when one may want to use different files
+ on different nodes.
+ </para>
    Example: 4 nodes partitioned into two subgroups:
+ <screen format="linespecific">
+ Node 0:/etc/ctdb/public_addresses
+ 10.1.1.1/24 eth0
+ 10.1.1.2/24 eth0
+
+ Node 1:/etc/ctdb/public_addresses
+ 10.1.1.1/24 eth0
+ 10.1.1.2/24 eth0
+
+ Node 2:/etc/ctdb/public_addresses
+ 10.2.1.1/24 eth0
+ 10.2.1.2/24 eth0
+
+ Node 3:/etc/ctdb/public_addresses
+ 10.2.1.1/24 eth0
+ 10.2.1.2/24 eth0
+ </screen>
+ <para>
+ In this example nodes 0 and 1 host two public addresses on the
+ 10.1.1.x network while nodes 2 and 3 host two public addresses for the
+ 10.2.1.x network.
+ </para>
+ <para>
+	The IP address 10.1.1.1 can be hosted by either node 0 or node 1 and
+	will be available to clients as long as at least one of these two
+	nodes is available.
+	If both node 0 and node 1 become unavailable, 10.1.1.1 also becomes
+	unavailable. 10.1.1.1 cannot be failed over to node 2 or node 3 since
+	these nodes do not have this ip address listed in their public
+	addresses files.
+ </para>
</refsect2>
</refsect1>
diff --git a/ctdb/tools/ctdb.c b/ctdb/tools/ctdb.c
index ca10aed0cc..b4f2ae575c 100644
--- a/ctdb/tools/ctdb.c
+++ b/ctdb/tools/ctdb.c
@@ -25,7 +25,6 @@
#include "cmdline.h"
#include "../include/ctdb.h"
#include "../include/ctdb_private.h"
-#include "../common/rb_tree.h"
static void usage(void);
@@ -521,73 +520,32 @@ static int tickle_tcp(struct ctdb_context *ctdb, int argc, const char **argv)
}
-static void *store_ip(void *p, void *d)
-{
- return p;
-}
-static void print_ip(void *param, void *data)
-{
- struct ctdb_public_ip *ip = (struct ctdb_public_ip *)data;
-
- if(options.machinereadable){
- printf(":%s:%d:\n", inet_ntoa(ip->sin.sin_addr), ip->pnn);
- } else {
- printf("%-16s %d\n", inet_ntoa(ip->sin.sin_addr), ip->pnn);
- }
-}
-
/*
display public ip status
*/
static int control_ip(struct ctdb_context *ctdb, int argc, const char **argv)
{
- int i, j, ret;
+ int i, ret;
TALLOC_CTX *tmp_ctx = talloc_new(ctdb);
- trbt_tree_t *tree;
- struct ctdb_node_map *nodemap=NULL;
struct ctdb_all_public_ips *ips;
- struct ctdb_public_ip *ip;
-
- ret = ctdb_ctrl_getnodemap(ctdb, TIMELIMIT(), options.pnn, tmp_ctx, &nodemap);
+ /* read the public ip list from this node */
+ ret = ctdb_ctrl_get_public_ips(ctdb, TIMELIMIT(), options.pnn, tmp_ctx, &ips);
if (ret != 0) {
- DEBUG(0, ("Unable to get nodemap from node %u\n", options.pnn));
+ DEBUG(0, ("Unable to get public ips from node %u\n", options.pnn));
talloc_free(tmp_ctx);
return ret;
}
- /* create a tree to store the public addresses in indexed by s_addr */
- tree = trbt_create(tmp_ctx, 0);
- CTDB_NO_MEMORY(ctdb, tree);
-
- for (i=0;i<nodemap->num;i++) {
- /* dont read the public ip list from disconnected nodes */
- if (nodemap->nodes[i].flags & NODE_FLAGS_DISCONNECTED) {
- continue;
- }
-
- /* read the public ip list from this node */
- ret = ctdb_ctrl_get_public_ips(ctdb, TIMELIMIT(), i, tmp_ctx, &ips);
- if (ret != 0) {
- DEBUG(0, ("Unable to get public ips from node %u\n", i));
- talloc_free(tmp_ctx);
- return ret;
- }
-
-
- /* store the public ip */
- for(j=0;j<ips->num;j++){
- ip = talloc_memdup(tmp_ctx, &ips->ips[j], sizeof(struct ctdb_public_ip));
- /* ntohl() so that we sort by the first octet */
- trbt_insert32_callback(tree, ntohl(ips->ips[j].sin.sin_addr.s_addr), store_ip, ip);
- }
+ if (options.machinereadable){
+ printf(":Public IP:Node:\n");
+ } else {
+ printf("Public IPs on node %u\n", options.pnn);
}
- /* traverse the tree and read back all the public ips one by one */
- if(options.machinereadable){
- printf(":Public IP:Node:\n");
+ for (i=0;i<ips->num;i++) {
+		if (options.machinereadable){
+			printf(":%s:%d:\n", inet_ntoa(ips->ips[i].sin.sin_addr), ips->ips[i].pnn);
+		} else {
+			printf("%-16s %d\n", inet_ntoa(ips->ips[i].sin.sin_addr), ips->ips[i].pnn);
+		}
}
- trbt_traversearray32(tree, 1, print_ip, NULL);
talloc_free(tmp_ctx);
return 0;
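The reworked control_ip() above emits a header line followed by one address and the PNN currently serving it per line. A consumer of that human-readable output could be sketched like this (an illustration, not part of the ctdb tree; field spacing is assumed to be flexible whitespace):

```python
# Sketch: parse the human-readable output of 'ctdb ip'
# ("Public IPs on node N" header, then "<address> <pnn>" rows).

def parse_ctdb_ip(output):
    """Return {public_ip: serving_pnn} from 'ctdb ip' output."""
    result = {}
    for line in output.splitlines()[1:]:   # skip the header line
        fields = line.split()
        if len(fields) == 2:
            result[fields[0]] = int(fields[1])
    return result

sample = "Public IPs on node 0\n11.1.1.1 0\n11.1.1.2 1\n"
print(parse_ctdb_ip(sample))  # → {'11.1.1.1': 0, '11.1.1.2': 1}
```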
diff --git a/ctdb/web/configuring.html b/ctdb/web/configuring.html
index ffc605ffc2..f91cd4519a 100644
--- a/ctdb/web/configuring.html
+++ b/ctdb/web/configuring.html
@@ -83,17 +83,15 @@ The default for this file is /etc/ctdb/nodes.
<h3>CTDB_PUBLIC_ADDRESSES</h3>
-This file specifies a list of public ip addresses which the cluster will
-serve. This file must be the same on all nodes.<p>
+Each node in a CTDB cluster contains a list of public addresses which that
+particular node can host.<p>
+While running, the CTDB cluster will assign each public address that exists in the entire cluster to one node, which will host that public address.<p>
These are the addresses that the SMBD daemons and other services will
-bind to and which clients will use to connect to the cluster. This
-file must contain one address for each node, i.e. it must have the
-same number of entries as the nodes file. This file must also be the
-same for all nodes in the cluster.<p>
+bind to and which clients will use to connect to the cluster.<p>
-Example 4 node cluster:
+<h3>Example 4 node cluster:</h3>
<pre>
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
</pre>
@@ -118,8 +116,7 @@ The CTDB cluster utilizes IP takeover techniques to ensure that as long as at le
This means that if one physical node fails, the public addresses that
node was serving will be taken over by a different node in the cluster. This
provides a guarantee that all ip addresses exposed to clients will
-always be reachable by clients even if a node has been powered off or
-has crashed.<p>
+always be reachable by clients as long as at least one node in the cluster remains available that is capable of hosting that public address (i.e. the public address exists in that node's public_addresses file).<p>
Do not assign these addresses to any of the interfaces on the
host. CTDB will add and remove these addresses automatically at
@@ -127,7 +124,35 @@ runtime.<p>
This parameter is used when CTDB operates in ip takeover mode.<p>
-The usual location for this file is /etc/ctdb/public_addresses.
+The usual location for this file is /etc/ctdb/public_addresses.<p>
+
+<h3>Example 2:</h3>
+By using different public_addresses files on different nodes it is possible to
+partition the cluster into subsets of nodes.
+
+<pre>
+Node 0 : /etc/ctdb/public_addresses
+10.1.1.1/24 eth0
+10.1.2.1/24 eth1
+</pre>
+
+<pre>
+Node 1 : /etc/ctdb/public_addresses
+10.1.2.1/24 eth1
+10.1.3.1/24 eth2
+</pre>
+
+<pre>
+Node 2 : /etc/ctdb/public_addresses
+10.1.3.2/24 eth2
+</pre>
+
+In this example we have three nodes but a total of 4 public addresses.<p>
+
+10.1.2.1 can be hosted by either node 0 or node 1 and will be available to clients as long as at least one of these nodes is available. Only if both nodes 0 and 1 fail will this public address become unavailable to clients.<p>
+
+Each of the other public addresses can be served by only a single node and will therefore only be available while that particular node is available.
+
<h2>Event scripts</h2>