Diffstat (limited to 'ctdb/web')
-rw-r--r--  ctdb/web/bar1.jpg             bin   0 -> 2594 bytes
-rw-r--r--  ctdb/web/building.html         42
-rw-r--r--  ctdb/web/clamd.html            78
-rw-r--r--  ctdb/web/configuring.html     202
-rw-r--r--  ctdb/web/ctdblogo.png         bin   0 -> 10145 bytes
-rw-r--r--  ctdb/web/documentation.html    43
-rw-r--r--  ctdb/web/download.html         50
-rw-r--r--  ctdb/web/footer.html           39
-rw-r--r--  ctdb/web/ftp.html             102
-rw-r--r--  ctdb/web/header.html           44
-rw-r--r--  ctdb/web/index.html           141
-rw-r--r--  ctdb/web/iscsi.html           113
-rw-r--r--  ctdb/web/nfs.html              96
-rw-r--r--  ctdb/web/prerequisites.html    30
-rw-r--r--  ctdb/web/samba.html            97
-rw-r--r--  ctdb/web/testing.html         112
16 files changed, 1189 insertions(+), 0 deletions(-)
diff --git a/ctdb/web/bar1.jpg b/ctdb/web/bar1.jpg
new file mode 100644
index 00000000000..7c6acf3c7c7
--- /dev/null
+++ b/ctdb/web/bar1.jpg
Binary files differ
diff --git a/ctdb/web/building.html b/ctdb/web/building.html
new file mode 100644
index 00000000000..74750789429
--- /dev/null
+++ b/ctdb/web/building.html
@@ -0,0 +1,42 @@
+<!--#set var="TITLE" value="Building CTDB" -->
+<!--#include virtual="header.html" -->
+
+<H2 align="center">Building CTDB and Samba</h2>
+
+<h2>CTDB</h2>
+To build a copy of CTDB code from a git tree you should do this:
+<pre>
+ cd ctdb
+ ./autogen.sh
+ ./configure
+ make
+ make install
+</pre>
+
+To build a copy of CTDB code from a tarball you should do this:
+<pre>
+ tar xf ctdb-x.y.tar.gz
+ cd ctdb-x.y
+ ./configure
+ make
+ make install
+</pre>
+You need to install ctdb on all nodes of your cluster.
+
+
+<h2>Samba3</h2>
+
+To build a copy of Samba3 with clustering and ctdb support you should do this:
+<pre>
+ cd samba_3_0_ctdb/source
+ ./autogen.sh
+ ./configure --with-ctdb=/usr/src/ctdb --with-cluster-support --enable-pie=no
+ make proto
+ make
+</pre>
+
+Once compiled, you should install Samba on all cluster nodes.<br><br>
+
+The /usr/src/ctdb path should be replaced with the path to the ctdb sources that you downloaded above.
+
+<!--#include virtual="footer.html" -->
diff --git a/ctdb/web/clamd.html b/ctdb/web/clamd.html
new file mode 100644
index 00000000000..4edb4cf23d7
--- /dev/null
+++ b/ctdb/web/clamd.html
@@ -0,0 +1,78 @@
+<!--#set var="TITLE" value="CTDB and ClamAV Daemon" -->
+<!--#include virtual="header.html" -->
+
+<h1>Setting up ClamAV with CTDB</h1>
+
+<h2>Prereqs</h2>
+Configure CTDB as above and set it up to use public IP addresses.<br>
+Verify that the CTDB cluster works.
+
+<h2>Configuration</h2>
+
+Configure clamd on each node of the cluster.<br><br>
+For details on how to configure clamd, check its documentation.
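+
+As an illustration only (the socket path is just an example; it must match the CTDB_CLAMD_SOCKET setting shown below), the relevant clamd.conf line on every node could be:
+<pre>
+ # clamd.conf: listen on a local unix socket
+ LocalSocket /var/run/clamav/clamd.sock
+</pre>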
+
+<h2>/etc/sysconfig/ctdb</h2>
+
+Add the following lines to the /etc/sysconfig/ctdb configuration file.
+<pre>
+ CTDB_MANAGES_CLAMD=yes
+ CTDB_CLAMD_SOCKET="/path/to/clamd.sock"
+</pre>
+
+Disable clamd in chkconfig so that it does not start by default. Instead CTDB will start/stop clamd as required.
+<pre>
+ chkconfig clamd off
+</pre>
+
+<h2>Events script</h2>
+
+The CTDB distribution already comes with an events script for clamd in the file /etc/ctdb/events.d/31.clamd<br><br>
+There should not be any need to edit this file.
+You just need to make it executable, with a command like this:
+<pre>
+ chmod +x /etc/ctdb/events.d/31.clamd
+</pre>
+To check that ctdb is monitoring and managing clamd, look at the output of:
+<pre>
+ ctdb scriptstatus
+</pre>
+
+<h2>Restart your cluster</h2>
+Next time your cluster restarts, CTDB will start managing the clamd service.<br><br>
+If the cluster is already in production you may not want to restart the entire cluster since this would disrupt services.<br>
+
+Instead you can just disable/enable the nodes one by one. Once a node becomes enabled again it will start the clamd service.<br><br>
+
+Follow the procedure below for each node, one node at a time:
+
+<h3>1 Disable the node</h3>
+Use the ctdb command to disable the node:
+<pre>
+ ctdb -n NODE disable
+</pre>
+
+<h3>2 Wait until the cluster has recovered</h3>
+
+Use the ctdb tool to monitor until the cluster has recovered, i.e. Recovery mode is NORMAL. This should happen within seconds of when you disabled the node.
+<pre>
+ ctdb status
+</pre>
+
+<h3>3 Enable the node again</h3>
+
+Re-enable the node, which will start the newly configured clamd service.
+<pre>
+ ctdb -n NODE enable
+</pre>
+
+<h2>See also</h2>
+
+The CLAMAV section in the ctdbd manpage.
+
+<pre>
+ man ctdbd
+</pre>
+
+<!--#include virtual="footer.html" -->
+
diff --git a/ctdb/web/configuring.html b/ctdb/web/configuring.html
new file mode 100644
index 00000000000..b8272903c06
--- /dev/null
+++ b/ctdb/web/configuring.html
@@ -0,0 +1,202 @@
+<!--#set var="TITLE" value="Configuring CTDB" -->
+<!--#include virtual="header.html" -->
+
+<H2 align="center">Configuring CTDB</H2>
+
+<h2>Clustering Model</h2>
+
+The setup instructions on this page are modelled on setting up a cluster of N
+nodes that function in nearly all respects as a single multi-homed node.
+So the cluster will export N IP interfaces, each of which is equivalent
+(same shares) and which offers coherent CIFS file access across all
+nodes.<p>
+
+The clustering model utilizes IP takeover techniques to ensure that
+the full set of public IP addresses assigned to services on the
+cluster will always be available to the clients even when some nodes
+have failed and become unavailable.
+
+<h2>CTDB Cluster Configuration</h2>
+
+These are the primary configuration files for CTDB.<p>
+
+When CTDB is installed, it will install template versions of these
+files which you need to edit to suit your system.
+
+<h3>/etc/sysconfig/ctdb</h3>
+
+This file contains the startup parameters for ctdb.<p>
+
+When you installed ctdb, a template config file should have been
+installed in /etc/sysconfig/ctdb.<p>
+
+Edit this file, following the instructions in the template.<p>
+
+The most important options are:
+<ul>
+<li>CTDB_NODES
+<li>CTDB_RECOVERY_LOCK
+<li>CTDB_PUBLIC_ADDRESSES
+</ul>
+
+Please verify these parameters carefully.
+
+<h4>CTDB_RECOVERY_LOCK</h4>
+
+This parameter specifies the lock file that the CTDB daemons use to arbitrate
+which node is acting as a recovery master.<br>
+
+This file MUST be held on shared storage so that all CTDB daemons in the cluster will access/lock the same file.<br><br>
+
+You <strong>must</strong> specify this parameter.<br>
+There is no default for this parameter.
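+
+For example (the path below is only an illustration; use a file on your own cluster filesystem):
+<pre>
+ CTDB_RECOVERY_LOCK="/gpfs0/.ctdb/recovery.lock"
+</pre>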
+
+<h3>CTDB_NODES</h3>
+
+This file needs to be created and should contain a list of the private
+IP addresses that the CTDB daemons will use in your cluster. One IP
+address for each node in the cluster.<p>
+
+This should be a private non-routable subnet which is only used for
+internal cluster traffic. This file must be the same on all nodes in
+the cluster.<p>
+
+Make sure that these IP addresses are automatically brought up when the
+cluster node boots and that each node can ping each other node.<p>
+
+Example 4 node cluster:
+<pre>
+ CTDB_NODES=/etc/ctdb/nodes
+</pre>
+Content of /etc/ctdb/nodes:
+<pre>
+ 10.1.1.1
+ 10.1.1.2
+ 10.1.1.3
+ 10.1.1.4
+</pre>
+
+The default for this file is /etc/ctdb/nodes.
+
+
+<h3>CTDB_PUBLIC_ADDRESSES</h3>
+
+Each node in a CTDB cluster contains a list of public addresses which that
+particular node can host.<p>
+
+While running, the CTDB cluster will assign each public address that exists in the cluster to one node, which will then host that public address.<p>
+
+These are the addresses that the SMBD daemons and other services will
+bind to and which clients will use to connect to the cluster.<p>
+
+<h3>Example 4 node cluster:</h3>
+<pre>
+ CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
+</pre>
+Content of /etc/ctdb/public_addresses:
+<pre>
+ 192.168.1.1/24 eth0
+ 192.168.1.2/24 eth0
+ 192.168.2.1/24 eth1
+ 192.168.2.2/24 eth1
+</pre>
+
+These are the IP addresses that you should configure in DNS for the
+name of the clustered samba server and are the addresses that CIFS
+clients will connect to.<p>
+
+Configure them as one DNS A record (i.e. one name) with multiple IP addresses
+and let round-robin DNS distribute the clients across the nodes of the
+cluster.<p>
+
+The CTDB cluster utilizes IP takeover techniques to ensure that as long as at least one node in the cluster is available, all the public IP addresses will always be available to clients.<p>
+
+This means that if one physical node fails, the public addresses that
+node was serving will be taken over by a different node in the cluster. This
+provides a guarantee that all ip addresses exposed to clients will
+always be reachable by clients as long as at least one node still remains available in the cluster with the capability to host that public address (i.e. the public address exists in that node's public_addresses file).
+
+Do not assign these addresses to any of the interfaces on the
+host. CTDB will add and remove these addresses automatically at
+runtime.<p>
+
+This parameter is used when CTDB operates in IP takeover mode.<p>
+
+The usual location for this file is /etc/ctdb/public_addresses.<p><p>
+
+<h3>Example 2:</h3>
+By using different public_addresses files on different nodes it is possible to
+partition the cluster into subsets of nodes.
+
+<pre>
+Node 0 : /etc/ctdb/public_addresses
+10.1.1.1/24 eth0
+10.1.2.1/24 eth1
+</pre>
+
+<pre>
+Node 1 : /etc/ctdb/public_addresses
+10.1.2.1/24 eth1
+10.1.3.1/24 eth2
+</pre>
+
+<pre>
+Node 2 : /etc/ctdb/public_addresses
+10.1.3.2/24 eth2
+</pre>
+
+In this example we have three nodes but a total of 4 public addresses.<p>
+
+10.1.2.1 can be hosted by either node 0 or node 1 and will be available to clients as long as at least one of these nodes is available. Only if both nodes 0 and 1 fail will this public address become unavailable to clients.<p>
+
+Each of the other public addresses can only be served by one single node and will therefore only be available if that node is available.
+
+
+<h2>Event scripts</h2>
+
+CTDB comes with a number of application specific event scripts that
+are used to do service specific tasks when the cluster has been
+reconfigured. These scripts are stored in /etc/ctdb/events.d/<p>
+
+You do not need to modify these scripts if you just want to use
+clustered Samba or NFS but they serve as examples in case you want to
+add clustering support for other application servers we do not yet
+provide event scripts for.<p>
+
+Please see the service scripts installed by ctdb in
+/etc/ctdb/events.d for examples of how to configure other services to
+be aware of the HA features of CTDB.<p>
+
+Also see /etc/ctdb/events.d/README for additional documentation on how to
+create and manage event scripts.
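+
+As a very rough sketch only (the service name is hypothetical, and the real scripts shipped in /etc/ctdb/events.d use shared helper functions and handle more events; see the README for the authoritative details), a custom event script has this general shape:
+<pre>
+ #!/bin/sh
+ # /etc/ctdb/events.d/90.myservice -- sketch of a custom event script
+ # $1 is the name of the event being delivered by ctdbd
+ case "$1" in
+     startup)
+         service myservice start
+         ;;
+     shutdown)
+         service myservice stop
+         ;;
+     monitor)
+         # a non-zero exit code marks this node as unhealthy
+         service myservice status >/dev/null || exit 1
+         ;;
+     takeip|releaseip)
+         # a public address moved; restart so the service rebinds
+         service myservice restart
+         ;;
+ esac
+ exit 0
+</pre>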
+
+<h2>TCP port to use for CTDB</h2>
+
+CTDB defaults to using TCP port 4379 for its traffic.<p>
+
+Configuring a different port to use for CTDB traffic is done by adding
+a ctdb entry to the /etc/services file.<p>
+
+Example: to make CTDB use port 9999, add the following line to /etc/services
+<pre>
+ ctdb 9999/tcp
+</pre>
+
+Note: all nodes in the cluster MUST use the same port or else CTDB
+will not start correctly.
+
+<h2>Name resolution</h2>
+
+You need to set up some method for your Windows and NFS clients to find
+the nodes of the cluster, and automatically balance the load between
+the nodes.<p>
+
+We recommend that you use public IP addresses managed via
+CTDB_PUBLIC_INTERFACE/CTDB_PUBLIC_ADDRESSES and that you set up a
+round-robin DNS entry for your cluster, listing all the public IP
+addresses that CTDB will be managing as a single DNS A record.<p>
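+
+As an illustration only (the zone name is made up; the addresses are the example public addresses from above), such a round-robin entry in a BIND zone file could look like:
+<pre>
+ ; one name, all public addresses of the cluster
+ ctdb-cluster.example.com.   IN  A   192.168.1.1
+ ctdb-cluster.example.com.   IN  A   192.168.1.2
+ ctdb-cluster.example.com.   IN  A   192.168.2.1
+ ctdb-cluster.example.com.   IN  A   192.168.2.2
+</pre>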
+
+You may also wish to set up a static WINS server entry listing all of
+your cluster nodes' IP addresses.
+
+<!--#include virtual="footer.html" -->
diff --git a/ctdb/web/ctdblogo.png b/ctdb/web/ctdblogo.png
new file mode 100644
index 00000000000..68304a21062
--- /dev/null
+++ b/ctdb/web/ctdblogo.png
Binary files differ
diff --git a/ctdb/web/documentation.html b/ctdb/web/documentation.html
new file mode 100644
index 00000000000..86ec332a338
--- /dev/null
+++ b/ctdb/web/documentation.html
@@ -0,0 +1,43 @@
+<!--#set var="TITLE" value="CTDB Documentation" -->
+<!--#include virtual="header.html" -->
+
+<h1>CTDB Documentation</h1>
+
+The following documentation should get you started with CTDB.
+
+<ul>
+<li><a href="prerequisites.html">Prerequisites</a>
+<li><a href="download.html">Downloading CTDB</a>
+<li><a href="building.html">Building CTDB</a>
+<li><a href="configuring.html">Configuring CTDB</a>
+<li><a href="testing.html">Testing CTDB</a>
+<li><a href="samba.html">Setting up Samba with CTDB</a>
+<li><a href="ftp.html">Setting up FTP with CTDB</a>
+<li><a href="nfs.html">Setting up NFS with CTDB</a>
+<li><a href="iscsi.html">Setting up iSCSI with CTDB</a>
+<li><a href="clamd.html">Setting up CLAMD with CTDB</a>
+<li><a href="http://wiki.samba.org/index.php/CTDB_Setup">CTDB Wiki</a>
+</ul>
+
+Man pages:
+<ul>
+<li><a href="http://ctdb.samba.org/manpages/ctdb.1.html">ctdb (1)</a>
+<li><a href="http://ctdb.samba.org/manpages/ctdbd.1.html">ctdbd (1)</a>
+<li><a href="http://ctdb.samba.org/manpages/ctdbd_wrapper.1.html">ctdbd_wrapper (1)</a>
+<li><a href="http://ctdb.samba.org/manpages/ctdbd.conf.5.html">ctdbd.conf (5)</a>
+<li><a href="http://ctdb.samba.org/manpages/ctdb.7.html">ctdb (7)</a>
+<li><a href="http://ctdb.samba.org/manpages/ctdb-tunables.7.html">ctdb-tunables (7)</a>
+<li><a href="http://ctdb.samba.org/manpages/onnode.1.html">onnode (1)</a>
+<li><a href="http://ctdb.samba.org/manpages/ltdbtool.1.html">ltdbtool (1)</a>
+<li><a href="http://ctdb.samba.org/manpages/ping_pong.1.html">ping_pong (1)</a>
+</ul>
+
+Articles:
+<ul>
+<li><a href="http://samba.org/~obnox/presentations/sambaXP-2009/">Michael
+ Adam's clustered NAS articles</a>
+</ul>
+
+<!--#include virtual="footer.html" -->
diff --git a/ctdb/web/download.html b/ctdb/web/download.html
new file mode 100644
index 00000000000..dce75fe078a
--- /dev/null
+++ b/ctdb/web/download.html
@@ -0,0 +1,50 @@
+<!--#set var="TITLE" value="Downloading CTDB" -->
+<!--#include virtual="header.html" -->
+
+<H2 align="center">Getting the code</h2>
+
+You need two source trees: one is a copy of Samba3 and the other is the
+ctdb code itself.<p>
+
+Both source trees are stored in git repositories.<p>
+
+<h2>CTDB</h2>
+You can download ctdb source code via <a href="ftp://ftp.samba.org/pub/ctdb">ftp</a>
+and <a href="http://ftp.samba.org/pub/ctdb">http</a>. <br><br>
+
+You can also get the latest development version of ctdb using git:
+<pre>
+ git clone git://git.samba.org/ctdb.git ctdb
+</pre>
+
+To update this tree when improvements are made in the upstream code do this:
+<pre>
+ cd ctdb
+ git pull
+</pre>
+
+If you don't have git and can't easily install it, then you can
+instead use the following command to fetch ctdb or update it:
+<pre>
+ rsync -avz samba.org::ftp/unpacked/ctdb .
+</pre>
+
+
+<h2>Samba3 ctdb version</h2>
+<p>
+With Samba version 3.3 all cluster-relevant changes have been merged
+to the mainstream Samba code. Please refer to the <a
+href="http://www.samba.org/">Samba website</a> for the current release
+information.
+</p>
+
+<h2>Binary Packages</h2>
+
+Note that packages are so far only available for RHEL5. Other packages
+may come later. <p>
+
+See the <a href="http://ftp.samba.org/pub/ctdb/packages/">packages</a> directory for package
+downloads.
+
+
+<!--#include virtual="footer.html" -->
diff --git a/ctdb/web/footer.html b/ctdb/web/footer.html
new file mode 100644
index 00000000000..a9758e8bc3c
--- /dev/null
+++ b/ctdb/web/footer.html
@@ -0,0 +1,39 @@
+</td>
+</tr>
+
+ <TR ALIGN="center">
+ <TD><BR><a name="search"></a><img src="/bar1.jpg" WIDTH="493" HEIGHT="26" BORDER="0" alt="=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=">
+
+<!-- SiteSearch Google -->
+<form method="get" action="http://www.google.com/custom">
+<table border="0">
+<tr><td nowrap="nowrap" valign="top" align="left" height="32">
+<a href="http://www.google.com/"><img src="http://www.google.com/logos/Logo_25wht.gif" border="0" alt="Google" /></a>
+</td><td nowrap="nowrap">
+<input type="hidden" name="domains" value="samba.org" />
+<input type="text" name="q" size="31" maxlength="255" value="CTDB " />
+<input type="submit" name="sa" value="Search" />
+</td></tr><tr><td>&nbsp;</td>
+<td nowrap="nowrap">
+<table><tr><td>
+<input type="radio" name="sitesearch" value="" />
+<font size="-1" color="#000000">Search WWW</font>
+</td><td>
+<input type="radio" name="sitesearch" value="samba.org" checked="checked" />
+<font size="-1" color="#000000">Search samba.org</font>
+</td></tr></table>
+<input type="hidden" name="client" value="pub-1444957896811922" />
+<input type="hidden" name="forid" value="1" />
+<input type="hidden" name="ie" value="ISO-8859-1" />
+<input type="hidden" name="oe" value="ISO-8859-1" />
+<input type="hidden" name="cof"
+ value="GALT:#008000;GL:1;DIV:#336699;VLC:663399;AH:center;BGC:FFFFFF;LBGC:FFFFFF;ALC:0000FF;LC:0000FF;T:000000;GFNT:0000FF;GIMP:0000FF;LH:60;LW:470;L:http://samba.org/samba/images/samba_banner.gif;S:http://samba.org/;FORID:1;"
+ />
+<input type="hidden" name="hl" value="en" />
+</td></tr></table>
+</form>
+<!-- SiteSearch Google -->
+
+ </TD>
+ </TR>
+</TABLE>
diff --git a/ctdb/web/ftp.html b/ctdb/web/ftp.html
new file mode 100644
index 00000000000..82acd1d9094
--- /dev/null
+++ b/ctdb/web/ftp.html
@@ -0,0 +1,102 @@
+<!--#set var="TITLE" value="CTDB and ftp" -->
+<!--#include virtual="header.html" -->
+
+<h1>Setting up clustered FTP</h1>
+
+<h2>Prereqs</h2>
+Configure CTDB as above and set it up to use public IP addresses.<br>
+Verify that the CTDB cluster works.
+
+<h2>Configuration</h2>
+
+Setting up a vsftpd cluster is really easy.<br>
+Configure vsftpd on each node of the cluster.<br><br>
+Set up vsftpd to export directories from the shared cluster filesystem.
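+
+As a minimal sketch (the path is only an example and all other vsftpd options are left at your site defaults), the relevant lines of /etc/vsftpd/vsftpd.conf could look like:
+<pre>
+ # serve user sessions out of the shared cluster filesystem
+ local_enable=YES
+ write_enable=YES
+ local_root=/gpfs0/ftp
+</pre>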
+
+<h2>/etc/sysconfig/ctdb</h2>
+
+Add the following line to the /etc/sysconfig/ctdb configuration file.
+<pre>
+ CTDB_MANAGES_VSFTPD=yes
+</pre>
+
+Disable vsftpd in chkconfig so that it does not start by default. Instead CTDB will start/stop vsftpd as required.
+<pre>
+ chkconfig vsftpd off
+</pre>
+
+<h2>PAM configuration</h2>
+PAM must be configured to allow authentication of CIFS users so that the ftp
+daemon can authenticate the users logging in.
+
+Make sure the following line is present in /etc/pam.d/system-auth
+<pre>
+auth sufficient pam_winbind.so use_first_pass
+
+</pre>
+If this line is missing you must enable winbind authentication by running
+<pre>
+authconfig --enablewinbindauth --update
+authconfig --enablewinbind --update
+</pre>
+
+<h2>Default shell</h2>
+To log in to the ftp server, the user must have a shell configured in smb.conf.
+
+Add the following line to the globals section of /etc/samba/smb.conf
+<pre>
+ template shell = /bin/bash
+</pre>
+
+<h2>Home directory</h2>
+FTP users must have a home directory configured so they can log in.
+Configure samba to provide home directories for domain users. These home
+directories should be stored on shared storage so they are available from
+all nodes in the cluster.<br>
+
+
+A simple way to provide home directories is to add
+<pre>
+ template homedir = /&lt;shared storage&gt;/homedir/%D/%U
+</pre>
+to /etc/samba/smb.conf.<br>
+
+The home directory must exist or the user will not be able to log in via FTP.
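+
+For example (a sketch only: the domain EXAMPLE and the user fred are hypothetical, the path assumes the template homedir shown above with the shared storage mounted at /gpfs0, and winbind is assumed to provide the domain users to the system), such a directory could be pre-created with:
+<pre>
+ mkdir -p /gpfs0/homedir/EXAMPLE/fred
+ chown 'EXAMPLE\fred' /gpfs0/homedir/EXAMPLE/fred
+</pre>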
+
+
+<h2>Events script</h2>
+
+The CTDB distribution already comes with an events script for vsftpd in the file /etc/ctdb/events.d/40.vsftpd<br><br>
+There should not be any need to edit this file.
+
+
+<h2>Restart your cluster</h2>
+Next time your cluster restarts, CTDB will start managing the vsftpd service.<br><br>
+If the cluster is already in production you may not want to restart the entire cluster since this would disrupt services.<br>
+
+Instead you can just disable/enable the nodes one by one. Once a node becomes enabled again it will start the vsftpd service.<br><br>
+
+Follow the procedure below for each node, one node at a time:
+
+<h3>1 Disable the node</h3>
+Use the ctdb command to disable the node:
+<pre>
+ ctdb -n NODE disable
+</pre>
+
+<h3>2 Wait until the cluster has recovered</h3>
+
+Use the ctdb tool to monitor until the cluster has recovered, i.e. Recovery mode is NORMAL. This should happen within seconds of when you disabled the node.
+<pre>
+ ctdb status
+</pre>
+
+<h3>3 Enable the node again</h3>
+
+Re-enable the node, which will start the newly configured vsftpd service.
+<pre>
+ ctdb -n NODE enable
+</pre>
+
+<!--#include virtual="footer.html" -->
+
diff --git a/ctdb/web/header.html b/ctdb/web/header.html
new file mode 100644
index 00000000000..a356b08e6ca
--- /dev/null
+++ b/ctdb/web/header.html
@@ -0,0 +1,44 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
+<HTML>
+<HEAD>
+<TITLE><!--#echo var="TITLE" --></TITLE>
+<meta http-equiv="Content-Type" content="text/html;charset=utf-8" >
+</HEAD>
+
+<BODY BGCOLOR="#ffffff" TEXT="#000000" VLINK="#292555" LINK="#292555"
+ ALINK="#cc0033">
+<TABLE BORDER=0 WIDTH="75%" ALIGN="CENTER">
+ <tr VALIGN="middle">
+ <td ALIGN="left">
+ <ul>
+ <li><small><a href="/">home</a></small>
+ <li><small><a href="/documentation.html">documentation</a></small>
+ <li><small><a href="/configuring.html">configuring</a></small>
+ <li><small><a href="/building.html">building</a></small>
+ </ul>
+ </td>
+ <td align="center">
+ <a href="."><img src="/ctdblogo.png" border="0" alt="CTDB"></a>
+ </td>
+ <td align="left">
+ <ul>
+ <li><small><a href="/download.html">download</a></small>
+ <li><small><a href="/testing.html">testing</a></small>
+ <li><small><a href="http://wiki.samba.org/index.php/CTDB_Setup">wiki</a></small>
+ <li><small><a href="http://bugzilla.samba.org/">bug-tracking</a></small>
+ </ul>
+ </td>
+ </tr>
+
+ <TR ALIGN="center">
+ <TD COLSPAN="3">
+ <img src="/bar1.jpg" WIDTH="493" HEIGHT="26"
+ BORDER="0"
+ alt="=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=">
+ </TD>
+ </TR>
+</TABLE>
+
+<TABLE BORDER=0 WIDTH="60%" ALIGN="CENTER">
+ <tr VALIGN="middle">
+ <td ALIGN="left">
diff --git a/ctdb/web/index.html b/ctdb/web/index.html
new file mode 100644
index 00000000000..91f87e87f48
--- /dev/null
+++ b/ctdb/web/index.html
@@ -0,0 +1,141 @@
+<!--#set var="TITLE" value="CTDB" -->
+<!--#include virtual="header.html" -->
+
+<H2 align="center">Welcome to the CTDB web pages</H2>
+
+CTDB is a clustered implementation of the TDB database used by Samba and
+other projects to store temporary data. If an application is already
+using TDB for temporary data it is very easy to convert that
+application to be cluster aware and use CTDB instead.
+
+<p>CTDB provides the same types of functions as TDB but in a clustered
+ fashion, providing a TDB-style database that spans multiple physical
+ hosts in a cluster.
+
+<p>Features include:
+<ul>
+<li>CTDB provides a TDB that has consistent data and consistent locking across
+all nodes in a cluster.
+<li>CTDB is very fast.
+<li>In case of node failures, CTDB will automatically recover and
+ repair all TDB databases that it manages.
+<li>CTDB is the core component that provides <strong>pCIFS</strong>
+("parallel CIFS") with Samba3/4.
+<li>CTDB provides HA features such as node monitoring, node failover,
+ and IP takeover.
+<li>CTDB provides a reliable messaging transport to allow applications
+ linked with CTDB to communicate to other instances of the application
+ running on different nodes in the cluster.
+<li>CTDB has pluggable transport backends. Currently implemented backends are TCP
+ and Infiniband.
+<li>CTDB supports a system of application specific management scripts,
+ allowing applications that depend on network or filesystem resources
+ to be managed in a highly available manner on a cluster.
+</ul>
+
+<h2>Requirements</h2>
+
+CTDB relies on a clustered filesystem being available and shared on
+all nodes that participate in the CTDB cluster. This filesystem must
+be mounted and available on all nodes in the CTDB cluster.
+
+<p>On top of this cluster filesystem, CTDB then provides clustered HA
+features so that data from the clustered filesystem can be exported
+through multiple nodes in the CTDB cluster using various
+services. Currently included with CTDB are the necessary hooks for Samba, NFS
+ and ftp exports. Support for new service types can easily be added.
+
+<h2>TDB</h2>
+
+TDB is a very fast simple database that was originally developed for
+use in Samba. Today several other projects use TDB to store their data.
+
+<p>See the <a
+href="http://samba.org/ftp/unpacked/tdb/docs/README">TDB
+README file</a> for a description of how TDB is used.
+
+<h2>Documentation</h2>
+
+<a href="./documentation.html">CTDB documentation</a><br><br>
+
+Additional documentation on how to install and configure CTDB is available in the
+<a href="http://wiki.samba.org/index.php/CTDB_Setup">CTDB
+ Wiki</a>. Please read all of the documentation carefully.
+
+<h2>High Availability Features</h2>
+
+The CTDB nodes in a cluster designate one node as the recovery master
+through an election process. If the recovery master node fails, a
+new election is initiated so that the cluster is always guaranteed
+to have a recovery master. The recovery master will
+continuously monitor the cluster to verify that all nodes contain a
+consistent configuration and view of the cluster and will initiate a
+recovery process when required.
+
+<p>During the recovery phase, the recovery master will automatically
+rebuild/recover all clustered TDB databases to ensure that the
+databases are consistent. Recovery typically takes between 1 and 3
+seconds. During the recovery period the databases are 'frozen', and
+all database IO operations by ctdb clients are suspended.
+
+<h3>Is CTDB a HA solution?</h3>
+
+Yes and no.<p>
+
+CTDB alone is not a HA solution, but when you combine CTDB with a clustered
+filesystem it becomes one.<p>
+
+CTDB is primarily developed around the concept of having a shared
+cluster filesystem across all the nodes in the cluster to provide the
+features required for building a NAS cluster.<p>
+
+Thus CTDB relies on an external component (the cluster filesystem) to
+provide the mechanisms for avoiding split-brain and other core
+clustering tasks.<p>
+
+However, if you do have a clustered filesystem on all the nodes,
+CTDB provides a very easy to install and manage
+solution for your clustering HA needs.
+
+<h3>IP Takeover</h3>
+
+When a node in a cluster fails, CTDB will arrange that a different
+node takes over the IP address of the failed node to ensure that the
+IP addresses for the services provided are always available.
+
+<p>To speed up the process of IP takeover and to let clients attached to
+a failed node recover as fast as possible, CTDB will automatically
+generate gratuitous ARP packets to inform all hosts of the changed MAC
+address for that IP. CTDB will also send "tickle ACK" packets to all
+attached clients to trigger the clients to immediately recognize that
+the TCP connection needs to be re-established and to shortcut any TCP
+retransmission timeouts that may be active in the clients.
+
+<h2>Discussion and bug reports</h2>
+
+For discussions please use
+the <a href="https://lists.samba.org/mailman/listinfo/samba-technical">samba-technical</a>
+mailing list. To submit a bug report, please use
+the <a href="http://bugzilla.samba.org/">Samba bugzilla</a> bug
+tracking system.
+
+<p>We would be very interested in hearing from and working with other
+projects that want to make their services cluster aware using CTDB.
+
+<p>CTDB discussions also happen on the #ctdb IRC channel on freenode.net
+
+
+<hr>
+<h2>Developers</h2>
+<ul>
+<li><a href="http://samba.org/~tridge/">Andrew Tridgell</a></li>
+<li><a href="http://samba.org/~sahlberg/">Ronnie Sahlberg</a></li>
+<li><a href="http://samba.org/~obnox/">Michael Adam</a></li>
+<li>Peter Somogyi</li>
+<li><a href="http://sernet.de/Samba/">Volker Lendecke</a></li>
+<li>Stefan Metzmacher</li>
+<li><a href="http://meltin.net/people/martin/">Martin Schwenke</a></li>
+<li>Amitay Isaacs</li>
+</ul>
+
+<!--#include virtual="footer.html" -->
diff --git a/ctdb/web/iscsi.html b/ctdb/web/iscsi.html
new file mode 100644
index 00000000000..1385e18e60c
--- /dev/null
+++ b/ctdb/web/iscsi.html
@@ -0,0 +1,113 @@
+<!--#set var="TITLE" value="CTDB and iSCSI" -->
+<!--#include virtual="header.html" -->
+
+<h1>Setting up HA iSCSI with CTDB</h1>
+
+<p>
+You can use CTDB to create a HA iSCSI Target.
+</p>
+
+<p>
+Since the iSCSI target is neither clustered nor integrated with CTDB in
+the same sense that Samba is, this
+implementation will only create an HA solution for iSCSI where each public address is assigned its own iSCSI target name and the LUNs that are created are only accessible through one specific target (i.e. one public address at a time).
+
+</p>
+
+<p>
+Note: this feature ONLY works when public addresses are used. It is not supported, nor does it work, if you use the LVS feature to present the entire cluster as one single IP address.
+
+</p>
+
+<h2>Prereqs</h2>
+Configure CTDB as above and set it up to use public IP addresses.<br>
+Verify that the CTDB cluster works.
+
+<h2>Install the iSCSI target software on all nodes</h2>
+On RHEL5 this package is called "scsi-target-utils" and it needs to be installed
+on all nodes in the cluster. The easiest way to install this package is by using:
+
+<pre>
+onnode all yum install scsi-target-utils -y
+</pre>
+
+Make sure that the service is not started automatically at boot; we want CTDB to start/stop this service:
+<pre>
+onnode all chkconfig tgtd off
+</pre>
+
+<h2>/etc/sysconfig/iscsi</h2>
+
+Create this file and add the following line to it:
+
+<pre>
+ CTDB_START_ISCSI_SCRIPTS=/gpfs/iscsi/
+</pre>
+
+<p>
+CTDB_START_ISCSI_SCRIPTS=&lt;directory on shared storage&gt;<br>
+This is a directory on shared storage where the scripts to start and configure the iscsi service are held. There is one script for each public address, named &lt;public address&gt;.sh.
+</p>
+
+
+<h2>/etc/sysconfig/ctdb</h2>
+
+Add the following line to /etc/sysconfig/ctdb :
+
+<pre>
+ CTDB_MANAGES_ISCSI=yes
+</pre>
+
+<p>
+CTDB_MANAGES_ISCSI=yes just tells the CTDB event script for iSCSI that CTDB should start and stop the iSCSI target service as required.
+</p>
+
+
+<h2>Example: create a LUN that will be hosted on public ip address 10.1.1.1</h2>
+<p>
+Before you can export a LUN you must create it as a file in the shared filesystem. When doing so, make sure you create it as a real file and not a sparse file!<br />
+While it is much quicker to create a sparse file when you want a file with a size of 100GB, SCSI has no concept of "disk full", so if you run out of backing space for the sparse file the scsi initiators will be "surprised" and "unhappy".
+</p>
+<pre>
+dd if=/dev/zero of=/gpfs/iscsi/10.1.1.1.lun.1 bs=1024 count=102400
+</pre>
+<p>
+This creates a 100 MByte file to export as an iSCSI LUN.
+</p>
+
+<h2>Example: 10.1.1.1.sh</h2>
+<p>
+This example shell script configures the iscsi target that is hosted on the public address 10.1.1.1.
+</p>
+<pre>
+#!/bin/sh
+# script to set up the iscsi target and luns hosted by public address
+# 10.1.1.1
+
+
+#create a target
+tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2007-11.com.ctdb:iscsi.target.10.1.1.1
+
+#attach a lun
+tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /gpfs/iscsi/10.1.1.1.lun.1
+
+# no security, allow everyone to access this lun
+tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
+</pre>
+
+
+<p>
+iqn.2007-11.com.ctdb:iscsi.target.10.1.1.1 in the example above is the iscsi name that is assigned to the target. Don't use this name; pick your own name!
+</p>
+
+<p>
+See the documentation for the tgtadm command for more information on how to set up your environment.
+</p>
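+
+<p>
+Once the script has run on the node that hosts the public address (see the recovery step below), you can, for example, verify that the target and LUN were created by listing them:
+</p>
+<pre>
+tgtadm --lld iscsi --op show --mode target
+</pre>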
+
+<h2>Perform a ctdb recovery to start the iscsi service</h2>
+<pre>
+ctdb recover
+</pre>
+
+<!--#include virtual="footer.html" -->
+
diff --git a/ctdb/web/nfs.html b/ctdb/web/nfs.html
new file mode 100644
index 00000000000..a4a6fb5e042
--- /dev/null
+++ b/ctdb/web/nfs.html
@@ -0,0 +1,96 @@
+<!--#set var="TITLE" value="CTDB and NFS" -->
+<!--#include virtual="header.html" -->
+
+<h1>Setting up clustered NFS</h1>
+
+NFS v2/v3 has been successfully tested with exporting the same
+data/network share from multiple nodes in a CTDB cluster with correct
+file locking behaviour and lock recovery.<br><br>
+
+Also see <a href="http://wiki.samba.org/index.php/CTDB_Setup#Setting_up_CTDB_for_clustered_NFS">Configuring
+NFS for CTDB clustering</a> at samba.org for additional information.
+
+<h2>Prereqs</h2>
+Configure CTDB as above and set it up to use public IP addresses.<br>
+Verify that the CTDB cluster works.
+
+<h2>/etc/exports</h2>
+
+Export the same directory from all nodes.<br>
+Make sure to specify the fsid export option so that all nodes will present the same fsid to clients.<br>
+
+Clients can get "upset" if the fsid on a mount suddenly changes.<br>
+Example /etc/exports :
+<pre>
+ /gpfs0/data *(rw,fsid=1235)
+</pre>
+
+<h2>/etc/sysconfig/nfs</h2>
+
+This file must be edited so that statd keeps its state directory on
+shared storage instead of in a local directory.<br><br>
+
+We must also make statd use a fixed port to listen on that is the same for
+all nodes in the cluster.<br>
+
+If we don't specify a fixed port, the statd port will change during failover
+which causes problems on some clients.<br>
+(some clients are very slow to realize when the port has changed)<br><br>
+
+This file should look something like :
+<pre>
+ NFS_HOSTNAME=ctdb
+ STATD_PORT=595
+ STATD_OUTGOING_PORT=596
+ MOUNTD_PORT=597
+ RQUOTAD_PORT=598
+ LOCKD_TCPPORT=599
+ LOCKD_UDPPORT=599
+ STATD_HOSTNAME="$NFS_HOSTNAME -H /etc/ctdb/statd-callout -p 97"
+ RPCNFSDARGS="-N 4"
+
+</pre>
+
+You need to make sure that the lock manager runs on the same port on all nodes in the cluster since some clients will have "issues" and take very long to recover if the port suddenly changes.<br>
+599 above is only an example. You can run the lock manager on any available port as long as you use the same port on all nodes.<br><br>
+
+NFS_HOSTNAME is the DNS name for the ctdb cluster and is used when clients mount nfs shares. This name must be in DNS and resolve to the public IP addresses of the cluster.<br>
+Always use the same name here as you use for the samba hostname.
+
+RPCNFSDARGS is used to disable support for NFSv4, which is not yet supported by CTDB.
+
+<h2>/etc/sysconfig/ctdb</h2>
+Add the following line to /etc/sysconfig/ctdb :
+
+<pre>
+ CTDB_MANAGES_NFS=yes
+</pre>
+The CTDB_MANAGES_NFS line tells the event scripts that CTDB is to manage startup and shutdown of the NFS and NFSLOCK services.<br>
+
+With this set to yes, CTDB will start/stop/restart these services as required.<br><br>
+
+
+<h2>chkconfig</h2>
+
+Since CTDB will manage and start/stop/restart the nfs and the nfslock services, you must disable them using chkconfig.
+<pre>
+ chkconfig nfs off
+ chkconfig nfslock off
+</pre>
+
+
+<h2>Event scripts</h2>
+
+CTDB clustering for NFS relies on two event scripts /etc/ctdb/events.d/60.nfs and /etc/ctdb/events.d/61.nfstickle.<br>
+
+These two scripts are provided by the RPM package and there should not be any need to change them.
+
+<h2><strong>IMPORTANT</strong></h2>
+
+Never ever mount the same nfs share on a client from two different nodes in the cluster at the same time!<br><br>
+
+The client-side caching in NFS is very fragile and relies on the assumption that an object is only ever accessed through one single path at a time.
+
+
+<!--#include virtual="footer.html" -->
+
diff --git a/ctdb/web/prerequisites.html b/ctdb/web/prerequisites.html
new file mode 100644
index 00000000000..5a563009411
--- /dev/null
+++ b/ctdb/web/prerequisites.html
@@ -0,0 +1,30 @@
+<!--#set var="TITLE" value="CTDB prerequisites" -->
+<!--#include virtual="header.html" -->
+
+<h1>Prerequisites</h1>
+
+Before you can start using CTDB you must first install and configure a
+bunch of linux boxes.<br><br>
+
+After that you need to install and configure a cluster filesystem and
+mount that cluster filesystem on all the linux boxes that will form
+your cluster.<br><br>
+
+Also, ensure that the cluster filesystem supports correct
+POSIX locking semantics. A simple way to test this is to run the <a
+href="https://wiki.samba.org/index.php/Ping_pong">ping_pong</a> utility
+bundled with CTDB.<br><br>
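+
+For example, on a 3-node cluster you could run the following on all nodes at the same time against one test file on the cluster filesystem (the path is only an example; the lock count is usually the number of nodes plus one). See the ping_pong page linked above for how to interpret the resulting locking rates.
+<pre>
+ ping_pong /gpfs0/ping_pong.dat 4
+</pre>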
+
+<h1>Cluster filesystems</h1>
+We have primarily used the GPFS filesystem for our testing, but CTDB
+should work with almost any other cluster filesystem as well, as long
+as it provides correct file locking.<br><br>
+
+Please let us know your experiences in using other cluster filesystems.
+
+
+<!--#include virtual="footer.html" -->
+
diff --git a/ctdb/web/samba.html b/ctdb/web/samba.html
new file mode 100644
index 00000000000..fb17d0f3a3b
--- /dev/null
+++ b/ctdb/web/samba.html
@@ -0,0 +1,97 @@
+<!--#set var="TITLE" value="CTDB and Samba" -->
+<!--#include virtual="header.html" -->
+
+<h1>Setting up clustered samba</h1>
+
+It is assumed you have already installed the ctdb version of samba and also installed, configured and tested CTDB.
+
+<h2>Create a user account</h2>
+
+First you need to initialise the Samba password database so that there is a user that can authenticate to the samba service.<br>
+Do this by running:
+<pre>
+ smbpasswd -a root
+</pre>
+
+Samba with clustering must use the tdbsam or ldap SAM passdb backends (it must not use the default smbpasswd backend), or must be configured to be a member of a domain.<br>
+The rest of the configuration of Samba is exactly as it is done on a normal system.<br><br>
+See the docs on http://samba.org/ for details.
+
+<h2>Critical smb.conf parameters</h2>
+
+A clustered Samba install must set some specific configuration parameters:
+<pre>
+ clustering = yes
+ idmap backend = tdb2
+</pre>
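+
+Putting this together, a minimal clustered smb.conf could look something like the sketch below (the netbios name and share path are only placeholders for your own setup):
+<pre>
+ [global]
+     clustering = yes
+     idmap backend = tdb2
+     passdb backend = tdbsam
+     netbios name = ctdb
+     security = user
+
+ [data]
+     path = /gpfs0/data
+     writeable = yes
+</pre>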
+
+<h2>Using smbcontrol</h2>
+
+You can check for connectivity to the smbd daemons on each node using smbcontrol:
+<pre>
+ smbcontrol smbd ping
+</pre>
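+
+To run the same check on every node in one go you can, for example, combine it with onnode:
+<pre>
+ onnode all smbcontrol smbd ping
+</pre>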
+
+<h2>Using Samba4 smbtorture</h2>
+
+The Samba4 version of smbtorture has several tests that can be used to
+benchmark a CIFS cluster. You can download Samba 4 from the Samba website.
+
+The particular tests that are helpful for cluster benchmarking are the RAW-BENCH-OPEN, RAW-BENCH-LOCK and BENCH-NBENCH tests.<br>
+These tests take an unclist that allows you to spread the workload out over more than one node. For example:
+
+<pre>
+ smbtorture //localhost/data -Uuser%password RAW-BENCH-LOCK --unclist=unclist.txt --num-progs=32 -t60
+</pre>
+
+The file unclist.txt should contain a list of server names in your cluster prefixed by //. For example:
+<pre>
+ //192.168.1.1
+ //192.168.1.2
+ //192.168.2.1
+ //192.168.2.2
+</pre>
+
+For NBENCH testing you need a client.txt file.<br>
+A suitable file can be found in the dbench distribution at http://samba.org/ftp/tridge/dbench/
+
+
+<h3>CTDB_MANAGES_SAMBA</h3>
+This is a parameter in /etc/sysconfig/ctdb<br><br>
+When this parameter is set to "yes" CTDB will start/stop/restart the local samba daemon as the cluster configuration changes.<br><br>
+When this parameter is set you should also make sure that samba is NOT started by default by the linux system when it boots, e.g.
+<pre>
+ chkconfig smb off
+</pre>
+on a Red Hat system, and
+<pre>
+ chkconfig smb off
+ chkconfig nmb off
+</pre>
+on a SuSE system.
+
+Example:
+<pre>
+ CTDB_MANAGES_SAMBA="yes"
+</pre>
+
+It is strongly recommended that you set this parameter to "yes" if you intend to use clustered samba.
+
+<h3>CTDB_MANAGES_WINBIND</h3>
+This is a parameter in /etc/sysconfig/ctdb<br><br>
+When this parameter is set to "yes" CTDB will start/stop/restart the local winbind daemon as the cluster configuration changes.<br><br>
+When this parameter is set you should also make sure that winbind is NOT started by default by the linux system when it boots:
+<pre>
+ chkconfig winbind off
+</pre>
+
+Example:
+<pre>
+ CTDB_MANAGES_WINBIND="yes"
+</pre>
+
+It is strongly recommended that you set this parameter to "yes" if you
+intend to use clustered samba in DOMAIN or ADS security mode.
+
+<!--#include virtual="footer.html" -->
+
diff --git a/ctdb/web/testing.html b/ctdb/web/testing.html
new file mode 100644
index 00000000000..d0d39a35f8d
--- /dev/null
+++ b/ctdb/web/testing.html
@@ -0,0 +1,112 @@
+<!--#set var="TITLE" value="CTDB Testing" -->
+<!--#include virtual="header.html" -->
+
+<H2 align="center">Starting and testing CTDB</h2>
+
+The CTDB log is in /var/log/log.ctdb so look in this file if something
+did not start correctly.<p>
+
+You can ensure that ctdb is running on all nodes using
+<pre>
+ onnode all service ctdb start
+</pre>
+Verify that the CTDB daemon started properly. There should normally be at least 2 processes started for CTDB, one for the main daemon and one for the recovery daemon.
+<pre>
+ onnode all pidof ctdbd
+</pre>
+
+Once all CTDB nodes have started, verify that they are correctly
+talking to each other.<p>
+
+There should be one TCP connection from the private ip address on each
+node to TCP port 4379 on each of the other nodes in the cluster.
+<pre>
+ onnode all netstat -tn | grep 4379
+</pre>
+
+
+<h2>Automatically restarting CTDB</h2>
+
+If you wish to cope with software faults in ctdb, or want ctdb to
+automatically restart when an administrator kills it, then you may
+wish to add a cron entry for root like this:
+
+<pre>
+ * * * * * /etc/init.d/ctdb cron > /dev/null 2>&1
+</pre>
+
+
+<h2>Testing CTDB</h2>
+
+Once your cluster is up and running, you may wish to know how to test that it is functioning correctly. The following tests may help with that.
+
+<h3>The ctdb tool</h3>
+
+The ctdb package comes with a utility called ctdb that can be used to
+view the behaviour of the ctdb cluster.<p>
+
+If you run it with no options it will provide some terse usage information. The most commonly used commands are:
+<pre>
+ ctdb status
+ ctdb ip
+ ctdb ping
+</pre>
+
+<h3>ctdb status</h3>
+
+The status command provides basic information about the cluster and the status of the nodes. When you run it you will get output like:
+
+<pre>
+<strong>Number of nodes:4
+vnn:0 10.1.1.1 OK (THIS NODE)
+vnn:1 10.1.1.2 OK
+vnn:2 10.1.1.3 OK
+vnn:3 10.1.1.4 OK</strong>
+Generation:1362079228
+Size:4
+hash:0 lmaster:0
+hash:1 lmaster:1
+hash:2 lmaster:2
+hash:3 lmaster:3
+<strong>Recovery mode:NORMAL (0)</strong>
+Recovery master:0
+</pre>
+
+The important parts are in bold. This tells us that all 4 nodes are in
+a healthy state.<p>
+
+It also tells us that recovery mode is normal, which means that the
+cluster has finished a recovery and is running in a normal fully
+operational state.<p>
+
+Recovery state will briefly change to "RECOVERY" when there has been a
+node failure or something is wrong with the cluster.<p>
+
+If the cluster remains in RECOVERY state for very long (many seconds)
+there might be something wrong with the configuration. See
+/var/log/log.ctdb.
+
+<h3>ctdb ip</h3>
+
+This command prints the current status of the public IP addresses and which physical node is currently serving each IP.
+
+<pre>
+Number of nodes:4
+192.168.1.1 0
+192.168.1.2 1
+192.168.2.1 2
+192.168.2.2 3
+</pre>
+
+<h3>ctdb ping</h3>
+This command tries to "ping" each of the CTDB daemons in the cluster.
+<pre>
+ ctdb ping -n all
+
+ response from 0 time=0.000050 sec (13 clients)
+ response from 1 time=0.000154 sec (27 clients)
+ response from 2 time=0.000114 sec (17 clients)
+ response from 3 time=0.000115 sec (59 clients)
+</pre>
+
+<!--#include virtual="footer.html" -->