author     Gaurav Kumar Garg <ggarg@redhat.com>  2015-07-02 18:23:51 +0530
committer  Atin Mukherjee <amukherj@redhat.com>  2015-08-24 22:18:55 -0700
commit     8e0bf30dc40fed45078c702dec750b5e8bbf5734 (patch)
tree       9ea9881af268472cb62bc548a1fb07108fde2b00 /tests
parent     d5e03b7f02f68b3a9aaccf586e1f6ed901224ba7 (diff)
download   glusterfs-8e0bf30dc40fed45078c702dec750b5e8bbf5734.tar.gz
           glusterfs-8e0bf30dc40fed45078c702dec750b5e8bbf5734.tar.xz
           glusterfs-8e0bf30dc40fed45078c702dec750b5e8bbf5734.zip
glusterd: stop all the daemons services on peer detach
Currently glusterd does not stop all the daemon services on peer detach.
With this fix, peer detach performs its cleanup properly and stops all
the daemons that were running on the node before the peer detach.
Change-Id: Ifed403ed09187e84f2a60bf63135156ad1f15775
BUG: 1255386
Signed-off-by: Gaurav Kumar Garg <ggarg@redhat.com>
Reviewed-on: http://review.gluster.org/11509
Tested-by: NetBSD Build System <jenkins@build.gluster.org>
Tested-by: Gluster Build System <jenkins@build.gluster.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Diffstat (limited to 'tests')
-rw-r--r--  tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t | 41
-rw-r--r--  tests/volume.rc | 16
2 files changed, 49 insertions, 8 deletions
diff --git a/tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t b/tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t
new file mode 100644
index 0000000000..9ff1758f9c
--- /dev/null
+++ b/tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t
@@ -0,0 +1,41 @@
+#!/bin/bash
+
+## Test case for stopping all running daemon services on peer detach.
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../volume.rc
+. $(dirname $0)/../../cluster.rc
+
+cleanup;
+
+
+## Start a 2-node virtual cluster
+TEST launch_cluster 2;
+
+## Peer probe server 2 from the server 1 cli
+TEST $CLI_1 peer probe $H2;
+
+EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
+
+
+## Create and start a volume
+TEST $CLI_1 volume create $V0 $H1:$B1/${V0}0 $H1:$B1/${V0}1
+TEST $CLI_1 volume start $V0
+
+## To do: Add test cases for the quota and snapshot daemons. Currently the
+## quota daemon does not work in the cluster framework, and the snapd
+## daemon starts on only one node in the cluster framework. Add the test
+## cases once patch http://review.gluster.org/#/c/11666/ is merged.
+
+## With 2 nodes, the "nfs" daemon should run on both nodes.
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "2" get_nfs_count
+
+## Detach the 2nd node from the cluster.
+TEST $CLI_1 peer detach $H2;
+
+
+## After detaching the 2nd node, only 1 nfs and quota daemon will be running.
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" get_nfs_count
+
+cleanup;
diff --git a/tests/volume.rc b/tests/volume.rc
index e397f093a1..a100bde55a 100644
--- a/tests/volume.rc
+++ b/tests/volume.rc
@@ -547,6 +547,14 @@ function get_quotad_count {
         ps auxww | grep glusterfs | grep quotad.pid | grep -v grep | wc -l
 }
 
+function get_nfs_count {
+        ps auxww | grep glusterfs | grep nfs.pid | grep -v grep | wc -l
+}
+
+function get_snapd_count {
+        ps auxww | grep glusterfs | grep snapd.pid | grep -v grep | wc -l
+}
+
 function drop_cache() {
         case $OSTYPE in
         Linux)
@@ -601,12 +609,4 @@ function quota_hl_exceeded()
 }
 
-function get_nfs_count {
-        ps auxww | grep glusterfs | grep nfs.pid | grep -v grep | wc -l
-}
-
-function get_snapd_count {
-        ps auxww | grep glusterfs | grep snapd.pid | grep -v grep | wc -l
-}
-
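The `get_nfs_count` and `get_snapd_count` helpers this patch moves up in `tests/volume.rc` both follow the same pattern: count running `glusterfs` processes by the pid-file name on their command line, filtering out the `grep` process itself. As a sketch, the pattern could be parameterized into a single generic helper (`get_daemon_count` is a hypothetical name, not part of this patch):

```shell
#!/bin/bash

# Hypothetical generic version of the get_*_count helpers in tests/volume.rc
# (not part of this patch). Counts glusterfs processes whose command line
# references the given daemon's pid file; "grep -v grep" excludes the grep
# process itself from the count.
function get_daemon_count {
        local daemon=$1
        ps auxww | grep glusterfs | grep "${daemon}.pid" | grep -v grep | wc -l
}

# Usage, mirroring get_nfs_count / get_snapd_count:
get_daemon_count nfs
get_daemon_count snapd
```

On a node with no glusterfs processes running, both calls print 0; the test framework compares such counts against the expected number of nodes still hosting each daemon.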