| author | Joseph Fernandes <josferna@redhat.com> | 2014-06-30 08:07:36 +0530 |
|---|---|---|
| committer | Kaushal M <kaushal@redhat.com> | 2014-07-04 01:27:22 -0700 |
| commit | 9a50211cdb3d6decac140a31a035bd6e145f5f2f (patch) | |
| tree | 7abc63564e7a3892370715c476e42f6cbf573ef6 /tests | |
| parent | dc46d5e84f88c5cc869b78ba9db32ed4035b9121 (diff) | |
| download | glusterfs-9a50211cdb3d6decac140a31a035bd6e145f5f2f.tar.gz glusterfs-9a50211cdb3d6decac140a31a035bd6e145f5f2f.tar.xz glusterfs-9a50211cdb3d6decac140a31a035bd6e145f5f2f.zip | |
glusterd/snapshot: fixing glusterd quorum during snap operation
During a snapshot operation, glusterd quorum is checked only on the
transaction peers, which are selected at the beginning of the
operation, and not on the entire peer list, which is susceptible to
change by any concurrent peer-attach operation (a minimal sketch of
the idea follows the sign-off block below).
Change-Id: I089e3262cb45bc1ea4a3cef48408a9039d3fbdb9
BUG: 1114403
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Reviewed-on: http://review.gluster.org/8200
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Reviewed-by: Kaushal M <kaushal@redhat.com>
Tested-by: Kaushal M <kaushal@redhat.com>
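
The gist of the fix, as a minimal C sketch. All names here (`peer_t`, `txn_peers_t`, `txn_select_peers`, `txn_quorum_met`) are illustrative stand-ins, not the actual glusterd structures or APIs: the peer list is copied once when the snapshot transaction begins, and quorum is evaluated against that frozen copy, so a peer probe landing mid-transaction cannot change the outcome.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-ins -- the real glusterd structures
 * (glusterd_peerinfo_t etc.) and quorum helpers are more involved. */
#define MAX_PEERS 64

typedef struct {
    char id[8];
    bool connected;
} peer_t;

typedef struct {
    peer_t peers[MAX_PEERS]; /* frozen at transaction start */
    size_t count;
} txn_peers_t;

/* Called once, at the beginning of the snapshot transaction:
 * copy the live peer list so later probes cannot change it. */
static void txn_select_peers(txn_peers_t *txn, const peer_t *live, size_t n)
{
    txn->count = n > MAX_PEERS ? MAX_PEERS : n;
    memcpy(txn->peers, live, txn->count * sizeof(peer_t));
}

/* Quorum is decided against the frozen transaction peers only. */
static bool txn_quorum_met(const txn_peers_t *txn)
{
    size_t up = 0;
    for (size_t i = 0; i < txn->count; i++)
        if (txn->peers[i].connected)
            up++;
    return up * 2 > txn->count; /* strictly more than half must be up */
}

int main(void)
{
    peer_t live[MAX_PEERS] = { { "peer1", true }, { "peer2", true } };
    size_t live_count = 2;

    txn_peers_t txn;
    txn_select_peers(&txn, live, live_count); /* snapshot txn begins */

    /* A concurrent "peer probe" grows the live list mid-transaction... */
    live[live_count++] = (peer_t){ "peer3", false };

    /* ...but quorum still sees only the two transaction peers. */
    printf("quorum met: %s\n", txn_quorum_met(&txn) ? "yes" : "no");
    return 0;
}
```

The actual commit applies the same principle inside glusterd: the set of peers participating in the snapshot transaction is fixed up front, and the quorum check iterates that set instead of the global peer list.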
Diffstat (limited to 'tests')
-rwxr-xr-x | tests/bugs/bug-1112559.t | 58
1 file changed, 58 insertions, 0 deletions
diff --git a/tests/bugs/bug-1112559.t b/tests/bugs/bug-1112559.t
new file mode 100755
index 0000000000..2190609fa1
--- /dev/null
+++ b/tests/bugs/bug-1112559.t
@@ -0,0 +1,58 @@
+#!/bin/bash
+
+. $(dirname $0)/../include.rc
+. $(dirname $0)/../cluster.rc
+. $(dirname $0)/../volume.rc
+. $(dirname $0)/../snapshot.rc
+
+function check_peers {
+    $CLI_1 peer status | grep 'Peer in Cluster (Connected)' | wc -l
+}
+
+function check_snaps_status {
+    $CLI_1 snapshot status | grep 'Snap Name : ' | wc -l
+}
+
+function check_snaps_bricks_health {
+    $CLI_1 snapshot status | grep 'Brick Running : Yes' | wc -l
+}
+
+
+SNAP_COMMAND_TIMEOUT=20
+NUMBER_OF_BRICKS=2
+
+cleanup;
+TEST verify_lvm_version
+TEST launch_cluster 3
+TEST setup_lvm 3
+
+TEST $CLI_1 peer probe $H2
+EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count
+
+TEST $CLI_1 volume create $V0 $H1:$L1 $H2:$L2
+
+TEST $CLI_1 volume start $V0
+
+#Create snapshot and add a peer together
+$CLI_1 snapshot create ${V0}_snap1 ${V0} &
+PID_1=$!
+$CLI_1 peer probe $H3
+wait $PID_1
+
+#Snapshot should be created and in the snaplist
+TEST snapshot_exists 1 ${V0}_snap1
+
+#Not being paranoid! Just checking for the status of the snapshot
+#During the testing of the bug the snapshot would list but actually
+#not be created.Therefore check for health of the snapshot
+EXPECT_WITHIN $SNAP_COMMAND_TIMEOUT 1 check_snaps_status
+EXPECT_WITHIN $SNAP_COMMAND_TIMEOUT $NUMBER_OF_BRICKS check_snaps_bricks_health
+
+#check if the peer is added successfully
+EXPECT_WITHIN $PROBE_TIMEOUT 2 peer_count
+
+TEST $CLI_1 snapshot delete ${V0}_snap1
+
+cleanup;
+
+
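
The test provokes the original race deliberately: `snapshot create` is backgrounded, `peer probe $H3` is issued while the snapshot transaction is in flight, and the subsequent health checks verify the snapshot was really created rather than merely listed. Assuming a standard GlusterFS source checkout with the LVM prerequisites that `verify_lvm_version` and `setup_lvm` check for, such a `.t` file is typically run as root from the source root, e.g. with `prove -v tests/bugs/bug-1112559.t`.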