author     Sanju Rakonde <srakonde@redhat.com>       2019-10-22 15:06:29 +0530
committer  Atin Mukherjee <amukherj@redhat.com>      2019-11-12 06:17:06 +0000
commit     50b6806bb2697246bdc1b9ac5ef19af61584e010
tree       901a9aea32744edcdd31cf27eca3d14bb6ead8d3 /tests
parent     5304eaa662b263791baf0e5a9bd616446a3919ef
cli: display detailed rebalance info
Problem: When one of the nodes in the cluster is down,
rebalance status does not display detailed information.

Cause: In glusterd_volume_rebalance_use_rsp_dict()
we aggregate the rsp from all the nodes into a
dictionary and send it to the cli for printing. While
assigning an index to the keys we consider all the
peers instead of only the peers which are up. Because
of this the index may never be 1, so while parsing
the rsp the cli is unable to find the status-1 key in
the dictionary and returns without printing any
information.
Solution: The simplest fix, without much code change,
is to continue looking for the other keys when the
status-1 key is not found.
fixes: bz#1764119
Change-Id: I0062839933c9706119eb85416256eade97e976dc
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
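The change behind this fix is easiest to see in a small, self-contained sketch. The snippet below is not the actual glusterfs code: it models the aggregated response from glusterd_volume_rebalance_use_rsp_dict() as a flat list of status-<i> key/value pairs instead of the real dict_t, and the rsp_dict/lookup names are illustrative only. It shows how continuing past a missing index, rather than giving up as soon as status-1 is absent, still lets the entries for the nodes that are up be printed.

#include <stdio.h>
#include <string.h>

struct kv {
        const char *key;
        const char *value;
};

/* Example aggregate where the peer that would have been index 1 is down,
 * so the stored keys start at status-2 (illustrative data only). */
static const struct kv rsp_dict[] = {
        { "status-2", "completed" },
        { "status-3", "completed" },
};
static const int rsp_count = sizeof (rsp_dict) / sizeof (rsp_dict[0]);

static const char *
lookup (const char *key)
{
        for (int i = 0; i < rsp_count; i++)
                if (strcmp (rsp_dict[i].key, key) == 0)
                        return rsp_dict[i].value;
        return NULL;
}

int
main (void)
{
        char key[64];
        int  peer_count = 3;  /* total peers known to the cluster */

        for (int i = 1; i <= peer_count; i++) {
                snprintf (key, sizeof (key), "status-%d", i);
                const char *status = lookup (key);
                if (!status)
                        continue; /* the old behaviour effectively gave up
                                     when status-1 was missing; skipping the
                                     hole lets the remaining nodes print */
                printf ("node index %d: %s\n", i, status);
        }
        return 0;
}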
Diffstat (limited to 'tests')
-rw-r--r--   tests/bugs/glusterd/rebalance-in-cluster.t | 9
1 file changed, 9 insertions, 0 deletions
diff --git a/tests/bugs/glusterd/rebalance-in-cluster.t b/tests/bugs/glusterd/rebalance-in-cluster.t
index 9565faef01..469ec6cd48 100644
--- a/tests/bugs/glusterd/rebalance-in-cluster.t
+++ b/tests/bugs/glusterd/rebalance-in-cluster.t
@@ -4,6 +4,10 @@
 . $(dirname $0)/../../cluster.rc
 . $(dirname $0)/../../volume.rc
 
+function rebalance_status_field_1 {
+        $CLI_1 volume rebalance $1 status | awk '{print $7}' | sed -n 3p
+}
+
 cleanup;
 TEST launch_cluster 2;
 TEST $CLI_1 peer probe $H2;
@@ -29,6 +33,11 @@ TEST $CLI_1 volume add-brick $V0 $H1:$B1/${V0}1 $H2:$B2/${V0}1
 TEST $CLI_1 volume rebalance $V0 start
 EXPECT_WITHIN $REBALANCE_TIMEOUT "completed" cluster_rebalance_status_field 1 $V0
 
+#bug - 1764119 - rebalance status should display detailed info when any of the node is dowm
+TEST kill_glusterd 2
+EXPECT_WITHIN $REBALANCE_TIMEOUT "completed" rebalance_status_field_1 $V0
+
+TEST start_glusterd 2
 #bug-1245142
 $CLI_1 volume rebalance $V0 start &
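A note on the added helper: rebalance_status_field_1 always queries through $CLI_1 and, via awk '{print $7}' | sed -n 3p, keeps only the seventh whitespace-separated field of the third line of the status output. Assuming the usual rebalance status table layout (a header line, a separator line, then one row per node), that field is the status column of the first node's row, so the test can wait for it to read "completed" even while glusterd on the second node is killed.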