<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/cli/src/cli-xml-output.c, branch v3.5.6</title>
<subtitle>GlusterFS is a distributed file system capable of scaling to several petabytes. It aggregates various storage bricks over InfiniBand RDMA or TCP/IP interconnects into one large parallel network file system.</subtitle>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/'/>
<entry>
<title>features/quota : Fix XML output for quota list command</title>
<updated>2015-07-07T16:11:38+00:00</updated>
<author>
<name>vmallika</name>
<email>vmallika@redhat.com</email>
</author>
<published>2015-06-15T07:01:48+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=c9e92231e8fb31e6e4a9f061746daaedd77ad1b7'/>
<id>c9e92231e8fb31e6e4a9f061746daaedd77ad1b7</id>
<content type='text'>
This is a backport of http://review.gluster.org/#/c/9481/

&gt; Sample output:
&gt; ---------------
&gt;
&gt; Sample 1)
&gt; ----------
&gt; [root@snapshot-28 glusterfs]# gluster volume quota vol1 list /dir1 /dir4
&gt; /dir5 --xml
&gt; &lt;?xml version="1.0" encoding="UTF-8" standalone="yes"?&gt;
&gt; &lt;cliOutput&gt;
&gt;   &lt;opRet&gt;0&lt;/opRet&gt;
&gt;   &lt;opErrno&gt;0&lt;/opErrno&gt;
&gt;   &lt;opErrstr/&gt;
&gt;   &lt;volQuota&gt;
&gt;     &lt;limit&gt;
&gt;       &lt;path&gt;/dir1&lt;/path&gt;
&gt;       &lt;hard_limit&gt;10.0MB&lt;/hard_limit&gt;
&gt;       &lt;soft_limit&gt;80%&lt;/soft_limit&gt;
&gt;       &lt;used_space&gt;0Bytes&lt;/used_space&gt;
&gt;       &lt;avail_space&gt;10.0MB&lt;/avail_space&gt;
&gt;     &lt;/limit&gt;
&gt;     &lt;limit&gt;
&gt;       &lt;path&gt;/dir4&lt;/path&gt;
&gt;       &lt;path&gt;No such file or directory&lt;/path&gt;
&gt;     &lt;/limit&gt;
&gt;     &lt;limit&gt;
&gt;       &lt;path&gt;/dir5&lt;/path&gt;
&gt;       &lt;path&gt;No such file or directory&lt;/path&gt;
&gt;     &lt;/limit&gt;
&gt;   &lt;/volQuota&gt;
&gt; &lt;/cliOutput&gt;
&gt;
&gt; Sample 2)
&gt; ---------
&gt; gluster volume quota vol1 list --xml
&gt; &lt;?xml version="1.0" encoding="UTF-8" standalone="yes"?&gt;
&gt; &lt;cliOutput&gt;
&gt;   &lt;opRet&gt;0&lt;/opRet&gt;
&gt;   &lt;opErrno&gt;0&lt;/opErrno&gt;
&gt;   &lt;opErrstr/&gt;
&gt;   &lt;volQuota/&gt;
&gt; &lt;/cliOutput&gt;
&gt; &lt;?xml version="1.0" encoding="UTF-8" standalone="yes"?&gt;
&gt; &lt;cliOutput&gt;
&gt;   &lt;volQuota&gt;
&gt;     &lt;limit&gt;
&gt;       &lt;path&gt;/dir&lt;/path&gt;
&gt;       &lt;hard_limit&gt;10.0MB&lt;/hard_limit&gt;
&gt;       &lt;soft_limit&gt;80%&lt;/soft_limit&gt;
&gt;       &lt;used_space&gt;0Bytes&lt;/used_space&gt;
&gt;       &lt;avail_space&gt;10.0MB&lt;/avail_space&gt;
&gt;     &lt;/limit&gt;
&gt;     &lt;limit&gt;
&gt;       &lt;path&gt;/dir1&lt;/path&gt;
&gt;       &lt;hard_limit&gt;10.0MB&lt;/hard_limit&gt;
&gt;       &lt;soft_limit&gt;80%&lt;/soft_limit&gt;
&gt;       &lt;used_space&gt;0Bytes&lt;/used_space&gt;
&gt;       &lt;avail_space&gt;10.0MB&lt;/avail_space&gt;
&gt;     &lt;/limit&gt;
&gt;   &lt;/volQuota&gt;
&gt; &lt;/cliOutput&gt;
&gt;
&gt; Change-Id: I8a8d83cff88f778e5ee01fbca07d9f94c412317a
&gt; BUG: 1185259
&gt; Signed-off-by: Sachin Pandit &lt;spandit@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/9481
&gt; Reviewed-by: Vijaikumar Mallikarjuna &lt;vmallika@redhat.com&gt;
&gt; Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;

Change-Id: Ibdf51db626a07e68b5ace98140877f6d21918c20
BUG: 1231641
Signed-off-by: vmallika &lt;vmallika@redhat.com&gt;
Reviewed-on: http://review.gluster.org/11220
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Sachin Pandit &lt;spandit@redhat.com&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</content>
</entry>
<entry>
<title>cli: Fix xml output for volume status</title>
<updated>2014-07-21T07:11:19+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2014-07-18T17:05:26+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=d5f72dc49604aec2643d92a1b4e321c532ef8d05'/>
<id>d5f72dc49604aec2643d92a1b4e321c532ef8d05</id>
<content type='text'>
The XML output for volume status was malformed when one of the nodes
was down, leading to output like
-------
          &lt;node&gt;
             &lt;node&gt;
               &lt;hostname&gt;NFS Server&lt;/hostname&gt;
               &lt;path&gt;localhost&lt;/path&gt;
               &lt;peerid&gt;63ca3d2f-8c1f-4b84-b797-b4baddab81fb&lt;/peerid&gt;
               &lt;status&gt;1&lt;/status&gt;
               &lt;port&gt;2049&lt;/port&gt;
               &lt;pid&gt;2130&lt;/pid&gt;
             &lt;/node&gt;
-----

This happened because we started the &lt;node&gt; element before
determining whether the node was present, and did not close or clear it
when the node was not found in the dict.

To fix this, the &lt;node&gt; element is only started once a node has been
found in the dict.
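
This can be sketched with libxml2's writer API (a hypothetical fragment,
not the literal patch; the probe call and variable names are
illustrative):

```c
/* Probe the dict first; only start the node element once the
 * node's record is known to exist, so an absent node emits
 * nothing and the document stays well-formed. */
ret = dict_get_str (dict, key, ...);        /* hypothetical probe */
if (ret)
        goto out;                           /* node absent: write nothing */
ret = xmlTextWriterStartElement (writer, (xmlChar *) "node");
/* ... write hostname, path, peerid, status, port, pid ... */
ret = xmlTextWriterEndElement (writer);     /* always balanced */
```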

Cherry picked from commit 2ba42d07eb967472227eb0a93e4ca2cac7a197b5:
&gt; Change-Id: I6b6205f14b27a69adb95d85db7b48999aa48d400
&gt; BUG: 1046020
&gt; Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
&gt; Reviewed-on: http://review.gluster.org/6571
&gt; Reviewed-by: Aravinda VK &lt;avishwan@redhat.com&gt;
&gt; Reviewed-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
&gt; Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
&gt; Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;

Change-Id: I6b6205f14b27a69adb95d85db7b48999aa48d400
BUG: 1117241
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: http://review.gluster.org/8334
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
</entry>
<entry>
<title>cli: xml: Rebalance status(xml) was empty when a glusterd down</title>
<updated>2014-02-06T09:40:33+00:00</updated>
<author>
<name>Aravinda VK</name>
<email>avishwan@redhat.com</email>
</author>
<published>2013-12-02T09:49:17+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=cfaab6bcce48fcd369d9c8d66be4413d1d0b8356'/>
<id>cfaab6bcce48fcd369d9c8d66be4413d1d0b8356</id>
<content type='text'>
When a glusterd is down in the cluster, 'rebalance/remove-brick status
--xml' fails to get the status and returns null.

This patch skips collecting status from a node whose glusterd is down,
and collects status from all the other nodes that are up.

BUG: 1036564
Change-Id: Id8fbb63476e136296231d6652a8bd1a4547edbf5
Signed-off-by: Aravinda VK &lt;avishwan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6391
Reviewed-on: http://review.gluster.org/6848
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
</entry>
<entry>
<title>cli: Addition of new child elements under brick in volume info xml.</title>
<updated>2014-01-17T10:12:55+00:00</updated>
<author>
<name>ndarshan</name>
<email>dnarayan@redhat.com</email>
</author>
<published>2013-12-27T08:11:19+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=16218e529a7b38434d3618b551de1496456ee580'/>
<id>16218e529a7b38434d3618b551de1496456ee580</id>
<content type='text'>
Added new child elements, name and hostUuid, under brick in the
volume info xml, where the name and host uuid of the bricks are stored.
This does not break backward compatibility, as the old value under
brick is not removed.
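
The addition can be sketched with libxml2's writer calls (a hypothetical
fragment; the variable names are illustrative):

```c
/* Keep the old text value of the brick element, then add the
 * new name and hostUuid children alongside it. */
ret = xmlTextWriterWriteFormatString (writer, "%s", brick);
ret = xmlTextWriterWriteFormatElement (writer,
                (xmlChar *) "name", "%s", brick);
ret = xmlTextWriterWriteFormatElement (writer,
                (xmlChar *) "hostUuid", "%s", host_uuid);
```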

Change-Id: I95b690d90fc5df40aa62bc76621afa18f7c6073b
Signed-off-by: ndarshan &lt;dnarayan@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6604
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-on: http://review.gluster.org/6721
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Aggregate tasks status in 'volume status [tasks]'</title>
<updated>2013-12-23T14:56:34+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-12-23T08:37:45+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=9d592246d6121aa38cd6fb6a875be4473d4979c8'/>
<id>9d592246d6121aa38cd6fb6a875be4473d4979c8</id>
<content type='text'>
Backport of http://review.gluster.org/6230
Previously, glusterd used to just send back the local status of a task
in a 'volume status [tasks]' command. As the rebalance operation is
distributed and asynchronous, this meant that different peers could give
different status values for a rebalance or remove-brick task.

With this patch, all the peers will send back the task status as a part
of the 'volume status' commit op, and the origin peer will aggregate
these to arrive at a final status for the task.

The aggregation is only done for rebalance or remove-brick tasks. The
replace-brick task will have the same status on all the peers (see
comment in glusterd_volume_status_aggregate_tasks_status() for more
information) and need not be aggregated.

The rebalance process has 5 states,
 NOT_STARTED - rebalance process has not been started on this node
 STARTED - rebalance process has been started and is still running
 STOPPED - rebalance process was stopped by a 'rebalance/remove-brick
           stop' command
 COMPLETED - rebalance process completed successfully
 FAILED - rebalance process failed to complete successfully
The aggregation is done using the following precedence,
 STARTED &gt; FAILED &gt; STOPPED &gt; COMPLETED &gt; NOT_STARTED

The new changes make the 'volume status tasks' command a distributed
command as we need to get the task status from all peers.
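
The precedence rule amounts to keeping the status with the highest rank
seen across peers; a sketch (the enum identifiers are assumptions and
may differ from the actual names in the tree):

```c
/* rank: STARTED beats FAILED beats STOPPED beats COMPLETED
 * beats NOT_STARTED */
static int
task_status_rank (int status)
{
        switch (status) {
        case GF_DEFRAG_STATUS_STARTED:  return 4;
        case GF_DEFRAG_STATUS_FAILED:   return 3;
        case GF_DEFRAG_STATUS_STOPPED:  return 2;
        case GF_DEFRAG_STATUS_COMPLETE: return 1;
        default:                        return 0;  /* not started */
        }
}

/* while iterating the per-peer responses */
if (task_status_rank (peer_status) > task_status_rank (aggregate))
        aggregate = peer_status;
```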

The following tests were performed,
- Start a remove-brick task and do a status command on a peer which
  doesn't have the brick being removed. The remove-brick status was
  given correctly as 'in progress' and 'completed', instead of 'not
  started'
- Start a rebalance task, run the status command. The status moved to
  'completed' only after rebalance completed on all nodes.

Also, change the CLI xml output code for rebalance status to use the
same algorithm for status aggregation.

Change-Id: Ifd4aff705aa51609a612d5a9194acc73e10a82c0
BUG: 1027094
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
 http://review.gluster.org/6230
Reviewed-on: http://review.gluster.org/6562
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>cli: List only nodes which have rebalance started in rebalance status</title>
<updated>2013-11-20T19:30:25+00:00</updated>
<author>
<name>Kaushal M</name>
<email>kaushal@redhat.com</email>
</author>
<published>2013-11-12T09:38:26+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=bc9f0bb5ce108cba7e88be123681e2c269da31b7'/>
<id>bc9f0bb5ce108cba7e88be123681e2c269da31b7</id>
<content type='text'>
Listing the nodes on which rebalance hasn't been started is just giving
out extraneous information.

Also, refactor the rebalance status printing code into a single function
and use it for both rebalance and remove-brick status.

BUG: 1031887
Change-Id: I47bd561347dfd6ef76c52a1587916d6a71eac369
Signed-off-by: Kaushal M &lt;kaushal@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6300
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
</entry>
<entry>
<title>Fix xml compilation error</title>
<updated>2013-11-19T17:59:28+00:00</updated>
<author>
<name>M. Mohan Kumar</name>
<email>mohan@in.ibm.com</email>
</author>
<published>2013-11-18T07:19:21+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=56b82b5294ae0ea0e73ae3d6bb58504442773e0f'/>
<id>56b82b5294ae0ea0e73ae3d6bb58504442773e0f</id>
<content type='text'>
Compiling GlusterFS without the xml package results in the following build error:

cli-rpc-ops.o: In function `gf_cli_status_cbk':
/home/mohan/Work/glusterfs/cli/src/cli-rpc-ops.c:6430: undefined
reference to `cli_xml_output_vol_status_tasks_detail'
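
A typical fix for this class of error is to guard the call site the same
way the xml code itself is conditionally compiled (a sketch; the guard
macro is assumed to be the HAVE_LIB_XML used elsewhere in the CLI, and
the argument names are illustrative):

```c
#if (HAVE_LIB_XML)
        ret = cli_xml_output_vol_status_tasks_detail (local, dict);
#else
        ret = 0;        /* xml output not compiled in */
#endif
```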

Change-Id: I49b3c46ac3340c40e372bef4690cedb41df20e8a
Signed-off-by: M. Mohan Kumar &lt;mohan@in.ibm.com&gt;
Reviewed-on: http://review.gluster.org/6295
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>cli: add peerid to volume status xml output</title>
<updated>2013-11-15T07:31:32+00:00</updated>
<author>
<name>Bala.FA</name>
<email>barumuga@redhat.com</email>
</author>
<published>2013-10-29T11:47:12+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=432cecfbff496bfa9e71e8cbbed789458656c553'/>
<id>432cecfbff496bfa9e71e8cbbed789458656c553</id>
<content type='text'>
This patch adds a &lt;peerid&gt; tag to bricks and to nfs/shd-like services
in the volume status xml output.

BUG: 955548
Change-Id: I9aaa9266e4d56f632235eaeef565e92d757c0694
Signed-off-by: Bala.FA &lt;barumuga@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6162
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
</entry>
<entry>
<title>bd: posix/multi-brick support to BD xlator</title>
<updated>2013-11-13T19:38:42+00:00</updated>
<author>
<name>M. Mohan Kumar</name>
<email>mohan@in.ibm.com</email>
</author>
<published>2013-11-13T17:14:42+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=48c40e1a42efe1b59126406084821947d139dd0e'/>
<id>48c40e1a42efe1b59126406084821947d139dd0e</id>
<content type='text'>
Current BD xlator (block backend) has a few limitations such as
* Creation of directories not supported
* Supports only single brick
* Does not use extended attributes (and client gfid) like posix xlator
* Creation of special files (symbolic links, device nodes etc) not
  supported

The basic limitation of not allowing directory creation blocks
oVirt/VDSM from consuming the BD xlator as part of a Gluster domain,
since VDSM creates multi-level directories when GlusterFS is used as the
storage backend for storing VM images.

To overcome these limitations, a new BD xlator with the following
improvements is suggested.

* New hybrid BD xlator that handles both regular files and block device
  files
* The volume will have both POSIX and BD bricks. Regular files are
  created on POSIX bricks, block devices are created on the BD brick (VG)
* BD xlator leverages the existing POSIX xlator for most POSIX calls and
  hence sits above the POSIX xlator
* Block device file is differentiated from regular file by an extended
  attribute
* The xattr 'user.glusterfs.bd' (BD_XATTR) plays a role in mapping a
  posix file to Logical Volume (LV).
* When a client sends a request to set BD_XATTR on a posix file, a new
  LV is created and mapped to posix file. So every block device will
  have a representative file in POSIX brick with 'user.glusterfs.bd'
  (BD_XATTR) set.
* Hereafter, all operations on this file result in LV-related
  operations.

For example opening a file that has BD_XATTR set results in opening
the LV block device, reading results in reading the corresponding LV
block device.

When the BD xlator gets a request to set BD_XATTR via a setxattr call,
it creates an LV, and information about this LV is placed in the xattr
of the posix file. The xattr "user.glusterfs.bd" is used to identify
that the posix file is mapped to a BD.

Usage:
Server side:
[root@host1 ~]# gluster volume create bdvol host1:/storage/vg1_info?vg1 host2:/storage/vg2_info?vg2
It creates a distributed gluster volume 'bdvol' with Volume Group vg1
using posix brick /storage/vg1_info in host1 and Volume Group vg2 using
/storage/vg2_info in host2.

[root@host1 ~]# gluster volume start bdvol

Client side:
[root@node ~]# mount -t glusterfs host1:/bdvol /media
[root@node ~]# touch /media/posix
It creates a regular posix file 'posix' in either the host1:/vg1 or host2:/vg2 brick
[root@node ~]# mkdir /media/image
[root@node ~]# touch /media/image/lv1
It also creates a regular posix file 'lv1' in either the host1:/vg1 or
host2:/vg2 brick
[root@node ~]# setfattr -n "user.glusterfs.bd" -v "lv" /media/image/lv1
[root@node ~]#
The above setxattr results in creating a new LV in the corresponding
brick's VG, and it sets 'user.glusterfs.bd' with value 'lv:&lt;default-extent-size'
[root@node ~]# truncate -s5G /media/image/lv1
It results in resizing LV 'lv1' to 5G.

New BD xlator code is placed in xlators/storage/bd directory.

Also add volume-uuid to the VG so that the same VG can't be used for other
bricks/volumes. After deleting a gluster volume, one has to manually
remove the associated tag using vgchange &lt;vg-name&gt; --deltag
&lt;trusted.glusterfs.volume-id:&lt;volume-id&gt;&gt;

Changes from previous version V5:
* Removed support for delayed deleting of LVs

Changes from previous version V4:
* Consolidated the patches
* Removed usage of BD_XATTR_SIZE and consolidated it in BD_XATTR.

Changes from previous version V3:
* Added support in FUSE to support full/linked clone
* Added support to merge snapshots and provide information about origin
* bd_map xlator removed
* iatt structure used in inode_ctx. iatt is cached and updated during
fsync/flush
* aio support
* Type and capabilities of volume are exported through getxattr

Changes from version 2:
* Used inode_context for caching BD size and to check if loc/fd is BD or
  not.
* Added GlusterFS server offloaded copy and snapshot through setfattr
  FOP. As part of this libgfapi is modified.
* BD xlator supports stripe
* During unlinking, if an LV file is already open, it is added to a delete
  list, and bd_del_thread tries to delete from this list when the last
  reference to that file is closed.

Changes from previous version:
* gfid is used as name of LV
* ? is used to specify VG name for creating BD volume in volume
  create, add-brick. gluster volume create volname host:/path?vg
* open-behind issue is fixed
* A replicate brick can be added dynamically and LVs from source brick
  are replicated to destination brick
* A distribute brick can be added dynamically and rebalance operation
  distributes existing LVs/files to the new brick
* Thin provisioning support added.
* bd_map xlator support retained
* setfattr -n user.glusterfs.bd -v "lv" creates a regular LV and
  setfattr -n user.glusterfs.bd -v "thin" creates thin LV
* Capability and backend information added to gluster volume info (and
--xml) so
  that management tools can exploit BD xlator.
* tracing support for bd xlator added

TODO:
* Add support to display snapshots for a given LV
* Display posix filename for list-origin instead of gfid

Change-Id: I00d32dfbab3b7c806e0841515c86c3aa519332f2
BUG: 1028672
Signed-off-by: M. Mohan Kumar &lt;mohan@in.ibm.com&gt;
Reviewed-on: http://review.gluster.org/4809
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Current BD xlator (block backend) has a few limitations such as
* Creation of directories not supported
* Supports only single brick
* Does not use extended attributes (and client gfid) like posix xlator
* Creation of special files (symbolic links, device nodes etc) not
  supported

The basic limitation of not allowing directory creation blocks
oVirt/VDSM from consuming the BD xlator as part of a Gluster domain,
since VDSM creates multi-level directories when GlusterFS is used as the
storage backend for storing VM images.

To overcome these limitations, a new BD xlator with the following
improvements is suggested.

* New hybrid BD xlator that handles both regular files and block device
  files
* The volume will have both POSIX and BD bricks. Regular files are
  created on POSIX bricks, block devices are created on the BD brick (VG)
* BD xlator leverages the existing POSIX xlator for most POSIX calls and
  hence sits above the POSIX xlator
* Block device file is differentiated from regular file by an extended
  attribute
* The xattr 'user.glusterfs.bd' (BD_XATTR) plays a role in mapping a
  posix file to Logical Volume (LV).
* When a client sends a request to set BD_XATTR on a posix file, a new
  LV is created and mapped to posix file. So every block device will
  have a representative file in POSIX brick with 'user.glusterfs.bd'
  (BD_XATTR) set.
* Hereafter, all operations on this file result in LV-related
  operations.

For example opening a file that has BD_XATTR set results in opening
the LV block device, reading results in reading the corresponding LV
block device.

When the BD xlator gets a request to set BD_XATTR via a setxattr call,
it creates an LV, and information about this LV is placed in the xattr
of the posix file. The xattr "user.glusterfs.bd" is used to identify
that the posix file is mapped to a BD.

Usage:
Server side:
[root@host1 ~]# gluster volume create bdvol host1:/storage/vg1_info?vg1 host2:/storage/vg2_info?vg2
It creates a distributed gluster volume 'bdvol' with Volume Group vg1
using posix brick /storage/vg1_info in host1 and Volume Group vg2 using
/storage/vg2_info in host2.

[root@host1 ~]# gluster volume start bdvol

Client side:
[root@node ~]# mount -t glusterfs host1:/bdvol /media
[root@node ~]# touch /media/posix
This creates a regular posix file 'posix' on either the host1:/vg1 or host2:/vg2 brick
[root@node ~]# mkdir /media/image
[root@node ~]# touch /media/image/lv1
This also creates a regular posix file 'lv1' on either the host1:/vg1
or host2:/vg2 brick
[root@node ~]# setfattr -n "user.glusterfs.bd" -v "lv" /media/image/lv1
[root@node ~]#
The above setxattr results in creating a new LV in the corresponding
brick's VG, and it sets 'user.glusterfs.bd' with the value
'lv:&lt;default-extent-size&gt;'
[root@node ~]# truncate -s5G /media/image/lv1
This results in resizing LV 'lv1' to 5G

New BD xlator code is placed in xlators/storage/bd directory.

A volume-uuid tag is also added to the VG so that the same VG can't be
used for other bricks/volumes. After deleting a gluster volume, one has
to manually remove the associated tag using vgchange &lt;vg-name&gt; --deltag
&lt;trusted.glusterfs.volume-id:&lt;volume-id&gt;&gt;
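
Since vgchange needs real LVM state, the sketch below only assembles and
prints the cleanup command described above; the VG name and volume-id
are placeholders, not values from this commit.

```shell
# Hedged sketch: build the manual tag-removal command for a deleted volume.
# 'vg1' and the volume-id below are placeholders; read the real volume-id
# from your deployment before running the printed command.
VG="vg1"
VOLUME_ID="00000000-0000-0000-0000-000000000000"
echo "vgchange ${VG} --deltag trusted.glusterfs.volume-id:${VOLUME_ID}"
```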

Changes from previous version V5:
* Removed support for delayed deleting of LVs

Changes from previous version V4:
* Consolidated the patches
* Removed usage of BD_XATTR_SIZE and consolidated it in BD_XATTR.

Changes from previous version V3:
* Added support in FUSE to support full/linked clone
* Added support to merge snapshots and provide information about origin
* bd_map xlator removed
* iatt structure used in inode_ctx. iatt is cached and updated during
  fsync/flush
* aio support
* Type and capabilities of volume are exported through getxattr

Changes from version 2:
* Used inode_context for caching BD size and to check if loc/fd is BD or
  not.
* Added GlusterFS server offloaded copy and snapshot through setfattr
  FOP. As part of this libgfapi is modified.
* BD xlator supports stripe
* During unlinking, if an LV file is already opened, it's added to a
  delete list, and bd_del_thread tries to delete it from this list when
  the last reference to that file is closed.

Changes from previous version:
* gfid is used as name of LV
* ? is used to specify VG name for creating BD volume in volume
  create, add-brick. gluster volume create volname host:/path?vg
* open-behind issue is fixed
* A replicate brick can be added dynamically and LVs from source brick
  are replicated to destination brick
* A distribute brick can be added dynamically and rebalance operation
  distributes existing LVs/files to the new brick
* Thin provisioning support added.
* bd_map xlator support retained
* setfattr -n user.glusterfs.bd -v "lv" creates a regular LV and
  setfattr -n user.glusterfs.bd -v "thin" creates thin LV
* Capability and backend information added to gluster volume info (and
  --xml) so that management tools can exploit the BD xlator.
* tracing support for bd xlator added

TODO:
* Add support to display snapshots for a given LV
* Display posix filename for list-origin instead of gfid

Change-Id: I00d32dfbab3b7c806e0841515c86c3aa519332f2
BUG: 1028672
Signed-off-by: M. Mohan Kumar &lt;mohan@in.ibm.com&gt;
Reviewed-on: http://review.gluster.org/4809
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cli,glusterd: Implement 'volume status tasks'</title>
<updated>2013-10-09T06:13:16+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2013-09-24T11:31:46+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=e51ca3c1c991416895e1e8693f7c3e6332d57464'/>
<id>e51ca3c1c991416895e1e8693f7c3e6332d57464</id>
<content type='text'>
oVirt's Gluster Integration needs an inexpensive command that can be
executed every 10 seconds to monitor async tasks and their parameters,
for all volumes.

The solution involves adding a 'tasks' sub-command to 'volume status'
to fetch only the async task IDs, type and other relevant parameters.
Only the originator glusterd participates in this command as all the
information needed is available on all the nodes. This is to make the
command suitable for being executed every 10 seconds.
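
A minimal polling sketch of the workflow described above (the 'all'
volume scope is an assumption, not part of this commit); it only prints
the command, so the snippet runs without a live glusterd.

```shell
# Hypothetical monitoring loop, flattened to one iteration for illustration.
POLL_INTERVAL=10
CMD="gluster volume status all tasks"
echo "every ${POLL_INTERVAL}s: ${CMD}"
# A real deployment would run something like:
#   while true; do ${CMD}; sleep ${POLL_INTERVAL}; done
```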

Change-Id: I1edc607baf29b001a5585079dec681d7c641b3d1
BUG: 1012346
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/6006
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Kaushal M &lt;kaushal@redhat.com&gt;
</content>
</entry>
</feed>
