<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/cluster/dht/src/dht-common.c, branch v4.0dev</title>
<subtitle>GlusterFS is a distributed file-system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system.</subtitle>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/'/>
<entry>
<title>cluster/dht: Fixed crash in dht_rmdir_is_subvol_empty</title>
<updated>2017-07-20T12:45:59+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-07-19T16:14:55+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=fdc431063f33cf4f5572771742e5502565f2a3ca'/>
<id>fdc431063f33cf4f5572771742e5502565f2a3ca</id>
<content type='text'>
The local-&gt;call_cnt was being accessed and updated inside
the loop where the entries were being processed and the calls
were being wound.
This could end up in a scenario where local-&gt;call_cnt became
0 before the processing was complete, causing a crash when the
next entry was being processed.
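
A minimal sketch of the safer pattern, using toy stand-ins rather than the
actual dht structures and helpers: fix call_cnt before any call is wound,
so an early callback cannot drive it to zero while the loop is still
walking the entries.

#include &lt;stdio.h&gt;

/* Toy stand-ins for the dht structures; the names are hypothetical. */
struct entry { const char *name; int needs_unlink; };
struct local { int call_cnt; };

/* Stand-in for the callback/unwind path: runs as each "call" completes. */
static void on_call_done(struct local *local)
{
        if (--local-&gt;call_cnt == 0)
                printf("all calls complete, safe to unwind\n");
}

int main(void)
{
        struct entry entries[] = {
                { "linkto-1", 1 }, { "regular", 0 }, { "linkto-2", 1 },
        };
        int n = sizeof(entries) / sizeof(entries[0]);
        struct local local = { 0 };
        int i;

        /* Pass 1: decide how many calls will be wound before winding any. */
        for (i = 0; i &lt; n; i++)
                if (entries[i].needs_unlink)
                        local.call_cnt++;

        /* Pass 2: wind the calls; a callback may now decrement safely even
         * if it fires before this loop has finished. */
        for (i = 0; i &lt; n; i++) {
                if (entries[i].needs_unlink) {
                        printf("winding unlink for %s\n", entries[i].name);
                        on_call_done(&amp;local);  /* fires immediately in this toy */
                }
        }
        return 0;
}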

Change-Id: I930f61f1a1d1948f90d4e58e80b7d6680cf27f2f
BUG: 1472949
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17825
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>libglusterfs: Name threads on creation</title>
<updated>2017-07-19T14:16:19+00:00</updated>
<author>
<name>Raghavendra Talur</name>
<email>rtalur@redhat.com</email>
</author>
<published>2017-07-18T06:06:19+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=33db9aff1deaa028f30516e49fdb1e8d6e31bb73'/>
<id>33db9aff1deaa028f30516e49fdb1e8d6e31bb73</id>
<content type='text'>
Set names to threads on creation for easier
debugging.

Output of top -H -p &lt;PID-OF-GLUSTERFSD&gt;
Before:
19773 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19774 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19775 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19776 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19777 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19778 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19779 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19780 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19781 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19782 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19783 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19784 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19785 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterfsd
19786 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterfsd
19787 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterfsd
19789 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19790 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
25178 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
 5398 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
 7881 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd

After:
19773 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19774 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustertimer
19775 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterfsd
19776 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustermemsweep
19777 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustersproc0
19778 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glustersproc1
19779 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterepoll0
19780 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusteridxwrker
19781 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusteriotwr0
19782 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterbrssign
19783 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterbrswrker
19784 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterclogecon
19785 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterclogd0
19786 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterclogd1
19787 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.01 glusterclogd2
19789 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterposixjan
19790 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterposixfsy
25178 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterepoll1
 5398 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterepoll2
 7881 root      20   0 1301.3m  12.6m   8.4m S  0.0  0.1   0:00.00 glusterposixhc
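
The patch names gluster's own threads at creation time; the snippet below is
only a generic, stand-alone illustration of the underlying Linux mechanism
(pthread_setname_np, limited to 15 characters plus the terminating NUL), not
the gluster wrapper itself.

#define _GNU_SOURCE
#include &lt;pthread.h&gt;
#include &lt;stdio.h&gt;
#include &lt;unistd.h&gt;

static void *worker(void *arg)
{
        (void)arg;
        sleep(30);  /* stay alive long enough to be inspected with top -H */
        return NULL;
}

int main(void)
{
        pthread_t t;

        if (pthread_create(&amp;t, NULL, worker, NULL) != 0) {
                perror("pthread_create");
                return 1;
        }
        /* The name (max 15 chars + NUL on Linux) shows up in top -H and in
         * /proc/&lt;pid&gt;/task/&lt;tid&gt;/comm. */
        pthread_setname_np(t, "glustertimer");
        pthread_join(t, NULL);
        return 0;
}

Build with gcc -pthread and run top -H -p on the resulting process to see the
renamed thread.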

Change-Id: Id5f333755c1ba168a2ffaa4fce6e71c375e10703
BUG: 1254002
Updates: #271
Signed-off-by: Raghavendra Talur &lt;rtalur@redhat.com&gt;
Reviewed-on: https://review.gluster.org/11926
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Check if fd is opened on dst subvol</title>
<updated>2017-06-28T11:42:21+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-06-26T15:42:56+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=91db0d47ca267aecfc6124a3f337a4e2f2c9f1e2'/>
<id>91db0d47ca267aecfc6124a3f337a4e2f2c9f1e2</id>
<content type='text'>
If an fd is opened on a file, the file is migrated,
and the cached subvol is updated in the inode_ctx
before an fd-based fop is sent, then the fop is sent to
the dst subvol on which the fd is not opened.
This causes the fop to fail with EBADF.

Now, every fd-based fop checks that the fd
has been opened on the dst subvol before winding the call down.

Change-Id: Id92ef5eb7a5b5226688e2d2868b15e383f5f240e
BUG: 1465075
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17630
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-by: Susant Palai &lt;spalai@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>cluster/rebalance: Use GF_XATTR_LIST_NODE_UUIDS_KEY to figure out local subvols.</title>
<updated>2017-06-26T10:47:00+00:00</updated>
<author>
<name>Susant Palai</name>
<email>spalai@redhat.com</email>
</author>
<published>2017-06-21T12:22:45+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=4700c5be55b0e567755b4c8a1a91f33d29c06e6b'/>
<id>4700c5be55b0e567755b4c8a1a91f33d29c06e6b</id>
<content type='text'>
AFR has introduced a new key, GF_XATTR_LIST_NODE_UUIDS_KEY,
through which rebalance will figure out its local subvolumes.
(Reference bug id 1463250)

The key GF_XATTR_NODE_UUID_KEY will continue to serve its old
purpose of returning the first AFR child.

test: prove tests/basic/distribute/rebal-all-nodes-migrate.t

Change-Id: I4d602feda2a05b29d2210c712a07a4ac6b8bc112
BUG: 1463648
Signed-off-by: Susant Palai &lt;spalai@redhat.com&gt;
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17595
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>dht/hardlink: Remove stale linkto file in case of failure</title>
<updated>2017-06-22T06:27:42+00:00</updated>
<author>
<name>Jiffin Tony Thottan</name>
<email>jthottan@redhat.com</email>
</author>
<published>2017-05-18T05:52:16+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=237648780336d41cd6d820c337b38dd13f4d9e72'/>
<id>237648780336d41cd6d820c337b38dd13f4d9e72</id>
<content type='text'>
This is similar to the issue fixed for rename in https://review.gluster.org/#/c/16016/
For hardlinks, if the cached and hashed subvolumes are different, DHT will first
create a linkto file on the hashed subvol using root permissions, but the actual
hardlink creation fails with EACCES and the stale linkto file is never removed.
All follow-up hardlink calls with the file name then fail with ESTALE, because the
linkto file creation fails with EEXIST and the follow-up lookup on the linkto file
returns a gfid mismatch (against the old linkto file) and finally fails with ESTALE.

Steps to reproduce:
(From the link/00.t test in the posix-testsuite)
Steps executed in the script:
 * create a file "abc" as root
 * change the ownership of the file to a non-root user
 * create hardlink "link" for "abc" as the non-root user; it fails with EACCES
 * delete "abc"
 * create directory "abc" as root
 * again try to create hardlink "link" for "abc" as the non-root user; it fails with ESTALE

Also tried to fix other bugs in dht_linkfile_create_cbk() and posix_lookup.
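
A small stand-alone illustration of the cleanup idea, not the dht code itself:
the placeholder file below stands in for the linkto file created on the hashed
subvol, and the point is simply that a failed link creation must not leave the
placeholder behind for later lookups to trip over.

#include &lt;errno.h&gt;
#include &lt;fcntl.h&gt;
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;
#include &lt;unistd.h&gt;

int main(void)
{
        /* Stand-in for the linkto file dht creates on the hashed subvol. */
        int fd = open("placeholder.linkto", O_CREAT | O_EXCL | O_WRONLY, 0600);
        if (fd &gt;= 0)
                close(fd);

        /* Stand-in for the real hardlink creation, which may fail with
         * EACCES, ENOENT, etc. */
        if (link("no-such-source-file", "new-hardlink") &lt; 0) {
                fprintf(stderr, "link failed: %s\n", strerror(errno));
                /* The fix: remove the stale placeholder instead of leaving it. */
                unlink("placeholder.linkto");
        }
        return 0;
}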

Thanks to Susant for the help in debugging the issue and for suggesting this patch.

Change-Id: I7a5a1899d3fd1fdb13578b37f9d52a084492e35d
BUG: 1452084
Signed-off-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17331
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Include dirs in rebalance estimates</title>
<updated>2017-06-07T04:02:24+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-06-01T16:43:41+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=c9860430a77f20ddfec532819542bb1d0187c06e'/>
<id>c9860430a77f20ddfec532819542bb1d0187c06e</id>
<content type='text'>
Empty directories were not being considered while
calculating rebalance estimates, leading to negative
time-left values being displayed as part of the
rebalance status.

Change-Id: I48d41d702e72db30af10e6b87b628baa605afa98
BUG: 1457985
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17448
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Make optimal usage of buffer provided with readdir(p)</title>
<updated>2017-05-31T14:13:01+00:00</updated>
<author>
<name>Sakshi</name>
<email>sabansal@redhat.com</email>
</author>
<published>2017-01-23T06:41:49+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=b9406e210717621bc672a63c1cbd1b0183834056'/>
<id>b9406e210717621bc672a63c1cbd1b0183834056</id>
<content type='text'>
dht_readdirp must unwind with the list of entries only after
the entire buffer requested by the kernel is filled, to avoid
the extra syscalls that occur when returning a partially filled
buffer. Also wind the readdir call to the next subvol on reaching
EOD for the directory on that subvol, to avoid an extra network call.
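
As a rough user-space analogy (not the dht code itself): keep filling the
reply from one directory after another, moving on at end-of-directory, and
stop only when the requested buffer is full, so the caller gets one
well-filled reply instead of several partial ones.

#include &lt;dirent.h&gt;
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

int main(int argc, char **argv)
{
        char buf[4096];   /* stand-in for the size requested by the caller */
        size_t filled = 0;
        int full = 0;
        int i;

        /* Each argument stands in for one subvolume's directory. */
        for (i = 1; i &lt; argc &amp;&amp; !full; i++) {
                DIR *dir = opendir(argv[i]);
                struct dirent *de;

                if (dir == NULL)
                        continue;
                while ((de = readdir(dir)) != NULL) {
                        size_t len = strlen(de-&gt;d_name) + 1;
                        if (filled + len &gt; sizeof(buf)) {
                                full = 1;  /* buffer is full: time to reply */
                                break;
                        }
                        memcpy(buf + filled, de-&gt;d_name, len);
                        filled += len;
                }
                /* EOD on this "subvol": fall through to the next one instead
                 * of replying with a partially filled buffer. */
                closedir(dir);
        }
        printf("replying with %zu bytes of entries\n", filled);
        return 0;
}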

Change-Id: If2e1a2722f813d95457c7542bff25fef56c7a041
BUG: 1356453
Signed-off-by: Sakshi &lt;sabansal@redhat.com&gt;
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
Reviewed-on: https://review.gluster.org/12271
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Susant Palai &lt;spalai@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: fix on-demand migration of files from client</title>
<updated>2017-05-30T00:42:58+00:00</updated>
<author>
<name>Susant Palai</name>
<email>spalai@redhat.com</email>
</author>
<published>2017-04-25T13:02:45+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=1b1f871ca41b08671ebb327dba464aeb6c82e776'/>
<id>1b1f871ca41b08671ebb327dba464aeb6c82e776</id>
<content type='text'>
    On-demand migration of files, i.e. migration done by clients triggered
    by a setfattr, was broken.

    A dependency on defrag led to a crash when migration was triggered from
    a client.
    
    Note: This functionality is not available for tiered volumes. Migration
    from a tier-served client will fail with ENOTSUP.

    Usage (but refer to the steps mentioned below to avoid any issues):
    setfattr -n "trusted.distribute.migrate-data" -v "1" &lt;filename&gt;
    
    The purpose of fixing the on-demand client migration was to give a
    workaround to users who have lots of empty directories compared to
    files and want to do a remove-brick process.

    Here are the steps to trigger file migration for the remove-brick process
    from a client. (It is highly recommended to follow the steps below as is.)

    Let's say it is a replica volume and the user wants to remove a replica pair
    named brick1 and brick2. (Make sure healing is completed before you run
    these steps.)
    
    Step-1: Start remove-brick process
     - gluster v remove-brick &lt;volname&gt; brick1 brick2 start
    Step-2: Kill the rebalance daemon
     - ps aux | grep glusterfs | grep rebalance\/ | awk '{print $2}' | xargs kill
    Step-3: Do a fresh mount as mentioned here
     -  glusterfs -s ${localhostname} --volfile-id rebalance/$volume-name /tmp/mount/point
    Step-4: Go to one of the bricks (among brick1 and brick2)
     - cd &lt;brick1 path&gt;
    Step-5: Run the following command.
     - find . -not \( -path ./.glusterfs -prune \) -type f -not -perm 01000 -exec bash -c 'setfattr -n "distribute.fix.layout" -v "1" ${mountpoint}/$(dirname '{}')' \; -exec  setfattr -n "trusted.distribute.migrate-data" -v "1" ${mountpoint}/'{}' \;
    
    This command will ignore linkto files and empty directories, do a fix-layout
    of the parent directory, and trigger a migration operation on the files.
    
    Step-6: Once this process is completed do "remove-brick force"
     - gluster v remove-brick &lt;volname&gt; brick1 brick2 force
    
    Note: Use the above script only when there is a large number of empty directories.
    Since the script crawls the brick side directly and skips directories that are
    empty, the time spent fixing the layout of those directories is eliminated (even
    though the script does not do a fix-layout on empty directories, a fresh layout
    will be built for the directory post remove-brick, hence not affecting application
    continuity).
    
    Detailing the expectation for hardlink migration with this patch:
        Hardlinks are migrated only for the remove-brick process. It is essential
    to have a new mount (step-3) for the hardlink migration to happen. Why?
    The setfattr operation is an inode-based operation. Since we are doing the setfattr
    from a fuse mount here, inode_path will try to build the path from the dentries
    linked to the inode. For a file without hardlinks the path construction will be
    correct. But for hardlinks, the inode will have multiple dentries linked.
    
            Without a fresh mount, inode_path will always get the most recently linked dentry.
    E.g. if there are three hardlinks named dir1/link1, dir2/link2 and dir3/link3 on a client
    where these hardlinks have been looked up, inode_path will always return the path dir3/link3
    if dir3/link3 was looked up most recently. Hence, we won't be able to create linkto
    files for all the other hardlinks on the destination (read gf_defrag_handle_hardlink for
    more details on hardlink migration).
    
            With a fresh mount, the lookup and setfattr become serialized; e.g. link2 won't be
    looked up until link1 is looked up and migrated. Hence, inode_path will always have the correct
    path: in this case the link1 dentry is picked up (as this is the most recently looked up inode)
    and the path is built right.
    
    Note: If you run the above script on an existing mount (all entries looked up), hardlinks may
    not be migrated, but there should not be any other issue. Please raise a bug if you find any
    issue.
    
    Tests: Manual


Change-Id: I9854cdd4955d9e24494f348fb29ba856ea7ac50a
BUG: 1450975
Signed-off-by: Susant Palai &lt;spalai@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17115
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Rebalance on all nodes should migrate files</title>
<updated>2017-05-16T16:01:39+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-05-10T15:56:28+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=b23bd3dbc2c153171d0bb1205e6804afe022a55f'/>
<id>b23bd3dbc2c153171d0bb1205e6804afe022a55f</id>
<content type='text'>
Problem:
Rebalance compares the node-uuid of a file against its own
and migrates the file only if they match. However, the
current behaviour in both AFR and EC is to return
the node-uuid of the first brick in a replica set for all
files. This means a single node ends up migrating all
the files if the first brick of every replica set is on the
same node.

Fix:
AFR and EC will return all node-uuids for the replica set.
The rebalance process will divide the files to be migrated
among all the nodes by hashing the gfid of the file and
using that value to select a node to perform the migration.
This patch makes the required DHT and tiering changes.
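
A rough sketch of the selection step described above, with a made-up hash and
hard-coded values (the real code has its own hashing and gets the node-uuid
list from the xattr): hash the gfid, take it modulo the number of node-uuids
in the replica set, and migrate only if the result points at the local node.

#include &lt;stdint.h&gt;
#include &lt;stdio.h&gt;

/* Toy FNV-1a style hash over a 16-byte gfid; illustration only. */
static uint32_t gfid_hash(const unsigned char gfid[16])
{
        uint32_t h = 2166136261u;
        int i;

        for (i = 0; i &lt; 16; i++) {
                h ^= gfid[i];
                h *= 16777619u;
        }
        return h;
}

int main(void)
{
        unsigned char gfid[16] = { 0xde, 0xad, 0xbe, 0xef, 1, 2, 3, 4,
                                   5, 6, 7, 8, 9, 10, 11, 12 };
        int node_count = 3;   /* node-uuids returned for the replica set */
        int local_index = 1;  /* this node's position in that list */
        int chosen = gfid_hash(gfid) % node_count;

        if (chosen == local_index)
                printf("this node migrates the file\n");
        else
                printf("node %d migrates the file; we skip it\n", chosen);
        return 0;
}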

Some tests in rebal-all-nodes-migrate.t will need to be
uncommented once the AFR and EC changes are merged.

Change-Id: I5ce41600f5ba0e244ddfd986e2ba8fa23329ff0c
BUG: 1366817
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17239
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: Fix crash in dht rmdir</title>
<updated>2017-05-16T14:03:06+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-05-16T04:56:25+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=6f7d55c9d58797beaf8d5393c03a5a545bed8bec'/>
<id>6f7d55c9d58797beaf8d5393c03a5a545bed8bec</id>
<content type='text'>
Using local-&gt;call_cnt to check STACK_WINDs can
cause dht_rmdir_do to be called erroneously if
dht_rmdir_readdirp_cbk unwinds before we check if
local-&gt;call_cnt is zero in dht_rmdir_opendir_cbk.
This can cause frame corruption and crashes.

Thanks to Shyam (srangana@redhat.com) for the
analysis.

Change-Id: I5362cf78f97f21b3fade0b9e94d492002a8d4a11
BUG: 1451083
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17305
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
</feed>
