<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git, branch release-9</title>
<subtitle>GlusterFS is a distributed file-system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system.</subtitle>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/'/>
<entry>
<title>core: Avoid several dict OR key is NULL message in brick logs (#2344)</title>
<updated>2021-04-22T13:26:28+00:00</updated>
<author>
<name>mohit84</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2021-04-22T13:26:28+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=5cbf5d94c719d1c7674a59c8009660197fc56af2'/>
<id>5cbf5d94c719d1c7674a59c8009660197fc56af2</id>
<content type='text'>
Problem: dict_get_with_ref throws a "dict or key is NULL" message
if dict or key is NULL.

Solution: Before accessing a key, check that the dictionary is valid.
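
The guard pattern can be sketched in Python (illustrative stand-in for the
C change; the helper name and dict key are real gluster identifiers, but the
code below is a hypothetical model, not the patch itself):

```python
# Hedged sketch: validate the dict up front so the lookup helper is never
# called with NULL arguments, which is what produced the repeated
# "dict or key is NULL" log message in brick logs.
def get_open_fd_count(xdata):
    # Guard first: an absent dict is a normal condition here, not an
    # error worth logging on every brick operation.
    if xdata is None:
        return None
    return xdata.get("glusterfs.open-fd-count")

assert get_open_fd_count(None) is None
assert get_open_fd_count({"glusterfs.open-fd-count": 2}) == 2
```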

&gt; Fixes: #1909
&gt; Change-Id: I50911679142b52f854baf20c187962a2a3698f2d
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
&gt; Cherry picked from commit de1b26d68e31b029a59e59a47b51a7e3e6fbfe22
&gt; Reviewed on upstream link https://github.com/gluster/glusterfs/pull/1910

Fixes: #1909
Change-Id: I50911679142b52f854baf20c187962a2a3698f2d
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;</content>
</entry>
<entry>
<title>cluster/afr: Fix race in lockinfo (f)getxattr</title>
<updated>2021-04-12T13:03:39+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@users.noreply.github.com</email>
</author>
<published>2021-02-24T15:44:55+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=7feaeeabd3ad0b1410e78f584b7c5bbfb41ae0e6'/>
<id>7feaeeabd3ad0b1410e78f584b7c5bbfb41ae0e6</id>
<content type='text'>
* cluster/afr: Fix race in lockinfo (f)getxattr

A shared dictionary was updated outside the lock after having updated
the number of remaining answers. This means that one thread may be
processing the last answer and unwinding the request before another
thread completes updating the dict.

    Thread 1                           Thread 2

    LOCK()
    call_cnt-- (=1)
    UNLOCK()
                                       LOCK()
                                       call_cnt-- (=0)
                                       UNLOCK()
                                       update_dict(dict)
                                       if (call_cnt == 0) {
                                           STACK_UNWIND(dict);
                                       }
    update_dict(dict)
    if (call_cnt == 0) {
        STACK_UNWIND(dict);
    }

The updates from thread 1 are lost.

This patch also reduces the work done inside the locked region and
reduces code duplication.
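
The fix can be modeled in Python (a hedged sketch under the assumption
that the locked region now covers both the dict update and the counter
decrement; names are illustrative, not from the patch):

```python
import threading

# Model of the fix: update the shared dict INSIDE the same locked region
# that decrements the remaining-answer counter, so whichever thread
# observes call_cnt == 0 is guaranteed to see every update.
lock = threading.Lock()
state = {"call_cnt": 2, "dict": {}}
unwound = []   # stands in for STACK_UNWIND

def answer_cbk(src, value):
    with lock:
        state["dict"][src] = value        # update_dict(dict), now under LOCK()
        state["call_cnt"] -= 1
        last = state["call_cnt"] == 0
    if last:
        unwound.append(dict(state["dict"]))   # unwind with the complete dict

threads = [threading.Thread(target=answer_cbk, args=(s, s.upper()))
           for s in ("brick0", "brick1")]
for t in threads: t.start()
for t in threads: t.join()
assert unwound == [{"brick0": "BRICK0", "brick1": "BRICK1"}]   # no updates lost
```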

Fixes: #2161
Change-Id: Idc0d34ab19ea6031de0641f7b05c624d90fac8fa
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;</content>
</entry>
<entry>
<title>extras: disable lookup-optimize in virt and block groups</title>
<updated>2021-04-09T16:41:35+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2021-03-17T09:59:54+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=779f86d860e1ab98ee1c9bfc5214c9bd526a98ed'/>
<id>779f86d860e1ab98ee1c9bfc5214c9bd526a98ed</id>
<content type='text'>
lookup-optimize doesn't provide any benefit for virtualized
environments and gluster-block workloads, but it's known to cause
corruption in some cases when sharding is also enabled and the volume
is expanded or shrunk.

For this reason, we disable lookup-optimize by default in those
environments.
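
The change can be illustrated with a hypothetical fragment of the virt
group profile, where options are plain key=value lines (exact option list
and surrounding entries may differ in the release):

```
# extras/group-virt.example (illustrative fragment)
performance.quick-read=off
cluster.lookup-optimize=off
```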

Fixes: #2253
Change-Id: I25861aa50b335556a995a9c33318dd3afb41bf71
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;</content>
</entry>
<entry>
<title>afr: fix directory entry count</title>
<updated>2021-04-09T16:30:14+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2021-03-08T23:24:07+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=c3172d4883adf0bf277fd3a05825845978c7d7f2'/>
<id>c3172d4883adf0bf277fd3a05825845978c7d7f2</id>
<content type='text'>
AFR may hide some existing entries from a directory when reading it
because they are generated internally for private management. However,
the number of entries returned by the readdir() function is not updated
accordingly, so it may report a number higher than the number of real
entries present in the gf_dirent list.

This may cause unexpected behavior of clients, including gfapi which
incorrectly assumes that there was an entry when the list was actually
empty.

This patch also makes the check in gfapi more robust to avoid similar
issues that could appear in the future.
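
The count fix above can be sketched in Python (a hedged model; the entry
names are invented for illustration and the real change operates on the
gf_dirent list in C):

```python
# Sketch of the fix: when internal entries are filtered out of a
# directory listing, the returned entry count must be recomputed from
# the final list, not taken from what the bricks originally reported.
PRIVATE_NAMES = {".glusterfs", ".landfill"}   # illustrative names only

def filter_entries(entries):
    visible = [e for e in entries if e not in PRIVATE_NAMES]
    # Return the count derived from the filtered list itself.
    return visible, len(visible)

entries, count = filter_entries([".glusterfs", "a.txt", "b.txt"])
assert count == len(entries) == 2
```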

Fixes: #2232
Change-Id: I81ba3699248a53ebb0ee4e6e6231a4301436f763
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;</content>
</entry>
<entry>
<title>doc: Updated release 9.1 notes (#2302)</title>
<updated>2021-03-30T05:04:44+00:00</updated>
<author>
<name>Rinku Kothiya</name>
<email>rkothiya@redhat.com</email>
</author>
<published>2021-03-30T05:04:44+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=74f93a8662858a3b41e0c153cc0976ea76ff1eae'/>
<id>74f93a8662858a3b41e0c153cc0976ea76ff1eae</id>
<content type='text'>
Added
* Provided an option to enable/disable storage.linux-io_uring during compilation
* Heal data in 1MB chunks instead of 128KB to improve healing performance

Updates: #2301

Change-Id: Iae49287cca00681426b4ecac85f1122912492ed5
Signed-off-by: Rinku Kothiya &lt;rkothiya@redhat.com&gt;</content>
</entry>
<entry>
<title>afr: make fsync post-op aware of inodelk count (#2273) (#2297)</title>
<updated>2021-03-29T05:35:13+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2021-03-29T05:35:13+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=b313a20f342b74809972ed308c1415769a881fc5'/>
<id>b313a20f342b74809972ed308c1415769a881fc5</id>
<content type='text'>
Problem:
Since commit bd540db1e, eager-locking was enabled for fsync. But on
certain VM workloads with sharding enabled, the shard xlator keeps sending
fsync on the base shard. This can cause blocked inodelks from other
clients (including shd) to time out due to call bail.

Fix:
Make afr fsync aware of inodelk count and not delay post-op + unlock
when inodelk count &gt; 1, just like writev.

Code is restructured so that any fd based AFR_DATA_TRANSACTION can be made
aware by setting GLUSTERFS_INODELK_DOM_COUNT in xdata request.
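
The decision can be sketched in Python (a hedged model of the behavior
described above; the function name and signature are invented for
illustration, not taken from the AFR code):

```python
# Sketch of the decision this patch makes for fsync: if other clients are
# waiting on the same inodelk domain (count above 1), do the post-op and
# unlock immediately instead of holding the eager lock.
def should_delay_postop(fop, inodelk_count):
    # writev already behaved this way; the patch extends it to fsync.
    if fop in ("writev", "fsync") and inodelk_count > 1:
        return False
    return True

assert should_delay_postop("fsync", inodelk_count=2) is False
assert should_delay_postop("fsync", inodelk_count=1) is True
```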

Note: We do not yet know why VMs go into a paused state because of the
blocked inodelks, but this patch should be a first step in reducing the
occurrence.

Updates: #2198
Change-Id: Ib91ebdd3101d590c326e69c829cf9335003e260b
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;</content>
</entry>
<entry>
<title>configure: add linux-io_uring flag (#2060) (#2295)</title>
<updated>2021-03-29T05:29:12+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2021-03-29T05:29:12+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=fc1b424cc07572066d7bb4f9064664946842d70a'/>
<id>fc1b424cc07572066d7bb4f9064664946842d70a</id>
<content type='text'>
By default, if liburing is not present on the machine where gluster rpms are
being built, then the built rpm won't have the feature present in posix.so.
While this is obviously displayed in the ./configure's summary, it means the
feature won't work on a target machine where the rpm is installed, even if the
target has Linux kernel &gt;=5.1 and liburing installed.

I think it is better to have a configure option `--enable-linux-io_uring` which
is on by default. That way, build machines without liburing will error out by
default and will need to either install the library and headers or run
`./configure --disable-linux-io_uring` to compile.
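
In practice the build-machine choice looks like this (commands are
illustrative; the package name assumes a Fedora-style distribution):

```
# Either provide the library and headers:
dnf install liburing-devel
# ...or explicitly opt out of the io_uring feature:
./configure --disable-linux-io_uring
```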

Fixes: #2063
Change-Id: Ide1daa11b3513210d12be8d2cb683a4084d41e18
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;</content>
</entry>
<entry>
<title>cli: syntax check for arbiter volume creation (#2207) (#2222)</title>
<updated>2021-03-23T07:46:57+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2021-03-23T07:46:57+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=81cb602908d47e8a4ed07e1e21746e118690aceb'/>
<id>81cb602908d47e8a4ed07e1e21746e118690aceb</id>
<content type='text'>
commit 8e7bfd6a58b444b26cb50fb98870e77302f3b9eb changed the syntax for
arbiter volume creation to 'replica 2 arbiter 1', while still allowing
the old syntax of 'replica 3 arbiter 1'. But while doing so, it also
removed a conditional check, thereby allowing replica count &gt; 3. This
patch fixes it.
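
The restored check can be sketched in Python (a hedged model; the function
is invented for illustration and the real validation lives in the CLI's C
code):

```python
# Sketch of the restored validation: with an arbiter, only replica counts
# 2 and 3 are meaningful ("replica 2 arbiter 1" is the new syntax and
# "replica 3 arbiter 1" the old one, both meaning 2 data bricks plus
# 1 arbiter brick).
def valid_arbiter_counts(replica, arbiter):
    if arbiter != 1:
        return False
    return replica in (2, 3)

assert valid_arbiter_counts(2, 1)
assert valid_arbiter_counts(3, 1)
assert not valid_arbiter_counts(4, 1)   # the case the patch rejects again
```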

Updates: #2192
Change-Id: Ie109325adb6d78e287e658fd5f59c26ad002e2d3
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;</content>
</entry>
<entry>
<title>afr: remove priv-&gt;root_inode (#2244) (#2279)</title>
<updated>2021-03-23T07:45:33+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2021-03-23T07:45:33+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=9d3bc96ed5842b43689a1f989255fac4f019c88f'/>
<id>9d3bc96ed5842b43689a1f989255fac4f019c88f</id>
<content type='text'>
priv-&gt;root_inode seems to be a remnant of the pump xlator and was getting
populated in the discover code path. The thin-arbiter code used it to
populate loc info, but in the case of some daemons like quotad, the
discover path for the root gfid is not hit, causing them to crash.

Fix:
The root inode can be accessed via this-&gt;itable-&gt;root, so use that and
remove the priv-&gt;root_inode instances from the afr code.
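
A toy Python model of the change (hedged sketch; class and function names
are invented for illustration, the real code is C):

```python
# Toy model: instead of caching the root inode in the translator's
# private data (populated only on some code paths), always resolve it
# from the inode table, which every daemon initializes.
class InodeTable:
    def __init__(self):
        self.root = object()   # stands in for the root inode

class Xlator:
    def __init__(self):
        self.itable = InodeTable()

def root_inode(this):
    return this.itable.root    # mirrors this->itable->root in the patch

x = Xlator()
assert root_inode(x) is x.itable.root
```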

Fixes: #2234
Change-Id: Iec59c157f963a4dc455652a5c85a797d00cba52a
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;</content>
</entry>
<entry>
<title>glusterd: Fix for shared storage in ipv6 env (#1972) (#2145)</title>
<updated>2021-02-16T12:10:47+00:00</updated>
<author>
<name>Nikhil Ladha</name>
<email>nladha@redhat.com</email>
</author>
<published>2021-02-16T12:10:47+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=532e6a678c610dc8b20271b5597c3f5282353f7d'/>
<id>532e6a678c610dc8b20271b5597c3f5282353f7d</id>
<content type='text'>
Issue:
Mounting the shared storage volume was failing in an IPv6 environment if the hostnames were FQDNs.
The brick name for the volume was being truncated; as a result, volume creation was failing.

Change-Id: Ib38993724c709b35b603f9ac666630c50c932c3e
Updates: #1406
Signed-off-by: nik-redhat &lt;nladha@redhat.com&gt;</content>
</entry>
</feed>
