<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/rpc, branch devel</title>
<subtitle>GlusterFS is a distributed file-system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system.</subtitle>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/'/>
<entry>
<title>glusterd: After upgrade on release 9.1 glusterd protocol is broken (#2352)</title>
<updated>2021-04-23T13:56:49+00:00</updated>
<author>
<name>mohit84</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2021-04-23T13:56:49+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=dbbad5d69c73a62888a22adf40901e9a578c6518'/>
<id>dbbad5d69c73a62888a22adf40901e9a578c6518</id>
<content type='text'>
* glusterd: After upgrade on release 9.1 glusterd protocol is broken

After an upgrade to release-9 the glusterd protocol is broken
because on the upgraded nodes glusterd is not able to find an
actor at the expected index in the rpc procedure table. The new proc
(GLUSTERD_MGMT_V3_POST_COMMIT) was introduced in the middle of the table
by a patch (https://review.gluster.org/#/c/glusterfs/+/24771/), so the
indices of the existing actors changed on the upgraded nodes and
glusterd fails.

Solution: Move the proc (GLUSTERD_MGMT_V3_POST_COMMIT) to the last
          position in the proc table to avoid the issue.
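
The index-shift failure mode can be illustrated with a small Python
sketch (the table contents are made up, not the actual glusterd proc
tables):

```python
# Hypothetical sketch: rpc actors are dispatched by their index in a
# procedure table, so every node must agree on the ordering.

OLD_TABLE = ["NULL", "LOCK", "UNLOCK", "COMMIT"]

# Inserting the new proc in the middle shifts every later actor's
# index, breaking peers that still use the old table:
BROKEN_TABLE = ["NULL", "LOCK", "POST_COMMIT", "UNLOCK", "COMMIT"]

# Appending it at the end keeps all existing indices stable:
FIXED_TABLE = ["NULL", "LOCK", "UNLOCK", "COMMIT", "POST_COMMIT"]

def actor_at(table, index):
    """Return the actor a node would dispatch for a given index."""
    return table[index]
```

An old peer calling index 2 expects UNLOCK; the broken table hands it
POST_COMMIT, while the fixed table still resolves it correctly.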

Fixes: #2351
Change-Id: I36575fd4302944336a75a8d4a305401a7128fd84
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
* glusterd: After upgrade on release 9.1 glusterd protocol is broken

After an upgrade to release-9 the glusterd protocol is broken
because on the upgraded nodes glusterd is not able to find an
actor at the expected index in the rpc procedure table. The new proc
(GLUSTERD_MGMT_V3_POST_COMMIT) was introduced in the middle of the table
by a patch (https://review.gluster.org/#/c/glusterfs/+/24771/), so the
indices of the existing actors changed on the upgraded nodes and
glusterd fails.

Solution: Move the proc (GLUSTERD_MGMT_V3_POST_COMMIT) to the last
          position in the proc table to avoid the issue.

Fixes: #2351
Change-Id: I36575fd4302944336a75a8d4a305401a7128fd84
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;</pre>
</div>
</content>
</entry>
<entry>
<title>afr: don't reopen fds on which POSIX locks are held (#1980)</title>
<updated>2021-03-27T10:14:04+00:00</updated>
<author>
<name>Karthik Subrahmanya</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2021-03-27T10:14:04+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=2a524ad0738be491dcac3cc96db1411320168c72'/>
<id>2a524ad0738be491dcac3cc96db1411320168c72</id>
<content type='text'>
When client.strict-locks is enabled on a volume and POSIX locks are
held on files, do not re-open such fds after the clients disconnect
and reconnect, since re-opening them might lead to multiple clients
acquiring the locks and cause data corruption.
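
A minimal Python sketch of the intended policy (field names such as
has_posix_lock are hypothetical, not the afr client structures):

```python
# Illustrative sketch only, not the afr client code: after a reconnect,
# re-open only those fds that hold no POSIX locks, so a lock the server
# already released cannot end up held by two clients at once.

def fds_to_reopen(open_fds, strict_locks):
    """open_fds: list of dicts like {"fd": 3, "has_posix_lock": False}."""
    if not strict_locks:
        return [e["fd"] for e in open_fds]
    return [e["fd"] for e in open_fds if not e["has_posix_lock"]]
```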

Change-Id: I8777ffbc2cc8d15ab57b58b72b56eb67521787c5
Fixes: #1977
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When client.strict-locks is enabled on a volume and POSIX locks are
held on files, do not re-open such fds after the clients disconnect
and reconnect, since re-opening them might lead to multiple clients
acquiring the locks and cause data corruption.

Change-Id: I8777ffbc2cc8d15ab57b58b72b56eb67521787c5
Fixes: #1977
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;</pre>
</div>
</content>
</entry>
<entry>
<title>CID 1412333 (#1 of 1): Copy into fixed size buffer (STRING_OVERFLOW) (#2264)</title>
<updated>2021-03-22T10:07:20+00:00</updated>
<author>
<name>Ayush Ujjwal</name>
<email>77244483+aujjwal-redhat@users.noreply.github.com</email>
</author>
<published>2021-03-22T10:07:20+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=cbdd7dc11189187e3a8191ed35de47b4715e6e21'/>
<id>cbdd7dc11189187e3a8191ed35de47b4715e6e21</id>
<content type='text'>
* CID 1412333 (#1 of 1): Copy into fixed size buffer (STRING_OVERFLOW)

CID: 1412333

Description:
`path` length might overrun the 108-character fixed-size string. Added a condition to check the size of `path`.

Updates: #1060

Change-Id: I4e7c58ab3a3f6807992dfc3023c21f762bff6b32
Signed-off-by: aujjwal-redhat &lt;aujjwal@redhat.com&gt;

* refactored the code

Change-Id: I1eaa6fc59e43f76224f44b5f8c54495b67076651
Signed-off-by: aujjwal-redhat &lt;aujjwal@redhat.com&gt;

* added strncpy in place of strcpy to copy at most as many characters as fit in addr->sun_path
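
The bounded-copy idea can be sketched in Python (the real fix is in C,
against the fixed 108-byte sockaddr_un.sun_path buffer):

```python
# Illustrative sketch: strcpy into the fixed-size sun_path buffer could
# overflow, so validate the length first and fail cleanly if the path
# does not fit together with its NUL terminator.

SUN_PATH_MAX = 108  # size of sockaddr_un.sun_path on Linux

def set_sun_path(path):
    """Return the path if it fits with its NUL terminator, else None."""
    if len(path) + 1 > SUN_PATH_MAX:
        return None  # too long: fail, like the goto err path in the C code
    return path
```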

Change-Id: I9b4eeed3dd0c00d052dcaaf6b34597fbfe7fe1a2
Signed-off-by: aujjwal-redhat &lt;aujjwal@redhat.com&gt;

* Removed a redundant goto err, as control already fell through to err

Change-Id: Ib40c11537b57aea72d3095eda86bd5b541930550
Signed-off-by: aujjwal-redhat &lt;aujjwal@redhat.com&gt;</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
* CID 1412333 (#1 of 1): Copy into fixed size buffer (STRING_OVERFLOW)

CID: 1412333

Description:
`path` length might overrun the 108-character fixed-size string. Added a condition to check the size of `path`.

Updates: #1060

Change-Id: I4e7c58ab3a3f6807992dfc3023c21f762bff6b32
Signed-off-by: aujjwal-redhat &lt;aujjwal@redhat.com&gt;

* refactored the code

Change-Id: I1eaa6fc59e43f76224f44b5f8c54495b67076651
Signed-off-by: aujjwal-redhat &lt;aujjwal@redhat.com&gt;

* added strncpy in place of strcpy to copy at most as many characters as fit in addr->sun_path

Change-Id: I9b4eeed3dd0c00d052dcaaf6b34597fbfe7fe1a2
Signed-off-by: aujjwal-redhat &lt;aujjwal@redhat.com&gt;

* Removed a redundant goto err, as control already fell through to err

Change-Id: Ib40c11537b57aea72d3095eda86bd5b541930550
Signed-off-by: aujjwal-redhat &lt;aujjwal@redhat.com&gt;</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: fix for starting brick on new port (#2090)</title>
<updated>2021-02-10T09:37:32+00:00</updated>
<author>
<name>Nikhil Ladha</name>
<email>nladha@redhat.com</email>
</author>
<published>2021-02-10T09:37:32+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=5223300bc65eb88a1fbe27bd2702dbb92768cb27'/>
<id>5223300bc65eb88a1fbe27bd2702dbb92768cb27</id>
<content type='text'>
The errno set by the runner code was not correct when bind() failed
to assign an already occupied port in __socket_server_bind().

Fix:
Updated the code to return the correct errno from
__socket_server_bind() if bind() fails with EADDRINUSE, and to
use the errno returned by runner_run() to retry allocating a new port
for the brick process.
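
The retry flow can be sketched in Python (function names are invented
for illustration, not the glusterd symbols):

```python
# Illustrative sketch: surface EADDRINUSE from the bind step so the
# caller retries with a fresh port instead of giving up on the brick.
import errno

def bind_brick(port, ports_in_use):
    """Model of the bind step: report the real bind() errno."""
    if port in ports_in_use:
        return errno.EADDRINUSE
    return 0

def start_brick(ports_in_use, candidate_ports):
    """Model of the caller: on EADDRINUSE, retry with the next port."""
    for port in candidate_ports:
        if bind_brick(port, ports_in_use) == 0:
            return port
    return None  # no free port among the candidates
```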

Fixes: #1101

Change-Id: If124337f41344a04f050754e402490529ef4ecdc
Signed-off-by: nik-redhat &lt;nladha@redhat.com&gt;</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The errno set by the runner code was not correct when bind() failed
to assign an already occupied port in __socket_server_bind().

Fix:
Updated the code to return the correct errno from
__socket_server_bind() if bind() fails with EADDRINUSE, and to
use the errno returned by runner_run() to retry allocating a new port
for the brick process.

Fixes: #1101

Change-Id: If124337f41344a04f050754e402490529ef4ecdc
Signed-off-by: nik-redhat &lt;nladha@redhat.com&gt;</pre>
</div>
</content>
</entry>
<entry>
<title>cli/glusterd: conscious language changes for geo-rep</title>
<updated>2020-12-30T10:25:22+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2020-10-08T05:37:03+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=aa29aaf1c982c95c7df9d765288c2f779ac4bfb2'/>
<id>aa29aaf1c982c95c7df9d765288c2f779ac4bfb2</id>
<content type='text'>
Replace master and slave terminology in geo-replication with primary and
secondary respectively.

All instances are replaced in cli and glusterd.

Changes to other parts of the code to follow in separate patches.
tests/00-geo-rep/* are passing thus far.

Updates: #1415
Change-Id: Ifb12b7f5ce927a4a61bda1e953c1eb0fdfc8a7c5
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Replace master and slave terminology in geo-replication with primary and
secondary respectively.

All instances are replaced in cli and glusterd.

Changes to other parts of the code to follow in separate patches.
tests/00-geo-rep/* are passing thus far.

Updates: #1415
Change-Id: Ifb12b7f5ce927a4a61bda1e953c1eb0fdfc8a7c5
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd/cli: enhance rebalance-status after replace/reset-brick (#1869)</title>
<updated>2020-12-08T10:51:35+00:00</updated>
<author>
<name>Tamar Shacked</name>
<email>tshacked@redhat.com</email>
</author>
<published>2020-12-08T10:51:35+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=ae8cfe5baaff5b3e4c55f49ec71811e32a885271'/>
<id>ae8cfe5baaff5b3e4c55f49ec71811e32a885271</id>
<content type='text'>
* glusterd/cli: enhance rebalance-status after replace/reset-brick

Rebalance status is being reset during replace/reset-brick operations.
This causes 'volume status' to show rebalance as "not started".

Fix:
change rebalance-status to "reset due to (replace|reset)-brick"

Change-Id: I6e3372d67355eb76c5965984a23f073289d4ff23
Signed-off-by: Tamar Shacked &lt;tshacked@redhat.com&gt;

* glusterd/cli: enhance rebalance-status after replace/reset-brick

Rebalance status is being reset during replace/reset-brick operations.
This causes 'volume status' to show rebalance as "not started".

Fix: change rebalance-status to "reset due to (replace|reset)-brick"

Fixes: #1717
Signed-off-by: Tamar Shacked &lt;tshacked@redhat.com&gt;

Change-Id: I1e3e373ca3b2007b5b7005b6c757fb43801fde33

* cli: changing rebal task ID to "None" in case status is being reset

Rebalance status is being reset during replace/reset-brick operations.
This causes 'volume status' to show rebalance as "not started".

Fix:
change rebalance-status to "reset due to (replace|reset)-brick"

Fixes: #1717

Change-Id: Ia73a8bea3dcd8e51acf4faa6434c3cb0d09856d0
Signed-off-by: Tamar Shacked &lt;tshacked@redhat.com&gt;</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
* glusterd/cli: enhance rebalance-status after replace/reset-brick

Rebalance status is being reset during replace/reset-brick operations.
This causes 'volume status' to show rebalance as "not started".

Fix:
change rebalance-status to "reset due to (replace|reset)-brick"

Change-Id: I6e3372d67355eb76c5965984a23f073289d4ff23
Signed-off-by: Tamar Shacked &lt;tshacked@redhat.com&gt;

* glusterd/cli: enhance rebalance-status after replace/reset-brick

Rebalance status is being reset during replace/reset-brick operations.
This causes 'volume status' to show rebalance as "not started".

Fix: change rebalance-status to "reset due to (replace|reset)-brick"

Fixes: #1717
Signed-off-by: Tamar Shacked &lt;tshacked@redhat.com&gt;

Change-Id: I1e3e373ca3b2007b5b7005b6c757fb43801fde33

* cli: changing rebal task ID to "None" in case status is being reset

Rebalance status is being reset during replace/reset-brick operations.
This causes 'volume status' to show rebalance as "not started".

Fix:
change rebalance-status to "reset due to (replace|reset)-brick"

Fixes: #1717

Change-Id: Ia73a8bea3dcd8e51acf4faa6434c3cb0d09856d0
Signed-off-by: Tamar Shacked &lt;tshacked@redhat.com&gt;</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: resource leaks (#1748)</title>
<updated>2020-11-12T08:34:27+00:00</updated>
<author>
<name>Nikhil Ladha</name>
<email>nladha@redhat.com</email>
</author>
<published>2020-11-12T08:34:27+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=b8ad18a260bf1fb34af92bd948df9142e0a08f51'/>
<id>b8ad18a260bf1fb34af92bd948df9142e0a08f51</id>
<content type='text'>
Issue:
iobref was not freed before exiting the function
when all the checks passed, which caused a resource
leak.

Fix:
Modified the code to avoid an extra reference to
the label, to free the iobref and iobuf if they are not NULL,
and then to exit the function.
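
The single-exit cleanup pattern, sketched in Python with stand-in
objects instead of the real iobuf/iobref structures:

```python
# Illustrative sketch: one exit path releases whatever was actually
# allocated, so the success path no longer leaks the references.

def submit_request(alloc_iobuf, alloc_iobref):
    iobuf = {"freed": False} if alloc_iobuf else None
    iobref = {"freed": False} if alloc_iobref else None
    # ... the request would be built and submitted here ...
    # single exit: free each buffer only if it was allocated
    for ref in (iobuf, iobref):
        if ref is not None:
            ref["freed"] = True
    return iobuf, iobref
```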

CID: 1430118

Updates: #1060</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Issue:
iobref was not freed before exiting the function
when all the checks passed, which caused a resource
leak.

Fix:
Modified the code to avoid an extra reference to
the label, to free the iobref and iobuf if they are not NULL,
and then to exit the function.

CID: 1430118

Updates: #1060</pre>
</div>
</content>
</entry>
<entry>
<title>rpcsvc/transport: gracefully disconnect when graph is not ready (#1671)</title>
<updated>2020-10-27T07:12:03+00:00</updated>
<author>
<name>Rafi KC</name>
<email>rafi.kavungal@iternity.com</email>
</author>
<published>2020-10-27T07:12:03+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=03d9cd7284403499287643a77fb1700a97cde588'/>
<id>03d9cd7284403499287643a77fb1700a97cde588</id>
<content type='text'>
* rpcsvc/transport: gracefully disconnect when graph is not ready.

A crash was reported when the brick rpc gets an accept
request from a client before the server xlator is fully inited.

The fix https://review.gluster.org/22339/ solves
the crash, but it leaves the connection alive without adding
the rpc to the xprts list of the server conf. This leads to problems
with upcall, dump, and other cleanup code.

So this patch makes the rpc fail and disconnect if a
connection is attempted before the server is fully inited.
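
A Python sketch of the intended behaviour (names are illustrative, not
the rpcsvc symbols):

```python
# Illustrative sketch: accept a transport only once the server graph is
# ready; otherwise disconnect cleanly so no untracked connection is
# left behind for upcall, dump, or cleanup to miss.

def handle_accept(server_inited, xprts):
    if not server_inited:
        return "disconnected"  # fail early, nothing dangling
    xprts.append("new-transport")  # tracked, so cleanup can find it
    return "accepted"
```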

Change-Id: I3bf1113c0da4c2614afaa2c0f4eb6abfb0d26ed0
Signed-off-by: Mohammed Rafi KC &lt;rafi.kavungal@iternity.com&gt;</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
* rpcsvc/transport: gracefully disconnect when graph is not ready.

A crash was reported when the brick rpc gets an accept
request from a client before the server xlator is fully inited.

The fix https://review.gluster.org/22339/ solves
the crash, but it leaves the connection alive without adding
the rpc to the xprts list of the server conf. This leads to problems
with upcall, dump, and other cleanup code.

So this patch makes the rpc fail and disconnect if a
connection is attempted before the server is fully inited.

Change-Id: I3bf1113c0da4c2614afaa2c0f4eb6abfb0d26ed0
Signed-off-by: Mohammed Rafi KC &lt;rafi.kavungal@iternity.com&gt;</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: add post-commit phase to the transaction</title>
<updated>2020-07-24T08:41:24+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2020-07-24T08:41:24+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=96d976e6be70d075639e69f2bab70b7de783c520'/>
<id>96d976e6be70d075639e69f2bab70b7de783c520</id>
<content type='text'>
This is part 2 of the fix; part 1 is at
https://review.gluster.org/#/c/glusterfs/+/24325/

This patch adds a post-commit phase to the mgmt v3 transaction
framework.

In the post-commit phase we replace the old auth.allow list
in case of add-brick and replace-brick.
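
The phase ordering can be sketched in Python (phase names here are
simplified, not the exact glusterd handler names):

```python
# Illustrative sketch: the transaction walks an ordered list of phases,
# and this change adds a post-commit phase right after commit.

PHASES = [
    "lock",
    "pre-validate",
    "brick-op",
    "commit",
    "post-commit",   # the newly added phase
    "post-validate",
    "unlock",
]

def run_transaction(handlers):
    """Run each phase handler, in order, if one is registered."""
    for phase in PHASES:
        if phase in handlers:
            handlers[phase]()
```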

fixes: #1391

Change-Id: I41c871d59e6252d27163b042ad710e929d7d0399
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This is part 2 of the fix; part 1 is at
https://review.gluster.org/#/c/glusterfs/+/24325/

This patch adds a post-commit phase to the mgmt v3 transaction
framework.

In the post-commit phase we replace the old auth.allow list
in case of add-brick and replace-brick.

fixes: #1391

Change-Id: I41c871d59e6252d27163b042ad710e929d7d0399
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>rpcsvc: Add latency tracking for rpc programs</title>
<updated>2020-09-04T05:14:17+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2020-09-04T05:14:17+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=da3835971a5fa1b29fd792faa8339be428a5116c'/>
<id>da3835971a5fa1b29fd792faa8339be428a5116c</id>
<content type='text'>
Added latency tracking to the rpc-handling code. With this change we
should be able to monitor the amount of time the rpc-handling code
consumes for each rpc call.
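
A Python sketch of per-procedure latency accumulation (the actual
change instruments the C rpcsvc code rather than using a decorator):

```python
# Illustrative sketch: wrap each rpc actor so the time spent in its
# handler is accumulated under the procedure's name.
import time

LATENCIES = {}  # procedure name to total seconds spent in its handler

def tracked(name, actor):
    """Wrap an rpc actor so its handling time is recorded under name."""
    def wrapper(*args):
        start = time.monotonic()
        try:
            return actor(*args)
        finally:
            elapsed = time.monotonic() - start
            LATENCIES[name] = LATENCIES.get(name, 0.0) + elapsed
    return wrapper
```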

fixes: #1466
Change-Id: I04fc7f3b12bfa5053c0fc36885f271cb78f581cd
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Added latency tracking to the rpc-handling code. With this change we
should be able to monitor the amount of time the rpc-handling code
consumes for each rpc call.

fixes: #1466
Change-Id: I04fc7f3b12bfa5053c0fc36885f271cb78f581cd
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
