<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/xlators/mgmt/glusterd/src, branch v4.1dev</title>
<subtitle>GlusterFS is a distributed file-system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system.</subtitle>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/'/>
<entry>
<title>md-cache: Implement dynamic configuration of xattr list for caching</title>
<updated>2018-01-22T04:10:52+00:00</updated>
<author>
<name>Poornima G</name>
<email>pgurusid@redhat.com</email>
</author>
<published>2018-01-09T11:56:44+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=24bf7715140586675f8d2036f4d589bc255c16dc'/>
<id>24bf7715140586675f8d2036f4d589bc255c16dc</id>
<content type='text'>
Currently, the list of xattrs that md-cache can cache is hard-coded
in the md-cache.c file; this necessitates a code change and rebuild
every time a new xattr needs to be added to the md-cache xattr cache
list.

With this patch, the user will be able to configure a comma-separated
list of xattrs to be cached by md-cache.
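
As a usage sketch (the CLI key and xattr names below are assumptions
for illustration, not taken from this commit message), the list could
be set with:

  # hypothetical key and values shown for illustration only
  gluster volume set VOLNAME xattr-cache-list "security.ima,user.org.example.attr"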

Updates #297

Change-Id: Ie35ed607d17182d53f6bb6e6c6563ac52bc3132e
Signed-off-by: Poornima G &lt;pgurusid@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/afr: Adding option to take full file lock</title>
<updated>2018-01-19T00:15:19+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2018-01-17T12:00:06+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=84c5c540b26c8f3dcb9845344dd48df063e57845'/>
<id>84c5c540b26c8f3dcb9845344dd48df063e57845</id>
<content type='text'>
Problem:
In replica 3 volumes there is a possibility of ending up in a split-brain
scenario when multiple clients write data to the same file at
non-overlapping regions in parallel.

Scenario:
- Initially all the copies are good and all the clients get the value
  of data readables as all good.
- Client C0 performs write W1, which fails on brick B0 and succeeds on
  the other two bricks.
- C1 performs write W2, which fails on B1 and succeeds on the other two bricks.
- C2 performs write W3, which fails on B2 and succeeds on the other two bricks.
- All three writes above happen in parallel and fall on different ranges,
  so afr takes granular locks and all the writes are performed in parallel.
  Since each client had its data-readables as good, it does not see the
  file going into split-brain in the in_flight_split_brain check, and hence
  performs the post-op, marking the pending xattrs. Now all the bricks
  are blamed by each other, ending up in split-brain.

Fix:
Add an option to take either a full lock or a range lock on files while
doing data transactions, to prevent the possibility of ending up in
split-brain. With this change, files take a full lock while doing IO by
default. To make use of the old range-lock behaviour, change the value
of "cluster.full-lock" to "no".
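
For example, to switch a volume back to the old range-lock behaviour
(VOLNAME is a placeholder):

  gluster volume set VOLNAME cluster.full-lock no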

Change-Id: I7893fa33005328ed63daa2f7c35eeed7c5218962
BUG: 1535438
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
</content>
</entry>
<entry>
<title>locks: added inodelk/entrylk contention upcall notifications</title>
<updated>2018-01-16T10:37:22+00:00</updated>
<author>
<name>Xavier Hernandez</name>
<email>xhernandez@datalab.es</email>
</author>
<published>2016-06-15T12:42:19+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=7ba7a4b27d124f4ee16fe4776a4670cd5b0160c4'/>
<id>7ba7a4b27d124f4ee16fe4776a4670cd5b0160c4</id>
<content type='text'>
The locks xlator is now able to send a contention notification to
the current owner of a lock.

This is only a notification; it can be used to improve the performance
of some client-side operations that might benefit from an extended
duration of lock ownership. Nothing is done if the lock owner decides
to ignore the message and not release the lock. For forced release of
acquired resources, leases must be used.

Change-Id: I7f1ad32a0b4b445505b09908a050080ad848f8e0
Signed-off-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
</content>
</entry>
<entry>
<title>glusterd: get-state memory leak fix</title>
<updated>2018-01-08T09:28:07+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-01-04T16:37:54+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=6ad4c7ff050de6faa267e42a09ba35b8641f15db'/>
<id>6ad4c7ff050de6faa267e42a09ba35b8641f15db</id>
<content type='text'>
Change-Id: Ic4fcf2087f295d3dade944efb8fd08f7e2d7d516
BUG: 1531149
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: fix up volume option flags</title>
<updated>2018-01-07T10:53:36+00:00</updated>
<author>
<name>Csaba Henk</name>
<email>csaba@redhat.com</email>
</author>
<published>2018-01-05T14:29:13+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=013e06bf99ee1e0e374d140bef5ddc92d62b4418'/>
<id>013e06bf99ee1e0e374d140bef5ddc92d62b4418</id>
<content type='text'>
In the glusterd volfile generation code, options should be ornamented
with the VOLOPT_FLAG_* flags. However, some are ornamented with
OPT_FLAG_* flags (which are meant to be used in xlator context).

The impact is: the OPT_FLAG_* flag that occurs is OPT_FLAG_CLIENT_OPT,
which has the same value as VOLOPT_FLAG_XLATOR_OPT, so what was
meant is "option affects clients" while what was there means
"option enables/disables xlators". Because of this semantic
shift, the op version might be incorrectly calculated for volumes
and clients. (At this point it's a theoretical possibility.
Actual occurrence might depend on the connecting client &amp; server
versions; it's also possible that there exists a proof-of-concept
scenario, but it's unrealistic.)

This commit eliminates the OPT_FLAG_* occurrences from the glusterd
code and replaces them with the appropriate VOLOPT_FLAG_* flags.

Change-Id: Ia4e6fbac738d5a8d889c0f5561c4dea6783250b1
Signed-off-by: Csaba Henk &lt;csaba@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: connect to an existing brick process when quorum status is NOT_APPLICABLE_QUORUM</title>
<updated>2018-01-05T07:31:43+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-01-03T08:59:51+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=01caa839ebda29c2fe209c4767626f2f49ea3e71'/>
<id>01caa839ebda29c2fe209c4767626f2f49ea3e71</id>
<content type='text'>
First of all, this patch reverts commit 635c1c3, as it was causing a
regression where bricks did not come up in time when a node was rebooted.
This patch tries to fix the problem in a different way, by simply trying
to connect to an existing running brick when the quorum status is not
applicable.

Change-Id: I0efb5901832824b1c15dcac529bffac85173e097
BUG: 1509845
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: Enable geo-rep test cases</title>
<updated>2018-01-05T07:08:10+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2017-08-11T08:55:18+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=60a992e69a7cf5a588f5139709d325125d6f04fb'/>
<id>60a992e69a7cf5a588f5139709d325125d6f04fb</id>
<content type='text'>
This patch re-enables the geo-rep test cases.
Along with that, it makes the following optimizations.

1. Use EXPECT_WITHIN instead of sleep.
2. Clean up the geo-rep ssh key after the test.
3. Change gverify.sh and S56glusterd-geo-rep-create-post.sh
   to use the given ssh identity file for geo-rep create.
4. Make gluster-command-dir configurable and introduce
   slave-gluster-command-dir, which point to the parent directories
   of the gluster binaries on the master and slave respectively
   (see the usage sketch after this list).
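
As a usage sketch for item 4, assuming the new knob is exposed as a
geo-replication config option (volume, host and path names below are
placeholders):

  gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL \
      config slave-gluster-command-dir /usr/local/sbin/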

Change-Id: Ia7696278d9dd3ba04224dcd7c3564088ca970b04
BUG: 1480491
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Nullify pmap entry for bricks belonging to same port</title>
<updated>2018-01-03T01:22:58+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-01-02T14:56:31+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=0e0fb43d322ade2e1187ea7e4389bf65c66a6ad7'/>
<id>0e0fb43d322ade2e1187ea7e4389bf65c66a6ad7</id>
<content type='text'>
Commit 30e0b86 tried to address all the stale port issues glusterd had
when a brick is abruptly killed. In the brick multiplexing case, because
of a bug, the portmap entry was not getting removed. This patch
addresses that.

Change-Id: Ib020b967a9b92f1abae9cab9492f0cacec59aaa1
BUG: 1530281
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>mgmt/glusterd: Adding validation for setting quorum-count</title>
<updated>2017-12-29T11:19:52+00:00</updated>
<author>
<name>karthik-us</name>
<email>ksubrahm@redhat.com</email>
</author>
<published>2017-12-28T13:12:16+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=cc425b0de57d574f6e68b6ef5edfdf99cd8353af'/>
<id>cc425b0de57d574f6e68b6ef5edfdf99cd8353af</id>
<content type='text'>
In a replicated volume it was possible to set the quorum-count value
anywhere in the range [1 - 2147483647]. This patch adds validation so
that the quorum-count set on a volume can be at most replica_count.
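
For example, on a replica 3 volume, assuming the option is exposed as
cluster.quorum-count (VOLNAME is a placeholder), a value within
replica_count is accepted:

  gluster volume set VOLNAME cluster.quorum-type fixed
  gluster volume set VOLNAME cluster.quorum-count 2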

Change-Id: I13952f3c6cf498c9f2b91161503fc0fba9d94898
BUG: 1529515
Signed-off-by: karthik-us &lt;ksubrahm@redhat.com&gt;
</content>
</entry>
<entry>
<title>snapshot: after brick reset/replace, snapshot creation fails</title>
<updated>2017-12-29T09:49:40+00:00</updated>
<author>
<name>Sunny Kumar</name>
<email>sunkumar@redhat.com</email>
</author>
<published>2017-12-27T17:45:37+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=e126368f6b5231fb2acf307fbd8e343e52d61885'/>
<id>e126368f6b5231fb2acf307fbd8e343e52d61885</id>
<content type='text'>
Problem: after brick reset/replace, snapshot creation fails.

Solution: During brick reset/replace, when we validate and aggregate
          dictionary data from another node, the 'mount_dir' value,
          which is critical for snapshot creation, was being rewritten
          to NULL.

Change-Id: Iabefbfcef7d8ac4cbd2a241e821c0e51492c093e
BUG: 1512451
Signed-off-by: Sunny Kumar &lt;sunkumar@redhat.com&gt;
</content>
</entry>
</feed>
