<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests/include.rc, branch v5.5</title>
<subtitle>GlusterFS is a distributed file system capable of scaling to several petabytes. It aggregates various storage bricks over InfiniBand RDMA or TCP/IP interconnects into one large parallel network file system.</subtitle>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/'/>
<entry>
<title>Bump up timeout for tests on AWS</title>
<updated>2019-02-07T08:02:01+00:00</updated>
<author>
<name>Nigel Babu</name>
<email>nigelb@redhat.com</email>
</author>
<published>2019-01-21T06:47:04+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=183cac3a642ae4a1b750bc11673045fc0ff66a6d'/>
<id>183cac3a642ae4a1b750bc11673045fc0ff66a6d</id>
<content type='text'>
Fixes: bz#1673268
Change-Id: I2b9be45f199f6436b858536c6f49be85902217f0
Signed-off-by: Nigel Babu &lt;nigelb@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: Preserve tarball of tests when they timeout</title>
<updated>2018-08-27T02:42:19+00:00</updated>
<author>
<name>ShyamsundarR</name>
<email>srangana@redhat.com</email>
</author>
<published>2018-08-14T18:00:41+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=d34455a2d2d267c545c25ad7fa710ae2677d5afb'/>
<id>d34455a2d2d267c545c25ad7fa710ae2677d5afb</id>
<content type='text'>
When tests timeout, the timeout command sends TERM
signal to the command being executed. In the case of run-tests.sh
it invokes prove, which further invokes perl and finally the test
is run using bash. The TERM signal does not seem to be reachnig
the end bash that is actually executing the tests, and hence
when any test is terminated due to a timeout, the cleanup routine
in include.rc does not get a chance to run and preserve the
tarball.

Further, cleanup generates the tarball, but cleanup is invoked at
the beginning and end of every test, and at times in between
as well. This produced far too many tarballs if we preserved
every one generated by cleanup.

This patch hence moves tarball generation into run-tests.sh
instead, stores the tarballs as &lt;test&gt;-iteration-&lt;n&gt;.tar,
and prints the tarball name generated and stored for each iteration.

This should help relate a failed run to its tarball iteration #
and to the relevant logs.

The patch also adds a -p option to run-tests.sh for unit-testing
purposes: running a test in a loop without the option generates one
tarball per iteration, whereas with the option only the last tarball
is preserved, saving space on smaller unit-test setups.

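As an illustration, the per-iteration preservation described above
could look roughly like the sketch below. This is not the actual
run-tests.sh code; the function name, the preserve_last flag and the
log path are illustrative only.

    run_one_test ()
    {
        local t="$1" iteration="$2"
        prove -vmfe '/bin/bash' "$t"
        local ret=$?
        local tarball="$(basename "$t" .t)-iteration-$iteration.tar"
        tar -cf "$tarball" /var/log/glusterfs
        echo "Logs preserved in $tarball"
        if [ "$preserve_last" = "yes" ]; then
            # -p mode: drop the previous iteration's tarball to save space.
            rm -f "$(basename "$t" .t)-iteration-$((iteration - 1)).tar"
        fi
        return $ret
    }
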
Fixes: bz#1614062
Change-Id: I0aee76c89df0691cf4d0c1fcd4c04dffe0d7c896
Signed-off-by: ShyamsundarR &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: address test failures with brick mux enabled</title>
<updated>2018-05-31T04:27:26+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2018-05-15T03:52:26+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=f1f2bfd7c966c6d1efc5c0397caf056cd38ddbbc'/>
<id>f1f2bfd7c966c6d1efc5c0397caf056cd38ddbbc</id>
<content type='text'>
This patch addresses following:
1. On volume stop, for the last brick, pmap_registry_remove () is
invoked by glusterd.
2. If a brick process is sigkilled, remove all the associated brick
instances from the portmap.
3. Bump up PROCESS_UP_TIMEOUT to 45.
4. gf_attach to kill a brick takes more time in mux (which is an
issue that needs a fix), but in the interim, give br-state-check.t
more time to complete (there are 2 kill_bricks, each taking 120
seconds, and the test usually passes in 30 odd seconds, hence bumping
this up to 350 seconds)
5. The test bug-1559004-EMLINK-handling.t is taking ~950 seconds at
times on master without mux, in mux cases, when it fails, it is almost
at the last iteration, hence bumping the timeout for this test case
to reduce regression error rates

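For reference, per-test timeouts in the test suite are declared near
the top of the .t file itself; below is a minimal sketch of such an
override, assuming run-tests.sh honours a SCRIPT_TIMEOUT variable read
from the test script (the actual test body is elided):

    #!/bin/bash
    SCRIPT_TIMEOUT=350

    . $(dirname $0)/../include.rc   # relative depth depends on where the .t lives
    cleanup
    # ... the actual test steps go here ...
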
Updates: bz#1577672
Change-Id: I1922675e112baca4c125c4c094eaa42a11e34e67
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/dht: fixes to parallel renames to same destination codepath</title>
<updated>2018-05-07T06:04:10+00:00</updated>
<author>
<name>Raghavendra G</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2018-02-08T11:42:41+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=5e356e3f470779433bfe6b0b368676062842b367'/>
<id>5e356e3f470779433bfe6b0b368676062842b367</id>
<content type='text'>
Test case:

    while true; do
        uuid="`uuidgen`"
        echo "some data" &gt; "test$uuid"
        mv "test$uuid" "test" -f || break
        echo "done:$uuid"
    done

 This script was run in parallel from multiple mountpoints

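For instance, the reproducer can be driven from more than one client
mount at once; a minimal sketch, assuming the loop above is saved as a
hypothetical rename-loop.sh and that /mnt/client1 and /mnt/client2 are
two mounts of the same volume:

    (cd /mnt/client1; bash rename-loop.sh) &amp;
    (cd /mnt/client2; bash rename-loop.sh) &amp;
    wait
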
In the course of getting the above use case working, many issues
were found:

Issue 1:
========
Consider a rename (src, dst). We can encounter a situation where:
* dst is a file present at the time of lookup
* dst is removed by the time the rename fop reaches glusterfs

In this scenario, acquiring inodelk on dst fails with ESTALE, resulting
in failure of the rename. However, as per POSIX, rename should succeed
irrespective of whether dst is present or not. Acquiring entrylk
provides synchronization even in races like this.

Algorithm:
1. Take inodelks on src and dst (if dst is present) on respective
   cached subvols. These inodelks are done to preserve backward
   compatibility with older clients, so that synchronization is
   preserved when a volume is mounted by clients of different
   versions. Once relevant older versions (3.10, 3.12, 3.13) reach
   EOL, this code can be removed.
2. Ignore ENOENT/ESTALE errors of inodelk on dst.
3. Protect the namespace of src and dst. To protect the namespace of a
   file, take inodelk on the parent on the hashed subvol, then take
   entrylk on the same subvol on the parent with the basename of the
   file. The inodelk on the parent guards against changes to the parent
   layout so that the hashed subvol won't change during the rename.
4. &lt;rest of rename continues&gt;
5. Unlock all locks.

Issue 2:
========
Linkfile creation in the lookup codepath can race with a rename. Imagine
the following scenario:
* lookup finds a data-file with gfid - gfid-dst - without a
  corresponding linkto file on hashed-subvol. It decides to create
  linkto file with gfid - gfid-dst.
    - Note that some codepaths of dht-rename delete the linkto file of
      dst as the first step. So, a lookup racing with an in-progress
      rename can easily run into this situation.
* a rename (src-path:gfid-src, dst-path:gfid-dst) renames data-file
  and hence gfid of data-file changes to gfid-src with path dst-path.
* lookup proceeds and creates linkto file - dst-path - with gfid -
  dst-gfid - on hashed-subvol.
* rename tries to create a linkto file dst-path with src-gfid on
  hashed-subvol, but it fails with EEXIST. But EEXIST is ignored
  during linkto file creation.

Now we've ended up with dst-path having two different gfids - dst-gfid
on the linkto file and src-gfid on the data file. Future lookups on
dst-path will always fail with ESTALE due to the differing gfids.

The fix is to synchronize linkfile creation in the lookup path with
rename using the same namespace-protection mechanism explained in the
solution to Issue 1. Once locks are acquired, before proceeding with
linkfile creation, we check whether the conditions for linkto file
creation are still valid. If not, we skip linkto file creation.

Issue 3:
========
The gfid of dst-path can change by the time the locks are acquired.
This means either another rename overwrote dst-path, or dst-path was
deleted and recreated by a different client. When this happens, the
cached subvol for dst can change. If the rename proceeds with the old
gfid and the old cached subvol, we'll end up in inconsistent states,
like dst-path having different gfids on different subvols, more than
one data-file being present, etc.

The fix is to do the lookup with a new inode after protecting the
namespace of dst. Post lookup, we have to compare gfids and correct the
local state appropriately to stay in sync with the backend.

Issue 4:
========
During revalidate lookup, if following a linkto file doesn't lead to a
valid data-file, local-&gt;cached-subvol was not being reset to NULL.
This means we would be operating on stale state, which can lead to
inconsistency. As a fix, reset it to NULL before proceeding with the
lookup everywhere.

Issue 5:
========
Stale dentries left behind in the inode table on a brick resulted in
failures of the link fop even though the file/dentry didn't exist on
the backend fs. A patch has been submitted to fix this issue; please
check the dependency tree of the current patch on Gerrit for details.

In short, we fix the problem by not blindly trusting the inode table.
Instead, we validate whether the dentry is present by doing a lookup on
the backend fs.

Change-Id: I832e5c47d232f90c4edb1fafc512bf19bebde165
updates: bz#1543279
BUG: 1543279
Signed-off-by: Raghavendra G &lt;rgowdapp@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: don't kill the process directly with KILL signal</title>
<updated>2018-03-08T10:15:01+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2018-02-26T08:25:19+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=b2613c9eed6b9d840bc88105dadf282488e6cd64'/>
<id>b2613c9eed6b9d840bc88105dadf282488e6cd64</id>
<content type='text'>
Instead, send SIGTERM (the default, 15) first, and only send
SIGKILL at the end. If SIGKILL is sent directly, tools such as
valgrind and lcov are unable to process their information properly,
and we lose that data for many tests.

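A minimal sketch of the pattern, assuming a helper along these lines
(the name kill_process_gracefully and the 30-second grace period are
illustrative, not the exact include.rc implementation):

    kill_process_gracefully ()
    {
        local pid="$1"
        kill -TERM "$pid"          # let the process flush coverage data and clean up
        for i in $(seq 1 30); do   # bounded grace period
            kill -0 "$pid" 2&gt;/dev/null || return 0   # process already exited
            sleep 1
        done
        kill -KILL "$pid"          # force-kill only as a last resort
    }
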
BUG: 1549000
Change-Id: I664de12ee7dbf47eb98b8141004cd51f6006b314
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: fix bug-1483058-replace-brick-quorum-validation.t spurious failure</title>
<updated>2017-11-12T11:28:13+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-11-09T17:12:22+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=76a83f98b78a0bdf29bbb0f8e4c9ab74dae52be4'/>
<id>76a83f98b78a0bdf29bbb0f8e4c9ab74dae52be4</id>
<content type='text'>
Change-Id: I04c35305bfb663eabbf715eee78695adfd4a2d20
BUG: 1511310
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>cluster/ec: Implement DISCARD FOP for EC</title>
<updated>2017-10-25T11:52:41+00:00</updated>
<author>
<name>Sunil Kumar Acharya</name>
<email>sheggodu@redhat.com</email>
</author>
<published>2017-06-14T10:58:40+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=63160cb952fe7716a3313ce5ee32f890fe4d7a0c'/>
<id>63160cb952fe7716a3313ce5ee32f890fe4d7a0c</id>
<content type='text'>
Updates #254

This code change implements DISCARD FOP support for
EC.
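
For context, the DISCARD fop corresponds to hole-punching fallocate
requests from clients; one illustrative way to exercise it from a FUSE
mount (the mount path and file name below are hypothetical):

    fallocate --punch-hole --offset 0 --length 1048576 /mnt/glusterfs/testfile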

BUG: 1461018
Change-Id: I09a9cb2aa9d91ec27add4f422dc9074af5b8b2db
Signed-off-by: Sunil Kumar Acharya &lt;sheggodu@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: Gluster should keep PID file in correct location</title>
<updated>2017-08-11T07:36:41+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>garg.gaurav52@gmail.com</email>
</author>
<published>2016-03-02T12:12:07+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=220d406ad13d840e950eef001a2b36f87570058d'/>
<id>220d406ad13d840e950eef001a2b36f87570058d</id>
<content type='text'>
Currently Gluster keeps the pid information of all the daemon
and brick processes in the Gluster configuration directory
(i.e., /var/lib/glusterd/*).

These pid files should be separate from the configuration files;
deleting the configuration directory could result in serious problems.
Also, /var/run/gluster is the default placeholder directory for pid
files.

So, with this fix, Gluster keeps the pid information for all its
processes under /var/run/gluster/*.

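A quick way to see the effect (only the directories are meaningful
here; the individual pid-file names depend on the daemons, volumes and
bricks present):

    # before this fix: pid files mixed in with the configuration
    find /var/lib/glusterd -name '*.pid'

    # after this fix: pid files collected under the runtime directory
    find /var/run/gluster -name '*.pid'
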
Change-Id: Idb09e3fccb6a7355fbac1df31082637c8d7ab5b4
BUG: 1258561
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Signed-off-by: Saravanakumar Arumugam &lt;sarumuga@redhat.com&gt;
Reviewed-on: https://review.gluster.org/13580
Tested-by: MOHIT AGRAWAL &lt;moagrawa@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: Minor fix in error condition</title>
<updated>2017-08-02T13:58:32+00:00</updated>
<author>
<name>Rajesh Joseph</name>
<email>rjoseph@redhat.com</email>
</author>
<published>2016-12-19T05:53:38+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=21aa6170151a19c0b9349374e97b517e9adb25f6'/>
<id>21aa6170151a19c0b9349374e97b517e9adb25f6</id>
<content type='text'>
Change-Id: I2dcc8d88234d2ce92dd8506c61cb84ab253decab
Signed-off-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16191
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
</content>
</entry>
<entry>
<title>cluster/ec: Non-disruptive upgrade on EC volume fails</title>
<updated>2017-07-14T00:26:04+00:00</updated>
<author>
<name>Sunil Kumar Acharya</name>
<email>sheggodu@redhat.com</email>
</author>
<published>2017-07-05T11:11:38+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=d2650feb4bfadf3fb0cdb90236bc78c33b5ea451'/>
<id>d2650feb4bfadf3fb0cdb90236bc78c33b5ea451</id>
<content type='text'>
Problem:
Enabling the optimistic changelog on an EC volume did not
handle node-down scenarios appropriately, resulting
in volume data inaccessibility.

Solution:
Update the dirty xattr appropriately on the good bricks whenever
nodes are down. This fixes the metadata information
as part of heal and thus ensures data accessibility.

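For reference, the option in question is a per-volume setting; assuming
the option name disperse.optimistic-change-log and a volume named
testvol, it is enabled roughly as follows:

    gluster volume set testvol disperse.optimistic-change-log on
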
BUG: 1468261
Change-Id: I08b0d28df386d9b2b49c3de84b4aac1c729ac057
Signed-off-by: Sunil Kumar Acharya &lt;sheggodu@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17703
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
</entry>
</feed>
