<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests, branch v5.1</title>
<subtitle>GlusterFS is a distributed file-system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system.</subtitle>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/'/>
<entry>
<title>glusterd: ensure volinfo-&gt;caps is set to correct value.</title>
<updated>2018-11-09T18:46:34+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-10-03T18:28:37+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=3782c90a617dfefc9bc8a92d0facb3927659dede'/>
<id>3782c90a617dfefc9bc8a92d0facb3927659dede</id>
<content type='text'>
Since commit febf5ed4848, during the volume create op we set
volinfo-&gt;caps to 0 only if a brick belongs to the same node
and brickinfo-&gt;vg[0] is null. Previously, we set
volinfo-&gt;caps to 0 when a brick either does not belong to
the same node or has a null brickinfo-&gt;vg[0].

With this patch, we restore the behaviour from before commit
febf5ed4848: volinfo-&gt;caps is set to 0 when a brick either
does not belong to the same node or has a null
brickinfo-&gt;vg[0].
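
A minimal standalone sketch of the restored check, with invented,
simplified stand-ins for the real glusterd structures (these are not
the actual glusterd types or function names):

    #include &lt;string.h&gt;

    /* Invented, simplified shapes for illustration only. */
    typedef struct { char node_uuid[37]; char vg[256]; } brickinfo_t;
    typedef struct { int caps; } volinfo_t;

    /* Restored rule: one brick on another node OR one brick without
     * a vg is enough to clear the volume's caps. */
    static void update_caps(volinfo_t *volinfo, brickinfo_t *bricks,
                            int nbricks, const char *local_uuid)
    {
        for (int i = 0; i &lt; nbricks; i++) {
            if (strcmp(bricks[i].node_uuid, local_uuid) != 0 ||
                bricks[i].vg[0] == '\0') {
                volinfo-&gt;caps = 0;
                return;
            }
        }
    }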

&gt; BUG: bz#1635820
&gt; Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
&gt; Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
(cherry picked from commit aae1c402b74fd02ed2f6473b896f108d82aef8e3)

fixes: bz#1647968
Change-Id: I00a97415786b775fb088ac45566ad52b402f1a49
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: correction in tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t</title>
<updated>2018-11-08T14:37:17+00:00</updated>
<author>
<name>Sanju Rakonde</name>
<email>srakonde@redhat.com</email>
</author>
<published>2018-10-08T14:03:58+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=2b1d28aa809fc364ca383866ad4d905016d6ef57'/>
<id>2b1d28aa809fc364ca383866ad4d905016d6ef57</id>
<content type='text'>
Patch https://review.gluster.org/#/c/glusterfs/+/19135/ optimised
the glusterd test cases by clubbing similar test cases into a
single test case.

https://review.gluster.org/#/c/glusterfs/+/19135/15/tests/bugs/glusterd/bug-1293414-import-brickinfo-uuid.t
test case has been deleted and added as part of
tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t

In the original test case, we create a volume with two bricks,
each on a separate node (N1 &amp; N2). From another node in the
cluster (N3), we try to detach a node which is hosting bricks.
The detach fails, as expected.

In the new test, we created a volume with a single brick on N1,
and from another node in the cluster we tried to detach N1. We
expected the peer detach to fail, but it succeeded, since the
node hosts all the bricks of the volume.

Now, the new test case is changed to cover the original scenario.
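
For context, a hedged sketch of the detach rule implied above
(invented minimal structures, not the actual glusterd code, which
also handles stale-volume cleanup):

    #include &lt;stdbool.h&gt;
    #include &lt;string.h&gt;

    /* Invented, simplified shapes for illustration only. */
    typedef struct { char host_uuid[37]; } brickinfo_t;
    typedef struct { brickinfo_t *bricks; int nbricks; } volinfo_t;

    /* Detach is refused when the peer hosts some, but not all, of a
     * volume's bricks; if it hosts every brick, the volume can be
     * dropped along with the peer, which is why the single-brick
     * variant of the test did not fail. */
    static bool peer_detach_allowed(const volinfo_t *v,
                                    const char *peer_uuid)
    {
        int hosted = 0;
        for (int b = 0; b &lt; v-&gt;nbricks; b++)
            if (strcmp(v-&gt;bricks[b].host_uuid, peer_uuid) == 0)
                hosted++;
        return hosted == 0 || hosted == v-&gt;nbricks;
    }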

Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1642597#c1
to understand why the new test case does not fail in centos-regression.

&gt; BUG: bz#1642597

&gt; Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
&gt; Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
(cherry picked from commit 0ca6773eaf5aeb507ebc72d2c2f61902eeff414c)

fixes: bz#1643078

Change-Id: Ifda12b5677143095f263fbb97a6808573f513234
Signed-off-by: Sanju Rakonde &lt;srakonde@redhat.com&gt;
</content>
</entry>
<entry>
<title>tests: check for shd up status in bug-1637802-arbiter-stale-data-heal-lock.t</title>
<updated>2018-10-25T13:12:28+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-10-21T12:02:52+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=686849beb424b8b0ebd17b21a9cc201f252f3547'/>
<id>686849beb424b8b0ebd17b21a9cc201f252f3547</id>
<content type='text'>
Problem:
https://review.gluster.org/#/c/glusterfs/+/21427/ seems to be failing
this .t spuriously. On checking one of the failure logs, I see:

22:05:44 Launching heal operation to perform index self heal on volume patchy has been unsuccessful:
22:05:44 Self-heal daemon is not running. Check self-heal daemon log file.
22:05:44 not ok 20 , LINENUM:38

In glusterd log:
[2018-10-18 22:05:44.298832] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Heal' failed on localhost : Self-heal daemon is not running. Check self-heal daemon log file

But the tests which precede this one check, via a statedump, whether
the shd is connected to the bricks; those checks succeeded, and
healing had even started. From glustershd.log:

[2018-10-18 22:05:40.975268] I [MSGID: 108026] [afr-self-heal-common.c:1732:afr_log_selfheal] 0-patchy-replicate-0: Completed data selfheal on 3b83d2dd-4cf2-4ea3-a33e-4275be40f440. sources=[0] 1  sinks=2

So the only reason I can see for the heal launch via CLI failing is a
race where the shd has been spawned but glusterd has not yet recorded
in memory that it is up, and hence fails the CLI.

Fix:
Check for shd up status before launching heal via CLI
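
The wait can be sketched as a simple poll (a generic illustration,
not the actual test-framework helper):

    #include &lt;stdbool.h&gt;
    #include &lt;unistd.h&gt;

    /* Poll an "is the shd up?" probe until it reports true, so that
     * "gluster volume heal" is issued only once glusterd's in-memory
     * state agrees that the daemon is running. */
    static bool wait_for_up(bool (*is_up)(void), int timeout_secs)
    {
        for (int i = 0; i &lt; timeout_secs; i++) {
            if (is_up())
                return true;
            sleep(1); /* retry once per second until timeout */
        }
        return false;
    }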

Change-Id: Ic88abf14ad3d51c89cb438db601fae4df179e8f4
fixes: bz#1641872
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
(cherry picked from commit 3dea105556130abd4da0fd3f8f2c523ac52398d1)
</content>
</entry>
<entry>
<title>features/shard: Hold a ref on base inode when adding a shard to lru list</title>
<updated>2018-10-25T13:11:49+00:00</updated>
<author>
<name>Krutika Dhananjay</name>
<email>kdhananj@redhat.com</email>
</author>
<published>2018-10-05T06:02:21+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=02a05da6989f5cd4283e2e5d62a9cfa6493d65dc'/>
<id>02a05da6989f5cd4283e2e5d62a9cfa6493d65dc</id>
<content type='text'>
Backport of:
&gt; Change-Id: Ic15ca41444dd04684a9458bd4a526b1d3e160499
&gt; Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
&gt; (cherry picked from commit e627977)
&gt; BUG: 1605056

In __shard_update_shards_inode_list(), the shard translator
previously did not hold a ref on the base inode when a shard was
added to the lru list. But if the base inode is forgotten and
destroyed, either by fuse due to memory pressure or because the file
was deleted by a different client while this client still has stale
shards in its lru list, the client would crash when locking
lru_base_inode-&gt;lock owing to illegal memory access.

So now a ref on the base inode is stored in the inode ctx of every
shard that is added to the lru list, and released when the shard is
lru'd out.

The patch also handles the case where none of the shards associated
with a file that is about to be deleted are part of the LRU list. In
that case, an unlink at the beginning of the operation destroys the
base inode (because there are no refkeepers), and hence all of the
shards about to be deleted are resolved without a base shard in
memory, which, if not handled properly, could lead to a crash.
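
A toy sketch of the ref/unref pairing described above, with invented
minimal types (the real code uses glusterfs's inode_t and the inode
ctx machinery):

    #include &lt;stdlib.h&gt;

    /* Toy refcounted inode; a stand-in for glusterfs's inode_t. */
    typedef struct inode { int refcount; } inode_t;

    static inode_t *inode_ref(inode_t *i) { i-&gt;refcount++; return i; }
    static void inode_unref(inode_t *i)
    {
        if (--i-&gt;refcount == 0)
            free(i);
    }

    typedef struct shard_ctx { inode_t *base_inode; } shard_ctx_t;

    /* Adding a shard to the lru list pins the base inode so a forget
     * or a delete cannot free it while the shard is still listed. */
    static void lru_add(shard_ctx_t *ctx, inode_t *base)
    {
        ctx-&gt;base_inode = inode_ref(base);
    }

    /* Eviction from the lru list releases the pin. */
    static void lru_evict(shard_ctx_t *ctx)
    {
        inode_unref(ctx-&gt;base_inode);
        ctx-&gt;base_inode = NULL;
    }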

Change-Id: Ic15ca41444dd04684a9458bd4a526b1d3e160499
updates: bz#1641440
Signed-off-by: Krutika Dhananjay &lt;kdhananj@redhat.com&gt;
</content>
</entry>
<entry>
<title>gfapi: Bug fixes in leases processing code-path</title>
<updated>2018-10-18T13:26:29+00:00</updated>
<author>
<name>Soumya Koduri</name>
<email>skoduri@redhat.com</email>
</author>
<published>2018-10-10T16:07:07+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=2f2872b335018a7fa4b61193f2e6404bef8864ed'/>
<id>2f2872b335018a7fa4b61193f2e6404bef8864ed</id>
<content type='text'>
This patch fixes the below issues in the gfapi leases code-path:
* 'glfs_setfsleaseid' should allow NULL input to be able to
  reset the leaseid (see the sketch after this list)
* Applications should be allowed to (un)register for
  upcall notifications of type GLFS_EVENT_LEASE_RECALL
* APIs are added to read the contents of the GLFS_EVENT_LEASE_RECALL
  argument, which is of type "struct glfs_upcall_lease"
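
A hedged usage sketch of the NULL-reset behaviour; the exact
glfs_setfsleaseid() prototype and header path should be verified
against the installed glfs.h, and the link flag is an assumption:

    /* Compile with: cc -o lease lease.c -lgfapi (assumed). */
    #include &lt;glusterfs/api/glfs.h&gt;
    #include &lt;stddef.h&gt;

    int reset_lease_id(void)
    {
        /* With this patch, a NULL leaseid is accepted and resets
         * the leaseid instead of being rejected as invalid. */
        return glfs_setfsleaseid(NULL);
    }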

This is a backport of the below mainline patch:
 https://review.gluster.org/#/c/glusterfs/+/21391

Change-Id: I3320ddf235cc82fad561e13b9457ebd64db6c76b
updates: #350
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
</content>
</entry>
<entry>
<title>api: fill out attribute information if not valid</title>
<updated>2018-10-17T23:36:13+00:00</updated>
<author>
<name>Raghavendra Gowdappa</name>
<email>rgowdapp@redhat.com</email>
</author>
<published>2018-10-12T05:01:04+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=5bc2fd4fc6a8aa65d8d2b2c22ffb6c5b70ef9dac'/>
<id>5bc2fd4fc6a8aa65d8d2b2c22ffb6c5b70ef9dac</id>
<content type='text'>
Translators like readdir-ahead selectively retain the entry
information of an iatt (gfid and type) while the rest of the iatt is
invalidated (a write, for instance, invalidates ia_size and the
(m)(c)times). Fuse-bridge uses this information and sends only the
entry information in the readdirplus response. However, no such
option exists in gfapi. This patch modifies gfapi to populate the
stat by forcing an extra lookup.
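
A simplified sketch of the decision gfapi now makes (invented types;
the real code inspects the returned iatt and performs the lookup
through the syncop framework):

    #include &lt;stdbool.h&gt;

    /* Invented minimal stat buffer: valid is false when only the
     * entry information (gfid and type) survived invalidation. */
    typedef struct { bool valid; /* stat fields elided */ } stat_buf_t;

    typedef int (*lookup_fn)(const char *path, stat_buf_t *out);

    /* If the stat is incomplete, force one extra lookup so gfapi
     * can hand back fully populated attributes. */
    static int fill_stat(const char *path, stat_buf_t *buf,
                         lookup_fn lookup)
    {
        if (buf-&gt;valid)
            return 0; /* already complete, no extra round trip */
        return lookup(path, buf);
    }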

Thanks to Shyamsundar Ranganathan &lt;srangana@redhat.com&gt; and Prashanth
Pai &lt;ppai@redhat.com&gt; for tests.

Change-Id: Ieb5f8fc76359c327627b7d8420aaf20810e53000
Fixes: bz#1630804
Signed-off-by: Raghavendra Gowdappa &lt;rgowdapp@redhat.com&gt;
Signed-off-by: Soumya Koduri &lt;skoduri@redhat.com&gt;
(cherry picked from commit 6257276d9de3f15643f159b2ec627a67c84fc23d)
</content>
</entry>
<entry>
<title>afr: fix incorrect reporting of directory split-brain</title>
<updated>2018-10-11T11:00:02+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-10-11T01:52:09+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=34e8058b7c8e906c889d6d0e155ea63037148eea'/>
<id>34e8058b7c8e906c889d6d0e155ea63037148eea</id>
<content type='text'>
Backport of https://review.gluster.org/#/c/glusterfs/+/21135/

Problem:
When a directory has dirty xattrs due to failed post-ops or when
replace/reset brick is performed, AFR does a conservative merge as
expected, but heal-info reports it as split-brain because there are no
clear sources.

Fix:
Modify the pending flag to contain information about both pending
heals and split-brains. For directories, if the split-brain flag is
not set, just show them as needing heal and not as being in
split-brain.
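
A sketch of the reporting rule with an invented bit layout (the
actual flag names and encoding live in the AFR patch, not here):

    #include &lt;stdbool.h&gt;
    #include &lt;stdint.h&gt;

    /* Invented bit names for illustration. */
    #define PFLAG_PENDING_HEAL (1u &lt;&lt; 0)
    #define PFLAG_SPLIT_BRAIN  (1u &lt;&lt; 1)

    /* A directory needing only a conservative merge (e.g. dirty
     * xattrs from failed post-ops) is no longer split-brain. */
    static bool dir_in_split_brain(uint32_t pflag)
    {
        return (pflag &amp; PFLAG_SPLIT_BRAIN) != 0;
    }

    static bool dir_needs_heal(uint32_t pflag)
    {
        return (pflag &amp; (PFLAG_PENDING_HEAL | PFLAG_SPLIT_BRAIN))
               != 0;
    }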

Change-Id: I09ef821f6887c87d315ae99e6b1de05103cd9383
fixes: bz#1638163
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
</entry>
<entry>
<title>afr: prevent winding inodelks twice for arbiter volumes</title>
<updated>2018-10-11T10:56:41+00:00</updated>
<author>
<name>Ravishankar N</name>
<email>ravishankar@redhat.com</email>
</author>
<published>2018-10-11T01:01:40+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=c7b933bc460bf47a8d4404055bf1a52225a138cb'/>
<id>c7b933bc460bf47a8d4404055bf1a52225a138cb</id>
<content type='text'>
Backport of https://review.gluster.org/#/c/glusterfs/+/21380/

Problem:
In an arbiter volume, if there is a pending data heal of a file only
on the arbiter brick, self-heal takes inodelks twice due to a code
bug but unlocks only once, leaving behind a stale lock on the brick.
This causes the next write to the file to hang.

Fix:
Fix the code bug to take the lock only once. This bug was introduced
in master with commit eb472d82a083883335bc494b87ea175ac43471ff.
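
The shape of the bug and the fix, as a toy sketch (invented types;
the real lock is a brick-side inodelk wound over the network):

    #include &lt;stdbool.h&gt;

    typedef struct {
        int  lock_count;     /* toy stand-in for the brick's inodelk */
        bool data_lock_held; /* guard introduced by the fix */
    } heal_ctx_t;

    static void take_data_lock(heal_ctx_t *ctx)
    {
        if (ctx-&gt;data_lock_held)
            return;          /* the fix: never wind the lock twice */
        ctx-&gt;lock_count++;
        ctx-&gt;data_lock_held = true;
    }

    static void release_data_lock(heal_ctx_t *ctx)
    {
        ctx-&gt;lock_count--;   /* one unlock now balances one lock */
        ctx-&gt;data_lock_held = false;
    }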

Thanks to Pranith Kumar K &lt;pkarampu@redhat.com&gt; for finding the RCA.

fixes: bz#1638159
Change-Id: I15ad969e10a6a3c4bd255e2948b6be6dcddc61e1
Signed-off-by: Ravishankar N &lt;ravishankar@redhat.com&gt;
</content>
</entry>
<entry>
<title>ctime: Provide noatime option</title>
<updated>2018-10-02T12:45:23+00:00</updated>
<author>
<name>Kotresh HR</name>
<email>khiremat@redhat.com</email>
</author>
<published>2018-09-03T13:07:58+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=315b45f85ecba15d7fc8f2342468b89ee4747c48'/>
<id>315b45f85ecba15d7fc8f2342468b89ee4747c48</id>
<content type='text'>
Most applications are {c|m}time dependent and very
few are atime dependent. So provide a noatime option
to not update atime when the ctime feature is
enabled.

Also, this option has to be enabled along with the
ctime feature to avoid unnecessary self-heal: since
AFR/EC reads data from a single subvolume, atime
would otherwise be updated on only one subvolume,
triggering self-heal.
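
A simplified sketch of the resulting atime decision (invented
option struct; the real logic lives in the ctime/utime code paths):

    #include &lt;stdbool.h&gt;
    #include &lt;time.h&gt;

    /* Invented option struct for illustration. */
    typedef struct { bool ctime_on; bool noatime; } time_opts_t;

    /* With ctime plus noatime, reads leave atime alone, so AFR/EC
     * replicas stay identical and no self-heal is triggered. */
    static void atime_on_read(const time_opts_t *o, time_t *atime)
    {
        if (o-&gt;ctime_on &amp;&amp; o-&gt;noatime)
            return;              /* skip the atime update */
        *atime = time(NULL);     /* default: bump atime on read */
    }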

Backport of:
&gt; Patch: https://review.gluster.org/21073
&gt; BUG: 1593538
&gt; Change-Id: I085fb33c882296545345f5df194cde7b6cbc337e
&gt; Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
(cherry picked from commit 89636be4c73b12de2e11c75d8e59527bb243f147)

updates: bz#1633015
Change-Id: I085fb33c882296545345f5df194cde7b6cbc337e
Signed-off-by: Kotresh HR &lt;khiremat@redhat.com&gt;
</content>
</entry>
<entry>
<title>python3: assume python3 unless building _packages_ on sys without py3</title>
<updated>2018-09-27T19:31:53+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2018-09-24T18:12:45+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=f7af62ecd893bd977cebc4e6586f1e524b36561b'/>
<id>f7af62ecd893bd977cebc4e6586f1e524b36561b</id>
<content type='text'>
The jenkins release-new job runs on a CentOS 7 box, which does not
have python3. As a result it runs (autogen.sh and) configure before
producing the dist tar file, converting all the python3 shebangs to
python2 shebangs in the dist tar file.

Then, when that tar file is "carried" to, e.g., the Fedora koji build
system to build packages, the shebangs are incorrect, despite having
originally been correct in the git repo.

Change-Id: I5154baba3f6d29d3c4823bafc2b57abecbf90e5b
updates: #411
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
</feed>
