<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/tests/include.rc, branch v4.1.3</title>
<subtitle>GlusterFS is a distributed file-system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system.</subtitle>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/'/>
<entry>
<title>tests: don't kill the process directly with KILL signal</title>
<updated>2018-03-08T10:15:01+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2018-02-26T08:25:19+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=b2613c9eed6b9d840bc88105dadf282488e6cd64'/>
<id>b2613c9eed6b9d840bc88105dadf282488e6cd64</id>
<content type='text'>
Instead, send SIGTERM (the default, 15) first, and only send SIGKILL
at the end. If SIGKILL is sent directly, tools such as valgrind and
lcov cannot process their information properly, and we lose that data
for many tests.

BUG: 1549000
Change-Id: I664de12ee7dbf47eb98b8141004cd51f6006b314
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Instead, send SIGTERM (the default, 15) first, and only send SIGKILL
at the end. If SIGKILL is sent directly, tools such as valgrind and
lcov cannot process their information properly, and we lose that data
for many tests.
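
As a rough sketch of the intended behaviour (the helper name and the
10-second wait below are hypothetical, not the exact include.rc code):

    kill_gracefully () {
        local pid=$1
        kill -TERM "$pid"       # SIGTERM (15) lets valgrind/lcov flush data
        for i in $(seq 1 10); do
            kill -0 "$pid" 2&gt;/dev/null || return 0   # already exited
            sleep 1
        done
        kill -KILL "$pid"       # SIGKILL (9) only as a last resort
    }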

BUG: 1549000
Change-Id: I664de12ee7dbf47eb98b8141004cd51f6006b314
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: fix bug-1483058-replace-brick-quorum-validation.t spurious failure</title>
<updated>2017-11-12T11:28:13+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-11-09T17:12:22+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=76a83f98b78a0bdf29bbb0f8e4c9ab74dae52be4'/>
<id>76a83f98b78a0bdf29bbb0f8e4c9ab74dae52be4</id>
<content type='text'>
Change-Id: I04c35305bfb663eabbf715eee78695adfd4a2d20
BUG: 1511310
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: I04c35305bfb663eabbf715eee78695adfd4a2d20
BUG: 1511310
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/ec: Implement DISCARD FOP for EC</title>
<updated>2017-10-25T11:52:41+00:00</updated>
<author>
<name>Sunil Kumar Acharya</name>
<email>sheggodu@redhat.com</email>
</author>
<published>2017-06-14T10:58:40+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=63160cb952fe7716a3313ce5ee32f890fe4d7a0c'/>
<id>63160cb952fe7716a3313ce5ee32f890fe4d7a0c</id>
<content type='text'>
Updates #254

This code change implements DISCARD FOP support for
EC.

BUG: 1461018
Change-Id: I09a9cb2aa9d91ec27add4f422dc9074af5b8b2db
Signed-off-by: Sunil Kumar Acharya &lt;sheggodu@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Updates #254

This code change implements DISCARD FOP support for
EC.

BUG: 1461018
Change-Id: I09a9cb2aa9d91ec27add4f422dc9074af5b8b2db
Signed-off-by: Sunil Kumar Acharya &lt;sheggodu@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Gluster should keep PID file in correct location</title>
<updated>2017-08-11T07:36:41+00:00</updated>
<author>
<name>Gaurav Kumar Garg</name>
<email>garg.gaurav52@gmail.com</email>
</author>
<published>2016-03-02T12:12:07+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=220d406ad13d840e950eef001a2b36f87570058d'/>
<id>220d406ad13d840e950eef001a2b36f87570058d</id>
<content type='text'>
Currently, Gluster keeps the pid files of all daemons and brick
processes in the Gluster configuration directory
(i.e., /var/lib/glusterd/*).

These pid files should be kept separate from the configuration files,
because deleting the configuration directory might otherwise result in
serious problems. Also, /var/run/gluster is the default location for
pid files.

So, with this fix, Gluster keeps the pid files of all processes in the
/var/run/gluster/* directory.

Change-Id: Idb09e3fccb6a7355fbac1df31082637c8d7ab5b4
BUG: 1258561
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Signed-off-by: Saravanakumar Arumugam &lt;sarumuga@redhat.com&gt;
Reviewed-on: https://review.gluster.org/13580
Tested-by: MOHIT AGRAWAL &lt;moagrawa@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Currently, Gluster keeps the pid files of all daemons and brick
processes in the Gluster configuration directory
(i.e., /var/lib/glusterd/*).

These pid files should be kept separate from the configuration files,
because deleting the configuration directory might otherwise result in
serious problems. Also, /var/run/gluster is the default location for
pid files.

So, with this fix, Gluster keeps the pid files of all processes in the
/var/run/gluster/* directory.
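
For example (only an illustration of the two locations described
above):

    ls /var/run/gluster/*.pid    # pid files now live here
    ls /var/lib/glusterd/        # configuration stays here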

Change-Id: Idb09e3fccb6a7355fbac1df31082637c8d7ab5b4
BUG: 1258561
Signed-off-by: Gaurav Kumar Garg &lt;ggarg@redhat.com&gt;
Signed-off-by: Saravanakumar Arumugam &lt;sarumuga@redhat.com&gt;
Reviewed-on: https://review.gluster.org/13580
Tested-by: MOHIT AGRAWAL &lt;moagrawa@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: Minor fix in error condition</title>
<updated>2017-08-02T13:58:32+00:00</updated>
<author>
<name>Rajesh Joseph</name>
<email>rjoseph@redhat.com</email>
</author>
<published>2016-12-19T05:53:38+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=21aa6170151a19c0b9349374e97b517e9adb25f6'/>
<id>21aa6170151a19c0b9349374e97b517e9adb25f6</id>
<content type='text'>
Change-Id: I2dcc8d88234d2ce92dd8506c61cb84ab253decab
Signed-off-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16191
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: I2dcc8d88234d2ce92dd8506c61cb84ab253decab
Signed-off-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16191
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/ec: Non-disruptive upgrade on EC volume fails</title>
<updated>2017-07-14T00:26:04+00:00</updated>
<author>
<name>Sunil Kumar Acharya</name>
<email>sheggodu@redhat.com</email>
</author>
<published>2017-07-05T11:11:38+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=d2650feb4bfadf3fb0cdb90236bc78c33b5ea451'/>
<id>d2650feb4bfadf3fb0cdb90236bc78c33b5ea451</id>
<content type='text'>
Problem:
Enabling the optimistic changelog on an EC volume did not
handle node-down scenarios appropriately, resulting in the
volume data becoming inaccessible.

Solution:
Update the dirty xattr appropriately on the good bricks whenever
nodes are down. This fixes the metadata information as part of
heal and thus ensures data accessibility.

BUG: 1468261
Change-Id: I08b0d28df386d9b2b49c3de84b4aac1c729ac057
Signed-off-by: Sunil Kumar Acharya &lt;sheggodu@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17703
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
Enabling the optimistic changelog on an EC volume did not
handle node-down scenarios appropriately, resulting in the
volume data becoming inaccessible.

Solution:
Update the dirty xattr appropriately on the good bricks whenever
nodes are down. This fixes the metadata information as part of
heal and thus ensures data accessibility.

BUG: 1468261
Change-Id: I08b0d28df386d9b2b49c3de84b4aac1c729ac057
Signed-off-by: Sunil Kumar Acharya &lt;sheggodu@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17703
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>cluster/ec: Update xattr and heal size properly</title>
<updated>2017-06-06T14:41:52+00:00</updated>
<author>
<name>Ashish Pandey</name>
<email>aspandey@redhat.com</email>
</author>
<published>2017-04-03T07:16:29+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=88c67b72b1d5843d11ce7cba27dd242bd0c23c6a'/>
<id>88c67b72b1d5843d11ce7cba27dd242bd0c23c6a</id>
<content type='text'>
Problem-1: Recursive healing of the same file keeps happening
while IO is going on, even after the data heal completes.

RCA: At the end of a write, when ec_update_size_version gets
called, we send it only to the good bricks and not to the healing
brick. Due to this, the xattrs on the healing brick always remain
out of sync, and when the background heal checks source and sink,
it finds this brick needs to be healed and starts healing from
scratch. That involves an ftruncate and writing all of the data
again.

Solution:
Send the xattrop to all the good bricks as well as the healing
bricks.

Problem-2: The above fix exposes data corruption during heal.
If writes to a file are going on while the heal finishes, the
file gets corrupted.

RCA:
The real problem happens in ec_rebuild_data(). Here we receive the
'size' argument which contains the real file size at the time of
starting self-heal and it's assigned to heal-&gt;total_size.

After that, a sequence of calls to ec_sync_heal_block() are done. Each
call ends up calling ec_manager_heal_block(), which does the actual work
of healing a block.

First a lock on the inode is taken in state EC_STATE_INIT using
ec_heal_inodelk(). When the lock is acquired, ec_heal_lock_cbk() is
called. This function calls ec_set_inode_size() to store the real size
of the inode (it uses heal-&gt;total_size).

The next step is to read the block to be healed. This is done using a
regular ec_readv(). One of the things this call does is to trim the
returned size if the file is smaller than the requested size.

In our case, when we read the last block of a file whose size was = 512
mod 1024 at the time of starting self-heal, ec_readv() will return only
the first 512 bytes, not the whole 1024 bytes.

This isn't a problem since the following ec_writev() sent from the heal
code only attempts to write the amount of data read, so it shouldn't
modify the remaining 512 bytes.

However ec_writev() also checks the file size. If we are writing the
last block of the file (determined by the size stored on the inode that
we have set to heal-&gt;total_size), any data beyond the (imposed) end of
file will be cleared with 0's. This causes the 512 bytes after the
heal-&gt;total_size to be cleared. Since the file was written after heal
started, these bytes contained data, so the block written to the
damaged brick will be incorrect.

Solution:
Align heal-&gt;total_size to a multiple of the stripe size.

Thanks "Xavier Hernandez" &lt;xhernandez@datalab.es&gt;
to find out the root cause and to fix the issue.

Change-Id: I6c9f37b3ff9dd7f5dc1858ad6f9845c05b4e204e
BUG: 1428673
Signed-off-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16985
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem-1: Recursive healing of the same file keeps happening
while IO is going on, even after the data heal completes.

RCA: At the end of a write, when ec_update_size_version gets
called, we send it only to the good bricks and not to the healing
brick. Due to this, the xattrs on the healing brick always remain
out of sync, and when the background heal checks source and sink,
it finds this brick needs to be healed and starts healing from
scratch. That involves an ftruncate and writing all of the data
again.

Solution:
Send the xattrop to all the good bricks as well as the healing
bricks.

Problem-2: The above fix exposes data corruption during heal.
If writes to a file are going on while the heal finishes, the
file gets corrupted.

RCA:
The real problem happens in ec_rebuild_data(). Here we receive the
'size' argument which contains the real file size at the time of
starting self-heal and it's assigned to heal-&gt;total_size.

After that, a sequence of calls to ec_sync_heal_block() are done. Each
call ends up calling ec_manager_heal_block(), which does the actual work
of healing a block.

First a lock on the inode is taken in state EC_STATE_INIT using
ec_heal_inodelk(). When the lock is acquired, ec_heal_lock_cbk() is
called. This function calls ec_set_inode_size() to store the real size
of the inode (it uses heal-&gt;total_size).

The next step is to read the block to be healed. This is done using a
regular ec_readv(). One of the things this call does is to trim the
returned size if the file is smaller than the requested size.

In our case, when we read the last block of a file whose size was = 512
mod 1024 at the time of starting self-heal, ec_readv() will return only
the first 512 bytes, not the whole 1024 bytes.

This isn't a problem since the following ec_writev() sent from the heal
code only attempts to write the amount of data read, so it shouldn't
modify the remaining 512 bytes.

However ec_writev() also checks the file size. If we are writing the
last block of the file (determined by the size stored on the inode that
we have set to heal-&gt;total_size), any data beyond the (imposed) end of
file will be cleared with 0's. This causes the 512 bytes after the
heal-&gt;total_size to be cleared. Since the file was written after heal
started, these bytes contained data, so the block written to the
damaged brick will be incorrect.

Solution:
Align heal-&gt;total_size to a multiple of the stripe size.
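
A sketch of the alignment arithmetic (the real change lives in the EC
C code; rounding up to the next stripe boundary is assumed here):

    stripe_size=1024
    size_at_heal_start=1536      # 512 mod 1024, as in the example above
    aligned=$(( (size_at_heal_start + stripe_size - 1) / stripe_size * stripe_size ))
    echo $aligned                # 2048: the last stripe is fully covered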

Thanks "Xavier Hernandez" &lt;xhernandez@datalab.es&gt;
to find out the root cause and to fix the issue.

Change-Id: I6c9f37b3ff9dd7f5dc1858ad6f9845c05b4e204e
BUG: 1428673
Signed-off-by: Ashish Pandey &lt;aspandey@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16985
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Pranith Kumar Karampuri &lt;pkarampu@redhat.com&gt;
Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Make reset-brick work correctly if brick-mux is on</title>
<updated>2017-05-10T18:58:21+00:00</updated>
<author>
<name>Samikshan Bairagya</name>
<email>samikshan@gmail.com</email>
</author>
<published>2017-04-24T16:30:17+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=74383e3ec6f8244b3de9bf14016452498c1ddcf0'/>
<id>74383e3ec6f8244b3de9bf14016452498c1ddcf0</id>
<content type='text'>
Reset brick currently kills off the corresponding brick process.
However, with brick multiplexing enabled, stopping the brick
process would render all bricks attached to it unavailable. To
handle this correctly, we need to make sure that the brick process
is terminated only if brick-multiplexing is disabled. Otherwise,
we should send the GLUSTERD_BRICK_TERMINATE rpc to the respective
brick process to detach the brick that is to be reset.

Change-Id: I69002d66ffe6ec36ef48af09b66c522c6d35ac58
BUG: 1446172
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: https://review.gluster.org/17128
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Reset brick currently kills off the corresponding brick process.
However, with brick multiplexing enabled, stopping the brick
process would render all bricks attached to it unavailable. To
handle this correctly, we need to make sure that the brick process
is terminated only if brick-multiplexing is disabled. Otherwise,
we should send the GLUSTERD_BRICK_TERMINATE rpc to the respective
brick process to detach the brick that is to be reset.
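
An illustrative command sequence (volume and brick names are
placeholders); with brick-mux enabled, the 'start' step now detaches
only this brick instead of killing the shared brick process:

    gluster volume reset-brick patchy myhost:/bricks/b1 start
    # ... repair or replace the brick backend ...
    gluster volume reset-brick patchy myhost:/bricks/b1 myhost:/bricks/b1 commit force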

Change-Id: I69002d66ffe6ec36ef48af09b66c522c6d35ac58
BUG: 1446172
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: https://review.gluster.org/17128
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tests: track EW_RETRIES for debugging</title>
<updated>2017-04-09T22:11:05+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2017-01-20T20:29:05+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=5512a5723ad22dd099a57c60483eb80e5b2a14c4'/>
<id>5512a5723ad22dd099a57c60483eb80e5b2a14c4</id>
<content type='text'>
It can often be useful while debugging to know how many times
EXPECT_WITHIN had to retry a command before it got the result we were
looking for.  This patch just adds a variable EW_RETRIES that can be
inspected to find this info for the last EXPECT_WITHIN.

Change-Id: I1bcb09bb7eb118c3d76c60317ef99e02df6b6ee6
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16451
Tested-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra Talur &lt;rtalur@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
It can often be useful while debugging to know how many times
EXPECT_WITHIN had to retry a command before it got the result we were
looking for.  This patch just adds a variable EW_RETRIES that can be
inspected to find this info for the last EXPECT_WITHIN.
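
A minimal usage sketch (the status function and log path below are
made up for illustration):

    EXPECT_WITHIN 20 "1" some_status_check $V0
    echo "needed $EW_RETRIES retries" &gt;&gt; /tmp/ew-debug.log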

Change-Id: I1bcb09bb7eb118c3d76c60317ef99e02df6b6ee6
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16451
Tested-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Raghavendra Talur &lt;rtalur@redhat.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>core: run many bricks within one glusterfsd process</title>
<updated>2017-01-31T00:13:58+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2016-12-08T21:24:15+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=1a95fc3036db51b82b6a80952f0908bc2019d24a'/>
<id>1a95fc3036db51b82b6a80952f0908bc2019d24a</id>
<content type='text'>
This patch adds support for multiple brick translator stacks running
in a single brick server process.  This reduces our per-brick memory usage by
approximately 3x, and our appetite for TCP ports even more.  It also creates
potential to avoid process/thread thrashing, and to improve QoS by scheduling
more carefully across the bricks, but realizing that potential will require
further work.

Multiplexing is controlled by the "cluster.brick-multiplex" global option.  By
default it's off, and bricks are started in separate processes as before.  If
multiplexing is enabled, then *compatible* bricks (mostly those with the same
transport options) will be started in the same process.

Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
BUG: 1385758
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/14763
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This patch adds support for multiple brick translator stacks running
in a single brick server process.  This reduces our per-brick memory usage by
approximately 3x, and our appetite for TCP ports even more.  It also creates
potential to avoid process/thread thrashing, and to improve QoS by scheduling
more carefully across the bricks, but realizing that potential will require
further work.

Multiplexing is controlled by the "cluster.brick-multiplex" global option.  By
default it's off, and bricks are started in separate processes as before.  If
multiplexing is enabled, then *compatible* bricks (mostly those with the same
transport options) will be started in the same process.
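
For illustration, the option is toggled cluster-wide like any other
global option (the quick check below is only a sketch):

    gluster volume set all cluster.brick-multiplex on
    pgrep -c glusterfsd     # with multiplexing on, typically one brick process per node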

Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
BUG: 1385758
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/14763
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
