<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/glusterfsd/src/glusterfsd.c, branch v3.11.2</title>
<subtitle>GlusterFS is a distributed file-system capable of scaling to several petabytes. It aggregates various storage bricks over InfiniBand RDMA or TCP/IP interconnect into one large parallel network file system.</subtitle>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/'/>
<entry>
<title>core: make the per glusterfs_ctx_t timer-wheel refcounted</title>
<updated>2017-05-12T13:32:32+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-04-17T10:20:07+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=45a5cea1ad028bdff5f33770df8ecdd9ac69b6f1'/>
<id>45a5cea1ad028bdff5f33770df8ecdd9ac69b6f1</id>
<content type='text'>
xlators can use a 'global' timer-wheel for scheduling events. This
timer-wheel is managed per glusterfs_ctx_t, but does not need to be
allocated for every graph. When an xlator wants to use the timer-wheel,
it is instantiated on demand and then provided to any xlators that
request it later on.

By adding a reference counter for the timer-wheel to the
glusterfs_ctx_t, the threads and structures can be cleaned up when the
last xlator no longer needs it. In general, xlators request the
timer-wheel in init(), and they should return it in fini().

Because the timer-wheel is managed per glusterfs_ctx_t, the functions
can be added to ctx.c and do not need to live in their very minimal
tw.[ch] files.
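
A minimal, self-contained sketch of the get/put pattern described above
(the names timer_wheel_get/timer_wheel_put, the plain mutex, and the
dummy struct are illustrative stand-ins, not the exact symbols added by
this patch):

    #include &lt;pthread.h&gt;
    #include &lt;stdlib.h&gt;

    /* Illustrative stand-ins: the real patch hangs the wheel and its
       refcount off glusterfs_ctx_t and uses gluster's own types. */
    typedef struct timer_wheel {
            int dummy;
    } timer_wheel_t;

    static timer_wheel_t  *tw;           /* lazily created instance */
    static int             tw_refcount;  /* xlators currently using it */
    static pthread_mutex_t tw_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Called from an xlator's init(): instantiate on first use,
       otherwise just take another reference. */
    timer_wheel_t *
    timer_wheel_get (void)
    {
            pthread_mutex_lock (&amp;tw_lock);
            if (!tw)
                    tw = calloc (1, sizeof (*tw));
            if (tw)
                    tw_refcount++;
            pthread_mutex_unlock (&amp;tw_lock);

            return tw;
    }

    /* Called from an xlator's fini(): the last reference cleans up. */
    void
    timer_wheel_put (void)
    {
            pthread_mutex_lock (&amp;tw_lock);
            if (tw &amp;&amp; --tw_refcount == 0) {
                    free (tw);
                    tw = NULL;
            }
            pthread_mutex_unlock (&amp;tw_lock);
    }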


&gt;Reported-by: Poornima G &lt;pgurusid@redhat.com&gt;
&gt;Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
&gt;Reviewed-on: https://review.gluster.org/17068
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
&gt;Reviewed-by: Zhou Zhengping &lt;johnzzpcrystal@gmail.com&gt;
&gt;Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
&gt;(cherry picked from commit 73fcf3a874b2049da31d01b8363d1ac85c9488c2)

Change-Id: I19d225b39aaa272d9005ba7adc3104c3764f1572
BUG: 1450267
Reviewed-on: https://review.gluster.org/17262
Tested-by: Poornima G &lt;pgurusid@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterd: cleanup pidfile on pmap signout</title>
<updated>2017-05-10T14:06:14+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2017-05-03T06:47:30+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=25e24c5ab7202d43afa837cf5159e14fe078cc73'/>
<id>25e24c5ab7202d43afa837cf5159e14fe078cc73</id>
<content type='text'>
This patch ensures that:
1. the brick pidfile is cleaned up on pmap signout
2. a pmap signout event is sent for all the bricks when a brick process
shuts down.

&gt;Reviewed-on: https://review.gluster.org/17168
&gt;Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt;CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt;Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
&gt;(cherry picked from commit 3d35e21ffb15713237116d85711e9cd1dda1688a)

Change-Id: I7606a60775b484651d4b9743b6037b40323931a2
BUG: 1449004
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17211
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</content>
</entry>
<entry>
<title>glusterd: socketfile &amp; pidfile related fixes for brick multiplexing feature</title>
<updated>2017-05-10T14:05:52+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2017-05-08T13:59:22+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=7287b46042f805d646d7e117c243a1a4fdc61788'/>
<id>7287b46042f805d646d7e117c243a1a4fdc61788</id>
<content type='text'>
Problem: While brick-multiplexing is on, after restarting glusterd the
         CLI does not show the pid of all brick processes in all
         volumes.

Solution: While brick-mux is on, all local brick processes communicate
          through one UNIX socket, but as per the current code
          (glusterd_brick_start) glusterd tries to communicate with a
          separate UNIX socket for each volume, populated based on the
          brick-name and vol-name. Because of the multiplexing design
          only one UNIX socket is opened, so a poller error is thrown
          and the correct status of the brick processes cannot be
          fetched through the cli process. To resolve the problem, add
          a new function glusterd_set_socket_filepath_for_mux that is
          called by glusterd_brick_start to validate the existence of
          the socketpath; a rough sketch of the idea follows. To avoid
          continuous EPOLLERR errors in the logs, update the
          socket_connect code.
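
A hypothetical sketch of the path selection described above; the helper
name comes from the message, but the directory layout and path format
are assumptions:

    #include &lt;stdio.h&gt;

    /* With brick multiplexing all local bricks share one UNIX socket,
       so the path must collapse to a single per-node path instead of a
       brick+volume specific one. The real
       glusterd_set_socket_filepath_for_mux() differs in detail. */
    static void
    set_socket_filepath (int brick_mux, const char *base_dir,
                         const char *volname, const char *brickname,
                         char *sockpath, size_t len)
    {
            if (brick_mux)
                    /* one socket for the whole brick process */
                    snprintf (sockpath, len, "%s/brick-mux.socket",
                              base_dir);
            else
                    /* old scheme: one socket per volume/brick */
                    snprintf (sockpath, len, "%s/%s-%s.socket",
                              base_dir, volname, brickname);
    }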

Test:     To reproduce the issue, follow the steps below:
          1) Create two distributed volumes (dist1 and dist2)
          2) Set cluster.brick-multiplex to on
          3) Kill glusterd
          4) Run the command gluster v status
          After applying the patch it shows the correct pid for all
          volumes.

&gt; BUG: 1444596
&gt; Change-Id: I5d10af69dea0d0ca19511f43870f34295a54a4d2
&gt; Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
&gt; Reviewed-on: https://review.gluster.org/17101
&gt; Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
&gt; NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
&gt; CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
&gt; Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
&gt; (cherry picked from commit 21c7f7baccfaf644805e63682e5a7d2a9864a1e6)

Change-Id: Ia95b9d36e50566b293a8d6350f8316dafc27033b
BUG: 1449004
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17212
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>glusterd: Propagate EADDRINUSE correctly to parent process</title>
<updated>2017-04-13T03:49:03+00:00</updated>
<author>
<name>Prashanth Pai</name>
<email>ppai@redhat.com</email>
</author>
<published>2016-12-19T10:58:06+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=94afe2ca98a8ed9effb05901fc89d3b7bb6d0d41'/>
<id>94afe2ca98a8ed9effb05901fc89d3b7bb6d0d41</id>
<content type='text'>
exit()/_exit():
Only the least significant 8 bits, i.e. (err &amp; 255), are available
to the waiting parent process when _exit() or exit() is called with an
integer exit status. If this number is negative, the parent process
doesn't readily get the value it actually needs to handle.

For example: EADDRINUSE is 98, and if the exit status code is set to
-98, the waiting parent process gets 158 (= -98 &amp; 255) as the exit
status.
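
A small self-contained demonstration of the truncation, using only
standard POSIX calls:

    #include &lt;errno.h&gt;
    #include &lt;stdio.h&gt;
    #include &lt;sys/wait.h&gt;
    #include &lt;unistd.h&gt;

    /* The child exits with -EADDRINUSE; the parent sees only the low
       8 bits of that value. */
    int
    main (void)
    {
            int status = 0;
            pid_t pid = fork ();

            if (pid == 0)
                    _exit (-EADDRINUSE);   /* -98 on Linux */

            waitpid (pid, &amp;status, 0);
            if (WIFEXITED (status))
                    /* prints 158 (= -98 &amp; 255), nothing like -98 */
                    printf ("child exit status: %d\n",
                            WEXITSTATUS (status));
            return 0;
    }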

BUG: 1193929

Change-Id: Idc6b0f40c2332e087e584b4b40cbf0d29168c9cd
Signed-off-by: Prashanth Pai &lt;ppai@redhat.com&gt;
Reviewed-on: https://review.gluster.org/16200
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: Clean up pmap registry correctly on volume/brick stop</title>
<updated>2017-02-27T22:59:03+00:00</updated>
<author>
<name>Samikshan Bairagya</name>
<email>samikshan@gmail.com</email>
</author>
<published>2017-02-20T13:05:01+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=1e3538baab7abc29ac329c78182b62558da56d98'/>
<id>1e3538baab7abc29ac329c78182b62558da56d98</id>
<content type='text'>
This commit changes the following:
1. In glusterfs_handle_terminate, send out individual pmap signout
requests to glusterd for every brick.
2. Add another parameter to the glusterfs_mgmt_pmap_signout function to
pass the brickname that needs to be removed from the pmap registry.
3. Make sure pmap_registry_search doesn't break out of the loop
iterating over the list of bricks per port if the first brick entry
corresponding to a port is whitespaced out; a small sketch of this
follows the list.
4. Make sure the pmap registry entries are removed for other
daemons like snapd.
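
A tiny illustration of the loop behaviour from point 3; the registry
layout here is a simplified stand-in, not the real pmap structures:

    #include &lt;string.h&gt;

    #define MAX_BRICKS 8

    /* Simplified stand-in: one port's list of brick paths, where a
       removed brick is "whitespaced out" rather than unlinked. */
    static const char *bricknames[MAX_BRICKS];

    /* Return the slot of brickname on this port. The fix: skip
       blanked entries instead of breaking out at the first one. */
    static int
    registry_search (const char *brickname)
    {
            int i;

            for (i = 0; i &lt; MAX_BRICKS; i++) {
                    const char *entry = bricknames[i];

                    if (!entry)
                            break;      /* end of the list */
                    if (entry[0] == ' ')
                            continue;   /* whitespaced out: keep going */
                    if (strcmp (entry, brickname) == 0)
                            return i;
            }
            return -1;
    }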

Change-Id: I69949874435b02699e5708dab811777ccb297174
BUG: 1421590
Signed-off-by: Samikshan Bairagya &lt;samikshan@gmail.com&gt;
Reviewed-on: https://review.gluster.org/16689
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Gaurav Yadav &lt;gyadav@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
</entry>
<entry>
<title>libglusterfs: make memory pools more thread-friendly</title>
<updated>2017-02-02T18:30:19+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2016-10-14T14:04:07+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=ae47befebeda2de5fd2d706090cbacf4ef60c785'/>
<id>ae47befebeda2de5fd2d706090cbacf4ef60c785</id>
<content type='text'>
Early multiplexing tests revealed *massive* contention on certain
pools' global locks - especially for dictionaries and secondarily for
call stubs.  For the thread counts that multiplexing can create, a
more lock-free solution is clearly needed.  Also, the current mem-pool
implementation does a poor job releasing memory back to the system,
artificially inflating memory usage to match whatever the worst case
was since the process started.  This is bad in general, but especially
so for multiplexing where there are more pools and a major point of
the whole exercise is to reduce memory consumption.

The basic ideas for the new design are these:

  There is one pool, globally, for each power-of-two size range.
  Every attempt to create a new pool within this range will instead
  add a reference to the existing pool.

  Instead of adding pools for each translator within each multiplexed
  brick (potentially infinite and quite possibly thousands), we
  allocate one set of size-based pools per *thread* (hundreds at
  worst).

  Each per-thread pool is divided into hot and cold lists.  Every
  allocation first attempts to use the hot list, then the cold list.
  When objects are freed, they always go on the hot list.

  There is one global "pool sweeper" thread, which periodically
  reclaims everything in each pool's cold list and then "demotes" the
  current hot list to be the new cold list.

  For normal allocation activity, only a per-thread lock need be
  taken, and even that only to guard against very rare contention from
  the pool sweeper.  When threads start and stop, a global lock must
  be taken to add them to the pool sweeper's list.  Lock contention is
  therefore extremely low, and the hot/cold lists also provide good
  locality.

A more complete explanation (of a similar earlier design) can be found
here:

 http://www.gluster.org/pipermail/gluster-devel/2016-October/051160.html
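
To make the hot/cold mechanics concrete, here is a toy, self-contained
model of one per-thread pool; the real mem-pool code additionally keeps
one such pool per power-of-two size class per thread, registers pools
with the sweeper thread, and uses gluster's own locking:

    #include &lt;pthread.h&gt;
    #include &lt;stdlib.h&gt;

    typedef struct pool_obj {
            struct pool_obj *next;
    } pool_obj_t;

    typedef struct per_thread_pool {
            pthread_mutex_t lock;  /* only contended by the sweeper */
            pool_obj_t *hot;       /* freed since the last sweep */
            pool_obj_t *cold;      /* survived one sweep; next to go */
    } per_thread_pool_t;

    /* Allocation: try the hot list, then the cold list, then malloc.
       (A real pool serves one fixed size class, so any pooled object
       already has the right size.) */
    void *
    pool_alloc (per_thread_pool_t *p, size_t size)
    {
            pool_obj_t *obj = NULL;

            pthread_mutex_lock (&amp;p-&gt;lock);
            if (p-&gt;hot) {
                    obj = p-&gt;hot;
                    p-&gt;hot = obj-&gt;next;
            } else if (p-&gt;cold) {
                    obj = p-&gt;cold;
                    p-&gt;cold = obj-&gt;next;
            }
            pthread_mutex_unlock (&amp;p-&gt;lock);

            return obj ? (void *) obj : malloc (size);
    }

    /* Freed objects always go onto the hot list. */
    void
    pool_free (per_thread_pool_t *p, void *ptr)
    {
            pool_obj_t *obj = ptr;

            pthread_mutex_lock (&amp;p-&gt;lock);
            obj-&gt;next = p-&gt;hot;
            p-&gt;hot = obj;
            pthread_mutex_unlock (&amp;p-&gt;lock);
    }

    /* Sweeper (one global thread in the real design): reclaim
       everything cold, then demote hot to cold. */
    void
    pool_sweep (per_thread_pool_t *p)
    {
            pool_obj_t *reclaim, *next;

            pthread_mutex_lock (&amp;p-&gt;lock);
            reclaim = p-&gt;cold;
            p-&gt;cold = p-&gt;hot;
            p-&gt;hot = NULL;
            pthread_mutex_unlock (&amp;p-&gt;lock);

            for (; reclaim; reclaim = next) {
                    next = reclaim-&gt;next;
                    free (reclaim);
            }
    }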

Change-Id: I5bc8a1ba57cfb553998f979a498886e0d006e665
BUG: 1385758
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/15645
Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: run many bricks within one glusterfsd process</title>
<updated>2017-01-31T00:13:58+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2016-12-08T21:24:15+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=1a95fc3036db51b82b6a80952f0908bc2019d24a'/>
<id>1a95fc3036db51b82b6a80952f0908bc2019d24a</id>
<content type='text'>
This patch adds support for multiple brick translator stacks running
in a single brick server process.  This reduces our per-brick memory usage by
approximately 3x, and our appetite for TCP ports even more.  It also creates
potential to avoid process/thread thrashing, and to improve QoS by scheduling
more carefully across the bricks, but realizing that potential will require
further work.

Multiplexing is controlled by the "cluster.brick-multiplex" global option.  By
default it's off, and bricks are started in separate processes as before.  If
multiplexing is enabled, then *compatible* bricks (mostly those with the same
transport options) will be started in the same process.
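
As a usage note: assuming the standard CLI form for global options, the
feature is toggled cluster-wide with "gluster volume set all
cluster.brick-multiplex on" (the invocation is the usual gluster CLI,
not something introduced by this patch).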

Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
BUG: 1385758
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/14763
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterfsd: glusterfs_ctx_defaults_init should not re-initialize ctx-&gt;locks</title>
<updated>2016-12-02T06:23:38+00:00</updated>
<author>
<name>Rajesh Joseph</name>
<email>rjoseph@redhat.com</email>
</author>
<published>2016-11-21T20:21:19+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=47e69455d3aede77960fd81a7cf3d6b4a869dbfa'/>
<id>47e69455d3aede77960fd81a7cf3d6b4a869dbfa</id>
<content type='text'>
glusterfs_ctx_new already initializes ctx-&gt;locks, therefore the
second initialization in glusterfs_ctx_defaults_init does not make
sense.

Change-Id: I6027cbd311da8e80585e0f0dcd6916e3bc8dd284
BUG: 1397419
Signed-off-by: Rajesh Joseph &lt;rjoseph@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15905
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Poornima G &lt;pgurusid@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Reviewed-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
</content>
</entry>
<entry>
<title>Update copyright content for glusterfs binaries</title>
<updated>2016-11-11T03:20:45+00:00</updated>
<author>
<name>Anoop C S</name>
<email>anoopcs@redhat.com</email>
</author>
<published>2016-11-10T07:04:48+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=7947a6dd63979c7638f60d1a0954d9f78fd7df21'/>
<id>7947a6dd63979c7638f60d1a0954d9f78fd7df21</id>
<content type='text'>
Change-Id: I2d5de7ae634d55ae32977e337f366586eab449e4
BUG: 1198849
Signed-off-by: Anoop C S &lt;anoopcs@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15819
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
</entry>
<entry>
<title>glusterfsd/main: fix OOM adjustment for older kernels</title>
<updated>2016-10-11T12:18:05+00:00</updated>
<author>
<name>Oleksandr Natalenko</name>
<email>onatalen@redhat.com</email>
</author>
<published>2016-09-28T12:29:23+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=de07155bfae3c5846797cbb19ee044751cbe6f6e'/>
<id>de07155bfae3c5846797cbb19ee044751cbe6f6e</id>
<content type='text'>
Milind Changire reported that GlusterFS fails to build on RHEL5
because linux/oom.h is unavailable.

Milind's initial patch disables OOM adjustment completely
for those environments that do not have this header. However,
I'd take another approach that:

1) checks for linux/oom.h at compile time and defines the necessary
constants if the header is not present;
2) checks for the available OOM API in /proc at run time and uses it
accordingly.

This allows OOM to be adjusted properly on RHEL5 (whose kernel is too
old to present the newer /proc API for that) as well as on RHEL6 (whose
kernel has many things backported, including the new /proc API).
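
A sketch of the two-level fallback; HAVE_LINUX_OOM_H stands in for an
autoconf-style feature test, and while the constant values are the real
Linux ones, the flow is illustrative rather than the exact patch:

    #include &lt;stdio.h&gt;

    /* 1) compile time: use linux/oom.h when available, otherwise
          define the constants ourselves. HAVE_LINUX_OOM_H is an
          assumed autoconf-style macro. */
    #ifdef HAVE_LINUX_OOM_H
    #include &lt;linux/oom.h&gt;
    #else
    #define OOM_SCORE_ADJ_MIN (-1000)
    #define OOM_DISABLE       (-17)   /* legacy oom_adj value */
    #endif

    /* 2) run time: prefer the newer /proc interface, fall back to
          the legacy one on older kernels such as RHEL5's. */
    static int
    oom_protect_self (void)
    {
            FILE *fp;

            fp = fopen ("/proc/self/oom_score_adj", "w");
            if (fp) {
                    fprintf (fp, "%d", OOM_SCORE_ADJ_MIN);
                    fclose (fp);
                    return 0;
            }

            fp = fopen ("/proc/self/oom_adj", "w");
            if (fp) {
                    fprintf (fp, "%d", OOM_DISABLE);
                    fclose (fp);
                    return 0;
            }

            return -1;   /* no OOM adjustment API at all */
    }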

Change-Id: I1bc610586872d208430575c149a7d0c54bd82370
BUG: 1379769
Signed-off-by: Oleksandr Natalenko &lt;onatalen@redhat.com&gt;
Reviewed-on: http://review.gluster.org/15587
Tested-by: Oleksandr Natalenko &lt;oleksandr@natalenko.name&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
</feed>
