<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/libglusterfs/src/statedump.c, branch v4.1.3</title>
<subtitle>GlusterFS is a distributed file-system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system.</subtitle>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/'/>
<entry>
<title>Revert "glusterfsd: Memleak in glusterfsd process while  brick mux is on"</title>
<updated>2018-05-25T02:05:37+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-05-23T03:40:11+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=b679fd4b73d9ec039029088769722887b61d750a'/>
<id>b679fd4b73d9ec039029088769722887b61d750a</id>
<content type='text'>
Updates: bz#1582286
This reverts commit 7c3cc485054e4ede1efb358552135b432fb7047a.
Change-Id: I831d646112bcfa13d0c2153482ad00ff1b23aa6c
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Updates: bz#1582286
This reverts commit 7c3cc485054e4ede1efb358552135b432fb7047a.
Change-Id: I831d646112bcfa13d0c2153482ad00ff1b23aa6c
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Revert "gluster: Sometimes Brick process is crashed at the time of stopping brick"</title>
<updated>2018-05-25T02:05:37+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-05-23T03:36:04+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=7b95d5a4b3988757bf8c91f82dcaf86ed3da6875'/>
<id>7b95d5a4b3988757bf8c91f82dcaf86ed3da6875</id>
<content type='text'>
Updates: bz#1582286
This reverts commit 0043c63f70776444f69667a4ef9596217ecb42b7.
Change-Id: Iab3b4f4a54e122c589e515add93c6effc966b3e0
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Updates: bz#1582286
This reverts commit 0043c63f70776444f69667a4ef9596217ecb42b7.
Change-Id: Iab3b4f4a54e122c589e515add93c6effc966b3e0
</pre>
</div>
</content>
</entry>
<entry>
<title>gluster: Sometimes Brick process is crashed at the time of stopping brick</title>
<updated>2018-04-19T04:31:51+00:00</updated>
<author>
<name>Mohit Agrawal</name>
<email>moagrawa@redhat.com</email>
</author>
<published>2018-03-12T14:13:15+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=0043c63f70776444f69667a4ef9596217ecb42b7'/>
<id>0043c63f70776444f69667a4ef9596217ecb42b7</id>
<content type='text'>
Problem: The brick process sometimes crashes while a brick is being
         stopped and brick mux is enabled.

Solution: The brick process was crashing because the rpc connection was
          not cleaned up properly while brick mux is enabled. With this
          patch, after sending the GF_EVENT_CLEANUP notification to the
          xlator (server), we wait until all rpc client connections for
          that specific xlator are destroyed. Once the rpc connections of
          all clients associated with the brick are destroyed in
          server_rpc_notify, xlator_mem_cleanup is called for the brick
          xlator as well as all child xlators. To avoid races during
          cleanup, two new flags are introduced in each xlator:
          cleanup_starting and call_cleanup.

BUG: 1544090
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;

Note: All test cases were run in a separate build (https://review.gluster.org/#/c/19700/)
      with the same patch after forcefully enabling brick mux; all test
      cases passed.

Change-Id: Ic4ab9c128df282d146cf1135640281fcb31997bf
updates: bz#1544090
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem: The brick process sometimes crashes while a brick is being
         stopped and brick mux is enabled.

Solution: The brick process was crashing because the rpc connection was
          not cleaned up properly while brick mux is enabled. With this
          patch, after sending the GF_EVENT_CLEANUP notification to the
          xlator (server), we wait until all rpc client connections for
          that specific xlator are destroyed. Once the rpc connections of
          all clients associated with the brick are destroyed in
          server_rpc_notify, xlator_mem_cleanup is called for the brick
          xlator as well as all child xlators. To avoid races during
          cleanup, two new flags are introduced in each xlator:
          cleanup_starting and call_cleanup.

BUG: 1544090
Signed-off-by: Mohit Agrawal &lt;moagrawa@redhat.com&gt;

Note: All test cases were run in a separate build (https://review.gluster.org/#/c/19700/)
      with the same patch after forcefully enabling brick mux; all test
      cases passed.

Change-Id: Ic4ab9c128df282d146cf1135640281fcb31997bf
updates: bz#1544090
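
A minimal sketch (illustrative field and function names, not the actual
patch) of how the two cleanup flags can keep other code paths, such as a
statedump, from touching an xlator whose memory is being torn down:

    /* illustrative flags carried by each xlator */
    struct example_xlator {
        int cleanup_starting;   /* set once GF_EVENT_CLEANUP handling begins */
        int call_cleanup;       /* set when xlator_mem_cleanup may run */
    };

    static void example_begin_cleanup(struct example_xlator *xl)
    {
        xl-&gt;cleanup_starting = 1;   /* readers must now skip this xlator */
    }

    static void example_dump_xlator(struct example_xlator *xl)
    {
        if (xl-&gt;cleanup_starting || xl-&gt;call_cleanup)
            return;                 /* memory may already be freed */
        /* ... dump xlator state ... */
    }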
</pre>
</div>
</content>
</entry>
<entry>
<title>statedump: sanity check of mem_acct and rec for xlator</title>
<updated>2018-01-31T13:05:42+00:00</updated>
<author>
<name>Kinglong Mee</name>
<email>mijinlong@open-fs.com</email>
</author>
<published>2018-01-29T08:07:50+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=85f1d5444735c34ec8da23dc014ed99b6706577c'/>
<id>85f1d5444735c34ec8da23dc014ed99b6706577c</id>
<content type='text'>
With memory accounting disabled, glusterfs crashes when doing a statedump at:

0  0x00007fe24cff543a in gf_proc_dump_xlator_mem_info_only_in_use (xl=0x7fe23e44dc00) at statedump.c:269
1  0x00007fe24cff6310 in gf_proc_dump_oldgraph_xlator_info (top=0x7fe23e44dc00) at statedump.c:530
2  0x00007fe24cff7114 in gf_proc_dump_info (signum=10, ctx=0x7fe24ac0e000) at statedump.c:845
3  0x00007fe24d4d4bab in glusterfs_sigwaiter (arg=0x7ffc6c080750) at glusterfsd.c:2109
4  0x00007fe24bbd5dc5 in start_thread () from /lib64/libpthread.so.0
5  0x00007fe24b51a73d in clone () from /lib64/libc.so.6

(gdb) p xl-&gt;mem_acct
$1 = (struct mem_acct *) 0x0
(gdb) p xl-&gt;mem_acct-&gt;rec
$2 = 0x10

Change-Id: I10858170431311833ae01224d51c66caaad5e9a3
BUG: 1539603
Signed-off-by: Kinglong Mee &lt;mijinlong@open-fs.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
With memory accounting disabled, glusterfs crashes when doing a statedump at:

0  0x00007fe24cff543a in gf_proc_dump_xlator_mem_info_only_in_use (xl=0x7fe23e44dc00) at statedump.c:269
1  0x00007fe24cff6310 in gf_proc_dump_oldgraph_xlator_info (top=0x7fe23e44dc00) at statedump.c:530
2  0x00007fe24cff7114 in gf_proc_dump_info (signum=10, ctx=0x7fe24ac0e000) at statedump.c:845
3  0x00007fe24d4d4bab in glusterfs_sigwaiter (arg=0x7ffc6c080750) at glusterfsd.c:2109
4  0x00007fe24bbd5dc5 in start_thread () from /lib64/libpthread.so.0
5  0x00007fe24b51a73d in clone () from /lib64/libc.so.6

(gdb) p xl-&gt;mem_acct
$1 = (struct mem_acct *) 0x0
(gdb) p xl-&gt;mem_acct-&gt;rec
$2 = 0x10

Change-Id: I10858170431311833ae01224d51c66caaad5e9a3
BUG: 1539603
Signed-off-by: Kinglong Mee &lt;mijinlong@open-fs.com&gt;
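
A minimal sketch (illustrative helper name) of the kind of sanity check
that prevents this crash when memory accounting is disabled:

    /* with accounting off, xl-&gt;mem_acct is NULL, so guard before use */
    static void example_dump_mem_info(xlator_t *xl)
    {
        if (!xl || !xl-&gt;mem_acct || !xl-&gt;mem_acct-&gt;rec)
            return;     /* nothing to dump without accounting records */
        /* ... walk xl-&gt;mem_acct-&gt;rec and print the in-use types ... */
    }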
</pre>
</div>
</content>
</entry>
<entry>
<title>mem-pool: count allocations done per user-pool</title>
<updated>2017-08-29T19:14:04+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-08-28T22:17:03+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=b3c068ccd9125ffdfb6fbb3d2728f16ff8dda2eb'/>
<id>b3c068ccd9125ffdfb6fbb3d2728f16ff8dda2eb</id>
<content type='text'>
Count the active allocations per 'struct mem_pool'. These are the
objects that the calling component allocated and free'd in the memory
pool for this specific type. Having this count in the statedump will
make it easy to find memory leaks.

Updates: #307
Change-Id: I797fabab86f104e49338c00e449a7d0b0d270004
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18074
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Count the active allocations per 'struct mem_pool'. These are the
objects that the calling component allocated and free'd in the memory
pool for this specific type. Having this count in the statedump will
make it easy to find memory leaks.

Updates: #307
Change-Id: I797fabab86f104e49338c00e449a7d0b0d270004
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18074
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
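
A minimal sketch, with illustrative names, of counting active allocations
per pool so the statedump can report objects handed out but not yet
returned:

    /* hypothetical counter kept in the pool; locking omitted for brevity */
    void *example_pool_get(struct example_pool *pool)
    {
        void *obj = example_take_object(pool);   /* hypothetical */
        if (obj)
            pool-&gt;active_count++;                /* one more outstanding object */
        return obj;
    }

    void example_pool_put(struct example_pool *pool, void *obj)
    {
        pool-&gt;active_count--;                    /* object returned to the pool */
        example_return_object(pool, obj);        /* hypothetical */
    }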
</pre>
</div>
</content>
</entry>
<entry>
<title>mem-pool: track glusterfs_ctx_t in struct mem_pool</title>
<updated>2017-08-29T12:37:40+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-08-28T22:16:22+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=ea8c9af0b4a91ef927bbeee9afdfa7d1cea6369f'/>
<id>ea8c9af0b4a91ef927bbeee9afdfa7d1cea6369f</id>
<content type='text'>
In order to generate statedumps per glusterfs_ctx_t, all the memory
pools need to be placed in a structure that the context can reach.
The 'struct mem_pool' has been extended with a 'list_head owner' that is
linked into the glusterfs_ctx_t-&gt;mempool_list.

All callers of mem_pool_new() have been updated to pass the current
glusterfs_ctx_t along. This context is needed to add the new memory pool
to the list and for grabbing the ctx-&gt;lock while updating the
glusterfs_ctx_t-&gt;mempool_list.

Updates: #307
Change-Id: Ia9384424d8d1630ef3efc9d5d523bf739c356c6e
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18075
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
In order to generate statedumps per glusterfs_ctx_t, all the memory
pools need to be placed in a structure that the context can reach.
The 'struct mem_pool' has been extended with a 'list_head owner' that is
linked into the glusterfs_ctx_t-&gt;mempool_list.

All callers of mem_pool_new() have been updated to pass the current
glusterfs_ctx_t along. This context is needed to add the new memory pool
to the list and for grabbing the ctx-&gt;lock while updating the
glusterfs_ctx_t-&gt;mempool_list.

Updates: #307
Change-Id: Ia9384424d8d1630ef3efc9d5d523bf739c356c6e
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18075
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
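
A minimal sketch (simplified, not the exact patch) of linking a new pool
into the context's mempool_list while holding the context lock:

    struct mem_pool *example_mem_pool_new(glusterfs_ctx_t *ctx, unsigned long sizeof_type)
    {
        struct mem_pool *pool = example_allocate_pool(sizeof_type);  /* hypothetical */
        if (!pool)
            return NULL;

        INIT_LIST_HEAD(&amp;pool-&gt;owner);
        LOCK(&amp;ctx-&gt;lock);
        {
            list_add(&amp;pool-&gt;owner, &amp;ctx-&gt;mempool_list);
        }
        UNLOCK(&amp;ctx-&gt;lock);

        return pool;
    }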
</pre>
</div>
</content>
</entry>
<entry>
<title>statedump: add support for dumping basic mem-pool info</title>
<updated>2017-08-28T12:40:33+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-08-06T15:21:51+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=c96cb9fc28e4358c5d7246ce77b676113a63ce85'/>
<id>c96cb9fc28e4358c5d7246ce77b676113a63ce85</id>
<content type='text'>
With all the new 'struct mem_pool' infrastructure in place, it is now
possible to fetch details about the memory pools that a glusterfs_ctx_t
uses.

This only captures the information from 'struct mem_pool', and not from
the global 'struct mem_pool_shared' or the pool_sweeper thread. The
current details help with detecting memory leaks.

Updates: #307
Change-Id: Idbc5ba136df50863e1e380b448061509896f2c23
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18076
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
With all the new 'struct mem_pool' infrastructure in place, it is now
possible to fetch details about the memory pools that a glusterfs_ctx_t
uses.

This only captures the information from 'struct mem_pool', and not from
the global 'struct mem_pool_shared' or the pool_sweeper thread. The
current details help with detecting memory leaks.

Updates: #307
Change-Id: Idbc5ba136df50863e1e380b448061509896f2c23
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18076
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
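
A minimal sketch, with illustrative field names, of how a statedump
routine can walk the per-context pool list and print basic details:

    void example_dump_mempools(glusterfs_ctx_t *ctx)
    {
        struct mem_pool *pool = NULL;

        LOCK(&amp;ctx-&gt;lock);
        {
            list_for_each_entry(pool, &amp;ctx-&gt;mempool_list, owner) {
                /* printed fields are illustrative; the real dump may differ */
                gf_proc_dump_write("name", "%s", pool-&gt;name);
                gf_proc_dump_write("sizeof-type", "%lu", pool-&gt;sizeof_type);
            }
        }
        UNLOCK(&amp;ctx-&gt;lock);
    }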
</pre>
</div>
</content>
</entry>
<entry>
<title>core: add more information on dictionary usage</title>
<updated>2017-06-05T12:44:28+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2017-05-30T08:57:16+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=d7105ba1652e548d9ba893e05f3d1fa29e8ee3b1'/>
<id>d7105ba1652e548d9ba893e05f3d1fa29e8ee3b1</id>
<content type='text'>
When you take a 'statedump', it shows output like the example below:

-----
[dict]
max-number-of-dict-pairs=13
total-pairs-used=41613
total-dict-used=12629
average-pairs-per-dict=3
------

Updates #220

Change-Id: I71a7eda3a3cd23edf4483234f22f983923bbb081
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-on: https://review.gluster.org/4035
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When you take a 'statedump', it shows output like the example below:

-----
[dict]
max-number-of-dict-pairs=13
total-pairs-used=41613
total-dict-used=12629
average-pairs-per-dict=3
------

Updates #220

Change-Id: I71a7eda3a3cd23edf4483234f22f983923bbb081
Signed-off-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-on: https://review.gluster.org/4035
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
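
A minimal sketch, with hypothetical counter names, of the bookkeeping that
can produce the [dict] section above (counters updated as dictionaries are
accounted, then printed by the statedump):

    static uint64_t total_dicts_used;     /* dictionaries seen so far */
    static uint64_t total_pairs_used;     /* pairs across all of them */
    static uint32_t max_pairs_per_dict;   /* largest single dictionary */

    static void example_dict_account(uint32_t npairs)
    {
        total_dicts_used++;
        total_pairs_used += npairs;
        if (npairs &gt; max_pairs_per_dict)
            max_pairs_per_dict = npairs;
    }

    /* average-pairs-per-dict is then total_pairs_used / total_dicts_used */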
</pre>
</div>
</content>
</entry>
<entry>
<title>libglusterfs: make memory pools more thread-friendly</title>
<updated>2017-02-02T18:30:19+00:00</updated>
<author>
<name>Jeff Darcy</name>
<email>jdarcy@redhat.com</email>
</author>
<published>2016-10-14T14:04:07+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=ae47befebeda2de5fd2d706090cbacf4ef60c785'/>
<id>ae47befebeda2de5fd2d706090cbacf4ef60c785</id>
<content type='text'>
Early multiplexing tests revealed *massive* contention on certain
pools' global locks - especially for dictionaries and secondarily for
call stubs.  For the thread counts that multiplexing can create, a
more lock-free solution is clearly needed.  Also, the current mem-pool
implementation does a poor job releasing memory back to the system,
artificially inflating memory usage to match whatever the worst case
was since the process started.  This is bad in general, but especially
so for multiplexing where there are more pools and a major point of
the whole exercise is to reduce memory consumption.

The basic ideas for the new design are these:

  There is one pool, globally, for each power-of-two size range.
  Every attempt to create a new pool within this range will instead
  add a reference to the existing pool.

  Instead of adding pools for each translator within each multiplexed
  brick (potentially infinite and quite possibly thousands), we
  allocate one set of size-based pools per *thread* (hundreds at
  worst).

  Each per-thread pool is divided into hot and cold lists.  Every
  allocation first attempts to use the hot list, then the cold list.
  When objects are freed, they always go on the hot list.

  There is one global "pool sweeper" thread, which periodically
  reclaims everything in each pool's cold list and then "demotes" the
  current hot list to be the new cold list.

  For normal allocation activity, only a per-thread lock need be
  taken, and even that only to guard against very rare contention from
  the pool sweeper.  When threads start and stop, a global lock must
  be taken to add them to the pool sweeper's list.  Lock contention is
  therefore extremely low, and the hot/cold lists also provide good
  locality.

A more complete explanation (of a similar earlier design) can be found
here:

 http://www.gluster.org/pipermail/gluster-devel/2016-October/051160.html

Change-Id: I5bc8a1ba57cfb553998f979a498886e0d006e665
BUG: 1385758
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/15645
Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Early multiplexing tests revealed *massive* contention on certain
pools' global locks - especially for dictionaries and secondarily for
call stubs.  For the thread counts that multiplexing can create, a
more lock-free solution is clearly needed.  Also, the current mem-pool
implementation does a poor job releasing memory back to the system,
artificially inflating memory usage to match whatever the worst case
was since the process started.  This is bad in general, but especially
so for multiplexing where there are more pools and a major point of
the whole exercise is to reduce memory consumption.

The basic ideas for the new design are these:

  There is one pool, globally, for each power-of-two size range.
  Every attempt to create a new pool within this range will instead
  add a reference to the existing pool.

  Instead of adding pools for each translator within each multiplexed
  brick (potentially infinite and quite possibly thousands), we
  allocate one set of size-based pools per *thread* (hundreds at
  worst).

  Each per-thread pool is divided into hot and cold lists.  Every
  allocation first attempts to use the hot list, then the cold list.
  When objects are freed, they always go on the hot list.

  There is one global "pool sweeper" thread, which periodically
  reclaims everything in each pool's cold list and then "demotes" the
  current hot list to be the new cold list.

  For normal allocation activity, only a per-thread lock need be
  taken, and even that only to guard against very rare contention from
  the pool sweeper.  When threads start and stop, a global lock must
  be taken to add them to the pool sweeper's list.  Lock contention is
  therefore extremely low, and the hot/cold lists also provide good
  locality.

A more complete explanation (of a similar earlier design) can be found
here:

 http://www.gluster.org/pipermail/gluster-devel/2016-October/051160.html

Change-Id: I5bc8a1ba57cfb553998f979a498886e0d006e665
BUG: 1385758
Signed-off-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Reviewed-on: https://review.gluster.org/15645
Reviewed-by: Xavier Hernandez &lt;xhernandez@datalab.es&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Shyamsundar Ranganathan &lt;srangana@redhat.com&gt;
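
A minimal sketch, with illustrative names only, of the per-thread hot/cold
lists and the sweeper's demotion step described above:

    /* simplified per-thread pool for one power-of-two size class */
    struct example_per_thread_pool {
        pthread_spinlock_t lock;   /* taken by the owner thread and the sweeper */
        struct list_head   hot;    /* objects freed since the last sweep */
        struct list_head   cold;   /* untouched since the last sweep */
    };

    /* allocation: try hot first, then cold, else fall back to malloc */
    /* free: always push onto the hot list */

    static void example_sweep_one(struct example_per_thread_pool *tp)
    {
        pthread_spin_lock(&amp;tp-&gt;lock);
        example_release_to_system(&amp;tp-&gt;cold);      /* hypothetical */
        list_splice_init(&amp;tp-&gt;hot, &amp;tp-&gt;cold);     /* demote hot to cold */
        pthread_spin_unlock(&amp;tp-&gt;lock);
    }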
</pre>
</div>
</content>
</entry>
<entry>
<title>libglusterfs: fix glusterd statedump crash</title>
<updated>2016-08-04T15:44:09+00:00</updated>
<author>
<name>Atin Mukherjee</name>
<email>amukherj@redhat.com</email>
</author>
<published>2016-05-31T11:14:48+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=049c91565dddb622b8902ccfeb36c0d414c609e1'/>
<id>049c91565dddb622b8902ccfeb36c0d414c609e1</id>
<content type='text'>
commit 3c04a91 removed setting typeStr to NULL if num_allocs is set to 0,
which caused this regression. The code has been put back as it was earlier,
and, to avoid the statedump printing all the NULL values, the check has been
modified to skip records whose num_allocs is 0 instead of checking total_allocs.

Change-Id: Ib8bcc2fba908e88cf52b641c3f6bcba74f5e667c
BUG: 1359190
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14987
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit 3c04a91 removed setting typeStr to NULL if num_allocs is set to 0,
which caused this regression. The code has been put back as it was earlier,
and, to avoid the statedump printing all the NULL values, the check has been
modified to skip records whose num_allocs is 0 instead of checking total_allocs.

Change-Id: Ib8bcc2fba908e88cf52b641c3f6bcba74f5e667c
BUG: 1359190
Signed-off-by: Atin Mukherjee &lt;amukherj@redhat.com&gt;
Reviewed-on: http://review.gluster.org/14987
NetBSD-regression: NetBSD Build System &lt;jenkins@build.gluster.org&gt;
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: N Balachandran &lt;nbalacha@redhat.com&gt;
Reviewed-by: Prashanth Pai &lt;ppai@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
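
A minimal sketch (the real loop in statedump.c differs in detail) of the
adjusted check, skipping records by num_allocs rather than total_allocs:

    for (i = 0; i &lt; xl-&gt;mem_acct-&gt;num_types; i++) {
        if (xl-&gt;mem_acct-&gt;rec[i].num_allocs == 0)
            continue;   /* nothing currently allocated; skip the record */
        /* ... print the type name, sizes and counts for rec[i] ... */
    }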
</pre>
</div>
</content>
</entry>
</feed>
