<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/libglusterfs/src/mem-pool.c, branch v4.1.7</title>
<subtitle>GlusterFS is a distributed file-system capable of scaling to several petabytes. It aggregates various storage bricks over InfiniBand RDMA or TCP/IP interconnect into one large parallel network file system.</subtitle>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/'/>
<entry>
<title>core/memacct: save allocs in mem_acct_rec list</title>
<updated>2017-12-07T09:27:27+00:00</updated>
<author>
<name>N Balachandran</name>
<email>nbalacha@redhat.com</email>
</author>
<published>2017-12-06T08:53:06+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=47d01546a1826dc14a8331ea8700015f1cfdc4db'/>
<id>47d01546a1826dc14a8331ea8700015f1cfdc4db</id>
<content type='text'>
With configure --enable-debug, add all object allocations
to a list in the corresponding mem_acct_rec. This
lets us see all objects of a particular type and
enables additional debugging in case of memory
leaks.

This is not compiled in by default and must be explicitly
enabled. It is intended to be used by developers.
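
A rough, self-contained sketch of the idea (a simplified model, not the
committed code; all identifiers below are invented for illustration):

  #include &lt;pthread.h&gt;
  #include &lt;stdlib.h&gt;

  /* simplified stand-ins for the accounting record and object header */
  struct obj_hdr {
      struct obj_hdr *next;       /* link to the other live objects */
      size_t          size;
  };

  struct acct_rec {
      pthread_mutex_t lock;
      struct obj_hdr *objs;       /* all live objects of this type */
  };

  #ifdef DEBUG
  /* with --enable-debug, every allocation is linked into its record */
  static void *acct_alloc (struct acct_rec *rec, size_t size)
  {
      struct obj_hdr *hdr = calloc (1, sizeof (*hdr) + size);

      if (!hdr)
          return NULL;
      hdr-&gt;size = size;
      pthread_mutex_lock (&amp;rec-&gt;lock);
      hdr-&gt;next = rec-&gt;objs;      /* prepend to the per-type list */
      rec-&gt;objs = hdr;
      pthread_mutex_unlock (&amp;rec-&gt;lock);
      return hdr + 1;             /* the caller's usable memory */
  }
  #endif

Walking the list then shows every live object of the type, which is
what makes leak hunting possible.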

Change-Id: I7cf2dbeadecf994423d7e7591e85f18d2575cce8
BUG: 1522662
Signed-off-by: N Balachandran &lt;nbalacha@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: Verify pool pointer before destroying it</title>
<updated>2017-09-29T12:36:23+00:00</updated>
<author>
<name>Akarsha Rai</name>
<email>akrai@redhat.com</email>
</author>
<published>2017-09-28T11:18:56+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=9af20af096a14c6297ca8f89697f2a9e4e83bd8f'/>
<id>9af20af096a14c6297ca8f89697f2a9e4e83bd8f</id>
<content type='text'>
Problem: The current code does not check whether the pool pointer is NULL.

Solution: Updated the code to verify the pool pointer before destroying it.
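
A minimal, self-contained sketch of the guard (simplified model; the
mem_pool internals are stubbed out here):

  #include &lt;stdlib.h&gt;

  /* simplified stand-in for struct mem_pool */
  struct mem_pool_sketch { char *name; };

  /* the fix: destroying a NULL pool becomes a harmless no-op */
  static void mem_pool_destroy_sketch (struct mem_pool_sketch *pool)
  {
      if (!pool)
          return;
      free (pool-&gt;name);
      free (pool);
  }

  int main (void)
  {
      mem_pool_destroy_sketch (NULL);  /* previously a crash, now safe */
      return 0;
  }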

Bug: 1496675
Change-Id: Ie1f2de4e4204fde15d2b1e3a966ea4c9e7b41534
Signed-off-by: Akarsha Rai &lt;akrai@redhat.com&gt;
</content>
</entry>
<entry>
<title>mempool: fix code when GF_DISABLE_MEMPOOL is defined</title>
<updated>2017-09-02T02:21:15+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-08-18T15:12:05+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=3737ed53caad69ddb0f5b3db2e3498c2d7df2dff'/>
<id>3737ed53caad69ddb0f5b3db2e3498c2d7df2dff</id>
<content type='text'>
Problem: Run-time crash is observed when attempting to memset() a zero
length buffer.

Solution: When GF_DISABLE_MEMPOOL is set, mem_get() gets translated to a
GF_MALLOC(). The size of the allocation does not need to relate to the
available (but uninitialized) global memory pools. It is fine to
allocate the exact amount of memory that was configured when the
mem-pool was created.
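
A simplified, self-contained model of this behaviour (identifiers with
a _sketch suffix are invented for the example):

  #include &lt;stdlib.h&gt;
  #include &lt;string.h&gt;

  #define GF_DISABLE_MEMPOOL 1

  /* only the size configured at mem_pool_new() time matters here */
  struct mem_pool_sketch { size_t sizeof_type; };

  static void *mem_get_sketch (struct mem_pool_sketch *pool)
  {
  #ifdef GF_DISABLE_MEMPOOL
      /* allocate exactly what the pool was configured for, without
         consulting the (uninitialized) global pools */
      return malloc (pool-&gt;sizeof_type);
  #else
      return NULL;    /* pool-based path elided in this sketch */
  #endif
  }

  /* a zeroing variant can then memset() a correctly sized buffer */
  static void *mem_get0_sketch (struct mem_pool_sketch *pool)
  {
      void *ptr = mem_get_sketch (pool);

      if (ptr)
          memset (ptr, 0, pool-&gt;sizeof_type);
      return ptr;
  }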

Change-Id: Iea0bff974bb771623a34d7a940e10cb0db0f90e1
BUG: 1481199
Reported-by: Milind Changire &lt;mchangir@redhat.com&gt;
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18034
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</content>
</entry>
<entry>
<title>gfapi: adds a glfs_mem_header for exported memory</title>
<updated>2017-09-01T15:32:55+00:00</updated>
<author>
<name>Kinglong Mee</name>
<email>mijinlong@open-fs.com</email>
</author>
<published>2017-08-30T09:54:09+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=d7ccdb33c2e84bab25bf0898866104f8a85b4217'/>
<id>d7ccdb33c2e84bab25bf0898866104f8a85b4217</id>
<content type='text'>
glfs_free releases different types of data depends on memory type.
Drop the depends of memory type of memory accounting,
new macro GLFS_CALLOC/GLFS_MALLOC/GLFS_REALLOC/GLFS_FREE are added
to support assign release function dynamically, it adds a separate
memory header named glfs_mem_header for gfapi.
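
A self-contained sketch of such a header (field names and the magic
value are illustrative, not necessarily the committed layout):

  #include &lt;stdlib.h&gt;

  typedef void (*release_fn_t) (void *ptr);

  struct glfs_mem_header_sketch {
      unsigned int  magic;      /* guards against foreign pointers */
      size_t        size;
      release_fn_t  release;    /* assigned per allocation type */
  };

  #define GLFS_MEM_MAGIC_SKETCH 0x1234abcd  /* arbitrary for this sketch */

  /* GLFS_MALLOC-style: prepend the header, record the releaser */
  static void *glfs_malloc_sketch (size_t size, release_fn_t release)
  {
      struct glfs_mem_header_sketch *hdr = malloc (sizeof (*hdr) + size);

      if (!hdr)
          return NULL;
      hdr-&gt;magic = GLFS_MEM_MAGIC_SKETCH;
      hdr-&gt;size = size;
      hdr-&gt;release = release;
      return hdr + 1;
  }

  /* GLFS_FREE-style: call whatever release function was stored */
  static void glfs_free_sketch (void *ptr)
  {
      struct glfs_mem_header_sketch *hdr =
          (struct glfs_mem_header_sketch *) ptr - 1;

      if (hdr-&gt;release)
          hdr-&gt;release (ptr);   /* type-specific cleanup first */
      free (hdr);
  }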

Updates: #312
Change-Id: Ie608e5227cbaa05d3f4681a515e83a50d5b17c3f
Signed-off-by: Kinglong Mee &lt;mijinlong@open-fs.com&gt;
Reviewed-on: https://review.gluster.org/18092
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Tested-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
</content>
</entry>
<entry>
<title>mem-pool: count allocations done per user-pool</title>
<updated>2017-08-29T19:14:04+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-08-28T22:17:03+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=b3c068ccd9125ffdfb6fbb3d2728f16ff8dda2eb'/>
<id>b3c068ccd9125ffdfb6fbb3d2728f16ff8dda2eb</id>
<content type='text'>
Count the active allocations per 'struct mem_pool'. These are the
objects that the calling component allocated, but has not yet free'd,
in the memory pool for this specific type. Having this count in the
statedump will make it easy to find memory leaks.
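
A simplified, self-contained model of the counting (the real code keeps
the counter in 'struct mem_pool'; everything below is invented for the
example):

  #include &lt;pthread.h&gt;
  #include &lt;stdlib.h&gt;

  struct mem_pool_sketch {
      pthread_mutex_t lock;
      unsigned long   active;   /* mem_get()s minus mem_put()s */
      size_t          sizeof_type;
  };

  static void *mem_get_sketch (struct mem_pool_sketch *pool)
  {
      void *ptr = malloc (pool-&gt;sizeof_type);

      if (!ptr)
          return NULL;
      pthread_mutex_lock (&amp;pool-&gt;lock);
      pool-&gt;active++;           /* object handed out to the caller */
      pthread_mutex_unlock (&amp;pool-&gt;lock);
      return ptr;
  }

  static void mem_put_sketch (struct mem_pool_sketch *pool, void *ptr)
  {
      pthread_mutex_lock (&amp;pool-&gt;lock);
      pool-&gt;active--;           /* object returned by the caller */
      pthread_mutex_unlock (&amp;pool-&gt;lock);
      free (ptr);
  }

A statedump that prints 'active' per pool then exposes leaks as counts
that keep growing instead of returning to zero.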

Updates: #307
Change-Id: I797fabab86f104e49338c00e449a7d0b0d270004
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18074
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</content>
</entry>
<entry>
<title>mem-pool: track glusterfs_ctx_t in struct mem_pool</title>
<updated>2017-08-29T12:37:40+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-08-28T22:16:22+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=ea8c9af0b4a91ef927bbeee9afdfa7d1cea6369f'/>
<id>ea8c9af0b4a91ef927bbeee9afdfa7d1cea6369f</id>
<content type='text'>
In order to generate statedumps per glusterfs_ctx_t, all the memory
pools need to be placed in a structure that the context can reach.
The 'struct mem_pool' has been extended with a 'list_head owner' that is
linked with the glusterfs_ctx_t-&gt;mempool_list.

All callers of mem_pool_new() have been updated to pass the current
glusterfs_ctx_t along. This context is needed to add the new memory pool
to the list and to grab the ctx-&gt;lock while updating the
glusterfs_ctx_t-&gt;mempool_list.
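
A self-contained sketch of the registration (the real code links a
'struct list_head owner' with list macros; a singly linked list keeps
this example short):

  #include &lt;pthread.h&gt;
  #include &lt;stdlib.h&gt;

  struct mem_pool_sketch {
      struct mem_pool_sketch *next;   /* stand-in for the 'owner' link */
  };

  struct glusterfs_ctx_sketch {
      pthread_mutex_t         lock;
      struct mem_pool_sketch *mempool_list;
  };

  /* mem_pool_new()-style registration: the caller passes the ctx and
     the new pool is linked in under ctx-&gt;lock */
  static struct mem_pool_sketch *
  mem_pool_new_sketch (struct glusterfs_ctx_sketch *ctx)
  {
      struct mem_pool_sketch *pool = calloc (1, sizeof (*pool));

      if (!pool)
          return NULL;
      pthread_mutex_lock (&amp;ctx-&gt;lock);
      pool-&gt;next = ctx-&gt;mempool_list;
      ctx-&gt;mempool_list = pool;       /* now reachable for statedumps */
      pthread_mutex_unlock (&amp;ctx-&gt;lock);
      return pool;
  }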

Updates: #307
Change-Id: Ia9384424d8d1630ef3efc9d5d523bf739c356c6e
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18075
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</content>
</entry>
<entry>
<title>mem-pool: add tracking of mem_pool that requested the allocation</title>
<updated>2017-08-28T12:46:16+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-08-04T14:29:51+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=2645e730b79b44fc035170657e43bb52f3e855c5'/>
<id>2645e730b79b44fc035170657e43bb52f3e855c5</id>
<content type='text'>
This renames the current 'struct mem_pool' to 'struct mem_pool_shared'.
The mem_pool_shared is globally allocated and not specific to
particular objects.

A new 'struct mem_pool' gets allocated when mem_pool_new() is called. It
points to the mem_pool_shared that handles the actual allocation
requests. The 'struct mem_pool' is only used for accounting of the
objects that the caller requested and free'd.

All of these changes will be used to collect all the memory pools a
glusterfs_ctx_t is consuming, so that statedumps can be collected per
context.
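
A compact, self-contained sketch of the split (fields are illustrative,
not the committed layout):

  #include &lt;stddef.h&gt;

  /* global, shared by every caller: does the actual allocations */
  struct mem_pool_shared_sketch {
      size_t power_of_two;    /* pools are grouped by size class */
  };

  /* per-caller, returned by mem_pool_new(): accounting only */
  struct mem_pool_sketch {
      const char                    *name;
      size_t                         sizeof_type;
      unsigned long                  active;   /* requested minus free'd */
      struct mem_pool_shared_sketch *pool;     /* where requests go */
  };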

Updates: #307
Change-Id: I6355d3f0251c928e0bbfc71be3431307c6f3a3da
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/18073
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</content>
</entry>
<entry>
<title>mem-pool: track and verify initialization state</title>
<updated>2017-07-28T11:27:10+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-07-26T14:16:11+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=b5fa5ae05f73e03023db37e43fb203267b719160'/>
<id>b5fa5ae05f73e03023db37e43fb203267b719160</id>
<content type='text'>
It is possible that pthread_getspecific() returns a non-NULL value in
case the pthread_key_t is not initialized. The behaviour for
pthread_getspecific() is not defined in this case. This can happen when
applications use mem-pools from libglusterfs.so, but did not call
mem_pools_init_early().

By tracking the status of the mem-pools initialization, it is now
possible to prevent calling pthread_getspecific() in case the
pthread_key_t is not initialized. In the future, we might want to
extend this further to facilitate debugging.
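
A minimal, self-contained sketch of the guard (the real code tracks a
richer state; the names below are invented):

  #include &lt;pthread.h&gt;
  #include &lt;stddef.h&gt;

  static pthread_key_t pool_key;
  static int           pool_key_initialized;  /* the tracked state */

  static void mem_pools_init_early_sketch (void)
  {
      if (!pool_key_initialized &amp;&amp;
          pthread_key_create (&amp;pool_key, NULL) == 0)
          pool_key_initialized = 1;
  }

  static void *get_pool_list_sketch (void)
  {
      /* pthread_getspecific() on an uninitialized key is undefined
         behaviour, so refuse instead of risking a bogus pointer */
      if (!pool_key_initialized)
          return NULL;
      return pthread_getspecific (pool_key);
  }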

Reported-by: Kotresh HR &lt;khiremat@redhat.com&gt;
Tested-by: Jiffin Tony Thottan &lt;jthottan@redhat.com&gt;
Change-Id: I6255419fe05792dc78b1eaff55bc008fc5ff3933
Fixes: 1e8e62640 ("mem-pool: initialize pthread_key_t pool_key in mem_pool_init_early()")
BUG: 1475255
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17899
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: jiffin tony Thottan &lt;jthottan@redhat.com&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</content>
</entry>
<entry>
<title>mem-pool: free objects from pools on mem_pools_fini()</title>
<updated>2017-07-20T11:35:23+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-07-13T11:44:19+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=8a09d78076cf506f0750cccd63cc983496473cf3'/>
<id>8a09d78076cf506f0750cccd63cc983496473cf3</id>
<content type='text'>
When using a minimal gfapi application that only initializes a small
graph (sink, shard and meta xlators) the following memory leaks are
reported by Valgrind:

  HEAP SUMMARY:
      in use at exit: 322,976 bytes in 75 blocks
    total heap usage: 684 allocs, 609 frees, 2,092,116 bytes allocated

With this change, the mem-pools are cleaned up when mem_pools_fini()
is called and the objects in the pool are free'd.

  HEAP SUMMARY:
      in use at exit: 315,265 bytes in 58 blocks
    total heap usage: 684 allocs, 626 frees, 2,092,079 bytes allocated

This information was gathered with `./run-xlator.sh features/shard` that
comes with `gfapi-load-volfile` from gluster-debug-tools.

While working on the free'ing of the per_thread_pool_list_t structures,
it became apparent that GF_CALLOC() in mem_get_pool_list() gets
redirected to a standard calloc() without prepending the Gluster
specific memory header. This is because mem_pools_init() gets called
before THIS-&gt;ctx is valid, so it is not possible to check if memory
accounting is enabled or not. Because of this, the GF_CALLOC() call in
mem_get_pool_list() has been replaced by CALLOC() to prevent potential
mismatches between the allocation/free'ing of per_thread_pool_list_t
structures.
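
A simplified, self-contained model of the cleanup (list layout invented
for the example):

  #include &lt;stdlib.h&gt;

  struct pooled_obj_sketch { struct pooled_obj_sketch *next; };

  struct pool_list_sketch {
      struct pool_list_sketch  *next;
      struct pooled_obj_sketch *cold;  /* objects cached for reuse */
  };

  static struct pool_list_sketch *all_pool_lists;

  static void mem_pools_fini_sketch (void)
  {
      struct pool_list_sketch *pl = all_pool_lists;

      while (pl) {
          struct pool_list_sketch  *next_pl = pl-&gt;next;
          struct pooled_obj_sketch *obj = pl-&gt;cold;

          while (obj) {                /* free the cached objects */
              struct pooled_obj_sketch *next = obj-&gt;next;
              free (obj);
              obj = next;
          }
          /* the list itself was allocated with plain CALLOC(), so a
             plain free() is the matching release */
          free (pl);
          pl = next_pl;
      }
      all_pool_lists = NULL;
  }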

Change-Id: Id6f558816f399b0c613d74df36deac2300b6dd98
BUG: 1470170
URL: https://github.com/gluster/gluster-debug-tools
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17768
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
Reviewed-by: soumya k &lt;skoduri@redhat.com&gt;
</content>
</entry>
<entry>
<title>mem-pool: initialize pthread_key_t pool_key in mem_pool_init_early()</title>
<updated>2017-07-19T20:18:24+00:00</updated>
<author>
<name>Niels de Vos</name>
<email>ndevos@redhat.com</email>
</author>
<published>2017-07-14T16:35:10+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=1e8e6264033669332d4cfa117faf678d7631a7b1'/>
<id>1e8e6264033669332d4cfa117faf678d7631a7b1</id>
<content type='text'>
It is not possible to call pthread_key_delete for the pool_key that is
initialized in the constructor for the memory pools. This makes it
difficult to do a full cleanup of all the resources in mem_pools_fini().
For this, the initialization of pool_key should be moved to
mem_pools_init().

However, the glusterfsd binary has a rather complex initialization
procedure. The memory pools need to get initialized partially to get
mem_get() functionality working. But, the pool_sweeper thread can get
killed in case it is started before glusterfsd daemonizes.

In order to solve this, mem_pools_init() is split into two pieces:
1. mem_pools_init_early() for initializing the basic structures
2. mem_pools_init_late() to start the pool_sweeper thread

With the split of mem_pools_init(), and placing the pthread_key_create()
in mem_pools_init_early(), it is now possible to correctly clean up
the pool_key with pthread_key_delete() in mem_pools_fini().
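
A minimal, self-contained sketch of the two-phase initialization (the
sweeper body is stubbed; names are invented):

  #include &lt;pthread.h&gt;

  static pthread_key_t pool_key;
  static pthread_t     sweeper;
  static int           key_created;

  static void *pool_sweeper_sketch (void *arg) { return arg; }

  /* 1. before daemonizing: only the basic structures */
  static void mem_pools_init_early_sketch (void)
  {
      if (!key_created &amp;&amp; pthread_key_create (&amp;pool_key, NULL) == 0)
          key_created = 1;
  }

  /* 2. after daemonizing: a background thread now survives */
  static void mem_pools_init_late_sketch (void)
  {
      pthread_create (&amp;sweeper, NULL, pool_sweeper_sketch, NULL);
  }

  /* full cleanup is possible because we own the key's lifecycle */
  static void mem_pools_fini_sketch (void)
  {
      if (key_created) {
          pthread_key_delete (pool_key);
          key_created = 0;
      }
  }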

It seems that there was no memory pool initialization in the CLI. This
has been added now as well. Without it, the CLI cannot call
mem_get() successfully, which results in a hang of the process.

Change-Id: I1de0153dfe600fd79eac7468cc070e4bd35e71dd
BUG: 1470170
Signed-off-by: Niels de Vos &lt;ndevos@redhat.com&gt;
Reviewed-on: https://review.gluster.org/17779
Smoke: Gluster Build System &lt;jenkins@build.gluster.org&gt;
CentOS-regression: Gluster Build System &lt;jenkins@build.gluster.org&gt;
Reviewed-by: Jeff Darcy &lt;jeff@pl.atyp.us&gt;
</content>
</entry>
</feed>
