<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs.git/rpc/rpc-lib, branch v8dev</title>
<subtitle>GlusterFS is a distributed file-system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system.</subtitle>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/'/>
<entry>
<title>multiple files: another attempt to remove includes</title>
<updated>2019-06-14T16:50:32+00:00</updated>
<author>
<name>Yaniv Kaul</name>
<email>ykaul@redhat.com</email>
</author>
<published>2019-06-09T10:31:31+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=0a6fe8551ac9807a8b6ad62241ec8048cf9f9025'/>
<id>0a6fe8551ac9807a8b6ad62241ec8048cf9f9025</id>
<content type='text'>
There are many include statements that are not needed.
A previous, more ambitious attempt failed because of the *BSD platforms
(see https://review.gluster.org/#/c/glusterfs/+/21929/ )

Now trying a more conservative reduction.
It does not solve all the circular dependencies we have, but it
does reduce some of them. There is just too much to handle
reasonably (dht-common.h includes dht-lock.h, which includes
dht-common.h ...), but it does reduce the overall number of include
lines we need to look at in the future to understand and fix
the mess later on.
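
A hedged illustration of the usual way such a cycle is broken (the
header and function names here are hypothetical, not the actual dht
code): when a header only needs a pointer to a type, a forward
declaration replaces the include and the cycle disappears:

    /* dht-lock.h: no #include "dht-common.h" needed */
    struct dht_local;                      /* forward declaration */
    int dht_lock_acquire(struct dht_local *local);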

Change-Id: I550cd001bdefb8be0fe67632f783c0ef6bee3f9f
updates: bz#1193929
Signed-off-by: Yaniv Kaul &lt;ykaul@redhat.com&gt;
</content>
</entry>
<entry>
<title>Fix some "Null pointer dereference" coverity issues</title>
<updated>2019-05-26T13:59:13+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-05-22T15:46:19+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=5d88111a142b3c37e92bdd36699a04fd054d27f4'/>
<id>5d88111a142b3c37e92bdd36699a04fd054d27f4</id>
<content type='text'>
This patch fixes the following CIDs:

  * 1124829
  * 1274075
  * 1274083
  * 1274128
  * 1274135
  * 1274141
  * 1274143
  * 1274197
  * 1274205
  * 1274210
  * 1274211
  * 1288801
  * 1398629

Change-Id: Ia7c86cfab3245b20777ffa296e1a59748040f558
Updates: bz#789278
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
</entry>
<entry>
<title>Revert "rpc: implement reconnect back-off strategy"</title>
<updated>2019-05-21T08:36:32+00:00</updated>
<author>
<name>Amar Tumballi</name>
<email>amarts@redhat.com</email>
</author>
<published>2019-05-21T01:37:47+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=2f50486f431b37f4a16541911f50f2042049b0e6'/>
<id>2f50486f431b37f4a16541911f50f2042049b0e6</id>
<content type='text'>
This reverts commit 59841f7e1ff0511b04884015441a181a56d07bea.

This revert is done as a 'possible' fix for frequent regression
failures, which are random in nature too (i.e., different tests fail
in different runs).

Why exactly this patch? Because it seemed like the most probable
candidate among the patches merged in the last 15 days, after which
regressions started failing more often.

Updates: bz#1711827
Change-Id: I35333162fcd4064f9609525ca93c666053c6d959
</content>
</entry>
<entry>
<title>rpc: implement reconnect back-off strategy</title>
<updated>2019-05-11T14:25:53+00:00</updated>
<author>
<name>Xavier Hernandez</name>
<email>jahernan@redhat.com</email>
</author>
<published>2018-01-19T11:18:13+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=59841f7e1ff0511b04884015441a181a56d07bea'/>
<id>59841f7e1ff0511b04884015441a181a56d07bea</id>
<content type='text'>
When a connection failure happens, gluster tries to reconnect every 3
seconds. In some cases the failure is spurious, so a delay of 3 seconds
could be unnecessarily long.

This patch implements a back-off strategy that tries a reconnect as soon
as one tenth of a second after the failure. If this fails, the delay is
doubled until it reaches about 3 seconds. After that, the reconnect is
attempted every 3 seconds as before.
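
A minimal sketch of the delay sequence described above (the function
and variable names are illustrative, not the actual rpc-clnt code):

    /* back-off: 0.1s, 0.2s, 0.4s, ... capped at 3s */
    static double
    next_reconnect_delay(double current)
    {
        if (current &lt;= 0.0)
            return 0.1;              /* first retry after 1/10 s  */
        current *= 2.0;              /* double after each failure */
        return (current &gt; 3.0) ? 3.0 : current;
    }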

Change-Id: Icb3fbe20d618f50cbbb599dce542b4e871c22149
Updates: bz#1193929
Signed-off-by: Xavier Hernandez &lt;xhernandez@redhat.com&gt;
</content>
</entry>
<entry>
<title>rpclib: slow floating point math and libm</title>
<updated>2019-04-03T04:40:15+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2019-03-28T13:36:33+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=024cafea8965086267645be0eae86bcdf5a5d106'/>
<id>024cafea8965086267645be0eae86bcdf5a5d106</id>
<content type='text'>
In release-6, rpc/rpc-lib (libgfrpc) added the function
get_rightmost_set_bit(), which calls log2(3), a call that takes
a floating point parameter.

It's used thusly:
    right_most_unset_bit = get_rightmost_set_bit(...);

(So is it really the right-most unset bit, or the right-most set bit?)

It's unclear to me whether this is in the data path or not. If it is,
it's rather scary to think about integer-to-float conversions and slow
calls to libm functions in the data path.

gcc and clang have __builtin_ctz(), which returns the same result as
get_rightmost_set_bit() and does it substantially faster. Approximately
20M iterations of get_rightmost_set_bit() took ~33sec of wall clock
time on my devel machine, while 20M iterations of __builtin_ctz()
took &lt; 9sec; get_rightmost_set_bit() is more than 3x slower.
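
A hedged sketch of the two approaches (the log2()-based body is a
reconstruction, not the actual get_rightmost_set_bit() source;
__builtin_ctz() is the documented gcc/clang builtin):

    #include &lt;math.h&gt;

    /* libm path: int -&gt; double -&gt; log2() -&gt; int */
    static int
    rightmost_set_bit_libm(unsigned int n)
    {
        return (int)log2(n &amp; -n); /* n &amp; -n isolates the lowest set bit */
    }

    /* builtin path: a single bsf/tzcnt instruction on x86 */
    static int
    rightmost_set_bit_builtin(unsigned int n)
    {
        return __builtin_ctz(n);  /* undefined for n == 0 */
    }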

And as a side benefit, we can again eliminate the need to link libgfrpc
with libm.

Change-Id: If9e7e80874577c52223f8125b385fc930de20699
updates: bz#1193929
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>mgmt/shd: Implement multiplexing in self heal daemon</title>
<updated>2019-04-01T03:44:23+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-02-25T04:35:32+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=bc3694d7cfc868a2ed6344ea123faf19fce28d13'/>
<id>bc3694d7cfc868a2ed6344ea123faf19fce28d13</id>
<content type='text'>
Problem:

The shd daemon is per node, which means it creates a graph
with all volumes on it. While this is great for utilizing
resources, it is not so good in terms of performance and manageability.

Self-heal daemons don't have the capability to automatically
reconfigure their graphs, so each time a configuration change
happens to a volume (replicate/disperse), we need to restart
shd to bring the changes into the graph.

Because of this, ongoing heals for all other volumes have to be
stopped in the middle and restarted all over again.

Solution:

This change makes shd a per-volume daemon, so that a graph
will be generated for each volume.

When we want to start/reconfigure shd for a volume, we first search
for an existing shd running on the node; if there is none, we
start a new process. If a shd daemon is already running, then
we simply detach the volume's graph and reattach the updated
graph for that volume. This won't touch any ongoing operations
for other volumes on the shd daemon.

Example of an shd graph when it is per volume:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /         |             \
                   /          |              \
              ---------    ---------      ----------
              | AFR-1 |    | AFR-2 |      |  AFR-3 |
              ---------    ---------      ----------

A running shd daemon with 3 volumes will look like this:

                           graph
                     -----------------------
                     |     debug-iostat    |
                     -----------------------
                    /           |           \
                   /            |            \
              ------------   ------------  ------------
              | volume-1 |   | volume-2 |  | volume-3 |
              ------------   ------------  ------------

Change-Id: Idcb2698be3eeb95beaac47125565c93370afbd99
fixes: bz#1659708
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
</entry>
<entry>
<title>rpc: Remove duplicate code</title>
<updated>2019-03-28T05:35:25+00:00</updated>
<author>
<name>Pranith Kumar K</name>
<email>pkarampu@redhat.com</email>
</author>
<published>2019-03-28T05:33:34+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=108e4f3481225f98c12f7c283e1ef0388863cf8b'/>
<id>108e4f3481225f98c12f7c283e1ef0388863cf8b</id>
<content type='text'>
rpc_clnt_disable() and rpc_clnt_disconnect() have the same code.
Removed rpc_clnt_disconnect() and changed the callers of
rpc_clnt_disconnect() to call rpc_clnt_disable() instead.

updates: bz#1193929
Change-Id: I965f57cc1d5af36d266810125558b6f5e5f279d4
Signed-off-by: Pranith Kumar K &lt;pkarampu@redhat.com&gt;
</content>
</entry>
<entry>
<title>build: link libgfrpc with MATH_LIB (libm, -lm)</title>
<updated>2019-03-26T11:31:45+00:00</updated>
<author>
<name>Kaleb S. KEITHLEY</name>
<email>kkeithle@redhat.com</email>
</author>
<published>2019-03-21T05:21:13+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=a87038229e26723406e48035519f0e6dfea4e45c'/>
<id>a87038229e26723406e48035519f0e6dfea4e45c</id>
<content type='text'>
tl;dr: libgfrpc.so calls log2(3) from libm; it should be explicitly
linked with -lm.

The autoconf/automake/libtool stack is more or less forgiving on
different distributions. On forgiving systems libtool will semi-
magically link in implicit dependencies. But on Ubuntu, which
seems to be tending toward being less forgiving, the link of libgfrpc
will fail with an unresolved reference to log2(3).
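
A hedged sketch of the kind of automake change involved (the actual
variable names in rpc/rpc-lib's Makefile.am may differ; MATH_LIB is
the configure substitution named in the subject line):

    # rpc/rpc-lib/src/Makefile.am (illustrative; appended to the
    # library's existing LIBADD line)
    libgfrpc_la_LIBADD += $(MATH_LIB)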

Change-Id: I9fae09ddb81e49004fbea4d7d83b95fb64a484b0
updates: bz#1193929
Signed-off-by: Kaleb S. KEITHLEY &lt;kkeithle@redhat.com&gt;
</content>
</entry>
<entry>
<title>rpc/transport: Missing a ref on dict while creating transport object</title>
<updated>2019-03-20T13:24:44+00:00</updated>
<author>
<name>Mohammed Rafi KC</name>
<email>rkavunga@redhat.com</email>
</author>
<published>2019-02-26T12:34:18+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=f2f07591b2de9ba45bbc3eb4f601d1e9a327190b'/>
<id>f2f07591b2de9ba45bbc3eb4f601d1e9a327190b</id>
<content type='text'>
While creating the rpc_transport object, we store a dictionary without
taking a ref on the dict, yet we do an unref during the cleanup of the
transport object.

So the rpc layer expects the caller to take a ref on the dictionary
before passing the dict to the rpc layer. This leads to a lot of
confusion across the code base and leads to ref leaks.

Semantically, this is not correct. It is the rpc layer's responsibility
to take a ref when storing the dict, and to release it during cleanup.

I'm listing the issues or leaks across the code base caused by this
confusion. These issues are currently present in the upstream master.

1) changelog_rpc_client_init

2) quota_enforcer_init

3) rpcsvc_create_listeners: when there are two transports, like tcp,rdma.

4) quotad_aggregator_init

5) glusterd: init

6) nfs3_init_state

7) server: init

8) client:init

This patch does the cleanup according to the correct semantics.
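
A hedged sketch of the corrected ownership rule (simplified; dict_ref()
and dict_unref() are the libglusterfs helpers, but the real transport
code paths differ):

    /* creation: the rpc layer takes its own reference */
    trans-&gt;options = dict_ref(options);

    /* cleanup: the rpc layer drops the reference it took */
    if (trans-&gt;options) {
        dict_unref(trans-&gt;options);
        trans-&gt;options = NULL;
    }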

Change-Id: I46373af9630373eb375ee6de0e6f2bbe2a677425
updates: bz#1659708
Signed-off-by: Mohammed Rafi KC &lt;rkavunga@redhat.com&gt;
</content>
</entry>
<entry>
<title>core: implement a global thread pool</title>
<updated>2019-02-18T02:58:24+00:00</updated>
<author>
<name>Xavi Hernandez</name>
<email>xhernandez@redhat.com</email>
</author>
<published>2019-01-24T17:44:06+00:00</published>
<link rel='alternate' type='text/html' href='https://fedorapeople.org/cgit/anoopcs/public_git/glusterfs.git/commit/?id=dddcf52020004d98f688ebef968de51d76cbf9a6'/>
<id>dddcf52020004d98f688ebef968de51d76cbf9a6</id>
<content type='text'>
This patch implements a thread pool that is wait-free for adding jobs to
the queue and uses a very small locked region to get jobs. This makes it
possible to decrease contention drastically. It's based on the wfcqueue
structure provided by the urcu library.
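
A minimal sketch of that queue pattern, assuming urcu's
&lt;urcu/wfcqueue.h&gt; API (the job structure and function names are
illustrative, not the actual patch):

    #include &lt;stddef.h&gt;
    #include &lt;urcu/wfcqueue.h&gt;

    struct job {
        void (*fn)(void *);
        void *arg;
        struct cds_wfcq_node node;
    };

    /* initialized once with cds_wfcq_init(&amp;head, &amp;tail) */
    static struct cds_wfcq_head head;
    static struct cds_wfcq_tail tail;

    /* producer side: wait-free, no lock taken */
    static void
    enqueue_job(struct job *j)
    {
        cds_wfcq_node_init(&amp;j-&gt;node);
        cds_wfcq_enqueue(&amp;head, &amp;tail, &amp;j-&gt;node);
    }

    /* consumer side: a small internal lock; returns NULL if empty */
    static struct job *
    dequeue_job(void)
    {
        struct cds_wfcq_node *n;

        n = cds_wfcq_dequeue_blocking(&amp;head, &amp;tail);
        return n ? caa_container_of(n, struct job, node) : NULL;
    }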

It automatically enables more threads when load demands it, and stops
them when not needed. There's a maximum number of threads that can be
used. This value can be configured.

Depending on the workload, the maximum number of threads plays an
important role. So it needs to be configured for optimal performance.
Currently the thread pool doesn't self-adjust the maximum for the
workload, so this configuration needs to be changed manually.

For this reason, the global thread pool has been made optional, so that
volumes can still use the thread pool provided by io-threads.

To enable it for bricks, the following option needs to be set:

   config.global-threading = on

This option has no effect if bricks are already running. A restart is
required to activate it. It's recommended to also enable the following
option when running bricks with the global thread pool:

   performance.iot-pass-through = on

To enable it for a FUSE mount point, the option '--global-threading'
must be added to the mount command. To change it, an unmount and
remount are needed. It's recommended to disable the following option
when using global threading on a mount point:

   performance.client-io-threads = off

To enable it for services managed by glusterd, glusterd needs to be
started with option '--global-threading'. In this case all daemons, like
self-heal, will be using the global thread pool.

Currently it can only be enabled for bricks, FUSE mounts and glusterd
services.

The maximum number of threads for clients and bricks can be configured
using the following options:

   config.client-threads
   config.brick-threads

These options can be applied online, and their effect is immediate most
of the time. If one of them is set to 0, the maximum number of threads
will be calculated as #cores * 2.
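
For illustration, assuming these behave like regular volume options set
through the gluster CLI (the volume name 'myvol' and the value 16 are
hypothetical):

    gluster volume set myvol config.global-threading on
    gluster volume set myvol config.brick-threads 16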

Some distributions ship a very old userspace-rcu library (version 0.7).
For this reason, some header files from version 0.10 have been copied
into contrib/userspace-rcu and are used if the detected version is 0.7
or older.

An additional change has been made to io-threads to prevent threads
from being started when iot-pass-through is set.

Change-Id: I09d19e246b9e6d53c6247b29dfca6af6ee00a24b
updates: #532
Signed-off-by: Xavi Hernandez &lt;xhernandez@redhat.com&gt;
</content>
</entry>
</feed>
