<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html><head>
<title>Understanding rsyslog queues</title></head>
<body>
<a href="rsyslog_conf_global.html">back</a>

<h1>Understanding rsyslog Queues</h1>
<p>Rsyslog uses queues whenever two activities need to be loosely coupled. With a 
queue, one part of the system "produces" something while another part "consumes" 
this something. The "something" is most often syslog messages, but queues may 
also be used for other purposes.</p>
<p>This document provides insight into the technical details, operation modes
and implications of queues. In addition, an
<a href="queues_analogy.html">rsyslog queue concepts overview</a> document
exists which explains queues with the help of some analogies. That document may
be a better place to start reading about queues. Once you have understood it,
the material here should be much easier to grasp and look much more natural.</p>
<p>The most prominent example is the main message queue. Whenever rsyslog 
receives a message (e.g. locally, via UDP, TCP or in whatever other way), it 
places this message into the main message queue. Later, the message is dequeued by the 
rule processor, which then evaluates which actions are to be carried out. In 
front of each action, there is also a queue, which potentially de-couples the 
filter processing from the actual action (e.g. writing to file, database or 
forwarding to another host).</p>
<h1>Where are Queues Used?</h1>
<p>Currently, queues are used for the main message queue and for the 
actions.</p>
<p>There is a single main message queue inside rsyslog. Each input module 
delivers messages to it. The main message queue worker filters messages based on 
rules specified in rsyslog.conf and dispatches them to the individual action 
queues. Once a message is in an action queue, it is deleted from the main 
message queue.</p>
<p>There are multiple action queues, one for each configured action. By default, 
these queues operate in direct (non-queueing) mode. Action queues are fully 
configurable and thus can be changed to whatever is best for the given use case.</p>
<p>Future versions of rsyslog will most probably utilize queues at other places, 
too.</p>
<p>Wherever "<i>&lt;object&gt;</i>" is used in the config file 
statements, substitute "<i>&lt;object&gt;</i>" with either "MainMsg" or "Action". The 
former sets main message queue
parameters, the latter sets parameters for the next action that will be 
created. Action queue parameters cannot be modified once the action has been 
specified. For example, to tell the main message queue to save its content on 
shutdown, use "<i>$MainMsgQueueSaveOnShutdown on</i>".</p>
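<p>A minimal sketch of this substitution (the queue type and forwarding target are just examples):</p>
<pre># "MainMsg" substituted for &lt;object&gt;: applies to the main message queue
$MainMsgQueueSaveOnShutdown on

# "Action" substituted for &lt;object&gt;: applies to the queue of the next action created below
$ActionQueueType LinkedList
*.* @@192.0.2.1:514
</pre>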
<p>If the same parameter is specified multiple times before a queue is created, 
the last one specified takes precedence. The main message queue is created after 
parsing the config file and all of its potential includes. An action queue is 
created each time an action selector is specified. Action queue parameters are 
reset to default after an action queue has been created (to provide a clean 
environment for the next action).</p>
<p>Not all queues necessarily support the full set of queue configuration 
parameters, because not all are applicable. For example, in the current output 
module design, actions do not support multi-threading. Consequently, the number 
of worker threads is fixed to one for action queues and cannot be changed.</p>
<h1>Queue Modes</h1>
<p>Rsyslog supports different queue modes, some with submodes. Each of them has 
specific advantages and disadvantages. Selecting the right queue mode is quite 
important when tuning rsyslogd. The queue mode (aka "type") is set via the "<i>$&lt;object&gt;QueueType</i>" 
config directive.</p>
<h2>Direct Queues</h2>
<p>Direct queues are <b>non</b>-queuing queues. A queue in direct mode does 
neither queue nor buffer any of the queue elements but rather passes the element 
directly (and immediately) from the producer to the consumer. This sounds strange, 
but there is a good reason for this queue type.</p>
<p>Direct mode queues allow queues to be used generically, even in places where 
queuing is not always desired. A good example is the queue in front of output 
actions. While it makes perfect sense to buffer forwarding actions or database 
writes, it makes only limited sense to build up a queue in front of simple local 
file writes. Yet, rsyslog still has a queue in front of every action. So for 
file writes, the queue mode can simply be set to "direct", in which case no 
queuing happens.</p>
<p>Please note that a direct queue also is the only queue type that passes back 
the execution return code (success/failure) from the consumer to the producer. 
This, for example, is needed for the backup action logic. Consequently, backup 
actions require the to-be-checked action to use a "direct" mode queue.</p>
<p>To create a direct queue, use the "<i>$&lt;object&gt;QueueType Direct</i>" config 
directive.</p>
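<p>A minimal sketch of a direct action queue (the selector and file path are just examples):</p>
<pre># local file writes usually do not benefit from buffering,
# so the action queue is explicitly set to direct mode
$ActionQueueType Direct
*.info /var/log/messages
</pre>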
<h2>Disk Queues</h2>
<p>Disk queues use disk drives for buffering. The important fact is that they 
always use the disk and do not buffer anything in memory. Thus, the queue is 
ultra-reliable, but by far the slowest mode. For regular use cases, this queue 
mode is not recommended. It is useful if log data is so important that it must 
not be lost, even in extreme cases.</p>
<p>When a disk queue is written, it is done in chunks. Each chunk receives its 
own file. Files are named with a prefix (set via the "<i>$&lt;object&gt;QueueFilename</i>" 
config directive) followed by a 7-digit number (starting at one and 
incremented for each file). Chunks are 10mb by default; a different size can be 
set via the "<i>$&lt;object&gt;QueueMaxFileSize</i>" config directive. Note that 
the size limit is not a sharp one: rsyslog always writes one complete queue 
entry, even if it violates the size limit. So chunks are actually a little bit 
(usually less than 1k) larger than the configured size. Each chunk also has a 
slightly different size for the same reason. If you observe different chunk sizes, you 
can relax: this is not a problem.</p>
<p>Writing in chunks is used so that processed data can quickly be deleted and 
the space freed for other uses - while at the same time imposing no artificial upper 
limit on the disk space used. If a disk quota is set (instructions further below), 
be sure that the quota/chunk size allows at least two chunks to be written. 
Rsyslog currently does not check that and will fail miserably if a single chunk 
is over the quota.</p>
<p>Creating new chunks costs performance but provides quicker ability to free 
disk space. The 10mb default is considered a good compromise between these two. 
However, it may make sense to adapt these settings to local policies. For 
example, if a disk queue is written on a dedicated 200gb disk, it may make sense 
to use a 2gb (or even larger) chunk size.</p>
<p>Please note, however, that the disk queue by default does not update its 
housekeeping structures every time it writes to disk. This is for performance 
reasons. In the event of failure, some data may still be lost (unless the file 
structures are manually repaired). However, disk queues can be set to write 
bookkeeping information at checkpoints (every n records), so that they can be 
made ultra-reliable, too. If the checkpoint interval is set to one, no data can 
be lost, but the queue is exceptionally slow.</p>
<p>Each queue can be placed on a different disk for best performance and/or 
isolation. This is currently selected by specifying different <i>$WorkDirectory</i> 
config directives before the queue creation statement.</p>
<p>To create a disk queue, use the "<i>$&lt;object&gt;QueueType Disk</i>" config 
directive. Checkpoint intervals can be specified via "<i>$&lt;object&gt;QueueCheckpointInterval</i>", 
with 0 meaning no checkpoints. Note that disk-based queues can be made very reliable
by issuing a (f)sync after each write operation. Starting with version 4.3.2, this can
be requested via "<i>$&lt;object&gt;QueueSyncQueueFiles on/off</i>", with the
default being off. Activating this option has a performance penalty, so it should
not be turned on without reason.</p>
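<p>A minimal sketch of a pure disk queue for an action (the directory, file name prefix and sizes are just examples):</p>
<pre># queue chunk files are created in the work directory
$WorkDirectory /var/spool/rsyslog
$ActionQueueType Disk
# prefix for the queue chunk files
$ActionQueueFileName dbq
# chunk size (the default is 10m)
$ActionQueueMaxFileSize 100m
# write bookkeeping information every 100 records
$ActionQueueCheckpointInterval 100
*.* @@192.0.2.1:514
</pre>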
<h2>In-Memory Queues</h2>
<p>In-memory queue mode is what most people have in mind when they think 
about computing queues. Here, the enqueued data elements are held in memory. 
Consequently, in-memory queues are very fast. But of course, they do not survive 
any program or operating system abort (which is usually tolerable and unlikely). 
Be sure to use a UPS if you use in-memory mode and your log data is important 
to you. Note that even in-memory queues may hold data for an infinite amount of 
time when e.g. an output destination system is down and there is no reason to move 
the data out of memory (lying around in memory for an extended period of time is 
NOT a reason). Pure in-memory queues can't even store queue elements anywhere 
else than in core memory. </p>
<p>There exist two different in-memory queue modes: LinkedList and FixedArray. 
Both are quite similar from the user's point of view, but utilize different 
algorithms. </p>
<p>A FixedArray queue uses a fixed, pre-allocated array that holds pointers to 
queue elements. The majority of space is taken up by the actual user data 
elements, to which the pointers in the array point. The pointer array itself is 
comparatively small. However, it has a certain memory footprint even if the 
queue is empty. As there is no need to dynamically allocate any housekeeping 
structures, FixedArray offers the best run time performance (uses the least CPU 
cycles). FixedArray is best if there is a relatively low number of queue elements 
expected and performance is desired. It is the default mode for the main message 
queue (with a limit of 10,000 elements).</p>
<p>A LinkedList queue is quite the opposite. All housekeeping structures are 
dynamically allocated (in a linked list, as its name implies). This requires 
somewhat more runtime processing overhead, but ensures that memory is only 
allocated in cases where it is needed. LinkedList queues are especially 
well-suited for queues where only occasionally a high number of elements 
needs to be queued. A use case may be an occasional message burst. Memory 
permitting, such a queue could be limited to e.g. 200,000 elements, which would take up 
memory only when actually in use. A FixedArray queue may have a too large static memory 
footprint in such cases.</p>
<p><b>In general, it is advised to use LinkedList mode if in doubt</b>. The 
processing overhead compared to FixedArray is low and may be
outweighed by the reduction in memory use. Paging in most-often-unused 
pointer array pages can be much slower than dynamically allocating them.</p>
<p>To create an in-memory queue, use the "<i>$&lt;object&gt;QueueType LinkedList</i>" 
or&nbsp; "<i>$&lt;object&gt;QueueType FixedArray</i>" config directive.</p>
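<p>A minimal sketch of an in-memory main message queue (the size is just an example):</p>
<pre># use a dynamically allocated in-memory queue ...
$MainMsgQueueType LinkedList
# ... limited to 200,000 elements
$MainMsgQueueSize 200000
</pre>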
<h3>Disk-Assisted Memory Queues</h3>
<p>If a disk queue name is defined for in-memory queues (via <i>
$&lt;object&gt;QueueFileName</i>), they automatically 
become "disk-assisted" (DA). In that mode, data is written to disk (and read 
back) on an as-needed basis.</p>
<p>Actually, the regular memory queue (called the 
"primary queue") and a disk queue (called the "DA queue") work in tandem in this 
mode. Most importantly, the disk queue is activated if the primary queue is full 
or needs to be persisted on shutdown. Disk-assisted queues combine the 
advantages of pure memory queues with those of&nbsp; pure disk queues. Under normal 
operations, they are very fast and messages will never touch the disk. But if 
there is need to, an unlimited amount of messages can be buffered (actually 
limited by free disk space only) and data can be persisted between rsyslogd runs.</p>
<p>With a DA-queue, both disk-specific and in-memory specific configuration 
parameters can be set. From the user's point of view, think of a DA queue like a 
"super-queue" which does all within a single queue [from the code perspective, 
there is some specific handling for this case, so it is actually much like a 
single object].</p>
<p>DA queues are typically used to de-couple potentially long-running and 
unreliable actions (to make them reliable). For example, it is recommended to 
use a disk-assisted linked list in-memory queue in front of each database and 
"send via tcp" action. Doing so makes these actions reliable and de-couples 
their potentially low execution speed from the rest of your rules (e.g. the local 
file writes). There is a howto on <a href="rsyslog_high_database_rate.html">
massive database inserts</a> which nicely describes this use case. It may even 
be a good read if you do not intend to use databases.</p>
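<p>A minimal sketch of such a disk-assisted action queue in front of a TCP forwarding action (host name, file name and limits are just examples):</p>
<pre># spool directory for the queue files
$WorkDirectory /var/spool/rsyslog
# in-memory linked list queue ...
$ActionQueueType LinkedList
# ... which becomes disk-assisted because a file name prefix is set
$ActionQueueFileName fwdq
# cap the disk space used by the queue
$ActionQueueMaxDiskSpace 1g
# preserve queued messages across restarts
$ActionQueueSaveOnShutdown on
# retry the action indefinitely if the remote host is down
$ActionResumeRetryCount -1
*.* @@central.example.com:514
</pre>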
<p>With DA queues, we do not simply write out everything to disk and then run as 
a disk queue once the in-memory queue is full. A much smarter algorithm is used, 
which involves a "high watermark" and a "low watermark". Both specify numbers of 
queued items. If the queue size reaches the high watermark, the queue 
begins to write data elements to disk. It does so until it is down to the low 
watermark. At this point, it stops writing until either the high watermark is 
reached again or the on-disk queue becomes empty, in which case the queue 
reverts to pure in-memory mode. While holding at the low watermark, new 
elements are actually enqueued in memory. They are eventually written to disk, 
but only if the high watermark is ever reached again. If it isn't, these items 
never touch the disk. So even when a queue runs disk-assisted, there is 
in-memory data present (this is a big difference to pure disk queues!).</p>
<p>This algorithm prevents unnecessary disk writes, but also leaves some 
additional buffer space for message bursts. Remember that creating disk files 
and writing to them is a lengthy operation. It is too lengthy to e.g. block 
receiving UDP messages. Doing so would result in message loss. Thus, the queue 
initiates DA mode, but still is able to receive messages and enqueue them - as 
long as the maximum queue size is not reached. The number of elements between 
the high water mark and the maximum queue size serves as this "emergency 
buffer". Size it according to your needs, if traffic is very bursty you will 
probably need a large buffer here. Keep in mind, though, that under normal 
operations these queue elements will probably never be used. Setting the high 
water mark too low will cause disk-assistance to be turned on more often than 
actually needed.</p>
<p>The watermarks can be set via the "<i>$&lt;object&gt;QueueHighWatermark</i>" and 
"<i>$&lt;object&gt;QueueLowWatermark</i>" configuration file directives. Note that 
these are actual numbers, not percentages. Be sure they make sense (also in 
respect to "<i>$&lt;object&gt;QueueSize</i>"), as rsyslogd does currently not perform 
any checks on the numbers provided. It is easy to screw up the system here (yes, 
a feature enhancement request is filed ;)).</p>
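<p>A minimal sketch of consistent watermark settings for a disk-assisted main message queue (all numbers and the file name are just examples):</p>
<pre>$MainMsgQueueType LinkedList
$MainMsgQueueFileName mainq
# overall queue size limit
$MainMsgQueueSize 50000
# start writing to disk at 40,000 queued elements ...
$MainMsgQueueHighWatermark 40000
# ... and stop again once the queue is down to 10,000
$MainMsgQueueLowWatermark 10000
</pre>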
<h1>Limiting the Queue Size</h1>
<p>All queues, including disk queues, have a limit on the number of elements 
they can enqueue. This is set via the "<i>$&lt;object&gt;QueueSize</i>" config 
parameter. Note that the size is specified in number of enqueued elements, not 
their actual memory size. Memory size limits cannot be set. A conservative 
assumption is that a single syslog message takes up 512 bytes on average 
(in-memory, NOT on the wire; this *is* a difference).</p>
<p>Disk-assisted queues are special in that they do <b>not</b> have any size 
limit. They enqueue an unlimited number of elements. To prevent running out of 
space, disk and disk-assisted queues can be size-limited via the "<i>$&lt;object&gt;QueueMaxDiskSpace</i>" 
configuration parameter. If it is not set, the only limit is the available free 
space (and reaching this limit is currently not handled very gracefully, so 
avoid running into it!). If a limit is set, the queue cannot grow larger than 
it. Note, however, that the limit is approximate. The engine always writes 
complete records. As such, it is possible that slightly more than the set limit 
is used (usually less than 1k, given the average message size). Keeping strictly 
to the limit would hurt performance, and thus the design decision was to 
favour performance. If you don't like that policy, simply specify a slightly 
lower limit (e.g. 999,999K instead of 1G).</p>
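<p>A minimal sketch of such size limits (the values are just examples):</p>
<pre># limit an in-memory queue to 100,000 elements
$MainMsgQueueSize 100000
# cap the disk space of a disk or disk-assisted action queue at roughly 1 GB
$ActionQueueMaxDiskSpace 1g
</pre>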
<p>In general, it is a good idea to limit the physical disk space even if you 
dedicate a whole disk to rsyslog. That way, you prevent it from running out of 
space (a future version will have auto-size-limit logic that then kicks in in 
such situations).</p>
<h1>Worker Thread Pools</h1>
<p>Each queue (except in "direct" mode) has an associated pool of worker 
threads. Worker threads carry out the action to be performed on the data 
elements enqueued. As an actual sample, the main message queue's worker task is 
to apply filter logic to each incoming message and enqueue them to the relevant 
output queues (actions).</p>
<p>Worker threads are started and stopped on an as-needed basis. On a system 
without activity, there may be no worker at all running. One is automatically 
started when a message comes in. Similarly, additional workers are started if 
the queue grows above a specific size. The "<i>$&lt;object&gt;QueueWorkerThreadMinimumMessages</i>" 
config parameter controls worker startup. It is set to the minimum number of 
elements that must be enqueued in order to justify a new worker startup. For 
example, let's assume it is set to 100. As long as no more than 100 messages are 
in the queue, a single worker will be used. When more than 100 messages arrive, 
a new worker thread is automatically started. Similarly, a third worker will be 
started when there are at least 300 messages, a fourth when reaching 400 and so 
on.</p>
<p>It does, however, not make sense to have too many worker threads running in 
parallel. Thus, the upper limit can be set via "<i>$&lt;object&gt;QueueWorkerThreads</i>". 
If it, for example, is set to four, no more than four workers will ever be 
started, no matter how many elements are enqueued.</p>
<p>Worker threads that have been started are kept running until an inactivity 
timeout happens. The timeout can be set via "<i>$&lt;object&gt;QueueWorkerTimeoutThreadShutdown</i>" 
and is specified in milliseconds. If you do not want to keep the workers 
running, simply set it to 0, which means immediate timeout and thus immediate 
shutdown. But consider that creating threads involves some overhead, and this is 
why we keep them running. If you would like to never shut down any worker
threads, specify -1 for this parameter.</p>
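<p>A minimal sketch of a worker pool configuration for the main message queue (the numbers are just examples):</p>
<pre># start an additional worker for (roughly) every 100 queued messages ...
$MainMsgQueueWorkerThreadMinimumMessages 100
# ... but never run more than four workers in parallel
$MainMsgQueueWorkerThreads 4
# shut idle workers down after one minute of inactivity
$MainMsgQueueWorkerTimeoutThreadShutdown 60000
</pre>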
<h2>Discarding Messages</h2>
<p>If the queue reaches the so-called "discard watermark" (a number of queued 
elements), less important messages can automatically be discarded. This is in an 
effort to save queue space for more important messages, which you would like to 
lose even less. Please note that whenever there are more than "discard watermark" 
messages, both newly incoming as well as already enqueued low-priority messages 
are discarded. The algorithm discards messages newly coming in and those at the 
front of the queue.</p>
<p>The discard watermark is a last resort setting. It should be set sufficiently 
high, but low enough to allow for large message bursts. Please note that it takes 
effect immediately and thus shows effect promptly - but that doesn't help if the 
burst mainly consists of high-priority messages...</p>
<p>The discard watermark is set via the "<i>$&lt;object&gt;QueueDiscardMark</i>" 
directive. The priority of messages to be discarded is set via "<i>$&lt;object&gt;QueueDiscardSeverity</i>".
This directive accepts both the usual textual severity as well as a
numerical one. To understand it, you must be aware of the numerical
severity values. They are defined in RFC 3164:</p>
<pre>        Numerical         Severity
          Code

           0       Emergency: system is unusable
           1       Alert: action must be taken immediately
           2       Critical: critical conditions
           3       Error: error conditions
           4       Warning: warning conditions
           5       Notice: normal but significant condition
           6       Informational: informational messages
           7       Debug: debug-level messages</pre>
<p>Anything of the specified severity and (numerically) above it is
discarded. To turn message discarding off, simply specify the discard
watermark to be higher than the queue size. An alternative is to
specify the numerical value 8 as DiscardSeverity. This is also the
default setting to prevent unintentional message loss. So if you would
like to use message discarding, you need to set "<i>$&lt;object&gt;QueueDiscardSeverity</i>" to an actual value.</p>
<p>An interesting application is with disk-assisted queues: if the discard 
watermark is set lower than the high watermark, message discarding will start 
before the queue becomes disk-assisted. This may be a good thing if you would 
like to switch to disk-assisted mode only in cases where it is absolutely 
unavoidable and you prefer to discard less important messages first.</p>
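<p>A minimal sketch of such a setup, where discarding of low-priority messages starts below the high watermark (all values are just examples):</p>
<pre>$MainMsgQueueSize 50000
$MainMsgQueueHighWatermark 40000
# once more than 30,000 elements are queued (still below the high watermark),
# discard messages of severity "informational" and numerically above (i.e. debug)
$MainMsgQueueDiscardMark 30000
$MainMsgQueueDiscardSeverity informational
</pre>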
<h1>Filled-Up Queues</h1>
<p>If the queue has either reached its configured maximum number of entries or 
disk space, it is finally full. If so, rsyslogd throttles the data element 
submitter. If that, for example, is a reliable input (TCP, local log socket), 
that will slow down the message originator which is a good
resolution for this scenario.</p>
<p>During throttling, a disk-assisted queue continues to write to disk, 
messages are also discarded based on severity, and regular dequeuing and 
processing continue. So chances are good the situation will be resolved by 
simply throttling. Note, though, that throttling is highly undesirable for 
unreliable sources, like UDP message reception. So it is not a good thing to run 
into throttling mode at all.</p>
<p>We cannot hold processing
infinitely, not even when throttling. For example, throttling the local 
log socket too long would cause the system as a whole to come to a standstill. To 
prevent this, rsyslogd times out after a configured period ("<i>$&lt;object&gt;QueueTimeoutEnqueue</i>", 
specified in milliseconds) if no space becomes available. As a last resort, it 
then discards the newly arrived message.</p>
<p>If you do not like throttling, set the timeout to 0 - the message will then 
immediately be discarded. If you use a high timeout, be sure you know what you 
are doing. If a high main message queue enqueue timeout is set, it can lead to 
something like a complete hang of the system. The same problem does not apply to 
action queues.</p>
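<p>A minimal sketch of the enqueue timeout for an action queue (the value is just an example):</p>
<pre># when the action queue is full, throttle the submitter for at most
# two seconds, then discard the newly arrived message
$ActionQueueTimeoutEnqueue 2000
</pre>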
<h2>Rate Limiting</h2>
<p>Rate limiting provides a way to prevent rsyslogd from processing things too 
fast. It can, for example, prevent overrunning a receiver system.</p>
<p>Currently, there are only limited rate-limiting features available. The "<i>$&lt;object&gt;QueueDequeueSlowdown</i>" 
directive allows you to specify how long (in microseconds) dequeueing should be 
delayed. While simple, it still is powerful. For example, using a 
DequeueSlowdown delay of 1,000 microseconds on a UDP send action ensures that no 
more than 1,000 messages can be sent within a second (actually less, as there is 
also some time needed for the processing itself).</p>
<h2>Processing Timeframes</h2>
<p>Queues
can be set to dequeue (process) messages only during certain
timeframes. This is useful if you, for example, would like to transfer
the bulk of messages only during off-peak hours, e.g. when you have
only limited bandwidth on the network path to the central server.</p>
<p>Currently,
only a single timeframe is supported and, even worse, it can only be
specified by the hour. It is not hard to extend rsyslog's capabilities
in this regard - it was just not requested so far. So if you need more
fine-grained control, let us know and we'll probably implement it.
There are two configuration directives, both of which should be used together or
results are unpredictable: "<i>$&lt;object&gt;QueueDequeueTimeBegin &lt;hour&gt;</i>" and "<i>$&lt;object&gt;QueueDequeueTimeEnd &lt;hour&gt;</i>". The hour parameter must be specified in 24-hour format (so 10pm is 22). A use case for this parameter can be found in the <a href="http://wiki.rsyslog.com/index.php/OffPeakHours">rsyslog wiki</a>.</p>
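<p>A minimal sketch of an off-peak transfer window combined with a dequeue slowdown (the hours, delay and target are just examples):</p>
<pre># dequeue only between 1am and 5am
$ActionQueueDequeueTimeBegin 1
$ActionQueueDequeueTimeEnd 5
# additionally delay each dequeue operation by 1,000 microseconds,
# limiting the action to roughly 1,000 messages per second
$ActionQueueDequeueSlowdown 1000
*.* @@central.example.com:514
</pre>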
<h2>Performance</h2>
<p>The locking involved with maintaining the queue has a potentially large
performance impact. How large this is, and if it exists at all, depends much on
the configuration and actual use case. However, the queue is able to work on
so-called &quot;batches&quot; when dequeueing data elements. With batches,
multiple data elements are dequeued at once (with a single locking call).
The queue dequeues all available elements up to a configured upper
limit ("<i>$&lt;object&gt;QueueDequeueBatchSize &lt;number&gt;</i>"). It is important
to note that the actual upper limit is dictated by availability. The queue engine
will never wait for a batch to fill. So even if a high upper limit is configured,
batches may consist of fewer elements, even just one, if there are no more elements
waiting in the queue.</p>
<p>Batching
can improve performance considerably. Note, however, that it affects the
order in which messages are passed to the queue worker threads, as each worker
now receives a batch of messages. Also, the larger the batch size and the higher
the maximum number of permitted worker threads, the more main memory is needed. 
For a busy server, large batch sizes (around 1,000 or even more elements) may be useful.
Please note that with batching, the main memory must hold BatchSize * NumOfWorkers
objects in memory (worst-case scenario), even if running in disk-only mode. So if you
use the default 5 workers at the main message queue and set the batch size to 1,000, you need
to be prepared that the main message queue holds up to 5,000 messages in main memory
<b>in addition</b> to the configured queue size limits!</p>
<p>The queue object's default maximum batch size
is eight, but there exist different defaults for the actual parts of
rsyslog processing that utilize queues. So you need to check these objects'
defaults.</p>
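<p>A minimal sketch of a batching configuration, together with the worst-case memory estimate described above (the numbers are just examples):</p>
<pre># dequeue up to 1,000 elements per batch
$MainMsgQueueDequeueBatchSize 1000
# with at most 5 workers, up to 5 * 1,000 = 5,000 messages may be held
# in main memory in addition to the configured queue size limits
$MainMsgQueueWorkerThreads 5
</pre>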
<h2>Terminating Queues</h2>
<p>Terminating a process sounds easy, but can be complex.
Terminating a running queue is in fact the most complex operation a queue 
object can perform. You don't see that from a user's point of view, but it's 
quite hard work for the developer to do everything in the right order.</p>
<p>The complexity arises when the queue still has data enqueued when it 
finishes. Rsyslog tries to preserve as much of it as possible. As a first 
measure, there is a regular queue timeout ("<i>$&lt;object&gt;QueueTimeoutShutdown</i>", 
specified in milliseconds): the queue workers are given that time period to 
finish processing the queue.</p>
<p>If after that period there is still data in the queue, workers are instructed 
to finish the current data element and then terminate. This essentially means 
any other data is lost. There is another timeout ("<i>$&lt;object&gt;QueueTimeoutActionCompletion</i>", 
also specified in milliseconds) that specifies how long the workers have to 
finish the current element. If that timeout expires, any remaining workers are 
cancelled and the queue is brought down.</p>
<p>If you do not want to lose data on shutdown, the "<i>$&lt;object&gt;QueueSaveOnShutdown</i>" 
parameter can be set to "on". This requires either a disk or disk-assisted 
queue. If set, rsyslogd ensures that any queue elements are saved to disk before 
it terminates. This includes data elements whose processing was begun by 
workers that needed to be cancelled due to too-long processing. For a large 
queue, this operation may be lengthy. No timeout applies to a required shutdown 
save.</p>
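<p>A minimal sketch of shutdown handling for a disk-assisted action queue (the timeouts and file name are just examples):</p>
<pre># give workers 10 seconds to drain the queue on shutdown ...
$ActionQueueTimeoutShutdown 10000
# ... and one more second to finish the element currently being processed
$ActionQueueTimeoutActionCompletion 1000
# persist whatever is left to disk (requires a disk or disk-assisted queue)
$ActionQueueFileName fwdq
$ActionQueueSaveOnShutdown on
</pre>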
<p>[<a href="manual.html">manual index</a>]
[<a href="rsyslog_conf.html">rsyslog.conf</a>]
[<a href="http://www.rsyslog.com/">rsyslog site</a>]</p>
<p><font size="2">This documentation is part of the
<a href="http://www.rsyslog.com/">rsyslog</a> project.<br>
Copyright &copy; 2008, 2009 by <a href="http://www.gerhards.net/rainer">Rainer Gerhards</a> and
<a href="http://www.adiscon.com/">Adiscon</a>. Released under the GNU GPL
version 3 or higher.</font></p>

</body></html>