| Commit message | Author | Age | Files | Lines |
... could even remove one mutex by using a better algorithm. I think I also
spotted a situation in which a hang could have happened. As I cannot fix it
in v4 and earlier without moving to the new engine, I made no effort to test
this. Hangs occur only during shutdown (if at all). The code changes
should also result in a mild performance improvement. There is some potential
for new bugs, but overall the bug potential should have been greatly reduced.
reducing the number of thread cancellation state changes
Conflicts:
runtime/debug.h
runtime/stream.c
(depending on configuration). This was a small change, but with big
results. There is more potential to explore, but the effects were so
dramatic that I think it makes sense to include this fix.
... this one could cause trouble, but I really don't think it caused
any actual harm.
This may have caused a segfault under strange circumstances (but if
we just run long enough with a high enough message volume, even the
strangest circumstances will occur...)
mostly to get thread debugger errors clean (plus, of course, it
makes things more deterministic)
(made rsyslogd unusable in production). Occurred if at least one queue
was in direct mode (the default for action queues).
This was a complex manual merge, especially in action.c. So if
any problems occur, this would be a good point to start
troubleshooting. I ran a couple of tests before committing and
they all went well.
Conflicts:
action.c
action.h
runtime/queue.c
runtime/queue.h
runtime/wti.c
runtime/wti.h
... as it was no longer optimal even on uniprocessors ;) I keep
the config directive in; maybe we can utilize it again at some later
point in time (questionable).
Note that this was NOT a trivial merge, and there may be
some issues. This needs to be seen when we continue developing.
Conflicts:
runtime/msg.h
runtime/obj.h
runtime/queue.c
runtime/srUtils.h
runtime/stream.c
runtime/stream.h
runtime/wti.c
tests/Makefile.am
tools/omfile.c
tools/syslogd.c
also adds speed, because you no longer need to run the whole file
system in sync mode. New testbench and new config directives:
- $MainMsgQueueSyncQueueFiles
- $ActionQueueSyncQueueFiles
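A minimal rsyslog.conf sketch of how the two new directives might be used. The queue file names and the pairing with the other queue directives are illustrative assumptions; syncing queue files only matters for queues that actually have disk files:

```
# sketch: sync queue files to disk after writes (assumed default: off)
$MainMsgQueueFileName mainq          # disk file for the main message queue
$MainMsgQueueSyncQueueFiles on

$ActionQueueType LinkedList
$ActionQueueFileName actq            # disk file for this action queue
$ActionQueueSyncQueueFiles on
```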
now some basic operations are carried out via the stream class.
... and also made it callable via an rsyslog interface rather than
relying on the OS loader (important if we go for using it inside
loadable modules, which we will probably do soon)
The enhanced testbench now runs without failures, again
also changed DA queue mode in that the regular workers now run
concurrently.
... in preparation for some larger changes - I need to apply some
serious design changes, as the current system does not play well
at all with ultra-reliable queues. Will do that in a totally new version.
slightly improved situation, would like to save it before carrying on
... and also improved the test suite. There is a design issue in the
v3 queue engine that manifested in some serious problems with the new
processing mode. In v3, shutdown may take forever if a queue
runs in DA mode, is configured to preserve data, AND the action fails and
retries immediately. There is no cure available for v3; it would
require doing much of the work we have done on the new engine. The window
of exposure, as one might guess from the description, is very small. That
is probably the reason why we have not seen it in practice.
So far, the last processed message was only freed when the next
one was processed. This has now been changed. More precisely, a
better algorithm has been selected for the queue worker process, one which
also involves less overhead than the previous one. The fix for
"free the last processed message" was then more or less a side-effect
(easy to do) of the new algorithm.
... needed to split the old single counter into two. I wouldn't bet that
I made no mistakes while doing so, but at least some ad-hoc tests plus
the testbench no longer indicate errors.
... which is no longer needed thanks to the new queue design.
... on the way to the ultra-reliable queue modes (redesign doc). This
version does not really work, but is a good commit point. Next comes
queue size calculation. DA mode does not yet work.
So far, the consumer was responsible for destroying objects. However, this
does not work well with ultra-reliable queues. This is the first move to
support them.
... now that we know what we need from a theoretical POV.
... plus simplifying free() calls after agreement on the mailing list
that we no longer need to check whether the pointer is non-NULL
configuration directives
Conflicts:
ChangeLog
runtime/queue.c
... badly affecting performance for delayable inputs (but not causing
any other issues)
- if queues could not be drained before timeout - thanks to
David Lang for pointing this out
- added link to german-language forum to doc set
... but the action consumer does not do anything really intelligent
with them. But the DA consumer is already done, as is the
main message queue consumer.
... but this code has serious problems when terminating the queue, and
it is far from optimal. I will commit a series of patches (hopefully)
as I work toward the final implementation.
Unfortunately, I do not have the full list of contributors
available. The patch set was compiled by Ben Taylor, and I made
some further changes to adapt it to the new rsyslog branch. Others
provided much of the base work, but I cannot find the names of the
original authors. If you happen to be one of them, please let me
know so that I can give proper credit.
...to enable users to turn off pthread_yield calls which are
counter-productive on multiprocessor machines (but have been
shown to be useful on uniprocessors)
This occurred if queues could not be drained before timeout.
Thanks to David Lang for pointing this out.