Commit log

primarily a problem of imdiag. Also added a fix for a potential
issue during cancel processing. That fix is not considered vital
and may later be removed again.

We now enqueue those objects that are left unprocessed. This enables
us to delete the full batch, which is exactly what we need to do.
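
A minimal sketch of the idea, with hypothetical names and types (not
rsyslog's actual API): elements that are not yet fully processed are
put back into the queue, after which the whole batch can be deleted
in one step.

    #include <stddef.h>

    /* stand-in types -- illustrative only */
    typedef enum { BATCH_STATE_RDY, BATCH_STATE_DONE } batch_state_t;
    typedef struct { void *pMsg; batch_state_t state; } batch_elem_t;
    typedef struct { batch_elem_t *elem; int nElem; } batch_t;
    typedef struct queue queue_t;

    void queueEnqueue(queue_t *pQ, void *pMsg);              /* assumed primitive */
    void queueDeleteDequeuedBatch(queue_t *pQ, batch_t *pB); /* assumed primitive */

    /* re-enqueue everything not yet processed, then drop the whole batch */
    static void deleteBatch(queue_t *pQ, batch_t *pBatch)
    {
        int i;
        for(i = 0 ; i < pBatch->nElem ; ++i)
            if(pBatch->elem[i].state != BATCH_STATE_DONE)
                queueEnqueue(pQ, pBatch->elem[i].pMsg);
        queueDeleteDequeuedBatch(pQ, pBatch);
    }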

... but this brings a lot of problems with it. The issue is that
we still have a sequential store, and we do not know how we could
delete one entry right in the middle of processing. I keep this
branch in case we intend to move on with it - but for now I will
look into a different solution...

support for enhancing the probability of memory addressing failures by
using a non-NULL default value for malloc()ed memory (optional, only if
requested by a configure option). This helps to track down some
otherwise undetected issues within the testbench and is expected
to be very useful in the future.
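
A minimal sketch of the technique, under assumed names (the actual
configure option and wrapper in rsyslog may differ):

    #include <stdlib.h>
    #include <string.h>

    #ifdef MALLOC_POISON /* hypothetical configure-time switch */
    /* fill fresh allocations with a non-zero pattern so that code which
     * wrongly relies on malloc()ed memory being zeroed (e.g. reads an
     * uninitialized pointer as NULL) fails fast and reproducibly */
    static void *dbg_malloc(size_t len)
    {
        void *p = malloc(len);
        if(p != NULL)
            memset(p, 0xdd, len); /* non-NULL poison pattern */
        return p;
    }
    #define MALLOC(len) dbg_malloc(len)
    #else
    #define MALLOC(len) malloc(len)
    #endif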

Conflicts:
	ChangeLog
	runtime/queue.c

however, this had no negative effect, as the message processing state
was not evaluated when a batch was deleted, and that was the only case
where the state could be wrong.

simplified and thus sped up the queue engine, and also fixed some
potential race conditions (in very unusual shutdown conditions)
along the way. The threading model has changed significantly, so there
may be some regressions.
NOTE: the code passed basic tests, but there is still more work
and testing to be done. This commit should be treated with care.

... non-working version!

Failed for both pure disk and DA queues. Now, we emit an error
message and disable the disk queueing facility.

- bugfix: solved a potential (temporary) stall of messages when the queue
  was almost empty and only a few new messages were added (this caused the
  testbench to sometimes hang!)
- fixed a race condition in the testbench
- added more elaborate diagnostics to parts of the testbench
- solved a potential race inside the queue engine

made shutdown more reliable by making sure that the main queue DA worker
is only cancelled if this is actually unavoidable. Also moved down the
deletion of rsyslogd's pid file to immediately before termination, so
that absence of the file is a proper indication that rsyslogd has
finished (in the past, e.g. the testbench accidentally ran two instances
as the pid file was deleted too early). Also some improvements to the
testbench, namely to handle aborts more intelligently (but still not
perfect).

the new handling will hopefully spare a few cycles, as function calls
(and most importantly parameter generation!) are now only done when
debug messages are actually active.
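
This is the usual flag-guarded macro technique; a minimal sketch
(rsyslog's actual macro may differ in detail):

    extern int Debug;                    /* set when debug output is requested */
    void dbgprintf(const char *fmt, ...);

    /* neither the call nor its arguments are evaluated unless Debug is set */
    #define DBGPRINTF(...) \
        do { if(Debug) dbgprintf(__VA_ARGS__); } while(0)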

code review brought up a few places where we may have run into a race.
They have most probably been introduced during the recent set of changes.
I did not look at older versions because, due to the changed architecture,
this patch cannot simply be backported.

This did NOT leak based on message volume. Also did some cleanup during
the commit.

... if not running in direct mode. Previous versions could run without
any active workers. This simplifies the code at a very small expense.
See the v5 compatibility note document for a more in-depth discussion.

... as well as some cleanup

... could even remove one mutex by using a better algorithm. I think I also
spotted a situation in which a hang could have happened. As I can't fix it
in v4 and earlier without moving to the new engine, I make no effort to test
this there. Hangs occur during shutdown only (if at all). The code changes
should also result in some mild performance improvement. There is some bug
potential, but overall it should have been greatly reduced.

reducing the number of thread cancellation state changes
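
A minimal sketch of the general idea, with illustrative names (not the
actual rsyslog code): rather than toggling the cancellation state around
every individual operation, disable cancellation once around a whole
batch of work.

    #include <pthread.h>

    /* one disable/restore pair for the whole batch ... */
    static void process_batch(void (*work[])(void), int n)
    {
        int oldstate, i;

        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);
        for(i = 0 ; i < n ; ++i)
            work[i](); /* ... instead of one pair per item */
        pthread_setcancelstate(oldstate, NULL);
    }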

Conflicts:
	runtime/debug.h
	runtime/stream.c

(depending on configuration). This was a small change, but with big
results. There is more potential to explore, but the effects were so
dramatic that I think it makes sense to include this fix.

... this one could cause trouble, but I really don't think it caused
any actual harm.

This may have caused a segfault under strange circumstances (but if
we just run long enough with a high enough message volume, even the
strangest circumstances will occur...)

mostly to get the thread debugger's error report clean (plus, of course,
it makes things more deterministic)

(made rsyslogd unusable in production). Occurred if at least one queue
was in direct mode (the default for action queues).

This was a complex manual merge, especially in action.c. So if
problems occur, this would be a good point to start
troubleshooting. I ran a couple of tests before committing and
they all went well.
Conflicts:
	action.c
	action.h
	runtime/queue.c
	runtime/queue.h
	runtime/wti.c
	runtime/wti.h

... as it was not even optimal on uniprocessors any longer ;) I keep
the config directive in; maybe we can utilize it again at some later
point in time (questionable).

Note that this was NOT a trivial merge, and there may be
some issues. This needs to be watched as we continue development.
Conflicts:
	runtime/msg.h
	runtime/obj.h
	runtime/queue.c
	runtime/srUtils.h
	runtime/stream.c
	runtime/stream.h
	runtime/wti.c
	tests/Makefile.am
	tools/omfile.c
	tools/syslogd.c

also adds speed, because you no longer need to run the whole file
system in sync mode. New testbench and new config directives (a usage
sketch follows below):
- $MainMsgQueueSyncQueueFiles
- $ActionQueueSyncQueueFiles
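
A minimal configuration sketch, assuming the usual on/off switch form
for these directives (with off as the default):

    # sync queue files to disk after each write, instead of mounting
    # the whole file system in sync mode
    $MainMsgQueueSyncQueueFiles on
    $ActionQueueSyncQueueFiles on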

now some basic operations are carried out via the stream class.

... and also made it callable via an rsyslog interface rather than
relying on the OS loader (important if we go for using it inside
loadable modules, which we possibly soon will)

The enhanced testbench now runs without failures again.

also changed DA queue mode in that the regular workers now run
concurrently.

... in preparation for some larger changes - I need to apply some
serious design changes, as the current system does not play well
at all with ultra-reliable queues. Will do that in a totally new version.

slightly improved the situation; would like to save it before carrying on

... and also improved the test suite. There is a design issue in the
v3 queue engine that manifested itself in some serious problems with the
new processing mode. However, in v3 shutdown may take forever if a queue
runs in DA mode, is configured to preserve data, AND the action fails and
retries immediately. There is no cure available for v3; it would
require doing much of the work we have done on the new engine. The window
of exposure, as one might guess from the description, is very small. That
is probably the reason why we have not seen it in practice.

so far, the last processed message was only freed when the next
one was processed. This has been changed now. More precisely, a
better algorithm has been selected for the queue worker process, which
also involves less overhead than the previous one. The fix for
"free last processed message" was then more or less a side-effect
(easy to do) of the new algorithm.
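
A minimal sketch of the difference, with illustrative names (not the
actual worker code):

    /* assumed queue primitives -- illustrative only */
    void *queueDequeue(void);
    void  processMsg(void *pMsg);
    void  msgDestruct(void *pMsg);

    /* old pattern: pMsg stayed referenced until the next iteration
     * dequeued a successor; new pattern: free right after processing */
    static void workerLoop(void)
    {
        void *pMsg;
        while((pMsg = queueDequeue()) != NULL) {
            processMsg(pMsg);
            msgDestruct(pMsg); /* freed immediately, not on next dequeue */
        }
    }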