| Commit message | Author | Age | Files | Lines |
code did not compile after merge from v4
Conflicts:
runtime/Makefile.am
runtime/atomic.h
runtime/queue.c
runtime/queue.h
runtime/wti.c
runtime/wti.h
runtime/wtp.c
runtime/wtp.h
replaced atomic operation emulation with new code. The previous emulation
seemed to have some issues and also limited concurrency severely, so the
whole thing has been rewritten.
simplified, and thus sped up, the queue engine; also fixed some
potential race conditions (in very unusual shutdown scenarios)
along the way. The threading model has changed significantly, so there
may be some regressions.
NOTE: the code passed basic tests, but more work and testing remain
to be done. This commit should be treated with care.
This did NOT leak based on message volume. Also did some cleanup during
the commit.
... as well as some cleanup
... by exploiting the fact that the state variable needs to be modified
only sequentially during shutdown.
... could even remove one mutex by using a better algorithm. I think I also
spotted a situation in which a hang could have happened. As I can't fix it
in v4 and earlier without moving to the new engine, I make no effort to test
this out. Hangs occur only during shutdown (if at all). The code changes
should also result in a mild performance improvement. There is some bug
potential, but overall it should have been greatly reduced.
reducing the number of thread cancellation state changes
based on now working with detached threads. This is probably the biggest
patch in this series and carries large bug potential.
This was a complex manual merge, especially in action.c. So if
problems occur, this would be a good point to start
troubleshooting. I ran a couple of tests before committing and
they all went well.
Conflicts:
action.c
action.h
runtime/queue.c
runtime/queue.h
runtime/wti.c
runtime/wti.h
... as it was not even optimal on uniprocessors any longer ;) I keep
the config directive in; maybe we can utilize it again at some later
point in time (questionable).
slightly improved the situation; would like to save it before carrying on
So far, the last processed message was only freed when the next
one was processed. This has now been changed. More precisely, a
better algorithm has been selected for the queue worker process, one which
also involves less overhead than the previous one. The fix for
"free the last processed message" was then more or less a side-effect
(easy to do) of the new algorithm.
configuration directives
... but this code has serious problems when terminating the queue, and
it is far from optimal. I will (hopefully) commit a series of patches
as I am on the path to the final implementation.
There exists a race condition that can lead to a segfault. Thanks
go to vbernetr, who performed the analysis and provided the patch, which
I only tweaked a little bit.
...to enable users to turn off pthread_yield() calls, which are
counter-productive on multiprocessor machines (but have been
shown to be useful on uniprocessors)