"pid" and "sock" are sufficient I guess as randomizers to distinguish messages.
In theory, a pid could be recycled very quickly, which might mix up in-flight
messages. But once a few messages have passed, "cookie" would be incremented as
another indicator of a fresh message.
Why? Remove messages_dgm dependency on samba-util
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Stefan Metzmacher <metze@samba.org>
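
A minimal, hypothetical C sketch of such a pid/sock/cookie id, purely
illustrative (dgm_msg_id and dgm_next_id are not Samba names, and the static
counter is not thread-safe):

#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

struct dgm_msg_id {
        pid_t pid;              /* sender process                        */
        int sock;               /* sender's datagram socket              */
        uint64_t cookie;        /* bumped for every message sent         */
};

/* Build the next id for a message going out on "sock". Even if a pid gets
 * recycled quickly, the cookie keeps advancing, so a fresh sender becomes
 * distinguishable after a few messages. */
static struct dgm_msg_id dgm_next_id(int sock)
{
        static uint64_t cookie;         /* per-process counter, not thread-safe */
        struct dgm_msg_id id = {
                .pid = getpid(),
                .sock = sock,
                .cookie = ++cookie,
        };
        return id;
}
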
Pair-Programmed-With: Stefan Metzmacher <metze@samba.org>
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Signed-off-by: Michael Adam <obnox@samba.org>

Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Michael Adam <obnox@samba.org>

Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>

Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
With this patch it will be possible to use nested event contexts with
messaging_filtered_read_send/recv. Before this patchset, only the single event
context that a messaging_context is initialized with is able to receive
datagrams from the unix domain socket. So if you want to code a synchronous
RPC-like operation using a nested event context, you will not see the reply,
because the nested event context does not have the required tevent_fd's.
Unfortunately, this patchset has to add some advanced array voodoo. The idea
is that state->watches[] contains what we hand out with watch_new, and
state->contexts contains references to the tevent_contexts. For every watch we
need a tevent_fd in every event context, and the routines make sure that the
arrays are properly maintained.
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
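
A simplified, hypothetical sketch of that array maintenance, assuming one
datagram socket and a single implicit watch; dgm_poll_state and
dgm_register_context are made-up names, and the real code additionally keeps
one tevent_fd per watch per context and shrinks the arrays on teardown:

#include <stdint.h>
#include <talloc.h>
#include <tevent.h>

struct dgm_poll_state {
        int sock;                               /* unix domain datagram socket */
        size_t num_contexts;
        struct tevent_context **contexts;       /* registered event contexts   */
        struct tevent_fd **fdes;                /* one tevent_fd per context   */
};

static void dgm_readable(struct tevent_context *ev, struct tevent_fd *fde,
                         uint16_t flags, void *private_data)
{
        /* A real implementation would recvmsg() and dispatch here. */
}

/* Register one more tevent_context: grow both arrays in lockstep and create
 * a tevent_fd for the socket in that context, so a nested event loop run on
 * "ev" also sees the datagrams. */
static int dgm_register_context(struct dgm_poll_state *state,
                                struct tevent_context *ev)
{
        struct tevent_context **contexts;
        struct tevent_fd **fdes, *fde;

        contexts = talloc_realloc(state, state->contexts,
                                  struct tevent_context *,
                                  state->num_contexts + 1);
        if (contexts == NULL) {
                return -1;
        }
        state->contexts = contexts;

        fdes = talloc_realloc(state, state->fdes, struct tevent_fd *,
                              state->num_contexts + 1);
        if (fdes == NULL) {
                return -1;
        }
        state->fdes = fdes;

        fde = tevent_add_fd(ev, state, state->sock, TEVENT_FD_READ,
                            dgm_readable, state);
        if (fde == NULL) {
                return -1;
        }

        state->contexts[state->num_contexts] = ev;
        state->fdes[state->num_contexts] = fde;
        state->num_contexts += 1;
        return 0;
}
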
This is a messaging layer based on unix domain datagram sockets.
Sending to an idle socket is just a single nonblocking sendmsg call. If the
receive queue is full, we start a background thread to do a blocking call. The
source4-based imessaging uses a polling fallback instead. In a situation where
thousands of senders hammer a single blocked socket, that polling generates
constant load on the system. This does not happen with a threaded blocking
send call.
The threaded approach has another advantage: we save become_root() calls on
the retries. The access checks are done when the blocking socket is connected;
the threaded blocking send call does not need to check permissions anymore.
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
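
A rough, hypothetical sketch of that send strategy. The real code uses Samba's
pthreadpool and more careful buffer handling; dgm_send and blocking_send_state
are made-up names, the socket is assumed to be blocking (the first try uses
MSG_DONTWAIT), and the msghdr's iovecs are assumed to stay alive until the
helper thread finishes:

#include <errno.h>
#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>

struct blocking_send_state {
        int fd;
        struct msghdr msg;      /* simplified: the iovecs must stay valid */
};

static void *blocking_send_thread(void *arg)
{
        struct blocking_send_state *state = arg;

        /* The access check already happened when this socket was connected,
         * so no become_root() is needed for the blocking retry. */
        (void)sendmsg(state->fd, &state->msg, 0);
        free(state);
        return NULL;
}

/* One nonblocking sendmsg in the common idle-receiver case; only when the
 * receiver's queue is full hand a blocking sendmsg to a helper thread, so
 * thousands of senders do not busy-poll a single blocked socket. */
static int dgm_send(int fd, const struct msghdr *msg)
{
        ssize_t nwritten = sendmsg(fd, msg, MSG_DONTWAIT);
        if (nwritten >= 0) {
                return 0;
        }
        if ((errno != EWOULDBLOCK) && (errno != EAGAIN)) {
                return -1;
        }

        struct blocking_send_state *state = malloc(sizeof(*state));
        if (state == NULL) {
                return -1;
        }
        state->fd = fd;
        state->msg = *msg;

        pthread_t thread;
        if (pthread_create(&thread, NULL, blocking_send_thread, state) != 0) {
                free(state);
                return -1;
        }
        pthread_detach(thread);
        return 0;
}
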