- Instead of moving the setup for SIGCHLD handling, I'm removing it.
  waitpid() isn't that expensive, and we know the pid we are waiting for.
- The example from our SELinux devs used this construct, so we're going
  to follow suit.
- Also drop the rstat of the parent directory; just let the remote side
  send us an error.
- Pass errors up from the transfer functions so we can keep track of
  errors and continue trying to transfer all files. Make the output more
  consistent and send it all to stderr.
- We never did support recursive copy and probably never will.
- If we don't get a heartbeat when qarsh starts, print a warning to
  check on the btimed service. A command could be stopped early if it
  produces no output and qarsh thinks the host is down.
- We really don't need this field since we always copy data sequentially.
- This change breaks the qacp protocol! Before, there was a chance we
  would exit before receiving and checking all packets from qarshd; now
  we look at all packets and check them. Use data-allow packets and
  larger buffers. Handle errors on the write end.
- We ignore SIGPIPE inside qarshd so we can handle the error and
  continue. We do want the commands we run to receive SIGPIPE by
  default, so they may die if they don't handle it.
- If xiogen is flooding requests across qarsh and xdoio decides to stop,
  we need to handle that gracefully. Also, making the pipe non-blocking
  was not a good idea: xdoio gets EAGAIN on read and stops there.
- We don't need 32 bits for the packet type or remote fd. Keeping these
  fields small also helps with reading traces.
- We are already closing it in recvfiles because we could create
  multiple connections. This caused us to close the fd twice.
- When qarshd is run via xinetd, stderr still goes out the socket, and
  messages from sockutil.c or qarsh_packet.c can interfere with the
  protocol. Create a thin logging wrapper so qacp and qarsh can send
  messages to stderr while qarshd sends them to syslog.
- Believe it or not, we can get short reads here.
- Since we keep a socket open, we don't need this anymore, and freeing
  it causes problems.
- This coordinates the buffer sizes with the max packet size. qarshd and
  qarsh will probably break if this value does not match between client
  and server builds. Also increase the value to reduce overhead: a max
  packet size of 16k yields only 40MB/s; increase it to 128k and we can
  do 500MB/s.
- We don't need to look up the addresses every time we get btime. Do it
  once during hbeat_init and reuse the socket in hbeat. This cleans up
  the qarsh strace so the hbeat is only a send and recv.
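The one-time lookup above typically means calling getaddrinfo() once and connect()ing the UDP socket to the peer, after which every heartbeat is a bare send()/recv() pair. A sketch under those assumptions; the real hbeat code likely differs in names and details:

```c
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int hb_sock = -1;

/* Resolve the peer once and connect() the UDP socket to it, so later
 * heartbeats involve no per-call address lookups. */
static int hbeat_init(const char *host, const char *port)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_DGRAM;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;
    hb_sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (hb_sock >= 0 && connect(hb_sock, res->ai_addr, res->ai_addrlen) < 0) {
        close(hb_sock);
        hb_sock = -1;
    }
    freeaddrinfo(res);
    return hb_sock < 0 ? -1 : 0;
}

/* Each heartbeat is now a single send() on the connected socket,
 * which is all that shows up in an strace. */
static int hbeat(void)
{
    return send(hb_sock, "hb", 2, 0) == 2 ? 0 : -1;
}

int main(void)
{
    /* connect() on a UDP socket only records the peer address, so
     * 127.0.0.1:9 (discard) works even with nothing listening. */
    if (hbeat_init("127.0.0.1", "9") < 0)
        return 1;
    return hbeat() == 0 ? 0 : 1;
}
```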
- If the user is specified as part of the host, we don't need to free
  it; if it was a separate option, it will get freed when the process
  ends.
- Net removal of 12 strdup calls; rstat is now properly freed. There is
  no need to strdup for basename(), since the original is not modified.
- Only handle one file transfer at a time, so we don't need an array to
  track multiple transfers or know the remote's fd number. Loop in
  recv_packet until we read a whole packet.
- Added a new packet to limit data sent from the other side.
- This allows us to attach gdb before anything interesting happens. Use
  the command 'signal 14' to get the process running again.
- I removed the buffering layer from recv_packet because it made the
  logic too complex around the pselect in qarshd. Now we read only as
  much as needed to get each packet. qarshd gains an array for remote
  file descriptors, which is only a stub for now; it needs to be
  expanded to allow multiple simultaneous file transfers for runcmd.