| Commit message (Collapse) | Author | Age | Files | Lines |
... | |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
XMonitor::do_restore (called, for example, when going out of
fullscreen) restores the screen resolution to its previous state,
but it does not reposition the screens to their previous
positions, which is one of the advantages of using RandR 1.2.
Since MultyMonScreen::restore handles all of this for us, just call
it to restore the monitor positions/resolutions to their previous
settings. Before making any changes, MultyMonScreen::restore checks
whether there is anything to do, so calling it once per monitor is
not an issue; the resolution/position will only be set the first time.
This has the side effect of fixing bug #693431, which occurs when
closing the client after the client went in and out of fullscreen:
MultyMonScreen::~MultyMonScreen calls MultyMonScreen::restore, which
decides to change the screen positions since they were lost when going
to fullscreen, because XMonitor::restore did not restore them.
After this change, the positions are properly restored and
MultyMonScreen::restore has nothing left to do upon client
shutdown.
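A minimal C++ sketch of the idea; only XMonitor::do_restore and
MultyMonScreen::restore are taken from the commit, the other members and
the "is there anything to do?" flag are hypothetical:

    // Hypothetical sketch: do_restore() delegates to the screen-wide restore,
    // which is idempotent because it first checks whether anything changed.
    class MultyMonScreen {
    public:
        void restore() {
            if (!_changed) {        // assumed "anything to do?" check
                return;             // cheap when called once per monitor
            }
            // ... restore every monitor's mode *and* position via RandR 1.2 ...
            _changed = false;
        }
        void mark_changed() { _changed = true; }
    private:
        bool _changed = false;      // hypothetical bookkeeping flag
    };

    class XMonitor {
    public:
        explicit XMonitor(MultyMonScreen& container) : _container(container) {}
        void do_restore() {
            // instead of restoring only this monitor's resolution, let the
            // container restore resolutions and positions for all monitors
            _container.restore();
        }
    private:
        MultyMonScreen& _container; // hypothetical back-reference
    };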
|
|
|
|
|
|
|
|
|
| |
MultyMonScreen::restore changes the X11 screen resolution, but it
does not go through MultyMonScreen::set_size. This means
MultyMonScreen::_width and MultyMonScreen::_height don't get
updated to reflect the new resolution settings, which could cause
issues later on. Until now this was safe since the only caller of
MultyMonScreen::restore was the MultyMonScreen destructor.
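A small, self-contained C++ sketch of the concern; only set_size, _width
and _height come from the commit, the rest is illustrative:

    // Hypothetical sketch: restore() should change the resolution through
    // set_size() so the cached _width/_height always match the real X screen.
    class MultyMonScreen {
    public:
        void set_size(int width, int height) {
            _width = width;          // keep the cached copy in sync
            _height = height;
            apply_x11_screen_size(width, height);
        }
        void restore() {
            // going through set_size() instead of resizing the X11 screen
            // directly keeps _width/_height valid for later callers
            set_size(_saved_width, _saved_height);
        }
    private:
        void apply_x11_screen_size(int, int) { /* XRRSetScreenSize() in real code */ }
        int _width = 0, _height = 0;
        int _saved_width = 0, _saved_height = 0;  // hypothetical saved state
    };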
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In a Xinerama setup, when X starts up and creates one of the
secondary screens, first a non-primary surface is created on the
secondary screen, and then the primary surface for this screen is
created.
This causes a crash when the guest uses Xinerama and the client
is attached to the VM before X starts (i.e. while the guest is
booting).
It happens because DisplayChannel::create_canvas (which is called
when creating a non-primary surface) assumes a screen has already been
set for the DisplayChannel, while this only happens upon primary surface
creation. However, the screen is only used for non-essential things, so
we can test whether screen() is non-NULL before using it, as is already
done in other parts of this file.
Fixes rhbz #732423
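A hedged, self-contained C++ sketch of the guard; only screen() and
create_canvas are named by the commit, everything else is a stand-in:

    struct RedScreen { /* stand-in for the real screen class */ };

    class DisplayChannel {
    public:
        void create_canvas() {
            if (screen()) {
                // non-essential work that needs the screen; skipped while only
                // a non-primary surface exists and no screen has been set yet
                touch_screen(*screen());
            }
            // ... the canvas itself is created without using the screen ...
        }
    private:
        RedScreen* screen() { return _screen; }  // NULL until the primary
                                                 // surface is created
        void touch_screen(RedScreen&) {}         // placeholder
        RedScreen* _screen = nullptr;
    };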
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When running some xinerama tests, I got several
glz_usr_free_image: error
messages. Looking at the code, this error is reported when this
function is called from a different DisplayChannelClient than the
one which created the glz compressed image.
When this happens, the backtrace (function names truncated in the log)
passes through:
    glz_encoder_dictionary.c:362
    glz_encoder_dictionary.c:449
    glz_encoder_dictionary.c:570 (image_type=LZ_IMAGE_TYPE_RGB32,
        image_width=512, image_height=256, image_stride=2048)
    glz_encoder.c:255
    red_worker.c:5753
    red_worker.c:6211
    red_worker.c:6344
    red_worker.c:7046
    red_worker.c:7720
    red_worker.c:7964
    red_worker.c:8431
Since the glz dictionary is shared between all the
DisplayChannelClient instances that belong to the same client, it can
happen that the glz dictionary code decides to free an image from one
thread while it was added from another thread (thread ==
DisplayChannelClient), so the error message that is printed is not an
actual error. This commit removes this message and adds a comment
explaining what's going on.
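A sketch of what the changed spot might look like; the callback name comes
from the error message quoted above, while the types below are stand-ins,
not the real spice-server ones:

    // Stand-in types, for illustration only.
    struct DisplayChannelClient { int id; };
    struct GlzImage { DisplayChannelClient* owner; };

    // Hypothetical sketch: freeing an image from a DisplayChannelClient other
    // than the one that created it is expected, so no error is reported.
    static void glz_usr_free_image(DisplayChannelClient* caller, GlzImage* image)
    {
        if (caller != image->owner) {
            /* Not an error: the glz dictionary is shared between the channel
             * clients (threads) of the same client, so an image added by one
             * thread may legitimately be released by another. */
        }
        delete image;
    }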
|
|
|
|
|
|
|
| |
applicaion => application
Attache => Attach
Detache => Detach
_layes => _layers
|
|
|
|
|
|
|
|
| |
Several functions in server/ were not specifying an argument list,
i.e. they were declared as void foo(); instead of void foo(void);.
When compiling with -Wstrict-prototypes, this leads to:
test_playback.c:93:5: erreur: function declaration isn’t a prototype
[-Werror=strict-prototypes]
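For reference, the difference -Wstrict-prototypes cares about (the function
name below is hypothetical):

    /* Before (old-style declaration; in C this leaves the argument list
     * unspecified, which is what -Wstrict-prototypes complains about):
     *
     *     void some_server_function();
     *
     * After (a real prototype; "(void)" explicitly means "no arguments"): */
    void some_server_function(void);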
|
| |
|
|
|
|
|
| |
Without these, spice_backtrace() can't be used from the C++ client
code.
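The collapsed subject is not shown here, but the usual way to make a C
symbol such as spice_backtrace() callable from C++ code is extern "C"
guards in its header; a sketch, with the header name, guard macro and
prototype as assumptions:

    /* Hypothetical sketch of backtrace.h with C++ guards added. */
    #ifndef BACKTRACE_H_
    #define BACKTRACE_H_

    #ifdef __cplusplus
    extern "C" {
    #endif

    void spice_backtrace(void);   /* implemented in C, now usable from C++ */

    #ifdef __cplusplus
    }
    #endif

    #endif /* BACKTRACE_H_ */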
|
|
|
|
|
|
|
|
|
| |
red_display_marshall_stream_start initializes a
SpiceMsgDisplayStreamCreate structure before marshalling it and
sending it on the wire. However, it never fills
SpiceMsgDisplayStreamCreate::stamp which then causes a complaint
from valgrind. This patch sets the value to 0; it is not used
by the client, so the value shouldn't matter.
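A sketch of the shape of the fix; only the stamp field is taken from the
commit, and the stand-in struct below is not the real
SpiceMsgDisplayStreamCreate from spice-protocol:

    #include <stdint.h>

    /* Stand-in for SpiceMsgDisplayStreamCreate, illustration only. */
    typedef struct {
        uint32_t id;
        uint32_t stamp;
    } StreamCreateSketch;

    static void fill_stream_create(StreamCreateSketch *msg, uint32_t id)
    {
        msg->id = id;
        msg->stamp = 0;   /* not used by the client; zero it anyway so the
                             marshalled message contains no uninitialized
                             bytes and valgrind stays quiet */
    }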
|
|
|
|
|
|
|
|
| |
create_test_primary_surface in test_display_base.c creates a
QXLDevSurfaceCreate structure and initializes it, but doesn't set
the position field. Moreover, this structure has 4 bytes of padding
at the end (as shown by pahole from dwarves), so initialize the whole
structure to 0 before using it.
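A sketch of the pattern; QXLDevSurfaceCreate, its unset position field and
its trailing padding are from the commit, while the stand-in struct below
only mimics the shape of the problem:

    #include <stdint.h>
    #include <string.h>

    /* Stand-in with the same kind of layout: a field that was left unset plus
     * trailing padding of the sort pahole reports. */
    typedef struct {
        uint32_t width;
        uint32_t height;
        uint64_t mem;        /* forces 8-byte alignment, hence trailing padding */
        uint32_t position;
        /* 4 bytes of padding here */
    } SurfaceCreateSketch;

    static void create_test_primary_surface_sketch(SurfaceCreateSketch *surface)
    {
        memset(surface, 0, sizeof(*surface));  /* zeroes the fields *and* the padding */
        surface->width  = 640;
        surface->height = 480;
        /* position intentionally left at 0 by the memset */
    }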
|
| |
|
| |
|
|
|
|
| |
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
|
|
|
|
|
|
|
|
| |
Issue found by the Coverity scanner.
HDG: Fixup don't free RGB24_line if it was not allocated by do_jpeg_encode
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
|
|
|
|
| |
Issue found by the Coverity scanner
|
|
|
|
| |
Issue found by the Coverity scanner.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
While discussing various things with Alon in Vancouver, it came up that
having a channel which simply passes through data coming out of a qemu
chardev frontend unmodified, like the usbredir channel does, can be used
for a lot of other cases too. To facilitate this the usbredir channel code
will be turned into a generic spicevmc channel, which is just a passthrough
to the client, from the spicevmc chardev.
This patch renames usbredir.c to spicevmc.c and changes the prefix of all
functions / structs to match. This should make clear that the code is not
usbredir specific.
Some examples of why having a generic spicevmc passthrough is good:
1) We could add a monitor channel, allowing access to the qemu monitor from
the spice client, since the monitor is a chardev frontend we could re-use
the generic spicevmc channel server code, so all that is needed to add this
(server side) would be reserving a new channel id for this.
2) We could allow users to come up with new channels of their own, without
requiring qemu or server modification. The idea is to allow doing something
like this on the qemu startup cmdline:
-chardev spicevmc,name=generic,channelid=128
To ensure these new "generic" channels cannot conflict with newly added
official types, they must start at the SPICE_CHANNEL_USER_DEFINED_START value
(128).
These new user-defined channels could then either be used with a specially
modified client, with client plugins (if we add support for those), or
by exporting them on the client side for use by an external app, see below.
3) We could also add support in the client to make user-defined channels
end in a unix socket / pipe, allowing handling of the data by an external
app; for example, we could have a new spice client cmdline argument like this:
--custom-channel unixsocket=/tmp/mysocket,channelid=128
This would allow for something like:
$random app on guest -> virtio-serial -> spicevmc chardev ->
-> spicevmc channel -> unix socket -> $random app on client
4) In hindsight this could also have been used for the smartcard stuff,
with a one-channel-per-reader model, rather than the current multiplexing
code where we have our own multiplexing protocol wrapper over the libcacard
smartcard protocol.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
|
|
|
|
|
|
|
| |
Now that the Channel struct is gone and the RedChannel has the same lifetime
as the chardev interface, there is no need to keep these two separate.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
|
|
|
|
| |
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
|
|
|
|
| |
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
|
|
|
|
| |
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
|
|
|
|
| |
fix for "client: fix endless recursion in rearrange_monitors, RHBZ #692976"
|
| |
|
|
|
|
|
|
|
|
|
|
| |
Fixes a valgrind-discovered possible bug in spice-server - valgrind on
test_playback saw it; I didn't see it happen with qemu.
The problem is that the frame buffers returned by spice_server_playback_get_buffer
are part of the malloc'ed SndChannel, whose lifetime is shorter than that of SndWorker.
As a result, a pointer to a buffer previously returned by spice_server_playback_get_buffer
could remain and be used after the SndChannel has been freed because the client disconnected.
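A sketch of the unsafe pattern as seen by a caller of the public API;
spice_server_playback_get_buffer is named by the commit, while the
put_samples counterpart and the header path are assumptions:

    #include <spice.h>   /* spice-server public header; exact path may differ */

    /* Hypothetical illustration of the hazard described above. */
    static void playback_step(SpicePlaybackInstance *sin)
    {
        uint32_t *frame = NULL;
        uint32_t num_samples = 0;

        spice_server_playback_get_buffer(sin, &frame, &num_samples);
        /* If the client disconnects here, the SndChannel that owns 'frame'
         * is freed even though the SndWorker survives... */
        spice_server_playback_put_samples(sin, frame);  /* ...so this touches freed memory */
    }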
|
|
|
|
|
|
|
|
| |
reds_main_channel_connected
The "channel->disconnecting" parameter already protects against recursion.
Removed fixed TODOs.
|
|
|
|
|
| |
Instead of checking just for reds->main_channel, also check that there is
at least one client.
|
| |
|
| |
|
|
|
|
|
| |
handler->cb->release_msg_buf was not being called except in the error path,
causing a memory leak.
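A sketch of the fix's shape; handler->cb->release_msg_buf is from the
commit, while the callback signatures and the surrounding code are
assumptions:

    /* Stand-in callback table, for illustration only. */
    typedef struct {
        void *(*alloc_msg_buf)(void *opaque, int size);
        int   (*handle_message)(void *opaque, int type, void *msg);
        void  (*release_msg_buf)(void *opaque, void *msg);
    } HandlerCbsSketch;

    typedef struct {
        HandlerCbsSketch *cb;
        void *opaque;
    } HandlerSketch;

    static int handle_one_message(HandlerSketch *handler, int type, int size)
    {
        void *msg = handler->cb->alloc_msg_buf(handler->opaque, size);
        int rc = handler->cb->handle_message(handler->opaque, type, msg);
        /* Previously the buffer was released only on the error path; release
         * it on success as well, otherwise every handled message leaks. */
        handler->cb->release_msg_buf(handler->opaque, msg);
        return rc;
    }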
|
|
|
|
| |
by using the new SPICE_DEBUG_LEVEL.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Merging the functionality of reds::channel into RedChannel.
In addition, clean up and fix the disconnection code: before this patch,
red_dispatcher_disconnect_display_client
could have been called from the red_worker thread
(and it must be called only from the io thread).
RedChannel holds only connected channel clients. RedClient holds all the
channel clients that were created until it is destroyed
(and then it destroys them as well).
Note: snd_channel still doesn't use red_channel; however, it
creates a dummy channel and channel clients in order to register itself
in reds.
server/red_channel.c: a channel is connected if it holds at least one channel client.
Previously I changed RedChannel to hold only connected channel clients and
RedClient to hold all the channel clients as long as it is not destroyed.
usbredir: multichannel has not been tested; it just compiles.
|
|
|
|
|
| |
client_cbs are supposed to be called from client context (reds). This patch will be used
in future patches for replacing reds::Channel with RedChannel in order to eliminate redundancy.
|
| |
|
|
|
|
|
|
|
|
| |
Introduces ref_red_drawable and put_red_drawable (renamed from free_red_drawable).
RedDrawable is already referenced by Drawable and RedGlzDrawable, with
a hack that NULLs the drawable field in RedGlzDrawable to indicate that
RedGlzDrawable is the last reference holder. Use an explicit reference count instead.
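A sketch of the explicit reference count that replaces the NULL-field hack;
ref_red_drawable/put_red_drawable are from the commit, the struct layout is
a stand-in:

    #include <stdlib.h>

    /* Stand-in for RedDrawable: only the refs field matters here. */
    typedef struct {
        int refs;
        /* ... drawable payload ... */
    } RedDrawableSketch;

    static RedDrawableSketch *new_red_drawable(void)
    {
        RedDrawableSketch *drawable = (RedDrawableSketch *)calloc(1, sizeof(*drawable));
        drawable->refs = 1;                 /* creator holds the first reference */
        return drawable;
    }

    static RedDrawableSketch *ref_red_drawable(RedDrawableSketch *drawable)
    {
        drawable->refs++;                   /* e.g. Drawable and RedGlzDrawable */
        return drawable;
    }

    static void put_red_drawable(RedDrawableSketch *drawable)
    {
        /* replaces NULLing RedGlzDrawable::drawable to mark "last holder":
         * whoever drops the final reference frees the RedDrawable */
        if (--drawable->refs == 0) {
            free(drawable);
        }
    }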
|
| |
|
| |
|
| |
|
|
|
|
|
| |
Add cursor allocation debugging code that is turned off as long as
DEBUG_CURSORS is not defined.
|
|
|
|
| |
Small cleanup patch; the only functional change is sending a set ack message.
|
|
|
|
|
| |
makes RED_WORKER_MESSAGE_CURSOR_DISCONNECT_CLIENT disconnect only a
single client.
|
| |
|
|
|
|
|
|
|
| |
CursorPipeItems and their corresponding cursor_item were not
freed when they were removed from the pipe without being sent.
In addition, cursor_channel_hold_pipe_item used a wrong conversion,
casting a (CursorPipeItem*) to (CursorItem*).
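A hedged sketch of the conversion problem; CursorPipeItem, CursorItem,
cursor_item and cursor_channel_hold_pipe_item appear in the commit, but the
actual layout and fix may differ:

    /* Stand-in layout: the pipe item refers to a cursor item instead of
     * embedding one, so a plain pointer cast between the two is wrong. */
    typedef struct { int refs; } CursorItemSketch;
    typedef struct {
        /* pipe item header would live here */
        CursorItemSketch *cursor_item;      /* field name taken from the commit */
    } CursorPipeItemSketch;

    static void cursor_channel_hold_pipe_item_sketch(CursorPipeItemSketch *pipe_item)
    {
        /* wrong:  CursorItemSketch *item = (CursorItemSketch *)pipe_item;
         * right:  follow the pointer the pipe item actually carries */
        CursorItemSketch *item = pipe_item->cursor_item;
        item->refs++;
    }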
|
|
|
|
|
|
|
|
|
|
| |
Required to support multiple clients.
Also somewhat changes the way we produce PIPE_ITEM_TYPE_LOCAL_CURSOR. By the
way, I haven't managed to see us actually produce such an item during my
tests.
Previously we had a single pipe item per CursorItem; this is impossible
with two pipes, which is what we get when we have two clients.
|
| |
|
|
|
|
|
| |
Currently set by the environment variable SPICE_DEBUG_ALLOW_MC (any value
means multiple connections are allowed). Later it will be set via the spice
API from qemu.
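A sketch of the environment check; the variable name is from the commit,
the helper around it is illustrative:

    #include <stdlib.h>

    /* Hypothetical helper: any value of SPICE_DEBUG_ALLOW_MC - the variable
     * merely has to be set - enables multiple client connections. */
    static int multiple_clients_allowed(void)
    {
        return getenv("SPICE_DEBUG_ALLOW_MC") != NULL;
    }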
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Also adds Drawable pipes and glz rings.
main_channel and red_worker had several locations that still accessed rcc
directly, so they had to be touched too, but the changes are minimal.
Most changes are in red_channel: drop the single client reference in RedChannel
and add a ring of channels.
Things missing / not done right in red_worker:
* StreamAgents are in DCC - right/wrong?
* GlzDict is multiplied - multiple compressions.
We are still missing:
* remove the disconnect calls on new connections
|
| |
|