* [pve-devel] [PATCH v2 qemu 1/2] block/gluster: correctly set max_pdiscard which is int64_t
@ 2022-05-06 12:38 Fabian Ebner
  2022-05-06 12:38 ` [pve-devel] [PATCH v2 qemu 2/2] ui/vnc.c: Fixed a deadlock bug Fabian Ebner
  2022-05-11 10:08 ` [pve-devel] applied-series: [PATCH v2 qemu 1/2] block/gluster: correctly set max_pdiscard which is int64_t Thomas Lamprecht
  0 siblings, 2 replies; 3+ messages in thread
From: Fabian Ebner @ 2022-05-06 12:38 UTC
  To: pve-devel

Previously, max_pdiscard would end up being zero, triggering the
following assertion failure:
qemu-system-x86_64: ../block/io.c:3166: bdrv_co_pdiscard: Assertion
`max_pdiscard >= bs->bl.request_alignment' failed.
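
The cause: bs->bl.max_pdiscard is an int64_t, so on an LP64 platform
SIZE_MAX equals UINT64_MAX and the assignment wraps the field to -1 on
common implementations; aligning -1 down to the request alignment then
truncates to zero. A minimal sketch of that arithmetic (ALIGN_DOWN and
the 512-byte alignment are stand-ins for QEMU's QEMU_ALIGN_DOWN() and
the actual alignment, not QEMU's code):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for QEMU's QEMU_ALIGN_DOWN(); integer division
     * truncates toward zero. */
    #define ALIGN_DOWN(n, m) ((n) / (m) * (m))

    int main(void)
    {
        /* SIZE_MAX is UINT64_MAX on LP64, so the conversion to the
         * int64_t limit field wraps to -1. */
        int64_t max_pdiscard = SIZE_MAX;

        /* Aligning -1 down to a 512-byte request alignment gives 0,
         * which is the zero the failing assertion sees. */
        int64_t aligned = ALIGN_DOWN(max_pdiscard, 512);

        printf("max_pdiscard=%" PRId64 " aligned=%" PRId64 "\n",
               max_pdiscard, aligned);
        return 0;
    }

This prints max_pdiscard=-1 aligned=0, matching the assertion failure
above; INT64_MAX stays positive through the same steps.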

Fixes: 0c8022876f ("block: use int64_t instead of int in driver discard handlers")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

Changes from v1:
    * Re-send as patch so both fixes can be applied independently.

Reported in the community forum:
https://forum.proxmox.com/threads/108875
Thanks to Mira for telling me how to reproduce it.

 block/gluster.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 398976bc66..592e71b22a 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -891,7 +891,7 @@ out:
 static void qemu_gluster_refresh_limits(BlockDriverState *bs, Error **errp)
 {
     bs->bl.max_transfer = GLUSTER_MAX_TRANSFER;
-    bs->bl.max_pdiscard = SIZE_MAX;
+    bs->bl.max_pdiscard = INT64_MAX;
 }
 
 static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
@@ -1304,7 +1304,7 @@ static coroutine_fn int qemu_gluster_co_pdiscard(BlockDriverState *bs,
     GlusterAIOCB acb;
     BDRVGlusterState *s = bs->opaque;
 
-    assert(bytes <= SIZE_MAX); /* rely on max_pdiscard */
+    assert(bytes <= INT64_MAX); /* rely on max_pdiscard */
 
     acb.size = 0;
     acb.ret = 0;
-- 
2.30.2


* [pve-devel] [PATCH v2 qemu 2/2] ui/vnc.c: Fixed a deadlock bug.
  2022-05-06 12:38 [pve-devel] [PATCH v2 qemu 1/2] block/gluster: correctly set max_pdiscard which is int64_t Fabian Ebner
@ 2022-05-06 12:38 ` Fabian Ebner
  2022-05-11 10:08 ` [pve-devel] applied-series: [PATCH v2 qemu 1/2] block/gluster: correctly set max_pdiscard which is int64_t Thomas Lamprecht
  1 sibling, 0 replies; 3+ messages in thread
From: Fabian Ebner @ 2022-05-06 12:38 UTC
  To: pve-devel

From: Rao Lei <lei.rao@intel.com>

The GDB stack trace is as follows:
(gdb) bt
0  __lll_lock_wait (futex=futex@entry=0x56211df20360, private=0) at lowlevellock.c:52
1  0x00007f263caf20a3 in __GI___pthread_mutex_lock (mutex=0x56211df20360) at ../nptl/pthread_mutex_lock.c:80
2  0x000056211a757364 in qemu_mutex_lock_impl (mutex=0x56211df20360, file=0x56211a804857 "../ui/vnc-jobs.h", line=60)
    at ../util/qemu-thread-posix.c:80
3  0x000056211a0ef8c7 in vnc_lock_output (vs=0x56211df14200) at ../ui/vnc-jobs.h:60
4  0x000056211a0efcb7 in vnc_clipboard_send (vs=0x56211df14200, count=1, dwords=0x7ffdf1701338) at ../ui/vnc-clipboard.c:138
5  0x000056211a0f0129 in vnc_clipboard_notify (notifier=0x56211df244c8, data=0x56211dd1bbf0) at ../ui/vnc-clipboard.c:209
6  0x000056211a75dde8 in notifier_list_notify (list=0x56211afa17d0 <clipboard_notifiers>, data=0x56211dd1bbf0) at ../util/notify.c:39
7  0x000056211a0bf0e6 in qemu_clipboard_update (info=0x56211dd1bbf0) at ../ui/clipboard.c:50
8  0x000056211a0bf05d in qemu_clipboard_peer_release (peer=0x56211df244c0, selection=QEMU_CLIPBOARD_SELECTION_CLIPBOARD)
    at ../ui/clipboard.c:41
9  0x000056211a0bef9b in qemu_clipboard_peer_unregister (peer=0x56211df244c0) at ../ui/clipboard.c:19
10 0x000056211a0d45f3 in vnc_disconnect_finish (vs=0x56211df14200) at ../ui/vnc.c:1358
11 0x000056211a0d4c9d in vnc_client_read (vs=0x56211df14200) at ../ui/vnc.c:1611
12 0x000056211a0d4df8 in vnc_client_io (ioc=0x56211ce70690, condition=G_IO_IN, opaque=0x56211df14200) at ../ui/vnc.c:1649
13 0x000056211a5b976c in qio_channel_fd_source_dispatch
    (source=0x56211ce50a00, callback=0x56211a0d4d71 <vnc_client_io>, user_data=0x56211df14200) at ../io/channel-watch.c:84
14 0x00007f263ccede8e in g_main_context_dispatch () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
15 0x000056211a77d4a1 in glib_pollfds_poll () at ../util/main-loop.c:232
16 0x000056211a77d51f in os_host_main_loop_wait (timeout=958545) at ../util/main-loop.c:255
17 0x000056211a77d630 in main_loop_wait (nonblocking=0) at ../util/main-loop.c:531
18 0x000056211a45bc8e in qemu_main_loop () at ../softmmu/runstate.c:726
19 0x000056211a0b45fa in main (argc=69, argv=0x7ffdf1701778, envp=0x7ffdf17019a8) at ../softmmu/main.c:50

From the call trace, we can see this is a deadlock bug:
vnc_disconnect_finish() acquires the output_mutex, but the same mutex
is acquired again in vnc_clipboard_send() further down the call chain.
Locking a non-recursive mutex twice from the same thread deadlocks.
So, I move qemu_clipboard_peer_unregister() behind vnc_unlock_output().
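
The pattern in miniature, as a sketch using plain pthreads rather than
QEMU's locking wrappers (the function names here only mirror the real
ones; with the default non-recursive mutex type the second lock blocks
forever in practice):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t output_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* Mirrors vnc_clipboard_send(): takes the output lock itself. */
    static void clipboard_send(void)
    {
        pthread_mutex_lock(&output_mutex);   /* second lock: hangs */
        /* ... send clipboard data ... */
        pthread_mutex_unlock(&output_mutex);
    }

    /* Mirrors vnc_disconnect_finish() before the fix: calls into the
     * clipboard path while still holding the output lock. */
    static void disconnect_finish(void)
    {
        pthread_mutex_lock(&output_mutex);
        /* ... tear down client state ... */
        clipboard_send();                    /* deadlocks here */
        pthread_mutex_unlock(&output_mutex); /* never reached */
    }

    int main(void)
    {
        disconnect_finish();
        puts("not reached");
        return 0;
    }

The fix below applies the same idea: release output_mutex before
calling qemu_clipboard_peer_unregister(), whose notifier chain ends up
in vnc_clipboard_send(), as the backtrace above shows.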

Fixes: 0bf41cab93e ("ui/vnc: clipboard support")
Signed-off-by: Lei Rao <lei.rao@intel.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220105020808.597325-1-lei.rao@intel.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 1dbbe6f172810026c51dc84ed927a3cc23017949)
[FE: trivial backport for 6.2]
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

New in v2.

Reported in the community forum:
https://forum.proxmox.com/threads/109075/

 ui/vnc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/ui/vnc.c b/ui/vnc.c
index af02522e84..b253e85c65 100644
--- a/ui/vnc.c
+++ b/ui/vnc.c
@@ -1354,12 +1354,12 @@ void vnc_disconnect_finish(VncState *vs)
         /* last client gone */
         vnc_update_server_surface(vs->vd);
     }
+    vnc_unlock_output(vs);
+
     if (vs->cbpeer.update.notify) {
         qemu_clipboard_peer_unregister(&vs->cbpeer);
     }
 
-    vnc_unlock_output(vs);
-
     qemu_mutex_destroy(&vs->output_mutex);
     if (vs->bh != NULL) {
         qemu_bh_delete(vs->bh);
-- 
2.30.2


* [pve-devel] applied-series: [PATCH v2 qemu 1/2] block/gluster: correctly set max_pdiscard which is int64_t
  2022-05-06 12:38 [pve-devel] [PATCH v2 qemu 1/2] block/gluster: correctly set max_pdiscard which is int64_t Fabian Ebner
  2022-05-06 12:38 ` [pve-devel] [PATCH v2 qemu 2/2] ui/vnc.c: Fixed a deadlock bug Fabian Ebner
@ 2022-05-11 10:08 ` Thomas Lamprecht
  1 sibling, 0 replies; 3+ messages in thread
From: Thomas Lamprecht @ 2022-05-11 10:08 UTC
  To: Proxmox VE development discussion, Fabian Ebner

On 5/6/22 at 14:38, Fabian Ebner wrote:
> Previously, max_pdiscard would end up being zero, triggering the
> following assertion failure:
> qemu-system-x86_64: ../block/io.c:3166: bdrv_co_pdiscard: Assertion
> `max_pdiscard >= bs->bl.request_alignment' failed.
> 
> Fixes: 0c8022876f ("block: use int64_t instead of int in driver discard handlers")
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
> ---
> 
> Changes from v1:
>     * Re-send as patch so both fixes can be applied independently.
> 
> Reported in the community forum:
> https://forum.proxmox.com/threads/108875
> Thanks to Mira for telling me how to reproduce it.
> 
>  block/gluster.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
>

applied both patches, thanks!
