From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 14 Aug 2023 14:55:49 +0200
From: Wolfgang Bumiller
To: Fiona Ebner
Cc: pve-devel@lists.proxmox.com
Subject: Re: [pve-devel] [PATCH qemu 2/2] backup: trim heap after finishing
Message-ID: <2o4qfyjudbfwwtlackkvg525g3d3xue3zbxwrj534ubhkfldt3@yiqskg63fima>
In-Reply-To: <20230814092133.45002-2-f.ebner@proxmox.com>
References: <20230814092133.45002-1-f.ebner@proxmox.com>
 <20230814092133.45002-2-f.ebner@proxmox.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Mon, Aug 14, 2023 at 11:21:33AM +0200, Fiona Ebner wrote:
> Reported in the community forum [0]. By default, there can be large
> amounts of memory left assigned to the QEMU process after backup.
> Likely because of fragmentation, it's necessary to explicitly call
> malloc_trim() to tell glibc that it shouldn't keep all that memory
> resident for the process.
>
> QEMU itself already does a malloc_trim() in the RCU thread, but that
> code path might not be reached (or not for a long time) under usual
> operation. The value of 4 MiB for the argument was also copied from
> there.
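
As a quick illustration of the glibc behaviour described above, here is a
standalone sketch (not part of the patch; the allocation count/size and the
"pin the heap top" pattern are made up, only malloc_trim() and the 4 MiB pad
match the patch):

/*
 * Hypothetical demo: pin the top of the heap with one live allocation,
 * free everything below it, and watch VmRSS only drop after malloc_trim().
 * Build: gcc -o trim-demo trim-demo.c   (glibc only; malloc_trim is a GNU extension)
 */
#include <malloc.h>   /* malloc_trim() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void print_vmrss(const char *when)
{
    /* Same metric as the grep on /proc/<pid>/status below, read from inside. */
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    while (f && fgets(line, sizeof(line), f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            printf("%-20s %s", when, line);
            break;
        }
    }
    if (f) {
        fclose(f);
    }
}

int main(void)
{
    enum { COUNT = 20000, SIZE = 16 * 1024 };
    static char *blocks[COUNT];

    for (int i = 0; i < COUNT; i++) {
        blocks[i] = malloc(SIZE);
        memset(blocks[i], 0xab, SIZE);   /* touch the pages so they count towards RSS */
    }
    /* Keep the last block alive: the heap top stays pinned, so glibc's
     * automatic top-chunk trimming cannot shrink the arena on its own. */
    for (int i = 0; i < COUNT - 1; i++) {
        free(blocks[i]);
    }
    print_vmrss("after free:");

    malloc_trim(4 * 1024 * 1024);        /* same pad value as the patch */
    print_vmrss("after malloc_trim:");

    free(blocks[COUNT - 1]);
    return 0;
}

On a glibc system this typically shows VmRSS staying high after the frees and
dropping back close to the starting value after the trim; exact numbers depend
on the glibc version and tunables.
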
>
> Example with the following configuration:
>
> > agent: 1
> > boot: order=scsi0
> > cores: 4
> > cpu: x86-64-v2-AES
> > ide2: none,media=cdrom
> > memory: 1024
> > name: backup-mem
> > net0: virtio=DA:58:18:26:59:9F,bridge=vmbr0,firewall=1
> > numa: 0
> > ostype: l26
> > scsi0: rbd:base-107-disk-0/vm-106-disk-1,size=4302M
> > scsihw: virtio-scsi-pci
> > smbios1: uuid=b2d4511e-8d01-44f1-afd6-9581b30c24a6
> > sockets: 2
> > startup: order=2
> > virtio0: lvmthin:vm-106-disk-1,iothread=1,size=1G
> > virtio1: lvmthin:vm-106-disk-2,iothread=1,size=1G
> > virtio2: lvmthin:vm-106-disk-3,iothread=1,size=1G
> > vmgenid: 0a1d8751-5e02-449d-977e-c0160e900231
>
> Before the change:
>
> > root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> > VmRSS:   370948 kB
> > root@pve8a1 ~ # vzdump 106 --storage pbs
> > (...)
> > INFO: Backup job finished successfully
> > root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> > VmRSS:  2114964 kB
>
> After the change:
>
> > root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> > VmRSS:   398788 kB
> > root@pve8a1 ~ # vzdump 106 --storage pbs
> > (...)
> > INFO: Backup job finished successfully
> > root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> > VmRSS:   424356 kB
>
> [0]: https://forum.proxmox.com/threads/131339/
>
> Co-diagnosed-by: Friedrich Weber
> Co-diagnosed-by: Dominik Csapak
> Signed-off-by: Fiona Ebner

Both patches
Acked-by: Wolfgang Bumiller

> ---
>  ...ckup-Proxmox-backup-patches-for-QEMU.patch | 23 +++++++++++++++----
>  ...igrate-dirty-bitmap-state-via-savevm.patch |  4 ++--
>  2 files changed, 20 insertions(+), 7 deletions(-)
>
> diff --git a/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch b/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
> index 3753eff..d873601 100644
> --- a/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
> +++ b/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
> @@ -79,7 +79,8 @@ Signed-off-by: Wolfgang Bumiller
>    adapt for new job lock mechanism replacing AioContext locks
>    adapt to QAPI changes
>    improve canceling
> -  allow passing max-workers setting]
> +  allow passing max-workers setting
> +  use malloc_trim after backup]
>  Signed-off-by: Fiona Ebner
>  ---
>   block/meson.build | 5 +
> @@ -92,11 +93,11 @@ Signed-off-by: Fiona Ebner
>   monitor/hmp-cmds.c | 72 +++
>   proxmox-backup-client.c | 146 +++++
>   proxmox-backup-client.h | 60 ++
> -  pve-backup.c | 1097 ++++++++++++++++++++++++++++++++
> +  pve-backup.c | 1109 ++++++++++++++++++++++++++++++++
>   qapi/block-core.json | 226 +++++++
>   qapi/common.json | 13 +
>   qapi/machine.json | 15 +-
> -  14 files changed, 1711 insertions(+), 13 deletions(-)
> +  14 files changed, 1723 insertions(+), 13 deletions(-)
>   create mode 100644 proxmox-backup-client.c
>   create mode 100644 proxmox-backup-client.h
>   create mode 100644 pve-backup.c
> @@ -587,10 +588,10 @@ index 0000000000..8cbf645b2c
>  +#endif /* PROXMOX_BACKUP_CLIENT_H */
>  diff --git a/pve-backup.c b/pve-backup.c
>  new file mode 100644
> -index 0000000000..dd72ee0ed6
> +index 0000000000..10ca8a0b1d
>  --- /dev/null
>  +++ b/pve-backup.c
> -@@ -0,0 +1,1097 @@
> +@@ -0,0 +1,1109 @@
>  +#include "proxmox-backup-client.h"
>  +#include "vma.h"
>  +
> @@ -605,6 +606,10 @@ index 0000000000..dd72ee0ed6
>  +#include "qapi/qmp/qerror.h"
>  +#include "qemu/cutils.h"
>  +
> ++#if defined(CONFIG_MALLOC_TRIM)
> ++#include <malloc.h>
> ++#endif
> ++
>  +#include <proxmox-backup-qemu.h>
>  +
>  +/* PVE backup state and related function */
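
Side note on the #if defined(CONFIG_MALLOC_TRIM) block above: the same guard
can also be wrapped once so call sites stay free of #ifdefs. A minimal sketch,
assuming CONFIG_MALLOC_TRIM is the usual QEMU build-time flag for malloc_trim()
availability; the helper name is made up and not part of the patch (compile
with -DCONFIG_MALLOC_TRIM on a glibc system to get the real call):

/* Hypothetical helper: compiles to a no-op where malloc_trim() is
 * unavailable, so callers need no #ifdef of their own. */
#include <stddef.h>
#if defined(CONFIG_MALLOC_TRIM)
#include <malloc.h>
#endif

static inline void heap_trim_best_effort(size_t pad)
{
#if defined(CONFIG_MALLOC_TRIM)
    /* Ask glibc to return free heap pages (beyond 'pad' bytes) to the kernel. */
    malloc_trim(pad);
#else
    (void)pad;   /* non-glibc builds: nothing to do */
#endif
}

int main(void)
{
    heap_trim_best_effort(4 * 1024 * 1024);   /* same pad as the patch */
    return 0;
}
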
> @@ -869,6 +874,14 @@ index 0000000000..dd72ee0ed6
>  +    backup_state.stat.end_time = time(NULL);
>  +    backup_state.stat.finishing = false;
>  +    qemu_mutex_unlock(&backup_state.stat.lock);
> ++
> ++#if defined(CONFIG_MALLOC_TRIM)
> ++    /*
> ++     * Try to reclaim memory for buffers (and, in case of PBS, Rust futures), etc.
> ++     * Won't happen by default if there is fragmentation.
> ++     */
> ++    malloc_trim(4 * 1024 * 1024);
> ++#endif
>  +}
>  +
>  +static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
> diff --git a/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch b/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
> index 9437869..7a906e9 100644
> --- a/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
> +++ b/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
> @@ -175,10 +175,10 @@ index 0000000000..887e998b9e
>  +                              NULL);
>  +}
>  diff --git a/pve-backup.c b/pve-backup.c
> -index dd72ee0ed6..cb5312fff3 100644
> +index 10ca8a0b1d..0a5ce2cab8 100644
>  --- a/pve-backup.c
>  +++ b/pve-backup.c
> -@@ -1090,6 +1090,7 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
> +@@ -1102,6 +1102,7 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
>       ret->pbs_library_version = g_strdup(proxmox_backup_qemu_version());
>       ret->pbs_dirty_bitmap = true;
>       ret->pbs_dirty_bitmap_savevm = true;
> --
> 2.39.2
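
A related glibc-specific diagnostic for reports like the forum thread above:
the allocator itself can report how much freed-but-still-held memory it is
sitting on, which bounds what a malloc_trim() call could hand back to the
kernel. A minimal sketch, assuming glibc >= 2.33 for mallinfo2(); the helper
and the labels are made up and not part of the patches:

#include <malloc.h>   /* mallinfo2() (glibc >= 2.33), malloc_trim() */
#include <stdio.h>
#include <stdlib.h>

static void report_heap_slack(const char *when)
{
    struct mallinfo2 mi = mallinfo2();
    /* fordblks: bytes in free chunks the allocator still holds;
     * keepcost: releasable space at the top of the main heap. */
    printf("%s: free-but-held=%zu bytes, top-releasable=%zu bytes\n",
           when, mi.fordblks, mi.keepcost);
}

int main(void)
{
    /* Create a little allocator slack to report on. */
    void *p[16];
    for (int i = 0; i < 16; i++) {
        p[i] = malloc(64 * 1024);
    }
    for (int i = 0; i < 16; i++) {
        free(p[i]);
    }
    report_heap_slack("after free");
    malloc_trim(4 * 1024 * 1024);
    report_heap_slack("after malloc_trim");
    return 0;
}

The same fields, read from inside a long-running process, give a rough idea of
how much of its VmRSS is allocator slack rather than live state.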