public inbox for pve-devel@lists.proxmox.com
From: Wolfgang Bumiller <w.bumiller@proxmox.com>
To: Fiona Ebner <f.ebner@proxmox.com>
Cc: pve-devel@lists.proxmox.com
Subject: Re: [pve-devel] [PATCH qemu 2/2] backup: trim heap after finishing
Date: Mon, 14 Aug 2023 14:55:49 +0200	[thread overview]
Message-ID: <2o4qfyjudbfwwtlackkvg525g3d3xue3zbxwrj534ubhkfldt3@yiqskg63fima> (raw)
In-Reply-To: <20230814092133.45002-2-f.ebner@proxmox.com>

On Mon, Aug 14, 2023 at 11:21:33AM +0200, Fiona Ebner wrote:
> Reported in the community forum [0]. By default, large amounts of
> memory can remain assigned to the QEMU process after backup. Likely
> because of fragmentation, an explicit malloc_trim() call is needed to
> tell glibc that it shouldn't keep all that memory resident for the
> process.
> 
> QEMU itself already does a malloc_trim() in the RCU thread, but that
> code path might not be reached (or not for a long time) under usual
> operation. The value of 4 MiB for the argument was also copied from
> there.
> 
> Example with the following configuration:
> > agent: 1
> > boot: order=scsi0
> > cores: 4
> > cpu: x86-64-v2-AES
> > ide2: none,media=cdrom
> > memory: 1024
> > name: backup-mem
> > net0: virtio=DA:58:18:26:59:9F,bridge=vmbr0,firewall=1
> > numa: 0
> > ostype: l26
> > scsi0: rbd:base-107-disk-0/vm-106-disk-1,size=4302M
> > scsihw: virtio-scsi-pci
> > smbios1: uuid=b2d4511e-8d01-44f1-afd6-9581b30c24a6
> > sockets: 2
> > startup: order=2
> > virtio0: lvmthin:vm-106-disk-1,iothread=1,size=1G
> > virtio1: lvmthin:vm-106-disk-2,iothread=1,size=1G
> > virtio2: lvmthin:vm-106-disk-3,iothread=1,size=1G
> > vmgenid: 0a1d8751-5e02-449d-977e-c0160e900231
> 
> Before the change:
> 
> > root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> > VmRSS:	  370948 kB
> > root@pve8a1 ~ # vzdump 106 --storage pbs
> > (...)
> > INFO: Backup job finished successfully
> > root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> > VmRSS:	 2114964 kB
> 
> After the change:
> 
> > root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> > VmRSS:	  398788 kB
> > root@pve8a1 ~ # vzdump 106 --storage pbs
> > (...)
> > INFO: Backup job finished successfully
> > root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> > VmRSS:	  424356 kB
> 
> [0]: https://forum.proxmox.com/threads/131339/
> 
> Co-diagnosed-by: Friedrich Weber <f.weber@proxmox.com>
> Co-diagnosed-by: Dominik Csapak <d.csapak@proxmox.com>
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>

Both patches

Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>

> ---
>  ...ckup-Proxmox-backup-patches-for-QEMU.patch | 23 +++++++++++++++----
>  ...igrate-dirty-bitmap-state-via-savevm.patch |  4 ++--
>  2 files changed, 20 insertions(+), 7 deletions(-)
> 
> diff --git a/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch b/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
> index 3753eff..d873601 100644
> --- a/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
> +++ b/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
> @@ -79,7 +79,8 @@ Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
>       adapt for new job lock mechanism replacing AioContext locks
>       adapt to QAPI changes
>       improve canceling
> -     allow passing max-workers setting]
> +     allow passing max-workers setting
> +     use malloc_trim after backup]
>  Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
>  ---
>   block/meson.build              |    5 +
> @@ -92,11 +93,11 @@ Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
>   monitor/hmp-cmds.c             |   72 +++
>   proxmox-backup-client.c        |  146 +++++
>   proxmox-backup-client.h        |   60 ++
> - pve-backup.c                   | 1097 ++++++++++++++++++++++++++++++++
> + pve-backup.c                   | 1109 ++++++++++++++++++++++++++++++++
>   qapi/block-core.json           |  226 +++++++
>   qapi/common.json               |   13 +
>   qapi/machine.json              |   15 +-
> - 14 files changed, 1711 insertions(+), 13 deletions(-)
> + 14 files changed, 1723 insertions(+), 13 deletions(-)
>   create mode 100644 proxmox-backup-client.c
>   create mode 100644 proxmox-backup-client.h
>   create mode 100644 pve-backup.c
> @@ -587,10 +588,10 @@ index 0000000000..8cbf645b2c
>  +#endif /* PROXMOX_BACKUP_CLIENT_H */
>  diff --git a/pve-backup.c b/pve-backup.c
>  new file mode 100644
> -index 0000000000..dd72ee0ed6
> +index 0000000000..10ca8a0b1d
>  --- /dev/null
>  +++ b/pve-backup.c
> -@@ -0,0 +1,1097 @@
> +@@ -0,0 +1,1109 @@
>  +#include "proxmox-backup-client.h"
>  +#include "vma.h"
>  +
> @@ -605,6 +606,10 @@ index 0000000000..dd72ee0ed6
>  +#include "qapi/qmp/qerror.h"
>  +#include "qemu/cutils.h"
>  +
> ++#if defined(CONFIG_MALLOC_TRIM)
> ++#include <malloc.h>
> ++#endif
> ++
>  +#include <proxmox-backup-qemu.h>
>  +
>  +/* PVE backup state and related function */
> @@ -869,6 +874,14 @@ index 0000000000..dd72ee0ed6
>  +    backup_state.stat.end_time = time(NULL);
>  +    backup_state.stat.finishing = false;
>  +    qemu_mutex_unlock(&backup_state.stat.lock);
> ++
> ++#if defined(CONFIG_MALLOC_TRIM)
> ++    /*
> ++     * Try to reclaim memory for buffers (and, in case of PBS, Rust futures), etc.
> ++     * Won't happen by default if there is fragmentation.
> ++     */
> ++    malloc_trim(4 * 1024 * 1024);
> ++#endif
>  +}
>  +
>  +static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
> diff --git a/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch b/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
> index 9437869..7a906e9 100644
> --- a/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
> +++ b/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
> @@ -175,10 +175,10 @@ index 0000000000..887e998b9e
>  +                         NULL);
>  +}
>  diff --git a/pve-backup.c b/pve-backup.c
> -index dd72ee0ed6..cb5312fff3 100644
> +index 10ca8a0b1d..0a5ce2cab8 100644
>  --- a/pve-backup.c
>  +++ b/pve-backup.c
> -@@ -1090,6 +1090,7 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
> +@@ -1102,6 +1102,7 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
>       ret->pbs_library_version = g_strdup(proxmox_backup_qemu_version());
>       ret->pbs_dirty_bitmap = true;
>       ret->pbs_dirty_bitmap_savevm = true;
> -- 
> 2.39.2
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 





Thread overview: 4+ messages
2023-08-14  9:21 [pve-devel] [PATCH qemu 1/2] refresh patch context Fiona Ebner
2023-08-14  9:21 ` [pve-devel] [PATCH qemu 2/2] backup: trim heap after finishing Fiona Ebner
2023-08-14 12:55   ` Wolfgang Bumiller [this message]
2023-08-16  9:59 ` [pve-devel] applied-series: [PATCH qemu 1/2] refresh patch context Fiona Ebner
