Date: Wed, 15 Jul 2020 08:51:41 +0200
From: Fabian Grünbichler
To: Proxmox Backup Server development discussion
Subject: Re: [pbs-devel] [PATCH qemu] PVE: handle PBS write callback with big blocks correctly
In-Reply-To: <20200714131717.8494-1-s.reiter@proxmox.com>
Message-Id: <1594795230.dwj4hlyxw4.astroid@nora.none>

On July 14, 2020 3:17 pm, Stefan Reiter wrote:
> Under certain conditions QEMU will push more than the given blocksize
> into the callback at once. Handle it like VMA does, by iterating the
> data in PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE (or smaller, for last one)
> sized blocks.
>
> Signed-off-by: Stefan Reiter
> ---
>
> As briefly tested by Fabian, it seems to fix the original issue.

Tested, and it fixes the issue (with a patched-in smaller fixed chunk size
plus I/O pressure from within the guest), but it only works for the
currently hard-coded case of chunk_size == PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE.

Given that we likely only trigger this when/if we make the chunk size
configurable by the caller, we either need to handle this inside
libproxmox-backup-qemu (where we have the configured chunk size
available), or expose the configured chunk size to QEMU throughout.
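Just to sketch what I mean by the second option: assuming we exposed the
configured chunk size via some accessor (proxmox_backup_co_get_chunk_size()
below is purely hypothetical, it does not exist in libproxmox-backup-qemu
today), the loop could use that value instead of the constant:

    /* hypothetical accessor, stands in for however we end up exposing
     * the chunk size configured at backup setup time */
    uint64_t chunk_size = proxmox_backup_co_get_chunk_size(backup_state.pbs);
    uint64_t transferred = 0;
    while (transferred < size) {
        uint64_t left = size - transferred;
        uint64_t to_transfer = left < chunk_size ? left : chunk_size;

        pbs_res = proxmox_backup_co_write_data(backup_state.pbs, di->dev_id,
            buf ? buf + transferred : NULL, start + transferred, to_transfer, &local_err);
        /* error handling and 'reused' accounting as in the patch below */
        transferred += to_transfer;
    }

The first option would instead move the same splitting loop into
libproxmox-backup-qemu, so QEMU would not need to know the chunk size at all.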
We can still apply it on the off-chance that it is also triggerable with
the default 4M chunk size (I haven't been able to yet), but then I'd add a
comment here to make it clear that this constant is just because we
don't have the actual one available (yet).

>
>  pve-backup.c | 28 ++++++++++++++++++++--------
>  1 file changed, 20 insertions(+), 8 deletions(-)
>
> diff --git a/pve-backup.c b/pve-backup.c
> index 77eb475563..4d423611e1 100644
> --- a/pve-backup.c
> +++ b/pve-backup.c
> @@ -147,17 +147,29 @@ pvebackup_co_dump_pbs_cb(
>          return -1;
>      }
>  
> -    pbs_res = proxmox_backup_co_write_data(backup_state.pbs, di->dev_id, buf, start, size, &local_err);
> -    qemu_co_mutex_unlock(&backup_state.dump_callback_mutex);
> +    uint64_t transferred = 0;
> +    uint64_t reused = 0;
> +    while (transferred < size) {
> +        uint64_t left = size - transferred;
> +        uint64_t to_transfer = left < PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE ?
> +            left : PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE;
>  
> -    if (pbs_res < 0) {
> -        pvebackup_propagate_error(local_err);
> -        return pbs_res;
> -    } else {
> -        size_t reused = (pbs_res == 0) ? size : 0;
> -        pvebackup_add_transfered_bytes(size, !buf ? size : 0, reused);
> +        pbs_res = proxmox_backup_co_write_data(backup_state.pbs, di->dev_id,
> +            buf ? buf + transferred : NULL, start + transferred, to_transfer, &local_err);
> +        transferred += to_transfer;
> +
> +        if (pbs_res < 0) {
> +            pvebackup_propagate_error(local_err);
> +            qemu_co_mutex_unlock(&backup_state.dump_callback_mutex);
> +            return pbs_res;
> +        }
> +
> +        reused += pbs_res == 0 ? to_transfer : 0;
>      }
>  
> +    qemu_co_mutex_unlock(&backup_state.dump_callback_mutex);
> +    pvebackup_add_transfered_bytes(size, !buf ? size : 0, reused);
> +
>      return size;
>  }
>  
> -- 
> 2.20.1