From: Adam Kalisz via pve-devel <pve-devel@lists.proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Cc: Adam Kalisz
Date: Thu, 03 Jul 2025 16:27:35 +0200
Subject: Re: [pve-devel] Discussion of major PBS restore speedup in proxmox-backup-qemu
References: <1842547039.6962.1750750098094@webmail.proxmox.com>
 <644731967.7191.1750761799812@webmail.proxmox.com>
Hi,

On Thu, 2025-07-03 at 10:57 +0200, Dominik Csapak wrote:
> Hi,
>
> On 7/3/25 10:29, Adam Kalisz via pve-devel wrote:
> > Hi,
> >
> > On Friday I submitted the patch with a slight edit to allow setting
> > the number of threads from an environment variable.
>
> Yes, we saw, thanks for tackling this.
>
> > On Tue, 2025-06-24 at 12:43 +0200, Fabian Grünbichler wrote:
> > > > Adam Kalisz wrote on 24.06.2025 at 12:22 CEST:
> > > > Hi Fabian,
> > > CCing the list again, assuming it got dropped by accident.
> > >
> > > > the CPU usage is higher, I see about 400% for the restore
> > > > process. I didn't investigate the original much because it's
> > > > unbearably slow.
> > > >
> > > > Yes, having configurable CONCURRENT_REQUESTS and
> > > > max_blocking_threads would be great. However, we would need to
> > > > wire it up all the way to qmrestore or similar, or ensure it is
> > > > read from some environment variables. I didn't feel confident
> > > > introducing this kind of infrastructure as a first-time
> > > > contribution.
> > > we can guide you if you want, but it's also possible to follow up
> > > on our end with that as part of applying the change.
> > That would be great; it shouldn't be too much work for somebody more
> > familiar with the project structure and where everything needs to go.
>
> Just to clarify, it's OK (and preferred?) for you if we continue
> working with this patch? In that case I'd take a swing at it.

Yes, please do. If it improves performance/efficiency further, why not?

> > > > The writer to disk is still single-threaded, so a CPU that can
> > > > ramp up a single core to a high frequency/IPC will usually do
> > > > better on the benchmarks.
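For illustration, the environment-variable override discussed above could look roughly like the following sketch. The variable name PBS_RESTORE_THREADS, the default value, and the helper functions are assumptions for the example, not what the submitted patch actually uses:

```rust
use std::env;

// Hypothetical sketch only: parse a thread-count override from an
// environment variable, falling back to a caller-supplied default.
// The variable name and default are made up for illustration.
fn parse_threads(raw: Option<String>, default: usize) -> usize {
    raw.and_then(|v| v.parse::<usize>().ok())
        .filter(|&n| n > 0) // ignore nonsensical values like 0
        .unwrap_or(default)
}

fn restore_threads(default: usize) -> usize {
    parse_threads(env::var("PBS_RESTORE_THREADS").ok(), default)
}

fn main() {
    println!("restore threads: {}", restore_threads(4));
}
```

Falling back silently on unparsable input keeps a typo in the variable from aborting a restore; logging a warning instead would also be reasonable.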
> > > I think that limitation is no longer there on the QEMU side
> > > nowadays, but it would likely require some more changes to
> > > actually make use of multiple threads submitting IO.
> > The storage writing seemed to be less of a bottleneck than the
> > fetching of chunks. It seems to me there is still a bottleneck in
> > the network part, because I haven't seen an instance with a
> > substantially higher speed than 1.1 GBps.
>
> I guess this largely depends on the actual storage and network
> config, e.g. if the target storage IO depth is the bottleneck,
> multiple writers will speed that up too.

That's possible, but feeding a single writer thread better would most
likely lead to a big speed improvement too.

> > Perhaps we could have a discussion about the backup, restore and
> > synchronization speeds, and strategies for debugging and improving
> > the situation, after we have taken the intermediate step of
> > improving the restore speed as proposed, to gather more feedback
> > from the field?
>
> I'd at least like to take a very short glance at how hard it would
> be to add multiple writers to the image before deciding. If it's not
> trivial, then IMHO yes, we can increase the fetching threads for now.

Sure, please have at it. I tried to make both the fetching and the
writing concurrent, but with my limited knowledge of Rust -> C interop
I ended up in a corner trying to convince the borrow checker that I
would not overwrite data. Fortunately, the fetch concurrency was the
bigger bottleneck anyway.

> Though I have to look into how we'd want to limit/configure that from
> the outside. E.g. a valid way to view that would maybe be to limit
> the threads from exceeding what the VM config says + some extra?
>
> (have to think about that)

Yes, the CPU count from the VM config might be a great default.
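As an aside, the many-fetchers / single-writer split discussed above can be sketched with plain std threads and a channel. This is illustrative only — the real proxmox-backup-qemu code is async Rust, and the chunk fetch below is a stand-in:

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative sketch, not the actual proxmox-backup-qemu code: n
// fetcher threads produce (offset, chunk) pairs concurrently, while a
// single writer (here: the calling thread) drains the channel. Only
// one thread ever touches the output, which sidesteps the aliasing
// problems the borrow checker raises when several threads write the
// image at once.
fn restore_concurrently(n: u64) -> usize {
    let (tx, rx) = mpsc::channel::<(u64, Vec<u8>)>();
    let fetchers: Vec<_> = (0..n)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || {
                // stand-in for fetching one chunk from the PBS server
                tx.send((id * 4096, vec![id as u8; 4096])).unwrap();
            })
        })
        .collect();
    drop(tx); // close the channel so the writer loop terminates

    let mut written = 0;
    for (_offset, chunk) in rx {
        written += chunk.len(); // stand-in for the single write path
    }
    for f in fetchers {
        f.join().unwrap();
    }
    written
}

fn main() {
    println!("restored {} bytes", restore_concurrently(4));
}
```

The channel acts as the handover point: fetchers never see the output buffer, and the writer never blocks a fetcher except through channel backpressure (a bounded `sync_channel` would make that cap explicit).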
However, most of the time the CPU is blocking on IO and could do other
work, so having an option to configure something else might be
suitable, e.g. a different setting for the case of a critical
recovery.

>
> Thanks
>
> > > Fabian
> > Adam
>
> Best Regards
> Dominik

Thanks / Danke
Adam

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel