From: Jonathan Nicklin via pve-devel <pve-devel@lists.proxmox.com>
To: Dietmar Maurer <dietmar@proxmox.com>
Cc: Jonathan Nicklin <jnicklin@blockbridge.com>,
Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [RFC qemu/storage/qemu-server/container/manager 00/23] backup provider API
Date: Sun, 28 Jul 2024 09:54:48 -0400 [thread overview]
Message-ID: <mailman.37.1722286187.302.pve-devel@lists.proxmox.com> (raw)
In-Reply-To: <392733040.7156.1722149167597@webmail.proxmox.com>
In hyper-converged deployments, the node performing the backup sources ((nodes-1)/nodes) * bytes of backup data over the network (i.e., ingress traffic), assuming data is spread evenly across the nodes, and then sends 1 * bytes to PBS (i.e., egress traffic). If PBS were to pull the data from the nodes directly, the maximum load on any one host would be (1/nodes) * bytes of egress traffic only... that's a considerable improvement!
Further, nodes that don't host OSDs would be completely quiet. So, in the case of non-converged Ceph, the hypervisor nodes would not need to participate in the backup flow at all.
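To make the arithmetic concrete, here is a back-of-envelope sketch of both traffic models (plain Python; the node counts and backup size are made-up examples, and even data distribution is assumed):

    # Per-node traffic for backing up `total_bytes` spread evenly
    # across `nodes` hyper-converged hosts.

    def push_model(total_bytes, nodes):
        """Today: the backup node pulls remote chunks, then pushes everything to PBS."""
        ingress = (nodes - 1) / nodes * total_bytes   # chunks read from other nodes
        egress = total_bytes                          # full backup sent to PBS
        return ingress, egress

    def pull_model(total_bytes, nodes):
        """Proposed: PBS reads each node's local share directly."""
        return 0.0, total_bytes / nodes               # no ingress; 1/nodes egress

    if __name__ == "__main__":
        tib = 1024 ** 4  # 1 TiB backup
        for n in (3, 5, 10):
            push_in, push_out = push_model(tib, n)
            _, pull_out = pull_model(tib, n)
            print(f"{n} nodes: push = {push_in / 1024**3:.0f} GiB in + "
                  f"{push_out / 1024**3:.0f} GiB out; "
                  f"pull = {pull_out / 1024**3:.0f} GiB out per node")

For 3 nodes and a 1 TiB backup, the push model puts ~683 GiB of ingress plus 1024 GiB of egress on the backup node, while the pull model caps every node at ~341 GiB of egress.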
> On Jul 28, 2024, at 2:46 AM, Dietmar Maurer <dietmar@proxmox.com> wrote:
>
>> Today, I believe the client is reading the data and pushing it to
>> PBS. In the case of CEPH, wouldn't this involve sourcing data from
>> multiple nodes and then sending it to PBS? Wouldn't it be more
>> efficient for PBS to read it directly from storage? In the case of
>> centralized storage, we'd like to eliminate the client load
>> completely, having PBS ingest incremental differences directly from
>> storage without passing through the client.
>
> But Ceph is not centralized storage. Instead, data is distributed among the nodes, so you always need to send some data over the network.
> There is no way to "read it directly from storage".
>
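As a thought experiment, here is a minimal sketch of what the pull model could look like at the RBD level, using the librbd Python bindings that ship with Ceph. The pool, image, and snapshot names are hypothetical, and this is not a description of how PBS works today; it only shows that changed extents can be read without relaying through the hypervisor:

    # Hypothetical pull-model sketch: read only the extents that changed
    # between two RBD snapshots, straight from the Ceph cluster.
    import rados
    import rbd

    def pull_increment(pool, image_name, from_snap, to_snap, sink):
        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(pool)
            with rbd.Image(ioctx, image_name, snapshot=to_snap) as image:
                def on_extent(offset, length, exists):
                    # exists=False marks a discarded/zeroed extent
                    data = image.read(offset, length) if exists else None
                    sink(offset, length, data)
                # Visit only extents that changed since `from_snap`
                image.diff_iterate(0, image.size(), from_snap, on_extent)
            ioctx.close()
        finally:
            cluster.shutdown()

    # Usage with made-up names:
    # pull_increment('rbd', 'vm-100-disk-0', 'backup-1', 'backup-2',
    #                lambda off, ln, data: print(off, ln, data is not None))

The reads still cross the network from the OSDs to wherever this runs, as you note; the point is only that the hypervisor no longer has to relay the data.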
Thread overview: 11+ messages
2024-07-26 19:47 Jonathan Nicklin via pve-devel
2024-07-27 15:20 ` Dietmar Maurer
2024-07-27 20:36 ` Jonathan Nicklin via pve-devel
[not found] ` <E6295C3B-9E33-47C2-BC0E-9CEC701A2716@blockbridge.com>
2024-07-28 6:46 ` Dietmar Maurer
2024-07-28 13:54 ` Jonathan Nicklin via pve-devel [this message]
[not found] ` <1C86CC96-2C9C-466A-A2A9-FC95906C098E@blockbridge.com>
2024-07-28 14:58 ` Dietmar Maurer
2024-07-28 7:55 ` Dietmar Maurer
2024-07-28 14:12 ` Jonathan Nicklin via pve-devel
2024-07-29 8:15 ` Fiona Ebner
2024-07-29 21:29 ` Jonathan Nicklin via pve-devel