public inbox for pve-devel@lists.proxmox.com
From: Dietmar Maurer <dietmar@proxmox.com>
To: Jonathan Nicklin <jnicklin@blockbridge.com>
Cc: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [RFC qemu/storage/qemu-server/container/manager 00/23] backup provider API
Date: Sun, 28 Jul 2024 16:58:38 +0200 (CEST)
Message-ID: <1007402234.7276.1722178718125@webmail.proxmox.com>
In-Reply-To: <1C86CC96-2C9C-466A-A2A9-FC95906C098E@blockbridge.com>

> In hyper-converged deployments, the node performing the backup is sourcing ((nodes-1)/nodes)*bytes of backup data (i.e., ingress traffic) and then sending 1*bytes to PBS (i.e., egress traffic). If PBS were to pull the data from the nodes directly, the maximum load on any one host would be (1/nodes)*bytes of egress traffic only... that's a considerable improvement!

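To put rough numbers on that (assuming the data is spread evenly across the nodes and ignoring replication details): with 3 nodes and a 300 GiB disk, the node running the backup reads about 200 GiB from its peers over the ceph network and then sends the full 300 GiB to PBS, whereas a direct pull from each node would mean only about 100 GiB of egress per node.
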
I guess it would be possible to write a tool like proxmox-backup-client that pulls ceph backups directly from the PBS side. Or to extend the backup protocol to allow direct storage access. But this is a considerable amount of development, and it needs much more configuration/setup than the current approach. That said, patches are always welcome...

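Just to sketch what such a pull-style tool could look like, here is a rough illustration that only glues together the existing rbd and proxmox-backup-client CLIs. Pool, image, repository and IDs are placeholders, authentication (e.g. via PBS_PASSWORD) and error handling are left out, and this is not meant as the protocol extension itself:

#!/usr/bin/env python3
# Illustrative pull-style helper meant to run on (or next to) the PBS host:
# snapshot an RBD image, map the snapshot read-only, and feed the resulting
# block device to proxmox-backup-client. All names below are placeholders.
import subprocess

POOL = "rbd"                            # hypothetical pool name
IMAGE = "vm-100-disk-0"                 # hypothetical RBD image
SNAP = f"{IMAGE}@backup-tmp"            # temporary snapshot for a stable view
REPO = "backup@pbs@pbs.example:store1"  # hypothetical PBS repository

def run(*cmd):
    # Run a command, fail loudly on errors, and return its stdout.
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

# 1. Snapshot the image so the data does not change while we read it.
run("rbd", "snap", "create", f"{POOL}/{SNAP}")
try:
    # 2. Map the snapshot read-only; rbd prints the /dev/rbdX device path.
    dev = run("rbd", "map", "--read-only", f"{POOL}/{SNAP}")
    try:
        # 3. Back up the mapped device as a fixed image archive. The data is
        #    read from the ceph cluster here, instead of being routed through
        #    the node that runs the guest.
        run("proxmox-backup-client", "backup", f"{IMAGE}.img:{dev}",
            "--repository", REPO, "--backup-type", "vm", "--backup-id", "100")
    finally:
        run("rbd", "unmap", dev)
finally:
    run("rbd", "snap", "rm", f"{POOL}/{SNAP}")
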
Also, it is not clear to me how we could still implement a generic "backup provider API" if we add such storage-specific optimizations.

And yes, network traffic would be reduced. But IMHO it is easier to add a dedicated network card for the backup server (if the network is the limiting factor). With this setup, the maximum load on the ceph network is (1/nodes)*bytes of egress traffic per node, and the backup traffic goes over the dedicated backup net.

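Taking the same rough 3-node/300 GiB illustration as above: each peer node still sends about 100 GiB over the ceph network, and the backup node still reads about 200 GiB of ingress there, but the 300 GiB going to PBS leaves via the dedicated backup NIC and no longer competes with ceph traffic.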

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Thread overview: 11+ messages
2024-07-26 19:47 Jonathan Nicklin via pve-devel
2024-07-27 15:20 ` Dietmar Maurer
2024-07-27 20:36   ` Jonathan Nicklin via pve-devel
     [not found]   ` <E6295C3B-9E33-47C2-BC0E-9CEC701A2716@blockbridge.com>
2024-07-28  6:46     ` Dietmar Maurer
2024-07-28 13:54       ` Jonathan Nicklin via pve-devel
     [not found]       ` <1C86CC96-2C9C-466A-A2A9-FC95906C098E@blockbridge.com>
2024-07-28 14:58         ` Dietmar Maurer [this message]
2024-07-28  7:55     ` Dietmar Maurer
2024-07-28 14:12       ` Jonathan Nicklin via pve-devel
2024-07-29  8:15 ` Fiona Ebner
2024-07-29 21:29   ` Jonathan Nicklin via pve-devel
  -- strict thread matches above, loose matches on Subject: below --
2024-07-23  9:56 Fiona Ebner
