From: Ivaylo Markov via pve-devel <pve-devel@lists.proxmox.com>
To: Max Carrara <m.carrara@proxmox.com>, pve-devel@lists.proxmox.com
Cc: Ivaylo Markov <ivaylo.markov@storpool.com>
Subject: Re: [pve-devel] Proposal: support for atomic snapshot of all VM disks at once
Date: Fri, 22 Nov 2024 17:58:16 +0200
Message-ID: <mailman.593.1732291106.391.pve-devel@lists.proxmox.com>
In-Reply-To: <D5RXUIH451HX.QRJI4MEUZWIK@proxmox.com>

Hi,

It's clear now :) My current implementation only performs group 
snapshots if the storage supports them, and falls back to regular 
sequential snapshots otherwise. Given the considerations around 
application/crash-consistency, it would be best to let the user pick 
(with a default set by the storage plugin/backend?).
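
As a rough illustration of that dispatch, here is a minimal sketch. It is
not the actual PVE storage plugin interface (which is Perl); every name in
it (StoragePlugin, supports_group_snapshot, default_snapshot_mode,
group_snapshot, snapshot_disk) is a hypothetical placeholder, and it is
written in Python purely for readability:

# Hypothetical sketch only -- not the PVE storage plugin API.
from typing import Optional, Protocol, Sequence

class StoragePlugin(Protocol):
    def supports_group_snapshot(self) -> bool: ...
    def default_snapshot_mode(self) -> str: ...  # "group" or "sequential"
    def group_snapshot(self, disks: Sequence[str], name: str) -> None: ...
    def snapshot_disk(self, disk: str, name: str) -> None: ...

def snapshot_vm(plugin: StoragePlugin, disks: Sequence[str], name: str,
                mode: Optional[str] = None) -> None:
    # Default comes from the storage plugin/backend, overridable by the user.
    mode = mode or plugin.default_snapshot_mode()
    if mode == "group" and plugin.supports_group_snapshot():
        # Atomic, crash-consistent snapshot covering every disk at once.
        plugin.group_snapshot(disks, name)
    else:
        # Regular behaviour: sequential per-disk snapshots.
        for disk in disks:
            plugin.snapshot_disk(disk, name)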

On 21/11/2024 16:49, Max Carrara wrote:
> On Wed Nov 20, 2024 at 5:10 PM CET, Ivaylo Markov wrote:
>> Hello,
>>
>> I've been caught up in other things and it's been a while, but as I was
>> collating and testing my proposed changes, I came across this again and
>> thought I'd clarify something.
>>
>> On 08/10/2024 13:50, Max Carrara wrote:
>>>> I was directed here to discuss this proposal and my implementation idea
>>>> after an initial post in Bugzilla[1]. The goal is to give storage
>>>> plugins the option to perform atomic crash-consistent snapshots of the
>>>> virtual disks associated with a virtual machine where the backend
>>>> supports it  (e.g. Ceph, StorPool, and ZFS) without affecting those
>>>> without such a feature.
>>> Since you mentioned that the backends without such a feature won't be
>>> affected, will the disks of the storage types that *do* support it still
>>> be addressable individually? As in: Would it be possible to run both
>>> group snapshots and individual snapshots on a VM's disks? (I'm assuming
>>> that they will, but still wanted to ask.)
>>>
>> Do you mean this as "you can snapshot the whole VM (all disks) at
>> once *or* each disk individually", or "when making a snapshot of the
>> entire VM, the user can choose between individual/group snapshots of
>> all drives"? What I have so far matches the first description, but I
>> just realized you might have meant the second one, so I thought I'd
>> ask and potentially create more work for myself :)
> Hey!
>
> No worries, always feel free to ask :)
>
> I should have phrased myself better there, so I'll try to be as specific
> as possible here. Let me define some terms first:
>
> - regular snapshots: This is how things work currently; a snapshot of
>    each disk of the VM is made after the previous one finishes. In other
>    words, disk snapshots are made *sequentially*. Works for every storage
>    type.
>
> - group snapshots: What you're implementing -- essentially creating a
>    snapshot of *all* disks of a VM at once, atomically. Snapshots of this
>    type are crash-consistent. Only works for storage types that support
>    it.
>
> What I meant to ask was whether *both* options will remain available to
> users. For example, if I click on a VM, then go to Snapshots -> Take
> Snapshot, will I have the option to say whether the snapshot should be a
> group snapshot or a regular snapshot?
>
> I guess calling it "individual" was a bit ambiguous here; I didn't mean
> that I could e.g. snapshot just one disk only (if that's what confused
> you). When performing a snapshot of a VM, a snapshot for *all* disks of
> the VM is *always* made, regardless of whether a regular snapshot or a
> group snapshot is performed on a VM.
>
> I hope all that isn't too long -- just wanted to make sure everything's
> clear! :)
>
>
-- 
Ivaylo Markov
Quality & Automation Engineer
StorPool Storage
https://www.storpool.com


