public inbox for pve-devel@lists.proxmox.com
From: Fiona Ebner <f.ebner@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
	Aaron Lauterer <a.lauterer@proxmox.com>
Subject: Re: [pve-devel] [PATCH v5 qemu-server 1/10] qemuserver: foreach_volid: include pending volumes
Date: Mon, 19 Jun 2023 14:20:59 +0200	[thread overview]
Message-ID: <b651b505-00fc-b634-9db1-542108e43d24@proxmox.com> (raw)
In-Reply-To: <20230619092937.604628-2-a.lauterer@proxmox.com>

Am 19.06.23 um 11:29 schrieb Aaron Lauterer:
>  
> @@ -4876,11 +4876,13 @@ sub foreach_volid {
>  	$volhash->{$volid}->{shared} = 1 if $drive->{shared};
>  
>  	$volhash->{$volid}->{referenced_in_config} //= 0;
> -	$volhash->{$volid}->{referenced_in_config} = 1 if !defined($snapname);
> +	$volhash->{$volid}->{referenced_in_config} = 1 if !defined($snapname) && !defined($pending);

Nit: I would have made $pending behave like a boolean, i.e. checked for
$pending rather than defined($pending). $snapname is a string, so there
a caller wouldn't accidentally pass in an explicit 0.

>  
>  	$volhash->{$volid}->{referenced_in_snapshot}->{$snapname} = 1
>  	    if defined($snapname);
>  
> +	$volhash->{$volid}->{referenced_in_pending} = 1 if defined($pending);
> +

Same.
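To illustrate the nit with a minimal sketch (the subroutine names here are hypothetical, not from qemu-server): defined($pending) treats an explicit 0 as "pending", while a plain truthiness check does not.

```perl
use strict;
use warnings;

# Hypothetical helpers mirroring the two styles under discussion.
# With defined(), an explicit 0 still counts as "pending":
sub is_pending_by_defined {
    my ($pending) = @_;
    return defined($pending) ? 1 : 0;
}

# With a boolean check, 0 and undef both mean "not pending":
sub is_pending_by_truth {
    my ($pending) = @_;
    return $pending ? 1 : 0;
}

print is_pending_by_defined(0), "\n"; # 1 - pending branch taken by mistake
print is_pending_by_truth(0), "\n";   # 0 - behaves like a boolean flag
print is_pending_by_truth(1), "\n";   # 1
```

For a string parameter like $snapname, defined() is the natural test, since any non-empty snapshot name is both defined and true; for a flag, the boolean check is more forgiving of callers passing 0.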




Thread overview: 16+ messages
2023-06-19  9:29 [pve-devel] [PATCH v5 qemu-server 0/7] migration: don't scan all storages, fail on aliases Aaron Lauterer
2023-06-19  9:29 ` [pve-devel] [PATCH v5 qemu-server 1/10] qemuserver: foreach_volid: include pending volumes Aaron Lauterer
2023-06-19 12:20   ` Fiona Ebner [this message]
2023-06-19  9:29 ` [pve-devel] [PATCH v5 qemu-server 2/10] qemuserver: foreach_volid: always include pending disks Aaron Lauterer
2023-06-19  9:29 ` [pve-devel] [PATCH v5 qemu-server 3/10] migration: only migrate disks used by the guest Aaron Lauterer
2023-06-19  9:29 ` [pve-devel] [PATCH v5 qemu-server 4/10] qemuserver: migration: test_volid: change attr name and ref handling Aaron Lauterer
2023-06-19  9:29 ` [pve-devel] [PATCH v5 qemu-server 5/10] tests: add migration test for pending disk Aaron Lauterer
2023-06-19  9:29 ` [pve-devel] [PATCH v5 qemu-server 6/10] migration: fail when aliased volume is detected Aaron Lauterer
2023-06-19 12:21   ` Fiona Ebner
2023-06-19  9:29 ` [pve-devel] [PATCH v5 qemu-server 7/10] tests: add migration alias check Aaron Lauterer
2023-06-19  9:29 ` [pve-devel] [PATCH v5 container 8/10] migration: only migrate volumes used by the guest Aaron Lauterer
2023-06-19  9:29 ` [pve-devel] [PATCH v5 container 9/10] migration: fail when aliased volume is detected Aaron Lauterer
2023-06-19 12:21   ` Fiona Ebner
2023-06-19  9:29 ` [pve-devel] [PATCH v5 docs 10/10] storage: add hint to avoid storage aliasing Aaron Lauterer
2023-06-19 12:21 ` [pve-devel] [PATCH v5 qemu-server 0/7] migration: don't scan all storages, fail on aliases Fiona Ebner
2023-06-21 10:53 ` [pve-devel] applied-series: " Fiona Ebner
