From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [RFC container] alloc disk: fix #3970 avoid ambiguous rbd image path
Date: Mon, 04 Apr 2022 17:44:57 +0200
Message-ID: <1649086015.n05il0cmo5.astroid@nora.none>
In-Reply-To: <20220401152424.3811621-1-a.lauterer@proxmox.com>

On April 1, 2022 5:24 pm, Aaron Lauterer wrote:
> If two RBD storages use the same pool but connect to different
> clusters, we cannot tell which cluster a mapped RBD image belongs
> to. To avoid potential data loss, we need to verify that no other
> storage is configured that could have a volume mapped under the same
> path before we format anything.
> 
> The ambiguous mapping is in
> /dev/rbd/<pool>/<ns>/<image> where the namespace <ns> is optional.
> 
> Once we can tell the clusters apart in the mapping, we can remove these
> checks again.
> 
> See bug #3969 for more information on the root cause.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> RFC because I would like someone else to take a look at the logic and I
> wasn't sure how to format the grouping of the conditions in a way that
> is easy to read
> 
>  src/PVE/LXC.pm | 45 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 45 insertions(+)
> 
> diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
> index fe63087..b048ce0 100644
> --- a/src/PVE/LXC.pm
> +++ b/src/PVE/LXC.pm
> @@ -1970,6 +1970,51 @@ sub alloc_disk {
>      my $scfg = PVE::Storage::storage_config($storecfg, $storage);
>      # fixme: use better naming ct-$vmid-disk-X.raw?
>  
> +    # Check if another RBD storage with the same pool name but a different
> +    # cluster exists. If so, allocating a new volume is potentially
> +    # dangerous, because the RBD mapping exposes it under the ambiguous path
> +    # /dev/rbd/<pool>/<ns>/<image>. Without any information about which
> +    # cluster it belongs to, we cannot clearly determine which image we
> +    # access and might format an already used image. See
> +    # https://bugzilla.proxmox.com/show_bug.cgi?id=3969 and
> +    # https://bugzilla.proxmox.com/show_bug.cgi?id=3970
> +    # TODO: remove these checks once #3969 is fixed and we can clearly tell
> +    # which cluster an image belongs to
> +    if ($scfg->{type} eq 'rbd') {
> +	my $pool = $storecfg->{ids}->{$storage}->{pool};

not used anywhere?

> +	foreach my $stor  (keys %{$storecfg->{ids}}) {
> +	    next if $stor eq $storage;
> +
> +	    my $ccfg = PVE::Storage::storage_config($storecfg, $stor);

that's actually not needed if you are iterating over the keys ;) just
use

my $checked_scfg = $storecfg->{ids}->{$stor};

> +	    next if $ccfg->{type} ne 'rbd';
> +
> +	    if ($scfg->{pool} eq $ccfg->{pool}) {

why not simply

# different pools -> no clash possible
next if $scfg->{pool} ne $checked_scfg->{pool};

> +		if (
> +		    (
> +			defined($scfg->{monhost})
> +			&& defined($ccfg->{monhost})
> +			&& $scfg->{monhost} eq $ccfg->{monhost}
> +		    )
> +		    || (
> +			!defined($scfg->{monhost})
> +			&& !defined($ccfg->{monhost})
> +		    )
> +		) {
> +		    # both external ones same or both hyperconverged
> +		    next;

untested, but something like the following is probably more readable ;)
it could also be adapted to check for monhost overlap instead, if
desired (to catch storages A and B not having identical
strings/formatting/elements - if any single mon is listed for both, they
ARE the same cluster for all intents and purposes?)

my $safe_compare = sub {
  my ($key) = @_;
  my $cmp = sub { $_[0] eq $_[1] };
  return PVE::Tools::safe_compare($scfg->{$key}, $checked_scfg->{$key}, $cmp);
};

# internal and internal or external and external with identical monitors 
# => same cluster
next if $safe_compare->("monhost");

# different namespaces => no clash possible
next if !$safe_compare->("namespace");

die ..
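
put together, the whole check might read roughly like this (untested
sketch only, inside alloc_disk with $scfg, $storecfg and $storage from the
surrounding context; $same_value is a hypothetical local helper that spells
out the undef handling, and like the patch it assumes 'pool' is always set):

```perl
if ($scfg->{type} eq 'rbd') {
    foreach my $stor (keys %{$storecfg->{ids}}) {
        next if $stor eq $storage;

        my $checked_scfg = $storecfg->{ids}->{$stor};
        next if $checked_scfg->{type} ne 'rbd';

        # different pools -> no clash possible
        next if $scfg->{pool} ne $checked_scfg->{pool};

        # hypothetical helper: treats two missing values as equal
        my $same_value = sub {
            my ($key) = @_;
            my ($left, $right) = ($scfg->{$key}, $checked_scfg->{$key});
            return 1 if !defined($left) && !defined($right);
            return 0 if !defined($left) || !defined($right);
            return $left eq $right;
        };

        # both hyperconverged, or both external with identical monitors
        # => same cluster, the mapped path is unambiguous
        next if $same_value->('monhost');

        # different clusters, but different namespaces => no path clash
        next if !$same_value->('namespace');

        die "Cannot determine which Ceph cluster the volume mapping belongs to. Abort!\n";
    }
}
```

this mirrors the logic of the patch one next per case, instead of the
nested conditionals.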

> +		}
> +		# different cluster here
> +		# we are ok if namespaces are not the same or only one storage has one
> +		if (defined($scfg->{namespace}) && defined($ccfg->{namespace})) {
> +		    next if $scfg->{namespace} ne $ccfg->{namespace};
> +		} elsif (defined($scfg->{namespace}) || defined($ccfg->{namespace})) {
> +		    next;
> +		}
> +		die "Cannot determine which Ceph cluster the volume mapping belongs to. Abort!\n";
> +	    }
> +	}
> +    }
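
the monhost-overlap variant mentioned above could be sketched like this
(hypothetical helper, not part of the patch; assumes monhost is a
comma/semicolon-separated list of monitor addresses):

```perl
use strict;
use warnings;

# hypothetical helper (not in the patch): two monhost strings belong to the
# same cluster if any single monitor address appears in both lists
sub monhost_overlap {
    my ($left, $right) = @_;
    return 0 if !defined($left) || !defined($right);
    my %mons = map { $_ => 1 } split(/[,;\s]+/, $left);
    return scalar(grep { $mons{$_} } split(/[,;\s]+/, $right));
}

# shared mon 10.1.1.2 -> treated as the same cluster
print monhost_overlap('10.1.1.1,10.1.1.2', '10.1.1.2;10.1.1.3')
    ? "same\n" : "different\n"; # prints "same"
```

that would also catch configs where the two storages list the same mons in
a different order or formatting.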
> +
>      eval {
>  	my $do_format = 0;
>  	if ($scfg->{content}->{rootdir} && $scfg->{path}) {
> -- 
> 2.30.2
Thread overview: 5+ messages
2022-04-01 15:24 Aaron Lauterer
2022-04-01 15:24 ` [pve-devel] [RFC qemu-server] clone disk: fix #3970 catch same source and destination Aaron Lauterer
2022-04-04 15:26   ` Fabian Grünbichler
2022-04-05  7:28     ` Aaron Lauterer
2022-04-04 15:44 ` Fabian Grünbichler [this message]
