From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: [pve-devel] applied: [RFC v2 qemu-server 4/5] adapt to new storage_migrate activation behavior
Date: Tue, 24 Nov 2020 16:29:17 +0100
Message-ID: <1606231728.g2jrh59drg.astroid@nora.none>
In-Reply-To: <20201106143059.22047-4-f.ebner@proxmox.com>
On November 6, 2020 3:30 pm, Fabian Ebner wrote:
> Offline migrated volumes are now activated within storage_migrate.
> Online migrated volumes can be assumed to be already active.
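For context, the activation itself moved into storage_migrate with patch
1/5 on the pve-storage side. From a caller's point of view the change
amounts to roughly the following sketch (variable names like
$volumes_to_migrate are illustrative, not taken from the actual code):

    # before: callers such as prepare() had to activate non-shared,
    # offline-migrated volumes themselves before exporting them
    PVE::Storage::activate_volumes($storecfg, $volumes_to_migrate);
    PVE::Storage::storage_migrate($storecfg, $volid, $ssh_info, $targetsid, $opts, $logfunc);

    # after: storage_migrate() activates the volume on the source node
    # itself, so the explicit activate_volumes() call can be dropped
    PVE::Storage::storage_migrate($storecfg, $volid, $ssh_info, $targetsid, $opts, $logfunc);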
>
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
> ---
>
> dependency bump needed
>
> Sent as RFC, because I'm not completely sure if this is fine here.
> Is the assumption about online volumes correct or is there some weird
> edge case I'm missing?
> I only found run_replication as a potential place that might need active
> local volumes, but that also uses storage_migrate in the end.
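Regarding the online case: for an online migration the VM is running on
the source node, and vm_start already activated all of its volumes there,
along the lines of this simplified sketch (not the verbatim qemu-server
code):

    my $vollist = PVE::QemuServer::get_vm_volumes($conf);
    PVE::Storage::activate_volumes($storecfg, $vollist);

They only get deactivated again once the VM stops or has been migrated
away, so they can be expected to still be active at this point.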
>
>  PVE/QemuMigrate.pm | 8 --------
>  1 file changed, 8 deletions(-)
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 2f4eec3..f2c2b07 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -251,7 +251,6 @@ sub prepare {
>  
>      my $vollist = PVE::QemuServer::get_vm_volumes($conf);
>  
> -    my $need_activate = [];
>      foreach my $volid (@$vollist) {
>          my ($sid, $volname) = PVE::Storage::parse_volume_id($volid, 1);
>  
> @@ -266,16 +265,9 @@ sub prepare {
>              my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>              warn "Used shared storage '$sid' is not online on source node!\n"
>                  if !$plugin->check_connection($sid, $scfg);
> -        } else {
> -            # only activate if not shared
> -            next if ($volid =~ m/vm-\d+-cloudinit/);
> -            push @$need_activate, $volid;
>          }
>      }
>  
> -    # activate volumes
> -    PVE::Storage::activate_volumes($self->{storecfg}, $need_activate);
> -
>      # test ssh connection
>      my $cmd = [ @{$self->{rem_ssh}}, '/bin/true' ];
>      eval { $self->cmd_quiet($cmd); };
> --
> 2.20.1
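For reference, the volume loop that remains in prepare() after this
change only does the availability checks, roughly (reconstructed from
the context lines of the hunk above, surrounding code elided):

    foreach my $volid (@$vollist) {
        my ($sid, $volname) = PVE::Storage::parse_volume_id($volid, 1);

        # ... per-volume storage availability checks ...

        if ($scfg->{shared}) {
            my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
            warn "Used shared storage '$sid' is not online on source node!\n"
                if !$plugin->check_connection($sid, $scfg);
        }
    }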
Thread overview: 10+ messages
2020-11-06 14:30 [pve-devel] [PATCH v2 storage 1/5] fix #3030: always activate volumes in storage_migrate Fabian Ebner
2020-11-06 14:30 ` [pve-devel] [PATCH v2 container 2/5] adapt to new storage_migrate activation behavior Fabian Ebner
2020-11-10 18:29 ` [pve-devel] applied: " Thomas Lamprecht
2020-11-06 14:30 ` [pve-devel] [RFC v2 container 3/5] deactivate volumes after storage_migrate Fabian Ebner
2020-11-24 15:26 ` [pve-devel] applied: " Fabian Grünbichler
2020-11-06 14:30 ` [pve-devel] [RFC v2 qemu-server 4/5] adapt to new storage_migrate activation behavior Fabian Ebner
2020-11-24 15:29 ` Fabian Grünbichler [this message]
2020-11-06 14:30 ` [pve-devel] [RFC v2 qemu-server 5/5] deactivate volumes after storage_migrate Fabian Ebner
2020-11-24 15:29 ` [pve-devel] applied: " Fabian Grünbichler
2020-11-10 18:12 ` [pve-devel] applied: [PATCH v2 storage 1/5] fix #3030: always activate volumes in storage_migrate Thomas Lamprecht