From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>,
Roland <devzero@web.de>, PVE User List <pve-user@pve.proxmox.com>
Subject: Re: [PVE-User] offline VM migration node1->node2 with local storage
Date: Fri, 26 Mar 2021 17:15:26 +0100 (CET)
Message-ID: <386108369.821.1616775326688@webmail.proxmox.com>
> Roland <devzero@web.de> wrote on 26.03.2021 at 16:29:
>
>
> Hello,
>
> to pick up this older one:
>
> >>On 2/16/20 11:28 AM, Roland @web.de wrote:
> >> why do i need to have the same local storage name when migrating a vm
> >> from node1 to node2 in a dual-node cluster with local disks ?
> >>
> >> i'm curious that migration is possible in online state (which is a much
> >> more complex/challenging task) without a problem, but offline i get
> >> "storage is not available on selected target" (because there are
> >> different zfs pools on both machines)
> >This is because offline and online migration use two very different
> >mechanisms.
> >AFAIK Qemu NBD is used for online migration and ZFS send->recv is used
> >for offline migration.
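
(just to illustrate the offline mechanism for zvol-backed disks: under the
hood it boils down to a plain zfs send/receive pipeline, roughly like the
sketch below. pool, dataset and node names are made up, this is not a
literal copy of what the migration code runs.)

# create a snapshot of the disk zvol on the source node
zfs snapshot rpool/data/vm-100-disk-0@__migration__
# stream it to the target node and receive it into the pool there
zfs send rpool/data/vm-100-disk-0@__migration__ | \
    ssh root@node2 zfs receive rpool/data/vm-100-disk-0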
>
> i had a closer look at offline migration, and apparently zfs send->recv is only
> used with zvols, the default for VMs on ZFS.
> for normal (qcow/raw...) files on any filesystem (even zfs), pvesm export/import
> is used instead.
>
> this works straightforwardly, so apparently what's missing is the appropriate
> logic inside proxmox, including parameterization in the webgui
> (and probably error handling etc.) !?
>
> for example, on the target system i can open a "receiver" like this:
> # pvesm import ${TARGETDS}:100/vm-100-disk-0.qcow2 qcow2+size tcp://10.16.37.0/24 -with-snapshots 1 -allow-rename 1
>
> where on the source i can send the data like this:
> # /sbin/pvesm export ${SOURCEDS}:100/vm-100-disk-0.qcow2 qcow2+size - -with-snapshots 1|mbuffer -O 10.16.37.55:60000
>
> so apparently what's needed already exists at the base level...
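
(and for a one-shot transfer you could also skip the mbuffer listener and pipe
export straight into import over ssh - a rough sketch only, with the same
placeholder storage names as above, and assuming your pvesm version accepts
'-' as the import filename to read the stream from stdin; check
"pvesm help import" before relying on it:)

# run on the source node; streams the disk to node2 in one go
pvesm export ${SOURCEDS}:100/vm-100-disk-0.qcow2 qcow2+size - -with-snapshots 1 | \
    ssh root@node2 pvesm import ${TARGETDS}:100/vm-100-disk-0.qcow2 qcow2+size - -with-snapshots 1 -allow-rename 1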
>
> >> i guess there is no real technical hurdle, it just needs to get
> >> implemented appropriately !?
> >There is a patch in the works to make different target storages possible
> >for offline migration.
>
> has there been any progress on this in the meantime ?

for compatible storages and setups (e.g., snapshots/replication impose further
restrictions, since those are hard to carry across different storage
types/formats), --targetstorage should allow both live and offline migration
and switching storages in one go.

you can provide either a single target storage, or mappings of source to
target storages, or a combination (in which case the single storage is used as
fallback for storages without an explicit mapping).
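
as a rough illustration (storage and node names are made up, and the exact
option syntax might differ on your version - see "qm help migrate"):

# map all local disks to a single storage on the target node:
qm migrate 100 node2 --targetstorage other-zfs

# or give explicit source:target mappings, plus a single storage ID that acts
# as fallback for anything without an explicit mapping:
qm migrate 100 node2 --targetstorage "zfs-a:zfs-b,local:local,other-zfs"

# for a live migration with local disks, combine it with --online and
# --with-local-disks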