public inbox for pve-user@lists.proxmox.com
* Re: [PVE-User] offline VM migration node1->node2 with local storage
@ 2021-03-26 15:29 Roland
  0 siblings, 0 replies; 3+ messages in thread
From: Roland @ 2021-03-26 15:29 UTC (permalink / raw)
  To: PVE User List

Hello,

To pick up this older thread:

> On 2/16/20 11:28 AM, Roland @web.de wrote:
>> Why do I need to have the same local storage name when migrating a VM
>> from node1 to node2 in a dual-node cluster with local disks?
>>
>> I'm curious that migration is possible in the online state (which is a much
>> more complex/challenging task) without a problem, but offline I get
>> "storage is not available on selected target" (because there are
>> different ZFS pools on both machines).
>
> This is because offline and online migration use two very different
> mechanisms.
> AFAIK, Qemu NBD is used for online migration and ZFS send->recv is used
> for offline migration.

I had a closer look at offline migration, and apparently zfs send->recv is only
used with ZVOLs, the default for VMs on ZFS.
For normal files (qcow2/raw/...) on any filesystem (even ZFS), pvesm
export/import is used.

This works in a straightforward way, so apparently what is missing is the
appropriate logic inside Proxmox, including parameterization in the web GUI
(and probably error handling, etc.)!?

For example, on the target system I can open a "receiver" like this:
# pvesm import ${TARGETDS}:100/vm-100-disk-0.qcow2 qcow2+size tcp://10.16.37.0/24 -with-snapshots 1 -allow-rename 1

while on the source I can send the data like this:
# /sbin/pvesm export ${SOURCEDS}:100/vm-100-disk-0.qcow2 qcow2+size - -with-snapshots 1|mbuffer -O 10.16.37.55:60000

So apparently, what's needed already exists at the base level...
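
For completeness, the two ends can also be chained over SSH in a single
pipeline. A sketch, assuming root SSH access from source to target and the
same placeholder storage/volume names as above:

# pvesm export ${SOURCEDS}:100/vm-100-disk-0.qcow2 qcow2+size - -with-snapshots 1 | ssh root@10.16.37.55 pvesm import ${TARGETDS}:100/vm-100-disk-0.qcow2 qcow2+size - -with-snapshots 1 -allow-rename 1

As far as I can tell, this is close to what the built-in offline migration
runs internally.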

>> I guess there is no real technical hurdle, it just needs to be
>> implemented appropriately!?
>
> There is a patch in the works to make different target storages possible
> for offline migration.

Has there been any progress on this in the meantime?

regards
Roland




* Re: [PVE-User] offline VM migration node1->node2 with local storage
  2021-03-26 16:15 Fabian Grünbichler
@ 2021-03-26 20:14 ` Roland privat
  0 siblings, 0 replies; 3+ messages in thread
From: Roland privat @ 2021-03-26 20:14 UTC (permalink / raw)
  To: Fabian Grünbichler; +Cc: Proxmox VE user list, PVE User List

Thanks, very nice feature!

I hope it will be available in the web GUI one day, though.

roland

> 
> On 26.03.2021 at 17:15, Fabian Grünbichler <f.gruenbichler@proxmox.com> wrote:
> 
> 
>> Roland <devzero@web.de> wrote on 26.03.2021 at 16:29:
>> 
>> 
>> Hello,
>> 
>> To pick up this older thread:
>> 
>>> On 2/16/20 11:28 AM, Roland @web.de wrote:
>>>> Why do I need to have the same local storage name when migrating a VM
>>>> from node1 to node2 in a dual-node cluster with local disks?
>>>>
>>>> I'm curious that migration is possible in the online state (which is a much
>>>> more complex/challenging task) without a problem, but offline I get
>>>> "storage is not available on selected target" (because there are
>>>> different ZFS pools on both machines).
>>>
>>> This is because offline and online migration use two very different
>>> mechanisms.
>>> AFAIK, Qemu NBD is used for online migration and ZFS send->recv is used
>>> for offline migration.
>> 
>> I had a closer look at offline migration, and apparently zfs send->recv is only
>> used with ZVOLs, the default for VMs on ZFS.
>> For normal files (qcow2/raw/...) on any filesystem (even ZFS), pvesm
>> export/import is used.
>> 
>> This works in a straightforward way, so apparently what is missing is the
>> appropriate logic inside Proxmox, including parameterization in the web GUI
>> (and probably error handling, etc.)!?
>> 
>> For example, on the target system I can open a "receiver" like this:
>> # pvesm import ${TARGETDS}:100/vm-100-disk-0.qcow2 qcow2+size tcp://10.16.37.0/24 -with-snapshots 1 -allow-rename 1
>> 
>> while on the source I can send the data like this:
>> # /sbin/pvesm export ${SOURCEDS}:100/vm-100-disk-0.qcow2 qcow2+size - -with-snapshots 1|mbuffer -O 10.16.37.55:60000
>> 
>> So apparently, what's needed already exists at the base level...
>> 
>>>> I guess there is no real technical hurdle, it just needs to be
>>>> implemented appropriately!?
>>>
>>> There is a patch in the works to make different target storages possible
>>> for offline migration.
>> 
>> Has there been any progress on this in the meantime?
> 
> For compatible storages and setups (e.g., snapshots/replication impose further
> restrictions, since those are hard to carry across different storage
> types/formats), --targetstorage should allow both live and offline migration
> to switch storages in one go. You can provide either a single target storage,
> or mappings of source to target storages, or a combination (in which case the
> single storage is used as a fallback for storages without an explicit
> mapping).
> 





* Re: [PVE-User] offline VM migration node1->node2 with local storage
@ 2021-03-26 16:15 Fabian Grünbichler
  2021-03-26 20:14 ` Roland privat
  0 siblings, 1 reply; 3+ messages in thread
From: Fabian Grünbichler @ 2021-03-26 16:15 UTC (permalink / raw)
  To: Proxmox VE user list, Roland, PVE User List


> Roland <devzero@web.de> wrote on 26.03.2021 at 16:29:
> 
>  
> Hello,
> 
> To pick up this older thread:
> 
>> On 2/16/20 11:28 AM, Roland @web.de wrote:
>>> Why do I need to have the same local storage name when migrating a VM
>>> from node1 to node2 in a dual-node cluster with local disks?
>>>
>>> I'm curious that migration is possible in the online state (which is a much
>>> more complex/challenging task) without a problem, but offline I get
>>> "storage is not available on selected target" (because there are
>>> different ZFS pools on both machines).
>>
>> This is because offline and online migration use two very different
>> mechanisms.
>> AFAIK, Qemu NBD is used for online migration and ZFS send->recv is used
>> for offline migration.
> 
> I had a closer look at offline migration, and apparently zfs send->recv is only
> used with ZVOLs, the default for VMs on ZFS.
> For normal files (qcow2/raw/...) on any filesystem (even ZFS), pvesm
> export/import is used.
> 
> This works in a straightforward way, so apparently what is missing is the
> appropriate logic inside Proxmox, including parameterization in the web GUI
> (and probably error handling, etc.)!?
> 
> For example, on the target system I can open a "receiver" like this:
> # pvesm import ${TARGETDS}:100/vm-100-disk-0.qcow2 qcow2+size tcp://10.16.37.0/24 -with-snapshots 1 -allow-rename 1
> 
> while on the source I can send the data like this:
> # /sbin/pvesm export ${SOURCEDS}:100/vm-100-disk-0.qcow2 qcow2+size - -with-snapshots 1|mbuffer -O 10.16.37.55:60000
> 
> So apparently, what's needed already exists at the base level...
> 
>>> I guess there is no real technical hurdle, it just needs to be
>>> implemented appropriately!?
>>
>> There is a patch in the works to make different target storages possible
>> for offline migration.
> 
> Has there been any progress on this in the meantime?

For compatible storages and setups (e.g., snapshots/replication impose further
restrictions, since those are hard to carry across different storage
types/formats), --targetstorage should allow both live and offline migration
to switch storages in one go. You can provide either a single target storage,
or mappings of source to target storages, or a combination (in which case the
single storage is used as a fallback for storages without an explicit
mapping).
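
A minimal sketch of how that looks on the CLI (assuming VM 100, a target node
named node2, and hypothetical storage IDs):

# qm migrate 100 node2 --targetstorage otherpool
# qm migrate 100 node2 --targetstorage srcpool1:dstpool1,srcpool2:dstpool2,fallbackpool

The first form maps all local disks to one target storage; the second uses
explicit source:target mappings, with the lone storage ID acting as the
fallback for anything unmapped. Add --online for a live migration.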





Thread overview: 3 messages
2021-03-26 15:29 [PVE-User] offline VM migration node1->node2 with local storage Roland
2021-03-26 16:15 Fabian Grünbichler
2021-03-26 20:14 ` Roland privat
