From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>,
	"aderumier@odiso.com" <aderumier@odiso.com>,
	"f.ebner@proxmox.com" <f.ebner@proxmox.com>
Subject: Re: [pve-devel] [PATCH v4 qemu-server 2/2] remote-migration: add target-cpu && target-reboot params
Date: Wed, 25 Oct 2023 16:01:30 +0000
Message-ID: <188c296857bc3ae42f0a5150770e8c3942ec74f0.camel@groupe-cyllene.com>
In-Reply-To: <8d06d2f6-b831-45b3-ac1b-2cc3f1721b85@proxmox.com>

>>Is it required for this series?
For this series, no. It only focuses on migrating to a remote cluster
with a different CPU, without too much downtime.
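
For context, usage with the new parameters would look something like
this (the endpoint, token and storage names are placeholders, not
taken from the patch):

  qm remote-migrate 1112 1112 \
    'host=target.example.com,apitoken=PVEAPIToken=root@pam!mytoken=<secret>,fingerprint=<fp>' \
    --target-bridge vmbr0 --target-storage targetstorage \
    --target-cpu x86-64-v2-AES --target-reboot 1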


>> Unused disks can just be migrated
>>offline via storage_migrate(), or? 

Currently, unused disks can't be migrated through the HTTP tunnel for
remote migration:

2023-10-25 17:51:38 ERROR: error - tunnel command
'{"format":"raw","migration_snapshot":"1","export_formats":"zfs","allow_rename":"1","snapshot":"__migration__","volname":"vm-1112-disk-1","cmd":"disk-import","storage":"targetstorage","with_snapshots":1}'
failed - failed to handle 'disk-import' command - no matching
import/export format found for storage 'preprodkvm'
2023-10-25 17:51:38 aborting phase 1 - cleanup resources
tunnel: -> sending command "quit" to remote
tunnel: <- got reply
tunnel: CMD channel closed, shutting down
2023-10-25 17:51:39 ERROR: migration aborted (duration 00:00:01): error - tunnel command
'{"format":"raw","migration_snapshot":"1","export_formats":"zfs","allow_rename":"1","snapshot":"__migration__","volname":"vm-1112-disk-1","cmd":"disk-import","storage":"targetstorage","with_snapshots":1}'
failed - failed to handle 'disk-import' command - no matching
import/export format found for storage 'preprodkvm'
migration aborted
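
(The mismatch is presumably between the export formats the source
storage can offer, here only a 'zfs' stream because of the migration
snapshot, and the formats the target storage can import. Illustrated
with pvesm, volume IDs made up:

  # the source side can only offer a zfs stream for this volume ...
  pvesm export myzfs:vm-1112-disk-1 zfs - --snapshot __migration__ --with-snapshots 1

  # ... but a target storage of a different type only understands
  # e.g. raw+size, so no common format is found.)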




>>If we want to switch to migrating
>>disks offline via QEMU instead of our current storage_migrate(),
>>going
>>for QEMU storage daemon + NBD seems the most natural to me.

Yes, I'm more in favor of this solution.

>>If it's not too complicated to temporarily attach the disks to the
>>VM,
>>that can be done too, but is less re-usable (e.g. pure offline
>>migration
>>won't benefit from that).

Not sure whether to attach/detach the disks temporarily one by one, or
to attach them all at once (but that needs enough free controller
slots).

The QEMU storage daemon seems to be the less hacky solution ^_^
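
A rough sketch of how that could look, with made-up paths and names
(untested): the source side exports the disk read-only over NBD, and
the target side pulls it with qemu-img, with the unix socket forwarded
through the migration tunnel:

  # source node: export the disk via qemu-storage-daemon
  qemu-storage-daemon \
    --blockdev driver=file,node-name=disk0,filename=/var/lib/vz/images/1112/vm-1112-disk-1.raw \
    --nbd-server addr.type=unix,addr.path=/run/qsd-1112.sock \
    --export type=nbd,id=exp0,node-name=disk0,name=vm-1112-disk-1,writable=off

  # target node: copy into a preallocated target volume
  qemu-img convert -n -f raw -O raw \
    'nbd+unix:///vm-1112-disk-1?socket=/run/qsd-1112.sock' \
    /dev/zvol/rpool/data/vm-1112-disk-1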


> but if it works, I think we'll need to add config generation in
> pve-storage for the different block drivers
> 
> 
> like:
> 
> --blockdev driver=file,node-name=file0,filename=vm.img
> 
> --blockdev driver=rbd,node-name=rbd0,pool=my-pool,image=vm01
> 

>>What other special cases besides (non-krbd) RBD are there? If it's
>>just
>>that, I'd much rather keep the special handling in QEMU itself then
>>burden all other storage plugins with implementing something specific
>>to
>>VMs.

Not sure; maybe glusterfs, .raw (which should work for block devices
like LVM and ZFS) and .qcow2.
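
For illustration, the generated configs for those cases might look
roughly like this (node names, volumes and paths invented, untested):

  --blockdev driver=host_device,node-name=disk0,filename=/dev/zvol/rpool/data/vm-1112-disk-1

  --blockdev driver=gluster,node-name=disk0,volume=gv0,path=images/1112/vm-1112-disk-1.raw,server.0.type=inet,server.0.host=10.0.0.1,server.0.port=24007

  # qcow2 needs a format node stacked on top of the protocol node:
  --blockdev driver=file,node-name=proto0,filename=/var/lib/vz/images/1112/vm-1112-disk-1.qcow2
  --blockdev driver=qcow2,node-name=fmt0,file=proto0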


>>Or is there a way to use the path from the storage plugin somehow
>>like
>>we do at the moment, i.e.
>>"rbd:rbd/vm-111-disk-
>>1:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/rbd.key
>>ring"?

I don't think it's possible just like this. I need to do more tests
and look at how libvirt handles it first, since there isn't much
documentation about it.
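
If I understand it correctly, the pieces of that pseudo-URL would have
to be mapped onto explicit blockdev options, something like this (my
untested assumption):

  --blockdev driver=rbd,node-name=rbd0,pool=rbd,image=vm-111-disk-1,conf=/etc/pve/ceph.conf,user=admin

The keyring file itself has no direct blockdev equivalent; the key
would probably need to be extracted and passed as a 'key-secret'
referencing a secret defined via --object secret,id=...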



> So maybe it'll take a little bit more time.
> 
> (Maybe a second patch series later to implement it)
> 

>>Yes, I think that makes sense as a dedicated series.



Thread overview: 15+ messages
2023-09-28 14:45 [pve-devel] [PATCH v4 qemu-server 0/2] remote-migration: migration with different cpu Alexandre Derumier
2023-09-28 14:45 ` [pve-devel] [PATCH v4 qemu-server 1/2] migration: move livemigration code in a dedicated sub Alexandre Derumier
2023-10-09 11:25   ` Fiona Ebner
2023-09-28 14:45 ` [pve-devel] [PATCH v4 qemu-server 2/2] remote-migration: add target-cpu && target-reboot params Alexandre Derumier
2023-10-09 12:13   ` Fiona Ebner
2023-10-09 13:47     ` DERUMIER, Alexandre
2023-10-10  9:19       ` Fiona Ebner
2023-10-10 16:29         ` DERUMIER, Alexandre
2023-10-11  7:51           ` Fiona Ebner
2023-10-23 18:03         ` DERUMIER, Alexandre
2023-10-24  8:11           ` Fiona Ebner
2023-10-24 12:20             ` DERUMIER, Alexandre
2023-10-25  8:30               ` Fiona Ebner
2023-10-25 16:01                 ` DERUMIER, Alexandre [this message]
2023-10-27  9:19                   ` Fiona Ebner
