From: Fiona Ebner <f.ebner@proxmox.com>
To: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>,
	"pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>,
	"aderumier@odiso.com" <aderumier@odiso.com>
Subject: Re: [pve-devel] [PATCH v4 qemu-server 2/2] remote-migration: add target-cpu && target-reboot params
Date: Fri, 27 Oct 2023 11:19:33 +0200	[thread overview]
Message-ID: <ed1789ee-8c77-4152-ba8c-90b0b066e787@proxmox.com> (raw)
In-Reply-To: <188c296857bc3ae42f0a5150770e8c3942ec74f0.camel@groupe-cyllene.com>

On 25.10.23 at 18:01, DERUMIER, Alexandre wrote:
>>> Unused disks can just be migrated
>>> offline via storage_migrate(), no?
> 
> Currently, unused disks can't be migrated through the HTTP tunnel for
> remote-migration:
> 
> 2023-10-25 17:51:38 ERROR: error - tunnel command
> '{"format":"raw","migration_snapshot":"1","export_formats":"zfs","allow
> _rename":"1","snapshot":"__migration__","volname":"vm-1112-disk-
> 1","cmd":"disk-import","storage":"targetstorage","with_snapshots":1}'
> failed - failed to handle 'disk-import' command - no matching
> import/export format found for storage 'preprodkvm'
> 2023-10-25 17:51:38 aborting phase 1 - cleanup resources
> tunnel: -> sending command "quit" to remote
> tunnel: <- got reply
> tunnel: CMD channel closed, shutting down
> 2023-10-25 17:51:39 ERROR: migration aborted (duration 00:00:01): error
> - tunnel command
> '{"format":"raw","migration_snapshot":"1","export_formats":"zfs","allow
> _rename":"1","snapshot":"__migration__","volname":"vm-1112-disk-
> 1","cmd":"disk-import","storage":"targetstorage","with_snapshots":1}'
> failed - failed to handle 'disk-import' command - no matching
> import/export format found for storage 'preprodkvm'
> migration aborted
> 

Well, yes, they can. But there needs to be a common import/export format
between the source and target storage types. Admittedly, that is a bit
limited for certain storage types: e.g. ZFS only supports the ZFS format,
and RBD does not implement import/export at all yet (because within a
single cluster it wasn't needed).
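
For context, the format negotiation during migration is roughly the
following (simplified sketch; in reality the target's list is queried
via the tunnel, and the 'import-formats' field name here is made up):

    # source side: formats the source storage can export
    my @export_formats = PVE::Storage::volume_export_formats(
        $cfg, $volid, $snapshot, undef, $with_snapshots);
    # target side: formats the target storage can import
    my @import_formats = @{ $tunnel_info->{'import-formats'} }; # hypothetical field
    # pick the first format both sides support
    my ($format) = grep { my $f = $_; grep { $_ eq $f } @import_formats } @export_formats;
    die "no matching import/export format found\n" if !defined($format);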


>>> If we want to switch to migrating
>>> disks offline via QEMU instead of our current storage_migrate(),
>>> going
>>> for QEMU storage daemon + NBD seems the most natural to me.
> 
> Yes, I'm more in favor of this solution.
> 
>>> If it's not too complicated to temporarily attach the disks to the
>>> VM,
>>> that can be done too, but is less re-usable (e.g. pure offline
>>> migration
>>> won't benefit from that).
> 
> Not sure whether to attach/detach them temporarily one by one, or to
> attach all devices at once (but that needs enough controller slots).
> 

I think you can attach them to the VM without attaching to a controller
by using QMP blockdev-add, but...

> The QEMU storage daemon seems to be a less hacky solution ^_^
> 

...sure, this should be nicer and more re-usable.
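
On the source side, I'd imagine something like this (untested sketch;
paths and names made up), exporting the disk via NBD so the other end
can pull it with qemu-img or blockdev-mirror:

    qemu-storage-daemon \
        --blockdev driver=file,node-name=file0,filename=/var/lib/vz/images/1112/vm-1112-disk-1.raw \
        --nbd-server addr.type=unix,addr.path=/run/qsd-migrate.sock \
        --export type=nbd,id=export0,node-name=file0,writable=on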

> 
>> But if it works, I think we'll need to add config generation in
>> pve-storage for the different block drivers,
>>
>>
>> like:
>>
>> --blockdev driver=file,node-name=file0,filename=vm.img
>>
>> --blockdev driver=rbd,node-name=rbd0,pool=my-pool,image=vm01
>>
> 
>>> What other special cases besides (non-krbd) RBD are there? If it's
>>> just
>>> that, I'd much rather keep the special handling in QEMU itself than
>>> burden all other storage plugins with implementing something specific
>>> to
>>> VMs.
> 
> Not sure, maybe glusterfs, .raw (should work for block devices like
> LVM, ZFS), .qcow2
> 

There's a whole lot of drivers:
https://qemu.readthedocs.io/en/v8.1.0/interop/qemu-qmp-ref.html#qapidoc-883

But e.g. for NFS, we don't necessarily need a dedicated driver and can
just use qcow2/raw. Currently, with -drive, we also just treat it like
any other file.
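
I.e. for a qcow2 image on NFS, plain protocol + format driver stacking
should be enough, something like (paths made up):

    --blockdev driver=file,node-name=file0,filename=/mnt/pve/nfs/images/101/vm-101-disk-0.qcow2 \
    --blockdev driver=qcow2,node-name=fmt0,file=file0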

I'd like to keep the logic for how to construct the -blockdev command
line option (mostly) in qemu-server itself. But I guess we can't avoid
some amount of coupling. Currently, for -drive, we have the coupling in
path(), which can e.g. return an rbd: or gluster: path, and QEMU then
parses which driver to use from that path.
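
To illustrate (values made up): with -drive, path() gives us something like

    -drive file=rbd:rbd/vm-111-disk-1:conf=/etc/pve/ceph.conf:id=admin,...

and QEMU parses the driver from the "rbd:" prefix, while with -blockdev
the driver and its properties need to be spelled out explicitly:

    -blockdev driver=rbd,node-name=drive-scsi0,pool=rbd,image=vm-111-disk-1,conf=/etc/pve/ceph.conf,user=admin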

Two approaches that make sense to me (no real preference at the moment):

1. Have a storage plugin method which tells qemu-server about the
necessary driver and properties for opening the image, e.g. return the
properties as a hash, have qemu-server join them together and then add
the generic properties (e.g. aio, node-name) to construct the full
-blockdev option (a rough sketch follows below the list).

2. Do everything in qemu-server and special case for certain storage
types that have a dedicated driver. Still needs to get the info like
pool name from the RBD storage of course, but that should be possible
with existing methods.
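
For 1., I'm imagining something along these lines (rough sketch only;
the method name "qemu_blockdev_options" and the exact interface are
made up, and the ceph.conf handling is simplified):

    # in the RBD storage plugin -- hypothetical new method
    sub qemu_blockdev_options {
        my ($class, $scfg, $volname) = @_;
        my (undef, $name) = $class->parse_volname($volname);
        return {
            driver => 'rbd',
            pool => $scfg->{pool},
            image => $name,
            conf => '/etc/pve/ceph.conf', # simplified; external clusters use a per-storage conf
        };
    }

    # in qemu-server -- add the generic properties and join everything
    my $opts = $plugin->qemu_blockdev_options($scfg, $volname);
    $opts->{'node-name'} = $node_name;
    push @$cmd, '-blockdev', join(',', map { "$_=$opts->{$_}" } sort keys %$opts);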

Happy to hear other suggestions/opinions.

> 
>>> Or is there a way to use the path from the storage plugin somehow
>>> like
>>> we do at the moment, i.e.
>>> "rbd:rbd/vm-111-disk-
>>> 1:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/rbd.key
>>> ring"?
> 
> I don't think it's possible just like this. I need to do more tests,
> looking at libvirt first, because there is not much documentation about it.
> 

Probably they decided to get rid of this magic for the newer -blockdev
variant. I tried to cheat by using driver=file and specifying the
"rbd:" path as the filename, but it doesn't work :P
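
FWIW, the explicit equivalent via QMP blockdev-add (e.g. for the
attach-without-controller idea above) should look something like this
(sketch, values made up):

    { "execute": "blockdev-add",
      "arguments": { "driver": "rbd", "node-name": "mig0",
                     "pool": "rbd", "image": "vm-111-disk-1",
                     "conf": "/etc/pve/ceph.conf", "user": "admin" } }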



