public inbox for pve-user@lists.proxmox.com
From: Fabrizio Cuseo <f.cuseo@panservice.it>
To: Stefan Lendl <s.lendl@proxmox.com>
Cc: pve-user <pve-user@pve.proxmox.com>
Subject: Re: [PVE-User] qm remote-migrate
Date: Wed, 11 Oct 2023 10:14:24 +0000 (UTC)
Message-ID: <818088054.154408.1697019264217.JavaMail.zimbra@zimbra.panservice.it>
In-Reply-To: <87zg0plcss.fsf@gmail.com>



----- On 11 Oct 2023, at 11:37, Stefan Lendl s.lendl@proxmox.com wrote:

> Fabrizio Cuseo <f.cuseo@panservice.it> writes:
> 
> Thanks for providing the details.
> I will investigate the situation and we will consider a solution for our
> upcoming SDN upgrade.
> 
> As a workaround for now, please try removing the VLAN tag from the source VM
> and migrating again. The target network interface does not need a VLAN
> tag on the VM (and therefore does not allow one), because the VLAN is
> already configured via SDN.
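> With the net0 line from your VM config, that would be roughly (a sketch,
> re-issuing the NIC definition without the ",tag=902" part):
>
> qm set 4980 --net0 virtio=86:64:73:AB:33:AE,bridge=vmbr1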

Yes, I did that with a test VM, but I can't do it with production VMs, because if I remove the VLAN tag, the source VM stops working.
What I can do instead is install and configure SDN on the source cluster (upgrading it to the latest 8.x), create a VLAN zone and a vnet with that VLAN ID, change the source bridge to the vnet bridge while removing the VLAN tag, and then migrate. (I have just tested this and it seems to work.)
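
Roughly, the source-side configuration I mean would look like this (a sketch mirroring the target's config quoted below; the zone and vnet names are only examples):

==> /etc/pve/sdn/zones.cfg <==
vlan: ZonaVLAN
        bridge vmbr1
        ipam pve

==> /etc/pve/sdn/vnets.cfg <==
vnet: vlan902
        zone ZonaVLAN
        tag 902

followed by repointing the NIC at the vnet and dropping the per-VM tag:

qm set 4980 --net0 virtio=86:64:73:AB:33:AE,bridge=vlan902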

Thank you again, Fabrizio 



> 
> Best regards,
> Stefan
> 
>> ----- On 11 Oct 2023, at 9:41, Stefan Lendl s.lendl@proxmox.com wrote:
>>
>>> Fabrizio Cuseo <f.cuseo@panservice.it> writes:
>>>
>>> Hello Fabrizio,
>>>
>>> To better understand your issue: the source cluster has a VM on a
>>> bridge with a VLAN tag assigned, while the target cluster does not have
>>> the same setup but instead uses SDN (a vnet) without a VLAN tag.
>>
>> Yes, that's correct.
>>
>>
>>> After the migration, did you manually change the VM's configuration to
>>> match the new setup?
>>
>> I can't, because remote-migrate returns an error (I cannot specify a
>> VLAN tag on that bridge).
>>
>>> What SDN configuration are you using on the target cluster?
>>> Please send the output of the following:
>>>
>>> head -n -1 /etc/pve/sdn/*.cfg
>>
>> Can I send it to you in private? It's full of customers' names :/
>>
>> But here is part of the files:
>>
>> ==> /etc/pve/sdn/zones.cfg <==
>> vlan: ZonaVLAN
>>         bridge vmbr0
>>         ipam pve
>>
>> qinq: VPVT
>>         bridge vmbr0
>>         tag 929
>>         ipam pve
>>         vlan-protocol 802.1q
>>
>>
>> ==> /etc/pve/sdn/vnets.cfg <==
>> vnet: test100
>>         zone FWHous
>>         alias Vlan 100 Test 921 qinq
>>         tag 100
>>
>> vnet: vlan902
>>         zone ZonaVLAN
>>         alias Vlan 902 Private-Vlan
>>         tag 902
>>
>>
>>
>>
>>> What was the exact command you ran to start the remote-migrate process?
>>
>> qm remote-migrate 4980 4980 'host=172.16.20.41,apitoken=PVEAPIToken=root@pam!remotemigrate=hiddensecret,fingerprint=hiddenfingerprint' --target-bridge vlan902 --target-storage NfsMirror --online
>>
>>
>>
>>> Did you notice any suspicious log messages in the source cluster's
>>> journal?
>>
>> Source:
>>
>> tunnel: -> sending command "version" to remote
>> tunnel: <- got reply
>> 2023-10-10 18:08:48 local WS tunnel version: 2
>> 2023-10-10 18:08:48 remote WS tunnel version: 2
>> 2023-10-10 18:08:48 minimum required WS tunnel version: 2
>> websocket tunnel started
>> 2023-10-10 18:08:48 starting migration of VM 4980 to node 'nodo01-cluster1' (172.16.20.41)
>> tunnel: -> sending command "bwlimit" to remote
>> tunnel: <- got reply
>> 2023-10-10 18:08:49 found local disk 'CephCluster3Copie:vm-4980-disk-0' (attached)
>> 2023-10-10 18:08:49 mapped: net0 from vmbr1 to vlan902
>> 2023-10-10 18:08:49 Allocating volume for drive 'scsi0' on remote storage 'NfsMirror'..
>> tunnel: -> sending command "disk" to remote
>> tunnel: <- got reply
>> 2023-10-10 18:08:49 volume 'CephCluster3Copie:vm-4980-disk-0' is 'NfsMirror:4980/vm-4980-disk-0.raw' on the target
>> tunnel: -> sending command "config" to remote
>> tunnel: <- got reply
>> tunnel: -> sending command "start" to remote
>> tunnel: <- got reply
>> 2023-10-10 18:08:50 ERROR: online migrate failure - error - tunnel command '{"start_params":{"forcemachine":"pc-i440fx-8.0+pve0","forcecpu":null,"statefile":"unix","skiplock":1},"cmd":"start","migrate_opts":{"network":null,"nbd":{"scsi0":{"volid":"NfsMirror:4980/vm-4980-disk-0.raw","success":true,"drivestr":"NfsMirror:4980/vm-4980-disk-0.raw,discard=on,format=raw,size=64G"}},"nbd_proto_version":1,"storagemap":{"default":"NfsMirror"},"migratedfrom":"node06-cluster4","type":"websocket","remote_node":"nodo01-cluster1","spice_ticket":null}}' failed - failed to handle 'start' command - start failed: QEMU exited with code 1
>> 2023-10-10 18:08:50 aborting phase 2 - cleanup resources
>> 2023-10-10 18:08:50 migrate_cancel
>> tunnel: -> sending command "stop" to remote
>> tunnel: <- got reply
>> tunnel: -> sending command "quit" to remote
>> tunnel: <- got reply
>> 2023-10-10 18:08:51 ERROR: migration finished with problems (duration 00:00:03)
>>
>> TASK ERROR: migration problems
>>
>>
>>
>>
>>
>> DESTINATION:
>>
>> mtunnel started
>> received command 'version'
>> received command 'bwlimit'
>> received command 'disk'
>> Formatting '/mnt/pve/NfsMirror/images/4980/vm-4980-disk-0.raw', fmt=raw size=68719476736 preallocation=off
>> received command 'config'
>> update VM 4980: -agent 1 -boot order=scsi0;ide2;net0 -cores 2 -ide2 none,media=cdrom -memory 8192 -name SeafileProTestS3 -net0 e1000=86:64:73:AB:33:AE,bridge=vlan902,tag=902 -numa 1 -ostype l26 -scsi0 NfsMirror:4980/vm-4980-disk-0.raw,discard=on,format=raw,size=64G -scsihw virtio-scsi-pci -smbios1 uuid=39a07e5b-16b5-45a3-aad9-4e3f2b4e87ce -sockets 2
>> received command 'start'
>> QEMU: vm vlans are not allowed on vnet vlan902 at /usr/share/perl5/PVE/Network/SDN/Zones/Plugin.pm line 228.
>> QEMU: kvm: -netdev type=tap,id=net0,ifname=tap4980i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown: network script /var/lib/qemu-server/pve-bridge failed with status 6400
>> received command 'stop'
>> received command 'quit'
>> freeing volume 'NfsMirror:4980/vm-4980-disk-0.raw' as part of cleanup
>> disk image '/mnt/pve/NfsMirror/images/4980/vm-4980-disk-0.raw' does not exist
>> switching to exit-mode, waiting for client to disconnect
>> mtunnel exited
>> TASK OK
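>>
>> (I suppose the same failure could be reproduced on the target without a
>> migration, by setting the same net0 by hand on a test VM and starting it,
>> since the error comes from the pve-bridge script at VM start:
>>
>> qm set 4980 --net0 e1000=86:64:73:AB:33:AE,bridge=vlan902,tag=902
>> qm start 4980
>> )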
>>
>>
>> Source VM conf file:
>>
>> agent: 1
>> balloon: 2048
>> boot: order=scsi0;ide2;net0
>> cores: 2
>> ide2: none,media=cdrom
>> memory: 4096
>> name: SeafileProTestS3
>> net0: virtio=86:64:73:AB:33:AE,bridge=vmbr1,tag=902
>> numa: 1
>> ostype: l26
>> scsi0: CephCluster3Copie:vm-4980-disk-0,discard=on,size=64G
>> scsihw: virtio-scsi-pci
>> smbios1: uuid=39a07e5b-16b5-45a3-aad9-4e3f2b4e87ce
>> sockets: 2
>> vmgenid: 035cd26d-c74e-405e-9b4d-481f26d9cf5f
>>
>>
>>
>>
>>> Usually I would ask you to send me the entire journal, but this is not
>>> feasible on the mailing list. If necessary, I recommend you open a
>>> thread in our community forum and I will take a look there:
>>>
>>> https://forum.proxmox.com/
>>>
>>> Best regards,
>>> Stefan Lendl
>>
>>
>> Thank you in advance, Fabrizio
>>
>>
>>>
>>>> Hello.
>>>> I am testing qm remote-migrate with two PVE 8.0.4 clusters.
>>>> The source cluster has one bridge with a VLAN ID on every VM; the
>>>> destination cluster uses SDN and a different bridge (a vnet) without a
>>>> VLAN ID.
>>>> When I migrate the VM, I need to specify both the bridge and the VLAN
>>>> ID, but I have not found an option to do so.
>>>>
>>>> PS: after the migration, the VM runs on the new cluster without any
>>>> problem, but on the source cluster it remains locked and stuck in
>>>> migration, so I need to issue a "qm unlock vmid" and then stop/delete
>>>> it (commands below).
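>>>>
>>>> For the record, the manual cleanup on the source cluster looks like
>>>> this (using the test VMID from this report):
>>>>
>>>> qm unlock 4980
>>>> qm stop 4980
>>>> qm destroy 4980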
>>>>
>>>> I know this is an experimental feature, so I am sending my test results.
>>>>
>>>> Regards, Fabrizio
>>>>
>>>>
>>>> --
>>>> ---
>>>> Fabrizio Cuseo - mailto:f.cuseo@panservice.it
>>>>
>>>> _______________________________________________
>>>> pve-user mailing list
>>>> pve-user@lists.proxmox.com
>>>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>> --
>> ---
>> Fabrizio Cuseo - mailto:f.cuseo@panservice.it
>> Direzione Generale - Panservice InterNetWorking
>> Servizi Professionali per Internet ed il Networking
>> Panservice e' associata AIIP - RIPE Local Registry
>> Phone: +39 0773 410020 - Fax: +39 0773 470219
>> http://www.panservice.it  mailto:info@panservice.it
>> Numero verde nazionale: 800 901492

-- 
---
Fabrizio Cuseo - mailto:f.cuseo@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:info@panservice.it
Numero verde nazionale: 800 901492



