* [PVE-User] 'asymmetric storage' and migration...
From: Marco Gaiarin @ 2021-07-15 12:48 UTC
To: pve-user
I'm a bit puzzled.
A small ZFS cluster with two asymmetrical nodes; the second node is
there only as a fallback, to run a few small VMs (PBX, firewall, ...).
The first node has a second RAIDZ pool called 'rpool-data' that does
not exist on the second node; e.g., storage.cfg is:
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

zfspool: rpool-data
        pool rpool-data
        content images,rootdir
        mountpoint /rpool-data
        sparse 1
Clearly, on node 2 'rpool-data' has a question mark.
I've tried to migrate a VM from node 2 to node 1 that has disks only on 'local-zfs':
root@brpve1:~# grep ^scsi /etc/pve/nodes/brpve2/qemu-server/100.conf
scsi0: local-zfs:vm-100-disk-0,size=100G
scsi1: local-zfs:vm-100-disk-1,backup=0,size=1000G
scsi2: local-zfs:vm-100-disk-2,backup=0,size=500G
scsihw: virtio-scsi-pci
but I get:
2021-07-15 14:14:54 starting migration of VM 100 to node 'brpve1' (10.15.5.21)
zfs error: cannot open 'rpool-data': no such pool
zfs error: cannot open 'rpool-data': no such pool
2021-07-15 14:14:54 ERROR: Problem found while scanning volumes - could not activate storage 'rpool-data', zfs error: cannot import 'rpool-data': no such pool available
2021-07-15 14:14:54 aborting phase 1 - cleanup resources
2021-07-15 14:14:54 ERROR: migration aborted (duration 00:00:00): Problem found while scanning volumes - could not activate storage 'rpool-data', zfs error: cannot import 'rpool-data': no such pool available
TASK ERROR: migration aborted
Because 'rpool-data' currently holds no data, I simply disabled the
'rpool-data' storage, migrated the machine, and then re-enabled it.
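For reference, this disable/migrate/re-enable workaround can also be done from the command line; a sketch, assuming the storage ID, VM ID, and node names quoted in this thread, run on the source node brpve2:

```shell
# Temporarily disable the 'rpool-data' storage so the
# migration volume scan skips it
pvesm set rpool-data --disable 1

# Migrate VM 100 to node brpve1
qm migrate 100 brpve1

# Re-enable the storage afterwards
pvesm set rpool-data --disable 0
```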
Why does the migration not work, even though there are no disks on 'rpool-data'?
Is there some way to 'fake' an 'rpool-data' on the second node, just
so PVE doesn't complain?
Thanks.
--
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797
Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
* Re: [PVE-User] 'asymmetric storage' and migration...
From: Aaron Lauterer @ 2021-07-15 12:55 UTC
To: Proxmox VE user list, Marco Gaiarin
See inline
On 7/15/21 2:48 PM, Marco Gaiarin wrote:
> [...]
>
> Is there some way to 'fake' an 'rpool-data' on the second node, just
> so PVE doesn't complain?
You can limit storages to certain nodes. To do this via the GUI, edit the storage; in the top right of that dialog you can select the nodes on which the storage exists. Otherwise you will run into problems, as Proxmox VE expects the underlying storage to be present on all nodes.
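Aaron's suggestion maps to a one-line change in /etc/pve/storage.cfg; a sketch, assuming node 1 is named brpve1 as in the migration log above:

```
zfspool: rpool-data
        pool rpool-data
        content images,rootdir
        mountpoint /rpool-data
        nodes brpve1
        sparse 1
```

With the `nodes` option set, node 2 no longer tries to activate the pool when scanning volumes for a migration.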
> Thanks.
* Re: [PVE-User] 'asymmetric storage' and migration...
From: Fabian Grünbichler @ 2021-07-15 12:56 UTC
To: Proxmox VE user list
On July 15, 2021 2:48 pm, Marco Gaiarin wrote:
> [...]
>
> Clearly, on node 2 'rpool-data' has a question mark.
Yes, because you didn't tell PVE that this storage is only available on
node 1.
> [...]
> Why does the migration not work, even though there are no disks on 'rpool-data'?
Because PVE can't know that this storage is not supposed to be checked
if you don't tell it ;)
> Is there some way to 'fake' an 'rpool-data' on the second node, just
> so PVE doesn't complain?
you just need to set the list of nodes where the storage 'rpool-data'
exists in storage.cfg (via the GUI/API/pvesm).
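The pvesm variant Fabian mentions is a single command; a sketch, assuming brpve1 is the node where the pool actually exists:

```shell
# Restrict the 'rpool-data' storage to node brpve1
pvesm set rpool-data --nodes brpve1

# Check the result; 'rpool-data' should now only be
# reported as available on brpve1
pvesm status
```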
* Re: [PVE-User] 'asymmetric storage' and migration...
From: Marco Gaiarin @ 2021-07-15 13:23 UTC
To: pve-user
Hello! Fabian Grünbichler wrote:
> you just need to set the list of nodes where the storage 'rpool-data'
> exists in storage.cfg (via the GUI/API/pvesm).
AARRGGHH! Totally missed that!
Sorry... and thanks. ;-)