From: Aaron Lauterer <a.lauterer@proxmox.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>,
Marco Gaiarin <gaio@sv.lnf.it>
Subject: Re: [PVE-User] 'asymmetrix storage' and migration...
Date: Thu, 15 Jul 2021 14:55:22 +0200
Message-ID: <8bde01da-f8fc-8d1f-bd5e-616dce26e989@proxmox.com>
In-Reply-To: <20210715124856.GC3399@sv.lnf.it>
See inline
On 7/15/21 2:48 PM, Marco Gaiarin wrote:
>
> I'm a bit puzzled.
>
>
> A little 'ZFS' cluster with two asymmetrical nodes; the second node is
> there only as a fallback, to run a few small VMs (PBX, firewall, ...).
>
> The first node has a second RAIDZ pool called 'rpool-data' that does
> not exist on the second node; e.g., storage.cfg is:
>
> dir: local
>         path /var/lib/vz
>         content iso,vztmpl,backup
>
> zfspool: local-zfs
>         pool rpool/data
>         content images,rootdir
>         sparse 1
>
> zfspool: rpool-data
>         pool rpool-data
>         content images,rootdir
>         mountpoint /rpool-data
>         sparse 1
>
> Clearly, on node 2 'rpool-data' has a question mark.
>
>
> I've tried to migrate a VM from node 2 to node 1 that has disks only on 'local-zfs':
>
> root@brpve1:~# grep ^scsi /etc/pve/nodes/brpve2/qemu-server/100.conf
> scsi0: local-zfs:vm-100-disk-0,size=100G
> scsi1: local-zfs:vm-100-disk-1,backup=0,size=1000G
> scsi2: local-zfs:vm-100-disk-2,backup=0,size=500G
> scsihw: virtio-scsi-pci
>
> but I get:
>
> 2021-07-15 14:14:54 starting migration of VM 100 to node 'brpve1' (10.15.5.21)
> zfs error: cannot open 'rpool-data': no such pool
> zfs error: cannot open 'rpool-data': no such pool
> 2021-07-15 14:14:54 ERROR: Problem found while scanning volumes - could not activate storage 'rpool-data', zfs error: cannot import 'rpool-data': no such pool available
> 2021-07-15 14:14:54 aborting phase 1 - cleanup resources
> 2021-07-15 14:14:54 ERROR: migration aborted (duration 00:00:00): Problem found while scanning volumes - could not activate storage 'rpool-data', zfs error: cannot import 'rpool-data': no such pool available
> TASK ERROR: migration aborted
>
>
> Because 'rpool-data' currently holds no data, I simply disabled the
> storage 'rpool-data', migrated the machine, and then re-enabled it.
>
>
> Why does the migration not work, even though there are no disks on 'rpool-data'?
>
> Is there some way to 'fake' an 'rpool-data' on the second node, just so
> that PVE does not complain?
You can limit storages to certain nodes. To do this via the GUI, edit the storage and, in the top right of that dialog, you should be able to select the nodes on which the storage actually exists. Otherwise you will run into problems like this one, as Proxmox VE expects the underlying storage to be present on every node for which it is configured.
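
As a minimal sketch of the same restriction on the command line (assuming the pool only exists on the first node, 'brpve1', as in the migration log above), you can set the node list with pvesm:

    # restrict the storage to the node(s) that actually have the pool
    pvesm set rpool-data --nodes brpve1

which ends up as an additional 'nodes' line for that storage in /etc/pve/storage.cfg:

    zfspool: rpool-data
            pool rpool-data
            content images,rootdir
            mountpoint /rpool-data
            sparse 1
            nodes brpve1

With that in place, the question mark on node 2 should disappear and migrations should no longer try to activate 'rpool-data' there.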
>
>
> Thanks.
>