From: Hermann <pve@hw.wewi.de>
To: PVE User List <pve-user@pve.proxmox.com>
Subject: Re: [PVE-User] FC-Luns only local devices?
Date: Sat, 25 Jul 2020 18:04:19 +0200
Message-ID: <e75109af-629c-3a00-263e-9be6f1709b2f@hw.wewi.de>
In-Reply-To: <20200630102125.nnrf3okkiwtdmojw@zeha.at>
Hello Chris,
(Sorry for not replying to the list; the PM was sent by mistake.)
Thank you for the eye-opener.
I managed to configure multipath and can see all the LUNs, and I was able
to create physical volumes and so on.
I assume it is correct to configure the LVM storage as shared, because
Proxmox itself handles the locking. At least a test with an Ubuntu VM went
well. The only problem was that I could not stop the VM from the GUI,
because a lock could not be acquired.
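For anyone searching the archives later, a shared LVM entry in
/etc/pve/storage.cfg looks roughly like this (the storage ID and VG name
here are just placeholders for whatever you created):

```
lvm: san01
        vgname vg_san01
        content images,rootdir
        shared 1
```

The "shared 1" flag tells Proxmox that the same VG is visible on all
nodes, so it coordinates access itself instead of treating it as local.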
Could you explain in a short sentence why you avoid partitioning? I
remember running into difficulties with an image I had created that way
in another setup: I could not change the size when it became necessary
and had to delete and recreate everything, which was quite painful,
because I had to move around a terabyte of files.
Have a nice weekend,
Hermann.
On 30.06.20 at 12:21, Chris Hofstaedtler | Deduktiva wrote:
> Hi Hermann <NoLastName>,
>
> * Hermann <pve@hw.wewi.de> [200630 10:51]:
>> I would really appreciate being steered in the right direction as to the
>> connection of Fibre-Channel-Luns in Proxmox.
>>
>> As far as I can see, FC-LUNs only appear as local block devices in PVE.
>> If I have several fibre-optic cables between my cluster and these
>> bloody expensive storage arrays, do I have to set up multipath
>> manually in Debian?
> With most storages you need to configure multipath itself manually,
> with the settings your storage vendor hands you.
>
> Our setup for this is:
>
> 1. Manual multipath setup, we tend to enable find_multipaths "smart"
> to avoid configuring all WWIDs everywhere and so on.
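For the archives, a minimal /etc/multipath.conf along those lines might
look like this (array-specific device sections omitted; use whatever your
vendor recommends):

```
defaults {
    find_multipaths "smart"
    user_friendly_names yes
}
```

With find_multipaths "smart", multipathd claims any device that is not
blacklisted and releases it again if no second path shows up, so you do
not have to whitelist every WWID by hand.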
>
> 2. The LVM PVs go directly on the mpathXX devices (no partitioning).
>
> 3. One VG per mpath device. The VGs are then seen by Proxmox just
> like always.
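In command terms, steps 2 and 3 would be something like this (the mpath
and VG names are placeholders):

```
# PV directly on the multipath device, no partition table
pvcreate /dev/mapper/mpatha

# one VG per mpath device; Proxmox then sees the VG as usual
vgcreate vg_san01 /dev/mapper/mpatha
```

Skipping the partition table also means a LUN resized on the array only
needs a pvresize, with no partition to grow first.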
>
> You have to take great care when removing block devices again, so
> all PVE nodes release the VGs, PVs, all underlying device mapper
> devices, and remove the physical sdXX devices, before removing the
> exports from the storage side.
> Often it's easier to reboot, and during the reboot fence access to
> the to-be-removed LUN for the currently rebooting host.
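So the removal order on each node, before unmapping the LUN on the array
side, would be roughly this (VG, map, and sdXX names are placeholders):

```
vgchange -an vg_san01                  # deactivate the VG, release LVs
multipath -f mpatha                    # flush the multipath map
echo 1 > /sys/block/sdb/device/delete  # drop each underlying path device
echo 1 > /sys/block/sdc/device/delete
```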
>
> Chris
>