From: Bryan Fields <Bryan@bryanfields.net>
To: pve-user@lists.proxmox.com
Subject: [PVE-User] Shared ZFS storage that supports LXC's
Date: Sun, 29 Oct 2023 06:16:07 -0400
Message-ID: <8bbed148-012b-cd0f-cf44-49f4e891464d@bryanfields.net>

I've been working on migrating my servers from local-zfs to shared storage.

My backend is a Linux server with 24 16T SAS drives in ZFS, laid out as:
4x (6-disk raidz2) + 3x mirror 4T NVMe + 375G Optane split 32G/312G ZIL/L2ARC,
with 768G of RAM, half of which goes to the ZFS ARC.  Everything is connected
to the other servers via a 2x 10G LAG with jumbo frames.
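
For anyone wanting to picture the layout, it was assembled along these lines
(device names are made up for illustration, and I'm showing the NVMe mirror
as a special vdev):

  zpool create tank \
      raidz2 sda sdb sdc sdd sde sdf \
      raidz2 sdg sdh sdi sdj sdk sdl \
      raidz2 sdm sdn sdo sdp sdq sdr \
      raidz2 sds sdt sdu sdv sdw sdx
  # 3-way 4T NVMe mirror for metadata, Optane partitions for ZIL/L2ARC
  zpool add tank special mirror nvme0n1 nvme1n1 nvme2n1
  zpool add tank log nvme3n1p1     # 32G
  zpool add tank cache nvme3n1p2   # 312G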

I have been working with the ZFS over iSCSI backend for my VMs and it works
very well.  The only issue is that there's no multipath support, but with a
10G network I'm not running into limits here for practical purposes.
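
For context, the storage definition on the PVE side looks something like this
(pool name, portal, and target are made-up examples):

  zfs: shared-zfs
          pool tank
          portal 10.0.0.10
          target iqn.2003-01.org.linux-iscsi.storage.x8664:sn.example
          iscsiprovider LIO
          lio_tpg tpg1
          sparse 1
          content images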

I do get a warning in Proxmox when moving a VM onto it or taking a snapshot:
Warning: volblocksize (8192) is much less than the minimum allocation
unit (32768), which wastes at least 75% of space. To reduce wasted space,
use a larger volblocksize (32768 is recommended), fewer dRAID data disks
per group, or smaller sector size (ashift).

I can't find exactly what this is referring to or how to fix it.  Does anyone 
have insight into this message?
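
For reference, the values the warning talks about can be inspected like this
(pool and zvol names are just examples):

  # volblocksize is fixed when the zvol is created
  zfs get volblocksize tank/vm-100-disk-0
  zpool get ashift tank

If I understand correctly, Proxmox picks the volblocksize for new zvols from
the storage's 'blocksize' option in /etc/pve/storage.cfg, so presumably
setting something like 'blocksize 32k' there would quiet the warning for
newly created disks.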

With LXCs I've found they don't support this backend storage (and it's not
mentioned in the docs).  I assume this is due to them needing a filesystem,
not a block device.  My option here would be to run NFS for shared storage,
but that loses the ability to take snapshots (a must-have).  LVM would work,
but it can't be shared.
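
To illustrate the NFS option, a definition like this would serve container
root disks, but without snapshot support (server and export path are made up):

  nfs: shared-nfs
          server 10.0.0.10
          export /tank/pve
          path /mnt/pve/shared-nfs
          content rootdir
          options vers=4.2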

I was thinking it might make sense to do this as a per-LXC NFS mount backed
by ZFS: the PVE node would create a new dataset on the shared storage server
via SSH for that LXC and then mount it over NFS.  That's basically how ZFS
over iSCSI is handled today, as I understand it.
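
In other words, something along these lines (hostname, container ID, and
mount point are hypothetical):

  # the PVE node asks the storage box to create a dataset for CT 101
  ssh root@storage zfs create -o sharenfs=on tank/subvol-101-disk-0
  # (a real setup would restrict the export to the PVE network and deal
  # with root squashing)
  # the node then mounts it for the container
  mount -t nfs storage:/tank/subvol-101-disk-0 /mnt/ct101-rootfs
  # snapshots stay server-side, per dataset
  ssh root@storage zfs snapshot tank/subvol-101-disk-0@before-upgrade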

Has anyone solved this?  I'd like an option to migrate LXCs from local to
shared storage on a Linux/ZFS server.

Thanks,
-- 
Bryan Fields

727-409-1194 - Voice
http://bryanfields.net



Thread overview: 2+ messages
2023-10-29 10:16 Bryan Fields [this message]
2023-10-31  9:20 ` Fiona Ebner
