public inbox for pve-user@lists.proxmox.com
From: "Luis G. Coralle" <luiscoralle@fi.uncoma.edu.ar>
To: Marco Malavolti <marco.malavolti@gmail.com>
Cc: pve-user@lists.proxmox.com
Subject: Re: Proxmox VE 9.1 Homelab: ZFS, LXC vs VM, Docker Migration advice?
Date: Wed, 11 Mar 2026 12:02:02 -0300	[thread overview]
Message-ID: <CAG+yBPSt53TiXELPsfQ359WCFX7uDaxzFUsaHSQk451F+SH24A@mail.gmail.com> (raw)
In-Reply-To: <CAPu5L9HKvYbO7gkvvTPb0VQ8sVtz+A71g8OfA0f4zU1_AyfhNA@mail.gmail.com>

Hello,
I would like to share an overview of our current virtualization
infrastructure based on Proxmox.

*Compute cluster*

Our environment consists of a *Proxmox cluster with nine compute nodes*,
built from standard PC hardware. The nodes use a mix of *Intel Core i7
and Intel Core i9 processors*, with memory configurations ranging from
*64 GB to 128 GB of RAM*, depending on the specific machine. Each node
has a *local 1 TB SATA disk dedicated to the Proxmox installation*,
formatted with the *BTRFS filesystem*. These local disks are used
primarily for the operating system and local storage.


*Shared storage*

For shared storage we operate *three dedicated storage servers*, also built
from standard desktop hardware. These machines have relatively modest
specifications, for example *Intel Core i3 CPUs with around 4 GB of RAM*.
Each storage server contains *four 4 TB SATA disks*, configured as a
*software RAID-5 array using Linux mdadm*. The RAID arrays are formatted
with the *XFS filesystem*, and each storage server exports its storage via
*NFS*. The *Proxmox cluster mounts these NFS exports*, which are used as
shared storage for virtual machines and related workloads.
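On each storage server the setup follows the usual mdadm workflow; a
rough sketch for illustration (device names, mount point and export
subnet are assumptions, not our exact configuration):

```shell
# Build the RAID-5 array from the four 4 TB disks (device names assumed)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Format with XFS and mount
mkfs.xfs /dev/md0
mkdir -p /srv/vmstore
mount /dev/md0 /srv/vmstore

# Export over NFS to the compute network (subnet is an example)
echo '/srv/vmstore 192.168.10.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra
```

On the Proxmox side the export is then added as shared NFS storage, e.g.
with `pvesm add nfs vmstore --server <storage-ip> --export /srv/vmstore
--content images`.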


*Network configuration*

Each compute node is equipped with *four Ethernet network interfaces*,
which allows us to connect the servers to multiple physical networks and
separate traffic types if necessary. The infrastructure includes a
*managed switch with VLAN support*, where the corresponding VLAN
configurations are defined. This allows *virtual machines within Proxmox
to be assigned directly to specific VLANs*, depending on the network
segmentation required.
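On the node side this typically relies on a VLAN-aware Linux bridge in
/etc/network/interfaces; a minimal example (the interface name eno1 is an
assumption):

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

Each VM's virtual NIC then simply carries the desired VLAN tag (the `tag`
option on the vNIC), and the managed switch handles the rest.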


*VM templates and automation*

We also maintain a set of *virtual machine templates*, for example
*Debian 13 NetInstall*, which are used as base images for rapid VM
deployment. On top of these templates we run a set of *custom Bash
automation scripts* designed to manage large groups of virtual machines.
These scripts allow us to operate on *batches of VMs simultaneously*,
where a batch may contain *16, 32, 90, or more virtual machines*
depending on the environment. The scripts automate bulk VM creation from
templates, batch start/stop/restart operations, large-scale VM removal,
and lifecycle management of multiple VM groups. When new VM batches are
deployed, the automation also generates the corresponding *iptables rules
required for remote SSH access*, so that network access configuration
becomes part of the provisioning workflow.
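The batch operations above can be sketched roughly as follows; the
template ID, the VM ID range, and the 10.0.0.<vmid> addressing convention
are assumptions for illustration, not our actual scripts:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of batch VM provisioning on Proxmox.
set -euo pipefail

TEMPLATE_ID=9000   # e.g. the Debian 13 NetInstall template (assumed ID)

# Clone and start a contiguous batch of VMs from the template.
clone_batch() {
  local first=$1 count=$2 i
  for ((i = first; i < first + count; i++)); do
    qm clone "$TEMPLATE_ID" "$i" --name "vm-$i" --full
    qm start "$i"
  done
}

# Emit the iptables rule allowing remote SSH to one VM
# (assumes each VM's IP ends in its VMID, e.g. 10.0.0.<vmid>).
gen_ssh_rule() {
  local vmid=$1
  echo "iptables -A FORWARD -p tcp -d 10.0.0.${vmid} --dport 22 -j ACCEPT"
}

# Generate the SSH rules for a 16-VM batch starting at VMID 101.
for ((id = 101; id < 117; id++)); do
  gen_ssh_rule "$id"
done
```

In practice the same loop structure covers stop/restart and removal
(`qm stop`, `qm destroy`), so one script handles a whole batch's
lifecycle.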


In summary, the environment consists of:

   - *9 Proxmox compute nodes*
   - *Mixed Intel i7 / i9 CPUs*
   - *64–128 GB RAM per node*
   - *1 TB SATA BTRFS system disk per node*
   - *3 NFS storage servers*
   - *mdadm RAID-5 with 4 × 4 TB disks per storage server*
   - *XFS filesystem*
   - *Managed switch with VLAN segmentation*
   - *4 network interfaces per compute node*
   - *Automated VM provisioning using templates and Bash scripts*
   - *Automatic generation of iptables rules for SSH remote access*

This architecture has been designed to provide shared storage, flexible
VM provisioning, and automated operational management using simple and
cost-effective hardware.


Best regards.

On Sun, Mar 1, 2026 at 6:59 PM Marco Malavolti <marco.malavolti@gmail.com>
wrote:

> Good evening to all Proxmox enthusiasts!
>
> I’m a future new user of Proxmox VE 9.1 and Proxmox Backup Server, and I’m
> looking for what you think is the most long‑lasting solution within the
> Proxmox ecosystem for a simple homelab.
>
> This is my hardware for Proxmox VE 9.1:
>
> 1) CWWK 12th Gen Intel Firewall Mini PC Alder Lake i3 N305 Fanless Soft
> Router Proxmox DDR5 4800MHz 4xi226-V 2.5G (
>
> https://cwwk.net/products/12th-gen-intel-firewall-mini-pc-alder-lake-i3-n305-8-core-n200-n100-fanless-soft-router-proxmox-ddr5-4800mhz-4xi226-v-2-5g?_pos=1&_sid=cc36e8016&_ss=r&variant=44613920162024
> )
>
> 2) 32GB of Crucial DDR5 RAM (https://amzn.eu/d/00DxTBLh)
>
> 3) 2TB NVMe M.2: https://amzn.eu/d/0f2XIxHV
>
> 4) Legrand Keor Multiplug LG-310082 800VA/480W UPS (
> https://amzn.eu/d/06coyhv5)
>
> For Proxmox Backup Server I have a mini PC with an Intel i5-5250U 4‑core
> CPU, 8GB RAM and a 1TB Samsung 870 SSD.
>
> I also have another external 1TB hard drive for an additional backup to
> support a 3‑2‑1 strategy.
>
> At the moment, all my applications run directly on a mini PC with Debian 12
> and Docker Compose: Immich, Nextcloud, Pi-hole, ProjectSend, Nginx Proxy
> Manager.
>
> In the future I would like to move everything to Proxmox VE and PBS, but
> I’d like to do it wisely. I’d like to set up a system that is both
> high‑performance and long‑lasting. I’ve heard about ZFS, but I’d like to
> better understand what is the best choice in my situation. LXC or VMs? I
> understand that with LXC I would have a better level of separation than
> putting everything in a single VM with Docker. Maybe you’ve already
> discussed these topics before, and if you can point me to those threads
> I’ll be happy to read them with curiosity.
>
> I came here to learn and move toward the best possible setup. Many thanks
> for any help, for your experience, and for what you share!
>
> Marco Malavolti
>


-- 
Luis G. Coralle
Secretaría de TIC
Facultad de Informática
Universidad Nacional del Comahue
(+54) 299-4490300 Int 647


Thread overview: 4+ messages
2026-03-01 21:59 Marco Malavolti
2026-03-02  6:25 ` Andrei Boros
2026-03-11 15:02 ` Luis G. Coralle [this message]
2026-03-11 19:38   ` Andrei Boros

Service provided by Proxmox Server Solutions GmbH