public inbox for pve-user@lists.proxmox.com
* [PVE-User] Adding VEs and containers to 7.4
@ 2023-10-28 16:13 Oboe Lova
  2023-10-28 18:49 ` Uwe Sauter
  0 siblings, 1 reply; 5+ messages in thread
From: Oboe Lova @ 2023-10-28 16:13 UTC (permalink / raw)
  To: pve-user

Greetings to listers,

I have installed PVE 7.4 and am finding no success installing guest
images on what looks like a valid install, using the web gui at port 8006.
Specifically, the best I can do is create a VM using a Windows 7 DVD in
the node's DVD drive and start an install session.  That runs for a while,
then stalls while I watch in a console.  After that, attempts to stop, reset,
etc. the VM (so I can remove it) via the console and other ways are ignored, though the
gui is still up and not frozen.  Similar symptoms trying various Linux
distro DVDs, from either ISO images or burned install disks.  I also fumbled
around until I managed to upload an ISO from my laptop to a second internal
HDD but I can't find a way to load it into a new VM.

Goal is a homelab and a separate Bookworm install as VMs.  So what I obviously need
is documentation on the definitions and caveats for each option in the web gui.
Examples: how to create a VM from a qcow2 image.  Functionally, what does the QEMU
checkbox do, since I get a console either way?  I expect command-line
maneuvers will be required.

I have read the current wiki and tried the help screens but haven't found
anything that gives me a detailed recipe.  I would also like to use a
three x 500 GB disk ZFS RAID but can't find when or where I do the ZFS
setup.  No option during install except ext4 partitions.  Hardware: Dell XPS
8500, i7, 16 GB RAM, 4 physical cores / 8 threads.  Allocating 2 cores and
the default 2048 MB of memory per VM, on the default LVM storage.

Tnx in advance


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PVE-User] Adding VEs and containers to 7.4
  2023-10-28 16:13 [PVE-User] Adding VEs and containers to 7.4 Oboe Lova
@ 2023-10-28 18:49 ` Uwe Sauter
       [not found]   ` <CAC04G9iKBbQiF7qh9ieSpQvfdw5eybJ6ik7OA=FQCwrMPkLxfA@mail.gmail.com>
  0 siblings, 1 reply; 5+ messages in thread
From: Uwe Sauter @ 2023-10-28 18:49 UTC (permalink / raw)
  To: Proxmox VE user list, Oboe Lova

Hi,

I don't know whether you are aware, but PVE 8.0 was released back in June.

Also, there generally is a help button in the various menus and a documentation button left of the "create vm" and 
"create ct" buttons.

Regarding when to create a ZFS pool: usually you can do that during the installation of PVE. You need to change the 
filesystem and the disks used by the installer. If that is not possible with your setup, there seems to be something 
going wrong.
If you'd like to create a ZFS pool on an already installed system, go to "Datacenter -> your server", then select "Disks -> ZFS". 
You should be able to create a new pool **if** you have unused disks in your system.
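
For reference, the same can be done from the shell. A rough sketch, assuming three spare disks at /dev/sdb, /dev/sdc and 
/dev/sdd and a pool name of "tank" (both are placeholders; double-check the device names with "lsblk" first, this destroys 
whatever is on those disks):

  zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd   # three-disk raidz1 pool
  pvesm add zfspool tank --pool tank                    # register it as a PVE storage for guest disks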


Regards,

	Uwe




^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PVE-User] Adding VEs and containers to 7.4
       [not found]   ` <CAC04G9iKBbQiF7qh9ieSpQvfdw5eybJ6ik7OA=FQCwrMPkLxfA@mail.gmail.com>
@ 2023-10-28 21:05     ` Uwe Sauter
       [not found]       ` <CAC04G9hb_fQYVu6s-VKJN_j_4Lb+ShP1QeJSW+mx0foD11YOKg@mail.gmail.com>
  0 siblings, 1 reply; 5+ messages in thread
From: Uwe Sauter @ 2023-10-28 21:05 UTC (permalink / raw)
  To: Oboe Lova, Proxmox VE user list



Am 28.10.23 um 22:15 schrieb Oboe Lova:
> Thanks Uwe,
> 
> I am glad but surprised to hear from anyone on the list, because my post was rejected by an auto-email from GitLab saying 
> it could not handle my message because "We could not tell what it was for.  Please use the web interface or post an 
> issue."  Do you know precisely what that might mean?  I could not sign up for the forum without a subscription.

I think someone subscribed to this list with an email address that points to his/her Gitlab instance. When the list then 
tried to deliver my answer to said address it caused the error message.

>   Anyway, I did know about 8.0-2 in the repository, which I burnt to a DVD  and tried first.

Depending on your hardware you will have more fun using a USB stick onto which you write the ISO using the "dd" command.
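
Something along these lines should do, assuming the downloaded file is proxmox-ve_8.0-2.iso and the stick shows up as 
/dev/sdX (verify the device name with "lsblk" first, dd will overwrite it without asking):

  dd if=proxmox-ve_8.0-2.iso of=/dev/sdX bs=4M status=progress conv=fsync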

> But that quickly got me
> out into the weeds, so I have not yet gone back to 8.x to see if what I have learned so far will give me better 
> results.  Are you saying that I will have a chance to start the zfs install from the initial proxmox install with 8.x?  
> If so, does the zfs configuration of the 3 disks come later with the GUI or command line?  Does zfs raid come later with the 
> GUI or command line? Can I add a third disk formatted with zfs later, as the wiki says, and what are the steps?

In all of my installations of PVE (I started with PVE 6.x) I had no trouble selecting ZFS during installation.
This picture (https://pve.proxmox.com/pve-docs/images/screenshot/pve-select-target-disk.png) shows how the installer 
should look. If this is not the case then your hardware might lack something…
Are all disks recognized in the BIOS or when booting a Debian live system? If so, wipe them clean of all former 
filesystem and partition signatures using the "wipefs" tool.
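
For example, assuming the spare disks are /dev/sdb and /dev/sdc (again, confirm the names with "lsblk", this is destructive):

  wipefs --all /dev/sdb /dev/sdc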

> I was trying to follow Andreas Spiess's (aka the guy with the Swiss accent) youtube video on setting up a home lab. He 
> was using 7.2 and the vid was recent. Most of my issues went away when I tried 7.4.  What I did not know at the time is 
> that either version requires UEFI boot or the HDD icons are not shown in the web gui; only the 3 working partitions show up 
> on the boot HDD.  I examined and deleted them on the /dev/sda HDD to start over on a clean system many times. Just 
> yesterday I found some documentation that said GRUB boots give rise to disk mounting problems.  That made sense, so I 
> tried UEFI boot and the HDD icons showed up in the server view.
> I did navigate to the HELP page but only simple instructions were given and I could not establish a running VM by simply 
> following those HELP steps.  Through experimentation I got two 500 GB HDDs to show up as icons (pool candidates?) in the 
> gui.  I also got an iso image of bookworm to upload to VM storage from my laptop DVD drive to somewhere on pve.home.  
> What I don't understand now is how to get the image to completely install after launching with the START command.  I did not 
> enable start on boot, so I could reboot the node and crash out of the VM.  I did so because VM hangs do not respond to 
> STOP, HARD STOP, etc. With the VM stopped on node reboot I could at least remove it and start over.

Given the age of your hardware (a quick search revealed that the machine you are using has an Ivy Bridge generation 
Intel CPU which is now 11 years old) I suspect that your troubles come from the CPU type that is selected for your VM. The 
Ivy Bridge chip probably does not have all features that the default emulated CPU ("x86-64-v2-AES") will propagate to 
the VM.
When creating the VM configuration try to select CPU type "host" at the bottom of the list.
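
You can also change this later from the shell on the PVE host, e.g. (assuming the VM got ID 100):

  qm set 100 --cpu host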

> One question I have is why QEMU is optional.

I'm not quite certain what you mean. QEMU started as a project that emulated certain CPUs in software. When hardware 
began to support virtualization out of the box, the kernel-based virtual machine (KVM) allowed better performance, and 
QEMU adopted the usage of KVM where possible.

So when you nowadays run an x86-64 VM on an x86-64 CPU, you will use the hardware virtualization features, provided they 
are not disabled in the BIOS. But if you run an ARM VM on an x86-64 CPU, QEMU will still emulate the VM's CPU architecture in 
software.
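
A quick sanity check on the host to see whether hardware virtualization is available to KVM at all:

  grep -c -E '(vmx|svm)' /proc/cpuinfo   # should print a number greater than 0 (vmx = Intel VT-x, svm = AMD-V)
  lsmod | grep kvm                       # kvm plus kvm_intel (or kvm_amd) should be loaded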

Do you actually mean libvirt? Libvirt is a project on top of different virtualization technologies and container 
runtimes that allows you to save a VM's configuration to a file. Libvirt will use QEMU and KVM under the hood.
But in PVE libvirt is not used, because the Proxmox devs decided to talk directly to QEMU using their own framework.

> Isn't it needed for KVM or does the other gui method take over?  I am
> referring to virtXXX.man which I previously tried on a direct-to-Debian QEMU-KVM install attempt.  When trying to prep for 
> Home Assistant direct to Debian Bookworm (as a direct QEMU-KVM install) I downloaded the QCOW2 file and tried QEMU to 
> prepare it for upload to my pve.  QEMU converted it to an img file which was not recognized by Debian or Proxmox. I 
> tried to do an import to disk from the node DVD drive but the gui would not recognize it; maybe from issues with grub boot.

If you download VM images in the qcow2 format, you will need to convert these images into the raw image file format; see 
"man qemu-img". The raw image can then be used as the virtual HDD for the VM.

>   Are you saying that I will have a chance to start the zfs (instead of ext4?) install to the boot disk from the initial 
> proxmox install with 8.x?  If so, does zfs configuration come later with the GUI or command line?  Does zfs raid come 
> later with the GUI or command line? Do I add an optional third or fourth disk to the zfs pool later as the wiki says, 
> or install the HDD before the initial node installation?

Before going further down the debugging road I need you to familiarize yourself with the concepts of ZFS.
Even if you succeed later on in getting more disks recognized, you will be limited in your possibilities compared to when 
all disks are recognized during installation. (E.g. you cannot convert a ZFS RAID 1 pool into a ZFS RAIDZ2 pool.)

My recommendation is: before trying to install VMs, make sure that your host system is running the way you want it to 
run.

Regards,

	Uwe

> Maybe one more try with 8.0-2 will be more straightforward?  Which log files will reveal problems, and where will they 
> be stored?
> 
> Armed with the above info I ought to make better progress.  Your help is much appreciated.
> 
> Vielen Dank,
> 
> Chuck in Libby, MT USA



^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PVE-User] Adding VEs and containers to 7.4
       [not found]       ` <CAC04G9hb_fQYVu6s-VKJN_j_4Lb+ShP1QeJSW+mx0foD11YOKg@mail.gmail.com>
@ 2023-10-29  8:43         ` Uwe Sauter
       [not found]           ` <CAC04G9jdQ6sEjMWHyiCFpbPA7n2fqA7MLZnBcfSMP_D67vyncw@mail.gmail.com>
  0 siblings, 1 reply; 5+ messages in thread
From: Uwe Sauter @ 2023-10-29  8:43 UTC (permalink / raw)
  To: Oboe Lova, Proxmox VE user list

First of all: please do not reply only to me; address your answers to the list as well, so that other folks can help 
you too.

Second: it usually is good habit to write answers or explanations below the section they refer to. It makes life 
much easier for everyone following along when they don't need to scroll up and down just to understand what you are talking about.

Third: always ask yourself what a reader without your knowledge of your specific situation would know about your 
setup. If things that are special don't get mentioned, then no one but you will know them. Better to explain something that 
seems trivial to you than to let others (falsely) assume things, which might lead into directions that are orthogonal to 
your issue.

Those three things will get you more success in your quest to get help from others.


Am 29.10.23 um 01:58 schrieb Oboe Lova:
> The only laptop in my installation is the one running the web gui to the Dell XPS 8500 tower that hosts proxmox.  The 
> Lenovo W510 laptop is an i7 Q720, 4 cores/8 threads, running Debian 12 with the latest updates.  I don't understand how it 
> could affect the installation since it is simply a console running HTML to configure pve.home, and to use the noVNC 
> console screen, which always stayed running even though the VM froze.  Is there no detection reported to the hypervisor 
> when the VM quits?  Is that why the VM shows in the gui as still running even though the VM is unresponsive to the 
> console, sometimes reporting a communication issue?

So, this tells me that your setup wasn't explained in enough detail before.
Please give a full overview of your setup and its intended function.

If you refer to the installation of PVE, then your Lenovo laptop should have no role whatsoever, because you have display, 
mouse and keyboard attached to your Dell tower.

If you refer to the installation of VMs, then your Lenovo laptop will just use the browser to display the graphics output 
of your VMs. As long as your VM doesn't have a serial console configured, simply clicking on "Console" and selecting 
noVNC from the "Console" dropdown menu are equivalent and the way to go. (Configuring a VM with a serial console and 
using that as another channel to get access to the VM is an advanced topic that I don't want to delve into right now.)

> The Dell tower has an Intel i7 3770 cpu with 4 cores/8 threads running 3.4 GHz per core.  Yes, it is Ivy Bridge.  My 
> research online and the virtualization settings in the BIOS led me to believe it conformed to the required Intel spec.  
> Web pages with specs say all Intel chips starting 2006 have "EM64T" and others talk about EM64T being borrowed from 
> AMD.  The BIOS shows Intel Hyperthreading enabled, Intel SpeedStep enabled, Intel Virtualization Technology enabled, CPU XD 
> Support disabled, Limit CPUID disabled, Secure Boot disabled.

Small example: your Ivy Bridge CPU has the AVX instruction set. The next generation of Intel CPUs (Haswell) gained the 
AVX2 instruction set. If you now configure your VM to use a virtual CPU that also provides AVX2 support, although your 
Ivy Bridge doesn't, this will lead to things like hanging or crashing VMs, because the software inside gets the wrong 
answer to the question "what capabilities does the CPU provide?"

So on older hardware it is crucial to select the correct CPU model for the VM, because the default might assume too much.
The easiest for you is to select CPU type "host" when creating the VM.
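
You can check what the host actually offers, e.g.:

  grep -o -m1 -w avx  /proc/cpuinfo   # prints "avx" on Ivy Bridge
  grep -o -m1 -w avx2 /proc/cpuinfo   # prints nothing on Ivy Bridge, "avx2" on Haswell and newer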

> Yes, libvirt is what I meant.
> 
> Should I enable QEMU in the create VM checkbox?

Again I'm unsure what you are talking about. If you create a VM in PVE you will be using QEMU. Do you mean the "QEMU 
Guest Agent" check box? If so, I'd recommend enabling it, because it gives QEMU a communication channel into the VM for things 
like reading the configured IP addresses (so they can be shown in the VM overview in the WebUI) or instructing the VM to 
flush all cached data to disk just before a backup is taken.
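
Note that ticking the box is only half of it, the agent also has to run inside the guest. A rough sketch, assuming VM ID 100 
and a Debian guest:

  qm set 100 --agent enabled=1              # on the PVE host (same as the check box)
  apt install qemu-guest-agent              # inside the guest
  systemctl enable --now qemu-guest-agent   # inside the guest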

> But if you are certain Ivy Bridge does not conform then game over.

I am very certain that Ivy Bridge is good enough. I have run multiple PVE clusters on Sandy Bridge (one generation 
older) hardware for a long time.

> Instead I will attempt the IOTstack directly on Debian
> 12 or 11. Spiess's project allows him to emulate Pi boards using IOTstack on a Debian VM under proxmox 7.2.  All I care 
> about is IOTstack and open MQTT talking with ESP8266 boards over wifi.  But maybe it is worth one more try on 8.x.
> I tried ventoy on a 250 GB USB backup device aka Seagate FreeAgent but it did not install, probably because it is a real 
> hdd inside. 

This has nothing to do with whether there is a spinning HDD or a flashy SSD inside the USB case. If the installation of 
Ventoy fails, there is another issue at hand.

> I simply burned the dvd since I had some blanks available and xfburn has always worked.  People keep giving
> me perfectly good computers so I have many duplicate HDDs to play with RAID, but that is just a toy to try for faster 
> reads with zfs (per the wiki).  I can always do a poor man's on-demand NAS by simply mounting an nfs share to another linux 
> machine.  My clonezilla backups will go faster then.
> 
> Yes, all hdds are recognized in the BIOS and show as icons in the web gui.  I remove old partitions and tables, then create a 
> new table and create an ext4 partition.

And here's one of the issues. ZFS will need empty disks (or at least an empty partition). You cannot put ZFS onto an 
existing filesystem.

ZFS combines disk management, RAID management and volume/filesystem management into one. For that it needs empty disks…


> Maybe that is why all
> I see offered in the disk options is /dev/sda as ext4.  But why does FSTAB not show /dev/sda, /dev/sdb?  Security feature?

Just because some partitions have a filesystem on top doesn't mean that they need to be configured to be mounted.
Indeed, on my personal PVE setup that uses ZFS there is only one entry in /etc/fstab, and that is for /proc.
And then there's the thing with systemd having its own mechanism to handle mountpoints…
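
If you want to see what is actually mounted and by what, try e.g.:

  findmnt                               # tree of all current mounts, regardless of fstab
  zfs list -o name,mountpoint,mounted   # ZFS datasets carry their own mountpoints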

> FWIW: for some strange reason dd is not included in Debian 12 installs.  Can't find it in apt or the software repositories 
> that are included with the distro.  Typically apt suggests a newer substitute when appropriate, but not this time.

I've just checked and I'm baffled that this is true: there is no separate "dd" package. Yet dd is included anyway, as /usr/bin/dd (it is part of coreutils).
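
You can verify this on your Debian box yourself:

  command -v dd     # prints /usr/bin/dd
  dpkg -S /bin/dd   # prints "coreutils: /bin/dd", dd ships with the essential coreutils package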


Regards,

	Uwe





^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PVE-User] Adding VEs and containers to 7.4
       [not found]           ` <CAC04G9jdQ6sEjMWHyiCFpbPA7n2fqA7MLZnBcfSMP_D67vyncw@mail.gmail.com>
@ 2023-10-29 13:33             ` Uwe Sauter
  0 siblings, 0 replies; 5+ messages in thread
From: Uwe Sauter @ 2023-10-29 13:33 UTC (permalink / raw)
  To: Oboe Lova, Proxmox VE user list



Am 29.10.23 um 13:28 schrieb Oboe Lova:
> I accept your criticisms and apologize if my tone came across as hostile in any way.  I did consider a comments-in-line 
> approach, as you did, but abandoned it.  I had already given up on proxmox before your first response, due to the gitlab 
> thing being the last straw after several days of fruitless web searching and experimentation. It did not register with 
> me that I really did have access to the list.

I wasn't offended by your tone. It's just that if you don't answer to the list (as you again forgot; sorry for pointing 
that out) you limit yourself to a single person instead of a much bigger number. And that person can always just stop 
communicating with you…

> 
>     So on older hardware it is crucial to select the correct CPU model for the VM because the default might assume too much.
>     The easiest for you is to select CPU type "host" when creating the VM.
> 
> 
> I agree violently! But let's remember my theme for questioning is "Why don't I see these things in the gui when others 
> do?"  I can't be any more specific than I was in my last post, e.g. how do I set the cpu type if I don't know or understand 
> the choices? This is just one example where the proxmox documentation falls short by not detailing ALL the 
> considerations for setting up a VM on older equipment while encouraging me to do so in the wiki.
> 

I can only assume what you see and check against my own setup. There is one thing that you might have missed, which is the 
"advanced" checkbox right next to the back and next buttons at the lower edge of the create VM wizard window.

Attached are two screenshots. Please ignore the German labels; you should still be able to figure out where this is located.

ISO.png will show you where to select the ISO for VM installation and where the advanced checkbox is located.
CPU.png shows where you can select the CPU type for the VM.

> 
>     This has nothing to do with whether there is a spinning HDD or a flashy SSD inside the USB case. If the installation of
>     Ventoy fails, there is another issue at hand.
> 
> 
> I tried xfat instead of fat32.

Well, there it is again. Ventoy requires a certain setup, which means it will create a small partition on the USB drive 
for the Ventoy bootloader and another FAT32-formatted partition where the ISOs should be stored.
As Ventoy can only read FAT32, when you formatted that partition with XFS you effectively hid the partition from Ventoy.

But I'm pretty certain that this is all described on the Ventoy website.


> Maybe ventoy didn't like that.  Rather than reformat the drive, it was easier and faster
> to burn a dvd.  For uploading a VM image the gui offers two different ways to try the cd/dvd drives. 

I'm not aware of different ways. But you need some storage (usually "local") configured for ISO storage. Left column -> 
Datacenter, middle column -> Storage. There should be ID="local" and Content="VZDump backup files, ISO images, 
container templates".

In order to upload an ISO I navigate to left column -> my host -> local storage, middle column -> ISO Images, and then 
there is a button "Upload".
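
If the browser upload gives you trouble, you can also copy the ISO straight onto the host; for the default "local" 
directory storage that would be something like this (the ISO name is a placeholder, pve.home is your host):

  scp debian-12-netinst.iso root@pve.home:/var/lib/vz/template/iso/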

> But I could not
> select the VM iso from either one. 

Once you have uploaded an ISO to local storage you should be able to select it during VM creation, see the attached ISO.png.

>     And here's one of the issues. ZFS will need empty disks (or at least an empty partition). You cannot put a ZFS onto an
>     existing filesystem.
> 
> 
> The wiki could have mentioned that.

In order to use ZFS you should have a basic understanding of ZFS. The wiki is not the right place to provide that kind 
of information; you can get that e.g. at openzfs.org.

Although Proxmox provides an easy way to use ZFS, ZFS itself is more complex and demanding than a simple LVM + XFS/ext4 
setup and thus requires the user to already have that kind of knowledge.

No offense, but I get the feeling that you are trying to learn too many new things in parallel. While that might be a 
noble effort, it also makes the whole quest more demanding than it would be if you took one step at a time.

Regards,

	Uwe


> 
>     ZFS combines disk management, RAID management and volume/filesystem management into one. For that it needs empty
>     disks…
> 
> Ok, got it.
> 
> 
> 
>      > Maybe that is why all
>      > I see offered in the disk options is /dev/sda as ext4.  But why does FSTAB not show /dev/sda, /dev/sdb?  Security
>      > feature?
> 
>     Only because some partitions have a filesystem on top doesn't mean that they need to be configured to be mounted.
>     Indeed on my personal PVE that uses ZFS there is only one entry in /etc/fstab and that is for /proc.
>     And then there's the thing with systemd having its own mechanism to handle mountpoints…
> 
> 
>    Yes I surmised pve was doing something special.
> 
> 
> I will soldier on with this new info and report back to the list, informing you of any progress.
> Most gratefully yours,
> 
> Chuck
> 
> 
>      > FWIW: for some strange reason dd is not included in Debian 12 installs.   Can’t find  in apt or software
>     repositories
>      > that are included with the distro.  Typically apt suggests a newer substitute when appropriate but not this time.
> 
>     I've just checked and I'm baffled that this is true. Yet dd is included just as well as /usr/bin/dd.
> 
> 
> Good news.

From: Ruud Bleeker <rbleeker@gmail.com>
Date: Mon, 30 Oct 2023 02:07:59 +0100
To: pve-user@lists.proxmox.com
Subject: [PVE-User] basic packages not installed in CentOS 9 container?

Hi,

I'm new to this mailing list and fairly new to Proxmox and lxc
containers. So far I've experimented with Debian and Ubuntu containers
on Proxmox and that all went as expected, I could easily access them
through ssh using a keypair and manage them remotely or through
Ansible.

As of this week I've started to spin up a few CentOS 9 containers
because I wanted to experiment with some Red Hat related stuff. To my
surprise I was unable to ssh into any of the new containers, which
were created using the default CentOS 9 Stream container template
(centos-9-stream-default_20221109_amd64.tar.xz) as downloaded from the
Proxmox container repository. It appears the package openssh-server is
not installed in this template. I found that the same goes for the
packages that provide the manual pages ("man-pages" and "man-db"). Am
I wrong to expect to find these packages installed by default? IMHO an
ssh server should be installed on all Linux distributions geared
towards server use and remote management, such as CentOS.

Thanks in advance for any answers.
-- 
Idleness is not doing nothing. Idleness is being free to do anything.
  - Floyd Dell



^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2023-10-29 13:34 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-28 16:13 [PVE-User] Adding VEs and containers to 7.4 Oboe Lova
2023-10-28 18:49 ` Uwe Sauter
     [not found]   ` <CAC04G9iKBbQiF7qh9ieSpQvfdw5eybJ6ik7OA=FQCwrMPkLxfA@mail.gmail.com>
2023-10-28 21:05     ` Uwe Sauter
     [not found]       ` <CAC04G9hb_fQYVu6s-VKJN_j_4Lb+ShP1QeJSW+mx0foD11YOKg@mail.gmail.com>
2023-10-29  8:43         ` Uwe Sauter
     [not found]           ` <CAC04G9jdQ6sEjMWHyiCFpbPA7n2fqA7MLZnBcfSMP_D67vyncw@mail.gmail.com>
2023-10-29 13:33             ` Uwe Sauter
