public inbox for pve-user@lists.proxmox.com
 help / color / mirror / Atom feed
* [PVE-User] Proxmox questions/features
@ 2021-07-06  9:00 Alex K
  2021-07-08  7:40 ` Alex K
  2021-07-08 11:33 ` Lindsay Mathieson
  0 siblings, 2 replies; 17+ messages in thread
From: Alex K @ 2021-07-06  9:00 UTC (permalink / raw)
  To: Proxmox VE user list

Hi all,

I've been assessing Proxmox for the last couple of days, coming from
previous experience with oVirt. The intent is to switch to this solution if
most of the features I need are covered.

The questions below may have been asked before, but searching the forum
and online I was not able to find any specific reference, or I am not sure
whether the feedback is still relevant.

- When adding a gluster volume, I see that the UI provides an option for a
secondary server. In case I have a replica-3 GlusterFS setup where I need
to add two backup servers as below, how can this be defined?

backup-volfile-servers=node1,node2
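
For reference, outside of any UI this option is commonly passed when
mounting a Gluster volume via fstab; the hostnames and volume name below are
placeholders, and recent Gluster releases expect a colon-separated list for
`backup-volfile-servers`:

```
# /etc/fstab sketch (hypothetical hosts node0/node1/node2, volume "vms"):
node0:/vms  /mnt/vms  glusterfs  defaults,_netdev,backup-volfile-servers=node1:node2  0 0
```

If the primary volfile server (node0) is down at mount time, the client
falls back to node1 and then node2 to fetch the volume file.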


- I've read that QEMU uses *libgfapi* when accessing VM disks on
GlusterFS volumes. Can someone confirm this? I tried to find the VM
configs that might reference this detail but was not able to do so.

- I have not seen any *load balancing/scheduling* feature being provided,
and looking through the forum it seems that this is still missing. Is there
any future plan to provide such a feature? By load balancing I mean
automatically balancing VMs across the available hosts/nodes depending on a
set policy (CPU load, memory load or other).


Thanx for reading and appreciate any feedback,

Alex


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-06  9:00 [PVE-User] Proxmox questions/features Alex K
@ 2021-07-08  7:40 ` Alex K
  2021-07-08  7:54   ` Alex K
  2021-07-08 20:22   ` alexandre derumier
  2021-07-08 11:33 ` Lindsay Mathieson
  1 sibling, 2 replies; 17+ messages in thread
From: Alex K @ 2021-07-08  7:40 UTC (permalink / raw)
  To: Proxmox VE user list

Hi all,

Does anyone have any info to share on the below?
Many thanx

On Tue, Jul 6, 2021 at 12:00 PM Alex K <rightkicktech@gmail.com> wrote:

> Hi all,
>
> I've been assessing Proxmox for the last couple of days, coming from a
> previous experience with oVirt. The intent is to switch to this solution if
> most of the features are covered.
>
> The below questions might have been put again, though searching the forum
> or online I was not able to find any specific reference or not sure if the
> feedback is still relevant.
>
> - When adding a gluster volume I see that the UI provides an option for a
> secondary server. In case I have a 3 replica glusterfs setup where I need
> to add two backup servers as below, how can this be defined?
>
> backup-volfile-servers=node1,node2
>
>
> - I've read that qemu does use *libgfapi* when accessing VM disks on
> glusterfs volumes. Can someone confirm this? I tried to find out the VM
> configs that may reference this detail but was not able to do so.
>
> - I have not seen any *load balancing/scheduling* feature being provided
> and looking through the forum it seems that this is still missing. Is there
> any future plan to provide such a feature. By load balancing I mean to
> automatically balance VMs through the available hosts/nodes depending on a
> set policy (CPU load, memory load or other).
>
>
> Thanx for reading and appreciate any feedback,
>
> Alex
>
>
>
>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-08  7:40 ` Alex K
@ 2021-07-08  7:54   ` Alex K
  2021-07-08 11:39     ` Lindsay Mathieson
  2021-07-08 20:22   ` alexandre derumier
  1 sibling, 1 reply; 17+ messages in thread
From: Alex K @ 2021-07-08  7:54 UTC (permalink / raw)
  To: Proxmox VE user list

Checking the qemu process for the specific VM I get the following:

/usr/bin/kvm -id 100 -name Debian -no-shutdown -chardev
socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon
chardev=qmp,mode=control -chardev
socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon
chardev=qmp-event,mode=control -pidfile /var/run/qemu-server/100.pid
-daemonize -smbios type=1,uuid=667352ae-6b86-49fc-a892-89b96e97ab8d -smp
1,sockets=1,cores=1,maxcpus=1 -nodefaults -boot
menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg
-vnc unix:/var/run/qemu-server/100.vnc,password -cpu
kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep -m 2048 -device
pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device
pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device
vmgenid,guid=3ce34c1d-ed4f-457d-a569-65f7755f47f1 -device
piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device
usb-tablet,id=tablet,bus=uhci.0,port=1 -device
VGA,id=vga,bus=pci.0,addr=0x2 -chardev
socket,path=/var/run/qemu-server/100.qga,server,nowait,id=qga0 -device
virtio-serial,id=qga0,bus=pci.0,addr=0x8 -device
virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -object
rng-random,filename=/dev/urandom,id=rng0 -device
virtio-rng-pci,rng=rng0,max-bytes=1024,period=1000,bus=pci.1,addr=0x1d
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi
initiator-name=iqn.1993-08.org.debian:01:094fb2bbde3 -drive
file=/mnt/pve/share/template/iso/debian-10.5.0-amd64-netinst.iso,if=none,id=drive-ide2,media=cdrom,aio=threads
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101
-device virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5 *-drive
file=gluster://node0/vms/images/100/vm-100-disk-1.qcow2*,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on
-device
scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100
-netdev
type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on
-device
virtio-net-pci,mac=D2:83:A3:B9:77:2C,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102
-machine type=pc+pve0

How can one confirm that *libgfapi* is being used to access the VM disk,
and if it is not used, how can libgfapi be enabled?
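
One quick heuristic, based on the process dump above: a `gluster://` URL in
the `-drive` argument means QEMU opens the disk through its gluster block
driver (libgfapi), whereas a plain filesystem path such as `/mnt/pve/...`
would indicate access through a FUSE mount. A minimal shell sketch of that
check (the sample string is taken from the dump; on a live node the same
test could be run against the output of `qm showcmd <vmid>`):

```shell
# Classify a -drive file= value: a gluster:// URL implies libgfapi,
# a plain path implies access through a FUSE mount.
cmdline="file=gluster://node0/vms/images/100/vm-100-disk-1.qcow2,if=none"
case "$cmdline" in
  *gluster://*) echo "libgfapi" ;;  # prints "libgfapi" for this sample
  *)            echo "fuse" ;;
esac
```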

Thanx,
Alex


On Thu, Jul 8, 2021 at 10:40 AM Alex K <rightkicktech@gmail.com> wrote:

> Hi all,
>
> Anyone has any info to share for the below?
> Many thanx
>
> On Tue, Jul 6, 2021 at 12:00 PM Alex K <rightkicktech@gmail.com> wrote:
>
>> Hi all,
>>
>> I've been assessing Proxmox for the last couple of days, coming from a
>> previous experience with oVirt. The intent is to switch to this solution if
>> most of the features are covered.
>>
>> The below questions might have been put again, though searching the forum
>> or online I was not able to find any specific reference or not sure if the
>> feedback is still relevant.
>>
>> - When adding a gluster volume I see that the UI provides an option for a
>> secondary server. In case I have a 3 replica glusterfs setup where I need
>> to add two backup servers as below, how can this be defined?
>>
>> backup-volfile-servers=node1,node2
>>
>>
>> - I've read that qemu does use *libgfapi* when accessing VM disks on
>> glusterfs volumes. Can someone confirm this? I tried to find out the VM
>> configs that may reference this detail but was not able to do so.
>>
>> - I have not seen any *load balancing/scheduling* feature being provided
>> and looking through the forum it seems that this is still missing. Is there
>> any future plan to provide such a feature. By load balancing I mean to
>> automatically balance VMs through the available hosts/nodes depending on a
>> set policy (CPU load, memory load or other).
>>
>>
>> Thanx for reading and appreciate any feedback,
>>
>> Alex
>>
>>
>>
>>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-06  9:00 [PVE-User] Proxmox questions/features Alex K
  2021-07-08  7:40 ` Alex K
@ 2021-07-08 11:33 ` Lindsay Mathieson
  1 sibling, 0 replies; 17+ messages in thread
From: Lindsay Mathieson @ 2021-07-08 11:33 UTC (permalink / raw)
  To: pve-user

On 6/07/2021 7:00 pm, Alex K wrote:
> - I've read that qemu does use *libgfapi* when accessing VM disks on
> glusterfs volumes. Can someone confirm this? I tried to find out the VM
> configs that may reference this detail but was not able to do so.


libgfapi is indeed used.


>
> - I have not seen any *load balancing/scheduling* feature being provided
> and looking through the forum it seems that this is still missing. Is there
> any future plan to provide such a feature. By load balancing I mean to
> automatically balance VMs through the available hosts/nodes depending on a
> set policy (CPU load, memory load or other).


I don't believe there are any plans for load balancing; there are too many
variables at play to usefully automate it, I believe.

-- 

Lindsay




^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-08  7:54   ` Alex K
@ 2021-07-08 11:39     ` Lindsay Mathieson
  2021-07-08 13:53       ` Alex K
  0 siblings, 1 reply; 17+ messages in thread
From: Lindsay Mathieson @ 2021-07-08 11:39 UTC (permalink / raw)
  To: pve-user

On 8/07/2021 5:54 pm, Alex K wrote:
> -drive
> file=gluster://node0/vms/images/100/vm-100-disk-1.qcow2*,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on

That specifies use of the gluster driver (libgfapi), as documented here:

https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/qemu-integration/

-- 
Lindsay




^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-08 11:39     ` Lindsay Mathieson
@ 2021-07-08 13:53       ` Alex K
  0 siblings, 0 replies; 17+ messages in thread
From: Alex K @ 2021-07-08 13:53 UTC (permalink / raw)
  To: Proxmox VE user list

On Thu, Jul 8, 2021 at 2:40 PM Lindsay Mathieson <
lindsay.mathieson@gmail.com> wrote:

> On 8/07/2021 5:54 pm, Alex K wrote:
> > -drive
> >
> file=gluster://node0/vms/images/100/vm-100-disk-1.qcow2*,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on
>
> That specifies to use the gluster driver (libgfpapi) as documented here:
>
>
> https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/qemu-integration/

Thank you for the feedback. Much appreciated.

>
>
> --
> Lindsay
>
>
> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-08  7:40 ` Alex K
  2021-07-08  7:54   ` Alex K
@ 2021-07-08 20:22   ` alexandre derumier
  2021-07-09  8:16     ` Mark Schouten
  2021-07-09 13:54     ` Alex K
  1 sibling, 2 replies; 17+ messages in thread
From: alexandre derumier @ 2021-07-08 20:22 UTC (permalink / raw)
  To: Proxmox VE user list


Le jeudi 08 juillet 2021 à 10:40 +0300, Alex K a écrit :
> > - I have not seen any *load balancing/scheduling* feature being
> > provided
> > and looking through the forum it seems that this is still missing.
> > Is there
> > any future plan to provide such a feature. By load balancing I mean
> > to
> > automatically balance VMs through the available hosts/nodes
> > depending on a
> > set policy (CPU load, memory load or other).

Hi,
I had done some preliminary code with a dot-product scheduling algorithm,
but I never had time to finish it:

https://github.com/aderumier/pve-ha-balancer

I need to finish the streaming/RRD of pressure counters, to have good
values for the real CPU/memory usage of VMs/CTs on the host side.

I'll try to rework on it in September, but I'm currently quite busy with
SDN IPAM for VM/CT IP allocation.




^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-08 20:22   ` alexandre derumier
@ 2021-07-09  8:16     ` Mark Schouten
  2021-07-11 23:01       ` alexandre derumier
  2021-07-09 13:54     ` Alex K
  1 sibling, 1 reply; 17+ messages in thread
From: Mark Schouten @ 2021-07-09  8:16 UTC (permalink / raw)
  To: pve-user

Hi,

Op 08-07-2021 om 22:22 schreef alexandre derumier:
> I hade done some prelimary code with dotpro scheduling algorithm, but I
> never have time to finish it
> 
> https://github.com/aderumier/pve-ha-balancer

Cool!

> I need to finish the streaming/rrd of pressure counters, to have good
> values
> for vm/ct real cpu/mem usage on the host side.

Maybe easier to use the metric-server-service for that?

-- 
Mark Schouten
CTO, Tuxis B.V. | https://www.tuxis.nl/
<mark@tuxis.nl> | +31 318 200208



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-08 20:22   ` alexandre derumier
  2021-07-09  8:16     ` Mark Schouten
@ 2021-07-09 13:54     ` Alex K
  1 sibling, 0 replies; 17+ messages in thread
From: Alex K @ 2021-07-09 13:54 UTC (permalink / raw)
  To: Proxmox VE user list

On Thu, Jul 8, 2021, 23:22 alexandre derumier <aderumier@odiso.com> wrote:

>
> Le jeudi 08 juillet 2021 à 10:40 +0300, Alex K a écrit :
> > > - I have not seen any *load balancing/scheduling* feature being
> > > provided
> > > and looking through the forum it seems that this is still missing.
> > > Is there
> > > any future plan to provide such a feature. By load balancing I mean
> > > to
> > > automatically balance VMs through the available hosts/nodes
> > > depending on a
> > > set policy (CPU load, memory load or other).
>
> Hi,
> I hade done some prelimary code with dotpro scheduling algorithm, but I
> never have time to finish it
>
> https://github.com/aderumier/pve-ha-balancer
>
> I need to finish the streaming/rrd of pressure counters, to have good
> values
> for vm/ct real cpu/mem usage on the host side.
>
> I'll try to rework on it in september, but I'm a lot busy with sdn ipam
> currently
> for vm/ct ips allocation.
>

Great! It's good to know that this is in the works.

>
>
> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-09  8:16     ` Mark Schouten
@ 2021-07-11 23:01       ` alexandre derumier
  2021-07-14 12:44         ` Alex K
  0 siblings, 1 reply; 17+ messages in thread
From: alexandre derumier @ 2021-07-11 23:01 UTC (permalink / raw)
  To: Proxmox VE user list

Le vendredi 09 juillet 2021 à 10:16 +0200, Mark Schouten a écrit :
> Maybe easier to use the metric-server-service for that?

It's more that we don't have some values currently, like a VM's real host
CPU usage (including the vhost-net process, for example),

or its real host memory usage (for Windows, when ballooning is enabled, we
don't see the memory zero-filled by Windows, so it stays reserved on the
host).

My patches to get values from cgroups are already applied,
but they are not yet exposed in pvestatd.


I would also like to improve pve-ha-manager VM migration based on the
hosts' load.


I just need time to rework on this ^_^








^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-11 23:01       ` alexandre derumier
@ 2021-07-14 12:44         ` Alex K
  2021-07-20 10:56           ` Alex K
  0 siblings, 1 reply; 17+ messages in thread
From: Alex K @ 2021-07-14 12:44 UTC (permalink / raw)
  To: Proxmox VE user list

Resending, as it seems screenshots are not accepted.

Regarding the *backup-volfile-servers* option when attaching gluster
volumes: I tried to configure it from the UI (DC -> Storage -> Add) as
below, though it seems it is not supported.
I am getting the error:

Parameter verification failed. (400)
*server2*: invalid format - value does not look like a valid server name or
IP address

How can one define more than one backup server, as below?

backup-volfile-servers=node1,node2

This is useful when having a replica-3 or larger gluster setup.
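
For context, the storage definition ends up in `/etc/pve/storage.cfg`,
where the GlusterFS plugin exposes only a single `server2` key rather than
a list; a sketch with hypothetical host/volume names:

```
# /etc/pve/storage.cfg sketch (hypothetical names); only one backup
# volfile server (server2) is accepted:
glusterfs: gluster-vms
        server node0
        server2 node1
        volume vms
        content images
```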

Thanx,
Alex


On Mon, Jul 12, 2021 at 2:01 AM alexandre derumier <aderumier@odiso.com>
wrote:

> Le vendredi 09 juillet 2021 à 10:16 +0200, Mark Schouten a écrit :
> > Maybe easier to use the metric-server-service for that?
>
> it's more than we don't have some values currently, like vm real host
> cpu usage. (including vhost-net process for example),
>
> or real host memory usage (for windows, when ballonning is enabled, we
> don't see the zero filled memory by windows, so reserved on the host).
>
> My patches to get values from cgroups are already applied,
> but they are not yet exposed in pvestatd.
>
>
> I would like also to improve pve-ha-manager vm migration based on the
> hosts load too.
>
>
> I just need time to rework on this ^_^
>
>
>
>
>
>
> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-14 12:44         ` Alex K
@ 2021-07-20 10:56           ` Alex K
  2021-07-20 11:45             ` Gilberto Ferreira
  2021-07-24 11:29             ` Alex K
  0 siblings, 2 replies; 17+ messages in thread
From: Alex K @ 2021-07-20 10:56 UTC (permalink / raw)
  To: Proxmox VE user list

Does anyone have any feedback to share regarding the support of multiple
GlusterFS server nodes?
Thanx

On Wed, Jul 14, 2021 at 3:44 PM Alex K <rightkicktech@gmail.com> wrote:

> Resending as it seems screenshots are not accepted.
>
> Regarding the *backup-volfile-servers* option when attaching gluster
> volumes, I tried to configure it from UI (DC -> Storage -> Add) as below
> though it seems it is not supported.
> I am getting the error:
>
> Parameter verification failed. (400)
> *server2*: invalid format - value does not look like a valid server name
> or IP address
>
> How can one define more than one backup server as below?
>
> backup-volfile-servers=node1,node2
>
> This is useful when having a 3 replica or more gluster server setup.
>
> Thanx,
> Alex
>
>
> On Mon, Jul 12, 2021 at 2:01 AM alexandre derumier <aderumier@odiso.com>
> wrote:
>
>> Le vendredi 09 juillet 2021 à 10:16 +0200, Mark Schouten a écrit :
>> > Maybe easier to use the metric-server-service for that?
>>
>> it's more than we don't have some values currently, like vm real host
>> cpu usage. (including vhost-net process for example),
>>
>> or real host memory usage (for windows, when ballonning is enabled, we
>> don't see the zero filled memory by windows, so reserved on the host).
>>
>> My patches to get values from cgroups are already applied,
>> but they are not yet exposed in pvestatd.
>>
>>
>> I would like also to improve pve-ha-manager vm migration based on the
>> hosts load too.
>>
>>
>> I just need time to rework on this ^_^
>>
>>
>>
>>
>>
>>
>> _______________________________________________
>> pve-user mailing list
>> pve-user@lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-20 10:56           ` Alex K
@ 2021-07-20 11:45             ` Gilberto Ferreira
  2021-07-20 13:01               ` Alex K
  2021-07-24 11:29             ` Alex K
  1 sibling, 1 reply; 17+ messages in thread
From: Gilberto Ferreira @ 2021-07-20 11:45 UTC (permalink / raw)
  To: Proxmox VE user list

Hi everybody.

I had read this in the Gluster qemu integration (
https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/qemu-integration/
)


   - Tuning the volume for virt-store

   There are recommended settings available for virt-store. These provide
   good performance characteristics when enabled on the volume used for
   virt-store.

   Refer to
   http://www.gluster.org/community/documentation/index.php/Virt-store-usecase#Tunables
   for the recommended tunables, and for applying them on the volume,
   http://www.gluster.org/community/documentation/index.php/Virt-store-usecase#Applying_the_Tunables_on_the_volume


Could this apply to Proxmox as well?
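
As a sketch of what those (now-broken) pages recommended: current Gluster
releases ship the virt-store tunables as a predefined settings group named
`virt`, which can be applied in one command against a live cluster (the
volume name `vms` here is a placeholder):

```
# Apply the recommended virt-store tunables as a group
# (requires a running Gluster cluster; "vms" is hypothetical):
gluster volume set vms group virt
```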


Thanks
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






Em ter., 20 de jul. de 2021 às 07:57, Alex K <rightkicktech@gmail.com>
escreveu:

> Anyone has any feedback to share regarding the support of multiple
> glusterfs server nodes?
> Thanx
>
> On Wed, Jul 14, 2021 at 3:44 PM Alex K <rightkicktech@gmail.com> wrote:
>
> > Resending as it seems screenshots are not accepted.
> >
> > Regarding the *backup-volfile-servers* option when attaching gluster
> > volumes, I tried to configure it from UI (DC -> Storage -> Add) as below
> > though it seems it is not supported.
> > I am getting the error:
> >
> > Parameter verification failed. (400)
> > *server2*: invalid format - value does not look like a valid server name
> > or IP address
> >
> > How can one define more than one backup server as below?
> >
> > backup-volfile-servers=node1,node2
> >
> > This is useful when having a 3 replica or more gluster server setup.
> >
> > Thanx,
> > Alex
> >
> >
> > On Mon, Jul 12, 2021 at 2:01 AM alexandre derumier <aderumier@odiso.com>
> > wrote:
> >
> >> Le vendredi 09 juillet 2021 à 10:16 +0200, Mark Schouten a écrit :
> >> > Maybe easier to use the metric-server-service for that?
> >>
> >> it's more than we don't have some values currently, like vm real host
> >> cpu usage. (including vhost-net process for example),
> >>
> >> or real host memory usage (for windows, when ballonning is enabled, we
> >> don't see the zero filled memory by windows, so reserved on the host).
> >>
> >> My patches to get values from cgroups are already applied,
> >> but they are not yet exposed in pvestatd.
> >>
> >>
> >> I would like also to improve pve-ha-manager vm migration based on the
> >> hosts load too.
> >>
> >>
> >> I just need time to rework on this ^_^
> >>
> >>
> >>
> >>
> >>
> >>
> >> _______________________________________________
> >> pve-user mailing list
> >> pve-user@lists.proxmox.com
> >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >>
> >
> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-20 11:45             ` Gilberto Ferreira
@ 2021-07-20 13:01               ` Alex K
  2021-07-20 14:01                 ` Alex K
  0 siblings, 1 reply; 17+ messages in thread
From: Alex K @ 2021-07-20 13:01 UTC (permalink / raw)
  To: Proxmox VE user list

On Tue, Jul 20, 2021 at 2:45 PM Gilberto Ferreira <
gilberto.nunes32@gmail.com> wrote:

> Hi everybody.
>
> I had read this in the Gluster qemu integration (
>
> https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/qemu-integration/
> )
>
>
>    - Tuning the volume for virt-store
>
>    There are recommended settings available for virt-store. This provide
>    good performance characteristics when enabled on the volume that was
> used
>    for virt-store
>
>    Refer to
>
> http://www.gluster.org/community/documentation/index.php/Virt-store-usecase#Tunables
>    for recommended tunables and for applying them on the volume,
>
> http://www.gluster.org/community/documentation/index.php/Virt-store-usecase#Applying_the_Tunables_on_the_volume
>
> These links seem broken.

>
> This could apply to Proxmox as well???
>
I do not see why not. Apart from what is recommended there, I also enable
sharding with a 512MB shard size.


>
> Thanks
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>
>
> Em ter., 20 de jul. de 2021 às 07:57, Alex K <rightkicktech@gmail.com>
> escreveu:
>
> > Anyone has any feedback to share regarding the support of multiple
> > glusterfs server nodes?
> > Thanx
> >
> > On Wed, Jul 14, 2021 at 3:44 PM Alex K <rightkicktech@gmail.com> wrote:
> >
> > > Resending as it seems screenshots are not accepted.
> > >
> > > Regarding the *backup-volfile-servers* option when attaching gluster
> > > volumes, I tried to configure it from UI (DC -> Storage -> Add) as
> below
> > > though it seems it is not supported.
> > > I am getting the error:
> > >
> > > Parameter verification failed. (400)
> > > *server2*: invalid format - value does not look like a valid server
> name
> > > or IP address
> > >
> > > How can one define more than one backup server as below?
> > >
> > > backup-volfile-servers=node1,node2
> > >
> > > This is useful when having a 3 replica or more gluster server setup.
> > >
> > > Thanx,
> > > Alex
> > >
> > >
> > > On Mon, Jul 12, 2021 at 2:01 AM alexandre derumier <
> aderumier@odiso.com>
> > > wrote:
> > >
> > >> Le vendredi 09 juillet 2021 à 10:16 +0200, Mark Schouten a écrit :
> > >> > Maybe easier to use the metric-server-service for that?
> > >>
> > >> it's more than we don't have some values currently, like vm real host
> > >> cpu usage. (including vhost-net process for example),
> > >>
> > >> or real host memory usage (for windows, when ballonning is enabled, we
> > >> don't see the zero filled memory by windows, so reserved on the host).
> > >>
> > >> My patches to get values from cgroups are already applied,
> > >> but they are not yet exposed in pvestatd.
> > >>
> > >>
> > >> I would like also to improve pve-ha-manager vm migration based on the
> > >> hosts load too.
> > >>
> > >>
> > >> I just need time to rework on this ^_^
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >> _______________________________________________
> > >> pve-user mailing list
> > >> pve-user@lists.proxmox.com
> > >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> > >>
> > >
> > _______________________________________________
> > pve-user mailing list
> > pve-user@lists.proxmox.com
> > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >
> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-20 13:01               ` Alex K
@ 2021-07-20 14:01                 ` Alex K
  0 siblings, 0 replies; 17+ messages in thread
From: Alex K @ 2021-07-20 14:01 UTC (permalink / raw)
  To: Proxmox VE user list

On Tue, Jul 20, 2021 at 4:01 PM Alex K <rightkicktech@gmail.com> wrote:

>
>
> On Tue, Jul 20, 2021 at 2:45 PM Gilberto Ferreira <
> gilberto.nunes32@gmail.com> wrote:
>
>> Hi everybody.
>>
>> I had read this in the Gluster qemu integration (
>>
>> https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/qemu-integration/
>> )
>>
>>
>>    - Tuning the volume for virt-store
>>
>>    There are recommended settings available for virt-store. This provide
>>    good performance characteristics when enabled on the volume that was
>> used
>>    for virt-store
>>
>>    Refer to
>>
>> http://www.gluster.org/community/documentation/index.php/Virt-store-usecase#Tunables
>>    for recommended tunables and for applying them on the volume,
>>
>> http://www.gluster.org/community/documentation/index.php/Virt-store-usecase#Applying_the_Tunables_on_the_volume
>>
>> These links seem broken.
>
>>
>> This could apply to Proxmox as well???
>>
> I do not see why not. Apart from what is recommended, I also enable
> sharding at 512MB size.
>
A caution on the sharding part: you must not enable/disable sharding on a
volume that already has data in it, as it will cause data corruption.
Sharding must be enabled only on a fresh volume.


>
>>
>> Thanks
>> ---
>> Gilberto Nunes Ferreira
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>>
>>
>>
>>
>>
>> Em ter., 20 de jul. de 2021 às 07:57, Alex K <rightkicktech@gmail.com>
>> escreveu:
>>
>> > Anyone has any feedback to share regarding the support of multiple
>> > glusterfs server nodes?
>> > Thanx
>> >
>> > On Wed, Jul 14, 2021 at 3:44 PM Alex K <rightkicktech@gmail.com> wrote:
>> >
>> > > Resending as it seems screenshots are not accepted.
>> > >
>> > > Regarding the *backup-volfile-servers* option when attaching gluster
>> > > volumes, I tried to configure it from UI (DC -> Storage -> Add) as
>> below
>> > > though it seems it is not supported.
>> > > I am getting the error:
>> > >
>> > > Parameter verification failed. (400)
>> > > *server2*: invalid format - value does not look like a valid server
>> name
>> > > or IP address
>> > >
>> > > How can one define more than one backup server as below?
>> > >
>> > > backup-volfile-servers=node1,node2
>> > >
>> > > This is useful when having a 3 replica or more gluster server setup.
>> > >
>> > > Thanx,
>> > > Alex
>> > >
>> > >
>> > > On Mon, Jul 12, 2021 at 2:01 AM alexandre derumier <
>> aderumier@odiso.com>
>> > > wrote:
>> > >
>> > >> Le vendredi 09 juillet 2021 à 10:16 +0200, Mark Schouten a écrit :
>> > >> > Maybe easier to use the metric-server-service for that?
>> > >>
>> > >> it's more than we don't have some values currently, like vm real host
>> > >> cpu usage. (including vhost-net process for example),
>> > >>
>> > >> or real host memory usage (for windows, when ballonning is enabled,
>> we
>> > >> don't see the zero filled memory by windows, so reserved on the
>> host).
>> > >>
>> > >> My patches to get values from cgroups are already applied,
>> > >> but they are not yet exposed in pvestatd.
>> > >>
>> > >>
>> > >> I would like also to improve pve-ha-manager vm migration based on the
>> > >> hosts load too.
>> > >>
>> > >>
>> > >> I just need time to rework on this ^_^
>> > >>
>> > >>
>> > >>
>> > >>
>> > >>
>> > >>
>> > >> _______________________________________________
>> > >> pve-user mailing list
>> > >> pve-user@lists.proxmox.com
>> > >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>> > >>
>> > >
>> > _______________________________________________
>> > pve-user mailing list
>> > pve-user@lists.proxmox.com
>> > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>> >
>> _______________________________________________
>> pve-user mailing list
>> pve-user@lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-20 10:56           ` Alex K
  2021-07-20 11:45             ` Gilberto Ferreira
@ 2021-07-24 11:29             ` Alex K
  2021-07-24 12:07               ` Gilberto Ferreira
  1 sibling, 1 reply; 17+ messages in thread
From: Alex K @ 2021-07-24 11:29 UTC (permalink / raw)
  To: Proxmox VE user list

On Tue, Jul 20, 2021, 13:56 Alex K <rightkicktech@gmail.com> wrote:

> Anyone has any feedback to share regarding the support of multiple
> glusterfs server nodes?
>

May I assume that GlusterFS is not used in production with Proxmox, where
more than two servers in a hyperconverged setup is a common use case? It is
my impression that Proxmox is more inclined towards Ceph, with GlusterFS
being somewhat a second-class citizen. Correct me if I am wrong.

Thanx
>
> On Wed, Jul 14, 2021 at 3:44 PM Alex K <rightkicktech@gmail.com> wrote:
>
>> Resending as it seems screenshots are not accepted.
>>
>> Regarding the *backup-volfile-servers* option when attaching gluster
>> volumes, I tried to configure it from UI (DC -> Storage -> Add) as below
>> though it seems it is not supported.
>> I am getting the error:
>>
>> Parameter verification failed. (400)
>> *server2*: invalid format - value does not look like a valid server name
>> or IP address
>>
>> How can one define more than one backup server as below?
>>
>> backup-volfile-servers=node1,node2
>>
>> This is useful when having a 3 replica or more gluster server setup.
>>
>> Thanx,
>> Alex
>>
>>
>> On Mon, Jul 12, 2021 at 2:01 AM alexandre derumier <aderumier@odiso.com>
>> wrote:
>>
>>> Le vendredi 09 juillet 2021 à 10:16 +0200, Mark Schouten a écrit :
>>> > Maybe easier to use the metric-server-service for that?
>>>
>>> It's more that we don't have some values currently, like a VM's real host
>>> CPU usage (including the vhost-net process, for example),
>>>
>>> or real host memory usage (for Windows, when ballooning is enabled, we
>>> don't see the memory zero-filled by Windows, so it stays reserved on the host).
>>>
>>> My patches to get values from cgroups are already applied,
>>> but they are not yet exposed in pvestatd.
>>>
>>>
>>> I would also like to improve pve-ha-manager VM migration based on the
>>> hosts' load too.
>>>
>>>
>>> I just need time to rework on this ^_^
>>>
>>>
>>>
>>>
>>>
>>>
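[Editorial note: the per-VM host-side counters mentioned above live under cgroup v2. As a minimal sketch of how such a value could be read (the scope path is an assumption and depends on the systemd layout, not how pvestatd actually exposes it), the flat key/value format of cpu.stat can be parsed like this:

```python
# Minimal sketch: read a VM's total CPU time from cgroup v2.
# The scope path below is an assumption; adjust it to your systemd layout.
from pathlib import Path


def parse_cpu_stat(text: str) -> dict:
    """Parse the flat "key value" format used by cgroup v2 stat files."""
    stats = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if value:
            stats[key] = int(value)
    return stats


def vm_cpu_seconds(vmid: int, root: str = "/sys/fs/cgroup") -> float:
    # Hypothetical per-VM scope path (an assumption for illustration).
    stat_file = Path(root) / "qemu.slice" / f"{vmid}.scope" / "cpu.stat"
    return parse_cpu_stat(stat_file.read_text())["usage_usec"] / 1_000_000
```

Accounting at the cgroup level covers everything in the VM's cgroup, so helper threads such as vhost-net are included, which is the point being made above.]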
>>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PVE-User] Proxmox questions/features
  2021-07-24 11:29             ` Alex K
@ 2021-07-24 12:07               ` Gilberto Ferreira
  0 siblings, 0 replies; 17+ messages in thread
From: Gilberto Ferreira @ 2021-07-24 12:07 UTC (permalink / raw)
  To: Proxmox VE user list

Yeah! I have that impression too.
Even the glusterfs plugin in the web interface doesn't work.
Plenty of times I have had to use fstab to mount a gluster volume with two
servers.
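[Editorial note: for reference, a mount along those lines can be declared in /etc/fstab; per the mount.glusterfs documentation the backup servers are colon-separated. Node names, volume name, and mount point below are placeholders:

```
node1:/gv0  /mnt/pve/gluster  glusterfs  defaults,_netdev,backup-volfile-servers=node2:node3  0 0
```

The mount point can then be added to Proxmox as a plain directory storage.]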

On Sat, Jul 24, 2021, 08:30, Alex K <rightkicktech@gmail.com> wrote:

> On Tue, Jul 20, 2021, 13:56 Alex K <rightkicktech@gmail.com> wrote:
>
> > Anyone has any feedback to share regarding the support of multiple
> > glusterfs server nodes?
> >
>
> May I assume that GlusterFS is not used in production with Proxmox, where
> more than two servers in a hyperconverged setup is a common use case? It is
> my impression that Proxmox is more inclined toward Ceph, with GlusterFS
> being somewhat a second-class citizen. Correct me if I am wrong.
>
> Thanx
> >
> > On Wed, Jul 14, 2021 at 3:44 PM Alex K <rightkicktech@gmail.com> wrote:
> >
> >> Resending as it seems screenshots are not accepted.
> >>
> >> Regarding the *backup-volfile-servers* option when attaching Gluster
> >> volumes, I tried to configure it from the UI (DC -> Storage -> Add) as
> >> below, though it seems it is not supported.
> >> I am getting the error:
> >>
> >> Parameter verification failed. (400)
> >> *server2*: invalid format - value does not look like a valid server name
> >> or IP address
> >>
> >> How can one define more than one backup server as below?
> >>
> >> backup-volfile-servers=node1,node2
> >>
> >> This is useful when having a setup with three or more replicated Gluster
> >> servers.
> >>
> >> Thanx,
> >> Alex
> >>
> >>
> >> On Mon, Jul 12, 2021 at 2:01 AM alexandre derumier <aderumier@odiso.com
> >
> >> wrote:
> >>
> >>> On Friday, July 9, 2021 at 10:16 +0200, Mark Schouten wrote:
> >>> > Maybe easier to use the metric-server-service for that?
> >>>
> >>> It's more that we don't have some values currently, like a VM's real host
> >>> CPU usage (including the vhost-net process, for example),
> >>>
> >>> or real host memory usage (for Windows, when ballooning is enabled, we
> >>> don't see the memory zero-filled by Windows, so it stays reserved on the host).
> >>>
> >>> My patches to get values from cgroups are already applied,
> >>> but they are not yet exposed in pvestatd.
> >>>
> >>>
> >>> I would also like to improve pve-ha-manager VM migration based on the
> >>> hosts' load too.
> >>>
> >>>
> >>> I just need time to rework on this ^_^
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>
>


^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2021-07-24 12:08 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-06  9:00 [PVE-User] Proxmox questions/features Alex K
2021-07-08  7:40 ` Alex K
2021-07-08  7:54   ` Alex K
2021-07-08 11:39     ` Lindsay Mathieson
2021-07-08 13:53       ` Alex K
2021-07-08 20:22   ` alexandre derumier
2021-07-09  8:16     ` Mark Schouten
2021-07-11 23:01       ` alexandre derumier
2021-07-14 12:44         ` Alex K
2021-07-20 10:56           ` Alex K
2021-07-20 11:45             ` Gilberto Ferreira
2021-07-20 13:01               ` Alex K
2021-07-20 14:01                 ` Alex K
2021-07-24 11:29             ` Alex K
2021-07-24 12:07               ` Gilberto Ferreira
2021-07-09 13:54     ` Alex K
2021-07-08 11:33 ` Lindsay Mathieson

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox
Service provided by Proxmox Server Solutions GmbH | Privacy | Legal