From: Alex K <rightkicktech@gmail.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] Proxmox questions/features
Date: Tue, 20 Jul 2021 17:01:23 +0300	[thread overview]
Message-ID: <CABMULtJB9PethOLGeH7GXdicALottRkoHPBScZq=m1sh5SUV=Q@mail.gmail.com> (raw)
In-Reply-To: <CABMULtKJzkUL7dZvckyWAEhAwzsDY_vV1D6ha3MFte8RHm-YFQ@mail.gmail.com>

On Tue, Jul 20, 2021 at 4:01 PM Alex K <rightkicktech@gmail.com> wrote:

>
>
> On Tue, Jul 20, 2021 at 2:45 PM Gilberto Ferreira <
> gilberto.nunes32@gmail.com> wrote:
>
>> Hi everybody.
>>
>> I had read this in the Gluster qemu integration (
>>
>> https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/qemu-integration/
>> )
>>
>>
>>    - Tuning the volume for virt-store
>>
>>    There are recommended settings available for virt-store. These provide
>>    good performance characteristics when enabled on the volume that is
>>    used for virt-store.
>>
>>    Refer to
>>
>> http://www.gluster.org/community/documentation/index.php/Virt-store-usecase#Tunables
>>    for recommended tunables and for applying them on the volume,
>>
>> http://www.gluster.org/community/documentation/index.php/Virt-store-usecase#Applying_the_Tunables_on_the_volume
>>
>> These links seem broken.
>
>>
>> Could this apply to Proxmox as well?
>>
> I do not see why not. Apart from the recommended settings, I also enable
> sharding with a 512 MB shard size.
>
A word of caution on the sharding part: you must not enable or disable
sharding on a volume that already holds data, as doing so will cause data
corruption. Sharding must be enabled only on a fresh, empty volume.
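For reference, applying the virt group of tunables and enabling sharding on a fresh volume could look like the sketch below. The volume name "vmstore" and the 512 MB shard size are assumptions for illustration; the commands are collected and printed rather than executed, so they can be reviewed before being run by hand.

```shell
# Sketch only: "vmstore" is a hypothetical volume name.
# The gluster commands are printed, not run; execute them yourself
# on a FRESH, EMPTY volume (never on one that already holds data).
VOL=vmstore
CMDS="gluster volume set $VOL group virt
gluster volume set $VOL features.shard on
gluster volume set $VOL features.shard-block-size 512MB"
printf '%s\n' "$CMDS"
```

The `group virt` setting applies the whole recommended virt-store profile in one step on recent gluster versions.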


>
>>
>> Thanks
>> ---
>> Gilberto Nunes Ferreira
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>> On Tue, Jul 20, 2021 at 07:57, Alex K <rightkicktech@gmail.com>
>> wrote:
>>
>> > Does anyone have feedback to share regarding support for multiple
>> > GlusterFS server nodes?
>> > Thanx
>> >
>> > On Wed, Jul 14, 2021 at 3:44 PM Alex K <rightkicktech@gmail.com> wrote:
>> >
>> > > Resending as it seems screenshots are not accepted.
>> > >
>> > > Regarding the *backup-volfile-servers* option when attaching gluster
>> > > volumes, I tried to configure it from the UI (DC -> Storage -> Add) as
>> > > below, though it seems this is not supported.
>> > > I am getting the error:
>> > >
>> > > Parameter verification failed. (400)
>> > > *server2*: invalid format - value does not look like a valid server
>> > > name or IP address
>> > >
>> > > How can one define more than one backup server, as below?
>> > >
>> > > backup-volfile-servers=node1,node2
>> > >
>> > > This is useful when running a gluster setup with three or more
>> > > replica servers.
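For what it's worth, outside the GUI the option can be passed when mounting the volume by hand. A sketch, with hypothetical node names; note that the mount option takes a colon-separated list:

```shell
# Sketch with hypothetical hostnames; the GUI currently rejects a
# second backup server, but a manual glusterfs mount accepts the
# backup-volfile-servers option directly.
PRIMARY=node1
BACKUPS=node2:node3   # colon-separated list for the mount option
OPT="backup-volfile-servers=$BACKUPS"
# mount -t glusterfs -o "$OPT" "$PRIMARY:/vmstore" /mnt/vmstore
printf '%s\n' "$OPT"
```

The commented-out mount line shows where the option would be used; "vmstore" and the mountpoint are placeholders.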
>> > >
>> > > Thanx,
>> > > Alex
>> > >
>> > >
>> > > On Mon, Jul 12, 2021 at 2:01 AM alexandre derumier <aderumier@odiso.com>
>> > > wrote:
>> > >
>> > >> On Friday, 09 July 2021 at 10:16 +0200, Mark Schouten wrote:
>> > >> > Maybe easier to use the metric-server-service for that?
>> > >>
>> > >> It's more that some values are currently missing, like a VM's real
>> > >> host CPU usage (including the vhost-net process, for example),
>> > >>
>> > >> or the real host memory usage (for Windows, when ballooning is
>> > >> enabled, we don't see the memory zero-filled by Windows, and thus
>> > >> reserved on the host).
>> > >>
>> > >> My patches to read these values from cgroups are already applied,
>> > >> but they are not yet exposed in pvestatd.
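As a rough illustration of the kind of per-VM numbers readable from cgroups, here is a sketch that parses a cgroup-v2 `cpu.stat`. The `qemu.slice` path in the comment is an assumption and depends on the host's cgroup layout; sample file contents stand in for the real file so the snippet is self-contained.

```shell
# Sketch: parse cpu.stat as found under e.g.
# /sys/fs/cgroup/qemu.slice/<vmid>.scope/cpu.stat (hypothetical path).
# Sample contents stand in for the real file here.
STAT='usage_usec 1234567
user_usec 1000000
system_usec 234567'
# usage_usec is total CPU time consumed by the cgroup, in microseconds.
USAGE=$(printf '%s\n' "$STAT" | awk '/^usage_usec/ {print $2}')
printf '%s\n' "$USAGE"
```

Reading the scope's cgroup rather than the QEMU process alone is what captures helper threads such as vhost-net.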
>> > >>
>> > >>
>> > >> I would also like to improve pve-ha-manager VM migration based on
>> > >> host load.
>> > >>
>> > >>
>> > >> I just need time to rework on this ^_^
>> > >>
>> > >> _______________________________________________
>> > >> pve-user mailing list
>> > >> pve-user@lists.proxmox.com
>> > >> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>> > >>
>> > >
>> > _______________________________________________
>> > pve-user mailing list
>> > pve-user@lists.proxmox.com
>> > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>> >
>> _______________________________________________
>> pve-user mailing list
>> pve-user@lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>


