public inbox for pve-user@lists.proxmox.com
From: "Сергей Цаболов" <tsabolov@t8.ru>
To: Alwin Antreich <alwin@antreich.com>,
	Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] Ceph df
Date: Tue, 1 Feb 2022 16:59:59 +0300	[thread overview]
Message-ID: <748c01bc-3d9f-5f10-9249-dcfa7b3b8211@t8.ru> (raw)
In-Reply-To: <5e1b7f7739e33e8fdcc97a5b097432f9@antreich.com>

Hello Alwin,

In this post
https://forum.proxmox.com/threads/ceph-octopus-upgrade-notes-think-twice-before-enabling-auto-scale.80105/#post-399654

I read about *set the target ratio to 1 and call it a day*; in my case I set the target ratio of vm.pool to 1 (see the command sketch after the output below):

ceph osd pool autoscale-status
POOL                    SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
device_health_metrics   22216k  500.0G       2.0   106.4T        0.0092                                 1.0   8                   on
vm.pool                 2734G                3.0   106.4T        0.0753  1.0000        0.8180           1.0   512                 on
cephfs_data             0                    2.0   106.4T        0.0000  0.2000        0.1636           1.0   128                 on
cephfs_metadata         27843k  500.0G       2.0   106.4T        0.0092                                 4.0   32                  on
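
Roughly, the ratios above are set with the target_size_ratio pool setting; a sketch using the pool names from the output (the exact values I typed may have differed slightly):

   ceph osd pool set vm.pool target_size_ratio 1.0
   ceph osd pool set cephfs_data target_size_ratio 0.2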

What do you think, do I need to set a target ratio on cephfs_metadata & 
device_health_metrics as well?

On the cephfs_data pool I set the target ratio to 0.2.

Or does the target ratio on vm.pool need to be more than *1*?
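
(If a ratio were to be set on those two small pools as well, it would be done the same way; the 0.01 below is only a hypothetical placeholder value, not something taken from the output above:

   ceph osd pool set cephfs_metadata target_size_ratio 0.01
   ceph osd pool set device_health_metrics target_size_ratio 0.01

Right now they carry a TARGET SIZE of 500.0G instead, as shown in the autoscale-status output.)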



On 31.01.2022 15:05, Alwin Antreich wrote:
> Hello Sergey,
>
> On January 31, 2022 9:58 AM, "Сергей Цаболов" <tsabolov@t8.ru> wrote:
>> My question is how I can  decrease MAX AVAIL in default pool
>> device_health_metrics + cephfs_metadata and set it to vm.pool and
>> cephfs_data
> The max_avail is calculated from the cluster-wide AVAIL and the pool's USED, with respect to the replication size / EC profile.
>
> Cheers,
> Alwin
>
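
A rough back-of-the-envelope reading of that (my sketch, assuming the RATE column is the replication size and ignoring the most-full-OSD correction Ceph applies):

   MAX AVAIL(pool) ≈ cluster-wide raw AVAIL / replication size
   vm.pool (size 3):         ~106.4T raw, minus what is used, divided by 3
   cephfs_metadata (size 2): the same raw free space, divided by 2

So, if I understand correctly, all pools draw on the same raw space, and the small pools do not reserve MAX AVAIL away from vm.pool.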
Best regards,
Sergey TS


