* Re: [PVE-User] Ceph df
From: Сергей Цаболов @ 2022-02-02 7:27 UTC (permalink / raw)
To: Proxmox VE user list
Hello,
I have read the documentation before and I know this page.
In the placement-groups section of that page it says:
*TARGET RATIO*, if present, is the ratio of storage that the
administrator has specified that they expect this pool to consume
relative to other pools with target ratios set. If both target size
bytes and ratio are specified, the ratio takes precedence.
If I understand correctly, I can set the ratio to 2 or 3 and that would be a
valid target ratio for this pool? Am I correct?
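As a minimal sketch (assuming the pool name vm.pool from my cluster; the value
2 is only an illustration, not a recommendation), the ratio would be set like
this:

   ceph osd pool set vm.pool target_size_ratio 2   # ratios are only relative to other pools
   ceph osd pool autoscale-status                  # verify TARGET RATIO / EFFECTIVE RATIO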
02.02.2022 07:43, Alwin Antreich via pve-user writes:
Best regards,
Sergey TS
* Re: [PVE-User] Ceph df
From: Сергей Цаболов @ 2022-02-01 13:59 UTC (permalink / raw)
To: Alwin Antreich, Proxmox VE user list
Hello Alwin,
In this post
https://forum.proxmox.com/threads/ceph-octopus-upgrade-notes-think-twice-before-enabling-auto-scale.80105/#post-399654
I read about *setting the target ratio to 1 and calling it a day*, so in my
case I set the target ratio of vm.pool to 1:
ceph osd pool autoscale-status
POOL                    SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
device_health_metrics   22216k  500.0G       2.0   106.4T        0.0092                                 1.0   8                   on
vm.pool                 2734G                3.0   106.4T        0.0753  1.0000        0.8180           1.0   512                 on
cephfs_data             0                    2.0   106.4T        0.0000  0.2000        0.1636           1.0   128                 on
cephfs_metadata         27843k  500.0G       2.0   106.4T        0.0092                                 4.0   32                  on
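If I read the autoscaler documentation correctly (this is my own
back-of-the-envelope check, not taken from the docs verbatim), the EFFECTIVE
RATIO column is derived from the TARGET RATIO values after reserving the
capacity claimed via TARGET SIZE:

   reserved by TARGET SIZE pools: (500G + 500G) * 2.0 replicas ~ 2000G ~ 1.8% of 106.4T raw
   share left for ratio pools:    1 - 0.018 ~ 0.982
   vm.pool:      1.0 / (1.0 + 0.2) * 0.982 ~ 0.818   (matches 0.8180 above)
   cephfs_data:  0.2 / (1.0 + 0.2) * 0.982 ~ 0.164   (matches 0.1636 above)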
What do you think: do I need to set a target ratio on cephfs_metadata and
device_health_metrics?
For the pool cephfs_data I set the target ratio to 0.2.
Or does the target ratio on vm.pool need to be not 1 but something higher?
31.01.2022 15:05, Alwin Antreich writes:
> Hello Sergey,
>
> January 31, 2022 9:58 AM, "Сергей Цаболов"<tsabolov@t8.ru> wrote:
>> My question is: how can I decrease MAX AVAIL for the default pools
>> device_health_metrics and cephfs_metadata and assign it to vm.pool and
>> cephfs_data?
> The max_avail is calculated by the cluster-wide AVAIL and pool USED, with respect to the replication size / EC profile.
>
> Cheers,
> Alwin
>
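A rough sanity check of that with the numbers from my earlier ceph df output
(my own arithmetic, assuming vm.pool is replicated size 3 and the other pools
size 2, as the RATE column above suggests):

   vm.pool:       28 TiB MAX AVAIL * 3 replicas ~ 84 TiB raw
   size-2 pools:  42 TiB MAX AVAIL * 2 replicas ~ 84 TiB raw

So all pools seem to point at the same ~84 TiB of usable raw capacity (the gap
to the 96 TiB AVAIL is presumably the full-ratio headroom and OSD imbalance),
i.e. MAX AVAIL looks like a shared projection rather than a per-pool budget
that could be moved from one pool to another.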
Best regards,
Sergey TS
* [PVE-User] Ceph df
From: Сергей Цаболов @ 2022-01-31 8:58 UTC (permalink / raw)
To: Proxmox VE user list
Hi all,
I have a cluster with 7 PVE nodes.
After Ceph reached health: HEALTH_OK,
I checked the MAX AVAIL storage for information (ceph df output below):
CLASS  SIZE     AVAIL   USED    RAW USED  %RAW USED
hdd    106 TiB  96 TiB  10 TiB  10 TiB    9.51
TOTAL  106 TiB  96 TiB  10 TiB  10 TiB    9.51

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics  1   1    14 MiB   22       28 MiB   0      42 TiB
vm.pool                2   512  2.7 TiB  799.37k  8.3 TiB  8.95   28 TiB
cephfs_data            3   32   927 GiB  237.28k  1.8 TiB  2.11   42 TiB
cephfs_metadata        4   32   30 MiB   28       60 MiB   0      42 TiB
I understand why it is shown like that.
My question is: how can I decrease MAX AVAIL for the default pools
device_health_metrics and cephfs_metadata and assign it to vm.pool and
cephfs_data?
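For context, the per-pool MAX AVAIL values above differ mainly because of each
pool's replication size; a quick way to check that per pool (a sketch, pool
names as above, expected values based on the autoscale-status output earlier
in the thread) is:

   ceph osd pool get vm.pool size        # expect size: 3 -> the smaller 28 TiB MAX AVAIL
   ceph osd pool get cephfs_data size    # expect size: 2 -> 42 TiB MAX AVAIL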
Thank you.
Best regards,
Sergey TS