* [PVE-User] ceph osd tree & destroy_cephfs
@ 2022-02-04 10:15 Сергей Цаболов
From: Сергей Цаболов @ 2022-02-04 10:15 UTC
  To: Proxmox VE user list

Hi to all.

In my Proxmox cluster with 7 nodes I am trying to change the PG count, the Target Ratio and the Target Size on some pools.

MAX AVAIL on the important pool has not changed; I think it will change once I destroy 2 of the pools in Ceph.
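
For reference, this is roughly how I change those settings from the CLI (only a sketch; the ratio and pg_num values below are placeholders, not my real targets):

    # see what the autoscaler currently reports/suggests per pool
    ceph osd pool autoscale-status

    # set the expected share of the cluster for a pool (Target Ratio)
    ceph osd pool set vm.pool target_size_ratio 0.8

    # or change the PG count of a pool directly
    ceph osd pool set vm.pool pg_num 512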

I have read the instructions at
https://pve.proxmox.com/pve-docs/chapter-pveceph.html#_destroy_cephfs
and I need to ask: if I destroy the CephFS pools, will that affect the other pools?

For now there is no data in them; they are not used for backups or anything else.
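
Reading that chapter, the steps I plan to run look roughly like this (only a sketch: "cephfs" is what I assume my file system is called, and the CephFS storages and the MDS have to be removed/stopped first, as the docs describe):

    # remove the file system itself
    ceph fs rm cephfs --yes-i-really-mean-it

    # then remove the two now-unused pools
    pveceph pool destroy cephfs_data
    pveceph pool destroy cephfs_metadata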

For now I have:

ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL   USED     RAW USED  %RAW USED
hdd    106 TiB  98 TiB  8.0 TiB   8.1 TiB       7.58
TOTAL  106 TiB  98 TiB  8.0 TiB   8.1 TiB       7.58

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1    1   16 MiB       22   32 MiB   0        46 TiB
vm.pool                 2  512  2.7 TiB  740.12k  8.0 TiB   7.99     31 TiB
cephfs_data             3   32  1.9 KiB        0  3.8 KiB   0        46 TiB
cephfs_metadata         4    2   23 MiB       28   47 MiB   0        46 TiB


And one other question: below is my ceph osd tree. As you can see, on some OSDs the REWEIGHT is less than the default 1.00000.

How should I change the REWEIGHT on these OSDs? (See the sketch after the tree.)


ID   CLASS  WEIGHT     TYPE NAME            STATUS  REWEIGHT PRI-AFF
  -1         106.43005  root default
-13          14.55478      host pve3101
  10    hdd    7.27739          osd.10           up   1.00000 1.00000
  11    hdd    7.27739          osd.11           up   1.00000 1.00000
-11          14.55478      host pve3103
   8    hdd    7.27739          osd.8            up   1.00000 1.00000
   9    hdd    7.27739          osd.9            up   1.00000 1.00000
  -3          14.55478      host pve3105
   0    hdd    7.27739          osd.0            up   1.00000 1.00000
   1    hdd    7.27739          osd.1            up   1.00000 1.00000
  -5          14.55478      host pve3107
*  2    hdd    7.27739          osd.2            up   0.95001 1.00000*
   3    hdd    7.27739          osd.3            up   1.00000 1.00000
  -9          14.55478      host pve3108
   6    hdd    7.27739          osd.6            up   1.00000 1.00000
   7    hdd    7.27739          osd.7            up   1.00000 1.00000
  -7          14.55478      host pve3109
   4    hdd    7.27739          osd.4            up   1.00000 1.00000
   5    hdd    7.27739          osd.5            up   1.00000 1.00000
-15          19.10138      host pve3111
  12    hdd   10.91409          osd.12           up   1.00000 1.00000
* 13    hdd    0.90970          osd.13           up   0.76846 1.00000*
  14    hdd    0.90970          osd.14           up   1.00000 1.00000
  15    hdd    0.90970          osd.15           up   1.00000 1.00000
  16    hdd    0.90970          osd.16           up   1.00000 1.00000
  17    hdd    0.90970          osd.17           up   1.00000 1.00000
* 18    hdd    0.90970          osd.18           up   0.75006 1.00000*
  19    hdd    0.90970          osd.19           up   1.00000 1.00000
  20    hdd    0.90970          osd.20           up   1.00000 1.00000
  21    hdd    0.90970          osd.21           up   1.00000 1.00000
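
Is it enough to run something like this to bring them back to the default (only a sketch; I take it the reweight may have been lowered on purpose, e.g. by reweight-by-utilization)?

    # reset the override reweight of the marked OSDs back to 1.0
    ceph osd reweight 2 1.0
    ceph osd reweight 13 1.0
    ceph osd reweight 18 1.0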

Sergey TS
Best regards


