From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <tsabolov@t8.ru>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 6EBAB612C1
 for <pve-user@lists.proxmox.com>; Fri,  4 Feb 2022 11:16:07 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 5C122300F0
 for <pve-user@lists.proxmox.com>; Fri,  4 Feb 2022 11:15:37 +0100 (CET)
Received: from relay161.nicmail.ru (relay161.nicmail.ru [91.189.117.5])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS id 060C9300DF
 for <pve-user@lists.proxmox.com>; Fri,  4 Feb 2022 11:15:36 +0100 (CET)
Received: from [10.28.138.151] (port=23146 helo=[192.168.8.155])
 by relay.hosting.mail.nic.ru with esmtp (Exim 5.55)
 (envelope-from <tsabolov@t8.ru>) id 1nFvcm-0000C6-5B
 for pve-user@lists.proxmox.com; Fri, 04 Feb 2022 13:15:28 +0300
Received: from [62.105.41.93] (account tsabolov@t8.ru HELO [192.168.8.155])
 by incarp1103.int.hosting.nic.ru (Exim 5.55)
 with id 1nFvcm-0003rB-HK for pve-user@lists.proxmox.com;
 Fri, 04 Feb 2022 13:15:28 +0300
Message-ID: <e97b4099-4241-890d-3d72-790b1f7e66c5@t8.ru>
Date: Fri, 4 Feb 2022 13:15:28 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.5.0
Content-Language: en-US
To: Proxmox VE user list <pve-user@lists.proxmox.com>
From: =?UTF-8?B?0KHQtdGA0LPQtdC5INCm0LDQsdC+0LvQvtCy?= <tsabolov@t8.ru>
X-KLMS-AntiSpam-Auth: dkim=none
X-MS-Exchange-Organization-SCL: -1
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.017 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 HTML_MESSAGE            0.001 HTML included in message
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 RCVD_IN_DNSWL_LOW        -0.7 Sender listed at https://www.dnswl.org/,
 low trust
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
 T_SCC_BODY_TEXT_LINE    -0.01 -
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Content-Filtered-By: Mailman/MimeDel 2.1.29
Subject: [PVE-User] ceph osd tree & destroy_cephfs
X-BeenThere: pve-user@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE user list <pve-user.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-user>, 
 <mailto:pve-user-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-user/>
List-Post: <mailto:pve-user@lists.proxmox.com>
List-Help: <mailto:pve-user-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user>, 
 <mailto:pve-user-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Fri, 04 Feb 2022 10:16:07 -0000

Hi to all.

In my Proxmox cluster with 7 nodes, I tried to change the PG count, the Target Ratio, and the Target Size on some pools.
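For reference, these settings can be adjusted per pool with the ceph CLI. This is only a sketch: the pool name is taken from the `ceph df` output below, and the ratio/PG values are placeholders, not recommendations:

```shell
# Tell the PG autoscaler the expected relative share of this pool
# (ratios are weighed against each other across pools):
ceph osd pool set vm.pool target_size_ratio 1.0

# Alternatively, give an absolute expected size:
ceph osd pool set vm.pool target_size_bytes 10T

# Change the PG count directly (the autoscaler may adjust it again
# if pg_autoscale_mode is "on" for the pool):
ceph osd pool set vm.pool pg_num 512

# See what the autoscaler currently thinks of each pool:
ceph osd pool autoscale-status
```

Note that `target_size_ratio` and `target_size_bytes` only influence the autoscaler's PG sizing; they do not change MAX AVAIL, which is derived from raw capacity, replication factor, and the fullest OSDs.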

MAX AVAIL on the important pool has not changed; I think it will change if I destroy 2 pools in Ceph.

I read the instructions at
https://pve.proxmox.com/pve-docs/chapter-pveceph.html#_destroy_cephfs
and need to ask: if I destroy the CephFS pools, will it affect the other pools?

For now there is no data in it; I do not use it for backups or anything else.
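For context, the linked Proxmox docs boil down to roughly the following steps. This is a sketch with assumed names: the filesystem/storage is called "cephfs" and an MDS runs on a node named NODENAME:

```shell
# 1. Disable/remove the PVE storage entry that points at the CephFS
#    (after unmounting it on all nodes and disconnecting all clients):
pvesm remove cephfs

# 2. Make sure no metadata server is running for this CephFS,
#    by stopping or destroying the MDS on each node that has one:
pveceph stop --service mds.NODENAME
# or permanently:
pveceph mds destroy NODENAME

# 3. Destroy the filesystem together with its data and metadata pools
#    and the matching PVE storage configuration:
pveceph fs destroy cephfs --remove-storages --remove-pools
```

Each Ceph pool is independent at the RADOS level, so removing the `cephfs_data` and `cephfs_metadata` pools does not touch objects in other pools; it frees their (small) share of raw capacity.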

For now I have:

ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL   USED     RAW USED  %RAW USED
hdd    106 TiB  98 TiB  8.0 TiB   8.1 TiB       7.58
TOTAL  106 TiB  98 TiB  8.0 TiB   8.1 TiB       7.58

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED MAX AVAIL
device_health_metrics   1    1   16 MiB       22   32 MiB 0     46 TiB
vm.pool                 2  512  2.7 TiB  740.12k  8.0 TiB 7.99     31 TiB
cephfs_data             3   32  1.9 KiB        0  3.8 KiB 0     46 TiB
cephfs_metadata         4    2   23 MiB       28   47 MiB 0     46 TiB


And one other question: below is my ceph osd tree. As you can see, on some OSDs
the REWEIGHT is less than the default 1.00000.

How can I change the REWEIGHT on these OSDs?
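For what it's worth, the override weight shown in the REWEIGHT column can be set with `ceph osd reweight`. A sketch, using the OSD IDs marked in the tree below; note that raising a reweight back to 1.0 will trigger data movement onto that OSD, and a reduced value may have been set automatically by `reweight-by-utilization`:

```shell
# Set the override weight (range 0..1; 1.0 restores the default):
ceph osd reweight osd.2 1.0
ceph osd reweight osd.13 1.0
ceph osd reweight osd.18 1.0

# Or let Ceph recompute override weights from actual utilization:
ceph osd test-reweight-by-utilization   # dry run first
ceph osd reweight-by-utilization
```

This is distinct from the WEIGHT column (the CRUSH weight, usually the disk size in TiB), which is changed with `ceph osd crush reweight` instead.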


ID   CLASS  WEIGHT     TYPE NAME            STATUS  REWEIGHT PRI-AFF
  -1         106.43005  root default
-13          14.55478      host pve3101
  10    hdd    7.27739          osd.10           up   1.00000 1.00000
  11    hdd    7.27739          osd.11           up   1.00000 1.00000
-11          14.55478      host pve3103
   8    hdd    7.27739          osd.8            up   1.00000 1.00000
   9    hdd    7.27739          osd.9            up   1.00000 1.00000
  -3          14.55478      host pve3105
   0    hdd    7.27739          osd.0            up   1.00000 1.00000
   1    hdd    7.27739          osd.1            up   1.00000 1.00000
  -5          14.55478      host pve3107
*  2    hdd    7.27739          osd.2            up   0.95001 1.00000*
   3    hdd    7.27739          osd.3            up   1.00000 1.00000
  -9          14.55478      host pve3108
   6    hdd    7.27739          osd.6            up   1.00000 1.00000
   7    hdd    7.27739          osd.7            up   1.00000 1.00000
  -7          14.55478      host pve3109
   4    hdd    7.27739          osd.4            up   1.00000 1.00000
   5    hdd    7.27739          osd.5            up   1.00000 1.00000
-15          19.10138      host pve3111
  12    hdd   10.91409          osd.12           up   1.00000 1.00000
* 13    hdd    0.90970          osd.13           up   0.76846 1.00000*
  14    hdd    0.90970          osd.14           up   1.00000 1.00000
  15    hdd    0.90970          osd.15           up   1.00000 1.00000
  16    hdd    0.90970          osd.16           up   1.00000 1.00000
  17    hdd    0.90970          osd.17           up   1.00000 1.00000
* 18    hdd    0.90970          osd.18           up   0.75006 1.00000*
  19    hdd    0.90970          osd.19           up   1.00000 1.00000
  20    hdd    0.90970          osd.20           up   1.00000 1.00000
  21    hdd    0.90970          osd.21           up   1.00000 1.00000

Best regards,
Sergey TS

_______________________________________________
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user