From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <s.hanreich@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id D73C99BF94
 for <pve-user@lists.proxmox.com>; Tue, 30 May 2023 14:14:37 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id AF2C52E543
 for <pve-user@lists.proxmox.com>; Tue, 30 May 2023 14:14:07 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS
 for <pve-user@lists.proxmox.com>; Tue, 30 May 2023 14:14:06 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id 864AD47BFA;
 Tue, 30 May 2023 14:14:06 +0200 (CEST)
Message-ID: <a8554390-7f0f-94c1-e0a2-1a3088bf8d4a@proxmox.com>
Date: Tue, 30 May 2023 14:14:05 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Content-Language: en-US
To: Proxmox VE user list <pve-user@lists.proxmox.com>,
 Benjamin Hofer <benjamin@gridscale.io>
References: <CAD=jCXOpYPu2Hz+krGssLBkMvtMdiWyXSbH=+VwX76PTXyF_9A@mail.gmail.com>
From: Stefan Hanreich <s.hanreich@proxmox.com>
In-Reply-To: <CAD=jCXOpYPu2Hz+krGssLBkMvtMdiWyXSbH=+VwX76PTXyF_9A@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.274 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 DMARC_MISSING             0.1 Missing DMARC policy
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 KAM_NUMSUBJECT 0.5 Subject ends in numbers excluding current years
 NICE_REPLY_A           -0.091 Looks like a legit reply (A)
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
 T_SCC_BODY_TEXT_LINE    -0.01 -
Subject: Re: [PVE-User] Proxmox HCI Ceph: "osd_max_backfills" is overridden
 and set to 1000
X-BeenThere: pve-user@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE user list <pve-user.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-user>, 
 <mailto:pve-user-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-user/>
List-Post: <mailto:pve-user@lists.proxmox.com>
List-Help: <mailto:pve-user-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user>, 
 <mailto:pve-user-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Tue, 30 May 2023 12:14:37 -0000

Hi Benjamin

This behavior was introduced in Ceph with the new mClock scheduler [1]. 
When the mClock scheduler is in use, the osd_max_backfills option, among 
others, gets overridden (to 1000).
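
If you want to verify this on one of your OSDs, something like the 
following should show the active scheduler and the effective value 
(osd.1 is simply the OSD from your output below):

# ceph config show osd.1 osd_op_queue
# ceph config show osd.1 osd_max_backfills

With the mClock scheduler active, the first command should print 
mclock_scheduler and the second the overridden value of 1000.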

This is very likely what is causing the issues in your cluster during 
rebalancing. With the mClock scheduler, the parameters for tuning 
rebalancing have changed. In our wiki you can find a description of the 
new parameters and how to use them [2].
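
For instance, instead of tuning osd_max_backfills directly, you can 
switch the mClock profile. A rough sketch (the available profile names 
and their exact effect are described in the wiki article):

# ceph config set osd osd_mclock_profile high_recovery_ops
# ceph config rm osd osd_mclock_profile

The first command favors recovery/backfill over client I/O, the second 
removes the override again and returns to the default profile.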

This should be fixed in the newer Ceph version 17.2.6 [3] [4], which is 
already available via our repositories (no-subscription as well as 
enterprise). It contains a fix that overrides osd_max_backfills to a 
more reasonable value. Nevertheless, you should still take a look at 
the new mClock tuning options.
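
In case it helps, the upgrade on a node could look roughly like this 
(restart the OSDs one at a time afterwards so they pick up the new 
version, then check the effective value again):

# apt update && apt full-upgrade
# systemctl restart ceph-osd@1.service
# ceph versions
# ceph config show osd.1 osd_max_backfills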

Kind Regards
Stefan

[1] https://github.com/ceph/ceph/pull/38920
[2] https://pve.proxmox.com/wiki/Ceph_mclock_tuning
[3] https://github.com/ceph/ceph/pull/48226/files
[4] 
https://github.com/ceph/ceph/commit/89e48395f8b1329066a1d7e05a4e9e083c88c1a6

On 5/30/23 12:00, Benjamin Hofer wrote:
> Dear community,
> 
> We've set up a Proxmox hyper-converged Ceph cluster in production.
> After syncing in one new OSD using the "pveceph osd create" command,
> we got massive network performance issues and outages. We then found
> that "osd_max_backfills" is set to 1000 (Ceph default is 1) and that
> this (along with some other values) has been overridden.
> 
> Does anyone know the root cause? I can't imagine that this is the
> Proxmox default behaviour, and I'm very sure that we didn't change
> anything (actually, I didn't even know about this value before
> researching it and talking to colleagues with deeper Ceph knowledge).
> 
> System:
> 
> PVE version output: pve-manager/7.3-6/723bb6ec (running kernel: 5.15.102-1-pve)
> ceph version 17.2.5 (e04241aa9b639588fa6c864845287d2824cb6b55) quincy (stable)
> 
> # ceph config get osd.1
> WHO    MASK  LEVEL  OPTION                            VALUE         RO
> osd.1        basic  osd_mclock_max_capacity_iops_ssd  17080.220753
> 
> # ceph config show osd.1
> NAME                                             VALUE                                 SOURCE    OVERRIDES  IGNORES
> auth_client_required                             cephx                                 file
> auth_cluster_required                            cephx                                 file
> auth_service_required                            cephx                                 file
> cluster_network                                  10.0.18.0/24                          file
> daemonize                                        false                                 override
> keyring                                          $osd_data/keyring                     default
> leveldb_log                                                                            default
> mon_allow_pool_delete                            true                                  file
> mon_host                                         10.0.18.30 10.0.18.10 10.0.18.20      file
> ms_bind_ipv4                                     true                                  file
> ms_bind_ipv6                                     false                                 file
> no_config_file                                   false                                 override
> osd_delete_sleep                                 0.000000                              override
> osd_delete_sleep_hdd                             0.000000                              override
> osd_delete_sleep_hybrid                          0.000000                              override
> osd_delete_sleep_ssd                             0.000000                              override
> osd_max_backfills                                1000                                  override
> osd_mclock_max_capacity_iops_ssd                 17080.220753                          mon
> osd_mclock_scheduler_background_best_effort_lim  999999                                default
> osd_mclock_scheduler_background_best_effort_res  534                                   default
> osd_mclock_scheduler_background_best_effort_wgt  2                                     default
> osd_mclock_scheduler_background_recovery_lim     2135                                  default
> osd_mclock_scheduler_background_recovery_res     534                                   default
> osd_mclock_scheduler_background_recovery_wgt     1                                     default
> osd_mclock_scheduler_client_lim                  999999                                default
> osd_mclock_scheduler_client_res                  1068                                  default
> osd_mclock_scheduler_client_wgt                  2                                     default
> osd_pool_default_min_size                        2                                     file
> osd_pool_default_size                            3                                     file
> osd_recovery_max_active                          1000                                  override
> osd_recovery_max_active_hdd                      1000                                  override
> osd_recovery_max_active_ssd                      1000                                  override
> osd_recovery_sleep                               0.000000                              override
> osd_recovery_sleep_hdd                           0.000000                              override
> osd_recovery_sleep_hybrid                        0.000000                              override
> osd_recovery_sleep_ssd                           0.000000                              override
> osd_scrub_sleep                                  0.000000                              override
> osd_snap_trim_sleep                              0.000000                              override
> osd_snap_trim_sleep_hdd                          0.000000                              override
> osd_snap_trim_sleep_hybrid                       0.000000                              override
> osd_snap_trim_sleep_ssd                          0.000000                              override
> public_network                                   10.0.18.0/24                          file
> rbd_default_features                             61                                    default
> rbd_qos_exclude_ops                              0                                     default
> setgroup                                         ceph                                  cmdline
> setuser                                          ceph                                  cmdline
> 
> Thanks a lot in advance.
> 
> Best
> Benjamin
> 
> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 
>