From: Gabriel Goller <g.goller@proxmox.com>
To: "Thomas Lamprecht" <t.lamprecht@proxmox.com>,
	"Proxmox Backup Server development discussion"
	<pbs-devel@lists.proxmox.com>,
	"Dominik Csapak" <d.csapak@proxmox.com>,
	"Fabian Grünbichler" <f.gruenbichler@proxmox.com>
Subject: Re: [pbs-devel] [PATCH widget-toolkit] fix #4961: disks: increase timeout to 1min
Date: Thu, 28 Sep 2023 14:57:10 +0200
Message-ID: <27539eba-9e90-15bf-aed1-635e2e57aebc@proxmox.com>
In-Reply-To: <7b0ac740-096b-4850-a81d-5fa6bc9c347a@proxmox.com>

On 9/26/23 15:42, Thomas Lamprecht wrote:

> [..]
> I'd rather avoid that; 30s is already long and might make some
> users just close the dialogue or whatever.
>
> And the general "lightweight-task" mechanism that Fabian proposed
> could be nice for some things, but IMO it's the wrong approach here
> too.
>
> Here, with the end result being to show a simple disk list, I just
> refuse to believe that this has to take that long; the API call has
> some serious architectural problems, which can mean a mix of:
>
> 1. too much stuff is read that is not relevant most of the time; we
>     could avoid querying the data with the worst expensive/useful
>     ratio and make that only available in a per-disk info dialogue.
>
> 2. getting some info in a convoluted way (e.g., we already have the
>     lsblk info, which knows if a disk is a "zfs_member", but we still
>     do a zpool list, and all of those do separate stats on the device
>     nodes to get the rdevs, multiple times...)
>
> 3. missing a cache for the more expensive stuff, e.g., one that the
>     rrd stat querying code could probably fill every Xth run or so, as
>     it checks for IO metrics anyway
>
> I'd start with point 2 above, as I saw quite a bit of optimization
> potential with only a quick skim; some of that should hopefully
> produce some actual improvements.
>
> Then I'd check where most of the time is spent, by asking the
> reporter nicely for timings of all relevant commands and potentially
> also by just checking ourselves; we have slow HW here too, and while
> not 90 disks, even a tenth of that should show behavior slow enough
> to find the major culprits with some tracing or even just
> printf-timestamp-debugging.
>
>
> The _last_ thing I'd do here for this specific endpoint is extend
> timeouts or invent some new fancy light-weight task mechanism.
> Optimizations, omitting expensive stuff or moving it out of the
> "for-all-disks" view into a "per-single-disk" view, and caching the
> "expensive but really interesting to always see" stuff go a long
> way...
>
> btw., PBS is still missing the Wipe-Disk feature from PVE...
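
Regarding point 2: the zfs_member detection could indeed come straight
from the lsblk output we already have instead of an extra zpool list.
A rough sketch of what that could look like (simplified, not the
actual PBS code):

    use std::collections::HashSet;
    use std::io;
    use std::process::Command;

    // Collect the devices that lsblk already reports as ZFS members, so
    // no extra `zpool list` is needed just to detect them. Error handling
    // is simplified for illustration.
    fn zfs_member_devices() -> io::Result<HashSet<String>> {
        let output = Command::new("lsblk")
            .args(["--noheadings", "--list", "--output", "NAME,FSTYPE"])
            .output()?;

        let mut members = HashSet::new();
        for line in String::from_utf8_lossy(&output.stdout).lines() {
            let mut cols = line.split_whitespace();
            if let (Some(name), Some(fstype)) = (cols.next(), cols.next()) {
                if fstype == "zfs_member" {
                    members.insert(name.to_string());
                }
            }
        }
        Ok(members)
    }

    fn main() -> io::Result<()> {
        println!("{:?}", zfs_member_devices()?);
        Ok(())
    }
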
I looked into optimizing some things and noticed that (as you wrote in
the issue) the slowest part is the smartctl call: getting the smartctl
data takes about 60-70% of the time spent on the request.
I only have 4 disks with 6 partitions, but the API call still takes
about 232.22ms on average. Disabling the smartctl check brings that
down to 157.22ms. I did some other optimizations and got down to
121.27ms, which is still not great but an improvement. I will try a
few more things and check if they help...
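
For getting such per-step timings, simple timestamp instrumentation
along these lines is enough (a quick sketch with a hypothetical helper,
not the actual PBS code):

    use std::process::Command;
    use std::time::Instant;

    // Hypothetical helper for quick printf-timestamp-debugging: wrap a
    // closure and print how long it took.
    fn timed<T>(label: &str, f: impl FnOnce() -> T) -> T {
        let start = Instant::now();
        let result = f();
        eprintln!("{label} took {:?}", start.elapsed());
        result
    }

    fn main() {
        // Wrap the expensive steps of the disk listing; the device name
        // is just an example.
        let _lsblk = timed("lsblk", || Command::new("lsblk").output());
        let _smart = timed("smartctl -a /dev/sda", || {
            Command::new("smartctl").args(["-a", "/dev/sda"]).output()
        });
    }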

But in the meantime I thought about a way of skipping the smartctl
checks when there are a lot of disks: how about we add a "get smartctl
data" button to the frontend, which manually updates the smartctl data
in the table? Server-side, we could check whether there are more than
x disks, or whether "lsblk" takes more than x seconds, and in that case
skip the smartctl check and return an unknown status instead. The user
can then click the "get smartctl data" button, which runs the smartctl
check in a separate API request.

So when a user doesn't have a lot of disks, we still return them all,
including the smartctl data, in the same request. But if that would
take too long, we skip the smartctl check and let the user decide
whether they need the data or not.
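
Roughly what I have in mind for the server-side part, as a minimal
sketch with hypothetical names and an arbitrary threshold (the real
types and the "x disks" value would of course look different):

    // Skip the expensive smartctl calls when there are too many disks and
    // report an unknown SMART status instead; the frontend can then fetch
    // the SMART data via the separate request mentioned above.
    const SMART_DISK_LIMIT: usize = 16; // placeholder for "x disks"

    #[derive(Clone, Copy, Debug, PartialEq)]
    enum SmartStatus {
        Passed,
        Failed,
        Unknown,
    }

    #[derive(Debug)]
    struct DiskInfo {
        name: String,
        smart: SmartStatus,
    }

    fn list_disks(
        disk_names: Vec<String>,
        query_smart: impl Fn(&str) -> SmartStatus,
    ) -> Vec<DiskInfo> {
        let skip_smart = disk_names.len() > SMART_DISK_LIMIT;

        disk_names
            .into_iter()
            .map(|name| {
                let smart = if skip_smart {
                    SmartStatus::Unknown
                } else {
                    query_smart(&name)
                };
                DiskInfo { name, smart }
            })
            .collect()
    }

    fn main() {
        // With few disks the SMART data is still queried inline.
        let disks =
            list_disks(vec!["sda".into(), "sdb".into()], |_| SmartStatus::Passed);
        println!("{disks:?}");
    }

The separate "get smartctl data" request would then just run the normal
smartctl query for the listed disks and update the table.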




Thread overview: 5 messages
2023-09-21  8:21 Gabriel Goller
2023-09-26  7:55 ` Fabian Grünbichler
2023-09-26  8:15   ` Dominik Csapak
2023-09-26 13:42     ` Thomas Lamprecht
2023-09-28 12:57       ` Gabriel Goller [this message]
