From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
Dominik Csapak <d.csapak@proxmox.com>
Cc: Alwin Antreich <alwin@antreich.com>
Subject: [pve-devel] applied-series: [PATCH manager v4 0/9] ceph: allow pools settings to be changed
Date: Wed, 21 Apr 2021 16:20:44 +0200 [thread overview]
Message-ID: <daa68346-fb09-c665-accd-4d31ff54519f@proxmox.com> (raw)
In-Reply-To: <20210420081523.2704-1-d.csapak@proxmox.com>
On 20.04.21 10:15, Dominik Csapak wrote:
> originally from Alwin Antreich
>
> mostly a rebase on master, a few eslint fixes (squashed into Alwin's
> commits) and 3 small fixups
>
> Alwin Antreich (6):
> ceph: add autoscale_status to api calls
> ceph: gui: add autoscale & flatten pool view
> ceph: set allowed minimal pg_num down to 1
> ceph: gui: rework pool input panel
> ceph: gui: add min num of PG
> fix: ceph: always set pool size first
>
> Dominik Csapak (3):
> API2/Ceph/Pools: remove unnecessary boolean conversion
> ui: ceph/Pools: improve number checking for target_size
> ui: ceph/Pool: show progress on pool edit/create
>
> PVE/API2/Ceph/Pools.pm | 97 +++++++--
> PVE/CLI/pveceph.pm | 4 +
> PVE/Ceph/Tools.pm | 61 ++++--
> www/manager6/ceph/Pool.js | 401 +++++++++++++++++++++++++++-----------
> 4 files changed, 422 insertions(+), 141 deletions(-)
>
applied, thanks to both of you!
Made some follow-ups on top; besides some minor code-style cleanups it was mostly:
* avoid horizontal scrolling due to column width on 720p screens
* make the min_size auto-calculation (size + 1) / 2, so that size 4 gets min_size 3
* use the pveSizeField (adapted from the pveBandwidthField) for the target size to
  avoid the forth-and-back *1024*1024*1024 conversion in the panel's get/setValues
* print pool settings as they're applied, which makes for a slightly nicer task log.
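The min_size auto-calculation above can be sketched as a small helper; this is an illustrative sketch, not the actual code from the follow-up commit, and it assumes standard rounding so that size 4 yields min_size 3 as described:

```javascript
// Illustrative sketch of the described min_size default:
// min_size = round((size + 1) / 2), so a pool with size 4 gets min_size 3.
// Function name is hypothetical, not taken from Pool.js.
function defaultMinSize(size) {
    return Math.round((size + 1) / 2);
}

console.log([2, 3, 4, 5].map(defaultMinSize)); // [ 2, 2, 3, 3 ]
```

With JavaScript's round-half-up behavior, size 4 maps to 3 rather than 2, matching the follow-up's stated intent.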
Thread overview: 11+ messages
2021-04-20 8:15 [pve-devel] " Dominik Csapak
2021-04-20 8:15 ` [pve-devel] [PATCH manager v4 1/9] ceph: add autoscale_status to api calls Dominik Csapak
2021-04-20 8:15 ` [pve-devel] [PATCH manager v4 2/9] ceph: gui: add autoscale & flatten pool view Dominik Csapak
2021-04-20 8:15 ` [pve-devel] [PATCH manager v4 3/9] ceph: set allowed minimal pg_num down to 1 Dominik Csapak
2021-04-20 8:15 ` [pve-devel] [PATCH manager v4 4/9] ceph: gui: rework pool input panel Dominik Csapak
2021-04-20 8:15 ` [pve-devel] [PATCH manager v4 5/9] ceph: gui: add min num of PG Dominik Csapak
2021-04-20 8:15 ` [pve-devel] [PATCH manager v4 6/9] fix: ceph: always set pool size first Dominik Csapak
2021-04-20 8:15 ` [pve-devel] [PATCH manager v4 7/9] API2/Ceph/Pools: remove unnecessary boolean conversion Dominik Csapak
2021-04-20 8:15 ` [pve-devel] [PATCH manager v4 8/9] ui: ceph/Pools: improve number checking for target_size Dominik Csapak
2021-04-20 8:15 ` [pve-devel] [PATCH manager v4 9/9] ui: ceph/Pool: show progress on pool edit/create Dominik Csapak
2021-04-21 14:20 ` Thomas Lamprecht [this message]