From: Stefan Hanreich <s.hanreich@proxmox.com>
To: Dominik Csapak <d.csapak@proxmox.com>,
Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [RFC PATCH manager] WIP: api: implement node-independent bulk actions
Date: Thu, 20 Mar 2025 09:44:36 +0100
Message-ID: <21f4c584-58f1-4706-b750-f8f7655bd885@proxmox.com>
In-Reply-To: <3d51b4d4-6eb1-4cba-8670-c12a04d9fa44@proxmox.com>
On 3/19/25 10:04, Dominik Csapak wrote:
> On 3/18/25 12:30, Stefan Hanreich wrote:
>>> There are alternative methods to achieve similar results:
>>> * use some kind of queuing system on the cluster (e.g. via pmxcfs)
>>> * using the 'startall'/'stopall' calls from pve in PDM
>>> * surely some other thing I didn't think about
>>>
>>> We can of course start with this, and change the underlying mechanism
>>> later too.
>>>
>>> If we go this route, I could also rewrite the code in Rust if wanted,
>>> since there is nothing here that particularly depends on Perl
>>> (besides getting the vmlist, but that could stay in Perl).
>>> The bulk of the logic is how to start tasks + handle them finishing +
>>> handle filtering + concurrency.
>>
>> I'm actually reading the VM list in the firewall via this:
>> https://git.proxmox.com/?p=proxmox-ve-rs.git;a=blob;f=proxmox-ve-config/src/guest/mod.rs;h=74fd8abc000aec0fa61898840d44ab8a4cd9018b;hb=HEAD#l69
>>
>> So we could build upon that if we want to implement it in Rust?
>>
>> I have something similar, *very* basic, implemented for running multiple
>> tasks across clusters in my SDN patch series - so maybe we could
>> repurpose that for a possible implementation, even generalize it?
>
> Yeah, sounds good if we want to do it this way. For my use case here we
> need to parse the config of all guests though, and I'm not sure if we
> can do that in Rust. Maybe with just a minimal config like 'boot' and
> such? Or we could try to pull out the PVE API types from PDM, since
> parts of the config are already exposed there, I think...
Makes sense to leave it in Perl then, I just thought I'd point it out in
case the guest list alone was the dealbreaker.
>>> diff --git a/PVE/API2/Cluster/Bulk.pm b/PVE/API2/Cluster/Bulk.pm
>>> new file mode 100644
>>> index 00000000..05a79155
>>> --- /dev/null
>>> +++ b/PVE/API2/Cluster/Bulk.pm
>>> @@ -0,0 +1,475 @@
>>> +package PVE::API2::Cluster::Bulk;
>>
>> We might wanna think about using sub-paths already, since I can see this
>> growing quite fast (at least a sub-path for SDN would be interesting). I
>> don't know how many other potential use-cases there are aside from that.
>>
>
> Sure, I would suggest it like this:
>
> /cluster/bulk-actions/guest/{start,shutdown,...} ->
> PVE::API2::Cluster::Bulk(Actions?)::Guest;
> /cluster/bulk-actions/sdn/{...} -> PVE::API2::Cluster::Bulk::SDN;
>
> maybe in the future we can have:
> /cluster/bulk-actions/node/{...} -> PVE::API2::Cluster::Bulk::Node;
>
fine with me!
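
Just to sketch how that wiring could look (following the usual
PVE::RESTHandler subclass registration; module and path names are just
the ones proposed above, nothing final):

package PVE::API2::Cluster::BulkActions;

use strict;
use warnings;

use base qw(PVE::RESTHandler);

# mount the per-topic handlers as sub-directories of /cluster/bulk-actions
__PACKAGE__->register_method({
    subclass => "PVE::API2::Cluster::BulkActions::Guest",
    path => 'guest',
});

__PACKAGE__->register_method({
    subclass => "PVE::API2::Cluster::BulkActions::SDN",
    path => 'sdn',
});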
>> Maybe extract that into a function, since it seems to be the same code
>> as above?
>>
>> Or maybe even a do/while loop would simplify things here? Haven't thought it
>> through 100%, just an idea:
>>
>> do {
>> // check for terminated workers and reap them
>> // fill empty worker slots with new workers
>> }
>> while (workers_exist)
>>
>> Would maybe simplify things and not require the waiting part at the end?
>
> It's not so easy sadly, since the two blocks are not the same.
>
> We have two different mechanisms here:
>
> We have worker slots (max_workers) that we want to keep filled, but
> while we are going through an order (e.g. for start/shutdown) we don't
> want to start with the next order while there are still workers running. So:
>
> While we can still add workers, we loop over the existing ones until one is
> finished and queue the next. At the end of the 'order' we have to wait for
> all remaining workers before continuing to the next order.
>
Yeah, I thought it'd be possible to do that loop for each order - but it
was just a quick thought I scribbled down to possibly avoid duplicating
code. I figured I was probably missing something.
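
For reference, what I had in mind was roughly the following, i.e. running
the reap/refill loop once per order, so the final wait falls out of the
loop condition. Untested sketch - @orders, $max_workers and the helpers
are placeholders, not the actual code from your patch:

my $max_workers = 5; # placeholder
for my $order (@orders) {
    my @queue = guests_in_order($order); # placeholder helper
    my %workers;

    do {
        # reap finished workers to free up slots
        for my $upid (keys %workers) {
            delete $workers{$upid} if !task_running($upid); # placeholder
        }
        # fill empty slots while this order still has guests queued
        while (@queue && keys %workers < $max_workers) {
            my $upid = start_task(shift @queue); # placeholder
            $workers{$upid} = 1;
        }
        sleep(1) if keys %workers;
    } while (@queue || keys %workers);

    # the loop only exits once all workers of this order are finished,
    # so the next order never starts while one is still running
}

But as you said, with the order barrier the two blocks end up different
enough that the deduplication might not buy much.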
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel