From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: "Proxmox Backup Server development discussion"
	<pbs-devel@lists.proxmox.com>,
	"Fabian Grünbichler" <f.gruenbichler@proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox 2/2] rest-server: close race window when updating worker task count
Date: Fri, 29 Nov 2024 14:27:17 +0100	[thread overview]
Message-ID: <00e24e50-5df8-4c62-abe2-e14916c4a7ba@proxmox.com> (raw)
In-Reply-To: <20241129131329.765815-3-f.gruenbichler@proxmox.com>

Am 29.11.24 um 14:13 schrieb Fabian Grünbichler:
> this mimics how the count is updated when spawning a new task - the lock scope
> needs to cover the count update itself, else there's a race when multiple
> workers log their results at the same time.
> 
> Co-developed-by: Dominik Csapak <d.csapak@proxmox.com>
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
>  proxmox-rest-server/src/worker_task.rs | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/proxmox-rest-server/src/worker_task.rs b/proxmox-rest-server/src/worker_task.rs
> index 3ca93965..018d18c0 100644
> --- a/proxmox-rest-server/src/worker_task.rs
> +++ b/proxmox-rest-server/src/worker_task.rs
> @@ -1023,7 +1023,8 @@ impl WorkerTask {
>  
>          WORKER_TASK_LIST.lock().unwrap().remove(&self.upid.task_id);
>          let _ = self.setup.update_active_workers(None);
> -        set_worker_count(WORKER_TASK_LIST.lock().unwrap().len());
> +        let lock = WORKER_TASK_LIST.lock().unwrap();

Why not use this also for the remove operation above? I.e., something like:

let mut locked_worker_tasks = WORKER_TASK_LIST.lock().unwrap();

locked_worker_tasks.remove(&self.upid.task_id);

set_worker_count(locked_worker_tasks.len());

If there are technical reasons speaking against this, which I hope not, then a
comment would definitely be warranted; otherwise, using a single lock would IMO
make this a bit clearer, and locking twice isn't exactly cheaper.
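
For illustration, a minimal self-contained sketch of that single-lock pattern;
the map's element type and set_worker_count()'s body here are placeholder
assumptions, not the actual proxmox-rest-server definitions:

use std::collections::HashMap;
use std::sync::{LazyLock, Mutex};

// hypothetical stand-ins, just to show the locking pattern
static WORKER_TASK_LIST: LazyLock<Mutex<HashMap<usize, String>>> =
    LazyLock::new(|| Mutex::new(HashMap::new()));

fn set_worker_count(count: usize) {
    println!("active workers: {count}");
}

fn remove_worker(task_id: usize) {
    // one guard covers both the removal and the count update, so no
    // other thread can observe or publish a stale count in between
    let mut locked_worker_tasks = WORKER_TASK_LIST.lock().unwrap();
    locked_worker_tasks.remove(&task_id);
    set_worker_count(locked_worker_tasks.len());
}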

Looks OK besides that, but I would still want to take a closer look.

> +        set_worker_count(lock.len());
>      }
>  
>      /// Log a message.



_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
