From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server v3 3/5] await and kill lingering KVM thread when VM start reaches timeout
Date: Thu, 22 Dec 2022 14:20:28 +0100
Message-ID: <1671714998.zcf0qv7i9n.astroid@yuna.none>
In-Reply-To: <a348bbc4-e5d1-cc6d-b5bc-1e51d5358f2d@proxmox.com>

On December 22, 2022 1:58 pm, Daniel Tschlatscher wrote:

>>>  
>>> -	    my $exitcode = run_command($cmd, %run_params);
>>> -	    if ($exitcode) {
>>> -		if ($tpmpid) {
>>> -		    warn "stopping swtpm instance (pid $tpmpid) due to QEMU startup error\n";
>>> -		    kill 'TERM', $tpmpid;
>>> +	    eval {
>>> +		my $exitcode = run_command($cmd, %run_params);
>>> +
>>> +		if ($exitcode) {
>>> +		    if ($tpmpid) {
>>> +			log_warn "stopping swtpm instance (pid $tpmpid) due to QEMU startup error\n";
>> 
>> this warn -> log_warn change kind of slipped in, it's not really part of this
>> patch?
> 
> Because I changed this line anyway, I changed it to log_warn, as it is
> already imported and, as I understood, is the preferable alternative
> to calling 'warn'.
> Splitting this out into its own patch seems like overkill to me, or
> would you rather suggest something like this should be handled in,
> e.g., a file-encompassing refactoring?
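
right, for reference, the difference in a minimal sketch (assuming the
log_warn exported by PVE::RESTEnvironment, the one qemu-server already
imports; as I understand it, it is additionally tracked by the current
REST environment instead of only going to stderr):

    use strict;
    use warnings;

    use PVE::RESTEnvironment qw(log_warn);

    # plain perl warn: printed to STDERR only
    warn "stopping swtpm instance due to QEMU startup error\n";

    # log_warn: also recorded by the current REST environment, so a
    # running task picks it up as a proper warning in its log
    log_warn "stopping swtpm instance due to QEMU startup error\n";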

ideally it could be sent as a cleanup patch up-front (then it can be applied
even if the rest needs another round ;)) or at least mentioned somewhere
(e.g., in the patch notes). seemingly unrelated changes in a patch always make
me wary that the patch was generated from an unclean tree, or that more or
less than intended was `git add`ed. in this case my guess was that you just
changed that (wrapped) call site to match your newly introduced ones, but it
could also have been an unintentional search+replace result, for example, so
I'd rather ask :)
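
for completeness, the pattern from the hunk above in isolation, as a minimal
sketch with placeholder values (not the actual vm_start code; here noerr => 1
makes run_command return the exit code instead of dying on failure):

    use strict;
    use warnings;

    use PVE::Tools qw(run_command);
    use PVE::RESTEnvironment qw(log_warn);

    my $cmd = ['/usr/bin/kvm', '-id', '100']; # placeholder command
    my %run_params = (noerr => 1); # return the exit code, don't die
    my $tpmpid = undef; # pid of a previously started swtpm, if any

    eval {
        my $exitcode = run_command($cmd, %run_params);
        if ($exitcode) {
            if ($tpmpid) {
                log_warn "stopping swtpm instance (pid $tpmpid) due to QEMU startup error\n";
                kill 'TERM', $tpmpid;
            }
            die "QEMU exited with code $exitcode\n";
        }
    };
    die $@ if $@; # the real patch adds timeout handling around this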

Thread overview: 10+ messages
2022-12-16 13:36 [pve-devel] [PATCH common/qemu-server/manager v3] fix #3502: VM start timeout config parameter Daniel Tschlatscher
2022-12-16 13:36 ` [pve-devel] [PATCH common v3 1/5] VM start timeout config parameter in backend Daniel Tschlatscher
2022-12-16 13:36 ` [pve-devel] [PATCH qemu-server v3 2/5] expose VM start timeout config setting in API Daniel Tschlatscher
2022-12-16 13:36 ` [pve-devel] [PATCH qemu-server v3 3/5] await and kill lingering KVM thread when VM start reaches timeout Daniel Tschlatscher
2022-12-21 11:14   ` Fabian Grünbichler
2022-12-22 12:58     ` Daniel Tschlatscher
2022-12-22 13:20       ` Fabian Grünbichler [this message]
2022-12-22 14:22         ` Daniel Tschlatscher
2022-12-16 13:36 ` [pve-devel] [PATCH qemu-server v3 4/5] make the timeout value editable when the VM is locked Daniel Tschlatscher
2022-12-16 13:36 ` [pve-devel] [PATCH manager v3 5/5] VM start Timeout "Options" parameter in the GUI Daniel Tschlatscher
