From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Aaron Lauterer <a.lauterer@proxmox.com>,
Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use
Date: Fri, 19 Aug 2022 10:20:00 +0200 [thread overview]
Message-ID: <1660896759.rnpqx2mr05.astroid@nora.none> (raw)
In-Reply-To: <ff083011-f422-1d93-d9d5-f667300e03e5@proxmox.com>
On August 18, 2022 5:22 pm, Aaron Lauterer wrote:
>
>
> On 8/17/22 13:42, Fabian Grünbichler wrote:
>> On July 15, 2022 1:58 pm, Aaron Lauterer wrote:
> [...]
>>> - my $worker = sub {
>>> - my $path = "/mnt/pve/$name";
>>> - my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
>>> - my $mountunitpath = "/etc/systemd/system/$mountunitname";
>>> + my $mounted = PVE::Diskmanage::mounted_paths();
>>> + die "a mountpoint for '${name}' already exists: ${path} ($1)\n" if $mounted->{$path};
>>
>> nit: the mountpoint existing is not the issue (something already being
>> mounted there is!). also, where does the $1 come from?
>
> good point and thanks for catching the $1
>>
>>> + die "a systemd mount unit already exists: ${mountunitpath}\n" if -e $mountunitpath;
>>
>> could check if it's identical to the one we'd generate (in the spirit of
>> patch #3 ;))
>
> I looked into it, depending on how hard we want to match the mount unit, this
> could be a bit hard. It contains the /dev/disk/by-uuid/... path which will not
> be the same as it changes with each FS creation (IIUC).
>
makes sense, and no, we definitely don't want any fancy heuristics for
"close enough to ignore it already exists" ;) so this can stay as it is.
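The check ordering discussed above can be sketched as follows. This is a stand-alone illustration, not the actual patch: the mount table and the unit-file check are hypothetical stand-ins for `PVE::Diskmanage::mounted_paths()` and the `-e $mountunitpath` test, and the error strings are illustrative.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical mount table, standing in for PVE::Diskmanage::mounted_paths().
my $mounted = { '/mnt/pve/data' => '/dev/sdb1' };

# Both pre-flight checks from the patch: refuse if something is already
# mounted at the target path, or if a mount unit of that name exists.
sub assert_target_free {
    my ($name, $mounted, $unit_exists) = @_;
    my $path = "/mnt/pve/$name";
    die "something is already mounted at '${path}'\n" if $mounted->{$path};
    die "a systemd mount unit for '${path}' already exists\n" if $unit_exists;
    return $path;
}

# 'data' is rejected because something is recorded as mounted there:
eval { assert_target_free('data', $mounted, 0) };
print "rejected: $@" if $@;

# 'fresh' passes both checks:
print "ok: ", assert_target_free('fresh', $mounted, 0), "\n";
```

Note that the first check guards against an active mount (the actual problem), not against the directory merely existing.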
>>
>>>
>>> + my $worker = sub {
>>> PVE::Diskmanage::locked_disk_action(sub {
>>> PVE::Diskmanage::assert_disk_unused($dev);
>>>
>>> diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
>>> index 6e4331a..a27afe2 100644
>>> --- a/PVE/API2/Disks/LVM.pm
>>> +++ b/PVE/API2/Disks/LVM.pm
>>> @@ -152,6 +152,9 @@ __PACKAGE__->register_method ({
>>> PVE::Diskmanage::assert_disk_unused($dev);
>>> PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>>>
>>> + die "volume group with name '${name}' already exists\n"
>>> + if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
>>
>> probably better off inside the worker, since `vgs` might take a while
>> (although we also use it in the index API call in this module..)
>
> From a GUI perspective: putting it in a worker would result in the user
> hitting OK and then seeing the failed task, right? Keeping it as is will
> result in an error popping up when clicking OK/create, and the user can
> edit the name instead of starting all over again. Though, if `vgs` really
> takes a bit, that error popping up could take a moment or two.
yes, putting it in the worker means no early failure. but not putting it
in the worker potentially means this API endpoint cannot be called at
all on systems with busy LVM (30s timeout for API requests!). so
early-failure checks should almost always only do "logical" things that
are cheap (like reading a conf file and checking invariants), and
nothing that can reasonably block for some time (like storage
operations, starting guests, ..). I know we are not 100% consistent
there (the worst offender is probably the storage content API endpoint
;)), but we should try not to introduce more problems of that kind and
rather work on removing the existing ones.
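The split described above can be sketched as follows. All names here (`fork_worker`, `lvm_vgs`, `storage_cfg`, the error strings) are illustrative stand-ins wired up so the example runs stand-alone, not the actual PVE API: cheap config-level checks run synchronously so the GUI gets an immediate error, while anything that can block (like `vgs`) runs inside the worker.

```perl
#!/usr/bin/perl
use strict;
use warnings;

sub create_vg {
    my ($name, $add_storage, $env) = @_;

    # cheap, synchronous check: config lookup only -> fast error popup in
    # the GUI, no risk of hitting the API request timeout
    die "storage ID '$name' already defined\n"
        if $add_storage && $env->{storage_cfg}{$name};

    # expensive check: querying LVM can block on busy systems, so it goes
    # into the worker together with the actual creation
    return $env->{fork_worker}->(sub {
        my $vgs = $env->{lvm_vgs}->();   # may take a while on busy LVM
        die "volume group '$name' already exists\n" if $vgs->{$name};
        # ... create the VG here ...
        return "created $name";
    });
}

# stand-in environment: worker runs inline so the example is self-contained
my $env = {
    storage_cfg => {},
    lvm_vgs     => sub { return { existing => 1 } },
    fork_worker => sub { return $_[0]->() },
};

print create_vg('newvg', 1, $env), "\n";
```

With this placement, a clashing VG name still fails cleanly, just as a failed task rather than an immediate popup.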
>
> [...]
>
>>> sub preparetree {
>>> @@ -336,6 +340,7 @@ __PACKAGE__->register_method ({
>>> my $user = $rpcenv->get_user();
>>>
>>> my $name = $param->{name};
>>> + my $node = $param->{node};
>>
>> nit: please also change the usage further below in this API handler if
>> you do this
>
> what exactly do you mean? defining local variables for $param->{..} ones?
the usage of $param->{node} below in this handler should (also) switch
to $node - we don't want two places holding the same information; that
can cause subtle breakage in the future when only one of them gets
modified.
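A minimal illustration of that pattern (hypothetical parameter hash and config, not actual handler code): read the parameter into a lexical once, then use only the lexical, so any later normalization or rename happens in a single place.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# hypothetical API parameters
my $param = { node => 'pve1', name => 'tank' };

# read each parameter exactly once ...
my $node = $param->{node};
my $name = $param->{name};

# ... and refer only to the lexicals afterwards, never $param->{node} again
my $storage_cfg = { nodes => $node, pool => $name };
print "configured pool '$storage_cfg->{pool}' for node $storage_cfg->{nodes}\n";
```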
>
>>
>>> my $devs = [PVE::Tools::split_list($param->{devices})];
>>> my $raidlevel = $param->{raidlevel};
>>> my $compression = $param->{compression} // 'on';
>>> @@ -346,6 +351,10 @@ __PACKAGE__->register_method ({
>>> }
>>> PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>>>
>>> + my $pools = get_pool_data();
>>
>> and also here
>>
>>> + die "pool '${name}' already exists on node '$node'\n"
>>> + if grep { $_->{name} eq $name } @{$pools};
>>> +
>>> my $numdisks = scalar(@$devs);
>>> my $mindisks = {
>>> single => 1,
>>> --
>>> 2.30.2
Thread overview: 13+ messages
2022-07-15 11:58 [pve-devel] [PATCH storage v2 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 1/3] diskmanage: add mounted_paths Aaron Lauterer
2022-08-17 11:35 ` Fabian Grünbichler
2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use Aaron Lauterer
2022-08-17 11:42 ` Fabian Grünbichler
2022-08-18 15:22 ` Aaron Lauterer
2022-08-18 15:31 ` Aaron Lauterer
2022-08-19 8:21 ` Fabian Grünbichler
2022-08-19 9:29 ` Aaron Lauterer
2022-08-19 8:20 ` Fabian Grünbichler [this message]
2022-08-19 9:28 ` Aaron Lauterer
2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 3/3] disks: allow add_storage for already configured local storage Aaron Lauterer
2022-08-17 11:53 ` Fabian Grünbichler