public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH storage] fix #3610: properly build ZFS detail tree
@ 2021-09-07  9:30 Fabian Ebner
  2021-09-09 16:24 ` Thomas Lamprecht
  0 siblings, 1 reply; 4+ messages in thread
From: Fabian Ebner @ 2021-09-07  9:30 UTC (permalink / raw)
  To: pve-devel

Previously, top-level vdevs like log or special were wrongly added as
children of the previous outer vdev instead of the root.

Fix it by also showing the vdev with the same name as the pool and by
starting to count from level 1 (the pool itself serves as the root and
should be the only one with level 0). This results in the same kind
of structure as in PBS and (except for the root) in zpool status itself.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
 PVE/API2/Disks/ZFS.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index 0418794..60077c4 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -240,8 +240,8 @@ __PACKAGE__->register_method ({
 		$config = 1;
 	    } elsif ($config && $line =~ m/^(\s+)(\S+)\s*(\S+)?(?:\s+(\S+)\s+(\S+)\s+(\S+))?\s*(.*)$/) {
 		my ($space, $name, $state, $read, $write, $cksum, $msg) = ($1, $2, $3, $4, $5, $6, $7);
-		if ($name ne "NAME" and $name ne $param->{name}) {
-		    my $lvl= int(length($space)/2); # two spaces per level
+		if ($name ne "NAME") {
+		    my $lvl = int(length($space) / 2) + 1; # two spaces per level
 		    my $vdev = {
 			name => $name,
 			msg => $msg,
-- 
2.30.2
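For illustration only, here is a standalone Perl sketch (simplified input and hypothetical names, not the actual PVE/API2/Disks/ZFS.pm code) of the idea behind the patch: deriving the level from the indentation, reserving level 0 for the pool root, and keeping a per-level parent table so that a top-level vdev like 'logs' attaches to the root rather than to the previous vdev's subtree:

```perl
use strict;
use warnings;

# Simplified config lines: two spaces of indentation per level,
# so the pool-named vdev and 'logs' both sit at level 1.
my @lines = (
    "  zpt",        # the vdev sharing the pool's name
    "    mirror-0",
    "      disk-a",
    "      disk-b",
    "  logs",       # top-level vdev: sibling of 'zpt', not its child
    "    disk-c",
);

my $root = { name => 'pool', children => [] };
my @parent = ($root); # $parent[$lvl - 1] is the parent for level $lvl

for my $line (@lines) {
    $line =~ m/^(\s+)(\S+)/ or next;
    my ($space, $name) = ($1, $2);
    my $lvl = int(length($space) / 2); # two spaces per level, starts at 1
    my $vdev = { name => $name, children => [] };
    push @{ $parent[$lvl - 1]->{children} }, $vdev;
    $parent[$lvl] = $vdev; # current parent for any deeper vdevs that follow
}

# 'logs' ends up as a child of the root, next to 'zpt':
print join(',', map { $_->{name} } @{ $root->{children} }), "\n"; # zpt,logs
```

The `+ 1` in the patch plays the same role as the indentation scheme here: it keeps level 0 free for the root, so every level-1 vdev hangs off the root instead of off whatever vdev was parsed last.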


* Re: [pve-devel] [PATCH storage] fix #3610: properly build ZFS detail tree
  2021-09-07  9:30 [pve-devel] [PATCH storage] fix #3610: properly build ZFS detail tree Fabian Ebner
@ 2021-09-09 16:24 ` Thomas Lamprecht
  2021-09-10  8:03   ` Fabian Ebner
  0 siblings, 1 reply; 4+ messages in thread
From: Thomas Lamprecht @ 2021-09-09 16:24 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Ebner

On 07.09.21 11:30, Fabian Ebner wrote:
> Previously, top-level vdevs like log or special were wrongly added as
> children of the previous outer vdev instead of the root.
> 
> Fix it by also showing the vdev with the same name as the pool and by
> starting to count from level 1 (the pool itself serves as the root and
> should be the only one with level 0). This results in the same kind
> of structure as in PBS and (except for the root) in zpool status itself.
> 
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
> ---
>  PVE/API2/Disks/ZFS.pm | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
> index 0418794..60077c4 100644
> --- a/PVE/API2/Disks/ZFS.pm
> +++ b/PVE/API2/Disks/ZFS.pm
> @@ -240,8 +240,8 @@ __PACKAGE__->register_method ({
>  		$config = 1;
>  	    } elsif ($config && $line =~ m/^(\s+)(\S+)\s*(\S+)?(?:\s+(\S+)\s+(\S+)\s+(\S+))?\s*(.*)$/) {
>  		my ($space, $name, $state, $read, $write, $cksum, $msg) = ($1, $2, $3, $4, $5, $6, $7);
> -		if ($name ne "NAME" and $name ne $param->{name}) {
> -		    my $lvl= int(length($space)/2); # two spaces per level
> +		if ($name ne "NAME") {
> +		    my $lvl = int(length($space) / 2) + 1; # two spaces per level
>  		    my $vdev = {
>  			name => $name,
>  			msg => $msg,
> 

hmm, I get the idea and can see how one could argue that this is more correct,
but as presented it'd also be a bit more confusing, IMO, as it no longer
matches the zpool status CLI output.

I.e., the following (real):
>         NAME                                      STATE     READ WRITE CKSUM
>         zpt                                       ONLINE       0     0     0
>           mirror-0                                ONLINE       0     0     0
>             scsi-0QEMU_QEMU_HARDDISK_drive-scsi3  ONLINE       0     0     0
>             scsi-0QEMU_QEMU_HARDDISK_drive-scsi4  ONLINE       0     0     0
>         logs
>           scsi-0QEMU_QEMU_HARDDISK_drive-scsi5    ONLINE       0     0     0


This is suggested to become (adapted):
>         NAME                                      STATE     READ WRITE CKSUM
>         zpt                                       ONLINE
>           zpt                                       ONLINE       0     0     0
>             mirror-0                                ONLINE       0     0     0
>               scsi-0QEMU_QEMU_HARDDISK_drive-scsi3  ONLINE       0     0     0
>               scsi-0QEMU_QEMU_HARDDISK_drive-scsi4  ONLINE       0     0     0
>         logs
>           scsi-0QEMU_QEMU_HARDDISK_drive-scsi5    ONLINE       0     0     0


How about hiding the root in the devices tree and adding a line to the
KV grid above instead? E.g., something that would then render:

Pool       <ID> (<STATUS>)

That could be a GUI-only change; I did not really check the implementation
details, but I'd like to clear that up before applying this patch.


* Re: [pve-devel] [PATCH storage] fix #3610: properly build ZFS detail tree
  2021-09-09 16:24 ` Thomas Lamprecht
@ 2021-09-10  8:03   ` Fabian Ebner
  2021-09-10  8:05     ` Thomas Lamprecht
  0 siblings, 1 reply; 4+ messages in thread
From: Fabian Ebner @ 2021-09-10  8:03 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion

On 09.09.21 at 18:24, Thomas Lamprecht wrote:
> On 07.09.21 11:30, Fabian Ebner wrote:
>> Previously, top-level vdevs like log or special were wrongly added as
>> children of the previous outer vdev instead of the root.
>>
>> Fix it by also showing the vdev with the same name as the pool and by
>> starting to count from level 1 (the pool itself serves as the root and
>> should be the only one with level 0). This results in the same kind
>> of structure as in PBS and (except for the root) in zpool status itself.
>>
>> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
>> ---
>>   PVE/API2/Disks/ZFS.pm | 4 ++--
>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
>> index 0418794..60077c4 100644
>> --- a/PVE/API2/Disks/ZFS.pm
>> +++ b/PVE/API2/Disks/ZFS.pm
>> @@ -240,8 +240,8 @@ __PACKAGE__->register_method ({
>>   		$config = 1;
>>   	    } elsif ($config && $line =~ m/^(\s+)(\S+)\s*(\S+)?(?:\s+(\S+)\s+(\S+)\s+(\S+))?\s*(.*)$/) {
>>   		my ($space, $name, $state, $read, $write, $cksum, $msg) = ($1, $2, $3, $4, $5, $6, $7);
>> -		if ($name ne "NAME" and $name ne $param->{name}) {
>> -		    my $lvl= int(length($space)/2); # two spaces per level
>> +		if ($name ne "NAME") {
>> +		    my $lvl = int(length($space) / 2) + 1; # two spaces per level
>>   		    my $vdev = {
>>   			name => $name,
>>   			msg => $msg,
>>
> 
> hmm, I get the idea and can see how one could argue that this is more correct,
> but as presented it'd also be a bit more confusing, IMO, as it no longer
> matches the zpool status CLI output.
> 
> I.e., the following (real):
>>          NAME                                      STATE     READ WRITE CKSUM
>>          zpt                                       ONLINE       0     0     0
>>            mirror-0                                ONLINE       0     0     0
>>              scsi-0QEMU_QEMU_HARDDISK_drive-scsi3  ONLINE       0     0     0
>>              scsi-0QEMU_QEMU_HARDDISK_drive-scsi4  ONLINE       0     0     0
>>          logs
>>            scsi-0QEMU_QEMU_HARDDISK_drive-scsi5    ONLINE       0     0     0
> 
> 
> This is suggested to become (adapted):
>>          NAME                                      STATE     READ WRITE CKSUM
>>          zpt                                       ONLINE
>>            zpt                                       ONLINE       0     0     0
>>              mirror-0                                ONLINE       0     0     0
>>                scsi-0QEMU_QEMU_HARDDISK_drive-scsi3  ONLINE       0     0     0
>>                scsi-0QEMU_QEMU_HARDDISK_drive-scsi4  ONLINE       0     0     0
>>          logs
>>            scsi-0QEMU_QEMU_HARDDISK_drive-scsi5    ONLINE       0     0     0
> 

Is 'logs' aligned with the outer 'zpt' or the inner 'zpt'? It's intended 
to be the inner one, but in the mail it looks like the outer one to me.

> 
> How about hiding the root in the devices tree and adding a line to the
> KV grid above instead? E.g., something that would then render:
> 
> Pool       <ID> (<STATUS>)
> 
> That could be a GUI-only change; I did not really check the implementation
> details, but I'd like to clear that up before applying this patch.
> 

I suppose that should be done for PBS too then? And PVE should switch to 
using the ZFSDetail window from widget-toolkit?


* Re: [pve-devel] [PATCH storage] fix #3610: properly build ZFS detail tree
  2021-09-10  8:03   ` Fabian Ebner
@ 2021-09-10  8:05     ` Thomas Lamprecht
  0 siblings, 0 replies; 4+ messages in thread
From: Thomas Lamprecht @ 2021-09-10  8:05 UTC (permalink / raw)
  To: Fabian Ebner, Proxmox VE development discussion

On 10.09.21 10:03, Fabian Ebner wrote:
> On 09.09.21 at 18:24, Thomas Lamprecht wrote:
>> hmm, I get the idea and can see how one could argue that this is more correct,
>> but as presented it'd also be a bit more confusing, IMO, as it no longer
>> matches the zpool status CLI output.
>>
>> I.e., the following (real):
>>>          NAME                                      STATE     READ WRITE CKSUM
>>>          zpt                                       ONLINE       0     0     0
>>>            mirror-0                                ONLINE       0     0     0
>>>              scsi-0QEMU_QEMU_HARDDISK_drive-scsi3  ONLINE       0     0     0
>>>              scsi-0QEMU_QEMU_HARDDISK_drive-scsi4  ONLINE       0     0     0
>>>          logs
>>>            scsi-0QEMU_QEMU_HARDDISK_drive-scsi5    ONLINE       0     0     0
>>
>>
>> This is suggested to become (adapted):
>>>          NAME                                      STATE     READ WRITE CKSUM
>>>          zpt                                       ONLINE
>>>            zpt                                       ONLINE       0     0     0
>>>              mirror-0                                ONLINE       0     0     0
>>>                scsi-0QEMU_QEMU_HARDDISK_drive-scsi3  ONLINE       0     0     0
>>>                scsi-0QEMU_QEMU_HARDDISK_drive-scsi4  ONLINE       0     0     0
>>>          logs
>>>            scsi-0QEMU_QEMU_HARDDISK_drive-scsi5    ONLINE       0     0     0
>>
> 
> Is 'logs' aligned with the outer 'zpt' or the inner 'zpt'? It's intended to be the inner one, but in the mail it looks like the outer one to me.
> 

inner, so that's OK and will stay that way if we drop the root node.

>>
>> How about hiding the root in the devices tree and adding a line to the
>> KV grid above instead? E.g., something that would then render:
>>
>> Pool       <ID> (<STATUS>)
>>
>> That could be a GUI-only change; I did not really check the implementation
>> details, but I'd like to clear that up before applying this patch.
>>
> 
> I suppose that should be done for PBS too then? And PVE should switch to using the ZFSDetail window from widget-toolkit?

yes to both (ideally).


end of thread, other threads:[~2021-09-10  8:06 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-07  9:30 [pve-devel] [PATCH storage] fix #3610: properly build ZFS detail tree Fabian Ebner
2021-09-09 16:24 ` Thomas Lamprecht
2021-09-10  8:03   ` Fabian Ebner
2021-09-10  8:05     ` Thomas Lamprecht

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox
Service provided by Proxmox Server Solutions GmbH