public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint
@ 2022-06-08  8:51 Hannes Laimer
  2022-06-08  8:51 ` [pbs-devel] [PATCH proxmox-backup v2 1/3] api2: disks endpoint return partitions Hannes Laimer
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Hannes Laimer @ 2022-06-08  8:51 UTC (permalink / raw)
  To: pbs-devel

In order to work with the existing frontend component for displaying the
disk list, either
* the partition data has to be returned with the same struct a disk is
  returned with, which leads to the same struct being used for different
  things -> quite a few fields are always empty for partitions and a new
  'type' field would be needed. Also, the code structure in PBS has to be
  changed quite a bit.
* the existing frontend has to be adjusted to handle data from PVE and
  PBS properly.

I went with the second option because the adjustments needed in the UI
component were minimal and, IMHO, adjusting the API to fit the UI is the
wrong direction.

For the mount status to be shown in the UI, the patch[1] sent to pve-devel
for the 'mounted' column is needed.

NOTE: The partition data will be needed in later patches for removable
datastores.

v1->v2:
 * use builder pattern for queries as suggested by Wolfgang
 * move mounted out of Enum and add filesystem string
 * add missing 'zfsreserved' usage type
 * add 'mounted' column to disklist (separate patch[1])


[1] https://lists.proxmox.com/pipermail/pve-devel/2022-June/053211.html

* proxmox-backup
Hannes Laimer (3):
  api2: disks endpoint return partitions
  disks: use builder pattern for querying disk usage
  ui: disks: show partitions by default

 src/api2/node/disks/directory.rs |   6 +-
 src/api2/node/disks/mod.rs       |  19 ++-
 src/api2/node/disks/zfs.rs       |   2 +-
 src/tools/disks/mod.rs           | 206 +++++++++++++++++++++++++++++--
 www/panel/StorageAndDisks.js     |   1 +
 5 files changed, 215 insertions(+), 19 deletions(-)

* proxmox-widget-toolkit
Hannes Laimer (1):
  ui: DiskLisk: handle partition data from PBS backend

 src/panel/DiskList.js | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

-- 
2.30.2





^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pbs-devel] [PATCH proxmox-backup v2 1/3] api2: disks endpoint return partitions
  2022-06-08  8:51 [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint Hannes Laimer
@ 2022-06-08  8:51 ` Hannes Laimer
  2022-06-08  8:51 ` [pbs-devel] [PATCH proxmox-backup v2 2/3] disks: use builder pattern for querying disk usage Hannes Laimer
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Hannes Laimer @ 2022-06-08  8:51 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
 src/api2/node/disks/directory.rs |   2 +-
 src/api2/node/disks/mod.rs       |  11 ++-
 src/api2/node/disks/zfs.rs       |   2 +-
 src/tools/disks/mod.rs           | 132 ++++++++++++++++++++++++++++++-
 4 files changed, 140 insertions(+), 7 deletions(-)

diff --git a/src/api2/node/disks/directory.rs b/src/api2/node/disks/directory.rs
index f4d85d0a..123a8d7b 100644
--- a/src/api2/node/disks/directory.rs
+++ b/src/api2/node/disks/directory.rs
@@ -147,7 +147,7 @@ pub fn create_datastore_disk(
 
     let auth_id = rpcenv.get_auth_id().unwrap();
 
-    let info = get_disk_usage_info(&disk, true)?;
+    let info = get_disk_usage_info(&disk, true, false)?;
 
     if info.used != DiskUsageType::Unused {
         bail!("disk '{}' is already in use.", disk);
diff --git a/src/api2/node/disks/mod.rs b/src/api2/node/disks/mod.rs
index c44ccfee..478829fb 100644
--- a/src/api2/node/disks/mod.rs
+++ b/src/api2/node/disks/mod.rs
@@ -33,6 +33,12 @@ pub mod zfs;
                 optional: true,
                 default: false,
             },
+            "include-partitions": {
+                description: "Include partitions.",
+                type: bool,
+                optional: true,
+                default: false,
+            },
             "usage-type": {
                 type: DiskUsageType,
                 optional: true,
@@ -53,11 +59,12 @@ pub mod zfs;
 /// List local disks
 pub fn list_disks(
     skipsmart: bool,
+    include_partitions: bool,
     usage_type: Option<DiskUsageType>,
 ) -> Result<Vec<DiskUsageInfo>, Error> {
     let mut list = Vec::new();
 
-    for (_, info) in get_disks(None, skipsmart)? {
+    for (_, info) in get_disks(None, skipsmart, include_partitions)? {
         if let Some(ref usage_type) = usage_type {
             if info.used == *usage_type {
                 list.push(info);
@@ -140,7 +147,7 @@ pub fn initialize_disk(
 
     let auth_id = rpcenv.get_auth_id().unwrap();
 
-    let info = get_disk_usage_info(&disk, true)?;
+    let info = get_disk_usage_info(&disk, true, false)?;
 
     if info.used != DiskUsageType::Unused {
         bail!("disk '{}' is already in use.", disk);
diff --git a/src/api2/node/disks/zfs.rs b/src/api2/node/disks/zfs.rs
index dac6f535..5cb23e70 100644
--- a/src/api2/node/disks/zfs.rs
+++ b/src/api2/node/disks/zfs.rs
@@ -174,7 +174,7 @@ pub fn create_zpool(
         .map(|v| v.as_str().unwrap().to_string())
         .collect();
 
-    let disk_map = crate::tools::disks::get_disks(None, true)?;
+    let disk_map = crate::tools::disks::get_disks(None, true, false)?;
     for disk in devices.iter() {
         match disk_map.get(disk) {
             Some(info) => {
diff --git a/src/tools/disks/mod.rs b/src/tools/disks/mod.rs
index 568dccbf..ea4c687a 100644
--- a/src/tools/disks/mod.rs
+++ b/src/tools/disks/mod.rs
@@ -602,6 +602,26 @@ fn get_file_system_devices(lsblk_info: &[LsblkInfo]) -> Result<HashSet<u64>, Err
     Ok(device_set)
 }
 
+#[api()]
+#[derive(Debug, Serialize, Deserialize, PartialEq)]
+#[serde(rename_all = "lowercase")]
+pub enum PartitionUsageType {
+    /// Partition is not used (as far as we can tell)
+    Unused,
+    /// Partition is used by LVM
+    LVM,
+    /// Partition is used by ZFS
+    ZFS,
+    /// Partition is ZFS reserved
+    ZfsReserved,
+    /// Partition is an EFI partition
+    EFI,
+    /// Partition is a BIOS partition
+    BIOS,
+    /// Partition contains a file system label
+    FileSystem,
+}
+
 #[api()]
 #[derive(Debug, Serialize, Deserialize, PartialEq)]
 #[serde(rename_all = "lowercase")]
@@ -622,6 +642,27 @@ pub enum DiskUsageType {
     FileSystem,
 }
 
+#[api()]
+#[derive(Debug, Serialize, Deserialize)]
+#[serde(rename_all = "kebab-case")]
+/// Basic information about a partition
+pub struct PartitionInfo {
+    /// The partition name
+    pub name: String,
+    /// What the partition is used for
+    pub used: PartitionUsageType,
+    /// Is the partition mounted
+    pub mounted: bool,
+    /// The filesystem of the partition
+    pub filesystem: Option<String>,
+    /// The partition devpath
+    pub devpath: Option<String>,
+    /// Size in bytes
+    pub size: Option<u64>,
+    /// GPT partition
+    pub gpt: bool,
+}
+
 #[api(
     properties: {
         used: {
@@ -632,6 +673,12 @@ pub enum DiskUsageType {
         },
         status: {
             type: SmartStatus,
+        },
+        partitions: {
+            optional: true,
+            items: {
+                type: PartitionInfo
+            }
         }
     }
 )]
@@ -656,6 +703,8 @@ pub struct DiskUsageInfo {
     pub size: u64,
     /// Serisal number
     pub serial: Option<String>,
+    /// Partitions on the device
+    pub partitions: Option<Vec<PartitionInfo>>,
     /// Linux device path (/dev/xxx)
     pub devpath: Option<String>,
     /// Set if disk contains a GPT partition table
@@ -733,10 +782,14 @@ fn scan_partitions(
 }
 
 /// Get disk usage information for a single disk
-pub fn get_disk_usage_info(disk: &str, no_smart: bool) -> Result<DiskUsageInfo, Error> {
+pub fn get_disk_usage_info(
+    disk: &str,
+    no_smart: bool,
+    include_partitions: bool,
+) -> Result<DiskUsageInfo, Error> {
     let mut filter = Vec::new();
     filter.push(disk.to_string());
-    let mut map = get_disks(Some(filter), no_smart)?;
+    let mut map = get_disks(Some(filter), no_smart, include_partitions)?;
     if let Some(info) = map.remove(disk) {
         Ok(info)
     } else {
@@ -744,12 +797,72 @@ pub fn get_disk_usage_info(disk: &str, no_smart: bool) -> Result<DiskUsageInfo,
     }
 }
 
+fn get_partitions_info(
+    partitions: HashMap<u64, Disk>,
+    lvm_devices: &HashSet<u64>,
+    zfs_devices: &HashSet<u64>,
+    file_system_devices: &HashSet<u64>,
+) -> Vec<PartitionInfo> {
+    let lsblk_infos = get_lsblk_info().ok();
+    partitions
+        .iter()
+        .map(|(_nr, disk)| {
+            let devpath = disk
+                .device_path()
+                .map(|p| p.to_owned())
+                .map(|p| p.to_string_lossy().to_string());
+
+            let mut used = PartitionUsageType::Unused;
+
+            if let Some(devnum) = disk.devnum().ok() {
+                if lvm_devices.contains(&devnum) {
+                    used = PartitionUsageType::LVM;
+                } else if zfs_devices.contains(&devnum) {
+                    used = PartitionUsageType::ZFS;
+                } else if file_system_devices.contains(&devnum) {
+                    used = PartitionUsageType::FileSystem;
+                }
+            }
+
+            let mounted = disk.is_mounted().unwrap_or(false);
+            let mut filesystem = None;
+            if let (Some(devpath), Some(infos)) = (devpath.as_ref(), lsblk_infos.as_ref()) {
+                for info in infos.iter().filter(|i| i.path.eq(devpath)) {
+                    used = match info.partition_type.as_deref() {
+                        Some("21686148-6449-6e6f-744e-656564454649") => PartitionUsageType::BIOS,
+                        Some("c12a7328-f81f-11d2-ba4b-00a0c93ec93b") => PartitionUsageType::EFI,
+                        Some("6a945a3b-1dd2-11b2-99a6-080020736631") => {
+                            PartitionUsageType::ZfsReserved
+                        }
+                        _ => used,
+                    };
+                    if used == PartitionUsageType::FileSystem {
+                        filesystem = info.file_system_type.clone();
+                    }
+                }
+            }
+
+            PartitionInfo {
+                name: disk.sysname().to_str().unwrap_or("?").to_string(),
+                devpath,
+                used,
+                mounted,
+                filesystem,
+                size: disk.size().ok(),
+                gpt: disk.has_gpt(),
+            }
+        })
+        .collect()
+}
+
 /// Get disk usage information for multiple disks
 pub fn get_disks(
     // filter - list of device names (without leading /dev)
     disks: Option<Vec<String>>,
     // do no include data from smartctl
     no_smart: bool,
+    // include partitions
+    include_partitions: bool,
 ) -> Result<HashMap<String, DiskUsageInfo>, Error> {
     let disk_manager = DiskManage::new();
 
@@ -837,6 +950,19 @@ pub fn get_disks(
 
         let wwn = disk.wwn().map(|s| s.to_string_lossy().into_owned());
 
+        let partitions: Option<Vec<PartitionInfo>> = if include_partitions {
+            disk.partitions().map_or(None, |parts| {
+                Some(get_partitions_info(
+                    parts,
+                    &lvm_devices,
+                    &zfs_devices,
+                    &file_system_devices,
+                ))
+            })
+        } else {
+            None
+        };
+
         if usage != DiskUsageType::Mounted {
             match scan_partitions(disk_manager.clone(), &lvm_devices, &zfs_devices, &name) {
                 Ok(part_usage) => {
@@ -870,6 +996,7 @@ pub fn get_disks(
             name: name.clone(),
             vendor,
             model,
+            partitions,
             serial,
             devpath,
             size,
@@ -989,7 +1116,6 @@ pub fn create_file_system(disk: &Disk, fs_type: FileSystemType) -> Result<(), Er
 
     Ok(())
 }
-
 /// Block device name completion helper
 pub fn complete_disk_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
     let dir =
-- 
2.30.2





^ permalink raw reply	[flat|nested] 14+ messages in thread
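
The usage detection in patch 1/3 above matches well-known GPT partition
type GUIDs (BIOS boot, EFI system, Solaris/ZFS reserved). A condensed,
self-contained sketch of that matching step — the enum is simplified and
the helper name is my own, not part of the patch:

```rust
// Map a GPT partition type GUID (as reported by lsblk) to a usage type,
// keeping the previously detected usage when the GUID is not one of the
// special cases. GUIDs are taken verbatim from the patch hunk above.
#[derive(Debug, PartialEq, Clone, Copy)]
enum PartitionUsageType {
    Unused,
    Bios,
    Efi,
    ZfsReserved,
}

fn usage_from_partition_type(
    part_type: Option<&str>,
    current: PartitionUsageType,
) -> PartitionUsageType {
    match part_type {
        // "Hah!IdontNeedEFI" - BIOS boot partition
        Some("21686148-6449-6e6f-744e-656564454649") => PartitionUsageType::Bios,
        // EFI system partition
        Some("c12a7328-f81f-11d2-ba4b-00a0c93ec93b") => PartitionUsageType::Efi,
        // Solaris/ZFS reserved partition
        Some("6a945a3b-1dd2-11b2-99a6-080020736631") => PartitionUsageType::ZfsReserved,
        _ => current,
    }
}

fn main() {
    let usage = usage_from_partition_type(
        Some("c12a7328-f81f-11d2-ba4b-00a0c93ec93b"),
        PartitionUsageType::Unused,
    );
    println!("{:?}", usage);
}
```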

* [pbs-devel] [PATCH proxmox-backup v2 2/3] disks: use builder pattern for querying disk usage
  2022-06-08  8:51 [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint Hannes Laimer
  2022-06-08  8:51 ` [pbs-devel] [PATCH proxmox-backup v2 1/3] api2: disks endpoint return partitions Hannes Laimer
@ 2022-06-08  8:51 ` Hannes Laimer
  2022-06-09 10:38   ` Wolfgang Bumiller
  2022-06-08  8:51 ` [pbs-devel] [PATCH proxmox-backup v2 3/3] ui: disks: show partitions by default Hannes Laimer
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Hannes Laimer @ 2022-06-08  8:51 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
 src/api2/node/disks/directory.rs |  6 +--
 src/api2/node/disks/mod.rs       | 12 +++--
 src/api2/node/disks/zfs.rs       |  2 +-
 src/tools/disks/mod.rs           | 86 ++++++++++++++++++++++++++------
 4 files changed, 84 insertions(+), 22 deletions(-)

diff --git a/src/api2/node/disks/directory.rs b/src/api2/node/disks/directory.rs
index 123a8d7b..cb2799fa 100644
--- a/src/api2/node/disks/directory.rs
+++ b/src/api2/node/disks/directory.rs
@@ -13,8 +13,8 @@ use pbs_api_types::{
 };
 
 use crate::tools::disks::{
-    create_file_system, create_single_linux_partition, get_disk_usage_info, get_fs_uuid,
-    DiskManage, DiskUsageType, FileSystemType,
+    create_file_system, create_single_linux_partition, get_fs_uuid, DiskManage, DiskUsageQuery,
+    DiskUsageType, FileSystemType,
 };
 use crate::tools::systemd::{self, types::*};
 
@@ -147,7 +147,7 @@ pub fn create_datastore_disk(
 
     let auth_id = rpcenv.get_auth_id().unwrap();
 
-    let info = get_disk_usage_info(&disk, true, false)?;
+    let info = DiskUsageQuery::new(&disk).smart(false).query()?;
 
     if info.used != DiskUsageType::Unused {
         bail!("disk '{}' is already in use.", disk);
diff --git a/src/api2/node/disks/mod.rs b/src/api2/node/disks/mod.rs
index 478829fb..2bd3f859 100644
--- a/src/api2/node/disks/mod.rs
+++ b/src/api2/node/disks/mod.rs
@@ -12,8 +12,8 @@ use pbs_api_types::{
 };
 
 use crate::tools::disks::{
-    get_disk_usage_info, get_disks, get_smart_data, inititialize_gpt_disk, DiskManage,
-    DiskUsageInfo, DiskUsageType, SmartData,
+    get_smart_data, inititialize_gpt_disk, DiskManage, DiskUsageInfo, DiskUsageQuery,
+    DiskUsageType, DisksUsageQuery, SmartData,
 };
 use proxmox_rest_server::WorkerTask;
 
@@ -64,7 +64,11 @@ pub fn list_disks(
 ) -> Result<Vec<DiskUsageInfo>, Error> {
     let mut list = Vec::new();
 
-    for (_, info) in get_disks(None, skipsmart, include_partitions)? {
+    for (_, info) in DisksUsageQuery::new()
+        .smart(!skipsmart)
+        .partitions(include_partitions)
+        .query()?
+    {
         if let Some(ref usage_type) = usage_type {
             if info.used == *usage_type {
                 list.push(info);
@@ -147,7 +151,7 @@ pub fn initialize_disk(
 
     let auth_id = rpcenv.get_auth_id().unwrap();
 
-    let info = get_disk_usage_info(&disk, true, false)?;
+    let info = DiskUsageQuery::new(&disk).query()?;
 
     if info.used != DiskUsageType::Unused {
         bail!("disk '{}' is already in use.", disk);
diff --git a/src/api2/node/disks/zfs.rs b/src/api2/node/disks/zfs.rs
index 5cb23e70..a2d6ff0f 100644
--- a/src/api2/node/disks/zfs.rs
+++ b/src/api2/node/disks/zfs.rs
@@ -174,7 +174,7 @@ pub fn create_zpool(
         .map(|v| v.as_str().unwrap().to_string())
         .collect();
 
-    let disk_map = crate::tools::disks::get_disks(None, true, false)?;
+    let disk_map = crate::tools::disks::DisksUsageQuery::new().query()?;
     for disk in devices.iter() {
         match disk_map.get(disk) {
             Some(info) => {
diff --git a/src/tools/disks/mod.rs b/src/tools/disks/mod.rs
index ea4c687a..a1436fc3 100644
--- a/src/tools/disks/mod.rs
+++ b/src/tools/disks/mod.rs
@@ -781,19 +781,77 @@ fn scan_partitions(
     Ok(used)
 }
 
-/// Get disk usage information for a single disk
-pub fn get_disk_usage_info(
-    disk: &str,
-    no_smart: bool,
-    include_partitions: bool,
-) -> Result<DiskUsageInfo, Error> {
-    let mut filter = Vec::new();
-    filter.push(disk.to_string());
-    let mut map = get_disks(Some(filter), no_smart, include_partitions)?;
-    if let Some(info) = map.remove(disk) {
-        Ok(info)
-    } else {
-        bail!("failed to get disk usage info - internal error"); // should not happen
+pub struct DisksUsageQuery {
+    disks: Option<Vec<String>>,
+    smart: bool,
+    partitions: bool,
+}
+
+impl DisksUsageQuery {
+    pub fn new() -> DisksUsageQuery {
+        DisksUsageQuery {
+            disks: None,
+            smart: true,
+            partitions: false,
+        }
+    }
+
+    pub fn disks<'a>(&'a mut self, disks: Vec<String>) -> &'a mut DisksUsageQuery {
+        self.disks = Some(disks);
+        self
+    }
+
+    pub fn smart<'a>(&'a mut self, smart: bool) -> &'a mut DisksUsageQuery {
+        self.smart = smart;
+        self
+    }
+
+    pub fn partitions<'a>(&'a mut self, partitions: bool) -> &'a mut DisksUsageQuery {
+        self.partitions = partitions;
+        self
+    }
+
+    pub fn query(&self) -> Result<HashMap<String, DiskUsageInfo>, Error> {
+        get_disks(self.disks.clone(), !self.smart, self.partitions)
+    }
+}
+
+pub struct DiskUsageQuery {
+    disk: String,
+    smart: bool,
+    partitions: bool,
+}
+
+impl DiskUsageQuery {
+    pub fn new(disk: &str) -> DiskUsageQuery {
+        DiskUsageQuery {
+            disk: String::from(disk),
+            smart: true,
+            partitions: false,
+        }
+    }
+
+    pub fn smart<'a>(&'a mut self, smart: bool) -> &'a mut DiskUsageQuery {
+        self.smart = smart;
+        self
+    }
+
+    pub fn partitions<'a>(&'a mut self, partitions: bool) -> &'a mut DiskUsageQuery {
+        self.partitions = partitions;
+        self
+    }
+
+    pub fn query(&self) -> Result<DiskUsageInfo, Error> {
+        let mut map = DisksUsageQuery::new()
+            .disks(vec![self.disk.clone()])
+            .smart(self.smart)
+            .partitions(self.partitions)
+            .query()?;
+        if let Some(info) = map.remove(&self.disk) {
+            Ok(info)
+        } else {
+            bail!("failed to get disk usage info - internal error"); // should not happen
+        }
     }
 }
 
@@ -856,7 +914,7 @@ fn get_partitions_info(
 }
 
 /// Get disk usage information for multiple disks
-pub fn get_disks(
+fn get_disks(
     // filter - list of device names (without leading /dev)
     disks: Option<Vec<String>>,
     // do no include data from smartctl
-- 
2.30.2





^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pbs-devel] [PATCH proxmox-backup v2 3/3] ui: disks: show partitions by default
  2022-06-08  8:51 [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint Hannes Laimer
  2022-06-08  8:51 ` [pbs-devel] [PATCH proxmox-backup v2 1/3] api2: disks endpoint return partitions Hannes Laimer
  2022-06-08  8:51 ` [pbs-devel] [PATCH proxmox-backup v2 2/3] disks: use builder pattern for querying disk usage Hannes Laimer
@ 2022-06-08  8:51 ` Hannes Laimer
  2022-06-08  8:51 ` [pbs-devel] [PATCH proxmox-widget-toolkit v2 1/1] ui: DiskLisk: handle partition data from PBS backend Hannes Laimer
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Hannes Laimer @ 2022-06-08  8:51 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
 www/panel/StorageAndDisks.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/www/panel/StorageAndDisks.js b/www/panel/StorageAndDisks.js
index 5fd6faf4..7bd7042c 100644
--- a/www/panel/StorageAndDisks.js
+++ b/www/panel/StorageAndDisks.js
@@ -16,6 +16,7 @@ Ext.define('PBS.StorageAndDiskPanel', {
 	{
 	    xtype: 'pmxDiskList',
 	    title: gettext('Disks'),
+	    includePartitions: true,
 	    itemId: 'disks',
 	    iconCls: 'fa fa-hdd-o',
 	},
-- 
2.30.2





^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pbs-devel] [PATCH proxmox-widget-toolkit v2 1/1] ui: DiskLisk: handle partition data from PBS backend
  2022-06-08  8:51 [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint Hannes Laimer
                   ` (2 preceding siblings ...)
  2022-06-08  8:51 ` [pbs-devel] [PATCH proxmox-backup v2 3/3] ui: disks: show partitions by default Hannes Laimer
@ 2022-06-08  8:51 ` Hannes Laimer
  2022-06-15  8:58   ` [pbs-devel] applied: " Wolfgang Bumiller
  2022-06-15  7:58 ` [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint Wolfgang Bumiller
  2022-06-15  9:09 ` [pbs-devel] applied-series: [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint Wolfgang Bumiller
  5 siblings, 1 reply; 14+ messages in thread
From: Hannes Laimer @ 2022-06-08  8:51 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
 src/panel/DiskList.js | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/src/panel/DiskList.js b/src/panel/DiskList.js
index 76d92cd..c5a0050 100644
--- a/src/panel/DiskList.js
+++ b/src/panel/DiskList.js
@@ -167,10 +167,19 @@ Ext.define('Proxmox.DiskList', {
 
 	    for (const item of records) {
 		let data = item.data;
-		data.leaf = true;
 		data.expanded = true;
-		data.children = [];
+		data.children = data.partitions ?? [];
+		for (let p of data.children) {
+		    p['disk-type'] = 'partition';
+		    p.iconCls = 'fa fa-fw fa-hdd-o x-fa-tree';
+		    p.used = p.used === 'filesystem' ? p.filesystem : p.used;
+		    p.parent = data.devpath;
+		    p.children = [];
+		    p.leaf = true;
+		}
 		data.iconCls = 'fa fa-fw fa-hdd-o x-fa-tree';
+		data.leaf = data.children.length === 0;
+
 		if (!data.parent) {
 		    disks[data.devpath] = data;
 		}
@@ -227,6 +236,15 @@ Ext.define('Proxmox.DiskList', {
 		extendedInfo = `, Ceph (${types.join(', ')})`;
 	    }
 	}
+	const formatMap = {
+	    'bios': 'BIOS boot',
+	    'zfsreserved': 'ZFS reserved',
+	    'efi': 'EFI',
+	    'lvm': 'LVM',
+	    'zfs': 'ZFS',
+	};
+
+	v = formatMap[v] || v;
 	return v ? `${v}${extendedInfo}` : Proxmox.Utils.noText;
     },
 
-- 
2.30.2





^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup v2 2/3] disks: use builder pattern for querying disk usage
  2022-06-08  8:51 ` [pbs-devel] [PATCH proxmox-backup v2 2/3] disks: use builder pattern for querying disk usage Hannes Laimer
@ 2022-06-09 10:38   ` Wolfgang Bumiller
  2022-06-15  6:07     ` [pbs-devel] [PATCH proxmox-backup v3 " Hannes Laimer
  2022-06-15  6:17     ` Hannes Laimer
  0 siblings, 2 replies; 14+ messages in thread
From: Wolfgang Bumiller @ 2022-06-09 10:38 UTC (permalink / raw)
  To: Hannes Laimer; +Cc: pbs-devel

On Wed, Jun 08, 2022 at 08:51:52AM +0000, Hannes Laimer wrote:
(...)
> diff --git a/src/tools/disks/mod.rs b/src/tools/disks/mod.rs
> index ea4c687a..a1436fc3 100644
> --- a/src/tools/disks/mod.rs
> +++ b/src/tools/disks/mod.rs
> @@ -781,19 +781,77 @@ fn scan_partitions(
>      Ok(used)
>  }
>  
> -/// Get disk usage information for a single disk
> -pub fn get_disk_usage_info(
> -    disk: &str,
> -    no_smart: bool,
> -    include_partitions: bool,
> -) -> Result<DiskUsageInfo, Error> {
> -    let mut filter = Vec::new();
> -    filter.push(disk.to_string());
> -    let mut map = get_disks(Some(filter), no_smart, include_partitions)?;
> -    if let Some(info) = map.remove(disk) {
> -        Ok(info)
> -    } else {
> -        bail!("failed to get disk usage info - internal error"); // should not happen
> +pub struct DisksUsageQuery {

Instead of having two types named almost the same with the same
parameters apart from disks, I think it would make more sense to just
have 2 'query' methods on a single type and pass the disk list to one of
the query functions.

Also, it probably makes sense to use a slice (&[String]) instead of an owned
vector anyway in get_disks.
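
A minimal sketch of that suggestion (names and the exact shape are my
assumptions here, not the final patch; DiskUsageInfo is stubbed as a pair
of the two content flags so the example is self-contained):

```rust
// One builder type instead of two near-identical structs: the builder
// methods choose what each result contains, and the two query methods
// choose which disks are looked at. `find_all` takes a slice (&[String])
// rather than an owned Vec, as suggested.
use std::collections::HashMap;

pub struct DiskUsageQuery {
    smart: bool,
    partitions: bool,
}

impl DiskUsageQuery {
    pub fn new() -> Self {
        DiskUsageQuery { smart: true, partitions: false }
    }

    // Lifetimes elided and `Self` used, per the review comment above.
    pub fn smart(&mut self, smart: bool) -> &mut Self {
        self.smart = smart;
        self
    }

    pub fn partitions(&mut self, partitions: bool) -> &mut Self {
        self.partitions = partitions;
        self
    }

    /// Query all disks (a real implementation would scan the system here).
    pub fn query(&self) -> HashMap<String, (bool, bool)> {
        HashMap::new()
    }

    /// Query only the given disks; stubbed to echo the content flags.
    pub fn find_all(&self, disks: &[String]) -> HashMap<String, (bool, bool)> {
        disks
            .iter()
            .map(|d| (d.clone(), (self.smart, self.partitions)))
            .collect()
    }
}

fn main() {
    let map = DiskUsageQuery::new()
        .smart(false)
        .partitions(true)
        .find_all(&["sda".to_string()]);
    println!("{:?}", map.get("sda"));
}
```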

> +    disks: Option<Vec<String>>,
> +    smart: bool,
> +    partitions: bool,
> +}
> +
> +impl DisksUsageQuery {
> +    pub fn new() -> DisksUsageQuery {
> +        DisksUsageQuery {
> +            disks: None,
> +            smart: true,
> +            partitions: false,
> +        }
> +    }
> +
> +    pub fn disks<'a>(&'a mut self, disks: Vec<String>) -> &'a mut DisksUsageQuery {

You don't need to name the lifetimes at all here (in any of the
methods), and you can use `Self` for the type (`&mut Self`).




^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pbs-devel] [PATCH proxmox-backup v3 2/3] disks: use builder pattern for querying disk usage
  2022-06-09 10:38   ` Wolfgang Bumiller
@ 2022-06-15  6:07     ` Hannes Laimer
  2022-06-15  6:17     ` Hannes Laimer
  1 sibling, 0 replies; 14+ messages in thread
From: Hannes Laimer @ 2022-06-15  6:07 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
v2, thanks @Wolfgang Bumiller <w.bumiller@proxmox.com>
 * use just one struct for querying
 * remove not needed lifetimes

the arguments of the two 'find' functions are for filtering the data, and
the builder functions define what data the (maybe filtered) resulting
elements should contain.

 src/api2/node/disks/directory.rs |  6 ++--
 src/api2/node/disks/mod.rs       | 12 +++++---
 src/api2/node/disks/zfs.rs       |  2 +-
 src/tools/disks/mod.rs           | 53 +++++++++++++++++++++++---------
 4 files changed, 51 insertions(+), 22 deletions(-)

diff --git a/src/api2/node/disks/directory.rs b/src/api2/node/disks/directory.rs
index 123a8d7b..cada95cd 100644
--- a/src/api2/node/disks/directory.rs
+++ b/src/api2/node/disks/directory.rs
@@ -13,8 +13,8 @@ use pbs_api_types::{
 };
 
 use crate::tools::disks::{
-    create_file_system, create_single_linux_partition, get_disk_usage_info, get_fs_uuid,
-    DiskManage, DiskUsageType, FileSystemType,
+    create_file_system, create_single_linux_partition, get_fs_uuid, DiskManage, DiskUsageQuery,
+    DiskUsageType, FileSystemType,
 };
 use crate::tools::systemd::{self, types::*};
 
@@ -147,7 +147,7 @@ pub fn create_datastore_disk(
 
     let auth_id = rpcenv.get_auth_id().unwrap();
 
-    let info = get_disk_usage_info(&disk, true, false)?;
+    let info = DiskUsageQuery::new().smart(false).find(&disk)?;
 
     if info.used != DiskUsageType::Unused {
         bail!("disk '{}' is already in use.", disk);
diff --git a/src/api2/node/disks/mod.rs b/src/api2/node/disks/mod.rs
index 478829fb..c16be46a 100644
--- a/src/api2/node/disks/mod.rs
+++ b/src/api2/node/disks/mod.rs
@@ -12,8 +12,8 @@ use pbs_api_types::{
 };
 
 use crate::tools::disks::{
-    get_disk_usage_info, get_disks, get_smart_data, inititialize_gpt_disk, DiskManage,
-    DiskUsageInfo, DiskUsageType, SmartData,
+    get_smart_data, inititialize_gpt_disk, DiskManage, DiskUsageInfo, DiskUsageQuery,
+    DiskUsageType, SmartData,
 };
 use proxmox_rest_server::WorkerTask;
 
@@ -64,7 +64,11 @@ pub fn list_disks(
 ) -> Result<Vec<DiskUsageInfo>, Error> {
     let mut list = Vec::new();
 
-    for (_, info) in get_disks(None, skipsmart, include_partitions)? {
+    for (_, info) in DiskUsageQuery::new()
+        .smart(!skipsmart)
+        .partitions(include_partitions)
+        .query()?
+    {
         if let Some(ref usage_type) = usage_type {
             if info.used == *usage_type {
                 list.push(info);
@@ -147,7 +151,7 @@ pub fn initialize_disk(
 
     let auth_id = rpcenv.get_auth_id().unwrap();
 
-    let info = get_disk_usage_info(&disk, true, false)?;
+    let info = DiskUsageQuery::new().find(&disk)?;
 
     if info.used != DiskUsageType::Unused {
         bail!("disk '{}' is already in use.", disk);
diff --git a/src/api2/node/disks/zfs.rs b/src/api2/node/disks/zfs.rs
index 5cb23e70..3efc8a05 100644
--- a/src/api2/node/disks/zfs.rs
+++ b/src/api2/node/disks/zfs.rs
@@ -174,7 +174,7 @@ pub fn create_zpool(
         .map(|v| v.as_str().unwrap().to_string())
         .collect();
 
-    let disk_map = crate::tools::disks::get_disks(None, true, false)?;
+    let disk_map = crate::tools::disks::DiskUsageQuery::new().query()?;
     for disk in devices.iter() {
         match disk_map.get(disk) {
             Some(info) => {
diff --git a/src/tools/disks/mod.rs b/src/tools/disks/mod.rs
index 61e0f17a..4b2d638b 100644
--- a/src/tools/disks/mod.rs
+++ b/src/tools/disks/mod.rs
@@ -763,19 +763,44 @@ fn scan_partitions(
     Ok(used)
 }
 
-/// Get disk usage information for a single disk
-pub fn get_disk_usage_info(
-    disk: &str,
-    no_smart: bool,
-    include_partitions: bool,
-) -> Result<DiskUsageInfo, Error> {
-    let mut filter = Vec::new();
-    filter.push(disk.to_string());
-    let mut map = get_disks(Some(filter), no_smart, include_partitions)?;
-    if let Some(info) = map.remove(disk) {
-        Ok(info)
-    } else {
-        bail!("failed to get disk usage info - internal error"); // should not happen
+pub struct DiskUsageQuery {
+    smart: bool,
+    partitions: bool,
+}
+
+impl DiskUsageQuery {
+    pub fn new() -> DiskUsageQuery {
+        DiskUsageQuery {
+            smart: true,
+            partitions: false,
+        }
+    }
+
+    pub fn smart<'a>(&'a mut self, smart: bool) -> &'a mut DiskUsageQuery {
+        self.smart = smart;
+        self
+    }
+
+    pub fn partitions<'a>(&'a mut self, partitions: bool) -> &'a mut DiskUsageQuery {
+        self.partitions = partitions;
+        self
+    }
+
+    pub fn query(&self) -> Result<HashMap<String, DiskUsageInfo>, Error> {
+        get_disks(None, !self.smart, self.partitions)
+    }
+
+    pub fn find(&self, disk: &str) -> Result<DiskUsageInfo, Error> {
+        let mut map = get_disks(Some(vec![disk.to_string()]), !self.smart, self.partitions)?;
+        if let Some(info) = map.remove(disk) {
+            Ok(info)
+        } else {
+            bail!("failed to get disk usage info - internal error"); // should not happen
+        }
+    }
+
+    pub fn find_all(&self, disks: Vec<String>) -> Result<HashMap<String, DiskUsageInfo>, Error> {
+        get_disks(Some(disks), !self.smart, self.partitions)
     }
 }
 
@@ -838,7 +863,7 @@ fn get_partitions_info(
 }
 
 /// Get disk usage information for multiple disks
-pub fn get_disks(
+fn get_disks(
     // filter - list of device names (without leading /dev)
     disks: Option<Vec<String>>,
     // do no include data from smartctl
-- 
2.30.2





^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pbs-devel] [PATCH proxmox-backup v3 2/3] disks: use builder pattern for querying disk usage
  2022-06-09 10:38   ` Wolfgang Bumiller
  2022-06-15  6:07     ` [pbs-devel] [PATCH proxmox-backup v3 " Hannes Laimer
@ 2022-06-15  6:17     ` Hannes Laimer
  1 sibling, 0 replies; 14+ messages in thread
From: Hannes Laimer @ 2022-06-15  6:17 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
---
the previous patch was missing the removal of the unneeded lifetimes,
sorry for the noise


v2, thanks @Wolfgang Bumiller <w.bumiller@proxmox.com>
 * use just one struct for querying
 * remove not needed lifetimes

the two 'find' functions arguments are for filtering the data and the builder
functions are for defining what data the (maybe filtered) resulting
elements should contain.

 src/api2/node/disks/directory.rs |  6 ++--
 src/api2/node/disks/mod.rs       | 12 +++++---
 src/api2/node/disks/zfs.rs       |  2 +-
 src/tools/disks/mod.rs           | 53 +++++++++++++++++++++++---------
 4 files changed, 51 insertions(+), 22 deletions(-)

diff --git a/src/api2/node/disks/directory.rs b/src/api2/node/disks/directory.rs
index 123a8d7b..cada95cd 100644
--- a/src/api2/node/disks/directory.rs
+++ b/src/api2/node/disks/directory.rs
@@ -13,8 +13,8 @@ use pbs_api_types::{
 };
 
 use crate::tools::disks::{
-    create_file_system, create_single_linux_partition, get_disk_usage_info, get_fs_uuid,
-    DiskManage, DiskUsageType, FileSystemType,
+    create_file_system, create_single_linux_partition, get_fs_uuid, DiskManage, DiskUsageQuery,
+    DiskUsageType, FileSystemType,
 };
 use crate::tools::systemd::{self, types::*};
 
@@ -147,7 +147,7 @@ pub fn create_datastore_disk(
 
     let auth_id = rpcenv.get_auth_id().unwrap();
 
-    let info = get_disk_usage_info(&disk, true, false)?;
+    let info = DiskUsageQuery::new().smart(false).find(&disk)?;
 
     if info.used != DiskUsageType::Unused {
         bail!("disk '{}' is already in use.", disk);
diff --git a/src/api2/node/disks/mod.rs b/src/api2/node/disks/mod.rs
index 478829fb..c16be46a 100644
--- a/src/api2/node/disks/mod.rs
+++ b/src/api2/node/disks/mod.rs
@@ -12,8 +12,8 @@ use pbs_api_types::{
 };
 
 use crate::tools::disks::{
-    get_disk_usage_info, get_disks, get_smart_data, inititialize_gpt_disk, DiskManage,
-    DiskUsageInfo, DiskUsageType, SmartData,
+    get_smart_data, inititialize_gpt_disk, DiskManage, DiskUsageInfo, DiskUsageQuery,
+    DiskUsageType, SmartData,
 };
 use proxmox_rest_server::WorkerTask;
 
@@ -64,7 +64,11 @@ pub fn list_disks(
 ) -> Result<Vec<DiskUsageInfo>, Error> {
     let mut list = Vec::new();
 
-    for (_, info) in get_disks(None, skipsmart, include_partitions)? {
+    for (_, info) in DiskUsageQuery::new()
+        .smart(!skipsmart)
+        .partitions(include_partitions)
+        .query()?
+    {
         if let Some(ref usage_type) = usage_type {
             if info.used == *usage_type {
                 list.push(info);
@@ -147,7 +151,7 @@ pub fn initialize_disk(
 
     let auth_id = rpcenv.get_auth_id().unwrap();
 
-    let info = get_disk_usage_info(&disk, true, false)?;
+    let info = DiskUsageQuery::new().find(&disk)?;
 
     if info.used != DiskUsageType::Unused {
         bail!("disk '{}' is already in use.", disk);
diff --git a/src/api2/node/disks/zfs.rs b/src/api2/node/disks/zfs.rs
index 5cb23e70..3efc8a05 100644
--- a/src/api2/node/disks/zfs.rs
+++ b/src/api2/node/disks/zfs.rs
@@ -174,7 +174,7 @@ pub fn create_zpool(
         .map(|v| v.as_str().unwrap().to_string())
         .collect();
 
-    let disk_map = crate::tools::disks::get_disks(None, true, false)?;
+    let disk_map = crate::tools::disks::DiskUsageQuery::new().query()?;
     for disk in devices.iter() {
         match disk_map.get(disk) {
             Some(info) => {
diff --git a/src/tools/disks/mod.rs b/src/tools/disks/mod.rs
index 61e0f17a..35ec9996 100644
--- a/src/tools/disks/mod.rs
+++ b/src/tools/disks/mod.rs
@@ -763,19 +763,44 @@ fn scan_partitions(
     Ok(used)
 }
 
-/// Get disk usage information for a single disk
-pub fn get_disk_usage_info(
-    disk: &str,
-    no_smart: bool,
-    include_partitions: bool,
-) -> Result<DiskUsageInfo, Error> {
-    let mut filter = Vec::new();
-    filter.push(disk.to_string());
-    let mut map = get_disks(Some(filter), no_smart, include_partitions)?;
-    if let Some(info) = map.remove(disk) {
-        Ok(info)
-    } else {
-        bail!("failed to get disk usage info - internal error"); // should not happen
+pub struct DiskUsageQuery {
+    smart: bool,
+    partitions: bool,
+}
+
+impl DiskUsageQuery {
+    pub fn new() -> Self {
+        Self {
+            smart: true,
+            partitions: false,
+        }
+    }
+
+    pub fn smart(&mut self, smart: bool) -> &mut Self {
+        self.smart = smart;
+        self
+    }
+
+    pub fn partitions(&mut self, partitions: bool) -> &mut Self {
+        self.partitions = partitions;
+        self
+    }
+
+    pub fn query(&self) -> Result<HashMap<String, DiskUsageInfo>, Error> {
+        get_disks(None, !self.smart, self.partitions)
+    }
+
+    pub fn find(&self, disk: &str) -> Result<DiskUsageInfo, Error> {
+        let mut map = get_disks(Some(vec![disk.to_string()]), !self.smart, self.partitions)?;
+        if let Some(info) = map.remove(disk) {
+            Ok(info)
+        } else {
+            bail!("failed to get disk usage info - internal error"); // should not happen
+        }
+    }
+
+    pub fn find_all(&self, disks: Vec<String>) -> Result<HashMap<String, DiskUsageInfo>, Error> {
+        get_disks(Some(disks), !self.smart, self.partitions)
     }
 }
 
@@ -838,7 +863,7 @@ fn get_partitions_info(
 }
 
 /// Get disk usage information for multiple disks
-pub fn get_disks(
+fn get_disks(
     // filter - list of device names (without leading /dev)
     disks: Option<Vec<String>>,
     // do no include data from smartctl
-- 
2.30.2





^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint
  2022-06-08  8:51 [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint Hannes Laimer
                   ` (3 preceding siblings ...)
  2022-06-08  8:51 ` [pbs-devel] [PATCH proxmox-widget-toolkit v2 1/1] ui: DiskList: handle partition data from PBS backend Hannes Laimer
@ 2022-06-15  7:58 ` Wolfgang Bumiller
  2022-06-15  8:08   ` Dominik Csapak
  2022-06-15  9:09 ` [pbs-devel] applied-series: [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint Wolfgang Bumiller
  5 siblings, 1 reply; 14+ messages in thread
From: Wolfgang Bumiller @ 2022-06-15  7:58 UTC (permalink / raw)
  To: Dominik Csapak; +Cc: pbs-devel, Hannes Laimer

On Wed, Jun 08, 2022 at 08:51:50AM +0000, Hannes Laimer wrote:
> In order to work with the existing frontend component for displaying the
> disk list, either
> * the partition data has to be returned with the same struct a disk is
>   returned with, which leads to the same struct being used for different
>   things -> quite a few fields are always empty for partitions and a new
>   'type' field would be needed. Also the code structure in PBS has to be
>   changed quite a bit.
> * the existing frontend has to be adjusted to handle data from PVE and
>   PBS properly.
> 
> I went with the second option because the adjustments needed in the UI
> component were minimal and, IMHO, adjusting the API to fit the UI is the
> wrong direction.
> 
> For the mount status to be shown in the UI, the patch[1] sent to pve-devel
> for the 'mounted' column is needed.
> 
> NOTE: The partition data will be needed in later patches for removable
> datastores.
> 
> v1->v2:
>  * use builder pattern for queries as suggested by Wolfgang
>  * move mounted out of Enum and add filesystem string
>  * add missing zfsreserve usage type
>  * add 'mounted' column to disklist (separate patch[1])
> 
> 
> [1] https://lists.proxmox.com/pipermail/pve-devel/2022-June/053211.html

@Dominik: Not sure how to deal with the patch linked above, since AFAICT
the added column is currently PBS-specific.

Should the visibility of the mounted column be configurable?
Then again, we probably want to change PVE::Diskmanage & API to return
the mounted boolean as well? Currently it just slams a " (mounted)" text
onto the file system type, which is rather awkward (and does not happen
at all for eg. an ESP partition, which can definitely be mounted...).

Though I suppose the series would still work without it, so I guess we
can apply this regardless?




^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint
  2022-06-15  7:58 ` [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint Wolfgang Bumiller
@ 2022-06-15  8:08   ` Dominik Csapak
  2022-06-15  8:10     ` Hannes Laimer
  0 siblings, 1 reply; 14+ messages in thread
From: Dominik Csapak @ 2022-06-15  8:08 UTC (permalink / raw)
  To: Wolfgang Bumiller; +Cc: pbs-devel, Hannes Laimer

On 6/15/22 09:58, Wolfgang Bumiller wrote:
> On Wed, Jun 08, 2022 at 08:51:50AM +0000, Hannes Laimer wrote:
>> In order to work with the existing frontend component for displaying the
>> disk list, either
>> * the partition data has to be returned with the same struct a disk is
>>    returned with, which leads to the same struct being used for different
>>    things -> quite a few fields are always empty for partitions and a new
>>    'type' field would be needed. Also the code structure in PBS has to be
>>    changed quite a bit.
>> * the existing frontend has to be adjusted to handle data from PVE and
>>    PBS properly.
>>
>> I went with the second option because the adjustments needed in the UI
>> component were minimal and, IMHO, adjusting the API to fit the UI is the
>> wrong direction.
>>
>> For the mount status to be shown in the UI, the patch[1] sent to pve-devel
>> for the 'mounted' column is needed.
>>
>> NOTE: The partition data will be needed in later patches for removable
>> datastores.
>>
>> v1->v2:
>>   * use builder pattern for queries as suggested by Wolfgang
>>   * move mounted out of Enum and add filesystem string
>>   * add missing zfsreserve usage type
>>   * add 'mounted' column to disklist (separate patch[1])
>>
>>
>> [1] https://lists.proxmox.com/pipermail/pve-devel/2022-June/053211.html
> 
> @Dominik: Not sure how to deal with the above patch linked, since AFAICT
> right now the added column is (currently) pbs specific.
> 
> Should the visibility of the mounted column be configurable?
> Then again, we probably want to change PVE::Diskmanage & API to return
> the mounted boolean as well? Currently it just slams a " (mounted)" text
> onto the file system type, which is rather awkward (and does not happen
> at all for eg. an ESP partition, which can definitely be mounted...).
> 
> Though I suppose the series would still work without it, so I guess we
> can apply this regardless?


mhmm... i agree that we probably want to add that column for pve too...
as an interim solution we could have a 'computed' field/renderer
that takes either the 'mounted' column or tries to parse the '(mounted)' part
of the 'used' one.

once we have the mounted field in pve too, we can drop that and switch
to that




^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint
  2022-06-15  8:08   ` Dominik Csapak
@ 2022-06-15  8:10     ` Hannes Laimer
  2022-06-15  8:17       ` Wolfgang Bumiller
  0 siblings, 1 reply; 14+ messages in thread
From: Hannes Laimer @ 2022-06-15  8:10 UTC (permalink / raw)
  To: Dominik Csapak, Wolfgang Bumiller; +Cc: pbs-devel



Am 15.06.22 um 10:08 schrieb Dominik Csapak:
> On 6/15/22 09:58, Wolfgang Bumiller wrote:
>> On Wed, Jun 08, 2022 at 08:51:50AM +0000, Hannes Laimer wrote:
>>> In order to work with the existing frontend component for displaying the
>>> disk list, either
>>> * the partition data has to be returned with the same struct a disk is
>>>    returned with, which leads to the same struct being used for different
>>>    things -> quite a few fields are always empty for partitions and a new
>>>    'type' field would be needed. Also the code structure in PBS has to be
>>>    changed quite a bit.
>>> * the existing frontend has to be adjusted to handle data from PVE and
>>>    PBS properly.
>>>
>>> I went with the second option because the adjustments needed in the UI
>>> component were minimal and, IMHO, adjusting the API to fit the UI is the
>>> wrong direction.
>>>
>>> For the mount status to be shown in the UI, the patch[1] sent to pve-devel
>>> for the 'mounted' column is needed.
>>>
>>> NOTE: The partition data will be needed in later patches for removable
>>> datastores.
>>>
>>> v1->v2:
>>>   * use builder pattern for queries as suggested by Wolfgang
>>>   * move mounted out of Enum and add filesystem string
>>>   * add missing zfsreserve usage type
>>>   * add 'mounted' column to disklist (separate patch[1])
>>>
>>>
>>> [1] https://lists.proxmox.com/pipermail/pve-devel/2022-June/053211.html
>>
>> @Dominik: Not sure how to deal with the above patch linked, since AFAICT
>> right now the added column is (currently) pbs specific.
>>
>> Should the visibility of the mounted column be configurable?
>> Then again, we probably want to change PVE::Diskmanage & API to return
>> the mounted boolean as well? Currently it just slams a " (mounted)" text
>> onto the file system type, which is rather awkward (and does not happen
>> at all for eg. an ESP partition, which can definitely be mounted...).
>>
>> Though I suppose the series would still work without it, so I guess we
>> can apply this regardless?
> 
> 
> mhmm... i agree that we probably want to add that column for pve too...
> as an interim solution we could have a 'computed' field/renderer
> that takes either the 'mounted' column or tries to parse the '(mounted)' 
> part
> of the 'used' one.
> 
> once we have the mounted field in pve too, we can drop that and switch
> to that
> 
btw, there was a second patch that added 'mounted' to the pve API [1],
I just did not link it

[1] https://lists.proxmox.com/pipermail/pve-devel/2022-June/053213.html




^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint
  2022-06-15  8:10     ` Hannes Laimer
@ 2022-06-15  8:17       ` Wolfgang Bumiller
  0 siblings, 0 replies; 14+ messages in thread
From: Wolfgang Bumiller @ 2022-06-15  8:17 UTC (permalink / raw)
  To: Hannes Laimer; +Cc: Dominik Csapak, pbs-devel

On Wed, Jun 15, 2022 at 10:10:37AM +0200, Hannes Laimer wrote:
> 
> 
> Am 15.06.22 um 10:08 schrieb Dominik Csapak:
> > On 6/15/22 09:58, Wolfgang Bumiller wrote:
> > > On Wed, Jun 08, 2022 at 08:51:50AM +0000, Hannes Laimer wrote:
> > > > In order to work with the existing frontend component for displaying
> > > > the disk list, either
> > > > * the partition data has to be returned with the same struct a disk is
> > > >    returned with, which leads to the same struct being used for
> > > >    different things -> quite a few fields are always empty for
> > > >    partitions and a new 'type' field would be needed. Also the code
> > > >    structure in PBS has to be changed quite a bit.
> > > > * the existing frontend has to be adjusted to handle data from PVE and
> > > >    PBS properly.
> > > > 
> > > > I went with the second option because the adjustments needed in the UI
> > > > component were minimal and, IMHO, adjusting the API to fit the UI is
> > > > the wrong direction.
> > > > 
> > > > For the mount status to be shown in the UI, the patch[1] sent to
> > > > pve-devel for the 'mounted' column is needed.
> > > > 
> > > > NOTE: The partition data will be needed in later patches for removable
> > > > datastores.
> > > > 
> > > > v1->v2:
> > > >   * use builder pattern for queries as suggested by Wolfgang
> > > >   * move mounted out of Enum and add filesystem string
> > > >   * add missing zfsreserve usage type
> > > >   * add 'mounted' column to disklist (separate patch[1])
> > > > 
> > > > 
> > > > [1] https://lists.proxmox.com/pipermail/pve-devel/2022-June/053211.html
> > > 
> > > @Dominik: Not sure how to deal with the above patch linked, since AFAICT
> > > right now the added column is (currently) pbs specific.
> > > 
> > > Should the visibility of the mounted column be configurable?
> > > Then again, we probably want to change PVE::Diskmanage & API to return
> > > the mounted boolean as well? Currently it just slams a " (mounted)" text
> > > onto the file system type, which is rather awkward (and does not happen
> > > at all for eg. an ESP partition, which can definitely be mounted...).
> > > 
> > > Though I suppose the series would still work without it, so I guess we
> > > can apply this regardless?
> > 
> > 
> > mhmm... i agree that we probably want to add that column for pve too...
> > as an interim solution we could have a 'computed' field/renderer
> > that takes either the 'mounted' column or tries to parse the '(mounted)'
> > part
> > of the 'used' one.
> > 
> > once we have the mounted field in pve too, we can drop that and switch
> > to that
> > 
> btw, there was a second patch that added 'mounted' to the pve API [1], I
> just did not link it
> 
> [1] https://lists.proxmox.com/pipermail/pve-devel/2022-June/053213.html

Ah, I should have just used my mail client instead of clicking the link,
would have been more obvious ;-)




^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pbs-devel] applied: [PATCH proxmox-widget-toolkit v2 1/1] ui: DiskList: handle partition data from PBS backend
  2022-06-08  8:51 ` [pbs-devel] [PATCH proxmox-widget-toolkit v2 1/1] ui: DiskList: handle partition data from PBS backend Hannes Laimer
@ 2022-06-15  8:58   ` Wolfgang Bumiller
  0 siblings, 0 replies; 14+ messages in thread
From: Wolfgang Bumiller @ 2022-06-15  8:58 UTC (permalink / raw)
  To: Hannes Laimer; +Cc: pbs-devel

applied, thanks

On Wed, Jun 08, 2022 at 08:51:54AM +0000, Hannes Laimer wrote:
> Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
> ---
>  src/panel/DiskList.js | 22 ++++++++++++++++++++--
>  1 file changed, 20 insertions(+), 2 deletions(-)
> 
> diff --git a/src/panel/DiskList.js b/src/panel/DiskList.js
> index 76d92cd..c5a0050 100644
> --- a/src/panel/DiskList.js
> +++ b/src/panel/DiskList.js
> @@ -167,10 +167,19 @@ Ext.define('Proxmox.DiskList', {
>  
>  	    for (const item of records) {
>  		let data = item.data;
> -		data.leaf = true;
>  		data.expanded = true;
> -		data.children = [];
> +		data.children = data.partitions ?? [];
> +		for (let p of data.children) {
> +		    p['disk-type'] = 'partition';
> +		    p.iconCls = 'fa fa-fw fa-hdd-o x-fa-tree';
> +		    p.used = p.used === 'filesystem' ? p.filesystem : p.used;
> +		    p.parent = data.devpath;
> +		    p.children = [];
> +		    p.leaf = true;
> +		}
>  		data.iconCls = 'fa fa-fw fa-hdd-o x-fa-tree';
> +		data.leaf = data.children.length === 0;
> +
>  		if (!data.parent) {
>  		    disks[data.devpath] = data;
>  		}
> @@ -227,6 +236,15 @@ Ext.define('Proxmox.DiskList', {
>  		extendedInfo = `, Ceph (${types.join(', ')})`;
>  	    }
>  	}
> +	const formatMap = {
> +	    'bios': 'BIOS boot',
> +	    'zfsreserved': 'ZFS reserved',
> +	    'efi': 'EFI',
> +	    'lvm': 'LVM',
> +	    'zfs': 'ZFS',
> +	};
> +
> +	v = formatMap[v] || v;
>  	return v ? `${v}${extendedInfo}` : Proxmox.Utils.noText;
>      },
>  
> -- 
> 2.30.2




^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pbs-devel] applied-series: [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint
  2022-06-08  8:51 [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint Hannes Laimer
                   ` (4 preceding siblings ...)
  2022-06-15  7:58 ` [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint Wolfgang Bumiller
@ 2022-06-15  9:09 ` Wolfgang Bumiller
  5 siblings, 0 replies; 14+ messages in thread
From: Wolfgang Bumiller @ 2022-06-15  9:09 UTC (permalink / raw)
  To: Hannes Laimer; +Cc: pbs-devel

applied series, thanks

On Wed, Jun 08, 2022 at 08:51:50AM +0000, Hannes Laimer wrote:
> In order to work with the existing frontend component for displaying the
> disk list, either
> * the partition data has to be returned with the same struct a disk is
>   returned with, which leads to the same struct being used for different
>   things -> quite a few fields are always empty for partitions and a new
>   'type' field would be needed. Also the code structure in PBS has to be
>   changed quite a bit.
> * the existing frontend has to be adjusted to handle data from PVE and
>   PBS properly.
> 
> I went with the second option because the adjustments needed in the UI
> component were minimal and, IMHO, adjusting the API to fit the UI is the
> wrong direction.
> 
> For the mount status to be shown in the UI, the patch[1] sent to pve-devel
> for the 'mounted' column is needed.
> 
> NOTE: The partition data will be needed in later patches for removable
> datastores.
> 
> v1->v2:
>  * use builder pattern for queries as suggested by Wolfgang
>  * move mounted out of Enum and add filesystem string
>  * add missing zfsreserve usage type
>  * add 'mounted' column to disklist (separate patch[1])
> 
> 
> [1] https://lists.proxmox.com/pipermail/pve-devel/2022-June/053211.html
> 
> * proxmox-backup
> Hannes Laimer (3):
>   api2: disks endpoint return partitions
>   disks: use builder pattern for querying disk usage
>   ui: disks: show partitions by default
> 
>  src/api2/node/disks/directory.rs |   6 +-
>  src/api2/node/disks/mod.rs       |  19 ++-
>  src/api2/node/disks/zfs.rs       |   2 +-
>  src/tools/disks/mod.rs           | 206 +++++++++++++++++++++++++++++--
>  www/panel/StorageAndDisks.js     |   1 +
>  5 files changed, 215 insertions(+), 19 deletions(-)
> 
> * proxmox-widget-toolkit
> Hannes Laimer (1):
>   ui: DiskList: handle partition data from PBS backend
> 
>  src/panel/DiskList.js | 22 ++++++++++++++++++++--
>  1 file changed, 20 insertions(+), 2 deletions(-)
> 
> -- 
> 2.30.2




^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2022-06-15  9:10 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-08  8:51 [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint Hannes Laimer
2022-06-08  8:51 ` [pbs-devel] [PATCH proxmox-backup v2 1/3] api2: disks endpoint return partitions Hannes Laimer
2022-06-08  8:51 ` [pbs-devel] [PATCH proxmox-backup v2 2/3] disks: use builder pattern for querying disk usage Hannes Laimer
2022-06-09 10:38   ` Wolfgang Bumiller
2022-06-15  6:07     ` [pbs-devel] [PATCH proxmox-backup v3 " Hannes Laimer
2022-06-15  6:17     ` Hannes Laimer
2022-06-08  8:51 ` [pbs-devel] [PATCH proxmox-backup v2 3/3] ui: disks: show partitions by default Hannes Laimer
2022-06-08  8:51 ` [pbs-devel] [PATCH proxmox-widget-toolkit v2 1/1] ui: DiskList: handle partition data from PBS backend Hannes Laimer
2022-06-15  8:58   ` [pbs-devel] applied: " Wolfgang Bumiller
2022-06-15  7:58 ` [pbs-devel] [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint Wolfgang Bumiller
2022-06-15  8:08   ` Dominik Csapak
2022-06-15  8:10     ` Hannes Laimer
2022-06-15  8:17       ` Wolfgang Bumiller
2022-06-15  9:09 ` [pbs-devel] applied-series: [PATCH proxmox-backup/proxmox-widget-toolkit v2 0/4] add partitions to disks/list endpoint Wolfgang Bumiller
