* Re: [pve-devel] busy dataset when trying to migrate iscsi disk
[not found] <DM6PR17MB34662B3F92B53FF3394433C4D015A@DM6PR17MB3466.namprd17.prod.outlook.com>
@ 2025-09-17 12:04 ` Max R. Carrara
2025-09-17 17:47 ` Lorne Guse via pve-devel
[not found] ` <DM6PR17MB3466AF450A03DC5A752C4B3BD017A@DM6PR17MB3466.namprd17.prod.outlook.com>
0 siblings, 2 replies; 4+ messages in thread
From: Max R. Carrara @ 2025-09-17 12:04 UTC (permalink / raw)
To: Lorne Guse, Proxmox VE development discussion
On Mon Sep 15, 2025 at 5:34 AM CEST, Lorne Guse wrote:
> I'm working on TrueNAS over iSCSI for Proxmox and have run into an issue migrating disks. When trying to delete the old disk, which has just been successfully transferred, the iscsidirect connection must still be open, because we are getting:
>
> cannot destroy 'slow/vm-188-disk-0': dataset is busy
>
> Is there a way to ensure the iscsidirect connection is closed before trying to delete the underlying ZFS dataset?
Hi Lorne! Glad to see you on the mailing list!
I've sifted through our code to see how we handle this, and it seems that
we're simply retrying a couple of times until the dataset is actually
deleted [0]. I think that might be your best bet, though if you find an
alternative, I'd be curious to know.
[0]: https://git.proxmox.com/?p=pve-storage.git;a=blob;f=src/PVE/Storage/ZFSPoolPlugin.pm;h=d8d8d0f9fc1cc6f1a02d8f5800c388b355609bf5;hb=refs/heads/master#l362
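In case it's useful, the idea boils down to the following minimal sketch
(in Perl, to match the plugin code). To be clear, this is not the actual
code behind [0]; the subroutine name is made up for illustration, and if
I remember correctly, the real code additionally only retries when the
error output actually says the dataset is busy, but the shape is the
same:

    use strict;
    use warnings;

    # Illustrative only: retry `zfs destroy` a few times while the
    # dataset is still busy, since the initiator may need a moment to
    # drop its handle on the zvol.
    sub zfs_destroy_with_retries {
        my ($dataset, $tries, $delay) = @_;
        $tries //= 6;
        $delay //= 1;

        for my $attempt (1 .. $tries) {
            # -r also removes snapshots below the dataset
            return if system('zfs', 'destroy', '-r', $dataset) == 0;
            die "cannot destroy '$dataset': still busy after $tries attempts\n"
                if $attempt == $tries;
            sleep $delay;
        }
    }

    zfs_destroy_with_retries('slow/vm-188-disk-0');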
* Re: [pve-devel] busy dataset when trying to migrate iscsi disk
2025-09-17 12:04 ` [pve-devel] busy dataset when trying to migrate iscsi disk Max R. Carrara
@ 2025-09-17 17:47 ` Lorne Guse via pve-devel
[not found] ` <DM6PR17MB3466AF450A03DC5A752C4B3BD017A@DM6PR17MB3466.namprd17.prod.outlook.com>
1 sibling, 0 replies; 4+ messages in thread
From: Lorne Guse via pve-devel @ 2025-09-17 17:47 UTC (permalink / raw)
To: Max R. Carrara, Proxmox VE development discussion; +Cc: Lorne Guse
From: Lorne Guse <boomshankerx@hotmail.com>
To: "Max R. Carrara" <m.carrara@proxmox.com>, Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: busy dataset when trying to migrate iscsi disk
Date: Wed, 17 Sep 2025 17:47:25 +0000
Message-ID: <DM6PR17MB3466AF450A03DC5A752C4B3BD017A@DM6PR17MB3466.namprd17.prod.outlook.com>
Ironically, the patch version of the plugin actually uses the native Proxmox code in ZFSPlugin.pm / ZFSPoolPlugin.pm for managing the zvol dataset. The patch version only manages the iSCSI configuration.
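Roughly, that split looks like the following minimal sketch (the package
and method names here are placeholders for illustration, not the
plugin's actual structure):

    package PVE::Storage::Custom::TrueNASPlugin;

    use strict;
    use warnings;

    # Inherit create/destroy/resize of the zvol dataset from the
    # native Proxmox code ...
    use base qw(PVE::Storage::ZFSPoolPlugin);

    # ... and only manage the iSCSI side (targets/LUNs on the TrueNAS
    # box) here. These method names are made up for the sketch.
    sub create_lun { ... }
    sub delete_lun { ... }

    1;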
I think the best course is to consider the patch version legacy and focus on the new fully API-driven Custom Storage Plugin.
TrueNAS indicated there are plans to implement some form of UI hooks for Custom Storage Plugins. This is very encouraging.
* Re: [pve-devel] busy dataset when trying to migrate iscsi disk
[not found] ` <DM6PR17MB3466AF450A03DC5A752C4B3BD017A@DM6PR17MB3466.namprd17.prod.outlook.com>
@ 2025-09-18 17:52 ` Max R. Carrara
0 siblings, 0 replies; 4+ messages in thread
From: Max R. Carrara @ 2025-09-18 17:52 UTC (permalink / raw)
To: Lorne Guse, Proxmox VE development discussion
On Wed Sep 17, 2025 at 7:47 PM CEST, Lorne Guse wrote:
> Ironically, the patch version of the plugin actually uses the native Proxmox code in ZFSPlugin.pm / ZFSPoolPlugin.pm for managing the zvol dataset. The patch version only manages the iSCSI configuration.
Ah, this is for the old plugin! I see.
I had another look through some more manpages just to be sure, but
unfortunately there isn't anything that lets you close just a single
connection.
Side note: There's a way to log out from the session
(`iscsiadm -m session --logout`, to be precise), but that only works for
iSCSI boxes that provide one target for each LUN. As soon as a target
has multiple LUNs, you end up breaking things, which is the case here.
(Huge thanks to Mira for clarifying all of this off-list btw! This
really nerdsniped me.)
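For completeness, the logout would look something like the sketch below
(hedged: the target name is made up, and as said, this is only safe
when the target exposes exactly one LUN, which is not the case here):

    use strict;
    use warnings;

    # Sketch only: log out of one specific iSCSI target via iscsiadm.
    # CAVEAT (see above): with multiple LUNs behind the same target,
    # this tears down the other disks as well.
    sub iscsi_logout_target {
        my ($target, $portal) = @_;
        my @cmd = ('iscsiadm', '-m', 'node', '-T', $target);
        push @cmd, '-p', $portal if defined $portal;
        push @cmd, '--logout';
        system(@cmd) == 0
            or die "logout from '$target' failed: exit code " . ($? >> 8) . "\n";
    }

    # hypothetical target name, for illustration only
    iscsi_logout_target('iqn.2005-10.org.freenas.ctl:vm-storage');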
Any other solution is probably way too convoluted, unfortunately.
>
> I think the best course is to consider the patch version legacy and focus on the new fully API-driven Custom Storage Plugin.
I think so too; perhaps it would be worth checking whether there's a
possible migration path for the users of the patch version? Not sure if
that's possible / you have enough capacity for that, though.
>
> TrueNAS indicated there are plans to implement some form of UI hooks for Custom Storage Plugins. This is very encouraging.
Yes, it's in the works—there's an RFC for that currently:
https://lore.proxmox.com/pve-devel/20250908180058.530119-1-m.carrara@proxmox.com/
However, please don't consider that implementation as finished yet—it
isn't at all (it's an RFC after all). I'd like to rework it quite soon,
as the current implementation is way too complex (among other things).
* [pve-devel] busy dataset when trying to migrate iscsi disk
@ 2025-09-15 3:34 Lorne Guse via pve-devel
0 siblings, 0 replies; 4+ messages in thread
From: Lorne Guse via pve-devel @ 2025-09-15 3:34 UTC (permalink / raw)
To: Proxmox VE development discussion; +Cc: Lorne Guse
From: Lorne Guse <boomshankerx@hotmail.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: busy dataset when trying to migrate iscsi disk
Date: Mon, 15 Sep 2025 03:34:00 +0000
Message-ID: <DM6PR17MB34662B3F92B53FF3394433C4D015A@DM6PR17MB3466.namprd17.prod.outlook.com>
I'm working on TrueNAS over iSCSI for Proxmox and have run into an issue migrating disks. When trying to delete the old disk, which has just been successfully transferred, the iscsidirect connection must still be open, because we are getting:
cannot destroy 'slow/vm-188-disk-0': dataset is busy
Is there a way to ensure the iscsidirect connection is closed before trying to delete the underlying ZFS dataset?
TrueNAS [INFO] : zvol/slow/vm-188-disk-0 with key 'path' found : /dev/zvol/slow/vm-188-disk-0
TrueNAS [INFO] : zvol/slow/vm-188-disk-0 with key 'lunid' found : 3
create full clone of drive scsi0 (iSCSI-vm-storage-HDD:vm-188-disk-0)
TrueNAS [INFO] : /dev/zvol/fast/vm-188-disk-0
TrueNAS [INFO] : Created LUN: /dev/zvol/fast/vm-188-disk-0 : T2:E236:L4
TrueNAS [INFO] : zvol/fast/vm-188-disk-0 with key 'path' found : /dev/zvol/fast/vm-188-disk-0
TrueNAS [INFO] : zvol/fast/vm-188-disk-0 with key 'lunid' found : 4
drive mirror is starting for drive-scsi0
mirror-scsi0: transferred 924.0 MiB of 32.0 GiB (2.82%) in 1s
mirror-scsi0: transferred 1.8 GiB of 32.0 GiB (5.70%) in 2s
mirror-scsi0: transferred 2.7 GiB of 32.0 GiB (8.56%) in 3s
mirror-scsi0: transferred 3.7 GiB of 32.0 GiB (11.68%) in 4s
mirror-scsi0: transferred 4.7 GiB of 32.0 GiB (14.54%) in 5s
mirror-scsi0: transferred 5.5 GiB of 32.0 GiB (17.22%) in 6s
mirror-scsi0: transferred 6.3 GiB of 32.0 GiB (19.71%) in 7s
mirror-scsi0: transferred 7.1 GiB of 32.0 GiB (22.29%) in 8s
mirror-scsi0: transferred 8.0 GiB of 32.0 GiB (24.97%) in 9s
mirror-scsi0: transferred 8.8 GiB of 32.0 GiB (27.55%) in 10s
mirror-scsi0: transferred 9.6 GiB of 32.0 GiB (30.06%) in 11s
mirror-scsi0: transferred 10.6 GiB of 32.0 GiB (33.00%) in 12s
mirror-scsi0: transferred 11.5 GiB of 32.0 GiB (36.04%) in 13s
mirror-scsi0: transferred 12.5 GiB of 32.0 GiB (38.97%) in 14s
mirror-scsi0: transferred 13.4 GiB of 32.0 GiB (41.72%) in 15s
mirror-scsi0: transferred 14.2 GiB of 32.0 GiB (44.43%) in 16s
mirror-scsi0: transferred 15.1 GiB of 32.0 GiB (47.25%) in 17s
mirror-scsi0: transferred 16.1 GiB of 32.0 GiB (50.19%) in 18s
mirror-scsi0: transferred 17.0 GiB of 32.0 GiB (53.15%) in 19s
mirror-scsi0: transferred 18.0 GiB of 32.0 GiB (56.16%) in 20s
mirror-scsi0: transferred 18.8 GiB of 32.0 GiB (58.88%) in 21s
mirror-scsi0: transferred 19.8 GiB of 32.0 GiB (61.81%) in 22s
mirror-scsi0: transferred 20.6 GiB of 32.0 GiB (64.42%) in 23s
mirror-scsi0: transferred 21.5 GiB of 32.0 GiB (67.07%) in 24s
mirror-scsi0: transferred 22.4 GiB of 32.0 GiB (70.02%) in 25s
mirror-scsi0: transferred 23.3 GiB of 32.0 GiB (72.92%) in 26s
mirror-scsi0: transferred 24.3 GiB of 32.0 GiB (75.80%) in 27s
mirror-scsi0: transferred 25.1 GiB of 32.0 GiB (78.58%) in 28s
mirror-scsi0: transferred 26.1 GiB of 32.0 GiB (81.62%) in 29s
mirror-scsi0: transferred 27.1 GiB of 32.0 GiB (84.76%) in 30s
mirror-scsi0: transferred 28.1 GiB of 32.0 GiB (87.73%) in 31s
mirror-scsi0: transferred 29.1 GiB of 32.0 GiB (90.79%) in 32s
mirror-scsi0: transferred 30.0 GiB of 32.0 GiB (93.73%) in 33s
mirror-scsi0: transferred 30.9 GiB of 32.0 GiB (96.55%) in 34s
mirror-scsi0: transferred 31.8 GiB of 32.0 GiB (99.44%) in 35s
mirror-scsi0: transferred 32.0 GiB of 32.0 GiB (100.00%) in 36s, ready
all 'mirror' jobs are ready
mirror-scsi0: Completing block job...
mirror-scsi0: Completed successfully.
mirror-scsi0: mirror-job finished
TrueNAS [INFO] : Ping
TrueNAS [INFO] : Pong
TrueNAS [INFO] : zvol/slow/vm-188-disk-0 with key 'path' found : /dev/zvol/slow/vm-188-disk-0
TrueNAS [INFO] : Deleted LUN: zvol/slow/vm-188-disk-0
cannot destroy 'slow/vm-188-disk-0': dataset is busy
TrueNAS [INFO] : /dev/zvol/slow/vm-188-disk-0
TrueNAS [INFO] : Created LUN: /dev/zvol/slow/vm-188-disk-0 : T5:E237:L3
command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/vm-storage_id_rsa root@vm-storage zfs destroy -r slow/vm-188-disk-0' failed: exit code 1
TASK OK