From: "Max R. Carrara" <m.carrara@proxmox.com>
To: "Lorne Guse" <boomshankerx@hotmail.com>,
"Proxmox VE development discussion" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] busy dataset when trying the migrate iscsi disk
Date: Tue, 23 Sep 2025 19:08:35 +0200
Message-ID: <DD0CFZ654RZY.32DY4OSG6GKD3@proxmox.com>
In-Reply-To: <DM6PR17MB34667826A8F8A2A47FCA0475D01DA@DM6PR17MB3466.namprd17.prod.outlook.com>
On Tue Sep 23, 2025 at 5:06 AM CEST, Lorne Guse wrote:
> I had a look at the code in ZFSPoolPlugin.pm and found why we are having an issue deleting the zvol
>
> https://github.com/boomshankerx/proxmox-truenas/issues/52#issuecomment-3322144855
>
> This code doesn't seem to match the error message we are getting when deleting the zvol:
>
> cannot destroy 'slow/vm-188-disk-0': dataset is busy
>
> sub zfs_delete_zvol {
>     ...
>     if ($err =~ m/^zfs error:(.*): dataset is busy.*/) {
>         ...
>     }
> }
>
> If this code were simplified to match just 'dataset is busy', it would work.
Wow, thanks for spotting that! I definitely did not notice that. I'll
see if I can fix that soon!
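
To make the mismatch concrete, here's a quick self-contained check
(just an illustration of the two patterns, not the actual patch):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # The error string as it actually arrives via the TrueNAS/iSCSI path:
    my $err = "cannot destroy 'slow/vm-188-disk-0': dataset is busy";

    # Current pattern: anchored on a "zfs error:" prefix that this
    # message doesn't have, so it never matches and the busy-retry
    # branch in zfs_delete_zvol is skipped.
    print "old pattern matches\n" if $err =~ m/^zfs error:(.*): dataset is busy.*/;

    # Loosened pattern, as you suggest: match the busy message itself.
    print "new pattern matches\n" if $err =~ m/dataset is busy/;

Running this prints only "new pattern matches", which is exactly why
the retry logic never kicks in for your users.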
>
> This issue won't exist in my new custom plugin, since I override free_image. It is causing issues for users of the old patched version, though. I don't really want to go and patch ZFSPoolPlugin.pm since the new plugin is right around the corner.
>
> I told the users having the issue that I would make an attempt to resolve it. Otherwise, they have to wait for, and upgrade to, TrueNAS 25.10 and the new plugin.
Yeah, I agree with you here; this seems like a simple enough fix on our
side. I'll still test it with one of the other iSCSI providers as well,
just to play it safe (though I don't think anything will break).
Thanks again for bringing this to our attention; much appreciated!
>
> ________________________________
> From: Max R. Carrara <m.carrara@proxmox.com>
> Sent: Wednesday, September 17, 2025 6:04 AM
> To: Lorne Guse <boomshankerx@hotmail.com>; Proxmox VE development discussion <pve-devel@lists.proxmox.com>
> Subject: Re: busy dataset when trying the migrate iscsi disk
>
> On Mon Sep 15, 2025 at 5:34 AM CEST, Lorne Guse wrote:
> > I'm working on TrueNAS over iSCSI for Proxmox and have run into an issue migrating disks. When trying to delete the old storage, which has just successfully been transferred, the iscsidirect connection must still be open, because we are getting:
> >
> > cannot destroy 'slow/vm-188-disk-0': dataset is busy
> >
> > Is there a way to ensure the iscsidirect connection is closed before trying to delete the underlying zfs dataset?
>
> Hi Lorne! Glad to see you on the mailing list!
>
> I've sifted through our code to see how we handle this, and it seems that
> we're simply retrying a couple of times until the dataset is actually
> deleted [0]. I think that might be your best bet, though if you find an
> alternative, I'd be curious to know.
>
> [0]: https://git.proxmox.com/?p=pve-storage.git;a=blob;f=src/PVE/Storage/ZFSPoolPlugin.pm;h=d8d8d0f9fc1cc6f1a02d8f5800c388b355609bf5;hb=refs/heads/master#l362
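
P.S., for anyone following along from the issue: the retry approach
referenced in [0] above boils down to roughly the following. This is a
paraphrased sketch that shells out to zfs directly; the actual plugin
goes through its own request helper, and the attempt count here is
illustrative, not verbatim:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Paraphrased sketch of the retry loop in [0]; the helper name and
    # attempt count are illustrative, not verbatim plugin code.
    sub destroy_with_retries {
        my ($dataset, $tries) = @_;
        $tries //= 6;

        for my $i (1 .. $tries) {
            my $out = qx(zfs destroy -r \Q$dataset\E 2>&1);
            return if $? == 0;                           # destroyed
            return if $out =~ m/dataset does not exist/; # already gone
            die $out unless $out =~ m/dataset is busy/;  # unrelated error
            sleep 1;                                     # busy: wait, retry
        }
        die "'$dataset' still busy after $tries attempts\n";
    }

    destroy_with_retries('slow/vm-188-disk-0');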
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel