From: "DERUMIER, Alexandre via pve-devel" <pve-devel@lists.proxmox.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>,
"cmos@maklee.com" <cmos@maklee.com>
Cc: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
Subject: Re: [pve-devel] issues with Virtio-SCSI devicde on Proxmox...
Date: Wed, 14 Aug 2024 06:44:36 +0000 [thread overview]
Message-ID: <mailman.238.1723617914.302.pve-devel@lists.proxmox.com> (raw)
In-Reply-To: <59c288d9-fff7-4a06-ad15-c462f571cc42@proxmox.com>
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "cmos@maklee.com" <cmos@maklee.com>
Subject: Re: [pve-devel] issues with Virtio-SCSI devicde on Proxmox...
Date: Wed, 14 Aug 2024 06:44:36 +0000
Message-ID: <1178cb3475d719ae31f0c375cd3930fc24d98401.camel@groupe-cyllene.com>
Hi,
I didn't see Fiona's response, but indeed:
https://lists.gnu.org/archive/html/qemu-devel/2021-09/msg01567.html
"virtio devices can be exposed in up to three ways:
- Legacy - follows the virtio 0.9 specification; always uses PCI
ID range 0x1000-0x103F
- Transitional - follows the virtio 0.9 specification by default, but
can auto-negotiate the 1.0 spec with the guest. Always
uses PCI ID range 0x1000-0x103F
- Modern - follows the virtio 1.0 specification; always uses PCI
ID range 0x1040-0x107F
With QEMU, historically devices placed on a PCI bus will always default
to being in transitional mode, while devices placed on a PCI-E bus will
always default to being in modern mode.
"
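The three modes above can be made concrete on the QEMU command line. A minimal sketch, with illustrative device/bus IDs (recent QEMU spells the property disable-legacy; the disable_legacy spelling also appears in older discussions):

```shell
# Transitional (the default when the device sits on a conventional PCI
# bus): advertises virtio 0.9, PCI device IDs in the 0x1000-0x103F range.
qemu-system-x86_64 -device virtio-scsi-pci,id=scsihw0

# Modern-only: forcing disable-legacy=on exposes only the virtio 1.0
# interface, moving the device into the 0x1040-0x107F PCI ID range.
qemu-system-x86_64 -device virtio-scsi-pci,id=scsihw0,disable-legacy=on

# Alternatively, on a Q35 machine, placing the device on a PCIe root
# port makes it default to modern mode without an explicit option.
qemu-system-x86_64 -machine q35 \
  -device pcie-root-port,id=pcie.1,chassis=1,slot=1 \
  -device virtio-scsi-pci,bus=pcie.1,id=scsihw0
```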
-------- Original message --------
From: Fiona Ebner <f.ebner@proxmox.com>
Reply-To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
Christian Moser <cmos@maklee.com>
Subject: Re: [pve-devel] issues with Virtio-SCSI devicde on Proxmox...
Date: 13/08/2024 10:55:47
Hi,
Am 12.08.24 um 12:40 schrieb Christian Moser:
> Hello,
>
> I work for VSI (VMS Software Inc), which is porting the OpenVMS
> operating system to x86. At this point it runs successfully on various
> hypervisors, but we have some issues with KVM running on Proxmox.
>
> The OpenVMS VM works just fine with SATA disks, and it also works
> with, for example, the virtio-network device, but trying to use
> virtio-scsi hangs the driver. I have debugged this and I can
> successfully configure the port/controller and send the I/O request
> to the device. It then gets processed by the device, which posts the
> results and sets the interrupt bit in the ISR register, but it never
> asserts the interrupt, hence the driver never gets notified and the
> I/O hangs.
>
> I have tried both “virtio-scsi-pci” and “virtio-scsi-single”, but no
> luck. The emulated virtio-scsi device is a legacy device. But
> then again, the virtio-network device is also a legacy device and
> here we are getting interrupts. One thing which bothers me is the
> fact that the “legacy interrupt disable” bit is set in the PCI config
> space of the virtio-scsi device (i.e. bit 10 at offset 4).
>
> Questions:
> * is there a way to configure a modern virtio-scsi device (i.e.
> disable_legacy=on)?
I've already answered this question when you asked in a mail addressed
directly to me:
Am 12.08.24 um 11:58 schrieb Fiona Ebner:
> Hi,
>
> It seems that you either need to attach the "virtio-scsi-pci" device
> to
> a pcie bus or explicitly set the "disable_legacy=on" option for the
> device, neither of which Proxmox VE currently does or allows
> configuring. The only way right now seems to be to attach the disk
> yourself via custom arguments (the 'args' in the Proxmox VE VM
> configuration), but then the disk will be invisible to Proxmox VE
> operations which look at specific disk keys in the configuration!
>
> Feel free to open a feature request on our bug tracker to make this
> configurable:
> https://bugzilla.proxmox.com/
>
> P.S. Please write to the developer list rather than individual
> developers for such questions in the future:
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
> Best Regards,
> Fiona
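For reference, the 'args' workaround Fiona describes would look roughly like this in the VM configuration; the VM ID, volume path, and device IDs below are hypothetical, and such a disk is invisible to Proxmox VE's own disk handling:

```shell
# /etc/pve/qemu-server/<vmid>.conf  (hypothetical sketch)
# Pass an extra controller and disk straight to QEMU. Proxmox VE will
# not manage this disk (no backup, resize, or migration awareness).
args: -device virtio-scsi-pci,id=extrascsi0,disable-legacy=on -drive file=/dev/zvol/rpool/data/vm-100-disk-1,if=none,id=extradrive0,format=raw -device scsi-hd,bus=extrascsi0.0,drive=extradrive0
```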
> * why is the legacy interrupt bit set in the PCI config space ?
Most likely because the virtio-scsi-pci device is configured without
the "disable_legacy=on" option. If not explicitly set, the option
defaults to "disable_legacy=auto", and when the device is not attached
to PCIe (which is currently the case for Proxmox VE), legacy mode will
be enabled.
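One way to see which mode a virtio device actually ended up in is to look at its PCI device ID, and to check the Interrupt Disable bit (bit 10) of the Command register at config-space offset 0x04 that the reporter mentions. A small self-contained sketch; the raw register values are made-up examples, not read from a live system:

```python
# Virtio PCI device IDs: 0x1000-0x103F is the legacy/transitional
# range, 0x1040-0x107F is the modern (virtio 1.0) range.
def virtio_mode(device_id: int) -> str:
    if 0x1000 <= device_id <= 0x103F:
        return "legacy/transitional"
    if 0x1040 <= device_id <= 0x107F:
        return "modern"
    return "not a virtio device ID"

# The PCI Command register sits at config-space offset 0x04;
# bit 10 is the INTx (legacy interrupt) disable bit.
INTX_DISABLE = 1 << 10

def intx_disabled(command_reg: int) -> bool:
    return bool(command_reg & INTX_DISABLE)

print(virtio_mode(0x1004))    # transitional virtio-scsi device ID
print(virtio_mode(0x1048))    # modern virtio-scsi device ID
print(intx_disabled(0x0507))  # example value with bit 10 set
```

On a Linux host the same check can be done against sysfs (`/sys/bus/pci/devices/*/device` and `config`), but the pure functions above keep the sketch independent of any particular machine.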
> * Are there any working driver for virtio-scsi on this KVM using Q35
> machine? i.e. any other OS
The virtio drivers for Windows and the ones in Linux work just fine
with our configuration.
> Any thoughts why these interrupts are not getting delivered on the
> PCIE bus?
We do not configure the virtio-scsi-pci on a PCIe bus currently, see my
initial answer.
Best Regards,
Fiona
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Thread overview: 11+ messages
2024-08-12 10:40 Christian Moser
2024-08-13 8:55 ` Fiona Ebner
2024-08-13 9:33 ` [pve-devel] FW: " Christian Moser
2024-08-14 6:44 ` DERUMIER, Alexandre via pve-devel [this message]
[not found] ` <1178cb3475d719ae31f0c375cd3930fc24d98401.camel@groupe-cyllene.com>
2024-08-14 7:20 ` Christian Moser
2024-08-14 12:22 ` DERUMIER, Alexandre via pve-devel
[not found] ` <c851a27f4f5eca7c767fbffa48cb208e2d8fe1e6.camel@groupe-cyllene.com>
2024-08-14 12:48 ` Christian Moser
2024-08-14 13:05 ` DERUMIER, Alexandre via pve-devel
[not found] ` <6ea8c90b469da48525ed743b6541bb3644be4a6c.camel@groupe-cyllene.com>
2024-08-15 5:35 ` [pve-devel] FW: " Christian Moser
2024-08-13 16:12 ` [pve-devel] " DERUMIER, Alexandre via pve-devel
[not found] ` <67b222322506a9eab0f7cf7da5a9fd715c8a91ff.camel@groupe-cyllene.com>
2024-08-14 6:42 ` [pve-devel] FW: " Christian Moser