From: Fiona Ebner <f.ebner@proxmox.com>
To: Daniel Kral <d.kral@proxmox.com>,
	Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [RFC qemu-server v2 3/4] fix #6378 (continued): warn intel-iommu users about iommu and host aw bits mismatch
Date: Fri, 5 Sep 2025 14:52:29 +0200	[thread overview]
Message-ID: <03689eea-7824-4139-8ff8-4917d9bc62ec@proxmox.com> (raw)
In-Reply-To: <DCKU5N4L1IXW.2NZABFDZFQAEE@proxmox.com>

On 05.09.25 at 1:38 PM, Daniel Kral wrote:
> On Fri Sep 5, 2025 at 12:50 PM CEST, Fiona Ebner wrote:
>> On 02.09.25 at 1:23 PM, Daniel Kral wrote:
>>
>>> Signed-off-by: Daniel Kral <d.kral@proxmox.com>
>>> ---
>>> I already talked about this with @Fiona off-list: the code this adds
>>> to qemu-server just for a warning is quite a lot, but it is more
>>> readable than the above error, which is only issued once the VM is
>>> already running.
>>>
>>> Particularly, I don't like the logic duplication of
>>> get_cpu_address_width(...), which tries to copy what
>>> target/i386/{,host-,kvm/kvm-}cpu.c do to retrieve the {,guest_}phys_bits
>>> value, where I'd rather see this implemented in pve-qemu as in [0].
>>>
>>> There are two qemu and edk2 discussion threads that might help in
>>> deciding how to proceed with this patch [0] [1]. It could also be
>>> better to implement this downstream in pve-qemu for now, similar to
>>> [0], or, of course, contribute an actual fix upstream.
>>>
>>> [0] https://lore.kernel.org/qemu-devel/20250130115800.60b7cbe6.alex.williamson@redhat.com/
>>> [1] https://edk2.groups.io/g/devel/topic/patch_v1/102359124
>>
>> To avoid all the complexity and the maintainability burden of staying
>> compatible with how QEMU calculates this, can we simply notify/warn
>> users who set aw-bits that they might need to set guest-phys-bits to
>> the same value too?
> 
> Hm, this warning is intended for people who get the above
> vfio_container_dma_map(...) error, which was already happening before
> aw-bits was increased from 39 to 48 bits with QEMU 9.2.
> 
> Now that the default value for aw-bits is 48 bits, people whose hosts
> have less than 48 bits of physical address width will set aw-bits more
> often, as their machines cannot start anyway because of the fatal
> aw-bits > host aw-bits error.
> 
> So we could go for that warning at all times, but that leaves out users
> who don't have aw-bits set (e.g. machine version set to < 9.2) or other
> cases that could come up in the future (e.g. when CPUs with 5-level
> paging are more common).
> 
> But I agree with you about the maintainability burden, so maybe we'll
> just warn whenever aw-bits is set that guest-phys-bits should also be
> set to the same value, i.e. guest-phys-bits = aw-bits?

Ah, I wasn't aware this issue could also happen without aw-bits set.

As discussed off-list:

The simple notice/warning when aw-bits is set (and vfio is used) would
still catch most newly affected people. It would be nice to have the
aw-bits feature available, so that users can work around the regression.
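As a rough illustration (not the actual qemu-server code, which is
written in Perl), the simple check discussed here could look like the
following sketch; the config key names are assumptions for illustration:

```python
# Hedged sketch of the proposed notice/warning: when a VM uses VFIO
# passthrough and sets the intel-iommu aw-bits option without a matching
# guest-phys-bits, emit a warning. Key names are illustrative only.

def check_aw_bits(conf: dict) -> list[str]:
    """Return warnings for an aw-bits/guest-phys-bits mismatch."""
    warnings = []
    # hostpciN entries indicate PCI(e) passthrough, i.e. VFIO is in use
    uses_vfio = any(k.startswith("hostpci") for k in conf)
    aw_bits = conf.get("aw-bits")
    guest_phys_bits = conf.get("guest-phys-bits")
    if uses_vfio and aw_bits is not None and guest_phys_bits != aw_bits:
        warnings.append(
            f"aw-bits is set to {aw_bits}; consider also setting "
            f"guest-phys-bits={aw_bits} to avoid vfio_container_dma_map() "
            f"errors"
        )
    return warnings
```

The check deliberately stays silent when no hostpci device is configured,
matching the "and vfio is used" condition above.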

The other warning is best done in QEMU itself; it just seems that no
follow-up series for it has been posted yet [0]. We could also go ahead
and apply/backport the warning [1] ourselves without waiting for
upstream. Still, it would be good to briefly ask the author whether this
is still planned or whether it should/can be picked up.

[0]:
https://lore.kernel.org/qemu-devel/20250206131438.1505542-1-clg@redhat.com/
[1]:
https://lore.kernel.org/qemu-devel/20250130134346.1754143-9-clg@redhat.com/
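For context, the host limit that the fatal aw-bits > host aw-bits check
compares against comes from the CPU's physical address width, which the
pve-common patch in this series exposes from /proc/cpuinfo. A minimal
sketch of parsing that value (assuming the standard "address sizes" line
format):

```python
import re

def host_address_bits(cpuinfo: str) -> "tuple[int, int] | None":
    """Parse (physical, virtual) address widths from /proc/cpuinfo text.

    A matching line looks like:
        address sizes   : 39 bits physical, 48 bits virtual
    """
    m = re.search(
        r"address sizes\s*:\s*(\d+) bits physical, (\d+) bits virtual",
        cpuinfo,
    )
    return (int(m.group(1)), int(m.group(2))) if m else None
```

A host reporting 39 physical bits here cannot satisfy the aw-bits=48
default, which is the situation the fatal error catches.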


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Thread overview: 18+ messages
2025-09-02 11:21 [pve-devel] [PATCH common/qemu-server v2 0/5] fix issues with viommu+vfio passthrough in #6608, #6378 Daniel Kral
2025-09-02 11:21 ` [pve-devel] [PATCH common v2 1/1] procfs: cpuinfo: expose x86_phys_bits and x86_virt_bits values Daniel Kral
2025-09-05  9:10   ` Fiona Ebner
2025-09-05 11:47     ` Daniel Kral
2025-09-02 11:21 ` [pve-devel] [PATCH qemu-server v2 1/4] fix #6608: expose viommu driver aw-bits option Daniel Kral
2025-09-05 10:07   ` Fiona Ebner
2025-09-05 11:45     ` Daniel Kral
2025-09-05 12:00       ` Fiona Ebner
2025-09-05 14:18   ` Daniel Kral
2025-09-02 11:21 ` [pve-devel] [PATCH qemu-server v2 2/4] cpu config: factor out gathering common cpu properties Daniel Kral
2025-09-05 10:32   ` Fiona Ebner
2025-09-02 11:22 ` [pve-devel] [RFC qemu-server v2 3/4] fix #6378 (continued): warn intel-iommu users about iommu and host aw bits mismatch Daniel Kral
2025-09-02 11:26   ` Daniel Kral
2025-09-05 10:50   ` Fiona Ebner
2025-09-05 11:38     ` Daniel Kral
2025-09-05 12:52       ` Fiona Ebner [this message]
2025-09-02 11:22 ` [pve-devel] [RFC qemu-server v2 4/4] machine: warn intel-iommu users about too large address width Daniel Kral
2025-09-05 10:55   ` Fiona Ebner
