From: Dominik Csapak <d.csapak@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
	"DERUMIER, Alexandre" <Alexandre.DERUMIER@groupe-cyllene.com>
Subject: Re: [pve-devel] last training week student feedback/request
Date: Thu, 23 Jun 2022 10:37:46 +0200
Message-ID: <c7f216ed-369e-6cc4-dec0-d72bae0e00c6@proxmox.com>
In-Reply-To: <dd3a9ffa9a66f7a7d8c5cd383f080eb5d3f7076c.camel@groupe-cyllene.com>

On 6/23/22 10:25, DERUMIER, Alexandre wrote:
> Hi,
> 
> I just finished my Proxmox training week,
> 
> here are some student requests/feedback:

Hi,

I'll just answer the points where I'm currently involved, so someone else
might answer the other ones ;)

[snip]
> 2)
> Another student has a need for PCI passthrough: a cluster with
> multiple nodes, each with multiple PCI cards.
> He's using HA and has 1 or 2 backup nodes with a lot of cards,
> to be able to fail over 10 other servers.
> 
> The problem is that on the backup nodes, the PCI addresses of the cards
> are not always the same as on the production nodes,
> so HA can't work.
> 
> I think it would be great to add some kind of "shared local device
> pool" at the datacenter level, where we could define
> 
> pci:     poolname
>           node1:pciaddress
>           node2:pciaddress
> 
> usb:     poolname
>           node1:usbport
>           node2:usbport
>           
> 
> so we could dynamically choose the correct PCI address when restarting
> the VM.
> 
> Permissions could be added too, and maybe a "migratable" option once
> mdev live migration support is ready, ...

I was working on that last year, but got held up with other stuff;
I'm planning to pick it up again this/next week.

My solution looked very similar to yours, with additional fields
to uniquely identify the card (to prevent accidental passthrough
when the address changes, for example).

Permissions are also planned there...
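
To make this a bit more concrete, here is a rough sketch of how such a
mapping and its use in a VM config could look (all names and the exact
syntax are just assumptions for illustration, nothing about the final
format is decided yet):

  # datacenter-level pool, one host-specific address per node
  pci: gpu-pool
        node1:0000:01:00.0
        node2:0000:82:00.0

  # the VM config references the pool name instead of a fixed address,
  # so after an HA recovery the node-local address can be resolved
  # when the VM is started on the target node
  hostpci0: gpu-pool,pcie=1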

> 
> 
> 3)
> Related to 2), another student has a need for live migration with an
> NVIDIA card with mdev.
> I'm currently trying to test whether it's possible, as there are some
> experimental vfio options to enable it, but it doesn't seem to be ready.
> 

That would be cool; I'd like to have some vGPU-capable cards here to test,
but so far no luck (also, access to and support for NVIDIA's vGPU driver
is probably the bigger problem AFAICS).
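
For anyone who wants to experiment: as far as I know the experimental
switch Alexandre refers to is the "x-enable-migration" property on
qemu's vfio-pci device (treat the exact property name as an assumption,
it has changed between qemu versions), roughly:

  -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/<mdev-uuid>,x-enable-migration=on

but both the qemu side and the vendor driver support are still
experimental, so nothing we could rely on yet.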

kind regards
Dominik




