Date: Mon, 22 Aug 2022 16:07:40 +0200
From: Dominik Csapak
To: "DERUMIER, Alexandre", Proxmox VE development discussion
Subject: Re: [pve-devel] [PATCH common/qemu-server/manager] improve vGPU (mdev) usage for NVIDIA

On 8/22/22 15:39, DERUMIER, Alexandre wrote:
> On 22/08/22 12:16, Dominik Csapak wrote:
>> On 8/17/22 01:15, DERUMIER, Alexandre wrote:
>>> On 9/08/22 10:39, Dominik Csapak wrote:
>>>> On 8/9/22 09:59, DERUMIER, Alexandre wrote:
>>>>> On 26/07/22 08:55, Dominik Csapak wrote:
>>>>>> so maybe someone can look at that and give some feedback?
>>>>>> my idea there would be to allow multiple device mappings per node
>>>>>> (instead of only one) and the qemu code would select one automatically
>>>>> Hi Dominik,
>>>>>
>>>>> do you want to create some kind of pool of pci devices in your "add
>>>>> cluster-wide hardware device mapping" patch series?
>>>>>
>>>>> Maybe in hardwaremap, allow defining multiple pci addresses on the
>>>>> same node?
>>>>>
>>>>> Then, for mdev, look whether an mdev already exists on one of the
>>>>> devices. If not, try to create the mdev on one device; if that fails
>>>>> (max number of mdevs reached), try to create the mdev on the next
>>>>> device, ...
>>>>>
>>>>> if not mdev, choose a pci device in the pool not yet detached from
>>>>> the host.
>>>>>
>>>> yes i plan to do this in my next iteration of the mapping series
>>>> (basically what you describe)
>>> Hi, sorry to be late.
>>>
>>>> my (rough) idea:
>>>>
>>>> have a list of pci paths in the mapping (e.g. 01:00.0;01:00.4;...)
>>>> (should be enough, i don't think grouping unrelated devices (different
>>>> vendor/product) makes much sense?)
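fwiw, the fallback described above (walk the mapped devices in order,
create the mdev on the first one that still has capacity) can be
illustrated with the kernel's sysfs mdev interface; `try_create_mdev`
and the example addresses are made up for illustration, not part of the
patch series:

```shell
# rough sketch of the proposed fallback: walk the mapped PCI devices and
# try to create an mdev of the wanted type on the first one with capacity.
# function name and example addresses are illustrative, not from the patches.
try_create_mdev() {
    type="$1"; shift
    uuid=$(cat /proc/sys/kernel/random/uuid)
    for dev in "$@"; do          # e.g. 0000:01:00.0 0000:01:00.4
        create="/sys/bus/pci/devices/$dev/mdev_supported_types/$type/create"
        # writing a uuid to 'create' fails when the device has no free instance
        if ( echo "$uuid" > "$create" ) 2>/dev/null; then
            echo "$uuid"         # caller hands this uuid to qemu (vfio-pci)
            return 0
        fi
    done
    return 1                     # pool exhausted: no device can host the mdev
}
```

on success it prints the new mdev's uuid; the real logic would of course
live in qemu-server's Perl code, this only shows the allocation order.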
>>> yes, that's enough for me. we don't want to mix unrelated devices.
>>>
>>> BTW, I'm finally able to do live migration with nvidia mdev vgpu.
>>> (need to compile the nvidia vfio driver with an option to enable it +
>>> add "-device vfio-pci,x-enable-migration=on,...")
>>
>> nice (what flag do you need on the driver install? i did not find it)
>> i'll see if i can test that on a single card (only have one here)
>>
> I used the 460.73.01 driver. (the latest 510 driver doesn't have the
> flag and the code, don't know why)
> https://github.com/mbilker/vgpu_unlock-rs/issues/15
>
> the flag is NV_KVM_MIGRATION_UAP=1.
> As I didn't know how to pass the flag,
> I simply unpacked the driver with
> "NVIDIA-Linux-x86_64-460.73.01-grid-vgpu-kvm-v5.run -x",
> edited "kernel/nvidia-vgpu-vfio/nvidia-vgpu-vfio.Kbuild" to add
> NV_KVM_MIGRATION_UAP=1,
> and then ran ./nvidia-installer

thanks, i am using the 510.73.06 driver here (official grid driver) and
the dkms source has that flag, so i changed the .Kbuild in my /usr/src
folder and rebuilt it. i'll test it tomorrow
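for the archives, the rebuild steps from the mail written out as a small
sketch; the `enable_migration_uap` helper is my addition (any editor does
the same job), the filenames are the ones quoted above, and as noted the
510 dkms source already carries the flag:

```shell
# sketch of the rebuild described above: unpack the .run installer, switch
# on the migration UAPI in the vgpu-vfio Kbuild, then reinstall.
# enable_migration_uap appends the flag only if it is not present yet.
enable_migration_uap() {
    kbuild="$1"
    grep -q '^NV_KVM_MIGRATION_UAP=1' "$kbuild" || \
        echo 'NV_KVM_MIGRATION_UAP=1' >> "$kbuild"
}

# usage (run as root; filenames from the 460.73.01 vGPU KVM package):
#   sh NVIDIA-Linux-x86_64-460.73.01-grid-vgpu-kvm-v5.run -x
#   cd NVIDIA-Linux-x86_64-460.73.01-grid-vgpu-kvm-v5
#   enable_migration_uap kernel/nvidia-vgpu-vfio/nvidia-vgpu-vfio.Kbuild
#   ./nvidia-installer
```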