From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <f.ebner@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 13CBF8E18
 for <pve-devel@lists.proxmox.com>; Fri,  1 Sep 2023 12:24:36 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id F06A51444A
 for <pve-devel@lists.proxmox.com>; Fri,  1 Sep 2023 12:24:35 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS
 for <pve-devel@lists.proxmox.com>; Fri,  1 Sep 2023 12:24:35 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id 3F3CC47D37;
 Fri,  1 Sep 2023 12:24:35 +0200 (CEST)
Message-ID: <3e337e38-1a91-8b41-c03c-1f89c8885df7@proxmox.com>
Date: Fri, 1 Sep 2023 12:24:34 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.14.0
Content-Language: en-US
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
 Thomas Lamprecht <t.lamprecht@proxmox.com>,
 Alexandre Derumier <aderumier@odiso.com>
References: <20230619072841.38531-1-aderumier@odiso.com>
 <20230619072841.38531-5-aderumier@odiso.com>
 <809ca35e-ba06-4326-b830-734096ed0370@proxmox.com>
From: Fiona Ebner <f.ebner@proxmox.com>
In-Reply-To: <809ca35e-ba06-4326-b830-734096ed0370@proxmox.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-SPAM-LEVEL: Spam detection results:  0
 AWL 1.662 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 DMARC_MISSING             0.1 Missing DMARC policy
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 NICE_REPLY_A           -3.478 Looks like a legit reply (A)
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
Subject: Re: [pve-devel] [PATCH v2 pve-manager 2/2] ui: qemu : memoryedit:
 add new max && virtio fields
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Fri, 01 Sep 2023 10:24:36 -0000

On 01.09.23 at 11:48, Thomas Lamprecht wrote:
> On 19/06/2023 at 09:28, Alexandre Derumier wrote:
>> +		xtype: 'pveMemoryField',
>> +		name: 'max',
>> +		minValue: 65536,
>> +		maxValue: 4194304,
>> +		value: '',
>> +		step: 65536,
>> +		fieldLabel: gettext('Maximum memory') + ' (MiB)',
> 
> This huge step size will be confusing to users; there should be a way to
> allow smaller steps (e.g., 1 GiB or even 128 MiB).
> 
> Even nowadays, with a huge amount of memory installed on a lot of servers,
> whether a (potentially malicious) VM can use up 64G or 128G still makes
> quite a difference on many setups. Fiona is checking the backend to see
> whether it can be done with a finer granularity, or what other options
> we have here.
> 

At first glance, I think it should be possible, even if we keep the
restriction "all memory devices should have the same size", which
keeps the code simpler:

For DIMMs, we have 64 slots, so I don't see a reason why we can't use
64 MiB granularity rather than 64 GiB.
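
To illustrate the arithmetic (just a sketch, not the actual backend
code; the helper name is made up):

    // With 64 DIMM slots and equally sized DIMMs, 'max' can step in
    // units of 64 MiB, because each DIMM then still gets a whole
    // number of MiB.
    const DIMM_SLOTS = 64;
    function dimmSizeMiB(maxMiB) {
        if (maxMiB % DIMM_SLOTS !== 0) {
            throw new Error("max must be a multiple of 64 MiB");
        }
        return maxMiB / DIMM_SLOTS; // e.g. 4194304 MiB -> 65536 MiB per DIMM
    }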

For virtio-mem, we have one device per socket (up to 8, assuming a power
of 2), and the block size is 2 MiB, so we could have 16 MiB granularity.
Or is there an issue with setting the 'size' for a virtio-mem-pci device
to such a fine-grained value? Even if there is, we can simply create the
device with a bigger supported 'size' and have our API reject requests
to go beyond the maximum later.
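
Again as a rough sketch (same caveats as above, names are made up):

    // With up to 8 virtio-mem devices (one per socket) and a block
    // size of 2 MiB, 'max' can step in units of 8 * 2 = 16 MiB.
    const VIRTIO_MEM_BLOCK_MIB = 2;
    const MAX_SOCKETS = 8;
    function virtioMemDeviceSizeMiB(maxMiB) {
        const granularity = MAX_SOCKETS * VIRTIO_MEM_BLOCK_MIB; // 16 MiB
        if (maxMiB % granularity !== 0) {
            // this is where the API could reject a request that does
            // not fit the granularity
            throw new Error("max must be a multiple of 16 MiB");
        }
        return maxMiB / MAX_SOCKETS; // per-device 'size', a multiple of the block size
    }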