From: Dominik Csapak
Date: Tue, 8 Nov 2022 08:59:04 +0100
To: Thomas Lamprecht, Proxmox Backup Server development discussion, Wolfgang Bumiller
References: <20221031113953.3111599-1-d.csapak@proxmox.com> <20221031113953.3111599-4-d.csapak@proxmox.com> <20221107123538.i355vavyncl5edm7@casey.proxmox.com> <36968ac0-73c1-466a-e122-485a2598d14b@proxmox.com>
In-Reply-To: <36968ac0-73c1-466a-e122-485a2598d14b@proxmox.com>
Subject: Re: [pbs-devel] applied: [RFC PATCH proxmox-backup 2/2] file-restore: dynamically increase memory of vm for zpools

On 11/7/22 18:15, Thomas Lamprecht wrote:
> Am 07/11/2022 um 13:35 schrieb Wolfgang Bumiller:
>> applied
>
> meh, can we please get this opt-in and only enable it for root@pam or for
> users with some powerful priv on /, as talked about as the chosen approach
> to allow more memory the last time this came up (off list IIRC)... I really
> do *not* want the memory DoS potential increased by a lot just by opening
> some file-restore tabs; this actually should get more restrictions (for
> common "non powerful" users), not fewer..

understandable, so I can do that, but maybe it's time we rethink the
file-restore mechanism as a whole, since it's currently rather unergonomic:

* users don't know how many and which file-restore VMs are running; they may
  not even know it starts a VM at all
* regardless of whether my patch is applied, the only privileges necessary to
  start a bunch of VMs are VM.Backup on the VM and Datastore.AllocateSpace on
  the storage (which in turn is probably enough to create an arbitrary number
  of backups)
* with arbitrarily sized disks/filesystems inside, no fixed amount of memory
  we give will always be enough

so here are some proposals on how to improve that (we won't implement all of
them simultaneously, but maybe something from this list is usable):

* make the list of running file-restore VMs visible, and maybe add a manual
  'shutdown'
* limit the number of restore VMs per user (or per VM?)
  - this would need the mechanism from above anyway, since otherwise either
    the user cannot start the restore VM, or we abort an older VM (with a
    possibly running download operation)
* make the VM memory configurable (per user/VM/globally?)
* limit the global memory usage for file-restore VMs
  - again, this needs some control mechanism for stopping/seeing these VMs
* throw away the automatic starting of VMs, and make it explicit, i.e. make
  the user start/shut down a VM manually
  - we can have some 'configuration panel' before starting (like with restore)
  - the user is aware it's starting
  - this still needs some mechanism to see them, but with a manual starting
    API call it's easier to have e.g. a running worker that can be stopped

>
>>
>> AFAICT if the kernel patch is not applied it'll simply have no effect
>> anyway, so we shouldn't need any "hard dependency bumps" where
>> otherwise things would break?
>
> if you actually want to enforce that the new behavior is there, you need a
> dependency bump.
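To make the "visible list + per-user limit" proposals above a bit more concrete: both could share one small registry of running restore VMs. A minimal sketch in Rust — all names here (`RestoreVmRegistry`, `try_start`, the string user ids) are hypothetical illustrations, not the actual proxmox-backup code:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

/// Hypothetical registry of running file-restore VMs, keyed by user id.
/// Tracking them in one place enables both a "list running VMs" view
/// and a per-user concurrency limit.
struct RestoreVmRegistry {
    max_per_user: usize,
    running: Mutex<HashMap<String, Vec<u64>>>, // user id -> VM ids
}

impl RestoreVmRegistry {
    fn new(max_per_user: usize) -> Self {
        Self {
            max_per_user,
            running: Mutex::new(HashMap::new()),
        }
    }

    /// Try to register a new VM for `user`; refuse if the limit is reached,
    /// instead of silently aborting an older VM with a running download.
    fn try_start(&self, user: &str, vm_id: u64) -> Result<(), String> {
        let mut map = self.running.lock().unwrap();
        let vms = map.entry(user.to_string()).or_default();
        if vms.len() >= self.max_per_user {
            return Err(format!(
                "user {user} already has {} file-restore VMs running",
                vms.len()
            ));
        }
        vms.push(vm_id);
        Ok(())
    }

    /// List a user's running VMs (what a visible panel / manual
    /// 'shutdown' button would be built on).
    fn list(&self, user: &str) -> Vec<u64> {
        self.running.lock().unwrap().get(user).cloned().unwrap_or_default()
    }

    /// Deregister a VM on shutdown, freeing a slot for the user.
    fn stop(&self, user: &str, vm_id: u64) {
        if let Some(vms) = self.running.lock().unwrap().get_mut(user) {
            vms.retain(|&id| id != vm_id);
        }
    }
}
```

A global memory cap would fit the same shape: track per-VM memory instead of a count, and refuse `try_start` when the summed total would exceed the budget.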
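The opt-in that Thomas asks for above could be as small as a memory-selection policy gated on the caller's privileges. A sketch under explicit assumptions: the constants are made up, and `is_privileged` stands in for the real PBS ACL check (e.g. a powerful priv on /), which is not shown here:

```rust
/// Sketch of an opt-in memory policy for file-restore VMs.
/// The constants and the `is_privileged` flag are illustrative only;
/// real code would consult the PBS ACL tree for the user's privileges.
const DEFAULT_VM_MEMORY_MIB: u64 = 192; // assumed fixed baseline
const INCREASED_VM_MEMORY_MIB: u64 = 384; // assumed larger size for zpool restores

/// Unprivileged users always get the fixed baseline; only privileged
/// users get the dynamically increased memory, and only when needed.
fn restore_vm_memory_mib(is_privileged: bool, needs_zpool_memory: bool) -> u64 {
    if is_privileged && needs_zpool_memory {
        INCREASED_VM_MEMORY_MIB
    } else {
        DEFAULT_VM_MEMORY_MIB
    }
}
```

This keeps the DoS surface for ordinary users exactly where it was before the patch, while letting trusted users opt in to the larger VM.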