public inbox for pve-devel@lists.proxmox.com
From: Stefan Reiter <s.reiter@proxmox.com>
To: Fabian Ebner <f.ebner@proxmox.com>, pve-devel@lists.proxmox.com
Subject: Re: [pve-devel] [PATCH qemu-server] fix #3324: clone disk: use larger blocksize for EFI disk when possible
Date: Mon, 1 Mar 2021 11:13:30 +0100
Message-ID: <c1a3b388-a734-8827-10f0-e5cbf3faebba@proxmox.com>
In-Reply-To: <59858abd-5032-2130-1aae-db734ecd8a50@proxmox.com>

On 3/1/21 11:06 AM, Fabian Ebner wrote:
> Am 01.03.21 um 10:54 schrieb Stefan Reiter:
>> On 3/1/21 10:42 AM, Fabian Ebner wrote:
>>> Moving to Ceph is very slow when bs=1. Instead, use the biggest possible
>>> power of two <= 1024. At the moment our EFI image sizes are multiples of
>>> 1024, so just using 1024 wouldn't be a problem, but this feels more
>>> future-proof.
>>>
>>> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
>>> ---
>>>
>>> I did not see a way for 'qemu-img dd' to use a larger blocksize while
>>> still specifying the exact total size if it is not a multiple of the
>>> blocksize.
>>>
>>
>> Could it make sense to just set the block size equal to the image size
>> with count=1? The images will always be very small anyway...
>>
> 
> Note that AAVMF_VARS.fd is 64 MiB. Are blocksizes that big a good idea?
> 

That'd be too much, but the VARS file shouldn't be copied anyway? Only 
the efidisk attached to the VM?
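
For illustration, a minimal sketch of what the count=1 variant suggested
above could look like, reusing the variables from the clone_disk() hunk
quoted below. Whether 'qemu-img dd' is happy with a bs= of several hundred
KiB is an assumption here, not something verified in this thread:

    # sketch only, not the applied patch: copy the whole EFI disk image
    # in a single block by setting the blocksize to the full image size
    my $bs = $size;    # assumption: qemu-img dd accepts an arbitrary bs
    my $count = 1;

    run_command(['qemu-img', 'dd', '-n', '-O', $dst_format, "bs=$bs", "count=$count",
	"if=$src_path", "of=$dst_path"]);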

>>>   PVE/QemuServer.pm | 10 +++++++++-
>>>   1 file changed, 9 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
>>> index f401baf..e579cdf 100644
>>> --- a/PVE/QemuServer.pm
>>> +++ b/PVE/QemuServer.pm
>>> @@ -6991,7 +6991,15 @@ sub clone_disk {
>>>           # that is given by the OVMF_VARS.fd
>>>           my $src_path = PVE::Storage::path($storecfg, $drive->{file});
>>>           my $dst_path = PVE::Storage::path($storecfg, $newvolid);
>>> -        run_command(['qemu-img', 'dd', '-n', '-O', $dst_format, "bs=1", "count=$size",
>>> +
>>> +        # Ceph doesn't like too small blocksize, see bug #3324
>>> +        my $bs = 1;
>>> +        while ($bs < $size && $bs < 1024 && $size % $bs == 0) {
>>> +            $bs *= 2;
>>> +        }
>>> +        my $count = $size / $bs;
>>> +
>>> +        run_command(['qemu-img', 'dd', '-n', '-O', $dst_format, "bs=$bs", "count=$count",
>>>               "if=$src_path", "of=$dst_path"]);
>>>           } else {
>>>           qemu_img_convert($drive->{file}, $newvolid, $size, $snapname, $sparseinit);
>>>
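
For reference, a self-contained Perl sketch of the blocksize selection in
the hunk above, with a worked example (the 128 KiB size is a hypothetical
EFI disk image size, not taken from this thread):

    use strict;
    use warnings;

    my $size = 131072;    # hypothetical 128 KiB EFI disk image

    # loop from the patch: double bs while it is below the image size,
    # below 1024 and still divides the size evenly
    my $bs = 1;
    while ($bs < $size && $bs < 1024 && $size % $bs == 0) {
	$bs *= 2;
    }
    my $count = $size / $bs;

    print "bs=$bs count=$count\n";    # prints "bs=1024 count=128"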




Thread overview: 9+ messages
2021-03-01  9:42 Fabian Ebner
2021-03-01  9:54 ` Stefan Reiter
2021-03-01 10:06   ` Fabian Ebner
2021-03-01 10:13     ` Stefan Reiter [this message]
2021-03-01 10:18       ` Dietmar Maurer
2021-03-01 10:22         ` Fabian Ebner
2021-03-02  7:11         ` Fabian Ebner
2021-03-01 10:19       ` Fabian Ebner
2021-03-01 10:23 ` Fabian Ebner
