From: Gilberto Nunes <gilberto.nunes32@gmail.com>
Date: Wed, 30 Sep 2020 11:21:47 -0300
To: Aaron Lauterer <a.lauterer@proxmox.com>
Cc: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] Qm move_disk bug (?)
Ok! Just to be sure, I did it again...

In the LVM-Thin storage I have a 100.00g VM disk. Note that only about 6% is filled up:

lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz--  18.87g             31.33  1.86
  root          pve -wi-ao----   9.75g
  swap          pve -wi-ao----   4.00g
  vm-100-disk-0 pve Vwi-aotz-- 100.00g data         5.91

Now I tried to use the move_disk cmd:

qm move_disk 100 scsi0 VMS --format qcow2

(VMS is the Directory Storage.)

Using this command to check the qcow2 file:

watch -n 1 qemu-img info vm-100-disk-0.qcow2

Every 1.0s: qemu-img info vm-100-disk-0.qcow2    proxmox01: Wed Sep 30 11:02:02 2020

image: vm-100-disk-0.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 21.2 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

After a while, all space in /DATA, which is the Directory Storage, is full:

df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  1.9G     0  1.9G   0% /dev
tmpfs                 394M  5.8M  388M   2% /run
/dev/mapper/pve-root  9.8G  2.5G  7.4G  25% /
tmpfs                 2.0G   52M  1.9G   3% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vdb1              40G   40G  316K 100% /DATA
/dev/fuse              30M   16K   30M   1% /etc/pve
tmpfs                 394M     0  394M   0% /run/user/0

and the image has almost 40G filled...
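To illustrate the distinction at play here between apparent (virtual) size and allocated size on the directory storage, here is a minimal sketch with plain coreutils (hypothetical file names, nothing Proxmox-specific): a sparse file takes no real space until blocks are written, while writing literal zeroes allocates real blocks, which is how a target can fill up while receiving a mostly-empty 100G image.

```shell
# A sparse file: large apparent size, but (almost) no blocks
# allocated until data is actually written.
truncate -s 100M sparse.img
ls -l sparse.img   # apparent size: 100 MiB
du -k sparse.img   # allocated size: stays near 0

# Writing literal zeroes allocates real blocks, even though the
# content is "empty".
dd if=/dev/zero of=dense.img bs=1M count=100 2>/dev/null
du -k dense.img    # allocated size: roughly 100 MiB
```

(Exact `du` numbers depend on the filesystem; the point is the gap between the two.)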
qemu-img info vm-100-disk-0.qcow2
image: vm-100-disk-0.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 39.9 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

And the qm move_disk command errored out after a while:

qm move_disk 100 scsi0 VMS --format qcow2
create full clone of drive scsi0 (local-lvm:vm-100-disk-0)
Formatting '/DATA/images/100/vm-100-disk-0.qcow2', fmt=qcow2 cluster_size=65536 preallocation=metadata compression_type=zlib size=107374182400 lazy_refcounts=off refcount_bits=16
drive mirror is starting for drive-scsi0
drive-scsi0: transferred: 384827392 bytes remaining: 106989355008 bytes total: 107374182400 bytes progression: 0.36 % busy: 1 ready: 0
...
...
drive-scsi0: transferred: 42833281024 bytes remaining: 64541097984 bytes total: 107374379008 bytes progression: 39.89 % busy: 1 ready: 0
drive-scsi0: transferred: 42833281024 bytes remaining: 64541097984 bytes total: 107374379008 bytes progression: 39.89 % busy: 1 ready: 0
drive-scsi0: Cancelling block job
drive-scsi0: Done.
storage migration failed: mirroring error: drive-scsi0: mirroring has been cancelled

Then I tried to use qemu-img convert and everything worked fine:

qemu-img convert -O qcow2 /dev/pve/vm-100-disk-0 /DATA/images/100/vm-100-disk-0.qcow2

qemu-img info vm-100-disk-0.qcow2
image: vm-100-disk-0.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 6.01 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

---
Gilberto Nunes Ferreira

On Wed, Sep 30, 2020 at 10:59 AM, Gilberto Nunes <gilberto.nunes32@gmail.com> wrote:

> UPDATE
> From the CLI I have used
>
> qm move_disk 100 scsi0 VMS --format qcow2
>
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
> On Wed, Sep 30, 2020 at 10:26 AM, Gilberto Nunes <gilberto.nunes32@gmail.com> wrote:
>
>> > How did you move the disk? GUI or CLI?
>> Both.
>> From the CLI: qm move_disk 100 scsi0 VMS (VMS is the Directory Storage)
>>
>> Proxmox is all up to date...
>> pveversion -v
>> proxmox-ve: 6.2-2 (running kernel: 5.4.65-1-pve)
>> pve-manager: 6.2-12 (running version: 6.2-12/b287dd27)
>> pve-kernel-5.4: 6.2-7
>> pve-kernel-helper: 6.2-7
>> pve-kernel-5.4.65-1-pve: 5.4.65-1
>> pve-kernel-5.4.34-1-pve: 5.4.34-2
>> ceph-fuse: 12.2.11+dfsg1-2.1+b1
>> corosync: 3.0.4-pve1
>> criu: 3.11-3
>> glusterfs-client: 5.5-3
>> ifupdown: 0.8.35+pve1
>> ksm-control-daemon: 1.3-1
>> libjs-extjs: 6.0.1-10
>> libknet1: 1.16-pve1
>> libproxmox-acme-perl: 1.0.5
>> libpve-access-control: 6.1-2
>> libpve-apiclient-perl: 3.0-3
>> libpve-common-perl: 6.2-2
>> libpve-guest-common-perl: 3.1-3
>> libpve-http-server-perl: 3.0-6
>> libpve-storage-perl: 6.2-6
>> libqb0: 1.0.5-1
>> libspice-server1: 0.14.2-4~pve6+1
>> lvm2: 2.03.02-pve4
>> lxc-pve: 4.0.3-1
>> lxcfs: 4.0.3-pve3
>> novnc-pve: 1.1.0-1
>> proxmox-backup-client: 0.8.21-1
>> proxmox-mini-journalreader: 1.1-1
>> proxmox-widget-toolkit: 2.2-12
>> pve-cluster: 6.1-8
>> pve-container: 3.2-2
>> pve-docs: 6.2-6
>> pve-edk2-firmware: 2.20200531-1
>> pve-firewall: 4.1-3
>> pve-firmware: 3.1-3
>> pve-ha-manager: 3.1-1
>> pve-i18n: 2.2-1
>> pve-qemu-kvm: 5.1.0-2
>> pve-xtermjs: 4.7.0-2
>> qemu-server: 6.2-14
>> smartmontools: 7.1-pve2
>> spiceterm: 3.1-1
>> vncterm: 1.6-2
>> zfsutils-linux: 0.8.4-pve1
>>
>> > The VM disk (100G) or the physical disk of the storage?
>> The VM disk is 100G in size, but the storage has 40G... It's just a lab...
>>
>> ---
>> Gilberto Nunes Ferreira
>>
>> On Wed, Sep 30, 2020 at 10:22 AM, Aaron Lauterer <a.lauterer@proxmox.com> wrote:
>>
>>> Hey,
>>>
>>> How did you move the disk? GUI or CLI?
>>>
>>> If via CLI, could you post the command?
>>>
>>> Additionally, which versions are installed? (pveversion -v)
>>>
>>> One more question inline.
>>>
>>> On 9/30/20 3:16 PM, Gilberto Nunes wrote:
>>> > Hi all
>>> >
>>> > I tried to move a VM disk from LVM-thin to a Directory Storage, but
>>> > when I did this, qm move_disk just filled up the entire disk.
>>>
>>> The VM disk (100G) or the physical disk of the storage?
>>>
>>> > The disk inside LVM-thin is 100G in size, but only about 5G is
>>> > occupied by the OS.
>>> > I have used the qcow2 format.
>>> > However, if I do it from the CLI with the command:
>>> >
>>> > qemu-img convert -O qcow2 /dev/pve/vm-100-disk-0
>>> > /DATA/images/100/vm-100-disk-0.qcow2
>>> >
>>> > it works nicely and copies only what the OS occupies inside the VM, but
>>> > creates a virtual disk of 100GB.
>>> >
>>> > Is this some kind of bug in qm move_disk?
>>> >
>>> > Thanks a lot
>>> >
>>> > ---
>>> > Gilberto Nunes Ferreira
>>> > _______________________________________________
>>> > pve-devel mailing list
>>> > pve-devel@lists.proxmox.com
>>> > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
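For anyone reading the thread later: the gap between the two results above (39.9 GiB from the mirror job vs. 6.01 GiB from qemu-img convert) looks like a difference in zero handling, since qemu-img convert skips zero regions by default. A rough analogy with plain dd (hypothetical file names; this is not what qm move_disk runs internally, just a sketch of the two copy behaviours):

```shell
# Source: 50 MiB of literal zeroes.
dd if=/dev/zero of=zeroes.img bs=1M count=50 2>/dev/null

# A plain copy writes every zero block to the target, so the copy
# allocates the full 50 MiB -- comparable to a copy job that treats
# unused space as literal data.
dd if=zeroes.img of=dense-copy.img bs=1M 2>/dev/null

# conv=sparse seeks over zero-filled blocks instead of writing them --
# comparable to qemu-img convert, which detects zero regions.
dd if=zeroes.img of=sparse-copy.img bs=1M conv=sparse 2>/dev/null

du -k dense-copy.img sparse-copy.img   # dense is large, sparse near 0
```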