From: Joseph John <jjk.saji@gmail.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] Moving virtual machine from CE 6.X to latest enterprise edition, advice requested
Date: Fri, 4 Aug 2023 13:06:08 +0400	[thread overview]
Message-ID: <CAKeuxjAGy-aNtZaOgivwWyx=3GrotN7d2O6JLH7=v_u=v6XoiA@mail.gmail.com> (raw)
In-Reply-To: <1bab43c9-edf3-9a3a-c5f6-0d9b0aee1de6@unibe.ch>

Hi,
Thanks for the advice.
I first tried with a CIFS share and checked it by accessing it from other
machines. I created the CIFS share under Datacenter > Storage (left-hand
side of the menu).
Now when I go to the VM, choose the Backup option, and click Backup, I can
only see the local storage; I cannot see the CIFS share which I created.

Similarly, I set up an NFS server with a shared directory and checked it by
accessing it from other hosts, then created the NFS share under Datacenter >
Storage.
Here too, when I go to the VM, choose the Backup option, and click Backup, I
can only see the local storage, not the NFS or CIFS storage created earlier.
The Proxmox version is 6.3-3.

But when I run df -h, I get these results:

root@server-1:~# df -h
Filesystem                               Size  Used Avail Use% Mounted on
udev                                     378G     0  378G   0% /dev
tmpfs                                     76G   13M   76G   1% /run
/dev/mapper/pve-root                      29G  6.2G   22G  23% /
tmpfs                                    378G   69M  378G   1% /dev/shm
tmpfs                                    5.0M     0  5.0M   0% /run/lock
tmpfs                                    378G     0  378G   0% /sys/fs/cgroup
/dev/sdb2                                511M  312K  511M   1% /boot/efi
/dev/fuse                                 30M   72K   30M   1% /etc/pve

//10.115.129.160/proxmox                 1.0T  359G  666G  36% /mnt/pve/backup
10.115.129.169:/home/itsupport/nfsshare 1017G   14G  955G   2% /mnt/pve/nfsbackup
tmpfs                                     76G     0   76G   0% /run/user/0


So from df -h we can see the CIFS and NFS mounts, but the backup option in
the web interface does not show them in the drop-down menu.
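
I wonder whether the two storages are simply missing the "backup" content
type, which would explain why they are mounted but not offered in the
backup dialog. If that is the cause, something along these lines should
fix it (the storage IDs "backup" and "nfsbackup" are my guess from the
mount points above; adjust the content list as needed):

root@server-1:~# pvesm set backup --content backup
root@server-1:~# pvesm set nfsbackup --content backup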
Otherwise, I am thinking of using the command line to run the backup, so
that I can specify the location (the CIFS or NFS share) to which the disk
image is copied.
I will post another thread asking how to take backups of VM images from
the command line.
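
My guess is that the command would look something like this (VMID 109 is
from my earlier log; the mode and compression options are assumptions on
my part):

root@server-1:~# vzdump 109 --storage nfsbackup --mode snapshot --compress zstd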



Thanks
Joseph John



On Thu, Aug 3, 2023 at 7:14 PM Peppo Brambilla <peppo.brambilla@unibe.ch>
wrote:

> Hi
>
> I was using an NFS server connected to both the old and the new cluster.
> It should be set up as similarly as possible on both clusters; it may be
> sufficient if the storages are named the same.
> Then I proceeded as follows:
>
> * move VM's disks to the NFS share
> * shutdown VM on old cluster
> * copy VM's config files in /etc/pve/qemu-server/  from old to new server
> * you should now see the VM on the new server as well
> * start VM on new server
> * move disks of VM to ceph on new server (you may want to keep the old
> disk images)
> * if the VM runs fine, you can remove the config files on the old
> cluster and remove the disk images there as well (rough commands
> sketched below)
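>
> A rough command-line sketch of those steps (assuming VMID 109, a disk
> named "scsi0", a shared NFS storage called "nfsshare" on both clusters,
> and a placeholder hostname for the new node -- adjust all of these to
> your setup):
>
> qm move_disk 109 scsi0 nfsshare    # move the disk to the shared NFS storage
> qm shutdown 109                    # shut down the VM on the old cluster
> scp /etc/pve/qemu-server/109.conf root@new-node:/etc/pve/qemu-server/
> qm start 109                       # on the new cluster
> qm move_disk 109 scsi0 <ceph-storage>   # move the disk to ceph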
>
> Cheers -- Peppo
>
>
> On 03.08.23 11:19, Joseph John wrote:
> > Hi
> > Thanks.
> >
> > The backup option is for some reason not completing successfully,
> > as seen in the log file:
> > cat  /var/lib/vz/dump/vzdump-qemu-109-2023_08_02-12_34_08.log
> > 2023-08-02 12:42:10 ERROR: vma_queue_write: write error - Broken pipe
> > 2023-08-02 12:42:10 INFO: aborting backup job
> > 2023-08-02 12:42:10 INFO: stopping kvm after backup task
> > 2023-08-02 12:42:13 ERROR: Backup of VM 109 failed - vma_queue_write: write error - Broken pipe
> >
> > The GUI/web way of taking a backup is not working.
> > I have to find another way of taking the image and putting it on the
> > new setup; the present storage is based on ceph, and the VMs are in a
> > ceph pool.
> > I am thinking of a way to take the VM image from the ceph pool and
> > move it to the new PVE instance.
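> >
> > Perhaps something like this could work to pull a disk image out of the
> > ceph pool and import it on the new server (the pool name "rbd", image
> > name "vm-109-disk-0", and target storage are placeholders, and this
> > assumes the NFS share is reachable from both servers):
> >
> > rbd export rbd/vm-109-disk-0 /mnt/pve/nfsbackup/vm-109-disk-0.raw
> > qm importdisk 109 /mnt/pve/nfsbackup/vm-109-disk-0.raw <target-storage>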
> >
> > Guidance requested for getting the VM images to the new Proxmox.
> >
> >
> >
> > Thanks
> > Joseph John
> >
> >
> >
> > On Wed, Aug 2, 2023 at 11:37 AM Aaron Lauterer <a.lauterer@proxmox.com>
> > wrote:
> >
> >> That sounds like a reasonable plan.
> >>
> >> If you have a network share that you want to utilize for the backups
> later
> >> on
> >> (never store them on the same machine, 3-2-1 backup strategy and such
> ;) )
> >> you
> >> could already configure it on the old Proxmox VE server. That way, you
> >> don't
> >> have to manually move the backup files over. Since you are
> decommissioning
> >> the
> >> old server, there is no risk of running into issues from two different
> >> Proxmox VE clusters (or separate single nodes) accessing the exact
> >> same storage.
> >>
> >> Under normal situations, you don't want to give two Proxmox VE clusters
> >> access
> >> to the same storage, as you might run into VMID conflicts.
> >>
> >> Cheers,
> >> Aaron
> >>
> >> On 8/2/23 09:31, Joseph John wrote:
> >>> Dear All,
> >>> Good morning
> >>> We are setting up Proxmox 8 Enterprise Edition; the installation is
> >>> in progress.
> >>>
> >>> Earlier we were using Proxmox 6.X Community Edition on a separate
> >>> unit. Now, once the Enterprise Edition is up, we plan to move all the
> >>> VM instances from the old 6.X CE to the separate Proxmox 8 Enterprise
> >>> Edition.
> >>>
> >>> I am planning to move the old VMs from the CE (6.X) to Proxmox 8
> >>> (Enterprise Edition) in the following way:
> >>>
> >>>      - Take a backup of the VM on the CE using the backup option
> >>>      - scp/rsync the backup files to the Proxmox 8 Enterprise server
> >>>      - Use the restore option on the new server to restore the image
> >>>        (my guess at the command is sketched below)
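> >>>
> >>> I assume the restore step on the new server would look something like
> >>> this (the archive path, VMID, and target storage are placeholders):
> >>>
> >>>      qmrestore /path/to/vzdump-qemu-100.vma.zst 100 --storage local-lvm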
> >>>
> >>>
> >>> I would like advice on whether the above-mentioned steps are
> >>> appropriate when moving from one of the older CE editions (6.X) to
> >>> the latest Enterprise Edition.
> >>>
> >>> Thanks
> >>> Joseph John
> >>
>
>
> --
> Universität Bern
> Phil.-nat. Fakultät
> Institut für Informatik
>
> Dr. Peppo Brambilla
> Systemadministrator
>
> Neubrückstrasse 10
> 3012 Bern
> Schweiz
> Telefon +41 31 684 33 10
>
>


