From: Alex K <rightkicktech@gmail.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] Proxmox questions/features
Date: Thu, 8 Jul 2021 10:54:53 +0300
Message-ID: <CABMULtLLfnrrE8Odo94xghUnYAaO2L0_+UktKpp5JOb+yHvToQ@mail.gmail.com>
In-Reply-To: <CABMULtL50PNMjqe8+7DNh0P3k5Zf27_hFMYq2-voL1V0g1xw2A@mail.gmail.com>
Checking the QEMU process for the specific VM, I get the following (note the gluster:// -drive entry for the VM disk):
/usr/bin/kvm -id 100 -name Debian -no-shutdown \
  -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait \
  -mon chardev=qmp,mode=control \
  -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 \
  -mon chardev=qmp-event,mode=control \
  -pidfile /var/run/qemu-server/100.pid \
  -daemonize \
  -smbios type=1,uuid=667352ae-6b86-49fc-a892-89b96e97ab8d \
  -smp 1,sockets=1,cores=1,maxcpus=1 \
  -nodefaults \
  -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg \
  -vnc unix:/var/run/qemu-server/100.vnc,password \
  -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
  -m 2048 \
  -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e \
  -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f \
  -device vmgenid,guid=3ce34c1d-ed4f-457d-a569-65f7755f47f1 \
  -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 \
  -device usb-tablet,id=tablet,bus=uhci.0,port=1 \
  -device VGA,id=vga,bus=pci.0,addr=0x2 \
  -chardev socket,path=/var/run/qemu-server/100.qga,server,nowait,id=qga0 \
  -device virtio-serial,id=qga0,bus=pci.0,addr=0x8 \
  -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 \
  -object rng-random,filename=/dev/urandom,id=rng0 \
  -device virtio-rng-pci,rng=rng0,max-bytes=1024,period=1000,bus=pci.1,addr=0x1d \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 \
  -iscsi initiator-name=iqn.1993-08.org.debian:01:094fb2bbde3 \
  -drive file=/mnt/pve/share/template/iso/debian-10.5.0-amd64-netinst.iso,if=none,id=drive-ide2,media=cdrom,aio=threads \
  -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101 \
  -device virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5 \
  -drive file=gluster://node0/vms/images/100/vm-100-disk-1.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on \
  -device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100 \
  -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on \
  -device virtio-net-pci,mac=D2:83:A3:B9:77:2C,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102 \
  -machine type=pc+pve0
How can one confirm that *libgfapi* is being used to access the VM disk, and, if it is not used, how can libgfapi be enabled?
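
If I am reading the command line correctly, the -drive entry above already hints at the answer: a gluster:// URL should mean QEMU opens the image through its native gluster block driver (libgfapi), while a plain /mnt/pve/... path would mean it goes through the FUSE mount. The two checks I came up with (the grep patterns are my own guess):

  qm showcmd 100 | tr ' ' '\n' | grep '^file='                 # gluster:// URL = libgfapi; /mnt/pve/... path = FUSE
  grep gfapi /proc/$(cat /var/run/qemu-server/100.pid)/maps    # only shows the library is loaded, not that it is used

Is that a reliable way to verify it?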
Thanx,
Alex
On Thu, Jul 8, 2021 at 10:40 AM Alex K <rightkicktech@gmail.com> wrote:
> Hi all,
>
> Does anyone have any info to share on the below?
> Many thanx
>
> On Tue, Jul 6, 2021 at 12:00 PM Alex K <rightkicktech@gmail.com> wrote:
>
>> Hi all,
>>
>> I've been assessing Proxmox for the last couple of days, coming from
>> previous experience with oVirt. The intent is to switch to this solution if
>> most of the features I need are covered.
>>
>> The questions below may have been asked before, though searching the forum
>> and the web I was not able to find any specific reference, or was not sure
>> whether the feedback was still relevant.
>>
>> - When adding a gluster volume, I see that the UI provides an option for a
>> secondary server. In case I have a 3-replica glusterfs setup where I need to
>> add two backup servers, as below, how can this be defined?
>>
>> backup-volfile-servers=node1,node2
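>>
>> For what it's worth, the only form I could find in /etc/pve/storage.cfg
>> takes a single backup server; a minimal sketch (the storage name and
>> hostnames are placeholders of mine):
>>
>> glusterfs: vms
>>         volume vms
>>         server node0
>>         server2 node1
>>         content images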
>>
>>
>> - I've read that qemu uses *libgfapi* when accessing VM disks on glusterfs
>> volumes. Can someone confirm this? I tried to find the VM configs that might
>> reference this detail, but was not able to do so.
>>
>> - I have not seen any *load balancing/scheduling* feature, and looking
>> through the forum it seems this is still missing. Is there any plan to
>> provide such a feature in the future? By load balancing I mean automatically
>> balancing VMs across the available hosts/nodes according to a set policy
>> (CPU load, memory load, or other); a rough CLI sketch of what I mean follows
>> below.
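>>
>> In the meantime, a manual rebalance from the CLI seems possible (a rough
>> sketch; the VM ID and target node are placeholders):
>>
>> pvesh get /nodes --output-format json   # inspect per-node CPU/memory load
>> qm migrate 100 node1 --online           # live-migrate one VM to node1
>>
>> but that is of course not the automatic, policy-driven balancing I am
>> asking about.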
>>
>>
>> Thanx for reading; I appreciate any feedback,
>>
>> Alex
>>