public inbox for pve-user@lists.proxmox.com
From: Stefan Radman via pve-user <pve-user@lists.proxmox.com>
To: "Tonči Stipičević" <tonci@suma-informatika.hr>
Cc: Stefan Radman <stefan.radman@me.com>,
	Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] replication bandwith limit not respected
Date: Sun, 15 Sep 2024 23:37:51 +0300	[thread overview]
Message-ID: <mailman.254.1726432724.414.pve-user@lists.proxmox.com> (raw)
In-Reply-To: <477dc5c4-5fc6-461c-b6b6-7731d9160c3c@suma-informatika.hr>

[-- Attachment #1: Type: message/rfc822, Size: 8347 bytes --]

From: Stefan Radman <stefan.radman@me.com>
To: "Tonči Stipičević" <tonci@suma-informatika.hr>
Cc: Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] replication bandwith limit not respected
Date: Sun, 15 Sep 2024 23:37:51 +0300
Message-ID: <CCF3D6FC-EDA7-40B3-9278-F56CEA62B657@me.com>

Hi Tonči

> Now, two hosts are directly connected with optical patch cable (peer-to-peer) .. 10G

Then you should configure jumbo frames (e.g. MTU 9000) on those two interfaces.
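
A minimal sketch of such a jumbo-frame setup in /etc/network/interfaces (the interface name ens1f0 and the 10.10.10.0/24 addressing are placeholders, not taken from your setup):

```
# direct 10G replication link, host A side (name/address are examples)
auto ens1f0
iface ens1f0 inet static
    address 10.10.10.1/24
    mtu 9000
```

After `ifreload -a` (or a reboot) you can verify that 9000-byte frames actually pass end to end with `ping -M do -s 8972 10.10.10.2` (8972 = 9000 minus 28 bytes of IP/ICMP headers); both sides of the link need the same MTU.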

Stefan

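For what it's worth, the "80000000 bytes per second" in the log line quoted further down matches the configured 80 MB/s exactly and is well under 1 Gbit/s line rate, so an enforced limit should be clearly visible on the GUI graph:

```python
limit_bytes_per_s = 80_000_000  # from the replication log line

limit_mb_per_s = limit_bytes_per_s / 1_000_000        # configured limit in MB/s
line_rate_mb_per_s = 1_000_000_000 / 8 / 1_000_000    # 1 Gbit/s in MB/s

# an enforced 80 MB/s cap is ~64% of a saturated 1G link
print(limit_mb_per_s)       # 80.0
print(line_rate_mb_per_s)   # 125.0
print(round(100 * limit_mb_per_s / line_rate_mb_per_s))  # 64
```

If the graph nevertheless shows a saturated link, it is worth double-checking the per-job setting with `pvesr list` and, if needed, `pvesr update <jobid> --rate 80` (the rate is given in MB/s).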
> On Sep 9, 2024, at 10:40, Tonči Stipičević <tonci@suma-informatika.hr> wrote:
> 
> Hello,
> 
> this was just FYI... I was just about to upgrade the NICs to 10G, but first I had to "rearrange" my VMs around the cluster, and that is why I lowered the bandwidth down to 80 MB/s
> 
> No, I did not measure/test the speed, haven't gone that deep; this info was from the network GUI graph... it was saturated at 1 Gb/s
> 
> Anyway, I remember that a custom replication bandwidth adjustment was always followed by a corresponding change in the GUI network graph... but like you said, maybe this lowering was too small to show up in the GUI network graph
> 
> Pretty soon I'll test it again (with 10G cards) and write back the results. Now, the two hosts are directly connected with an optical patch cable (peer-to-peer)... 10G
> 
>    ...srdačan pozdrav / best regards
> 
> Tonči Stipičević, dipl. ing. elektr.
> direktor / manager
> 
> SUMA Informatika d.o.o., Badalićeva 27, OIB 93926415263
> 
> Podrška / Upravljanje IT sustavima za male i srednje tvrtke
> Small & Medium Business IT Support / Management
> 
> mob: 091 1234003
> www.suma-informatika.hr
> 
> On 06. 09. 2024 at 19:29, pve-user-request@lists.proxmox.com wrote:
>> ----------------------------------------------------------------------
>> 
>> Message: 1
>> Date: Fri, 6 Sep 2024 15:00:57 +0200
>> From: Tonči Stipičević <tonci@suma-informatika.hr>
>> To: PVE User List <pve-user@pve.proxmox.com>
>> Subject: [PVE-User] replication bandwith limit not respected
>> Message-ID: <3ee95219-6894-4ff6-a89c-9e4a8d81b74d@suma-informatika.hr>
>> Content-Type: text/plain; charset=UTF-8; format=flowed
>> 
>> Hello,
>> 
>> I'm running latest cluster (community support)
>> 
>> And today I've lowered the replication speed down to 80 MB/s, but replication
>> still uses the whole NIC bandwidth (1G)... after a few host restarts, still
>> the same
>> 
>> 2024-09-06 14:37:31 using a bandwidth limit of 80000000 bytes per second
>> for transferring 'data2:subvol-1007-disk-0'
>> 
>> Does somebody else experience this too?
>> 
>> 
>> Thank you in advance
>> 
>> BR
>> 
>> Tonči
>> 
> 
> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
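
For the planned 10G test, a quick way to measure the raw link throughput independently of replication is iperf3 (assuming it is installed on both hosts; the peer address below is a placeholder):

```
# on the first host (server side)
iperf3 -s

# on the second host (client side), run for 30 seconds
iperf3 -c 10.10.10.1 -t 30
```

That gives a baseline for the link itself, so any shortfall seen during replication can then be attributed to the rate limit (or its absence) rather than to the cable or NICs.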





Thread overview: 5+ messages
     [not found] <mailman.95.1725643774.414.pve-user@lists.proxmox.com>
2024-09-09  7:40 ` [PVE-User] pve-user Digest, Vol 198, Issue 6 Tonči Stipičević
2024-09-15 20:37   ` Stefan Radman via pve-user [this message]
     [not found] <3ee95219-6894-4ff6-a89c-9e4a8d81b74d@suma-informatika.hr>
2024-09-06 15:10 ` [PVE-User] replication bandwith limit not respected Stefan Nehlsen
2024-09-06 17:18 ` Stefan Radman via pve-user
2024-09-06 17:18 ` Stefan Radman via pve-user

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=mailman.254.1726432724.414.pve-user@lists.proxmox.com \
    --to=pve-user@lists.proxmox.com \
    --cc=stefan.radman@me.com \
    --cc=tonci@suma-informatika.hr \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

Be sure your reply has a Subject: header at the top and a blank line before the message body.
Service provided by Proxmox Server Solutions GmbH