public inbox for pve-devel@lists.proxmox.com
From: Esi Y via pve-devel <pve-devel@lists.proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Cc: Esi Y <esiy0676+proxmox@gmail.com>, t.lamprecht@proxmox.com
Subject: Re: [pve-devel] [RFC PATCH pve-cluster] fix #5728: pmxcfs: allow bigger writes than 4k for fuse
Date: Fri, 20 Sep 2024 07:29:05 +0200
Message-ID: <mailman.26.1726810165.332.pve-devel@lists.proxmox.com>
In-Reply-To: <mailman.23.1726805127.332.pve-devel@lists.proxmox.com>

[-- Attachment #1: Type: message/rfc822, Size: 14348 bytes --]

From: Esi Y <esiy0676+proxmox@gmail.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Cc: t.lamprecht@proxmox.com, Dominik Csapak <d.csapak@proxmox.com>
Subject: Re: [pve-devel] [RFC PATCH pve-cluster] fix #5728: pmxcfs: allow bigger writes than 4k for fuse
Date: Fri, 20 Sep 2024 07:29:05 +0200
Message-ID: <CABtLnHoQFAUN0KcahbMF6hoX=WTfL8bHL0St77gQMSaojVGhBA@mail.gmail.com>

Somehow the ending of my message did not get included, regarding the
"hitting the block layer later" point: this is almost impossible to
quantify reliably. The reason is the use of the WAL; around 80% of the
instructions [1] are happening inside the hocus-pocus of SQLite, and it
really may or may not checkpoint at any given time, depending on how many
transactions are hitting it (from all sides, i.e. also from other nodes).
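
To illustrate the above, a minimal standalone sketch (explicitly not
pmxcfs code, the file name is a placeholder) of how one could at least
watch that behaviour: sqlite3_wal_hook() reports how many frames sit in
the WAL after each commit, and sqlite3_wal_autocheckpoint() sets the
threshold at which SQLite tries to move them into the main database file.

#include <sqlite3.h>
#include <stdio.h>

/* called after every committed write transaction in WAL mode */
static int wal_cb(void *arg, sqlite3 *db, const char *dbname, int frames)
{
    (void)arg; (void)db;
    fprintf(stderr, "wal_hook: %s now has %d frames in the WAL\n",
            dbname, frames);
    return SQLITE_OK;
}

int main(void)
{
    sqlite3 *db;

    if (sqlite3_open("test.db", &db) != SQLITE_OK)
        return 1;
    sqlite3_exec(db, "PRAGMA journal_mode=WAL", NULL, NULL, NULL);
    sqlite3_wal_hook(db, wal_cb, NULL);
    sqlite3_wal_autocheckpoint(db, 1000); /* checkpoint attempt after ~1000 frames */
    /* ... run the write workload here and watch when checkpoints actually happen ... */
    sqlite3_close(db);
    return 0;
}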

In the end, for my initial single-node testing, I ended up with the MEMORY
journal (yes, the old-fashioned one); at least it gave some consistent
results. The amplification dropped rapidly just from not using the WAL
alone [2], and it would halve again with bigger buffers. But I found this
to be a non-topic: the data is in memory, it needs copying there, then it
is in the backend, additionally at times in the WAL, and now FUSE3 would
be increasing buffers in order to ... hit memory first.
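
For reference, the journal-mode switch in that experiment is nothing more
than a pragma; again a standalone sketch with a placeholder file name, not
a patch against pmxcfs:

#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;

    if (sqlite3_open("test.db", &db) != SQLITE_OK)
        return 1;
    /* the "old fashioned" rollback journal, kept purely in memory ... */
    sqlite3_exec(db, "PRAGMA journal_mode=MEMORY", NULL, NULL, NULL);
    /* ... instead of the write-ahead log:
     * sqlite3_exec(db, "PRAGMA journal_mode=WAL", NULL, NULL, NULL); */
    sqlite3_close(db);
    return 0;
}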

One has to question whether the WAL is necessary (for a read-once,
write-all-the-time DB), but for me it is more about whether the DB is
necessary at all. It is there for the atomicity, I know, but there are
other ways to get that without all the overhead; one is sketched below.
At the end of the day, it is about whether one wants a hard- or soft-state
CPG. I personally aim for the soft one, though I don't expect much
sympathy for that.
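
As one example of such an "other way", a minimal sketch of an atomic
replace without any database: write to a temporary file, fsync, then
rename over the target. Paths and error handling are simplified, and this
is not a claim about how pmxcfs should do it.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int atomic_replace(const char *path, const char *tmp,
                          const void *buf, size_t len)
{
    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0640);

    if (fd < 0)
        return -1;
    if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);
    return rename(tmp, path); /* readers see either the old or the new content */
}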

[1] https://forum.proxmox.com/threads/etc-pve-pmxcfs-amplification-inefficiencies.154074/#post-703261
[2] https://forum.proxmox.com/threads/ssd-wearout-and-rrdcache-pmxcfs-commit-interval.124638/#post-702765

On Fri, Sep 20, 2024 at 6:05 AM Esi Y via pve-devel
<pve-devel@lists.proxmox.com> wrote:
>
>
>
>
> ---------- Forwarded message ----------
> From: Esi Y <esiy0676+proxmox@gmail.com>
> To: t.lamprecht@proxmox.com, Dominik Csapak <d.csapak@proxmox.com>
> Cc: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
> Bcc:
> Date: Fri, 20 Sep 2024 06:04:36 +0200
> Subject: Re: [pve-devel] [RFC PATCH pve-cluster] fix #5728: pmxcfs: allow bigger writes than 4k for fuse
> I can't help it, I am sorry in advance, but ...
>
> No one is going to bring up the elephant in the room (you may want to
> call it FUSE), namely that backend_write_inode is hit every single time,
> on virtually every memdb_pwrite, i.e. in addition to cfs_fuse_write also
> on the logically related:
> cfs_fuse_truncate
> cfs_fuse_rename
> cfs_fuse_utimens
>
> So these are all separate transactions hitting the backend, each
> capturing one and the same event.
>
> Additionally, there is nothing atomic about updating the __version__ and
> the actual file ("inode") DB rows, so on every amplified hit the number
> of transactions doubles yet again.
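>
> To make that point concrete, a hedged sketch of what folding both row
> updates into one transaction could look like; the table and column names
> below are made up for illustration and are not the real pmxcfs schema:
>
> #include <sqlite3.h>
>
> static int write_inode_and_version(sqlite3 *db, long long inode,
>                                    const void *data, int size)
> {
>     sqlite3_stmt *stmt;
>
>     if (sqlite3_exec(db, "BEGIN", NULL, NULL, NULL) != SQLITE_OK)
>         return -1;
>
>     /* hypothetical file row update ... */
>     if (sqlite3_prepare_v2(db, "UPDATE tree SET data = ?1 WHERE inode = ?2",
>                            -1, &stmt, NULL) == SQLITE_OK) {
>         sqlite3_bind_blob(stmt, 1, data, size, SQLITE_STATIC);
>         sqlite3_bind_int64(stmt, 2, inode);
>         sqlite3_step(stmt);
>         sqlite3_finalize(stmt);
>     }
>     /* ... and the __version__ bump, committed together with it */
>     sqlite3_exec(db, "UPDATE tree SET version = version + 1 "
>                      "WHERE name = '__version__'", NULL, NULL, NULL);
>
>     return sqlite3_exec(db, "COMMIT", NULL, NULL, NULL) == SQLITE_OK ? 0 : -1;
> }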
>
> Also, locks are persisted into the backend only to be removed again soon
> after.
>
> WRT FUSE2 buffering doing just fine for overwrites (<= the original
> size): this is true, but at the same time the PVE mode of operation
> (albeit quite correctly) is to create a .tmp.XXX file (so this is your
> NEW file being appended to) and then rename it, all of that in-place on
> that very FUSE mountpoint (not so correctly), while pmxcfs remains
> completely oblivious to it.
>
> I could not help it, because here is a developer who, in my opinion,
> quite rightly wanted to pick the low-hanging fruit first, with his
> intuition (and a self-evident reasoning) completely disregarded, while
> the same scrutiny was not exercised when e.g. bumping the limits [1] of
> that very FS, which back then was "tested with touch". And all of this
> is on someone else's codebase that is 10 years old (so designed with a
> different use case in mind, good enough for ~4K files), while the
> well-meaning individual even admits he is not a C guru, yet is asked to
> spend a day profiling this bespoke multi-threaded CPG code?
>
> NB: for brevity, I will leave out entirely what the above does to the
> CPG messages flying around, although that is why I originally got
> interested.
>
> I am sure I have made many friends now that I have called even the FUSE
> migration futile on its own, but well, it is an RFC after all.
>
> Thank you, gentlemen.
>
> Esi Y
>
> On Thu, Sep 19, 2024 at 4:57 PM Thomas Lamprecht
> <t.lamprecht@proxmox.com> wrote:
> >
> > Am 19/09/2024 um 14:45 schrieb Dominik Csapak:
> > > On 9/19/24 14:01, Thomas Lamprecht wrote:
> > >> Am 19/09/2024 um 11:52 schrieb Dominik Csapak:
> > >>> by default libfuse2 limits writes to 4k size, which means that on writes
> > >>> bigger than that, we do a whole write cycle for each 4k block that comes
> > >>> in. To avoid that, add the option 'big_writes' to allow writes bigger
> > >>> than 4k at once.
> > >>>
> > >>> This should improve pmxcfs performance for situations where we often
> > >>> write large files (e.g. big ha status) and maybe reduce writes to disk.
> > >>
> > >> Should? Something like before/after for benchmark numbers, flamegraphs
> > >> would be really good to have, without those it's rather hard to discuss
> > >> this, and I'd like to avoid having to do those, or check the inner workings
> > >> of the affected fuse userspace/kernel code paths here myself.
> > >
> > > well I mean the code change is relatively small and the result is rather clear:
> >
> > Well sure, the code change is just setting an option... but the actual change is
> > abstracted away and would benefit from actually being looked into.
> >
> > > in the current case we have the following calls from pmxcfs (shortened for e-mail)
> > > when writing a single 128k block:
> > > (dd if=... of=/etc/pve/test bs=128k count=1)
> >
> > Better than nothing but still no actual numbers (reduced time, reduced write amp
> > in combination with sqlite, ...), some basic analysis over file/write size distribution
> > on a single node and (e.g. three node) cluster, ...
> > If that's all obvious to you then great, but as already mentioned in the past, I
> > want actual data in commit messages for such stuff, and I cannot really see a downside
> > of having such numbers.
> >
> > Again, as is I'm not really seeing what's to discuss, you send it as RFC after
> > all.
> >
> > > [...]
> > > so a factor of 32 less calls to cfs_fuse_write (including memdb_pwrite)
> >
> > That can be huge or not so big at all; i.e., as mentioned above, it would be good to
> > measure the impact through some other metrics.
> >
> > And FWIW, I used bpftrace to count [0] with an unpatched pmxcfs; there I get
> > the 32 calls to cfs_fuse_write only for a new file, while overwriting the existing
> > file again with the same amount of data (128k) causes just a single call.
> > I tried using more data (e.g. going from 128k initially to 256k or 512k) and it's
> > always the data divided by 128k (even if the first file has a different size).
> >
> > We do not overwrite existing files often, but rather write to a new file and
> > then rename it; still, this is quite interesting and IMO really shows that just
> > because this is a one-line (+1/-1) change, it does not necessarily have to be
> > trivial and obvious in its effects.
> >
> > [0]: bpftrace -e 'u:cfs_fuse_write /str(args->path) == "/test"/ {@ = count();} END { print(@) }' -p "$(pidof pmxcfs)"
> >
> >
> > >>> If we'd change to libfuse3, this would be a non-issue, since that option
> > >>> got removed and is the default there.
> > >>
> > >> I'd prefer that. At least if done with the future PVE 9.0, as I do not think
> > >> it's a good idea in the middle of a stable release cycle.
> > >
> > > why not this change now, and the rewrite to libfuse3 later? that way we can
> > > have some improvements now too...
> >
> > Because I want some actual data and reasoning first, even if it's quite likely
> > that this improves things Somehow™, I'd like to actually know in what metrics
> > and by how much (even if just an upper bound due to the benchmark or some
> > measurement being rather artificial).
> >
> > I mean, you name the big HA status, so why not measure that for real? Likely,
> > among other things, in terms of bytes hitting the block layer, i.e. the actual
> > backing disk, from those requests; then we would know for real whether this can
> > reduce the write load there, not just that it maybe should.
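> >
> > For what it is worth, a rough sketch of one way to sample exactly that:
> > read the sectors-written counter from /proc/diskstats before and after
> > a test run. The device name ("sda") and the 512-byte sector unit are
> > assumptions of this example, not anything measured here.
> >
> > #include <stdio.h>
> > #include <string.h>
> >
> > static unsigned long long sectors_written(const char *dev)
> > {
> >     FILE *f = fopen("/proc/diskstats", "r");
> >     char line[512], name[64];
> >     unsigned long long v[8], result = 0;
> >
> >     if (!f)
> >         return 0;
> >     while (fgets(line, sizeof(line), f)) {
> >         /* fields: major minor name reads rd_merged rd_sectors rd_ms
> >          *         writes wr_merged wr_sectors wr_ms ... */
> >         if (sscanf(line, "%*u %*u %63s %llu %llu %llu %llu %llu %llu %llu %llu",
> >                    name, &v[0], &v[1], &v[2], &v[3],
> >                    &v[4], &v[5], &v[6], &v[7]) == 9
> >             && strcmp(name, dev) == 0) {
> >             result = v[6];      /* sectors written */
> >             break;
> >         }
> >     }
> >     fclose(f);
> >     return result;
> > }
> >
> > int main(void)
> > {
> >     unsigned long long before = sectors_written("sda");
> >     /* ... run the workload here, e.g. let the big HA status churn for a while ... */
> >     unsigned long long after = sectors_written("sda");
> >     printf("%llu bytes written\n", (after - before) * 512ULL);
> >     return 0;
> > }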
> >
> >


[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Thread overview: 15+ messages
2024-09-19  9:52 Dominik Csapak
2024-09-19 12:01 ` Thomas Lamprecht
2024-09-19 12:45   ` Dominik Csapak
2024-09-19 14:57     ` Thomas Lamprecht
2024-09-20  4:04       ` Esi Y via pve-devel
2024-09-20  5:29         ` Esi Y via pve-devel [this message]
     [not found]         ` <CABtLnHoQFAUN0KcahbMF6hoX=WTfL8bHL0St77gQMSaojVGhBA@mail.gmail.com>
2024-09-20  7:32           ` Dominik Csapak
2024-09-20  6:16       ` Dominik Csapak
2024-09-22  9:25       ` Dietmar Maurer
2024-09-23  9:17       ` Dominik Csapak
2024-09-23 11:48         ` Filip Schauer
2024-09-23 14:08           ` Filip Schauer
2024-09-23 12:00         ` Friedrich Weber
2024-09-23 12:03           ` Dominik Csapak
2024-09-19 12:23 ` Esi Y via pve-devel
