From: "Naumann, Thomas" <thomas.naumann@ovgu.de>
To: "pve-user@lists.proxmox.com" <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] Limitation File Restore List
Date: Fri, 16 Jun 2023 06:54:12 +0000 [thread overview]
Message-ID: <33a7c8ec94705b13f36fa4c2b0b57b087ac3c05b.camel@ovgu.de> (raw)
In-Reply-To: <E289558D-890E-4783-B627-DF9268F568A3@qwertz1.com>
Hi,
thanks for your response.
I ran some tests on how many directories the web GUI will list
correctly. Result: the maximum number of directories that is listed
correctly is 18999. From 19000 directories on, the mentioned error
message ("list not finished in time (503)") appears.
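For reference, the directory-count test can be reproduced without a real mail store. A minimal sketch, assuming only standard shell tools (the "user" naming scheme and the temp location are made up; any flat directory with enough entries should behave the same):

```shell
# Create N empty per-user subdirectories, as in the MAIL directory,
# to probe at which count the File Restore listing breaks.
N=19000
BASE=$(mktemp -d)
# Batching names through xargs keeps this fast even for large N.
( cd "$BASE" && seq -f 'user%g' 1 "$N" | xargs mkdir )
# Counting them back from the CLI is no problem:
COUNT=$(ls "$BASE" | wc -l)
echo "$COUNT"
```

Backing up such a tree and expanding it in the File Restore view should then hit the "list not finished in time (503)" error from 19000 entries on.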
best regards
--
Thomas Naumann
On Wed, 2023-06-14 at 22:29 +0200, Stefan wrote:
> Long shot:
> Since you say it works from the CLI and you get a timeout error after
> a couple of minutes when trying to render a 23000-element webpage, I
> would guess it's an inefficiency in how that feature is implemented,
> so some web component aborts the request before it has finished
> (e.g. JavaScript, reverse proxy, web server, etc.).
>
>
>
> On 13 June 2023 10:30:52 CEST, "Naumann, Thomas"
> <thomas.naumann@ovgu.de> wrote:
> > Hi all,
> >
> > The following general conditions are given:
> >
> > - 12-node Proxmox cluster (48 x Intel(R) Xeon(R) CPU E5-2690 v3 /
> > 256 GB RAM)
> > - 96-OSD Ceph pool -> 6x HDD TOSHIBA_MG07ACA14TE + 2x SSD
> > SAMSUNG_MZILT7T6HALA_007 (Ceph DB) per node
> > - 10 GBit network
> > - 10 PBS VMs running
> > - 1 PBS VM (55 GB RAM, 16 CPUs, 35 TB backup datastore on XFS)
> >
> > The last-mentioned PBS VM is connected to another Proxmox cluster
> > (9 nodes, 24 x Intel(R) Xeon(R) CPU X5670, 142 GB RAM, 56 SSD
> > OSDs, 30 VMs running) as backup storage for its VMs. One VM
> > running on this cluster (32 GB RAM, 16 CPUs, 12 TB HDD (virtio,
> > discard, iothread), OpenSuse Leap 15.4) is a mail server. The
> > 12 TB HDD has one directory "MAIL" with 23000 subdirectories
> > (1 subdir per user); 10 TB are in use. The PBS client via CLI
> > works without any problems - for example, mapping a PBS snapshot
> > and listing the subdirs of the MAIL directory.
> > Listing / viewing the subdirectories of the MAIL dir via the web
> > GUI (VM -> Backup -> File Restore -> click "+" on the MAIL dir)
> > does not work. After waiting about 4-5 minutes, the error "list
> > not finished in time (503)" appears. The log files
> > (/var/log/proxmox-backup/file-restore/qemu.log + journalctl) do
> > not show anything abnormal or error-related.
> >
> > What exactly is the root cause of this behaviour, and how can it
> > be solved? Any hint / thought is welcome / helpful...
> >
> > best regards
> > Th. Naumann
> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
Thread overview: 3 messages
2023-06-13 8:30 Naumann, Thomas
2023-06-14 20:29 ` Stefan
2023-06-16 6:54 ` Naumann, Thomas [this message]