[PVE-User] CEPH - benchmark
From: lord_Niedzwiedz @ 2023-09-19 10:28 UTC (permalink / raw)
  To: pve-user; +Cc: kkolator

My network and hardware are not heterogeneous.
Ceph does about 1160 MB/sec write and 1820 MB/sec read.

One NVMe drive started going haywire, and the performance of the entire
cluster dropped catastrophically, yet the system said nothing.
I wonder whether there is any mechanism in Ceph/Proxmox that would report
this automatically?
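
A drive that is merely slow (rather than throwing I/O errors) often leaves
the cluster in HEALTH_OK, so a plain health check will not necessarily catch
it. Besides re-benchmarking, a complementary check could watch per-OSD
latency directly. Below is a minimal sketch that could be cronned next to
the benchmark; it assumes a working local mail(1) command and that
"ceph osd perf" prints osd / commit_latency(ms) / apply_latency(ms) columns
(the 50 ms threshold and the recipient are placeholders):

#!/bin/sh
# Sketch: mail a warning when Ceph is not HEALTH_OK, or when an OSD shows
# unusually high commit latency (threshold is an example value only).
MAXLAT_MS=50
ADMIN=root

STATUS=`ceph health | head -n 1`
if [ "$STATUS" != "HEALTH_OK" ]; then
    ceph health detail | mail -s "Ceph health: $STATUS" "$ADMIN"
fi

# "ceph osd perf" lists per-OSD commit/apply latency in ms; a single
# misbehaving NVMe usually stands out here even while health stays OK.
SLOW=`ceph osd perf | awk -v max=$MAXLAT_MS 'NR>1 && $2+0 > max'`
if [ -n "$SLOW" ]; then
    echo "$SLOW" | mail -s "Ceph OSDs above ${MAXLAT_MS}ms commit latency" "$ADMIN"
fi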


I quickly wrote a script that periodically checks performance.

root@tjall1:~# cat /Backup/script/Ceph.sh

#!/bin/sh
#        Ceph test by sir_Misiek@o2.pl
#        Grzegorz Miśkiewicz
#        19.09.2023

MINWRITE=600
MINREAD=1200
POOL1=ceph-lxc
POOL2=ceph-vm

WRITE=`rados bench -p $POOL1 120 write --no-cleanup | grep "Bandwidth " | awk '{ print $3}'`
READ=`rados bench -p $POOL1 60 rand | grep "Bandwidth " | awk '{ print $3}'`

  echo Write = $WRITE
  echo READ = $READ

# Strip everything from the first dot onward; this truncates the floating
# point value to an integer.
WRITE_INT=${WRITE%%.*}
READ_INT=${READ%%.*}

if [ ${MINWRITE} -ge "${WRITE_INT}" ]; then
    echo "Ceph slow write on pool($POOL1): ${WRITE} MB/sec"
fi

if [ ${MINREAD} -ge "${READ_INT}" ]; then
    echo "Ceph slow read on pool($POOL1): ${READ} MB/sec"
fi

# Remove the benchmark objects left behind by --no-cleanup.
rados -p $POOL1 cleanup > /dev/null
sync
rados -p $POOL2 cleanup > /dev/null
sync

exit
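
The script only writes to stdout, so by itself it still does not notify
anyone. One option is to run it from cron, which mails any output to MAILTO
(assuming the node has a working MTA); a sketch of an /etc/cron.d entry
using the path from above:

# /etc/cron.d/ceph-bench -- example schedule, adjust to taste
MAILTO=root
30 3 * * 0   root   /Backup/script/Ceph.sh

Note that the unconditional "echo Write/READ" lines then produce a mail on
every run; redirecting them to a log file (or removing them) limits mail to
the slow-write/slow-read warnings.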



