If you click on the zip file for an entry on the list (the link labeled "zip"), there should be a description of the system in the results files. Since providing that information is voluntary, the amount of detail varies from entry to entry; I tried to put in as much as I could for my systems.

--Ken

On Mar 10, 2020, at 6:43 PM, Harms, Kevin via IO-500 <io-500@vi4io.org> wrote:

Mark,

Currently there is no requirement for replication = 2; you can run with replication = 1.

kevin

________________________________________
From: IO-500 <io-500-bounces@vi4io.org> on behalf of Mark Nelson via IO-500 <io-500@vi4io.org>
Sent: Tuesday, March 10, 2020 4:30 PM
To: io-500@vi4io.org
Subject: [IO-500] How to judge scoring vs storage HW

Hi Folks,

I'm one of the Ceph developers but used to work in the HPC world in a
previous life.  Recently I saw that we were listed on the SC19 IO-500 10
node challenge list but had ranked pretty low.  I figured that it might
be fun to play around for a couple of days and see if I could get our
score up a bit.

Let me first say that it's great having mdtest and ior packaged up like
this.  The hard test cases have already identified a couple of
performance issues we should take care of, involving unaligned
reads/writes and CephFS dynamic subtree partitioning (which are also
dragging our score down).  Very useful!  I was so happy with the effort
that I ended up writing a new libcephfs aiori backend for ior/mdtest.
The PR just merged; here it is for anyone interested:

https://github.com/hpc/ior/pull/217
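
In case it helps anyone evaluate the backend, here is a rough,
hypothetical sketch of the kind of libcephfs calls it wraps. This is
not code from the PR; the client id, conf path, and file name are
placeholders you would swap for your own cluster's settings.

/* Hypothetical libcephfs sketch -- not the actual aiori backend code.
 * Build with: cc cephfs_demo.c -lcephfs
 * Assumes /etc/ceph/ceph.conf and a client.admin keyring are reachable. */
#include <cephfs/libcephfs.h>
#include <fcntl.h>
#include <string.h>

int main(void)
{
    struct ceph_mount_info *cmount;
    char buf[4096];
    int fd;

    memset(buf, 'x', sizeof(buf));

    if (ceph_create(&cmount, "admin") < 0)   /* connect as client.admin */
        return 1;
    ceph_conf_read_file(cmount, NULL);       /* NULL -> default conf search path */
    if (ceph_mount(cmount, "/") < 0)         /* mount at the filesystem root */
        return 1;

    fd = ceph_open(cmount, "/ior-smoke-test", O_CREAT | O_WRONLY, 0644);
    if (fd >= 0) {
        ceph_write(cmount, fd, buf, sizeof(buf), 0);  /* 4 KiB write at offset 0 */
        ceph_close(cmount, fd);
    }

    ceph_unmount(cmount);
    ceph_release(cmount);
    return 0;
}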

Our test cluster has 10 nodes with 8 NVMe drives each, and we are
co-locating the metadata servers and client processes on the same nodes
during testing.  So far with 2x replication we've managed to hit scores
in the 55-60 range, which looks like it would have put us in 10th place
on the SC19 list (note that for that result we are pre-creating the
mdtest-easy directories for static round-robin MDS pinning, though we
have a feature coming soon for ephemeral pinning via a single
parent-directory xattr).  Anyway, I really have no idea how that score
actually compares to the other systems listed.  Is there an easy way to
compare the hardware and software configuration of the storage cluster
behind each entry?
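
For anyone who wants to reproduce the pinning setup: the static
round-robin pinning amounts to setting the ceph.dir.pin xattr on each
pre-created directory over a mounted CephFS tree. A rough, hypothetical
sketch (the directory names and counts below are placeholders, not
mdtest's real layout):

/* Hypothetical sketch of static round-robin MDS pinning over a mounted
 * CephFS tree; the directory layout here is a placeholder. */
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/xattr.h>

#define NUM_DIRS 80   /* e.g. one directory per client process */
#define NUM_MDS   8   /* active MDS ranks to spread load across */

int main(void)
{
    char path[256], rank[16];

    mkdir("/mnt/cephfs/mdtest-easy", 0755);   /* placeholder parent dir */
    for (int i = 0; i < NUM_DIRS; i++) {
        snprintf(path, sizeof(path), "/mnt/cephfs/mdtest-easy/dir.%d", i);
        mkdir(path, 0755);
        snprintf(rank, sizeof(rank), "%d", i % NUM_MDS);
        /* equivalent to: setfattr -n ceph.dir.pin -v <rank> <path> */
        if (setxattr(path, "ceph.dir.pin", rank, strlen(rank), 0) != 0)
            perror(path);
    }
    return 0;
}

Once the ephemeral pinning feature lands, that whole loop should
collapse into a single xattr on the parent directory.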

I.e., in our case we're using 2x replication and 10 nodes total with pretty
beefy Xeon CPUs, 8xP4610 NVMe drives, and 4x25GbE.  Total storage
capacity before replication is ~640TB.

Thanks,
Mark

_______________________________________________
IO-500 mailing list
IO-500@vi4io.org
https://www.vi4io.org/mailman/listinfo/io-500