Dear Dorian,
Since nobody has responded so far:
I understand the issue, but first I want to point out that we also
wanted the IO-500 to bound the performance achievable by users.
Hence, a value that no user could possibly achieve does not serve
this goal.
Regardless of how a single value for a data center is computed, I
propose that you run the benchmark on each supercomputer
independently; those would be good values to have in any case!
There are several options:
1) Best (of course) would be if you manage to run the benchmark
across the supercomputers.
2) Ignore the single IO-500 value, as we will soon have it in the
comprehensive data center list, which allows aggregating the
performance of a site using various metrics to suit individual
needs.
3) Run two benchmark instances concurrently for an extended time,
synchronized.
This could be done by extending the benchmark script to, e.g., pause
until the other instance is done, communicating via files; see the
sketch after this list.
However, the resulting value would not be exactly what the benchmark
intends, as the phase-out will differ between the instances.
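As a minimal sketch of such a file-based barrier (assuming a shared
directory visible from all participating sites; the paths, site names,
and phase names below are made up for illustration):

    import os
    import time

    SYNC_DIR = "/shared/io500-sync"   # hypothetical path reachable from all sites
    SITES = ["siteA", "siteB"]        # hypothetical participating instances

    def barrier(phase, site, poll=1.0):
        """Block until every site has signalled readiness for this phase."""
        os.makedirs(SYNC_DIR, exist_ok=True)
        # Announce that this site has reached the barrier for the given phase.
        open(os.path.join(SYNC_DIR, f"{phase}.{site}.ready"), "w").close()
        # Poll until all participating sites have done the same.
        while not all(os.path.exists(os.path.join(SYNC_DIR, f"{phase}.{s}.ready"))
                      for s in SITES):
            time.sleep(poll)

Each site's wrapper would call, e.g., barrier("ior-easy-write", "siteA")
before launching the corresponding IO-500 phase, so that all instances
enter each phase at roughly the same time.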
What would happen if people then said: well, I'll just run this for
every node independently! It won't be the same.
Still, I would be interested to see what the result of 3) would be,
and I am willing to help you change the script(s).
I could imagine keeping a separate list that contains such results, or
we integrate them into the default list but mark them clearly.
It all depends on the actually achieved results and on how much
*cheating* we would see.
Hence, if you think this is worthwhile, why not do all three things!
At the next BoF at SC you could present your results, and we could
continue the discussion based on them.
Regards,
Julian
2018-06-29 20:33 GMT+01:00 Dorian Krause <d.krause(a)fz-juelich.de>:
Dear list,
I raised the following question during yesterday's workshop at ISC: We
operate a facility-wide centralized storage infrastructure to which a
number of different client systems (clusters) are connected. No client
system alone is capable of saturating the bandwidth of the storage
system, and hence any IO-500 submission using a single system will not
be representative of the performance/capabilities of the overall I/O
subsystem.
We are interested in a benchmark execution mode that would allow us to
assess the center-wide performance level. One possible option would be
to allow summing up concurrently executed IOR runs that have a
sufficiently large temporal overlap. At least for the IOR easy/hard
cases that would yield a sensible number.
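To make the summing rule concrete, here is a minimal sketch; the 90%
overlap threshold and all figures are invented for illustration and are
not part of any IO-500 rule:

    def overlap_fraction(a, b):
        """Fraction of the shorter run's duration overlapping the other run."""
        start = max(a[0], b[0])
        end = min(a[1], b[1])
        shorter = min(a[1] - a[0], b[1] - b[0])
        return max(0.0, end - start) / shorter

    # (start_s, end_s) intervals and measured bandwidths in GiB/s for
    # two IOR runs launched from different client systems (made-up data)
    run_a, bw_a = (0.0, 300.0), 120.0
    run_b, bw_b = (10.0, 310.0), 95.0

    if overlap_fraction(run_a, run_b) >= 0.9:   # assumed threshold
        print(f"center-wide bandwidth: {bw_a + bw_b:.1f} GiB/s")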
Thank you for your consideration.
Best regards,
Dorian Krause
--
Dorian Krause
Juelich Supercomputing Centre (JSC)
Institute for Advanced Simulation (IAS)
Phone: +49 2461 61-3631
Fax: +49 2461 61-6656
--
Dr. Julian Kunkel
Lecturer, Department of Computer Science
+44 (0) 118 378 8218
http://www.cs.reading.ac.uk/
https://hps.vi4io.org/