As I've been playing with the IO500 benchmark on various systems, I'm seeing a fair amount of variability between runs. This may ultimately be to my detriment, but has there been any discussion of requiring the mean of 3 or 5 runs for the official numbers? The most striking examples are when I've managed to hit some kind of storage-side caching just right and produced numbers, particularly on mdtest_hard_stat, that are an order of magnitude greater on one run than on others. Now, I really LIKE having a higher score, but it doesn't seem exactly fair or representative...
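As a rough illustration of why a single cached run distorts a reported score, here is a minimal sketch comparing the mean and median over several runs. The rates are invented placeholder values (not real IO500 results), with one run inflated roughly 10x to mimic the caching effect described above:

```python
from statistics import mean, median

# Hypothetical mdtest_hard_stat rates (kIOPS) from five runs;
# one run hit storage-side caching and is ~10x the others.
runs = [41.2, 39.8, 412.0, 40.5, 38.9]

# The mean is dragged far above the typical run by the outlier,
# while the median stays representative of the other four runs.
print(f"mean:   {mean(runs):.2f}")    # ~114.48
print(f"median: {median(runs):.2f}")  # 40.50
```

A median (or a trimmed mean) over 3 or 5 runs would largely neutralize a single lucky cache hit, whereas a plain mean still rewards it.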
My team unfortunately missed the deadline for official scoring in the
SC18 10-Node Challenge. We have since continued to work on obtaining
official scores and improving them.
1.) Should we, or can we, submit results between the ISC and SC lists to
get independent verification of our scores in the interim?
2.) There was an updated calculation for the SC18 10-Node Challenge (the
mdtest result-calculation bug reported on the list). How can we correct
our individual results to be in line with the update?
*Kevin R. Tubbs, PhD* | Senior Director
Technology & Business Development
Advanced Solutions Group
45800 Northport Loop West
Fremont, CA 94538