The below is an out-of-band conversation I had about “gaming” the system that I wanted to
share with all of us. The summary: ‘gaming’ might be possible, and if it happens we simply
need to update our benchmarks periodically. Also, attempts to game will generally result
in better systems.
>> The idea is that this bounds the performance of a system. A complaint at the BoF
was from users who port their code to run in a new place and then don’t know if they are
getting reasonable IO performance. With these measurements, the users should at least
know their upper and lower bounds for both metadata and data.
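To illustrate the idea of bounds (this is a minimal sketch with invented names and numbers, not any actual benchmark’s output format): once a benchmark has measured an “easy” score (upper bound) and a “hard” score (lower bound) for both metadata and data, a user porting code to a new system can at least check where their observed rate falls.

```python
# Hypothetical bounds from a benchmark run on some system.
# Metric names, units, and values are all illustrative assumptions.
BOUNDS = {
    # metric: (hard/lower bound, easy/upper bound)
    "data_GiB_per_s": (1.2, 40.0),
    "metadata_kIOPS": (5.0, 300.0),
}

def within_bounds(metric: str, observed: float) -> bool:
    """Return True if an observed rate falls inside the measured bounds."""
    lo, hi = BOUNDS[metric]
    return lo <= observed <= hi

# A user's ported application, checked against the bounds:
print(within_bounds("data_GiB_per_s", 10.0))   # inside the measured range
print(within_bounds("metadata_kIOPS", 2.0))    # below even the "hard" lower bound
```

An application scoring below the “hard” lower bound suggests an application or configuration problem rather than a storage-system limit, which is exactly the diagnostic the BoF users were asking for.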
> Agree. (Until someone figures out how to game the “really hard” situations)
Yes, this was a popular sentiment at our SC BOF. To which my response is, “Gaming the
really hard situations improves the storage system for IO workloads that have really hard
IO patterns.” I suppose by definition “gaming” means cheating to get a good result
on the benchmark without actually improving the system for real workloads. I just don’t
think that’s possible here. If you can get a good score on this benchmark, you’ll have
made a generally good and useful storage system. That’s my current belief anyway.
Happy to be proved wrong.
I agree with you in general, and if it turns out that someone is getting great scores on
the benchmark but users aren’t seeing good performance, then the benchmark needs to be
updated.