Thank you, Osamu, for reminding us of this. This is important.
My request would be that you size the submission so that it finishes successfully even if it does not run for the full 300 seconds. I guess you are running with ldiskfs and not enough DNE servers?
If you do so, there is the possibility that the community and committee can agree to consider it a valid submission. What I would propose is to artificially extend the runtime to the full 300 seconds: if you created 8 million files in 100 seconds, your calculated rate would be 8 million files in 300 seconds.
When this was discussed before, the concern was that a person could cheat by using a small cache for a short period of time, in which case the extrapolation would not be fair. However, in this case you are not using a cache, and mdtest does call sync/flush, so we know that your reported rate is a fair one. Forcing you to artificially extend to 300 seconds is a bit unfair to your system, but I think it is a safer policy than allowing invalid results.
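To make the proposal concrete, here is a minimal sketch of the extrapolation rule in Python (the function name and structure are my own, just for illustration): the measured work is divided by the full required runtime rather than the actual shorter runtime, so the reported score can only go down.

```python
REQUIRED_RUNTIME_S = 300  # IO500 minimum phase duration

def extrapolated_rate(work_done, actual_runtime_s,
                      required_runtime_s=REQUIRED_RUNTIME_S):
    """Score the run as if the work had taken the full required runtime.

    This is only fair if the work was fully flushed/synced: the submitter
    could then have used the remaining time to do more work, so the result
    is a lower bound on the true sustained rate.
    """
    if actual_runtime_s >= required_runtime_s:
        # Normal case: the run was long enough, no extrapolation needed.
        return work_done / actual_runtime_s
    return work_done / required_runtime_s

# Example from above: 8 million creates finished in 100 seconds.
measured = 8_000_000 / 100                  # 80,000 creates/s observed
scored = extrapolated_rate(8_000_000, 100)  # ~26,667 creates/s reported
```

Note that because the scored rate is strictly lower than the measured rate whenever the run finishes early, the rule penalizes rather than rewards short runs.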
What do you think about this proposal, Osamu?
Here are some more thoughts about this general question:
Is it fair for someone to do 300 TB in 1 second into a cache and then ask IO500 to artificially extend the runtime to the specified 300 seconds? No, it is not. IO500 is supposed to measure the sustained rate of the storage system. In that example, if the backend system took more than 299 seconds to absorb the cached data, then the extrapolated score would be unfairly high.
However, if the work done in a short period of time has been flushed/synced to the appropriate location, then extrapolation in this way is fair. Imagine in the above scenario that the 300 TB written in 1 second had been fully flushed. The submitter could then have used the remaining 299 seconds to get more work done, so the extrapolated score is at most the actual sustained rate of the system.
Thanks,
John