I think Andreas's solution is good, and it's never too late :-)
We still have about two weeks until the deadline.
Ruibo
John Bent <johnbent(a)gmail.com> wrote on Friday, Oct 25, 2019 at 03:27:
Thanks Ruibo for the question and thanks Andreas for the response.
The committee's current plan is the following:
"In terms of a new list, it is our hope that a new list will not be needed
as the only submissions that were benefitting from client-side caching were
the obvious outliers which we have already removed. After we get all of
the SC19 submissions, we will analyze the mdtest scores to see if the new
rules have resulted in discernible differences; certainly we will also
consider community feedback. If warranted, we will propose to create a new
list and move the existing submissions to an historical list. If not, we
will merrily continue with the existing list."
However, we have been hearing from community members like Andreas that our
hope will not be realized.
Thanks.
On Thu, Oct 24, 2019 at 12:10 PM Andreas Dilger via IO-500 <
io-500(a)vi4io.org> wrote:
> On Oct 23, 2019, at 8:11 PM, Ruibo Wang via IO-500 <io-500(a)vi4io.org>
> wrote:
> >
> > Dear All,
> >
> > I've noticed that there is a newly added "-N 1" parameter to
> > mdtest in the latest SC19 version. I guess it would influence the stat
> > results dramatically.
> > If we use the new parameter, how can we compare with the old results
> > in the IO500 list?
>
> The old and new results will not be directly comparable.
>
> An idea I proposed to the chat yesterday was to add a new
> "mdtest-harder-read" test that uses the "-N" parameter, to be run some
> time *after* the original "mdtest-hard-read" test is run, so that the
> new test results are (mostly) a superset of the existing results. That
> will allow computing scores like the old isc-19 results using only
> "mdtest-hard-read" (they will be similar, but not exactly the same),
> and then using only "mdtest-harder-read" for sc-19 and beyond.
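[As a side note for readers: the effect of mdtest's "-N" stride can be sketched with a toy rank-mapping function. This is purely illustrative Python (mdtest itself is MPI/C, and `stat_target` is a made-up name, not anything in mdtest); it just shows why a stride of 1 sends each rank's stat calls to files another rank created, defeating any warm client-side cache.]

```python
def stat_target(rank, nranks, stride):
    """Return which rank's files this rank will stat.

    stride 0: each rank stats the files it created itself, so the
    stats can be served from a warm client-side cache.
    stride 1 ("-N 1"): each rank stats a neighbor's files, forcing
    the stats to go to the filesystem rather than a local cache.
    """
    return (rank + stride) % nranks

# With 4 ranks and "-N 1", rank 0 stats rank 1's files, and so on,
# wrapping around at the end.
print([stat_target(r, 4, 1) for r in range(4)])  # [1, 2, 3, 0]
# With the default stride of 0, every rank stats its own files.
print([stat_target(r, 4, 0) for r in range(4)])  # [0, 1, 2, 3]
```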
>
> We were also discussing adding an "ior-harder-read" test that does 4KB
> *random* IOPS to better simulate AI/ML workloads. As above, it would be
> better to add this as a new test in addition to the existing
> "ior-hard-read", so that old and new test runs could be compared more
> easily (the IO500 website makes it easy to change the formula for
> computing the scores), yet new runs could include the new result. Old
> runs would automatically be marked "invalid" because they are missing
> one of the results (essentially the geometric mean of anything with a
> zero result in it is zero).
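[For readers unfamiliar with the scoring: the "zero result makes the whole score zero" point follows directly from how a geometric mean works. A minimal sketch, with `io500_score` as a hypothetical name (the real scoring lives on the IO500 site, not in this function):]

```python
def io500_score(phase_scores):
    """Geometric mean of per-phase scores: the n-th root of the
    product of n scores. A single zero (e.g. a missing phase in an
    old run) drives the product, and hence the whole score, to zero."""
    product = 1.0
    for s in phase_scores:
        product *= s
    return product ** (1.0 / len(phase_scores))

# A run missing one phase scores zero overall.
print(io500_score([10.0, 20.0, 0.0]))   # 0.0
# A complete run gets the usual geometric mean.
print(io500_score([10.0, 20.0, 40.0]))  # 20.0
```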
>
> It _might_ be a bit late for adding the separate mdtest-harder-read test
> for io500-sc-19, but it would be (IMHO) a far superior solution to the
> current change that breaks all of the old results, so I would definitely
> be in favour of this change even if it is late. It wouldn't invalidate
> any results that *didn't* have the separate mdtest-harder-read results,
> but submissions that came in with "-N" before this change would have to
> map their mdtest-hard-read results to mdtest-harder-read in the database.
>
> Cheers, Andreas
>
> _______________________________________________
> IO-500 mailing list
> IO-500(a)vi4io.org
> https://www.vi4io.org/mailman/listinfo/io-500
>