Julian, thanks for the examples.
I think what you may be getting at is that the 10 client challenge is
really asking: "Given a large storage system that submits a result to the
standard io500, how well does it do with only 10 clients?"
If this is the case, and we don't want to encourage the submission of
small non-scalable storage systems, then maybe there are other ways to
achieve it such as:
- A submission to the 10 client challenge is only valid if a submission
is also made to the standard io500 list. Users can then look at both
rankings to get an understanding of the system.
- Each submission must have at least 1PB of storage capacity, which will
increase by 10% each year.
Just rough ideas, but maybe we need to clarify why an io500 list cares
about 10 clients?
Dean
On 10/3/19 1:39 AM, Julian Kunkel wrote:
Hi,
IMHO, a simple way of seeing this matter: the 10 Node Challenge really
should be about 10 nodes connected by an interconnect, in order to
normalize results to some extent. Such runs reflect a realistic
configuration.
However, deploying 10 VMs on a single host and seeing a performance
gain vs. running directly on the host seems to be artificial.
Regarding cheating: theoretically, one could run 10 VMs on one big
node. The host could throttle the creation rate to a limit such that
all data fits into a big cache (say, NVDIMMs) from the perspective of
the host (and thus the VMs). Every read would then be served from
cache.
Here is a rather artificial example (if you have more appropriate
numbers, use them):
For IOR BW assume
* writes: 5 GiB/s to NVDIMMs (throttled) => 1.5 TB x 2 of space needed, doable.
* reads: 500 GiB/s.
=> (5*5*500*500)^0.25 = 50 score
Not an issue so far.
For MD, assume 10 million IOPS (10,000 kIOPS) for create and 100
million IOPS (100,000 kIOPS) for every read/stat/delete and find
phase; that gives
(10000^2 * 100000^6)^(1/8)
=> 56234.13
Total score: sqrt(56234*50) = 1676.812
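These figures can be reproduced with a short sketch of the IO500-style
geometric-mean scoring. The throughput/IOPS values are the hypothetical
ones from the example above (GiB/s for bandwidth, kIOPS for metadata);
the phase counts (two writes, two reads; two creates, six other metadata
phases) follow the exponents in the formulas:

```python
import math

def geometric_mean(values):
    """Geometric mean of a list of positive numbers."""
    return math.prod(values) ** (1.0 / len(values))

# Bandwidth phases: two writes at 5 GiB/s, two reads at 500 GiB/s.
bw_score = geometric_mean([5, 5, 500, 500])              # -> 50.0

# Metadata phases: two creates at 10,000 kIOPS, six stat/read/delete/find
# phases at 100,000 kIOPS each.
md_score = geometric_mean([10_000] * 2 + [100_000] * 6)  # -> ~56234.13

# Final score: square root of the product of both sub-scores.
total = math.sqrt(bw_score * md_score)                   # -> ~1676.8

print(bw_score, md_score, total)
```

This matches the 50, 56234.13, and ~1676.8 values quoted in the mail.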
Yes, it is a synthetic example, but there could be technology out there
that generates such numbers, or people may create an IOR backend to
exploit such a setup.
You could also use two nodes, so only 1/5th of the data needs to be
transferred over the network (as the IO500 does rank-shifting); that
would also lead to an artificially inflated number.
Personally, I would be interested in such gaming results; you can
always submit such numbers to the full list as synthetic "upper
bounds".
Best,
Julian
On Wed, Oct 2, 2019 at 10:02 PM Dean Hildebrand via IO-500
<io-500(a)vi4io.org> wrote:
> As a cloud provider, this rule isn't too onerous as there is always a way to get
> dedicated machines through sole-tenant offerings and simply using large VMs (although it
> is a waste of $$ to use clients that have 60+ cores just to run a single benchmark
> process).
>
> I'm more curious about the thinking here; can someone from the committee provide
> some background? This is one of those funny and rare cases where we are worried about
> someone with fewer resources having an advantage over someone with more resources. If a
> system with 1 or 2 clients can beat 10... isn't that one measure of success from an
> HPC point of view?
>
> Dean
>
> On 9/30/19 9:10 AM, John Bent via IO-500 wrote:
>
> To IO500 Community,
>
>
> The committee has received some queries about the rules concerning virtual machines
> for the 10 Node Challenge. As such, the committee has added the following rule:
>
>
> 13. For the 10 Node Challenge, there must be exactly 10 physical nodes for client
> processes, and at least one benchmark process must run on each.
>
> Virtual machines can be used, but the above rule must be followed. More than one
> virtual machine can be run on each physical node.
>
>
> Although we recognize that this may disadvantage cloud architectures, we do want to
> stress that this rule only applies to the 10 Node Challenge. The committee did feel it was
> important to add this rule to ensure that the 10 Node Challenge sublist offers the maximum
> potential for fair comparisons by ensuring equivalent client hardware quantities.
> Submissions with any number/combination of virtual and physical machines can of course
> always be submitted to the full list.
>
>
>
> Thank you,
>
>
> The IO500 Committee
>
>
>
> _______________________________________________
> IO-500 mailing list
> IO-500(a)vi4io.org
>
> https://www.vi4io.org/mailman/listinfo/io-500