Proposal of possible postponement of ISC20 IO500 BOF
by committee@io500.org
Dear IO500 Community,
We have heard your concerns about the late announcement of changes. We
have already taken one concrete action and are considering a second, but
we want your feedback about it first.
We are considering postponing our BoF, and our deadline, by one month.
Since we are not tied to ISC this year, we have this latitude. Are there
any objections from the community to this postponement?
As our concrete action, we will shift future schedules so that, three
months before an event, we release a candidate benchmark and the likely
rule set for public testing. We then commit to finalizing the benchmark
and rules six weeks before the event, after which everything is locked
until after the event.
Please let us know how you feel about the possible postponement, and
please know how deeply we value this community, which makes everything
we do possible.
Based on the feedback received, we will make and announce our final
decision by EOD Sunday AoE.
Thanks,
The IO500 Committee
IO500 runs twice - is that expected starting 2020?
by Pinkesh Valdria
Hello IO-500 experts,
I am trying to configure io500. When I run it, it runs twice: the first run is the regular one and the second is labeled “Running the C version of the benchmark now”. Is that because I misconfigured it, or is it required to run both starting in 2020? My config*.ini file is below.
[root@inst-q7cdd-good-crow io500-app]# ./io500.sh config-test1.ini
System: inst-q7cdd-good-crow
…..
Running the IO500 Benchmark now
[Creating] directories
…..
[Summary] Results files in ./results//2020.05.29-17.31.34-scr
[Summary] Data files in ./out//2020.05.29-17.31.34-scr
[RESULT] BW phase 1 ior_easy_write 6.188 GiB/s : time 357.44 seconds
[RESULT] BW phase 2 ior_hard_write 1.132 GiB/s : time 367.41 seconds
[RESULT] BW phase 3 ior_easy_read 8.090 GiB/s : time 273.37 seconds
[RESULT] BW phase 4 ior_hard_read 3.726 GiB/s : time 111.69 seconds
[RESULT] IOPS phase 1 mdtest_easy_write 4.263 kiops : time 3518.34 seconds
[RESULT] IOPS phase 2 mdtest_hard_write 2.953 kiops : time 303.52 seconds
[RESULT] IOPS phase 3 find 91.550 kiops : time 173.64 seconds
[RESULT] IOPS phase 4 mdtest_easy_stat 137.243 kiops : time 109.30 seconds
[RESULT] IOPS phase 5 mdtest_hard_stat 84.140 kiops : time 10.65 seconds
[RESULT] IOPS phase 6 mdtest_easy_delete 55.311 kiops : time 271.19 seconds
[RESULT] IOPS phase 7 mdtest_hard_read 21.778 kiops : time 41.16 seconds
[RESULT] IOPS phase 8 mdtest_hard_delete 7.133 kiops : time 129.24 seconds
[SCORE] Bandwidth 3.81212 GiB/s : IOPS 24.1149 kiops : TOTAL 9.58796
The io500.sh was run
Running the C version of the benchmark now
IO500 version io500-isc20
<currently running … when I posted this question …>
***************************************************
config-test1.ini (END)
***************************************************
[global]
datadir = ./out/
resultdir = ./results/
timestamp-resultdir = TRUE
# Choose parameters that are very small for all benchmarks
[debug]
stonewall-time = 300 # for testing
[ior-easy]
transferSize = 2m
blockSize = 102400m
[mdtest-easy]
API = POSIX
# Files per proc
n = 500000
[ior-hard]
API = POSIX
# Number of segments 10000000
segmentCount = 400000
[mdtest-hard]
API = POSIX
# Files per proc 1000000
n = 40000
[find]
external-script = /mnt/beeond/io500-app/bin/pfind
pfind-parallelize-single-dir-access-using-hashing = FALSE
***************************************************
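For what it's worth, the io500.sh wrapper in the io500-isc20 app appears to
run the older shell-based workflow first and then launch the new C
application on the same ini file. If I only wanted the C run, I am guessing
the invocation would look roughly like this (the binary name, path, and
process count are my assumptions, not something I have verified):
# Assumed direct launch of only the C application, built via "make" in the
# io500-app directory; the process count below is just an example
mpiexec -np 16 ./io500 config-test1.ini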
Some rules clarifications?
by Mark Nelson
Hi Folks,
We are thinking about throwing together some CephFS IO500 results for
ISC20, and I just wanted to make sure that we are doing the right thing
in a couple of cases. Any help would be much appreciated since we've
never submitted results before. We might have a couple of additional
questions later on, but for now:
1) "All create/write phases must run for at least 300 seconds; the
stonewall flag must be set to 300 which should ensure this."
Is it acceptable to set the stonewall higher than 300, or is a setting
of exactly 300 required?
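For reference, this is the knob we mean in our ini file; the value below is
only an illustration of "higher than 300", not a recommendation:
[debug]
# 300 is the documented minimum; we would like to know if a larger value is allowed
stonewall-time = 600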
2) "The file names for the mdtest output files may not be pre-created."
Does this also include the directories? We have the ability to pin
directories to specific MDSes, which helps in the easy tests. We also
have an experimental feature that more or less does this pseudo-randomly
behind the scenes so long as a top-level xattr is set, but it would be
convenient if we could just pre-create the mdtest directories and set
the xattr to pin them individually in the "directory setup" phase of the
test, if allowed. Likewise, we have code that allows users to provide a
hint that a specific directory is expected to have lots of files, which
can improve performance in the hard tests. I would like to pre-create
the mdtest directory so that we can set the xattr informing Ceph that we
expect a lot of files to be written in that directory.
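Concretely, the kind of pre-creation we have in mind would look roughly
like the following. The paths are placeholders, ceph.dir.pin and
ceph.dir.pin.random are the standard and experimental CephFS pinning
xattrs, and the file-count hint comes from our own code so it is not shown:
# Pre-create the top-level mdtest directory (path is a placeholder)
mkdir -p /mnt/cephfs/io500/mdt-easy
# Pin the subtree to a specific MDS rank (rank 0 here, just as an example)
setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/io500/mdt-easy
# Or enable the experimental pseudo-random ephemeral pinning via a top-level xattr
setfattr -n ceph.dir.pin.random -v 0.01 /mnt/cephfs/io500/mdt-easy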
3) "Only submissions using at least 10 physical client nodes are
eligible to win IO500 awards and at least one benchmark process must run
on each."
We are planning on running on AWS. So long as we are using 10+ metal
nodes, does that meet the requirement to have "at least 10 physical
client nodes"?
Thanks,
Mark
IO500 ISC20 Call for Submission
by committee@io500.org
Deadline: 08 June 2020 AoE
The IO500 [1] is now accepting and encouraging submissions for the
upcoming 6th IO500 list. Once again, we are also accepting submissions
to the 10 Node Challenge to encourage the submission of small scale
results. The new ranked lists will be announced via live-stream at a
virtual session. We hope to see many new results.
The benchmark suite is designed to be easy to run, and the community has
multiple active support channels to help with any questions. Please note
that submissions of all sizes are welcome; the site has customizable
sorting, so it is possible, for example, to submit on a small system and
still get a very good per-client score. Additionally, the list is about
much more than just the raw rank; all submissions help the community by
collecting and publishing a wider corpus of data. More details below.
Following the success of the Top500 in collecting and analyzing
historical trends in supercomputer technology and evolution, the IO500
[1] was created in 2017, published its first list at SC17, and has grown
exponentially since then. The need for such an initiative has long been
known within High-Performance Computing; however, defining appropriate
benchmarks had long been challenging. Despite this challenge, the
community, after long and spirited discussion, finally reached consensus
on a suite of benchmarks and a metric for resolving the scores into a
single ranking.
The multi-fold goals of the benchmark suite are as follows:
* Maximizing simplicity in running the benchmark suite
* Encouraging optimization and documentation of tuning parameters for
performance
* Allowing submitters to highlight their "hero run" performance
numbers
* Forcing submitters to simultaneously report performance for
challenging IO patterns.
Specifically, the benchmark suite includes a hero run of both IOR and
mdtest, configured however possible to maximize performance and establish
an upper bound for performance. It also includes an IOR and mdtest run
with highly constrained parameters, forcing a difficult usage pattern in
an attempt to determine a lower bound. Finally, it includes a namespace
search, as this has been determined to be a highly sought-after feature
in HPC storage systems that has historically not been well measured.
Submitters are encouraged to share their tuning insights for
publication.
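As a rough illustration, these components map onto the sections of the
benchmark's ini configuration file; the values below are placeholders
taken from a small test setup, not recommended settings:
# ior-easy: hero-run IOR, establishes the bandwidth upper bound
[ior-easy]
transferSize = 2m
blockSize = 102400m
# ior-hard: constrained IOR, probes the bandwidth lower bound
[ior-hard]
segmentCount = 400000
# mdtest-easy: hero-run mdtest, metadata upper bound
[mdtest-easy]
n = 500000
# mdtest-hard: constrained mdtest, metadata lower bound
[mdtest-hard]
n = 40000
# find: namespace search over the files created by the phases above
[find]
external-script = ./bin/pfind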
The goals of the community are also multi-fold:
* Gather historical data for the sake of analysis and to aid
predictions of storage futures
* Collect tuning data to share valuable performance optimizations
across the community
* Encourage vendors and designers to optimize for workloads beyond
"hero runs"
* Establish bounded expectations for users, procurers, and
administrators
10 NODE I/O CHALLENGE
The 10 Node Challenge is conducted using the regular IO500 benchmark,
but with the rule that exactly 10 client nodes must be used to run
the benchmark. You may use any shared storage with, e.g., any number of
servers. When submitting for the IO500 list, you can opt in to
"Participate in the 10 compute node challenge only", in which case we
will not include the results in the ranked list. Other 10-node submissions
will be included in the full list and in the ranked list. We will
announce the results in a separate derived list and in the full list but
not on the ranked IO500 list at https://io500.org/.
The information and rules for ISC20 submissions are available here:
https://www.vi4io.org/io500/rules/submission
Thanks,
The IO500 Committee
Links:
------
[1] http://io500.org/