Benchmark abstraction
by Julian Kunkel
Dear all,
Based on our discussion during the BoF at SC, we could focus on the
access pattern(s) of interest first. Later we can define which
benchmarks (such as IOR) could implement these patterns (e.g., how to
call existing benchmarks).
This strategy gives other I/O paradigms the option to create a
benchmark with that pattern that fits their I/O paradigm/architecture.
Here is a draft of one that is probably not too difficult to discuss:
Goal: IOmax: Sustained performance for well-formed I/O
Rationales:
The benchmark shall determine the best sustained I/O performance,
excluding the effects of in-memory caching and I/O variability. A set
of highly optimized real applications should be able to exhibit the
described access behavior.
Use case: A large data structure is distributed across N
threads/processes; a time series of this data structure shall be
stored/retrieved efficiently. (This could be a checkpoint.)
Processing steps:
S0) Each thread allocates and initializes a large contiguous memory
region of size S with a random (but well-defined) pattern
S1) Repeat T times: Each process persists/reads its data to/from the
storage. Each iteration is protected with a global barrier and the
runtime is measured
S2) Compute the throughput (as IOmax) by dividing the total accessed
data volume (N*S) by the maximum observed runtime for any single
iteration in step S1
Rules:
R1) The data of each thread and timestep must be stored individually
and must not be overwritten during a benchmark run
R2) It must be ensured that the measured time includes all steps
needed to persist the data held in volatile memory (for writes), and
that no data is cached in any volatile memory prior to the start of
the reads
R3) A valid result must verify that read returns the expected (random) data
R4) N, T and S can be set arbitrarily. T must be >= 3. The benchmark
shall be repeated several times
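As an illustration of R3, the reader could regenerate the per-rank
pattern from the same seed that was used for writing and compare it
against the data read back from storage. This is only a sketch under
the assumptions of the write example above; the function name is
hypothetical.

/* Hypothetical check for R3: regenerate the expected per-rank pattern
 * (same seed as the writer) and compare it to the data read back. */
#include <stdlib.h>
#include <string.h>

/* Returns 1 if 'data' (length S) matches the pattern written by 'rank'. */
int verify_pattern(const char *data, size_t S, int rank) {
  char *expected = malloc(S);
  srand48(rank + 1);                  /* same per-rank seed as in the write sketch */
  for (size_t i = 0; i < S; i++)
    expected[i] = (char)lrand48();
  int ok = (memcmp(data, expected, S) == 0);
  free(expected);
  return ok;
}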
Reported metrics:
* IOmax
* Working set size W: N*T*S
Regards,
Julian