Hi all,
After testing with three of my systems (Weka (native client), VAST (NFS), and an unnamed
scale-out NAS system (NFS)), I can confirm that the -N <numtasks> flag both
works with stonewall and completely nerfs mdtest read and stat performance on those types of systems.
I am extremely curious what the effect is on GPFS and Lustre, if anyone has the opportunity
to test those with the -N <numtasks> setting.
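For reference, a minimal sketch of the kind of invocation being tested here (task counts, item counts, paths, and the stonewall duration are all placeholders; -N is mdtest's rank-stride option and -W its stonewall timer):

```shell
# Illustrative only: counts and paths are placeholders, not a recommended config.
# -N 8  : offset each task by 8 ranks for the read/stat phases, so a client
#         stats/reads entries created by a different task (ideally a different
#         node) and cannot serve them from its local cache.
# -W 300: stonewall timer in seconds -- the combination under test here.
mpirun -np 64 mdtest -n 10000 -u -N 8 -W 300 -d /mnt/testfs/mdtest
```

The stride should generally be a multiple of the tasks-per-node count so the shifted partner lands on a different client.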
--Ken
On Jul 15, 2019, at 11:33 AM, John Bent <johnbent@gmail.com> wrote:
That’s excellent, Ken, and very much appreciated! Now that Ken has established himself as a
consumer, do we have a producer? :)
Thanks
GJJJ
On Mon, Jul 15, 2019 at 8:31 PM Carlile, Ken <carlilek@janelia.hhmi.org> wrote:
Hi all,
I have no expertise with coding (much less with C!), but I am happy to test any code you
throw at me and I generally have the resources to do so at small to medium scale on a few
different storage system types (although not gpfs or lustre).
--Ken
On Jul 15, 2019, at 10:37 AM, John Bent via IO-500 <io-500@vi4io.org> wrote:
Hello community,
We are trying to fix a problem in IO500 where certain results have been able to use the
client-side cache for the mdtest stat phase. This violates the intent of IO500,
which is to measure non-cached behavior. To fix this, we need to add a shift flag to
mdtest, as we have already done for IOR. Unfortunately, this code path is fragile and
doesn't work with the stonewall flag. We are hoping to fix this by SC19, and to require
that all submissions do the shift for both mdtest and IOR and that all submissions use a
fixed stonewall value of 300 seconds for all phases.
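For comparison, the shift already applied on the IOR side looks roughly like this (a sketch, not the official IO500 script; transfer/block sizes, task counts, and paths are placeholders):

```shell
# Illustrative only: sizes, counts, and paths are placeholders.
# -C     : reorders (shifts) tasks between the write and read phases so no
#          client reads back the data it wrote itself, defeating its cache.
# -D 300 : deadline-for-stonewalling, here the proposed fixed 300 s value.
mpirun -np 64 ior -w -r -C -D 300 -t 1m -b 8g -o /mnt/testfs/ior.dat
```

The goal of the mdtest work is to get the equivalent of -C working reliably alongside a stonewall timer for the create/stat/read metadata phases.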
So, is there anyone willing to help with this? Nathan Hjelm was the last volunteer
(besides Julian) to make a significant contribution to the code base, and it was greatly
appreciated. We will be very grateful. If there are any students who are willing to help
with this, we would be very happy to provide them with references or similar tokens
of gratitude!
Thanks very much and we hopefully await your enthusiastic responses!
IO500 committee (George, Jay, John, Julian)
_______________________________________________
IO-500 mailing list
IO-500@vi4io.org
https://urldefense.proofpoint.com/v2/url?u=https-3A__www.vi4io.org_mailma...