Hi Mark,
try this:
API = CEPHFS --cephfs.user=admin --cephfs.conf=/etc/ceph/ceph.conf
--cephfs.prefix=/tmp/cbt/mnt/cbt-cephfs-kernel/0
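Note the key=value form for the plugin options rather than the
space-separated form. For reference, a minimal sketch of where that line
sits in the ini file the C application reads (section name assumed here;
the option string itself is unchanged):

[global]
# plugin options ride along with the API value, key=value style
API = CEPHFS --cephfs.user=admin --cephfs.conf=/etc/ceph/ceph.conf --cephfs.prefix=/tmp/cbt/mnt/cbt-cephfs-kernel/0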
Please move follow-up issues (if any remain) to a GitHub issue to
minimize traffic on the mailing list.
Best,
Julian
On Thu, May 28, 2020 at 5:11 PM Mark Nelson <mnelson(a)redhat.com> wrote:
Hi Julian,
Thanks much! Sadly it didn't work for me. Here's the API string I'm
providing that works with the script version:
API = CEPHFS --cephfs.user admin --cephfs.conf /etc/ceph/ceph.conf
--cephfs.prefix /tmp/cbt/mnt/cbt-cephfs-kernel/0
and with f6a02956e I'm repeatedly failing the strncasecmp check on line
150 resulting in the FATAL call on line 153:
FATAL (src/util.c:153) Provided API option admin appears to be no API
supported version
Looks like it's trying to read the parameter value as a new parameter
itself.
Mark
On 5/28/20 10:39 AM, Julian Kunkel wrote:
> Hi Mark,
> feeding in the API options this way seems very reasonable.
>
> Can you test the latest version of the IO500 application at hash
> f6a02957e5fca68049cf253bc1bb4a146328a7bb?
> That should work for your use case.
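>
> In case it helps, roughly how I'd fetch and build that hash (repository
> URL and prepare script assumed from the current io500 repo; adjust if
> your checkout lives elsewhere):
>
> git clone https://github.com/IO500/io500.git
> cd io500
> git checkout f6a02957e5fca68049cf253bc1bb4a146328a7bb
> ./prepare.sh   # fetches and builds ior/mdtest/pfind plus the io500 binary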
>
> Best,
> Julian
>
> On Thu, May 28, 2020 at 3:16 PM Mark Nelson via IO-500 <io-500(a)vi4io.org> wrote:
>> Hi John,
>>
>>
>> Thank you for the quick reply! I already have a follow-up question. :)
>>
>>
>> Previously we could directly edit the params to add custom API options.
>> For example, the DAOS run from the SC19 list does:
>>
>> io500_mdtest_easy_params="-a DFS --dfs.cont $DAOS_CONT --dfs.svcl
>> $DAOS_SVCL --dfs.pool $DAOS_POOL -u -L --dfs.oclass S1 --dfs.prefix
>> $DAOS_FUSE"
>>
>>
>> I can get the script version of the new io500 benchmark to work by
>> hijacking the API field to be something like:
>>
>> API = <api> --api.param1 foo --api.param2 bar
>>
>>
>> That doesn't work for the C version of io500 though. Is there a correct
>> way for me to provide extra API options?
>>
>>
>> Thanks,
>>
>> Mark
>>
>>
>> On 5/27/20 6:14 PM, John Bent wrote:
>>> Hey Mark,
>>>
>>> Thanks for the interest. It will be great to get your contributions!
>>>
>>> 1. Must be exactly 300 seconds.
>>> 2. Does not include the directories. Other historical submissions
>>> have tuned the directories exactly as you describe.
>>> 3. Yes, 10+ metal nodes in AWS satisfies this requirement.
>>>
>>> Other committee members, and community members, please chime in if I
>>> got anything wrong! Mark, you might note the disclaimer below my
>>> signature which is just our committee's way of being careful. I'll
>>> make sure to discuss this email with the rest of the committee and
>>> will let you know if any of my answers need official clarification.
>>>
>>> Thanks,
>>>
>>> John(*)
>>>
>>> * These statements merely reflect my own personal view; the only
>>> mechanism for announcing official IO500 policies and decisions is the
>>> committee(a)io500.org email address.
>>>
>>>
>>> On Wed, May 27, 2020 at 4:44 PM Mark Nelson via IO-500
>>> <io-500(a)vi4io.org> wrote:
>>>
>>> Hi Folks,
>>>
>>>
>>> We are thinking about throwing together some cephfs io500 results for
>>> ISC20 and I just wanted to make sure that we are doing the right
>>> thing
>>> in a couple of cases. Any help would be much appreciated since we've
>>> never submitted results before. We might have a couple of additional
>>> questions later on, but for now:
>>>
>>>
>>> 1) "All create/write phases must run for at least 300 seconds;
the
>>> stonewall flag must be set to 300 which should ensure this."
>>>
>>> Is it acceptable to set the stonewall higher than 300, or is a setting
>>> of exactly 300 required?
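>>>
>>> For reference, the setting we'd be changing (sketch only; the variable
>>> name is from the script-version config we've been using, and as I
>>> understand it the value just flows through to ior's -D / mdtest's -W
>>> stonewall flags):
>>>
>>> # would setting this to e.g. 400 still produce a valid submission?
>>> io500_stonewall_timer=300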
>>>
>>>
>>> 2) "The file names for the mdtest output files may not be
>>> pre-created."
>>>
>>> Does this also include the directories? We have the ability to pin
>>> directories to specific MDSes, which helps in the easy tests. We also
>>> have an experimental feature that does this more or less
>>> pseudo-randomly behind the scenes so long as a top-level xattr is set,
>>> but it would be convenient if we could just pre-create the mdtest
>>> directories and set the xattr to pin them individually in the
>>> "directory setup" phase of the test, if allowed. Likewise, we have
>>> code that allows users to provide a hint that a specific directory is
>>> expected to have lots of files, which can improve performance in the
>>> hard tests. I would like to pre-create the mdtest directory so that we
>>> can set the xattr informing Ceph that we expect a lot of files to be
>>> written in that directory.
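>>>
>>> To make that concrete, the sort of thing we'd like to run in a
>>> "directory setup" phase, if allowed (paths are illustrative;
>>> ceph.dir.pin and ceph.dir.pin.distributed are the CephFS pinning
>>> xattrs, and I've left the "lots of files" hint out since that's our
>>> own experimental knob):
>>>
>>> # pre-create the mdtest directories on the client mount
>>> mkdir -p /mnt/cephfs/mdtest-easy /mnt/cephfs/mdtest-hard
>>> # pin the easy-test directory to a specific MDS rank
>>> setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/mdtest-easy
>>> # or set the top-level xattr for the experimental pseudo-random
>>> # distribution of child directories across MDS ranks
>>> setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/mdtest-easy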
>>>
>>>
>>> 3) "Only submissions using at least 10 physical client nodes are
>>> eligible to win IO500 awards and at least one benchmark process
>>> must run
>>> on each."
>>>
>>> We are planning on running on AWS. So long as we are using 10+ metal
>>> nodes, does that meet the requirement to have "at least 10 physical
>>> client nodes"?
>>>
>>>
>>> Thanks,
>>>
>>> Mark
>>>
--
Dr. Julian Kunkel
Lecturer, Department of Computer Science
+44 (0) 118 378 8218