
Autodetected benchmark parameters should be reported in output #203

Open · ceresek opened this issue Sep 20, 2019 · 5 comments
Labels: enhancement (New feature or request)
ceresek (Collaborator) commented Sep 20, 2019

When a benchmark autodetects a parameter (such as thread_count=$cpu.count), the actual value used should be reported in the benchmark output.

axel22 added the enhancement label Sep 20, 2019
lbulej (Member) commented Sep 20, 2019

Do you mean the textual output, or the JSON output file?

I was already thinking about making it possible for the writer to dump the actual configuration of a benchmark. The reason I haven't done it yet is that it requires changing the plugin API to provide access to this information.

It would be easy to just expose the BenchmarkInfo object in the event notification, but to make it smooth, I would probably want to move the Plugin and the ExecutionPolicy classes into the harness package, so that the APIs for benchmark writers and plugin writers live in separate packages.

I'll try to put this into the next API update -- reasons like this are exactly what I was waiting for to do something :-)

@lbulej lbulej self-assigned this Sep 20, 2019
axel22 (Member) commented Sep 20, 2019

Perhaps we could additionally add a -v flag to enable verbose output that shows information such as this on stdout.

ceresek (Collaborator, Author) commented Sep 20, 2019

For me the JSON output makes the most sense, because it documents how the measurements were obtained. But I can imagine other use cases where one would want to see the values of the options on the console without executing the workload.
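A hypothetical sketch of how resolved parameter values could appear in the JSON output file. The key names (`configuration`, `value`, `resolved_from`) are illustrative assumptions for this discussion, not an existing schema of the suite:

```json
{
  "benchmark": "example-benchmark",
  "configuration": {
    "thread_count": {
      "value": 16,
      "resolved_from": "$cpu.count"
    }
  }
}
```

Recording both the original expression and the resolved value would let a reader of the results file see that the parameter was autodetected rather than set explicitly.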

axel22 (Member) commented Sep 20, 2019

JSON sounds good to me then.

farquet (Collaborator) commented Sep 23, 2019

Adding it to JSON is good.
But I would also add it to stdout by default at the very beginning of the execution. It is very little output and makes it easy to spot differences between runs by grepping through their stdout.

Other suites do that (here, DaCapo):

The derived number of threads (72) is outside the range [1,64]; rescaling to match thread limit.
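A minimal sketch of what printing resolved parameters at startup could look like. The class and the parameter map are hypothetical helpers for illustration, not the harness's real API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ParamReport {
    // Format resolved parameters as one grep-friendly line each,
    // e.g. "Parameter thread_count autodetected as 16".
    public static String format(Map<String, String> resolved) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : resolved.entrySet()) {
            sb.append("Parameter ").append(e.getKey())
              .append(" autodetected as ").append(e.getValue()).append("\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> resolved = new LinkedHashMap<>();
        // e.g. thread_count=$cpu.count resolved at startup:
        resolved.put("thread_count",
            String.valueOf(Runtime.getRuntime().availableProcessors()));
        System.out.print(format(resolved));
    }
}
```

One line per parameter at the start of a run keeps the overhead negligible while making runs comparable with a simple grep.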
