Visualize and interact with micro benchmarks in performance report #65
Conversation
No, I can't, probably because he hasn't interacted with the repo, so we have to tag @alistaire47
Hey, nice work! I mostly reviewed the visual results instead of the code and tried to think like a dev who's not too familiar with Conbench and such. I think a few changes could help reduce confusion and hopefully help people find interesting things.
Co-authored-by: Edward Visel <1693477+alistaire47@users.noreply.github.com>
@alistaire47 thanks for such a thorough review. Hugely beneficial to improving this. I've addressed a number of things, but I also have some follow-ups that I was hoping you could look at if possible.
Great work and review everyone! It already looks super useful 🚀
Looks good! A few follow-up thoughts. I do think allowing deselection in the explorer (with truncation) would be powerful if it's not too complicated
opt_table_font(font = google_font("Roboto Mono"))
Because of the large number of benchmarks, the top 20 benchmark results that deviate most from the baseline in both the positive and negative directions are presented below.
I like the suite grouping, that does help. But the explorer doesn't really help find the most extreme results because you can't deselect / select all on the suite and name filters. The data would get big if you did, but maybe it's worth it to just truncate the head/tail with a warning? That would definitely address my ask.
@assignUser we are happy with this. Hopefully upon the next release, we can elicit some feedback when the RC is voted upon. LMK if you want to review as well. Otherwise I think this is good to merge.
Looks great, thanks!
After #65 I realized that it would be simple to add JavaScript benchmarks (which we've not visualized before). Because those are run on the same machines as the R and Python benchmarks, I've added a column that distinguishes between the macro/micro benchmarks: ![image](https://github.com/ursacomputing/crossbow/assets/18472598/85253c94-9117-4097-8bf6-8371466dd906) Otherwise this PR is just some display tweaks to better show labels and some text updates to better explain the microbenchmark explorer.
This PR implements an interface to work with the microbenchmarks, including the C++ and Java ones. Additionally, because there are so many, I've enabled users to filter for conbench-detected regressions and improvements. I've followed the style of the other plots in this report, though these ones are built in JavaScript.
I have also added a link so users can go directly to the conbench compare page.
@assignUser I would like to request a review from Ed (https://github.com/alistaire47) but I'm not actually able to tag him here. Are you able to do that?
Attached is a zip file containing a rendered copy using the last release:
performance-release-report.html.zip. Reminder that this needs to be served over HTTP because of the JS libraries.
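For anyone checking out the attached report, one way to serve it over HTTP is Python's built-in server (a minimal sketch; the filename inside the zip and the port number are assumptions on my part):

```shell
# Unzip the attached report, then serve the directory over HTTP so the
# JavaScript libraries load correctly (opening the file via file:// won't work).
unzip performance-release-report.html.zip
python3 -m http.server 8000
# Then browse to http://localhost:8000/ and open the report's HTML file.
```

Any static file server works equally well here; the key point is just that the page is fetched over `http://` rather than opened from disk.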