
Visualize and interact with micro benchmarks in performance report #65

Merged
merged 35 commits on Sep 11, 2023

Conversation

boshek
Contributor

@boshek commented Sep 5, 2023

This PR implements an interface to work with the microbenchmarks, including the C++ and Java ones. Additionally, because there are so many, I've enabled the user to filter for conbench-detected regressions and improvements. I've followed the style of the other plots in this report, though these are built in JavaScript.

I have also added a link so users can go to conbench to directly view the compare page.

@assignUser I would like to request a review from Ed (https://github.com/alistaire47) but I'm not actually able to tag him here. Are you able to do that?

Attached is a zip file containing a rendered copy using the last release:
performance-release-report.html.zip. Reminder that this needs to be served over HTTP because of the JS libraries.

@boshek changed the title Visualize and interact with micro benchmarks [WIP] Visualize and interact with micro benchmarks Sep 6, 2023
@assignUser
Contributor

No, I can't, probably because he hasn't interacted with the repo, so we have to tag @alistaire47.

Contributor

@alistaire47 left a comment


Hey, nice work! I mostly reviewed the visual results rather than the code and tried to think like a dev who's not too familiar with Conbench. I think a few changes could help reduce confusion and help people find interesting results.

@boshek
Contributor Author

boshek commented Sep 8, 2023

@alistaire47 thanks for such a thorough review. Hugely beneficial for improving this. I've addressed a number of things, but I also have some follow-ups that I was hoping you could look at if possible.

@assignUser
Contributor

Great work and review everyone! It already looks super useful 🚀

Contributor

@alistaire47 left a comment


Looks good! A few follow-up thoughts. I do think allowing deselection in the explorer (with truncation) would be powerful if it's not too complicated.

```r
opt_table_font(font = google_font("Roboto Mono"))
```

Because of the large number of benchmarks, the top 20 benchmark results that deviate most from the baseline in both the positive and negative directions are presented below.
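The "top 20 in each direction" selection described above can be sketched as follows. This is an illustrative Python sketch only, not the report's actual implementation (the report itself is an R/Quarto document); the function name and the `(name, pct_change)` data shape are assumptions for the example.

```python
def top_deviations(results, n=20):
    """Given a list of (benchmark_name, pct_change_vs_baseline) tuples,
    return the n largest regressions (most negative change) and the
    n largest improvements (most positive change)."""
    # Rank all results by their deviation from the baseline.
    ranked = sorted(results, key=lambda r: r[1])
    regressions = ranked[:n]    # most negative change
    improvements = ranked[-n:]  # most positive change
    return regressions, improvements

# Toy example with n=2:
sample = [("filter", -12.0), ("take", 3.5), ("sort", 40.2), ("join", -0.4)]
regs, imps = top_deviations(sample, n=2)
```

In the real report the ranking is done over conbench comparison results, but the idea is the same: sort by deviation and keep only the extremes in each direction.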

I like the suite grouping; that does help. But the explorer doesn't really help find the most extreme results, because you can't deselect / select all on the suite and name filters. The data would get big if you did, but maybe it's worth it, just truncating the head/tail with a warning? That would definitely address my ask.

@boshek
Contributor Author

boshek commented Sep 11, 2023

@assignUser we are happy with this. Hopefully, upon the next release, we can elicit some feedback when the RC is voted upon. LMK if you want to review as well; otherwise I think this is good to merge.

Contributor

@assignUser left a comment


Looks great, thanks!

@assignUser changed the title [WIP] Visualize and interact with micro benchmarks Visualize and interact with micro benchmarks in performance report Sep 11, 2023
@assignUser merged commit 994f7b3 into ursacomputing:main Sep 11, 2023
1 check passed
assignUser pushed a commit that referenced this pull request Sep 15, 2023
After #65 I realized that it would be simple to add JavaScript
benchmarks (which we've not visualized before). Because those are run on
the same machines as the R and Python benchmarks, I've added a column
that distinguishes between the macro/micro benchmarks:


![image](https://github.com/ursacomputing/crossbow/assets/18472598/85253c94-9117-4097-8bf6-8371466dd906)

Otherwise, this PR is just some display tweaks to better show labels and
some updated text to better explain the microbenchmark explorer.