Release report maintenance and Microbenchmark UI change (#76)
This PR does a bit of maintenance on the release report including:
- bump the version of conbenchcoms
- properly facet macrobenchmarks by cleaning up labels

This report was tested locally by comparing this baseline
apache/arrow@2dcee3f
to this contender
apache/arrow@f60c281

This PR also re-enables changes previously reverted in #73.
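
A minimal sketch of that local check is below. It assumes the report reads the same `BASELINE_GIT_COMMIT`, `CONTENDER_GIT_COMMIT`, and `RC_LABEL` environment variables that the workflow's `env` block sets, and that the `quarto` R package is installed; the exact wiring in the report may differ, and the SHAs shown are simply the workflow's new defaults.

```r
# Hypothetical local render of the release report against a pinned
# baseline/contender pair; names mirror the workflow's env block.
Sys.setenv(
  BASELINE_GIT_COMMIT  = "2dcee3f82c6cf54b53a64729fd81840efa583244",
  CONTENDER_GIT_COMMIT = "b5d26f833c5dfa1494adecccbcc9181bd31e3787",
  RC_LABEL             = "manual"
)

# Render the Quarto report from the repository root.
quarto::quarto_render("performance-release-report/performance-release-report.qmd")
```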
boshek committed Jan 12, 2024
1 parent 2ddfe77 commit cc45293
Showing 4 changed files with 73 additions and 64 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/performance-release-report.yml
@@ -33,8 +33,8 @@ permissions:

env:
## payload vars
BASELINE_GIT_COMMIT: ${{ github.event.inputs.baseline_git_commit }}
CONTENDER_GIT_COMMIT: ${{ github.event.inputs.contender_git_commit }}
BASELINE_GIT_COMMIT: ${{ github.event.inputs.baseline_git_commit || '2dcee3f82c6cf54b53a64729fd81840efa583244' }}
CONTENDER_GIT_COMMIT: ${{ github.event.inputs.contender_git_commit || 'b5d26f833c5dfa1494adecccbcc9181bd31e3787' }}
RC_LABEL: ${{ github.event.inputs.rc_label || 'manual' }}

jobs:
2 changes: 1 addition & 1 deletion performance-release-report/R/functions.R
@@ -157,7 +157,7 @@ tidy_compare <- function(.x, .y) {
)
) %>%
mutate(
tags_used = list(tag_names),
tags_used = list(gsub("baseline.tags.", "", tag_names)), ## just need the bare tag names
language = .y$baseline.language,
benchmark_name = .y$baseline.benchmark_name
)
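
The effect of the new `gsub()` call is easiest to see on a couple of invented tag names; the `baseline.tags.` prefix comes from the diff above, but the tag values below are illustrative, not taken from a real payload.

```r
# Illustrative only: strip the "baseline.tags." prefix so the bare tag
# names can be used for faceting.
tag_names <- c("baseline.tags.dataset", "baseline.tags.compression")

gsub("baseline.tags.", "", tag_names)
# -> "dataset" "compression"
```

Note that the pattern is treated as a regular expression, so the dots match any character; adding `fixed = TRUE` would force a literal match, though for prefixes of this shape the result is the same.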
121 changes: 65 additions & 56 deletions performance-release-report/performance-release-report.qmd
@@ -204,7 +204,7 @@ if (Sys.Date() %in% unique(as.Date(hardware_summary$commit.timestamp))) {
```{r function-plots}
#| include: false
change_cols <- viridisLite::mako(2, begin = 0.5, end = 0.75)
change_cols <- viridisLite::mako(3, begin = 0.5, end = 0.75)
names(change_cols) <- c(glue("{contender_git_commit_short} (contender) faster"), glue("{baseline_git_commit_short} (baseline) faster"))
```

@@ -419,7 +419,7 @@ top_zscore_table(micro_bm_proced, direction = "improvement")

## z-score distribution

Plotting the distribution of zscores for all microbenchmark results will help identify any systematic differences between the baseline and contender. The shape of the distribution of z-scores provides a sense of the overall performance of the contender relative to the baseline. Narrow distirbutions centered around 0 indicate that the contender is performing similarly to the baseline. Wider distributions indicate that the contender is performing differently than the baseline with left skewing indicating regressions and right skewing indicating improvements.
Plotting the distribution of zscores for all microbenchmark results will help identify any systematic differences between the baseline and contender. The shape of the distribution of z-scores provides a sense of the overall performance of the contender relative to the baseline. Narrow distributions centered around 0 indicate that the contender is performing similarly to the baseline. Wider distributions indicate that the contender is performing differently than the baseline with left skewing indicating regressions and right skewing indicating improvements.
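
As a rough illustration of the statistic behind that histogram (not necessarily Conbench's exact formula), a z-score compares one contender result against the spread of baseline results for the same benchmark case; the values below are invented.

```r
# Toy sketch: z-score of a single contender timing against a history of
# baseline timings for the same case (all values invented).
baseline_history <- c(10.2, 9.8, 10.1, 10.0, 9.9)  # e.g. seconds per iteration
contender_result <- 9.2

(contender_result - mean(baseline_history)) / sd(baseline_history)
# -> about -5; for a "lower is better" unit this is a speed-up, so under a
#    sign convention where improvements are positive it would be reported as
#    roughly +5 and land in the right tail of the distribution.
```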


```{ojs}
@@ -453,7 +453,7 @@ microBmProced = aq.from(transpose(ojs_micro_bm_proced))

## Microbenchmark explorer {#micro-bm-explorer}

This microbenchmarks explorer allows you to filter the microbenchmark results by language, suite, and benchmark name and toggle regressions and improvements based on a threshold level of `r threshold` z-scores. Languages, suite and benchmark name default to showing all results for that category. Additional benchmark parameters are displayed on the vertical axis resulting in each bar representing a case permutation. If a becnhmark does not have additional parameters, the full case permutation string is displayted. The display can be further filtered by selecting a specific language, suite, or benchmark name. Each bar can be clicked to open the Conbench UI page for that benchmark providing additional history and metadata for that case permutation.
This microbenchmarks explorer allows you to filter the microbenchmark results by language, suite, and benchmark name and toggle regressions and improvements based on a threshold level of `r threshold` z-scores. Languages, suite and benchmark name need to be selected to show a benchmark plot. Additional benchmark parameters are displayed on the vertical axis resulting in each bar representing a case permutation. If a benchmark does not have additional parameters, the full case permutation string is displayed. Each bar can be clicked to open the Conbench UI page for that benchmark providing additional history and metadata for that case permutation.

```{ojs filter-micro-bm}
// Top level: are there regressions/improvements?
@@ -485,7 +485,7 @@ microBmProcedChanges = {
}
// Choose the language
allLanguageValues = ["All languages"].concat(microBmProcedChanges.dedupe('language').array('language'))
allLanguageValues = [null].concat(microBmProcedChanges.dedupe('language').array('language'))
viewof languageSelected = Inputs.select(allLanguageValues, {
label: md`**Language**`,
@@ -494,13 +494,13 @@ viewof languageSelected = Inputs.select(allLanguageValues, {
})
languages = {
return (languageSelected === "All languages")
return (languageSelected === null)
? microBmProcedChanges // If languageSelected is "All languages", no filtering is applied
: microBmProcedChanges.filter(aq.escape(d => op.includes(d.language, languageSelected)));
}
allSuiteValues = ["All suites"].concat(languages.dedupe('suite').array('suite'))
allSuiteValues = [null].concat(languages.dedupe('suite').array('suite'))
// Choose the suite
viewof suiteSelected = Inputs.select(allSuiteValues, {
@@ -511,12 +511,12 @@ viewof suiteSelected = Inputs.select(allSuiteValues, {
suites = {
return (suiteSelected === "All suites")
return (suiteSelected === null)
? languages
: languages.filter(aq.escape(d => op.includes(d.suite, suiteSelected)));
}
allNameValues = ["All benchmarks"].concat(suites.dedupe('name').array('name'))
allNameValues = [null].concat(suites.dedupe('name').array('name'))
// Choose the benchmark
viewof nameSelected = Inputs.select(allNameValues, {
@@ -526,7 +526,7 @@ viewof nameSelected = Inputs.select(allNameValues, {
})
microBmProcedChangesFiltered = {
return (nameSelected === "All benchmarks")
return (nameSelected === null)
? suites
: suites.filter(aq.escape(d => op.includes(d.name, nameSelected)));
}
@@ -548,54 +548,63 @@ margins = {
return margin;
}
Plot.plot({
width: 1200,
height: (microBmProcedChangesFiltered.numRows()*30)+100, //adjust height of plot based on number of rows
marginRight: margins[0],
marginLeft: margins[1],
label: null,
x: {
axis: "top",
label: "% change",
labelAnchor: "center",
labelOffset: 30
},
style: {
fontSize: "14px",
fontFamily: "Roboto Mono"
},
color: {
range: ojs_change_cols,
domain: ojs_pn_lab,
type: "categorical",
legend: true
},
marks: [
Plot.barX(microBmProcedChangesFiltered, {
y: "params",
x: "change",
color: "black",
fill: "pn_lab",
fillOpacity: 0.75,
sort: {y: "x"},
channels: {difference: "difference", params: "params"},
href: "cb_url",
tip: true
}),
Plot.gridX({stroke: "white", strokeOpacity: 0.5}),
Plot.ruleX([0]),
d3
.groups(microBmProcedChangesFiltered, (d) => d.change > 0)
.map(([posneg, dat]) => [
Plot.axisY({
x: 0,
ticks: dat.map((d) => d.params),
tickSize: 0,
anchor: posneg ? "left" : "right"
displayPlot = nameSelected !== null && suiteSelected !== null && languageSelected !== null
// Only display plots if a benchmark is selected
mbPlot = {
if (displayPlot) {
return Plot.plot({
width: 1200,
height: microBmProcedChangesFiltered.numRows() * 30 + 100, //adjust height of plot based on number of rows
marginRight: margins[0],
marginLeft: margins[1],
label: null,
x: {
axis: "top",
label: "% change",
labelAnchor: "center",
labelOffset: 30
},
style: {
fontSize: "14px",
fontFamily: "Roboto Mono"
},
color: {
range: ojs_change_cols,
domain: ojs_pn_lab,
type: "categorical",
legend: true
},
marks: [
Plot.barX(microBmProcedChangesFiltered, {
y: "params",
x: "change",
color: "black",
fill: "pn_lab",
fillOpacity: 0.75,
sort: { y: "x" },
channels: { difference: "difference", params: "params" },
href: "cb_url",
tip: true
}),
])
]
})
Plot.gridX({ stroke: "white", strokeOpacity: 0.5 }),
Plot.ruleX([0]),
d3
.groups(microBmProcedChangesFiltered, (d) => d.change > 0)
.map(([posneg, dat]) => [
Plot.axisY({
x: 0,
ticks: dat.map((d) => d.params),
tickSize: 0,
anchor: posneg ? "left" : "right"
})
])
]
});
} else {
return md`**Language, suite and benchmark all need a selection for a plot to be displayed.**`;
}
}
```


10 changes: 5 additions & 5 deletions performance-release-report/renv.lock
@@ -187,22 +187,22 @@
},
"conbenchcoms": {
"Package": "conbenchcoms",
"Version": "0.0.9",
"Version": "0.0.10",
"Source": "GitHub",
"RemoteType": "github",
"RemoteHost": "api.github.com",
"RemoteUsername": "conbench",
"RemoteRepo": "conbenchcoms",
"RemoteRef": "main",
"RemoteSha": "84c2f70545c60ac7e0609bb7520a81ff2e722c89",
"RemoteUsername": "conbench",
"RemoteRef": "HEAD",
"RemoteSha": "55cdb120bbe2c668d3cf8ae543f4922131653645",
"Requirements": [
"R",
"dplyr",
"glue",
"httr2",
"yaml"
],
"Hash": "8449699cad98178fb87fc8baaa9824d6"
"Hash": "83a34157d58e22c20bb06738cc363693"
},
"cpp11": {
"Package": "cpp11",
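
For reference, a lockfile bump like the one above can typically be reproduced by reinstalling the GitHub package inside the active `renv` project and snapshotting; the exact commands used for this PR aren't recorded, so treat this as a sketch.

```r
# Hypothetical reproduction of the conbenchcoms bump in renv.lock:
# reinstall from GitHub (default branch, i.e. RemoteRef "HEAD"), then
# record the new Version/RemoteSha/Hash in the lockfile.
renv::install("conbench/conbenchcoms")
renv::snapshot()
```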
