
Benchmark runner script #918

Merged
merged 9 commits into NVIDIA:branch-0.3 from andygrove:benchmark-automation-script on Oct 16, 2020

Conversation

@andygrove (Contributor) commented Oct 8, 2020

Signed-off-by: Andy Grove <andygrove@nvidia.com>

This PR adds a Python script for running a series of TPC-* benchmark queries with one or more configurations. See the documentation in the PR for more details.

The approach is purposely simplistic, and we may want to make it smarter in the future, but it allows us to leave benchmarks running unattended with minimal effort.

I will need to update the benchmark guide (in a separate PR) to cover this new utility.
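
For illustration, an invocation of the new script might look roughly like this. This is a sketch assembled only from the option names that appear in the documentation diffs reviewed below; the script filename (benchmark_runner.py) and the paths are placeholders, not details confirmed by this PR:

python benchmark_runner.py \
  --template /path/to/spark-submit-template.txt \
  --input-format parquet \
  --output /path/to/output \
  --output-format parquet \
  --configs cpu gpu-ucx-on \
  --query q4 q5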

@andygrove added the benchmark (Benchmarking, benchmarking tools) label Oct 8, 2020
@andygrove added this to the Sep 28 - Oct 9 milestone Oct 8, 2020
@andygrove self-assigned this Oct 8, 2020
@andygrove changed the title from "[WIP] Benchmark runner script" to "Benchmark runner script" Oct 8, 2020
@abellina (Collaborator) left a comment

Had a couple of questions mostly, but otherwise this script looks good to me.

Main question: should we include spark-submit-template.txt?

@abellina previously approved these changes Oct 9, 2020
@andygrove (author): build

@andygrove (author): build

1 similar comment

@andygrove (author): build

--input-format parquet \
--output /path/to/output \
--output-format parquet \
--configs cpu gpu-ucx-on
Collaborator: missing \ at end?

Contributor (author): Thanks. Fixed.

This benchmark script assumes that the following environment variables have been set for
the location of the relevant JAR files to be used:

- SPARK_RAPIDS_PLUGIN_JAR
Collaborator: Any reason we use env variables vs parameters to the script? For script purposes parameters would be easier; I think it's also more obvious to the user, and they don't accidentally get something unexpected.

Contributor (author): My reasoning was that we tell users to set up these environment variables in the getting started guide, and I have been using this approach as a bit of a stop-gap solution for the reporting tools to show the plugin and cuDF versions that were used to run the benchmarks. This isn't ideal, and it would be better to use the cuDF and plugin APIs to get the version numbers instead. I haven't looked into whether that is possible. I'll give this some more thought.
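
As a side note on the environment-variable approach, here is a minimal Python sketch of how a runner script might pick up these variables. It is illustrative only: SPARK_RAPIDS_PLUGIN_JAR is the variable named in the documentation diff above, while CUDF_JAR and the error handling are assumptions, not the actual behavior of the script in this PR.

import os
import sys

def require_env(name):
    # Fail fast with a clear message if a required environment variable is missing,
    # rather than letting spark-submit fail later with a confusing error.
    value = os.environ.get(name)
    if not value:
        sys.exit(f"error: environment variable {name} must point to the relevant JAR")
    return value

# SPARK_RAPIDS_PLUGIN_JAR is named in the documentation diff above;
# CUDF_JAR is an assumed second variable, shown here only for illustration.
plugin_jar = require_env("SPARK_RAPIDS_PLUGIN_JAR")
cudf_jar = require_env("CUDF_JAR")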

--query q4 q5

In this example, configuration key-value pairs will be loaded from cpu.properties and
gpu-ucx-on.properties and appended to a spark-submit-template.txt to build the spark-submit
Collaborator: Where does spark-submit-template.txt come from? ./? What if I have multiple of these and want to switch between them? What if I run from a different directory?

Contributor (author): I've made the template file configurable now.
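
To make the template-plus-properties mechanism concrete, here is a rough Python sketch of the assembly step described above. It is a hypothetical illustration of the approach, not the PR's actual code; the helper names and the .properties parsing are assumptions.

def load_properties(path):
    """Read key=value pairs from a .properties file, skipping blanks and comments."""
    props = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
    return props

def build_spark_submit(template_path, config_name):
    # Start from the user-supplied spark-submit template (now passed via --template)
    # and append one --conf per key-value pair from <config_name>.properties,
    # e.g. cpu.properties or gpu-ucx-on.properties.
    with open(template_path) as f:
        cmd = f.read().strip()
    for key, value in load_properties(f"{config_name}.properties").items():
        cmd += f" \\\n  --conf {key}={value}"
    return cmd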

--output /path/to/output \
--output-format parquet \
--configs cpu gpu-ucx-on \
--query q4 q5
Collaborator: add the --template option

@andygrove (author): build

1 similar comment

@andygrove (author): build

@andygrove andygrove merged commit f3bb506 into NVIDIA:branch-0.3 Oct 16, 2020
@andygrove andygrove deleted the benchmark-automation-script branch October 16, 2020 00:30
tgravescs pushed a commit to tgravescs/spark-rapids that referenced this pull request Oct 20, 2020
* Benchmark runner script

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Add argument for number of iterations

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Fix docs

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* add license

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* improve documentation for the configuration files

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Add missing line-continuation symbol in example

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Remove hard-coded spark-submit-template.txt and add --template argument. Also make all arguments required.

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Update benchmarking guide to link to the benchmark python script

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Add --template to example and fix markdown header

Signed-off-by: Andy Grove <andygrove@nvidia.com>
tgravescs added a commit that referenced this pull request Oct 21, 2020
* Add some more checks to databricks build scripts

Signed-off-by: Thomas Graves <tgraves@nvidia.com>

* remove extra newline

* use the right -gt for bash

* Add new python file for databricks cluster utils

* Fix up scripts

* databricks scripts working

Signed-off-by: Thomas Graves <tgraves@nvidia.com>

* Pass in sshkey

Signed-off-by: Thomas Graves <tgraves@nvidia.com>

* cluster creation script mods

* fix

* fix pub key

* fix missing quote

* fix $

* update public key to be param

Signed-off-by: Thomas Graves <tgraves@nvidia.com>

* Add public key value

* clenaup

Signed-off-by: Thomas Graves <tgraves@nvidia.com>

* modify permissions

Signed-off-by: Thomas Graves <tgraves@nvidia.com>

* change loc cluster id file

* fix extra /

* quote public key

* try different setting cluster id

* debug

* try again

* try readfile

* try again

* try quotes

* cleanup

* Add option to control number of partitions when converting from CSV to Parquet (#915)

* Add command-line arguments for applying coalesce and repartition on a per-table basis

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Move command-line validation logic and address other feedback

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Update copyright years and fix import order

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Update docs/benchmarks.md

Co-authored-by: Jason Lowe <jlowe@nvidia.com>

* Remove withPartitioning option from TPC-H and TPC-xBB file conversion

Signed-off-by: Andy Grove <andygrove@nvidia.com>

Co-authored-by: Jason Lowe <jlowe@nvidia.com>

* Benchmark runner script (#918)

* Benchmark runner script

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Add argument for number of iterations

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Fix docs

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* add license

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* improve documentation for the configuration files

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Add missing line-continuation symbol in example

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Remove hard-coded spark-submit-template.txt and add --template argument. Also make all arguments required.

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Update benchmarking guide to link to the benchmark python script

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Add --template to example and fix markdown header

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Add legacy config to clear active Spark 3.1.0 session in tests (#970)

Signed-off-by: Jason Lowe <jlowe@nvidia.com>

* XFail tests until final fix can be put in (#968)

Signed-off-by: Robert (Bobby) Evans <bobby@apache.org>

* Stop reporting totalTime metric for GpuShuffleExchangeExec (#973)

Signed-off-by: Andy Grove <andygrove@nvidia.com>

* Add some more checks to databricks build scripts

Signed-off-by: Thomas Graves <tgraves@nvidia.com>

* Pass in sshkey

* Add create script, add more parameters, etc

Signed-off-by: Thomas Graves <tgraves@nvidia.com>

* add create script

* rework some scripts

Signed-off-by: Thomas Graves <tgraves@nvidia.com>

* fix is_cluster_running

Signed-off-by: Thomas Graves <tgraves@nvidia.com>

* put slack back in

* update text

* cleanup

Signed-off-by: Thomas Graves <tgraves@nvidia.com>

* remove datetime

* send output to stderr

Signed-off-by: Thomas Graves <tgraves@nvidia.com>

Co-authored-by: Andy Grove <andygrove@users.noreply.github.com>
Co-authored-by: Jason Lowe <jlowe@nvidia.com>
Co-authored-by: Robert (Bobby) Evans <bobby@apache.org>
sperlingxx pushed a commit to sperlingxx/spark-rapids that referenced this pull request Nov 20, 2020
* Benchmark runner script
sperlingxx pushed a commit to sperlingxx/spark-rapids that referenced this pull request Nov 20, 2020
* Add some more checks to databricks build scripts
nartal1 pushed a commit to nartal1/spark-rapids that referenced this pull request Jun 9, 2021
* Benchmark runner script
nartal1 pushed a commit to nartal1/spark-rapids that referenced this pull request Jun 9, 2021
* Add some more checks to databricks build scripts
nartal1 pushed a commit to nartal1/spark-rapids that referenced this pull request Jun 9, 2021
* Benchmark runner script
nartal1 pushed a commit to nartal1/spark-rapids that referenced this pull request Jun 9, 2021
* Add some more checks to databricks build scripts
tgravescs pushed a commit to tgravescs/spark-rapids that referenced this pull request Nov 30, 2023
…IDIA#918)

Signed-off-by: spark-rapids automation <70000568+nvauto@users.noreply.github.com>
Labels: benchmark (Benchmarking, benchmarking tools)
Projects: none yet
4 participants