
Conform "bench_isin" to match generator column names #11549

Merged: 3 commits merged into rapidsai:branch-22.12 from adjust_bench_isin on Oct 11, 2022

Conversation

@GregoryKimball (Contributor) commented Aug 17, 2022

Description

The version of bench_isin merged in #11125 used key and column names of the form f"key{i}" rather than the form f"{string.ascii_lowercase[i]}" used by the dataframe generator. As a result, the isin benchmark with a dictionary argument short-circuits with no matching keys, and the isin benchmark with a dataframe argument finds no matches.

This PR also adjusts the isin arguments from range(1000) to range(50) to better match the input dataframe's cardinality of 100. With range(1000) every element matches, whereas with range(50) only 50% of the elements match.
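For illustration, a minimal sketch of the naming mismatch described above; the lists below mirror the conventions described in this PR and are not the actual generator or benchmark code:

import string

# Column names produced by the benchmark's dataframe generator follow
# string.ascii_lowercase: "a", "b", "c", ...
generator_columns = [string.ascii_lowercase[i] for i in range(10)]

# The old benchmark keys ("key0", "key1", ...) never intersect those names,
# so isin with a dict or dataframe argument has nothing to match against.
old_keys = [f"key{i}" for i in range(10)]
assert not set(old_keys) & set(generator_columns)

# The corrected keys follow the generator's convention, and range(50) overlaps
# half of the 100 distinct values per column, so about 50% of elements match.
new_keys = [string.ascii_lowercase[i] for i in range(10)]
assert new_keys == generator_columns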

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

@GregoryKimball requested a review from a team as a code owner August 17, 2022 03:06
@github-actions bot added the "Python (Affects Python cuDF API)" label Aug 17, 2022
@bdice (Contributor) left a comment

Fix looks good! One comment on parametrize with GPU data. We can fix that here, or in the future. Either way.

cudf.Series(range(1000)),
range(50),
{f"{string.ascii_lowercase[i]}": range(50) for i in range(10)},
cudf.DataFrame({f"{string.ascii_lowercase[i]}": range(50) for i in range(10)}),
@bdice (Contributor) commented on the parametrize values shown above:

This problem isn't new in this PR, but we should try to avoid constructing objects with GPU-resident data in the parametrize call. Doing so causes GPU allocations while collecting tests, which happens before any test runs and occurs for every test regardless of whether it is selected (e.g. with pytest -k). An alternative is to pass a class and arguments, then construct the object at runtime. Even passing a lambda parameter like lambda: cudf.DataFrame(…) and calling it in the function body is enough to defer construction to runtime instead of collection time.
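A minimal standalone sketch of the class-and-arguments alternative mentioned above; the test name and assertion are illustrative and not part of the cuDF benchmark suite:

import pytest
import cudf

# Only the class object and plain-Python arguments exist at collection time;
# the GPU object is constructed inside the test body, at runtime.
@pytest.mark.parametrize(
    "ctor, args",
    [
        (range, (50,)),
        (cudf.Series, (list(range(50)),)),
    ],
)
def test_deferred_construction(ctor, args):
    values = ctor(*args)  # any GPU allocation happens here, not at collection
    assert len(values) == 50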

@GregoryKimball (Contributor, Author) commented Aug 31, 2022

Thank you @bdice, is this the change that you are proposing?

@benchmark_with_object(cls="dataframe", dtype="int")
@pytest.mark.parametrize(
    "values",
    [
        lambda: range(50),
        lambda: {f"{string.ascii_lowercase[i]}": range(50) for i in range(10)},
        lambda: cudf.DataFrame({f"{string.ascii_lowercase[i]}": range(50) for i in range(10)}),
        lambda: cudf.Series(range(50)),
    ],
)
def bench_isin(benchmark, dataframe, values):
    benchmark(dataframe.isin, values())

isin is one of the few benchmarks with a DataFrame or Series as a parameter, so I think it makes sense to address the lazy-evaluation idea in this PR. The suggestion defers evaluation of the parameters, but it turns values into a callable, which differs from the other benchmarks.

@vyasr What do you think?

@bdice (Contributor) commented Aug 31, 2022

That is a sufficient solution for deferring execution from test collection time to test runtime, yes! (Thank you for reading between the lines here; I wrote the original comment on mobile and couldn't write out the actual code.)

In my view, values being a callable here is fine.

@vyasr (Contributor) commented:

tl;dr: I'm fine with using this suggestion for now. I don't think we should try to address the lazy-evaluation question more generally in this PR because there are ongoing discussions around it that also affect all our tests.

Avoiding collection-time creation of parametrized inputs is one of the many reasons that I advocated the use of pytest-cases, which provides more elegant solutions for this, but we tabled that discussion in #11122 to prioritize getting the rest of that PR merged before my vacation. I plan to pick that discussion up again soon, and depending on where we come down we will either switch to using cases or enshrine some preferred solution for this issue into our developer documentation. I think that your current solution is fine in this instance, but there are more complex cases where the results are unreadable and I would like to figure out a consistent solution to apply across all our tests/benchmarks.
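For reference, a rough sketch of the pytest-cases pattern referred to above, assuming its lazy case evaluation; the case functions and test below are illustrative and not taken from the cuDF suite:

# Requires the pytest-cases package.
from pytest_cases import parametrize_with_cases


def case_plain_range():
    return range(50)


def case_cudf_series():
    import cudf  # deferred import; the case body only runs at test time

    return cudf.Series(range(50))


# Case functions are gathered by name prefix ("case_") from this module and
# are only called when the selected test runs, not at collection time.
@parametrize_with_cases("values", cases=".")
def test_isin_values(values):
    assert len(values) == 50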

@GregoryKimball changed the base branch from branch-22.10 to branch-22.12 September 29, 2022 16:13
@GregoryKimball added the "non-breaking (Non-breaking change)" and "tech debt" labels Sep 29, 2022
@github-actions bot added the "conda" and "libcudf (Affects libcudf (C++/CUDA) code)" labels Sep 29, 2022
@GregoryKimball requested review from a team as code owners October 11, 2022 02:15
@github-actions bot removed the "gpuCI" and "libcudf (Affects libcudf (C++/CUDA) code)" labels Oct 11, 2022
@GregoryKimball added the "improvement (Improvement / enhancement to an existing function)" label Oct 11, 2022
@codecov bot commented Oct 11, 2022

Codecov Report

Base: 87.40% // Head: 87.48% // Increases project coverage by +0.07% 🎉

Coverage data is based on head (b80105b) compared to base (f72c4ce).
Patch coverage: 84.03% of modified lines in pull request are covered.

Additional details and impacted files
@@               Coverage Diff                @@
##           branch-22.12   #11549      +/-   ##
================================================
+ Coverage         87.40%   87.48%   +0.07%     
================================================
  Files               133      133              
  Lines             21833    21864      +31     
================================================
+ Hits              19084    19128      +44     
+ Misses             2749     2736      -13     
Impacted Files Coverage Δ
python/cudf/cudf/core/udf/__init__.py 50.00% <ø> (ø)
python/cudf/cudf/io/orc.py 92.94% <ø> (-0.09%) ⬇️
python/cudf/cudf/utils/ioutils.py 79.47% <ø> (ø)
...thon/dask_cudf/dask_cudf/tests/test_distributed.py 18.86% <ø> (+4.94%) ⬆️
python/cudf/cudf/core/_base_index.py 82.20% <43.75%> (-3.35%) ⬇️
python/cudf/cudf/io/text.py 91.66% <66.66%> (-8.34%) ⬇️
python/strings_udf/strings_udf/__init__.py 86.27% <76.00%> (-10.61%) ⬇️
python/cudf/cudf/core/index.py 92.91% <95.08%> (+0.27%) ⬆️
python/cudf/cudf/__init__.py 90.69% <100.00%> (ø)
python/cudf/cudf/core/column/categorical.py 89.34% <100.00%> (ø)
... and 7 more


☔ View full report at Codecov.

@GregoryKimball (Contributor, Author) commented:
@gpucibot merge

@rapids-bot bot merged commit 566b3d1 into rapidsai:branch-22.12 Oct 11, 2022
@GregoryKimball deleted the adjust_bench_isin branch October 11, 2022 15:17
Labels
improvement (Improvement / enhancement to an existing function), non-breaking (Non-breaking change), Python (Affects Python cuDF API)
4 participants