Use fast cuda mutex available in numba 0.57 #1212

Merged
merged 1 commit into holoviz:main from 1211_fast_cuda_mutex on May 15, 2023
Conversation

ianthomas23
Member

Fixes #1211.

For numba >= 0.57 there is a CUDA mutex that is lockable on a per-pixel basis; earlier numba lacks the required functionality, so a single lock is shared between all pixels. Complicated CUDA reductions such as max_n need the mutex so that they operate atomically; the new per-pixel mutex lets the CUDA code run in parallel very efficiently, whereas the old single lock forces datashader to run mostly serially.
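
For illustration, here is a minimal sketch of what a per-pixel spin lock built on numba's CUDA atomics can look like (hypothetical helper names and layout, not necessarily the exact code added in this PR); mutex is a flat uint32 device array with one entry per canvas pixel:

from numba import cuda

@cuda.jit(device=True)
def lock_pixel(mutex, index):
    # Spin until this pixel's flag is swapped from 0 (unlocked) to 1 (locked).
    while cuda.atomic.cas(mutex, index, 0, 1) != 0:
        pass
    # Memory fence so reads/writes in the critical section are ordered after
    # the lock is acquired.
    cuda.threadfence()

@cuda.jit(device=True)
def unlock_pixel(mutex, index):
    # Make writes from the critical section visible before releasing the lock.
    cuda.threadfence()
    cuda.atomic.exch(mutex, index, 0)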

There is very little code changed here. numba.__version__ is used to select the appropriate mutex lock and unlock functions, and the size of the cupy array used for the mutex. Other than that, there is a new argument to the info function because the canvas shape is needed. Although this argument is only needed for CUDA code running a complicated reduction such as max_n, I have chosen to pass it on every call rather than determine dynamically at the point of calling whether it is needed.
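
Roughly, the version-dependent selection works along these lines (an illustrative sketch with made-up variable names, not the actual datashader code):

import cupy
import numba
from packaging.version import Version

canvas_shape = (300, 600)  # (plot_height, plot_width), assumed for illustration

if Version(numba.__version__) >= Version("0.57"):
    mutex_shape = canvas_shape  # one lock per canvas pixel
else:
    mutex_shape = (1,)          # a single lock shared by all pixels

mutex = cupy.zeros(mutex_shape, dtype=cupy.uint32)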

As an illustration of performance, demo code:

import cudf, datashader as ds, numba as nb, numpy as np, pandas as pd

n = 1_000_000
rng = np.random.default_rng(95123)
df = pd.DataFrame(dict(
    x = rng.random(size=n, dtype=np.float32),
    y = rng.random(size=n, dtype=np.float32),
    value = rng.random(size=n, dtype=np.float32),
))
df = cudf.DataFrame.from_pandas(df)  # move the data to the GPU

canvas = ds.Canvas(plot_height=300, plot_width=600)
agg = canvas.points(source=df, x="x", y="y", agg=ds.max_n("value", 3))

and time the canvas.points call, discarding the first timing as it includes numba compilation. This gives the following timings (numba 0.56.3 for the slow mutex, numba 0.57 for the fast mutex):

[Screenshot (2023-05-10): timing results for the canvas.points call with the slow (numba 0.56.3) and fast (numba 0.57) mutex]

GPU on test system is an Nvidia Quadro T1000.
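
One possible way to reproduce such timings (a sketch; the PR only specifies that the first run is discarded), reusing canvas, df and ds from the demo above:

import time
from numba import cuda

timings = []
for _ in range(5):
    t0 = time.perf_counter()
    canvas.points(source=df, x="x", y="y", agg=ds.max_n("value", 3))
    cuda.synchronize()  # ensure all GPU work has finished before stopping the clock
    timings.append(time.perf_counter() - t0)

print("first call (includes numba compilation): %.3f s" % timings[0])
print("subsequent calls:", ["%.3f s" % t for t in timings[1:]])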

@ianthomas23 ianthomas23 added this to the v0.14.5 milestone May 10, 2023
@codecov

codecov bot commented May 10, 2023

Codecov Report

Merging #1212 (a95bb1f) into main (31b3182) will decrease coverage by 0.13%.
The diff coverage is 56.60%.

@@            Coverage Diff             @@
##             main    #1212      +/-   ##
==========================================
- Coverage   84.63%   84.50%   -0.13%     
==========================================
  Files          35       35              
  Lines        8354     8368      +14     
==========================================
+ Hits         7070     7071       +1     
- Misses       1284     1297      +13     
Impacted Files Coverage Δ
datashader/reductions.py 83.11% <0.00%> (ø)
datashader/compiler.py 90.56% <16.66%> (-2.34%) ⬇️
datashader/transfer_functions/_cuda_utils.py 22.60% <26.31%> (-0.93%) ⬇️
datashader/glyphs/area.py 79.82% <83.33%> (ø)
datashader/glyphs/line.py 92.84% <100.00%> (ø)
datashader/glyphs/points.py 88.29% <100.00%> (ø)
datashader/glyphs/polygon.py 94.80% <100.00%> (ø)
datashader/glyphs/quadmesh.py 83.83% <100.00%> (ø)
datashader/glyphs/trimesh.py 92.36% <100.00%> (ø)


@ianthomas23 ianthomas23 merged commit 044c31b into holoviz:main May 15, 2023
@ianthomas23 ianthomas23 deleted the 1211_fast_cuda_mutex branch May 15, 2023 07:38
@jbednar
Member

jbednar commented May 15, 2023

Excellent, thanks!

Linked issue: Implement fast CUDA mutex (#1211). 2 participants.