Apply suggestions from code review
Co-authored-by: Mark Harris <783069+harrism@users.noreply.github.com>
galipremsagar and harrism authored Jul 29, 2024
1 parent 4cb20da commit 7425336
Showing 1 changed file with 13 additions and 8 deletions.
21 changes: 13 additions & 8 deletions docs/cudf/source/cudf_pandas/how-it-works.md
@@ -38,14 +38,19 @@ mode](api.options) is automatically enabled, ensuring consistency with
pandas-specific semantics like default sort ordering.


-`cudf.pandas` uses a managed pool memory by default, that will also enable
-prefetching <<LINK to prefetch tutorial/demo>>.
+`cudf.pandas` uses a managed memory pool by default. This allows `cudf.pandas` to process datasets larger than the memory of the GPU(s) it is running on. Managed memory prefetching is also enabled by default to improve memory access performance. For more information on CUDA Unified Memory (managed memory), performance and prefetching, see [this NVIDIA Developer blog post](https://developer.nvidia.com/blog/improving-gpu-memory-oversubscription-performance/).

-There are various memory allocators that can be used by changing the environment
-variable `CUDF_PANDAS_RMM_MODE`. It supports:
+Other memory allocators can be used by changing the environment
+variable `CUDF_PANDAS_RMM_MODE` to one of the following.

1. "pool"
2. "async"
3. "managed" (default)
4. "managed_pool"
5. "cuda"
Pool allocators improve allocation performance. Without using one, memory
allocation may be a bottleneck depending on the workload. Managed memory
enables oversubscribing GPU memory. This allows cudf.pandas to process
data larger than GPU memory in many cases, without CPU (Pandas) fallback.

1. "managed_pool" (default): CUDA Unified Memory (managed memory) with RMM's asynchronous pool allocator.
2. "managed": CUDA Unified Memory, (managed memory) with no pool allocator.
3. "async": CUDA's built-in pool asynchronous pool allocator with normal CUDA device memory.
4. "pool": RMM's asynchronous pool allocator with normal CUDA device memory.
5. "cuda": normal CUDA device memory with no pool allocator.
