diff --git a/docs/additional-functionality/rapids-udfs.md b/docs/additional-functionality/rapids-udfs.md
index 00fea1301f2..b4bcdfe67d2 100644
--- a/docs/additional-functionality/rapids-udfs.md
+++ b/docs/additional-functionality/rapids-udfs.md
@@ -270,10 +270,10 @@ The following configuration settings are also relevant for GPU scheduling for Pa
 +- GpuArrowEvalPython
 ```
 This means each Spark task will trigger 2 Python processes. In this case, if we set
-`spark.rapids.sql.concurrentGpuTasks=2` and `concurrentPythonWorkers=2`, it will also probably
-result in a hang as we allow 2 tasks running and each of them has 2 Python processes. Let's say
-Task_1_Process_1 and Task_2_Process_1 acquired the semaphore, but neither of them are going to
-proceed becasue both of them are waiting for their second semaphore.
+`concurrentPythonWorkers=2`, it will also probably result in a hang, since we allow 2 tasks to
+run and each of them has 2 Python processes. Say Task_1_Process_1 and Task_2_Process_1 each
+acquired the semaphore; neither of them can proceed because both are waiting for their second
+semaphore.
 To find details on the above Python configuration settings, please see the
 [RAPIDS Accelerator for Apache Spark Configuration Guide](../configs.md). Search 'pandas' for
 a quick navigation jump.
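
The deadlock scenario described in the changed paragraph can be sketched in plain Python. This is only an illustration of the permit arithmetic (2 permits, 2 tasks, 2 Python workers per task), not the plugin's actual semaphore implementation:

```python
import threading

# Hypothetical model: 2 permits total (concurrentPythonWorkers=2), and each
# of the 2 concurrent tasks needs 2 permits (one per Python worker) before
# it can make progress.
permits = threading.Semaphore(2)

# Interleaving: each task's first worker grabs a permit...
task1_w1 = permits.acquire(blocking=False)
task2_w1 = permits.acquire(blocking=False)

# ...leaving no permits for either task's second worker.
task1_w2 = permits.acquire(blocking=False)  # Task 1 is now stuck
task2_w2 = permits.acquire(blocking=False)  # Task 2 is now stuck

# Neither task holds both of its permits, and neither releases what it
# already holds, so with blocking acquires this interleaving hangs forever.
print(task1_w1, task2_w1, task1_w2, task2_w2)  # True True False False
```

Capping the number of concurrent tasks below the number of worker permits (or vice versa) breaks this interleaving, which is why the doc change drops the `spark.rapids.sql.concurrentGpuTasks=2` pairing from the example.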