
[REVIEW] Adding RAPIDS <-> DLFrameworks Jupyter Notebook #266

Open · wants to merge 4 commits into branch-0.12

Conversation

awthomp commented Jan 17, 2020

This notebook shows how to pass data between __cuda_array_interface__-supporting libraries (CuPy and Numba, for this demonstration) and both PyTorch and TensorFlow. With PyTorch, we can simply call torch.as_tensor(foo) on an existing array that exposes the __cuda_array_interface__; for TensorFlow, we leverage the nascent tfdlpack package described in this RFC.

There is a bug in tfdlpack.to_dlpack() that is documented here: VoVAllen/tf-dlpack#12 and also in this notebook.
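For readers skimming the PR, here is a minimal sketch of the interchange the notebook demonstrates. The exact helper names are assumptions based on the library versions current at the time (CuPy's toDlpack()/fromDlpack() and tfdlpack's from_dlpack()/to_dlpack()); only torch.as_tensor and tfdlpack.to_dlpack are confirmed by the discussion above.

```python
# Sketch: moving a GPU array between CuPy, PyTorch, and TensorFlow.
# Helper names may differ slightly across library versions.
import cupy as cp
import torch
import tfdlpack

# A GPU array that exposes __cuda_array_interface__
gpu_arr = cp.arange(10, dtype=cp.float32)

# PyTorch (>= 1.4): consumes __cuda_array_interface__ directly, zero-copy
torch_tensor = torch.as_tensor(gpu_arr, device="cuda")

# TensorFlow: go through DLPack via the tfdlpack package
capsule = gpu_arr.toDlpack()               # CuPy -> DLPack capsule
tf_tensor = tfdlpack.from_dlpack(capsule)  # DLPack capsule -> tf.Tensor

# And back: tf.Tensor -> DLPack capsule -> CuPy
capsule_back = tfdlpack.to_dlpack(tf_tensor)
gpu_arr_back = cp.fromDlpack(capsule_back)
```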

awthomp (Author) commented Jan 17, 2020

The blog for this PR hasn't been written yet, and it's dependent on both tfdlpack and PyTorch 1.4. Do these notebooks assume that the user has configured the libraries appropriately?

@awthomp awthomp changed the title [REVIEW] Adding RAPIDS <-> DLFrameworks Jupyter Notebook [WIP] Adding RAPIDS <-> DLFrameworks Jupyter Notebook Jan 19, 2020
@awthomp awthomp changed the title [WIP] Adding RAPIDS <-> DLFrameworks Jupyter Notebook [REVIEW] Adding RAPIDS <-> DLFrameworks Jupyter Notebook Jan 21, 2020
taureandyernv (Contributor) commented Jan 24, 2020

> The blog for this PR hasn't been written yet, and it's dependent on both tfdlpack and PyTorch 1.4. Do these notebooks assume that the user has configured the libraries appropriately?

@awthomp we should not assume that the user has configured these libraries properly. Also, to pass CI, we need the notebooks to be able to run unattended. I spent some time last Friday trying to get tfdlpack-gpu to install and work in the Docker container in an unattended, CI-compliant way, but have not yet been successful.

We can also move this to the advanced notebooks section and ask to delay CI on it. Thoughts?

taureandyernv (Contributor) left a comment

tfdlpack-gpu install needs to work unattended to pass CI. Want to work together on this?

awthomp (Author) commented Jan 24, 2020

> @awthomp we should not assume that the user has configured these libraries properly. Also, to pass CI, we need the notebooks to be able to run unattended. I spent some time last Friday trying to get tfdlpack-gpu to install and work in the Docker container in an unattended, CI-compliant way, but have not yet been successful.
>
> We can also move this to the advanced notebooks section and ask to delay CI on it. Thoughts?

@taureandyernv I'd be happy to help with the tfdlpack-gpu install issues in your Docker container. Are you able to share the specific error? Was the package found at all? I noticed that with Python 3.8 and conda, pip install tfdlpack-gpu would not work (i.e., the package could not be found), but after downgrading to 3.7 there were no issues.

Let's work on addressing the tfdlpack-gpu issue before making a decision on moving the notebook and pausing CI/CD. Sound good?
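While the packaging issue is sorted out, one option for keeping the notebook runnable unattended could be a guard cell along these lines. This is a hypothetical sketch, not part of the PR; it only assumes that the tfdlpack-gpu wheel installs the tfdlpack module used above.

```python
# Hypothetical guard cell: lets the notebook run unattended even when
# tfdlpack-gpu could not be installed, e.g. under Python 3.8 + conda
# where pip cannot resolve the package.
import sys

try:
    import tfdlpack  # installed by the tfdlpack-gpu pip package
    HAVE_TFDLPACK = True
except ImportError:
    HAVE_TFDLPACK = False
    print(
        "tfdlpack not available on Python "
        f"{sys.version_info.major}.{sys.version_info.minor}; "
        "skipping the TensorFlow interchange cells."
    )
```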
