
add warp transducer #1099

Closed
vincentqb wants to merge 7 commits

Conversation

@vincentqb (Contributor) commented on Dec 17, 2020:

This pull request introduces rnnt_loss and RNNTLoss in torchaudio.prototype.transducer using HawkAaron's warp-transducer.
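For reference, a hypothetical usage sketch (not taken from this pull request). It assumes the binding follows warp-transducer's convention of logits shaped (batch, max_T, max_U + 1, vocab) with int32 targets and lengths and a configurable blank index; the argument names below are illustrative.

# Hypothetical usage sketch; shapes, argument names, and the blank kwarg are
# assumptions based on warp-transducer's convention, not confirmed by the PR.
import torch
from torchaudio.prototype.transducer import RNNTLoss

batch, max_T, max_U, vocab = 2, 10, 5, 20
logits = torch.randn(batch, max_T, max_U + 1, vocab, requires_grad=True)
targets = torch.randint(1, vocab, (batch, max_U), dtype=torch.int32)
logit_lengths = torch.full((batch,), max_T, dtype=torch.int32)
target_lengths = torch.full((batch,), max_U, dtype=torch.int32)

loss = RNNTLoss(blank=0)(logits, targets, logit_lengths, target_lengths)
loss.backward()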

Import the warp-transducer repository

Suggested options to load the repository with torchbind replacing pybind:
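With torchbind-style operator registration (replacing the upstream pybind11 bindings), the compiled operators are exposed through the torch.ops namespace rather than an imported extension module. A rough, hypothetical Python-side sketch; the library path is illustrative, and the op namespace is the one that appears later in this conversation.

import torch

# pybind11 route: `import warprnnt_pytorch` and call into the extension module.
# torchbind route: load the shared library once, then reach the registered op
# through torch.ops (namespace name taken from the diff further down).
torch.ops.load_library("path/to/libtorchaudio.so")  # illustrative path
gpu_rnnt = torch.ops.warprnnt_pytorch_warp_rnnt.gpu_rnnt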

CUDA

This pull request can optionally be built with GPU support. The following environment variables need to be set before installing, as mentioned here and in the colab notebook.

export CUDA_HOME=/usr/local/cuda
export CUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME
export LD_LIBRARY_PATH="$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH"
export LIBRARY_PATH=$CUDA_HOME/lib64:$LIBRARY_PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
export CFLAGS="-I$CUDA_HOME/include $CFLAGS"
BUILD_CUDA_WT=ON python setup.py install

There is also a unit test available for GPU.
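For illustration only (this is not the test added by the PR), a GPU test of this kind would typically be skipped when CUDA or the GPU op is unavailable; the helper, test class, and argument layout below are assumptions following the sketch above.

import unittest
import torch

def _cuda_transducer_available():
    # The GPU op is only registered when the extension was built with
    # BUILD_CUDA_WT=ON; accessing a missing op raises an error.
    try:
        torch.ops.warprnnt_pytorch_warp_rnnt.gpu_rnnt
        return torch.cuda.is_available()
    except (AttributeError, RuntimeError):
        return False

@unittest.skipUnless(_cuda_transducer_available(), "CUDA transducer not available")
class TransducerGPUTest(unittest.TestCase):
    def test_loss_runs_on_gpu(self):
        from torchaudio.prototype.transducer import rnnt_loss
        # batch=1, T=4, U=2, vocab=5; targets avoid the blank index 0.
        logits = torch.randn(1, 4, 3, 5, device="cuda", requires_grad=True)
        targets = torch.randint(1, 5, (1, 2), dtype=torch.int32, device="cuda")
        logit_lengths = torch.tensor([4], dtype=torch.int32, device="cuda")
        target_lengths = torch.tensor([2], dtype=torch.int32, device="cuda")
        loss = rnnt_loss(logits, targets, logit_lengths, target_lengths)
        loss.backward()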

cc @astaff, internal

@vincentqb force-pushed the warp-transducer branch 2 times, most recently from f939ad7 to 97de03b on December 21, 2020 23:45
@vincentqb force-pushed the warp-transducer branch 2 times, most recently from afd3965 to 7663661 on December 23, 2020 15:39
@vincentqb force-pushed the warp-transducer branch 3 times, most recently from f0445a5 to 1cf829f on December 23, 2020 16:26
@vincentqb changed the title from "add warp transducer as submodule" to "add warp transducer" on Dec 23, 2020
@@ -83,7 +84,8 @@ def run(self):
     packages=find_packages(exclude=["build*", "test*", "torchaudio.csrc*", "third_party*", "build_tools*"]),
     ext_modules=setup_helpers.get_ext_modules(),
     cmdclass={
-        'build_ext': setup_helpers.BuildExtension.with_options(no_python_abi_suffix=True)
+        'build_ext': setup_helpers.BuildExtension.with_options(no_python_abi_suffix=True),
+        'clean': clean,
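For context, a custom clean command conventionally subclasses distutils' clean and removes build artifacts the default command does not track (for example, outputs of the third-party CMake build). The following is a hypothetical sketch, not the clean class this PR actually defines; paths are illustrative.

import distutils.command.clean
import shutil
from pathlib import Path

class clean(distutils.command.clean.clean):
    def run(self):
        # Remove artifacts the default clean does not know about,
        # e.g. compiled extensions and third-party build directories.
        root = Path(__file__).parent
        for path in root.glob("torchaudio/**/*.so"):
            print(f"removing '{path}'")
            path.unlink()
        for build_dir in (root / "build", root / "third_party" / "build"):
            if build_dir.exists():
                print(f"removing '{build_dir}'")
                shutil.rmtree(str(build_dir), ignore_errors=True)
        super().run()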
Contributor commented:
Why is this needed?


try:
    torch.ops.warprnnt_pytorch_warp_rnnt.gpu_rnnt
    _CUDA_TRANSDUCER = True
Contributor commented:
I'd make this switch dependent on whether CUDA is available at all.
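A sketch of what the suggested guard could look like (hypothetical, not code from this PR), combining the existing op lookup with torch.cuda.is_available():

import torch

_CUDA_TRANSDUCER = False
if torch.cuda.is_available():
    try:
        torch.ops.warprnnt_pytorch_warp_rnnt.gpu_rnnt
        _CUDA_TRANSDUCER = True
    except (AttributeError, RuntimeError):
        # Extension was built without BUILD_CUDA_WT=ON; fall back to CPU only.
        pass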

@vincentqb vincentqb closed this Dec 30, 2020
mthrok pushed a commit to mthrok/audio that referenced this pull request Dec 13, 2022
Co-authored-by: holly1238 <77758406+holly1238@users.noreply.github.com>
mpc001 pushed a commit to mpc001/audio that referenced this pull request Aug 4, 2023
[PT-D][Tensor Parallel] Update the example for TP to use DTensor and new TP API and deprecate the old one (pytorch#1099)