
UnboundLocalError: local variable 'model_snapshot_path' referenced before assignment #3335

Open
johnathanchiu opened this issue Oct 1, 2024 · 0 comments


🐛 Describe the bug

The changes made in this commit are causing the following issue.

Error logs

Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/venv/lib/python3.9/site-packages/ts/llm_launcher.py", line 286, in
    main(args)
  File "/home/venv/lib/python3.9/site-packages/ts/llm_launcher.py", line 174, in main
    with create_mar_file(args, model_snapshot_path):
UnboundLocalError: local variable 'model_snapshot_path' referenced before assignment
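For context, the error follows the standard Python pattern where a local variable is assigned only on one conditional branch but read unconditionally afterwards. A minimal, self-contained sketch (hypothetical names, not the actual launcher code):

```python
def launch(engine):
    if engine == "trt_llm":
        model_snapshot_path = "/data/model"  # name bound only on this branch
    # For any other engine (e.g. "vllm") the name was never bound,
    # so reading it raises UnboundLocalError.
    return model_snapshot_path


try:
    launch("vllm")
except UnboundLocalError as exc:
    print(f"raised: {exc}")
```

On Python 3.9 (as in the traceback above) the message is "local variable 'model_snapshot_path' referenced before assignment"; newer interpreters word it slightly differently.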

Installation instructions

docker build . -f docker/Dockerfile.vllm -t ts/vllm
docker run --rm -ti --shm-size 1g --gpus all -e HUGGING_FACE_HUB_TOKEN=$token -e VLLM_WORKER_MULTIPROC_METHOD=spawn -e enforce-eager=True -p 8080:8080 -v data:/data ts/vllm --model_id meta-llama/Meta-Llama-3.1-70B-Instruct --disable_token_auth

Model Packaging

I cloned the torchserve repo as is.

config.properties

No response

Versions

------------------------------------------------------------------------------------------
Environment headers
------------------------------------------------------------------------------------------
Torchserve branch: 

**Warning: torchserve not installed ..
**Warning: torch-model-archiver not installed ..

Python version: 3.12 (64-bit runtime)
Python executable: /home/paperspace/miniconda3/bin/python

Versions of relevant python libraries:
requests==2.32.3
wheel==0.43.0
**Warning: torch not present ..
**Warning: torchtext not present ..
**Warning: torchvision not present ..
**Warning: torchaudio not present ..

Java Version:


OS: Ubuntu 22.04.3 LTS
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: N/A
CMake version: version 3.28.20231121-g773fd7e

Environment:
library_path (LD_/DYLD_): /usr/local/cuda/lib64

Repro instructions

docker build . -f docker/Dockerfile.vllm -t ts/vllm
docker run --rm -ti --shm-size 1g --gpus all -e HUGGING_FACE_HUB_TOKEN=$token -e VLLM_WORKER_MULTIPROC_METHOD=spawn -e enforce-eager=True -p 8080:8080 -v data:/data ts/vllm --model_id meta-llama/Meta-Llama-3.1-70B-Instruct --disable_token_auth

Possible Solution

model_snapshot_path should also be assigned for the vllm runtime:

serve/ts/llm_launcher.py

Lines 171 to 174 in 6bdb1ba

if args.engine == "trt_llm":
    model_snapshot_path = download_model(args.model_id)
with create_mar_file(args, model_snapshot_path):
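One way to avoid the crash, sketched below with hypothetical stand-ins for download_model and create_mar_file (not the actual llm_launcher.py code, and not necessarily the fix the maintainers will choose), is to bind the name on every path before it is read:

```python
from contextlib import contextmanager


# Hypothetical stand-ins so the sketch is runnable on its own.
def download_model(model_id):
    return f"/data/{model_id}"


@contextmanager
def create_mar_file(args, model_snapshot_path):
    yield  # the real function packages the model archive; stubbed here


def main(args):
    # Give the name a default so it is bound even when engine != "trt_llm".
    model_snapshot_path = None
    if args["engine"] == "trt_llm":
        model_snapshot_path = download_model(args["model_id"])
    with create_mar_file(args, model_snapshot_path):
        pass
    return model_snapshot_path


# No longer raises for the vllm engine:
main({"engine": "vllm", "model_id": "meta-llama/Meta-Llama-3.1-70B-Instruct"})
```

Whether create_mar_file can accept None (or should instead receive a downloaded snapshot for vllm as well) is a decision for the maintainers.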
