
fix: kservev2 batching issue and missing parameters #3341

Open

pkluska wants to merge 1 commit into master

Conversation


@pkluska commented Oct 4, 2024

Description

Fix the KServeV2 envelope so that it no longer picks up only the first item from the request list, which allows requests to be dynamically batched. If the model returns a dictionary, its keys are now used as the names of the outputs. The envelope is also updated to handle request-level and input-level parameters. (A rough sketch of the intended behavior is included below.)

Fixes #2158
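
For reviewers, here is a rough, hypothetical sketch of the behavior described above. It is not the actual patch: the helper names `parse_batch` and `format_outputs` and the exact dict shapes are assumptions for illustration; the real code lives in TorchServe's KServe V2 request envelope.

```python
# Illustrative sketch only (not the actual diff).

def parse_batch(batch):
    """Collect every input from every request in the batch, not just inputs[0]."""
    data = []
    for request in batch:                    # one entry per dynamically batched request
        body = request.get("body", request)
        for inp in body.get("inputs", []):   # previously only the first item was used
            data.append(
                {
                    "name": inp.get("name"),
                    "shape": inp.get("shape"),
                    "datatype": inp.get("datatype"),
                    "data": inp.get("data"),
                    # input-level parameters are now carried through to the handler
                    "parameters": inp.get("parameters", {}),
                }
            )
    return data


def format_outputs(predictions):
    """Use dict keys as output names when the model returns a dictionary."""
    outputs = []
    for pred in predictions:
        if isinstance(pred, dict):
            outputs.extend({"name": name, "data": value} for name, value in pred.items())
        else:
            outputs.append({"name": "output-0", "data": pred})
    return outputs
```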

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)

Feature/Issue validation/testing

Please describe the Unit or Integration tests that you ran to verify your changes and relevant result summary. Provide instructions so it can be reproduced.
Please also list any relevant details for your test configuration.

  • torchserve_sanity.py
    OK

  • test_mnist_kf.py

    Run completed, parsing output
    ./ts/torch_handler/unit_tests/test_mnist_kf.py::test_handle Passed
    ./ts/torch_handler/unit_tests/test_mnist_kf.py::test_handle_kf Passed
    ./ts/torch_handler/unit_tests/test_mnist_kf.py::test_handle_kfv2 Passed

    Total number of tests expected to run: 3
    Total number of tests run: 3
    Total number of tests passed: 3
    Total number of tests failed: 0
    Total number of tests failed with errors: 0
    Total number of tests skipped: 0
    Total number of tests with no result data: 0
    Finished running tests!
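
For context on what the kfv2 test exercises, a KServe V2 (Open Inference Protocol) request of the shape the updated envelope is expected to handle looks roughly like the following; the concrete values are made up for illustration and are not taken from the test suite.

```python
import json

# Illustrative KServe V2 request body. "parameters" can appear both at the
# request level and on each input tensor, which is the data this change is
# described as passing through to the handler.
v2_request = {
    "id": "example-request-1",
    "parameters": {"content_type": "np"},          # request-level parameters
    "inputs": [
        {
            "name": "input-0",
            "shape": [1, 28, 28],
            "datatype": "FP32",
            "parameters": {"binary_data": False},  # input-level parameters
            "data": [0.0] * (28 * 28),
        }
    ],
}

print(json.dumps(v2_request)[:100] + " ...")
```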

Checklist:

  • Did you have fun?
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?


Successfully merging this pull request may close these issues.

TorchServe with Kserve_wrapper v2 throws 'message': 'number of batch response mismatched'