
Make MaxTokens optional #1362

Closed
anthonypuppo opened this issue Jun 7, 2023 · 1 comment · Fixed by #1367
Labels
kernel (Issues or pull requests impacting the core kernel), .NET (Issues or pull requests regarding .NET code)

Comments

@anthonypuppo
Contributor

Some LLMs, such as ChatGPT, default to using the remaining context window when the maximum number of tokens is not included in the request. For such models it would be beneficial not to pass the max tokens argument every time when the intention is to get the most elaborate response possible. Calculating the remaining tokens yourself is of course possible, but doing so on every call is a pain (and a performance hit), assuming you even get it right.
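To illustrate the workaround the comment describes, here is a hedged sketch of the per-call arithmetic consumers would otherwise have to repeat. `CountTokens`, `ManualBudgeting`, and the heuristic inside are hypothetical stand-ins, not real Semantic Kernel APIs; accurate counting would need a model-specific tokenizer, which is exactly the pain point:

```csharp
using System;

public static class ManualBudgeting
{
    // Hypothetical tokenizer stand-in: real code would need a model-specific
    // tokenizer (e.g. a BPE implementation). The ~4 chars/token heuristic is
    // only a rough approximation, which is why "assuming you even get it
    // right" applies.
    private static int CountTokens(string text) => text.Length / 4;

    // If max tokens is mandatory, every call site has to repeat this budget
    // calculation before sending a request.
    public static int RemainingTokens(string prompt, int contextWindow)
    {
        int remaining = contextWindow - CountTokens(prompt);
        if (remaining <= 0)
            throw new ArgumentException("Prompt already fills the context window.");
        return remaining;
    }
}
```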

@anthonypuppo anthonypuppo changed the title Make MaxTokens an optional argument Make MaxTokens optional Jun 7, 2023
@alexchaomander alexchaomander added .NET Issue or Pull requests regarding .NET code kernel Issues or pull requests impacting the core kernel labels Jun 8, 2023
@alexchaomander
Contributor

Thanks for making the PR! And for the great discussion about this during Office Hours!

github-merge-queue bot pushed a commit that referenced this issue Jul 8, 2023
### Motivation and Context
Some LLM APIs treat the "max tokens" parameter as optional (OpenAI,
Azure OpenAI, etc.) and will default to the tokens remaining in the
model's context window. This PR changes the default value of max tokens
from an explicitly defined amount to null. Note that skill-specific max
token amounts are unchanged; for now, this PR only changes the defaults
for library consumers.
Fixes #1362 

### Description
Change max tokens to a nullable int (`int` to `int?`).
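The effect of the `int` to `int?` change can be sketched as below. The type and member names are illustrative, not the exact Semantic Kernel signatures; the key idea is that a null `MaxTokens` means "omit max_tokens from the request" so the service falls back to the remaining context window:

```csharp
using System.Collections.Generic;

// Illustrative settings type: MaxTokens is now nullable, so callers can
// simply leave it unset instead of computing a token budget themselves.
public class CompletionRequestSettings
{
    // null means "do not send max_tokens; let the service default to the
    // remaining context window".
    public int? MaxTokens { get; set; } = null;

    public double Temperature { get; set; } = 0.0;
}

public static class RequestBuilder
{
    // Builds the request payload; "max_tokens" is included only when the
    // caller explicitly set a value.
    public static Dictionary<string, object> Build(string prompt, CompletionRequestSettings settings)
    {
        var payload = new Dictionary<string, object>
        {
            ["prompt"] = prompt,
            ["temperature"] = settings.Temperature,
        };
        if (settings.MaxTokens is int max)
        {
            payload["max_tokens"] = max;
        }
        return payload;
    }
}
```

Usage: `new CompletionRequestSettings()` produces a payload with no `max_tokens` key, while setting `MaxTokens = 256` includes it.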

### Contribution Checklist
- [✔️] The code builds clean without any errors or warnings
- [✔️] The PR follows SK Contribution Guidelines
(https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md)
- [✔️] The code follows the .NET coding conventions
(https://learn.microsoft.com/dotnet/csharp/fundamentals/coding-style/coding-conventions)
verified with `dotnet format`
- [✔️] All unit tests pass, and I have added new tests where possible
- [✔️] I didn't break anyone 😄

Co-authored-by: Lee Miller <lemiller@microsoft.com>
Co-authored-by: Shawn Callegari <36091529+shawncal@users.noreply.github.com>
Co-authored-by: Roger Barreto <19890735+RogerBarreto@users.noreply.github.com>