
feat(GoogleGemini): Added CountTokens #331

Merged
merged 1 commit into from
Jun 6, 2024

Conversation

gunpal5
Contributor

@gunpal5 gunpal5 commented Jun 6, 2024

feat(GoogleGemini): Added CountTokens
fix: corrected usage tracking so requests report accurate usage data.

Summary by CodeRabbit

  • New Features

    • Introduced token counting functionality for text messages using Google APIs.
  • Enhancements

    • Updated the model in Gemini15ProModel to use Gemini15Pro for improved performance.
    • Improved usage tracking in GoogleChatModel and OpenAiChatModel for more accurate time and resource management.
  • Dependencies

    • Upgraded Google_GenerativeAI package from version 1.0.0 to 1.0.1 for better stability and new features.

Contributor

coderabbitai bot commented Jun 6, 2024

Walkthrough

The recent updates primarily focus on refining functionality and improving accuracy in token counting, usage tracking, and model selection for Google and OpenAI providers. Key changes include updating the Google_GenerativeAI package version, enhancing token counting in GoogleChatModel, adjusting usage tracking in both GoogleChatModel and OpenAiChatModel, and updating the model used in Gemini15ProModel.

Changes

File Path | Change Summary
src/Directory.Packages.props | Updated the Google_GenerativeAI package version from 1.0.0 to 1.0.1.
src/.../GoogleChatModel.Tokens.cs | Changed the CountTokens method to take IEnumerable<Message> instead of string.
src/.../GoogleChatModel.cs | Adjusted the ToMessage method to correctly update the usage variable for accurate time tracking.
src/.../Predefined/GeminiModels.cs | Updated the model in the Gemini15ProModel class from Gemini15Flash to Gemini15Pro.
src/.../OpenAiChatModel.cs | Modified the GetUsage method to use the usage2 variable and updated subsequent method calls accordingly.
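The signature change in GoogleChatModel.Tokens.cs keeps the old string-based CountTokens as a thin wrapper over the new IEnumerable<Message> overload, so existing callers keep working. The following is a minimal sketch of that wrapper pattern, not the repository's actual code: the Message type, the method names, and the whitespace-based count (standing in for the real Google API call) are all simplified, illustrative assumptions.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical simplified message type; the real one lives in the provider code.
public record Message(string Content);

public static class TokenCounter
{
    // New primary overload: counts tokens across a whole conversation.
    // A whitespace split stands in here for the real Google CountTokens API call.
    public static Task<int> CountTokensAsync(IEnumerable<Message> messages)
    {
        var total = messages.Sum(m =>
            m.Content.Split(' ', System.StringSplitOptions.RemoveEmptyEntries).Length);
        return Task.FromResult(total);
    }

    // Old string-based overload, kept as a wrapper for backward compatibility:
    // it simply lifts the string into a single-message conversation.
    public static Task<int> CountTokensAsync(string text) =>
        CountTokensAsync(new[] { new Message(text) });
}
```

The design choice here is that only the collection-based overload talks to the provider; the string overload delegates to it, so there is exactly one code path to maintain.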

Sequence Diagram(s) (Beta)

sequenceDiagram
    participant User
    participant GoogleChatModel
    participant GoogleAPI
    participant OpenAiChatModel

    User ->> GoogleChatModel: CountTokens(messages)
    GoogleChatModel ->> GoogleAPI: Send messages for token count
    GoogleAPI -->> GoogleChatModel: Return token count
    GoogleChatModel -->> User: Return token count

    User ->> GoogleChatModel: SendMessage
    GoogleChatModel ->> GoogleAPI: Process message
    GoogleAPI -->> GoogleChatModel: Return processed message
    GoogleChatModel ->> GoogleChatModel: Update usage tracking
    GoogleChatModel -->> User: Return processed message

    User ->> OpenAiChatModel: SendMessage
    OpenAiChatModel ->> OpenAIAPI: Process message
    OpenAIAPI -->> OpenAiChatModel: Return processed message
    OpenAiChatModel ->> OpenAiChatModel: Update usage2 tracking
    OpenAiChatModel ->> OpenAiChatModel: Add usage2 to usage
    OpenAiChatModel -->> User: Return processed message

Poem

In the world of code, where bytes do dance,
We've made some tweaks to enhance,
Tokens now count with greater ease,
Usage tracks with newfound peace.
Models updated, versions refined,
In this code, perfection you'll find.
🐇✨



@HavenDV HavenDV enabled auto-merge (rebase) June 6, 2024 17:24
@HavenDV HavenDV merged commit 08d535a into tryAGI:main Jun 6, 2024
3 checks passed
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files reviewed: changes between commits ec08718 and 28db041.

Files selected for processing (5)
  • src/Directory.Packages.props (1 hunks)
  • src/Providers/Google/src/GoogleChatModel.Tokens.cs (1 hunks)
  • src/Providers/Google/src/GoogleChatModel.cs (3 hunks)
  • src/Providers/Google/src/Predefined/GeminiModels.cs (1 hunks)
  • src/Providers/OpenAI/src/Chat/OpenAiChatModel.cs (1 hunks)
Files skipped from review due to trivial changes (1)
  • src/Directory.Packages.props
Additional comments not posted (4)
src/Providers/Google/src/GoogleChatModel.Tokens.cs (2)

12-14: LGTM! The method now acts as a wrapper to maintain backward compatibility.


17-22: LGTM! Proper use of ConfigureAwait(false) to avoid deadlocks in library code.

src/Providers/Google/src/Predefined/GeminiModels.cs (1)

27-27: Model upgrade in Gemini15ProModel looks good. Please verify the impact on resource usage due to increased parameters.

src/Providers/OpenAI/src/Chat/OpenAiChatModel.cs (1)

269-275: Renaming of usage variable to usage2 in GenerateAsync method is correctly implemented. Please verify integration with the billing system to ensure accurate tracking.

Comment on lines +130 to +139

    var usage = Usage.Empty;

    if (usedSettings.UseStreaming == true)
    {
        var message = await StreamCompletionAsync(messages, cancellationToken).ConfigureAwait(false);
        messages.Add(message);
        usage += Usage.Empty with
        {
            Time = watch.Elapsed
        };

Changes to usage tracking in GenerateAsync method are correctly implemented. Consider adding comments to explain the logic behind usage calculations for future maintainability.

+ // Initialize usage tracking
+ // Update usage with elapsed time
+ // Add usage to provider

Also applies to: 153-160, 193-199

Committable suggestion (review carefully before committing; ensure it accurately replaces the highlighted code):

    var usage = Usage.Empty;
    // Initialize usage tracking
    if (usedSettings.UseStreaming == true)
    {
        var message = await StreamCompletionAsync(messages, cancellationToken).ConfigureAwait(false);
        messages.Add(message);
        usage += Usage.Empty with
        {
            // Update usage with elapsed time
            Time = watch.Elapsed
        };
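The `usage += Usage.Empty with { Time = watch.Elapsed }` pattern above relies on two C# features: a `with` expression clones an empty usage record with only the elapsed time set, and an overloaded `+` folds that increment into the running total. Below is a minimal sketch of how such a record could be shaped, assuming a hypothetical two-field Usage; the real type in the repository carries more fields (token counts, price, etc.).

```csharp
using System;

// Hypothetical minimal Usage record illustrating the accumulation pattern.
public record Usage(TimeSpan Time, int InputTokens)
{
    // A zero-valued usage to start from or to clone via a `with` expression.
    public static Usage Empty { get; } = new(TimeSpan.Zero, 0);

    // Accumulate usage across multiple requests field by field.
    public static Usage operator +(Usage a, Usage b) =>
        new(a.Time + b.Time, a.InputTokens + b.InputTokens);
}
```

With this shape, `usage += Usage.Empty with { Time = watch.Elapsed }` adds only the elapsed time to the running total while leaving every other field untouched, which is why the suggestion's comments call it "update usage with elapsed time".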
