
Add support for remote models (OpenAI, ...) #5

Open
cztomsik opened this issue Dec 11, 2023 · 4 comments

Comments

@cztomsik
Owner

  • Add OPEN_AI_KEY, etc. in Settings
  • Update <ModelSelect> to check whether the key is filled in and, if so, include OpenAI models
  • If an OpenAI model is selected, make it visually distinctive so it's clear you are using a remote model
  • Hide the "partial" completion in <EditMessage> if a remote model is selected
  • Call the remote endpoints
    • These are different from what we are doing right now, so maybe we can first add an /api/chat/completions endpoint which just wraps what we do client-side, and then, if a remote endpoint is configured, we can simply proxy to it
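The proxy idea in the last bullet could be sketched roughly like this. All names here (`ProxySettings`, `buildUpstreamRequest`, the `gpt-` prefix check) are hypothetical illustrations, not part of the Ava codebase:

```typescript
// Minimal sketch of the proxy decision for /api/chat/completions.
// Every identifier below is a placeholder, not from the actual project.

interface ProxySettings {
  openAiKey?: string // corresponds to the OPEN_AI_KEY setting mentioned above
}

interface UpstreamRequest {
  url: string
  headers: Record<string, string>
  body: string
}

// Decide where an OpenAI-style chat completion request should go:
// if the selected model is remote and a key is configured, build a
// request for api.openai.com; otherwise return null and handle locally.
function buildUpstreamRequest(
  settings: ProxySettings,
  model: string,
  messages: { role: string; content: string }[],
): UpstreamRequest | null {
  const isRemote = model.startsWith("gpt-") // naive check, for illustration only
  if (!isRemote) return null // handled by the local model as before

  if (!settings.openAiKey) {
    throw new Error("OpenAI model selected but no API key configured")
  }

  return {
    url: "https://api.openai.com/v1/chat/completions",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${settings.openAiKey}`,
    },
    body: JSON.stringify({ model, messages }),
  }
}
```

Because the endpoint mirrors the OpenAI shape, the server handler could simply forward `body` and stream the response back, keeping the client code identical for local and remote models.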
@prabirshrestha

Would be great if Ava supported a remote OpenAI-compatible API. This would allow us to reuse the server and avoid loading the model multiple times when using it from a different app.

@sammcj

sammcj commented Mar 23, 2024

Would be really great if Ava supported using an already running Ollama instance via its API!

@cztomsik
Owner Author

> Would be really great if Ava supported using an already running Ollama instance via its API!

Yes, this is in the works, but not finished yet.

@cztomsik
Owner Author

cztomsik commented May 3, 2024

Just a small update: the UI part has been rewritten, and we now have an /api/chat/completions endpoint which is mostly compatible with OpenAI's, so hopefully we are really close to closing this.
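For reference, a request against the new endpoint might be built like this. The base URL, port, and model name are assumptions for illustration; only the endpoint path and the OpenAI-style body shape come from the thread:

```typescript
// Build an OpenAI-style chat completion request for /api/chat/completions.
// Base URL and model name are placeholders, not from the thread.

function chatCompletionRequest(
  baseUrl: string,
  model: string,
  userMessage: string,
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `${baseUrl}/api/chat/completions`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: userMessage }],
      }),
    },
  }
}

// Usage (assuming a local Ava server):
//   const { url, init } = chatCompletionRequest("http://localhost:3000", "llama-7b", "Hello")
//   const res = await fetch(url, init)
```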

What's missing:

  • Add a new field to the Settings page for the OpenAI API key
  • If it is filled in, we should offer GPT models in <ModelSelect> (it's not yet clear which ones, or how to configure that)
  • Decide whether the real API request should happen from the browser (simple) or from our /api endpoint (more work), because the first option would make the API key visible in the browser devtools panel
  • Disable some features if an OpenAI model is selected
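The model-gating described above could look something like this. The concrete model lists are placeholders (the thread explicitly says it is not yet decided which GPT models to offer), and the helper names are hypothetical:

```typescript
// Sketch of the <ModelSelect> gating: only offer GPT models when an
// OpenAI API key has been filled in. Model lists are placeholders.

const LOCAL_MODELS = ["llama-7b"] // whatever local models are installed
const OPENAI_MODELS = ["gpt-3.5-turbo", "gpt-4"] // assumed; not settled in the thread

// Models to show in the selector, depending on whether a key is set.
function availableModels(openAiKey: string | undefined): string[] {
  return openAiKey ? [...LOCAL_MODELS, ...OPENAI_MODELS] : LOCAL_MODELS
}

// Feature gating: e.g. hide the "partial" completion UI for remote models.
function isRemoteModel(model: string): boolean {
  return OPENAI_MODELS.includes(model)
}
```

A predicate like `isRemoteModel` could also drive the "visually distinctive" treatment mentioned in the original checklist.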

3 participants