Python OpenAI demos

This repository contains a collection of Python scripts that demonstrate how to use the OpenAI API to generate chat completions.

In increasing order of complexity, the scripts are listed below (a sketch of the shared API call follows the list):

  1. chat.py: The simplest script, which sends a single chat completion request to the API.
  2. chat_stream.py: Adds stream=True to the API call to return a generator that streams the completion as it is being generated.
  3. chat_history.py: Adds a back-and-forth chat interface using input() which keeps track of past messages and sends them with each chat completion call.
  4. chat_history_stream.py: The same idea, but with stream=True enabled.
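
All four scripts build on the same Chat Completions call. Below is a minimal sketch of the non-streaming and streaming variants, assuming the openai Python package and an OpenAI.com key in OPENAI_KEY; the scripts in this repository construct the client from the environment variables described in the configuration section, so treat this as illustrative rather than a copy of their code.

    import os

    import openai

    # Illustrative client setup for OpenAI.com; the repository's scripts
    # also support Azure OpenAI, Ollama, and GitHub models.
    client = openai.OpenAI(api_key=os.environ["OPENAI_KEY"])
    MODEL = os.environ.get("OPENAI_MODEL", "gpt-3.5-turbo")

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about Python."},
    ]

    # chat.py-style: a single, non-streaming completion
    response = client.chat.completions.create(model=MODEL, messages=messages)
    print(response.choices[0].message.content)

    # chat_stream.py-style: stream=True yields chunks as they are generated
    stream = client.chat.completions.create(model=MODEL, messages=messages, stream=True)
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")

chat_history.py and chat_history_stream.py wrap the same call in an input() loop, appending each user prompt and assistant reply to the messages list before the next request.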

The repository also contains scripts that demonstrate additional features.

Setting up the environment

If you open this up in a Dev Container or GitHub Codespaces, everything will be set up for you. If not, follow these steps:

  1. Set up a Python virtual environment and activate it.

  2. Install the required packages:

python -m pip install -r requirements.txt

Configuring the OpenAI environment variables

These scripts can be run against an Azure OpenAI account, OpenAI.com, a local Ollama server, or GitHub models, depending on the environment variables you set. (A sketch of how a script can read these variables follows the steps below.)

  1. Copy the .env.sample file to a new file called .env:

    cp .env.sample .env
  2. For Azure OpenAI, create an Azure OpenAI gpt-3.5 or gpt-4 deployment (perhaps using this template), and customize the .env file with your Azure OpenAI endpoint and deployment name.

    API_HOST=azure
    AZURE_OPENAI_ENDPOINT=https://YOUR-AZURE-OPENAI-SERVICE-NAME.openai.azure.com
    AZURE_OPENAI_DEPLOYMENT=YOUR-AZURE-DEPLOYMENT-NAME
    AZURE_OPENAI_VERSION=2024-03-01-preview

    If you are not yet logged into the Azure account associated with that deployment, run this command to log in:

    az login
  3. For OpenAI.com, customize the .env file with your OpenAI API key and desired model name.

    API_HOST=openai
    OPENAI_KEY=YOUR-OPENAI-API-KEY
    OPENAI_MODEL=gpt-3.5-turbo
  4. For Ollama, customize the .env file with your Ollama endpoint and model name (any model you've pulled).

    API_HOST=ollama
    OLLAMA_ENDPOINT=http://localhost:11434/v1
    OLLAMA_MODEL=llama2

    If you're running inside the Dev Container, replace localhost with host.docker.internal.

  5. For GitHub models, customize the .env file with your GitHub model name.

    API_HOST=github
    GITHUB_MODEL=gpt-4o

    You'll need a GITHUB_TOKEN environment variable that stores a GitHub personal access token. If you're running this inside a GitHub Codespace, the token will be automatically available. If not, generate a new personal access token and run this command to set the GITHUB_TOKEN environment variable:

    export GITHUB_TOKEN="<your-github-token-goes-here>"
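
However you configure it, a script reads these variables at startup and builds the matching client. The sketch below shows one way that can look, assuming the openai, python-dotenv, and azure-identity packages; the variable names match the .env settings above, but the repository's own client-construction code may differ, and the GitHub models base URL shown is an assumption to verify against the GitHub Models documentation.

    import os

    import azure.identity
    import openai
    from dotenv import load_dotenv

    # Load the variables from .env into the process environment
    load_dotenv()
    API_HOST = os.environ.get("API_HOST", "openai")

    if API_HOST == "azure":
        # Keyless auth: reuses the identity from `az login`
        token_provider = azure.identity.get_bearer_token_provider(
            azure.identity.DefaultAzureCredential(),
            "https://cognitiveservices.azure.com/.default",
        )
        client = openai.AzureOpenAI(
            api_version=os.environ["AZURE_OPENAI_VERSION"],
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            azure_ad_token_provider=token_provider,
        )
        model = os.environ["AZURE_OPENAI_DEPLOYMENT"]
    elif API_HOST == "ollama":
        # Ollama exposes an OpenAI-compatible endpoint; the API key is unused
        client = openai.OpenAI(base_url=os.environ["OLLAMA_ENDPOINT"], api_key="nokey")
        model = os.environ["OLLAMA_MODEL"]
    elif API_HOST == "github":
        # Assumed GitHub models endpoint; check the GitHub Models docs for the current URL
        client = openai.OpenAI(
            base_url="https://models.inference.ai.azure.com",
            api_key=os.environ["GITHUB_TOKEN"],
        )
        model = os.environ["GITHUB_MODEL"]
    else:
        client = openai.OpenAI(api_key=os.environ["OPENAI_KEY"])
        model = os.environ["OPENAI_MODEL"]

    # Quick smoke test: one chat completion against whichever host is configured
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hello!"}],
    )
    print(response.choices[0].message.content)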
