How to call the OpenAI ChatGPT API with Python

The OpenAI chatbot "ChatGPT" (https://chat.openai.com/chat) turns out to be rather capable not only of carrying casual conversations, but also of brainstorming ideas, translating into other languages and even generating code. See the full list of available features at https://beta.openai.com/examples.

You can interact with the chatbot manually via a web browser, but for bulk requests it is more convenient to write a script that connects to the OpenAI API.

Register for your OpenAI API key at https://beta.openai.com/account/api-keys. For more info about the API, see https://beta.openai.com/docs/quickstart/build-your-application.

The total number of so-called tokens handled within one API call (request + response) has a maximum limit, currently set to 2048 tokens (which should be around 1500 words). A token is essentially a part of a word.
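As a rough rule of thumb (this ratio is only a heuristic of ours, not the real tokenizer), one token corresponds to about 4 characters of English text. A small sketch for checking whether a prompt plus the requested completion fits under the limit:

```python
def estimate_tokens(text):
    """Very rough token estimate: ~1 token per 4 characters of English text.
    The actual count depends on the model's tokenizer."""
    return max(1, len(text) // 4)

def fits_in_limit(prompt, max_response_tokens, limit=2048):
    # Prompt tokens plus the requested completion tokens must stay under the limit.
    return estimate_tokens(prompt) + max_response_tokens <= limit

print(fits_in_limit("Translate 'business trip' to Spanish.", 100))  # True
```

For exact counts you would need the model's own tokenizer; the heuristic above is only good enough for a quick sanity check before sending a request.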

Keep in mind that the free API trial has a limited number of tokens. After you exceed the free tier limit, you will be charged for API calls. The free usage does not require your credit card info (at the time of writing).
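Since billing is per token, it can be handy to estimate the cost of a call from the usage field of the response. The price constant below is a placeholder assumption; check the current pricing on the OpenAI website before relying on it:

```python
# Assumed price per 1000 tokens (illustrative placeholder, not a quoted price).
PRICE_PER_1K_TOKENS = 0.02

def call_cost(total_tokens, price_per_1k=PRICE_PER_1K_TOKENS):
    """Estimate the cost of one API call from its total token usage."""
    return total_tokens / 1000 * price_per_1k

print(call_cost(19))  # cost of a 19-token call at the assumed price
```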

To connect to the OpenAI GPT API, you will need to sign up for an API key on the OpenAI website and install the openai Python package. Here is an example of how you can use the openai package to call the GPT API in Python:

First, create a Python virtual environment and install the openai package using pip:

coil@coil-VM:~/Desktop/openai_API$ python3 -m venv .env
coil@coil-VM:~/Desktop/openai_API$ source .env/bin/activate
(.env) coil@coil-VM:~/Desktop/openai_API$ pip install openai

Create a config.py file for the private OpenAI API key and add the following contents into it:
OPENAI_API_KEY = "YOUR_API_KEY_HERE"

You will need to replace "YOUR_API_KEY_HERE" with your actual API key, which you can obtain by signing up for an OpenAI account and creating an API key on the OpenAI website.
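Alternatively (a common pattern, not something the openai package requires), you can keep the key out of source files entirely and read it from an environment variable in config.py, with an obvious placeholder as fallback:

```python
import os

# Read the key from the OPENAI_API_KEY environment variable if present;
# the placeholder fallback makes a missing key easy to spot.
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "YOUR_API_KEY_HERE")

if OPENAI_API_KEY == "YOUR_API_KEY_HERE":
    print("Warning: OPENAI_API_KEY environment variable is not set")
```

This way the key never ends up in version control by accident.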

Next, import the openai module and set your API key in the main script:

import openai

import config  # config.py in the same directory

openai.api_key = config.OPENAI_API_KEY

Now you can call the GPT API using the openai.Completion.create() method. This method takes a variety of parameters, including the prompt (the text you want to complete), the model to use (e.g. "text-davinci-003") and max_tokens (the maximum number of tokens to generate in the completion). At the time of writing, the most advanced model is text-davinci-003.
Here's an example of how you can use the openai.Completion.create() function to generate a completion for a given prompt:

import os
import openai

import config  # config.py file in the same directory as our app.py

openai.api_key = config.OPENAI_API_KEY

prompt = "Translate 'business trip' to Spanish."
model = "text-davinci-003"
max_tokens = 100

response = openai.Completion.create(
    engine=model, prompt=prompt, max_tokens=max_tokens, n=1, stop=None, temperature=0.5
)

print('--- Response payload: ---')
print(response)
print()

print('--- Prompt: ---')
print(prompt)
print()

print('--- Response text: ---')
print(response["choices"][0]["text"].strip())

This code will generate a completion of up to max_tokens tokens based on the given prompt using the "text-davinci-003" model. The generated text is in response["choices"][0]["text"].

Detailed article about possible model parameters that we can experiment with:
https://towardsdatascience.com/gpt-3-parameters-and-prompt-design-1a595dc5b405.

Example of API call with wider set of parameters:

response = openai.Completion.create(
    engine=model, 
    prompt=prompt, 
    max_tokens=max_tokens, 
    n=1, 
    stop=None, 
    temperature=0.5,
    top_p=1.0,
    frequency_penalty=0.8,
    presence_penalty=0.0,
)

In a very simplified manner, we can say that the temperature and top_p parameters affect the creativity of the model (1 is the neutral value; use only one of these parameters and leave the other at its neutral setting), while the penalty parameters restrict content in the answer based on the frequency of certain tokens, so the AI does not repeat itself in the answers.
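One way to experiment with this is to keep named parameter sets and pass one of them to the call via keyword-argument unpacking. The preset names and values below are our own illustrative choices, not official recommendations:

```python
# Illustrative presets -- the names and values are our own choices.
DETERMINISTIC = {
    "temperature": 0.0,        # always pick the most likely next token
    "top_p": 1.0,              # neutral: do not restrict the sampling pool
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
}

CREATIVE = {
    "temperature": 0.9,        # more randomness in token choice
    "top_p": 1.0,              # keep top_p neutral when tuning temperature
    "frequency_penalty": 0.8,  # discourage repeating the same tokens
    "presence_penalty": 0.0,
}

# Usage: openai.Completion.create(engine=model, prompt=prompt,
#                                 max_tokens=100, **CREATIVE)
```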

The API response is a JSON payload. We can treat it as a regular dictionary in Python.

{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "\n\nViaje de negocios"
    }
  ],
  "created": 1672438737,
  "id": "cmpl-6TI0HKCoZgbUm5e1AyCPu31LrYSPN",
  "model": "text-davinci-003",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 8,
    "prompt_tokens": 11,
    "total_tokens": 19
  }
}

To get the text from the payload, we can access it as follows:
print(response["choices"][0]["text"].strip())
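A small sketch of pulling the useful fields out of such a payload, using a dictionary shaped like the example above as test data (the helper name is our own):

```python
def extract_completion(response):
    """Return (completion text, total tokens used) from a Completion payload."""
    text = response["choices"][0]["text"].strip()
    total_tokens = response["usage"]["total_tokens"]
    return text, total_tokens

# Example payload shaped like the one shown above.
sample = {
    "choices": [{"finish_reason": "stop", "index": 0,
                 "logprobs": None, "text": "\n\nViaje de negocios"}],
    "usage": {"completion_tokens": 8, "prompt_tokens": 11, "total_tokens": 19},
}

text, used = extract_completion(sample)
print(text)  # Viaje de negocios
print(used)  # 19
```

Keeping the extraction in one place is convenient when you later want to log token usage for cost tracking.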

Run the app with python3 app.py:

(.env) coil@coil-VM:~/Desktop/openai_API$ python3 app.py
--- Response payload: ---
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "\n\n\ucd9c\uc7a5"
    }
  ],
  "created": 1672438618,
  "id": "cmpl-6THyMq3YmDiFROfpHwl05LJi2bJQ3",
  "model": "text-davinci-003",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 8,
    "prompt_tokens": 11,
    "total_tokens": 19
  }
}

--- Prompt: ---
Translate 'business trip' to Korean.

--- Response text: ---
좜μž₯
(.env) coil@coil-VM:~/Desktop/openai_API$

You can monitor your API usage via https://beta.openai.com/account/usage.

Note: The Codex model (code-davinci-002) might be an interesting choice for code generation in the future; it is currently in a limited beta. It should be efficient at translating natural-language descriptions directly into code, and it has a high total token limit (several thousand tokens), so it should be able to generate larger amounts of code.
