
LLMs

openai_call(prompt, use_gpt4=False, temperature=0.5, max_tokens=100)

Calls the OpenAI API to generate a response to a given prompt.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `prompt` | `str` | The prompt to generate a response to. | *required* |
| `use_gpt4` | `bool` | Whether to use GPT-4 or GPT-3.5. | `False` |
| `temperature` | `float` | The sampling temperature of the response. | `0.5` |
| `max_tokens` | `int` | The maximum number of tokens to generate. | `100` |

Returns:

| Type | Description |
| --- | --- |
| `str` | The generated response. |

Examples:

```python
>>> openai_call("Hello, how are you?")
"I'm doing great, thanks for asking!"
```
Notes

The OpenAI API key must be set in the environment variable OPENAI_API_KEY.
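Since a missing key only surfaces as an authentication error deep inside the API call, a caller might check for it up front. The helper below is a hypothetical sketch, not part of autoresearcher:

```python
import os


def require_api_key() -> str:
    """Return the OpenAI API key from the environment, or fail fast.

    Hypothetical helper (not part of autoresearcher): raising early gives a
    clearer error than an authentication failure inside openai_call().
    """
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "Set the OPENAI_API_KEY environment variable before calling openai_call()."
        )
    return key
```

In practice the key is exported in the shell (`export OPENAI_API_KEY=...`) rather than set from Python.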

Source code in autoresearcher/llms/openai.py
```python
import openai


def openai_call(
    prompt: str, use_gpt4: bool = False, temperature: float = 0.5, max_tokens: int = 100
):
    """
    Calls the OpenAI API to generate a response to a given prompt.

    Args:
      prompt (str): The prompt to generate a response to.
      use_gpt4 (bool, optional): Whether to use GPT-4 or GPT-3.5. Defaults to False.
      temperature (float, optional): The sampling temperature of the response. Defaults to 0.5.
      max_tokens (int, optional): The maximum number of tokens to generate. Defaults to 100.

    Returns:
      str: The generated response.

    Examples:
      >>> openai_call("Hello, how are you?")
      "I'm doing great, thanks for asking!"

    Notes:
      The OpenAI API key must be set in the environment variable OPENAI_API_KEY.
    """
    messages = [{"role": "user", "content": prompt}]
    if not use_gpt4:
        # Call the GPT-3.5 turbo chat model
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages,
            temperature=temperature,
            max_tokens=max_tokens,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0,
        )
    else:
        # Call the GPT-4 chat model
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=messages,
            temperature=temperature,
            max_tokens=max_tokens,
            n=1,
            stop=None,
        )
    return response.choices[0].message.content.strip()
```