OpenAI

Basic Usage

const llm = createLlmOpenAi({ // OpenAIOptions
  modelName: "gpt-3.5-turbo",
  openAIApiKey: "your-open-ai-key", // optional.
  maxTokens: 500, // optional.
  temperature: 0, // optional.
});

Generally, you pass the LLM instance off to an LLM Executor and call that (a sketch of that flow follows the example below). However, it is also possible to interact with the LLM object directly if you want to.

// given array of chat messages, calls chat completion
await llm.chat([]);

// given string prompt, calls completion
await llm.completion("");
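
For the more common executor-based flow, here is a minimal sketch. It assumes createLlmExecutor and createChatPrompt helpers exported by the same package as createLlmOpenAi; those names and signatures are assumptions, not confirmed by this page.

// a minimal sketch: createLlmExecutor and createChatPrompt are assumed to be
// exported by the same package as createLlmOpenAi
const prompt = createChatPrompt("You are a helpful assistant.\n{{question}}");

const executor = createLlmExecutor({
  llm, // the instance created in Basic Usage above
  prompt,
});

const result = await executor.execute({ question: "What is an LLM executor?" });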

TIP

Note: The OpenAILlm checks that you are using the correct prompt type when calling chat vs completion, and will throw an error if the prompt type does not match the model.
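
For example, gpt-3.5-turbo is a chat model, so calling completion with it should trip this check (the exact error type and message are not documented here):

// assuming `llm` was configured with a chat model such as gpt-3.5-turbo
try {
  await llm.completion("A plain text prompt");
} catch (error) {
  console.error(error); // prompt type does not match the model
}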

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| openAIApiKey | string | undefined | API key for OpenAI. Can optionally be set using process.env.OPEN_AI_API_KEY |
| model | string | gpt-3.5-turbo | The model to use. Can be any one of: gpt-4, gpt-3.5-turbo, davinci, text-curie-001, text-babbage-001, text-ada-001 |
| temperature | number | 0 | See OpenAI Docs |
| maxTokens | number | 500 | See OpenAI Docs |
| topP | number \| null | null | See OpenAI Docs |
| n | number \| null | null | See OpenAI Docs |
| stream | boolean \| null | null | See OpenAI Docs. Note: Not supported yet. |
| stop | ? | null | See OpenAI Docs |
| presencePenalty | number \| null | null | See OpenAI Docs |
| frequencyPenalty | number \| null | null | See OpenAI Docs |
| logitBias | object \| null | null | See OpenAI Docs |
| user | string \| null | null | See OpenAI Docs |
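
Putting a few of these options together, with the API key supplied through process.env.OPEN_AI_API_KEY rather than passed directly (the values below are illustrative only):

// key is read from process.env.OPEN_AI_API_KEY; the model falls back to the
// default (gpt-3.5-turbo)
const llm = createLlmOpenAi({
  temperature: 0.2,
  maxTokens: 1000,
  presencePenalty: 0,
  frequencyPenalty: 0,
  user: "end-user-id-123",
});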

OpenAI Docs: link

OpenAI LLM Methods

chat Calls the chat completions endpoint. Must be used with a chat prompt and one of these models: gpt-4, gpt-3.5-turbo.

completion Calls the completions endpoint. Must be used with a text prompt and one of these models: davinci, text-curie-001, text-babbage-001.

getMetrics() Returns an object containing the total prompt and completion tokens used across all calls to the API.

calculatePrice() Calculate the API call cost based on the model used and the number of input and output tokens. @param input_tokens - The number of input tokens. @param output_tokens - The number of output tokens (defaults to 0). @returns An object containing the input/output token counts and the calculated cost.

logMetrics() Log a table containing usage metrics for the OpenAI API.
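
A short sketch of the usage-tracking methods above; the exact shape of the returned objects is not documented on this page, so the logged output is only indicative.

await llm.chat([/* chat messages */]);

// total prompt and completion tokens across all calls so far
const metrics = llm.getMetrics();
console.log(metrics);

// estimate cost for 1,200 input tokens and 300 output tokens (illustrative numbers)
const price = llm.calculatePrice(1200, 300);
console.log(price);

// print a table of usage metrics to the console
llm.logMetrics();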
