
OpenAI

When using OpenAI models, llm-exe makes POST requests to https://api.openai.com/v1/chat/completions. All models are supported: pass openai.chat.v1 as the first argument, then specify a model in the options.

Basic Usage

OpenAI Chat

ts
const llm = useLlm("openai.chat.v1", {
  model: "gpt-4o", // specify a model
});

OpenAI Chat By Model

ts
const llm = useLlm("openai.gpt-4o", {
  // other options,
  // no model needed, using gpt-4o
});
INFO
The following models are available via this shorthand:
  • openai.gpt-5.2
  • openai.gpt-5-mini
  • openai.gpt-5-nano
  • openai.gpt-4.1
  • openai.gpt-4.1-mini
  • openai.gpt-4.1-nano
  • openai.o3
  • openai.o4-mini
  • openai.gpt-4o
  • openai.gpt-4o-mini

Authentication

To authenticate, you need to provide an OpenAI API key. You can provide the key in several ways, depending on your use case.

  1. Pass it in the execute options as openAIApiKey
  2. Pass it in the setup options as openAIApiKey
  3. Set a default key via the OPENAI_API_KEY environment variable
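Assuming the options above resolve in the order listed (execute options first, then setup options, then the environment variable), the lookup can be sketched as a small standalone function. The function name and option shapes here are illustrative, not part of llm-exe's API:

```typescript
// Hypothetical sketch of the key-resolution order described above:
// execute options win over setup options, which win over OPENAI_API_KEY.
function resolveApiKey(
  executeOptions: { openAIApiKey?: string },
  setupOptions: { openAIApiKey?: string }
): string | undefined {
  return (
    executeOptions.openAIApiKey ??
    setupOptions.openAIApiKey ??
    process.env.OPENAI_API_KEY
  );
}
```

A key passed at execute time would therefore override one provided at setup, which in turn overrides the environment default.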

Generally you pass the LLM instance to an LLM Executor and call that. However, you can also interact with the LLM object directly if you want.

ts
// call the LLM directly with a prompt
await llm.call(prompt);

OpenAI-Specific Options

In addition to the generic options, the following options are OpenAI-specific and can be passed in when creating an llm function.

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | gpt-4o-mini | The model to use. Can be any valid chat model. See OpenAI Docs |
| openAIApiKey | string | undefined | API key for OpenAI. See authentication |
| topP | number | undefined | Maps to top_p. See OpenAI Docs |
| useJson | boolean | undefined | When true, sets response_format to json_object |
| effort | string | undefined | Maps to reasoning_effort. Valid values: "minimal", "low", "medium", "high". Only supported by reasoning models (e.g. the o-series). |

See OpenAI API Reference for details on these parameters.
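As a rough illustration of how these options relate to the underlying request, the sketch below maps llm-exe-style option names onto the field names the OpenAI chat completions API uses (top_p, response_format, reasoning_effort). This is a simplified stand-in, not llm-exe's actual implementation:

```typescript
// Illustrative option shape mirroring the table above.
interface OpenAISpecificOptions {
  model?: string;
  topP?: number;
  useJson?: boolean;
  effort?: "minimal" | "low" | "medium" | "high";
}

// Sketch of translating the options into OpenAI request-body fields.
// Undefined options are simply omitted from the body.
function toRequestBody(opts: OpenAISpecificOptions) {
  return {
    model: opts.model ?? "gpt-4o-mini", // table default
    ...(opts.topP !== undefined && { top_p: opts.topP }),
    ...(opts.useJson && { response_format: { type: "json_object" } }),
    ...(opts.effort && { reasoning_effort: opts.effort }),
  };
}
```

For example, `toRequestBody({ topP: 0.9, useJson: true })` would yield a body with `top_p: 0.9` and `response_format: { type: "json_object" }`, while leaving `reasoning_effort` out entirely.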