# OpenAI
When using OpenAI models, llm-exe makes POST requests to https://api.openai.com/v1/chat/completions. All chat models are supported: pass `openai.chat.v1` as the first argument, then specify a model in the options.
## Basic Usage
### OpenAI Chat

```ts
const llm = useLlm("openai.chat.v1", {
  model: "gpt-4o", // specify a model
});
```

### OpenAI Chat By Model
```ts
const llm = useLlm("openai.gpt-4o", {
  // other options,
  // no model needed, using gpt-4o
});
```

> **INFO**
> You can use the following models with this shorthand:
>
> `openai.gpt-5.2`, `openai.gpt-5-mini`, `openai.gpt-5-nano`, `openai.gpt-4.1`, `openai.gpt-4.1-mini`, `openai.gpt-4.1-nano`, `openai.o3`, `openai.o4-mini`, `openai.gpt-4o`, `openai.gpt-4o-mini`
## Authentication
To authenticate, you need to provide an OpenAI API key. You can provide the key in various ways, depending on your use case:
- Pass it in as execute options using `openAIApiKey`
- Pass it in as setup options using `openAIApiKey`
- Use a default key by setting the `OPENAI_API_KEY` environment variable
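As an illustrative sketch only (this is not llm-exe's actual internals, and `resolveOpenAIApiKey` is a hypothetical helper), key resolution following the order above would look like this: execute options win over setup options, which win over the environment variable.

```typescript
// Hypothetical sketch of API-key precedence. llm-exe's real
// resolution logic may differ; this only illustrates the order
// described in the list above.
declare const process: { env: Record<string, string | undefined> };

function resolveOpenAIApiKey(
  executeOptions?: { openAIApiKey?: string },
  setupOptions?: { openAIApiKey?: string }
): string | undefined {
  return (
    executeOptions?.openAIApiKey ?? // 1. execute options
    setupOptions?.openAIApiKey ??   // 2. setup options
    process.env.OPENAI_API_KEY      // 3. environment variable
  );
}
```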
Generally you pass the LLM instance off to an LLM Executor and call that. However, you can also interact with the LLM object directly if you want.
```ts
// call the LLM directly with a prompt
await llm.call(prompt);
```

## OpenAI-Specific Options
In addition to the generic options, the following options are OpenAI-specific and can be passed in when creating an LLM function.
| Option | Type | Default | Description |
|---|---|---|---|
| model | string | gpt-4o-mini | The model to use. Can be any valid chat model. See OpenAI Docs |
| openAIApiKey | string | undefined | API key for OpenAI. See authentication |
| topP | number | undefined | Maps to top_p. See OpenAI Docs |
| useJson | boolean | undefined | When true, sets response_format to json_object |
| effort | string | undefined | Maps to reasoning_effort. Valid values: "minimal", "low", "medium", "high". Only supported with reasoning models (e.g. o-series). |
See OpenAI API Reference for details on these parameters.
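For intuition, here is a sketch of how the options above could map onto the Chat Completions request body. The right-hand field names (`top_p`, `response_format`, `reasoning_effort`) are real OpenAI parameters; `toRequestBody` itself is a hypothetical helper, not llm-exe's actual serialization.

```typescript
// Hypothetical mapping from llm-exe-style options to the
// OpenAI /v1/chat/completions request body, for illustration only.
interface LlmOptions {
  model?: string;
  topP?: number;
  useJson?: boolean;
  effort?: "minimal" | "low" | "medium" | "high";
}

function toRequestBody(options: LlmOptions): Record<string, unknown> {
  const body: Record<string, unknown> = {
    model: options.model ?? "gpt-4o-mini", // default from the table above
  };
  if (options.topP !== undefined) body.top_p = options.topP;
  if (options.useJson) body.response_format = { type: "json_object" };
  if (options.effort !== undefined) body.reasoning_effort = options.effort;
  return body;
}
```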
