
llm-exe

Build type-safe LLM agents and AI functions with modular TypeScript components. Works with any provider, no framework lock-in.

Every LLM API Works Differently
Every LLM project starts the same way: debugging JSON errors, writing boilerplate retries, juggling timeouts, and hoping your parse doesn't break. It sucks.
  • JSON.parse() with fingers crossed
  • Everything is type any
  • Manual validation for every response
  • All that, and you still only support one provider
typescript
// Every LLM project starts like this...
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: makePrompt(data) }],
  response_format: { type: "json_object" },
});
const text = response.choices[0].message.content;
const parsed = JSON.parse(text); // 🤞 hope it's valid JSON

// Type safety? lol?
const category = parsed.category; // any
const items = parsed.items; // undefined? array? who knows

// Oh right, need to validate this somehow
if (!["bug", "feature", "question"].includes(category)) {
  // Model hallucinated a new category. Now what?
}

// TODO: Add retries
// TODO: Add tests
// TODO: Switch to Claude when this fails
What if LLM Calls Were Just Normal Functions?
What if every LLM call were as reliable as calling a regular function? Type-safe inputs, validated outputs, built-in retries. Just async functions that happen to be powered by AI.
  • Real TypeScript types, no more any/unknown
  • Validated outputs that match your schema
  • Just import and call, like any other function
  • One-line provider switching
typescript
import {
  useLlm,
  createChatPrompt,
  createParser,
  createLlmExecutor,
} from "llm-exe";

// Define once, use everywhere
async function llmClassifier(text: string) {
  return createLlmExecutor({
    llm: useLlm("openai.gpt-4o-mini"),
    prompt: createChatPrompt<{ text: string }>(
      "Classify this as 'bug', 'feature', or 'question': {{text}}"
    ),
    parser: createParser("stringExtract", {
      enum: ["bug", "feature", "question"],
    }),
  }).execute({ text });
}

// It's just a typed function now
const category = await llmClassifier(userInput);
// category is typed as "bug" | "feature" | "question" ✨
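Switching providers touches only the useLlm line. As a sketch, here is the same classifier pointed at Anthropic instead (using the anthropic.claude-3-5-haiku id that appears later on this page); everything else is unchanged:
typescript
// Same prompt, parser, and return type; only the provider string changed.
async function llmClassifierClaude(text: string) {
  return createLlmExecutor({
    llm: useLlm("anthropic.claude-3-5-haiku"), // was "openai.gpt-4o-mini"
    prompt: createChatPrompt<{ text: string }>(
      "Classify this as 'bug', 'feature', or 'question': {{text}}"
    ),
    parser: createParser("stringExtract", {
      enum: ["bug", "feature", "question"],
    }),
  }).execute({ text });
}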
Build Complex Flows from Simple Parts
Build complex AI workflows from simple, modular functions. Each executor is self-contained—swap models, update prompts, or change parsers without touching the rest of your code.
  • Prompt + LLM + Parser = Executor
  • Each piece is swappable
  • Chain them together
  • Test them separately
typescript
// Each piece does one thing well
const summarizer = createLlmExecutor({
  llm: useLlm("openai.gpt-4o-mini"),
  prompt: createChatPrompt("Summarize: {{text}}"),
  parser: createParser("string"),
});

const translator = createLlmExecutor({
  llm: useLlm("anthropic.claude-3-5-haiku"),
  prompt: createChatPrompt("Translate to {{language}}: {{text}}"),
  parser: createParser("string"),
});

// Combine them naturally
const summary = await summarizer.execute({ text: article });
const spanish = await translator.execute({
  text: summary,
  language: "Spanish",
});
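Because executors are plain async functions, composing them is ordinary TypeScript. A sketch that wraps the two executors above into one reusable pipeline (the summarizeTo helper is our own illustration, not part of llm-exe), so each stage can still be tested on its own:
typescript
// Illustrative helper: chain the executors defined above.
async function summarizeTo(text: string, language: string): Promise<string> {
  const summary = await summarizer.execute({ text });
  return translator.execute({ text: summary, language });
}

const spanishSummary = await summarizeTo(article, "Spanish");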
Production-Ready Out of the Box
No more manual retry loops. No more parsing prayers. Built-in error handling, timeouts, and hooks. If the output doesn't match your schema, you'll know immediately.
  • Automatic retries and timeouts
  • Schema validation that throws on mismatch
  • Hooks for logging and monitoring
  • Did I mention you can switch LLMs with one line?
typescript
const analyst = createLlmExecutor(
  {
    llm: useLlm("openai.gpt-4o"),
    prompt: createChatPrompt<{ data: any }>(
      "Analyze this data and return insights as JSON: {{data}}"
    ),
    parser: createParser("json", {
      schema: {
        insights: { type: "array", items: { type: "string" } },
        score: { type: "number", min: 0, max: 100 },
      },
    }),
  },
  {
    // Built-in retry, timeout, hooks
    maxRetries: 3,
    timeout: 30000,
    hooks: {
      onSuccess: (result) => logger.info("Analysis complete", result),
      onError: (error) => logger.error("Analysis failed", error),
    },
  }
);

// Guaranteed to match schema or throw
const { insights, score } = await analyst.execute({ data: salesData });

// You can also bind events to an executor!
analyst.on("complete", (result) => {
  logger.info("Analysis complete", result);
});
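When retries are exhausted or the output still misses the schema, the executor surfaces a normal exception, so failures are handled with plain try/catch. A minimal sketch (the logging is illustrative):
typescript
try {
  const report = await analyst.execute({ data: salesData });
  console.log(report.insights.length, report.score);
} catch (error) {
  // After maxRetries failed attempts you get an exception, not silent bad data.
  logger.error("Analysis permanently failed", error);
}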
Build Agents with Your Functions
Transform any function into an agent capability. Build autonomous AI agents that can use your database, APIs, and business logic—even with models that don't natively support function calling. You control what agents can do.
  • Works with ALL models, even without native function calling
  • The LLM plans what to do, you control execution
  • Build agents without complex frameworks
  • You control the execution flow and security
typescript
import { createCallableExecutor, useExecutors } from "llm-exe";

// Your existing code becomes LLM-callable
const queryDB = createCallableExecutor({
  name: "query_database",
  description: "Query our PostgreSQL database",
  input: "SQL query to execute",
  handler: async ({ input }) => {
    const results = await db.query(input); // Your actual DB!
    return { result: results.rows };
  },
});

const sendEmail = createCallableExecutor({
  name: "send_email",
  description: "Send email via our email service",
  input: "JSON with 'to', 'subject', 'body'",
  handler: async ({ input }) => {
    const { to, subject, body } = JSON.parse(input);
    await emailService.send({ to, subject, body }); // Real emails!
    return { result: "Email sent successfully" };
  },
});

// Let the LLM use your tools
const assistant = createLlmExecutor({
  llm: useLlm("openai.gpt-4o"),
  prompt: createChatPrompt<{ request: string }>(
    `Help the user with their request: {{request}}
You can query the database and send emails.`
  ),
  parser: createParser("json"),
});

const tools = useExecutors([queryDB, sendEmail]);

// LLM decides what to do and calls YOUR functions
const plan = await assistant.execute({
  request: "Send our top 5 customers a thank you email",
});
// LLM might return: { action: "query_database", input: "SELECT email FROM customers ORDER BY revenue DESC LIMIT 5" }

const result = await tools.callFunction(plan.action, plan.input);
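The single plan/act round above extends naturally to a loop. A minimal sketch of that cycle; the { action, input, done } response shape and the {{history}} placeholder are conventions defined here for illustration, not llm-exe API:
typescript
// Illustrative agent loop: plan, act, feed results back, repeat.
const agent = createLlmExecutor({
  llm: useLlm("openai.gpt-4o"),
  prompt: createChatPrompt<{ request: string; history: string }>(
    `Help the user with their request: {{request}}
Previous tool results: {{history}}
Respond with JSON: { "action": string, "input": string, "done": boolean }`
  ),
  parser: createParser("json"),
});

let history = "none yet";
for (let step = 0; step < 5; step++) {
  const plan = await agent.execute({
    request: "Send our top 5 customers a thank you email",
    history,
  });
  if (plan.done) break; // the model reports the task is finished
  const result = await tools.callFunction(plan.action, plan.input);
  history += `\n${plan.action} -> ${JSON.stringify(result)}`;
}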
One-Line to Switch Providers
typescript
// Change ONE line to switch providers
const llm = useLlm("openai.gpt-4o");
// const llm = useLlm("anthropic.claude-3-5-sonnet");
// const llm = useLlm("google.gemini-2.0-flash");
// const llm = useLlm("xai.grok-2");
// const llm = useLlm("ollama.llama-3.3-70b");

// Everything else stays exactly the same ✨

Why Developers Love llm-exe

"Finally, LLM calls that don't feel like stringly-typed nightmares."
— A Developer Really Said This
"Switched from OpenAI to Claude in literally one line. Everything just worked."
— Tech Lead, Series B Fintech
"The type safety alone saved us hours of debugging. The composability changed how we build."
— Principal Engineer, Fortune 500
"As an AI, I shouldn't play favorites... but being able to switch providers with one line means developers can always choose the best model for the job. Even if it's not me."
— Claude, Anthropic

Ready to Build Something Incredible?

Stop wrestling with LLM APIs. Start shipping AI features that actually work.