
Introduction

When writing LLM-powered functions, you end up repeating a lot of code and quickly need some structure. A prompt may seem simple, but as your instructions grow, you may need more advanced abstractions. Likewise, if you call an LLM from various functions in your code, you end up writing a wrapper around it so you can share functionality like logging, collecting metrics, and handling failures, timeouts, and retries. On top of that, the LLM returns a string that you may need to validate or parse into a usable data type. Will you duplicate that code in every function, or create shareable output parsers?
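
For example, a hand-rolled wrapper tends to grow into something like this sketch (the `callModel` parameter is hypothetical, standing in for whatever raw LLM call you use):

typescript
// Hypothetical hand-rolled wrapper: most projects accumulate some version
// of this logging-and-retry boilerplate around their raw LLM call.
async function callWithRetry(
  callModel: (prompt: string) => Promise<string>,
  prompt: string,
  retries = 3
): Promise<string> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      console.log(`llm call, attempt ${attempt}`); // ad-hoc logging
      return await callModel(prompt);
    } catch (err) {
      if (attempt === retries) throw err; // give up after the last attempt
    }
  }
  throw new Error("unreachable");
}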

This package aims to be those lightweight abstractions.

Take the example below. The first block is a function that takes a question as input and responds with yes or no.

Example: An LLM-powered function - Yes/No bot.

NOTE

This example does not use llm-exe

This example uses the OpenAI Node.js package directly. It's likely where you'd start. It works, but as explained below, it's more of a proof of concept.

typescript
import { Configuration, OpenAIApi } from "openai";

const openAiClient = new OpenAIApi(
  new Configuration({
    apiKey: process.env.OPENAI_API_KEY,
  })
);

export async function YesOrNoBot(
  question: string
): Promise<{ response: string }> {
  const model = "gpt-4o-mini";

  const messages = [
    {
      role: "system",
      content: `You are not an assistant, I need you to reply with only 
  'yes' or 'no' as an answer to the question below. Do not explain yourself 
  or ask questions. Answer with only yes or no.`,
    },
    {
      role: "user",
      content: question,
    },
    {
      role: "system",
      content: `yes or no:`,
    },
  ];

  const response = await openAiClient.createChatCompletion({
    model: model,
    messages,
    temperature: 0,
    max_tokens: 160,
  });

  const { data } = response;
  const [choice] = data.choices;
  const { message, finish_reason } = choice;
  // `message` may be undefined in the SDK typings, so guard the destructure.
  const { content = "" } = message ?? {};
  if (finish_reason !== "stop") {
    console.log("unexpected finish_reason:", finish_reason);
  }
  return { response: content.trim() };
}
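
Calling this version works, but nothing constrains what comes back: the model may reply "Yes.", "yes!", or a full sentence, and the `{ response: string }` type cannot express the yes/no contract. A sketch of what call sites end up doing:

typescript
const { response } = await YesOrNoBot("Is water wet?");

// Every call site repeats some variant of this cleanup, because the
// type system only knows it received a string.
const normalized = response.toLowerCase().replace(/[^a-z]/g, "");
if (normalized !== "yes" && normalized !== "no") {
  throw new Error(`unexpected model output: ${response}`);
}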

Example (with llm-exe): An llm-exe-powered Yes/No bot.

TIP

This example does use llm-exe!

This example uses llm-exe to accomplish the same task.

typescript
import {
  useLlm,
  createChatPrompt,
  createParser,
  createLlmExecutor,
} from "llm-exe";

export const instruction = `You are not an assistant, I need you to reply with only 
  'yes' or 'no' as an answer to the question below. Do not explain yourself 
  or ask questions. Answer with only yes or no.`;

// The {{ input }} placeholder is filled in when the executor runs.
export const prompt = createChatPrompt(instruction)
  .addUserMessage(`{{ input }}`)
  .addSystemMessage(`yes or no:`);

export async function YesOrNoBot<I extends string>(
  input: I
): Promise<{ response: string }> {
  const llm = useLlm("openai.chat.v1", {
    model: "gpt-4o-mini",
  });

  const parser = createParser("stringExtract", { enum: ["yes", "no"] });
  const response = await createLlmExecutor({ llm, prompt, parser }).execute({
    input,
  });
  return { response };
}
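
Calling it is the same as calling any other async function; a minimal usage sketch (the question is just an example):

typescript
const { response } = await YesOrNoBot("Is TypeScript a superset of JavaScript?");
console.log(response); // "yes" or "no", enforced by the parser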

This is a simple example, but it highlights some key differences:

  • Both examples use OpenAI, but with llm-exe you could swap in a model from a different provider without rewriting the function (see the sketch after this list).
  • The llm-exe version is structured so that each component (prompt, parser, executor) can be tested on its own (also sketched below).
  • Extra configuration details are hidden away.
  • Parsing the output: the llm-exe version enforces the output we want and keeps it well-typed, with no response handling left to the call site.
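
To make the first two bullets concrete, here is a hedged sketch. The Anthropic provider key and model name are assumptions (check the llm-exe docs for the identifiers your version supports), and it assumes the parser object exposes its `parse` step directly:

typescript
// Swapping providers leaves the prompt and parser untouched. The key
// "anthropic.chat.v1" and the model name are assumed, not verified.
const claude = useLlm("anthropic.chat.v1", {
  model: "claude-3-5-sonnet-latest",
});

// The parser can be exercised in isolation, with no API key or network
// call, which makes unit testing straightforward.
const parser = createParser("stringExtract", { enum: ["yes", "no"] });
console.log(parser.parse("yes")); // "yes"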