LLM Executor Function Syntax

llm-exe is designed as a set of building blocks, letting you use the pieces however you wish. Below are some suggested ways to contain an LLM executor so it can be reused throughout your application. Ultimately, do what works best for you.

LLM Executor Function

All examples below will share this prompt instruction and input interface.

const PROMPT = `Based on the user input and time of day,
come up with a list of greetings we could say to the user.

You must reply with a list of relevant greetings. For example:
- <greeting option 1>
- <greeting option 2>
- <greeting option 3>

The time of day is: {{timeOfDay}}

The user said: {{userInput}}`;

type TimeOfDay = "morning" | "afternoon" | "evening";

interface ExampleInput {
  userInput: string;
  timeOfDay: TimeOfDay;
}

Return an Executor

One way to structure the function is to have it return the executor itself. This is useful because you get back an instance of LlmExecutor, so you can attach hooks, listen for events, and so on.

// These imports are assumed to be exported from llm-exe; adjust to your version.
import { createLlmExecutor, createChatPrompt, createParser, OpenAIMock } from "llm-exe";

export function llmExecutorExample<I extends ExampleInput>() {
  const llm = new OpenAIMock({});
  const prompt = createChatPrompt<I>(PROMPT);
  const parser = createParser("listToArray");
  return createLlmExecutor({
    prompt,
    parser,
    llm,
  });
}

When using this approach, whenever you import this function, you will need to call execute on the executor it returns. For example:

import { llmExecutorExample } from "example-above";

const executor = llmExecutorExample();

const result = await executor.execute({
  userInput: "Hello!",
  timeOfDay: "morning",
});
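
Because the function returns the executor itself, the same instance can be reused, and you can attach listeners before executing. A minimal sketch continuing from above (the "complete" event name is an assumption, not a confirmed llm-exe API; check the docs for the exact hook/event names):

// Hypothetical: attach an event listener on the instance before executing.
// executor.on("complete", (output) => console.log(output));

// The same instance can be reused across calls:
const morning = await executor.execute({ userInput: "Hi!", timeOfDay: "morning" });
const evening = await executor.execute({ userInput: "Heading home", timeOfDay: "evening" });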

Return a Value

With this approach, you encapsulate the executor and execution. Note the main differences:

  1. The function is now an async function.
  2. The function requires an input (what gets passed into .execute()).
  3. On every call, the LLM executor is executed and its result is returned, rather than the executor itself.

export async function llmExecutorExampleExecute<I extends ExampleInput>(
  input: I
) {
  const llm = new OpenAIMock({});
  const prompt = createChatPrompt<I>(PROMPT);
  const parser = createParser("listToArray");
  return createLlmExecutor({
    prompt,
    parser,
    llm,
  }).execute(input);
}

import { llmExecutorExampleExecute } from "example-above";

const result = await llmExecutorExampleExecute({
  userInput: "Hello!",
  timeOfDay: "morning",
});

Structuring Files

When dealing with LLM Executors that have elaborate prompts, custom output parsers, or both, it may be useful to break the components out into different files. For example:

llms
- intent-bot
  - index.ts // executor lives here
  - parser.ts // export your parser
  - prompt.ts // export your prompt
  - types.ts // types
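
As a sketch, prompt.ts and parser.ts might simply build and export the same pieces used in the earlier examples (the file contents below are illustrative, not prescribed by llm-exe):

// prompt.ts
import { createChatPrompt } from "llm-exe";
import { CustomInputType } from "./types";

export const prompt = createChatPrompt<CustomInputType>(`...your prompt template...`);

// parser.ts
import { createParser } from "llm-exe";

export const parser = createParser("listToArray");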

index.ts

import { createLlmExecutor, OpenAIMock } from "llm-exe";
import { CustomInputType } from "./types";
import { parser } from "./parser";
import { prompt } from "./prompt";

export async function myLlmExecutorExecute(input: CustomInputType) {
  // Using the mock llm from the earlier examples; swap in a real llm as needed.
  const llm = new OpenAIMock({});
  return createLlmExecutor({
    prompt,
    parser,
    llm,
  }).execute(input);
}
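
Callers then only need the single entry point (the path below is illustrative):

import { myLlmExecutorExecute } from "./llms/intent-bot";

const result = await myLlmExecutorExecute(input);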

TIP

Exporting prompts and parsers from their own files makes them independently testable components!
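
For example, the parser can be unit tested against a canned LLM response with no LLM call at all. A sketch (assuming the parser exposes a parse() method and that listToArray produces an array of strings):

import assert from "node:assert";
import { parser } from "./parser";

// Feed the parser a canned response in the expected list format.
const output = parser.parse("- Good morning!\n- Hello there!");
assert.deepEqual(output, ["Good morning!", "Hello there!"]);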

Other Notes

It's reasonable to pass an LLM around. For example, rather than constructing the llm inside the function, you can accept it as an argument:

// BaseLlm is assumed to be an exported type from llm-exe; adjust to your version.
import type { BaseLlm } from "llm-exe";

export async function llmExecutorExampleNeedsLlm<I extends ExampleInput>(
  llm: BaseLlm,
  input: I
) {
  const prompt = createChatPrompt<I>(PROMPT);
  const parser = createParser("listToArray");
  return createLlmExecutor({
    prompt,
    parser,
    llm,
  }).execute(input);
}
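
Calling it then looks like this (reusing the mock llm from the earlier examples):

const llm = new OpenAIMock({});

const result = await llmExecutorExampleNeedsLlm(llm, {
  userInput: "Hey!",
  timeOfDay: "evening",
});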
