RΞASON is an open-source backend TypeScript framework for building great LLM apps. It uses your TypeScript type information to force structured output from an LLM:

The `hello world` of RΞASON

Using it

To get started, install it with this command:

npx use-reason

Then create a new file `src/entrypoints/joke.ts` with:

import { reasonStream } from 'tryreason'

interface Joke {
  joke: string;
  topics: string[];
}

// GET is exported so the framework can serve this file as a streaming GET endpoint
export async function* GET() {
  return reasonStream<Joke>('tell me a joke')
}

And run:

npm run dev
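That's it: the endpoint is live. As a quick way to see the stream, here's a sketch of a client; the route path and port are assumptions, so check the dev server's startup output for the real ones:

// Sketch of a client for the endpoint above. The /joke path and port 1704
// are placeholders: use whatever the dev server prints on startup.
const res = await fetch('http://localhost:1704/joke')
const reader = res.body!.getReader()
const decoder = new TextDecoder()

while (true) {
  const { done, value } = await reader.read()
  if (done) break
  // Print each streamed chunk of the Joke as it arrives
  process.stdout.write(decoder.decode(value, { stream: true }))
}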

This blog post will go through some of RΞASON's features and its design philosophy.

If you're looking to learn how to use RΞASON, our docs are the best place to start.

Core beliefs

We believe that LLMs are a new programming primitive, just like databases were in the 70s and the web was in the 2000s, one that allows programmers to have reasoning in their programs. This is awesome as it opens the door to a whole new set of problems that were previously impossible for computers to solve. But they are not a new way to program.

Many attempts have been made to create a framework for working with LLMs; however, we believe the right abstractions have not been found yet, and that is what we aim to create with RΞASON.

There are five principles we think a good abstraction for working with LLMs should align with:

  1. LLMs should interop with your code
  2. Prompting & retrieval are the developer’s job
  3. Streaming just text is not enough
  4. Observability is key
  5. LLM apps will be built in JavaScript/TypeScript

LLMs should interop with your code

LLMs receive text as input and output text. While this is great for human-to-LLM interactions, it isn’t great for code-to-LLM interactions: handling free-form strings inside your code is brittle. String parsing is hard and leads to hard-to-maintain code.
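To make that concrete, here is a sketch of the usual workaround, with a hypothetical llm() helper standing in for a raw completion call to your model provider:

// Hypothetical llm() helper, stubbed so the sketch is self-contained
async function llm(prompt: string): Promise<string> {
  // ...call your model provider here
  return '{"joke": "stub", "topics": ["stub"]}'
}

const completion = await llm(
  'Tell me a joke. Respond ONLY with JSON shaped like { "joke": string, "topics": string[] }'
)

let joke: { joke: string; topics: string[] }
try {
  // Breaks whenever the model adds prose, markdown fences, or trailing text
  joke = JSON.parse(completion)
} catch {
  // Now what? Strip fences with a regex? Retry? Every caller reinvents this.
  throw new Error('model did not return valid JSON')
}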

So there’s a need to somehow get structured output from LLMs. There have been several attempts at this, each with its own drawbacks. Here’s how RΞASON tackles it:

We use established concepts that developers already know: TypeScript’s `interface` and JSDoc comments.

import { reason } from 'tryreason'

interface Joke {
  /** Use this property to indicate the age rating of the joke */
  rating: number;
  joke: string;

  /** Use this property to explain the joke to those who did not understand it */
  explanation: string;
}

const joke = await reason<Joke>('tell me a really spicy joke')

`reason()` is a function that calls an LLM and gets structured output from it. Here’s the value of `joke`:

{
  "joke": "I'd tell you a chemistry joke but I know I wouldn't get a reaction.",
  "rating": 18,
  "explanation": "This joke is a play on words. The term 'reaction' refers to both a chemical process and a response from someone. The humor comes from the double meaning, implying that the joke might not be funny enough to elicit a response."
}
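Because `joke` comes back as a regular typed object, it plugs straight into ordinary code with no parsing step. For instance (illustrative):

// joke is a fully typed Joke, so every property access is checked by the compiler
if (joke.rating >= 18) {
  console.log('Explanation, in case it went over your head:', joke.explanation)
}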

Another important way LLMs need to interop with your code is through agents.

Agent interoperability

We define an agent as:

An agent is just an LLM that selects which action to take from a predefined pool of actions in order to accomplish a certain objective.
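RΞASON’s actual agent API is covered in the docs; purely to illustrate the definition above, here is a minimal sketch of that loop in plain TypeScript, with a hypothetical pickAction() callback standing in for the LLM call:

// An action the agent can take: a name the LLM can pick, plus the code to run
interface Action {
  name: string
  description: string
  run(input: string): Promise<string>
}

const actions: Action[] = [
  { name: 'search', description: 'search the web', run: async q => `results for "${q}"` },
  { name: 'answer', description: 'finish with the final answer', run: async a => a },
]

async function runAgent(
  objective: string,
  // Hypothetical LLM call: reads the prompt and picks one action plus its input
  pickAction: (prompt: string) => Promise<{ name: string; input: string }>,
): Promise<string> {
  let history = ''
  for (let step = 0; step < 10; step++) {
    const menu = actions.map(a => `- ${a.name}: ${a.description}`).join('\n')
    const choice = await pickAction(`Objective: ${objective}\n${history}\nPick one action:\n${menu}`)
    const action = actions.find(a => a.name === choice.name)
    if (!action) continue // the model picked something outside the pool; ask again
    const result = await action.run(choice.input)
    if (action.name === 'answer') return result // objective accomplished
    history += `\n${action.name}(${choice.input}) -> ${result}`
  }
  throw new Error('agent did not finish within 10 steps')
}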

Prompting & retrieval should be up to the developer

A framework should only help in areas that do not differentiate your business or app. Prompting & retrieval are key to the success of your LLM app, and precisely because of that, you should be the one in charge of them, not the framework.

Because LLMs are such a new primitive, any library that tries to offer out-of-the-box prompts/agents/retrieval will either:

  • become outdated as new techniques are constantly being invented;
  • or become bloated, as it has to keep adding new ones while supporting those already added.

