Quickstart
Learn the fundamentals in 10 minutes
This guide is designed for those with a basic understanding of Large Language Models (LLMs). If you’re new to LLMs, please start with this introduction first.
Installation
Let’s begin by creating your first RΞASON application. Execute the following command:
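The command itself isn't reproduced here; based on the rest of this guide (which refers to running `npx use-reason`), the scaffolding command should be:

```shell
npx use-reason
```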
This will initialize a RΞASON project. Your project’s directory structure will be as follows:
The directory structure
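The structure diagram isn’t reproduced here; based on the files listed below, the generated project should look roughly like this (the root folder name depends on your project):

```
my-reason-app/
├── .eslintrc.json
├── .reason.config.js
└── src/
    ├── actions/
    ├── agents/
    └── entrypoints/
        └── hello.ts
```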
Let’s understand how a RΞASON project is structured:
- `src/entrypoints`: where all your entrypoints (routes) are defined. We’ll talk more about entrypoints below.
- `src/entrypoints/hello.ts`: an accessible entrypoint (route/endpoint) at http://localhost:1704/hello.
- `.eslintrc.json`: the ESLint configuration file, used to configure the RΞASON ESLint plugin.
- `.reason.config.js`: the configuration file for RΞASON, used to set your OpenAI key and default model, amongst other things.
- `src/actions`: where all your actions are defined. We talk more about actions here.
- `src/agents`: where all your agents are defined. We talk more about agents here.
File-based system
RΞASON uses a file-based system that is heavily inspired by Next.js for defining three important pieces of your app: entrypoints, agents and actions.
This means that all `.ts` files created under `/entrypoints`, `/agents` and `/actions` will be treated as entrypoints, agents or actions, respectively.
For instance, if you want to create a new entrypoint called `qa-agent`, you just need to create the file `/entrypoints/qa-agent.ts`.
When you run `npx use-reason`, the project that is created for you has no agents & actions, but it has a single entrypoint, `/entrypoints/hello.ts` — which is what we’ll explore now.
Running your app
Setting your API Key
If you didn’t add your OpenAI API key during the initial setup, you should do it now:
Setting your key
Go into `.reason.config.js` and add your key there:
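For illustration, here’s a sketch of what `.reason.config.js` might look like — the exact option names are assumptions, so keep the structure of the generated file and just fill in your key:

```javascript
// .reason.config.js — option names here are assumptions; follow the
// comments in the file that `npx use-reason` generated for you.
module.exports = {
  // Your OpenAI API key
  openAIKey: 'sk-...',
  // The default model RΞASON will prompt
  defaultModel: 'gpt-3.5-turbo',
}
```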
With your API Key added, go ahead and launch your RΞASON project using:
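The launch command isn’t shown here; assuming the scaffolded `package.json` defines the usual `dev` script, it would be:

```shell
npm run dev
```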
After executing the command, RΞASON will start and the RΞASON Playground will automatically open in your browser:
RΞASON Playground initial screen
RΞASON Playground
The RΞASON Playground is a web tool for testing your entrypoints. You can make requests and see their outputs.
With the RΞASON Playground open, navigate to the `hello` entrypoint and send a request. You will see something like:
POST /hello response
Why have a Playground at all?
Because: streaming!
Almost all LLM apps should stream their responses back to the client, and most HTTP API testing tools simply do not support HTTP streaming — either completely (i.e. not showing the response at all) or partially (i.e. waiting for the stream to finish before showing the response to the user).
Since testing is an integral part of creating a great end-user experience and the mission behind RΞASON is to allow developers to make great LLM apps, creating a Playground became a necessity.
However, even if HTTP streaming were to be added in, for example, Postman, the Playground would still be needed. Why?
Because, by default, RΞASON streams to the client in its own encoding format.
This may seem weird but we have a pretty good reason for it. We go in-depth about it here.
As you can see above, the `POST /hello` entrypoint returns a JSON object with some cool points of interest in the city you are located in. How does it work?
The basics
Whenever a request is made to `POST http://localhost:1704/hello`, the `entrypoints/hello.ts` file is called and the function `POST()` is executed. Here’s how the file looks:
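The listing isn’t reproduced here, so below is a sketch of `entrypoints/hello.ts` reconstructed from the breakdown that follows — the `reasonStream` import path and the exact `City` fields are assumptions; check your scaffolded file:

```typescript
// src/entrypoints/hello.ts — a sketch; use your scaffolded file's
// actual import path and interface.
import { reasonStream } from 'reason' // import path is an assumption

interface City {
  /** A two sentence description of the city */
  description: string;
  points_of_interest: {
    name: string;
    address: string;
  }[];
}

export async function* POST() {
  // Fetch the user's location from their IP address
  const res = await fetch('http://ip-api.com/json/')
  const { city } = await res.json()

  // Prompt the LLM and stream a response that conforms to `City`
  return reasonStream<City>(`Tell me about ${city}`)
}
```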
Let’s break it down:
- `export async function* POST() {}`
  - `POST()`: we are defining that this function is responsible for handling `POST` requests;
  - `function*`: by adding `*` we are telling RΞASON this function returns a streaming response;
- `await fetch('http://ip-api.com/json/')` fetches the user’s location from their IP address;
- `` reasonStream<City>(`Tell me about ${city}`) ``
  - `reasonStream()` is a RΞASON function that prompts an LLM and streams the response;
  - `` `Tell me about ${city}` `` is the prompt passed to the LLM;
  - `reasonStream<City>`: by passing an `interface` to `reasonStream()`, RΞASON ensures the response conforms to the `City` interface structure. More details here.
- JSDoc comment: the `/** A two sentence description of the city */` comment is passed directly to the LLM along with your prompt. You can think of it as the prompt for the property the comment is above (in this case, the `description` property).
About JSDoc
Have you ever noticed that some functions have a description when you hover over them?
Description when you hover over `fs.readFileSync()`
Well, the way those are defined is through JSDoc! Let’s take a peek at the `fs.readFileSync()` definition:
The definition of `fs.readFileSync()`
As you may have noticed, JSDoc comments are not like normal code comments:

- They need to be defined as `/** comment */`;
- Your IDE uses them to show code hints — showing them when you hover (and at some other times);
- And RΞASON uses them as part of your prompt to the LLM.
We’ll learn more about them here.
While this might seem complex, the core functionality lies in the `reason()` and `reasonStream()` functions, which we’ll explore more in-depth later.
Next, we’ll modify the `hello` entrypoint to enhance its functionality.
Spicing it up
Let’s imagine we are building a website that a user can access to get cool places to visit in the city they’re currently in. Cool! We already have the backend for it in the `/hello` entrypoint.
However, what if we want to pin the points of interest on an actual map? Since the `address` property is just a string, this would be a bit hard.
Extending the `City` interface
But what if we extended the `address` property to include latitude & longitude as well?
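For instance (a sketch — your `City` interface may name some of these fields differently), we could turn `address` into an object that carries coordinates:

```typescript
interface City {
  /** A two sentence description of the city */
  description: string;
  points_of_interest: {
    name: string;
    address: {
      street: string;
      /** The latitude of this point of interest */
      latitude: number;
      /** The longitude of this point of interest */
      longitude: number;
    };
  }[];
}
```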
Now if we make a request to `POST /hello` using the Playground, we should see the new `address` property:
Output from `POST /hello`
Awesome! As you can see, just by changing the interface we pass to `reasonStream()`, we change the output as well.
This was just an example; in reality, getting the latitude & longitude from an LLM is probably not the best idea.
Taking it up a notch
Now let’s say we want to return the distance from the user to the point of interest — something like: “Golden Gate Bridge is 12km away from you”.
Since we already have the latitude & longitude of the points of interest, if we could just get the latitude & longitude of the user, calculating the distance would be trivial…
Luckily for us, the HTTP request we’re already making to `http://ip-api.com/json/` returns that as well!
Awesome! So we can get the latitude & longitude from the response of `http://ip-api.com/json/`:
Now it’s just a matter of calculating the distance between two pairs of latitude & longitude. To calculate it, we can use the Haversine formula.
Let’s create a new action at `src/actions/getDistance.ts` that will calculate the distance for us:
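A sketch of the action, implementing the Haversine formula (it returns meters; the default-export style is an assumption about how RΞASON actions are defined):

```typescript
// src/actions/getDistance.ts
// Haversine distance between two latitude/longitude pairs, in meters.
export default function getDistance(
  lat1: number, lon1: number,
  lat2: number, lon2: number,
): number {
  const R = 6371000 // mean Earth radius in meters
  const toRad = (deg: number) => (deg * Math.PI) / 180

  const dLat = toRad(lat2 - lat1)
  const dLon = toRad(lon2 - lon1)

  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2

  return 2 * R * Math.asin(Math.sqrt(a))
}
```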
Cool! So far we have:
- the latitude & longitude of the user and points of interest;
- and a function that calculates the distance between two pairs of latitude/longitude (in meters).
What we now need to do is:

- call the `getDistance()` function for each point of interest to get its distance from the user;
- somehow return the distance in the streaming response of the `POST /hello` entrypoint.
In order to do that, we’ll need to use `reasonStream()` as an actual generator — which is somewhat advanced and may look complicated, but we’ll go over it step-by-step in the next page.
Iterating through `reasonStream()`
`reasonStream()` is an async generator — which means there is much more we can do with it other than directly returning it (which is what we are doing now).
We can, for instance:
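We can, for instance, consume it with `for await`. Since `reasonStream()` needs a running project and an API key, here is the pattern with a stand-in async generator that mimics the streamed shape (the wrapped `{ done, value }` objects are explained right below):

```typescript
// Stand-in for `reasonStream<City>(...)`: an async generator yielding
// progressively more complete snapshots, each property wrapped in { done, value }.
interface Streamed<T> { done: boolean; value: T }

async function* cityStream(): AsyncGenerator<{ description: Streamed<string> }> {
  yield { description: { done: false, value: 'San Francisco is' } }
  yield { description: { done: false, value: 'San Francisco is a coastal city in California.' } }
  yield { description: { done: true, value: 'San Francisco is a coastal city in California. It is famous for the Golden Gate Bridge.' } }
}

;(async () => {
  // With the real thing this would be:
  // for await (const city of reasonStream<City>(`Tell me about ${city}`)) { ... }
  for await (const city of cityStream()) {
    console.log(city.description)
  }
})()
```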
Which will log the following:
As you can see, the `description` property was filled over time.
It’s important to note that while we specified in our `City` interface a single `description` property that is a string, `reasonStream()` returned an object that has `done` & `value`. Each of these wrapped values is called a `StreamableObject`, and we’ll go in-depth on them later.
For now, what is important to know is that every property you specified in your interface will be wrapped in a `{ done: boolean, value: actualValue }` object.
For instance, `description: string` became an object with `{ done: boolean, value: string }`.
Calling getDistance()
Now that we know how to iterate through `reasonStream()`, let’s take a step back and remember our current problem.
We now want to return the distance from the user to the point of interest — something like: “Golden Gate Bridge is 12km away from you”. For that we have:
- created a `getDistance()` function that uses the Haversine formula to calculate the distance between two pairs of latitude/longitude;
- got the user’s latitude/longitude from the ip-api API;
- got each point of interest’s latitude/longitude from the LLM itself.
What we now need to do is:

- call the `getDistance()` function for each point of interest to get its distance from the user;
  - by iterating through `reasonStream()`, we can check whenever a `point_of_interest` has had its `latitude` & `longitude` returned, and then calculate the distance from it to the user’s latitude/longitude.
- somehow return the distance in the streaming response of the `POST /hello` entrypoint;
  - on each iteration of `reasonStream()`, we’ll stream back the value `reasonStream()` yielded to us — eventually, when we calculate the distance of each `point_of_interest`, we’ll stream that as well.
Let’s do it! Again, this may look complicated now, but we’ll go through each step in-depth in the following doc page.
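A sketch of what the updated entrypoint could look like — the nested `StreamableObject` access patterns here are assumptions about the streamed shape, so treat this as pseudocode and lean on the step-by-step explanation in the next page:

```typescript
// src/entrypoints/hello.ts — sketch; streamed shapes may differ.
import getDistance from '../actions/getDistance'

export async function* POST() {
  // The user's city and coordinates from their IP
  const res = await fetch('http://ip-api.com/json/')
  const { city, lat, lon } = await res.json()

  for await (const partialCity of reasonStream<City>(`Tell me about ${city}`)) {
    for (const poi of partialCity.points_of_interest?.value ?? []) {
      const address = poi.value?.address?.value

      // Once this point of interest's coordinates have fully streamed in,
      // calculate its distance from the user (note: this runs on every
      // iteration, which the observability section revisits)
      if (address?.latitude?.done && address?.longitude?.done) {
        poi.value.distance = getDistance(lat, lon, address.latitude.value, address.longitude.value)
      }
    }

    // Stream the (augmented) partial object back to the client
    yield partialCity
  }
}
```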
And here’s the output:
Output from `POST /hello` with the distance property
Bonus section: Observability
If you are curious about how observability works in RΞASON, feel free to explore this section. However, if you’re not feeling like going through observability now: no problem! We’ll go through it in-depth later.
Observability
What is observability?
Observability refers to the ability to monitor & understand the internals of a software system — it’s crucial for detecting & diagnosing issues.
In the context of LLM app development, observability is even more relevant. That’s because building a great LLM app is, at least, 80% trying new prompts, new RAG pipelines, different agent behaviours, etc.
RΞASON follows the OpenTelemetry standard. OpenTelemetry is an open standard for observability that prevents vendor lock-in and interoperates with a bunch of different tools — allowing you to choose the best tool for you.
Why is OpenTelemetry a big deal?
OpenTelemetry is an industry standard; having your app data in it means you can use any tool you want to monitor your app. You are not locked into whatever tool RΞASON supports, nor dependent on RΞASON integrating with a provider you want to use.
If you already have data pipelines configured, you don’t need to change a single thing. Your RΞASON app data will automatically work there too.
How to use it
There are a bunch of tools that integrate with OpenTelemetry data — both free & paid. You can use whichever you prefer. However, by default RΞASON uses Zipkin: a free open-source tracing tool developed by Twitter.
What if you don’t want to use Zipkin? This is covered in the Observability page here.
You just need to start a Zipkin instance to access your observability data — you don’t need to change a thing in RΞASON. And the easiest way to start Zipkin is through Docker:
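The standard Zipkin quickstart command (runs Zipkin in the background on its default port, 9411):

```shell
docker run -d -p 9411:9411 openzipkin/zipkin
```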
You can now access Zipkin at http://localhost:9411/zipkin/:
Zipkin dashboard
If we make a new request to our `hello` entrypoint using the RΞASON Playground and hit the `RUN QUERY` button in Zipkin, it will appear there:
Zipkin dashboard with a trace
Clicking `SHOW` will open the trace:
A Zipkin trace
A lot is going on in the screenshot above: it is an icicle graph showing what happens during the call to `POST /hello`. It shows the time each operation took during your entrypoint’s execution, as well as the inputs & outputs.
Why are there so many `getDistance()` calls?
Looking at the trace for our entrypoint, we can see we are calling `getDistance()` multiple times for each `point_of_interest`. This is not necessary and is wasteful, because there is no need to calculate the distance of a `point_of_interest` more than once.
Try to think how you can solve that.
In any case, a solution is below:
A solution
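One possible solution (a sketch with a stubbed `getDistance`, so the pattern stands alone): remember which points of interest already had their distance computed, and skip them on later stream iterations.

```typescript
// Sketch: compute each point of interest's distance only once while streaming.
// `getDistance` is stubbed here; in the entrypoint you'd use src/actions/getDistance.ts.
let calls = 0
function getDistance(lat1: number, lon1: number, lat2: number, lon2: number): number {
  calls++
  return 0 // stub
}

const alreadyComputed = new Set<string>()
const distances = new Map<string, number>()

function maybeComputeDistance(
  poi: { name: string; latitude?: number; longitude?: number },
  userLat: number,
  userLon: number,
) {
  // Skip if the coordinates haven't streamed in yet, or we've already computed it
  if (poi.latitude === undefined || poi.longitude === undefined) return
  if (alreadyComputed.has(poi.name)) return
  alreadyComputed.add(poi.name)
  distances.set(poi.name, getDistance(userLat, userLon, poi.latitude, poi.longitude))
}

// Simulate several stream iterations yielding the same completed point of interest:
for (let i = 0; i < 5; i++) {
  maybeComputeDistance({ name: 'Golden Gate Bridge', latitude: 37.8199, longitude: -122.4783 }, 37.77, -122.42)
}
console.log(calls) // → 1: getDistance ran only once
```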
Conclusion
This was the RΞASON quickstart; in it you learnt the fundamentals behind RΞASON. There’s a lot more to learn though — agents & observability are two key features of RΞASON.
Be sure to save the code you created during this walkthrough, as it will be used throughout the rest of RΞASON’s docs.
Next we’ll go in-depth about some of the fundamentals of RΞASON — starting with entrypoints.