Chris Padilla/Blog


My passion project! Posts spanning music, art, software, books, and more. Equal parts journal, sketchbook, mixtape, dev diary, and commonplace book.


    Getting Started with LangGraph

    LangChain is emerging as a popular choice for Reactive AI applications. However, when you need a higher degree of control and flexibility in a project, LangGraph offers exactly that, all while still providing guardrails and tooling for quick iteration and development.

    Below, I'll share the absolute essentials needed to get started with LangGraph! We'll cover all the major concepts for developing a graph through a toy app: a joke-telling AI. While it's a simple app, it should demonstrate a foundation for developing your own RAG applications.

    Setting Annotations

    LangGraph is really a state machine at the end of the day. To get started, you'll want to define the state that will persist and change through your graph. These definitions are referred to as Annotations in LangGraph.

    Below, I'm creating an Annotation with two pieces of state: messages and selectedModel. I want to be able to add and keep track of messages. Additionally, I want to be able to select which model to invoke.

    import {Annotation} from "@langchain/langgraph";
    import {BaseMessage, HumanMessage, SystemMessage} from "@langchain/core/messages";
    
    export const GraphAnnotation = Annotation.Root({
        messages: Annotation<BaseMessage[]>({
            reducer: (current, update) => current.concat(update as BaseMessage[]),
            default: () => [],
        }),
        selectedModel: Annotation<string>({
            reducer: (current, update) => update,
            default: () => "",
        }),
    });

    Defining the Workflow

    Once you have defined your Annotation, you can then outline the flow of your graph. Graphs are composed of two elements: Nodes and Edges. A Node is a function that will run. An Edge is the direction taken following a Node's completion.

    Additionally, we can define Conditional Edges. These are steps in the graph that will assess which Node to access next.

    Before getting into the details, let's outline a simple graph:

    import {StateGraph} from "@langchain/langgraph";
    
    const workflow = new StateGraph(GraphAnnotation)
        .addNode("OpenAI", callOpenAI)
        .addNode("Anthropic", callAnthropic)
        .addConditionalEdges("__start__", selectModel)
        .addEdge("OpenAI", "__end__")
        .addEdge("Anthropic", "__end__");

    My graph here defines two Nodes, each invoking a 3rd party LLM. Below that, I'm defining a Conditional Edge. And below that, I'm adding simple Edges to the end of the application.

    Creating the Nodes

    Nodes are simply functions that are called. Their expected output is the state we want to change in the graph. For example, when calling a model, I want the AI response to be added to my array of messages. Here's what both of those Nodes will look like:

    import {ChatOpenAI} from "@langchain/openai";
    import {ChatAnthropic} from "@langchain/anthropic";
    
    const callOpenAI = async (state: typeof GraphAnnotation.State) => {
        const model = new ChatOpenAI({temperature: 0});
    
        // Slot the system prompt into the message history before invoking the model
        const messages = state.messages;
        messages.splice(messages.length - 2, 0, new SystemMessage(prompt));
        const response = await model.invoke(messages);
    
        // The returned messages are appended to state by the reducer in our Annotation
        return {messages: [response]};
    };
    
    const callAnthropic = async (state: typeof GraphAnnotation.State) => {
        const model = new ChatAnthropic({temperature: 0});
    
        // Same pattern, swapping in Anthropic's chat model
        const messages = state.messages;
        messages.splice(messages.length - 2, 0, new SystemMessage(prompt));
        const response = await model.invoke(messages);
    
        return {messages: [response]};
    };

    Notice that I'm adding a SystemMessage before invoking each model. This is where I can provide my prompt:

    const prompt = "You are a hilarious comedian! When prompted, tell a joke.";

    Routing With the Conditional Edge

    Earlier we defined a selectedModel state in our Annotation. In our Conditional Edge, we'll make use of it to route to the preferred model:

    const selectModel = async (state: {selectedModel: string}) => {
        if (state.selectedModel === "OpenAI") {
            return "OpenAI";
        }
    
        return "Anthropic";
    };

    Note that I'm returning the name of the Node that I'd like the graph to traverse to next.

    Persistence

    Persistence is a larger topic in LangGraph. For today, we'll be making use of the in-memory saver. Know that you can swap in your own checkpointer for strategies that utilize SQL databases, MongoDB, Redis, or any custom solution:

    import {MemorySaver} from "@langchain/langgraph";
    
    const checkpointer = new MemorySaver();

    Calling the Graph

    With all of this set, we're ready to use the graph!

    Below, I'll compile the graph with the checkpointer I created above. Once I've done that, I'll create a config object (the thread_id is a unique identifier for a conversation between the user and the graph; it's hardcoded here for simplicity). With both of these, I'll invoke the graph, passing the initial state as well as my config object.

    import {RunnableConfig} from "@langchain/core/runnables";
    
    
    export const compiledGraph = workflow.compile({checkpointer});
    
    const runGraph = async () => {
        const config = {configurable: {thread_id: "123"}} as RunnableConfig;
        const {messages} = await compiledGraph.invoke(
                // Initial updates to State
                {selectedModel: "OpenAI", messages: [new HumanMessage("Tell me a joke!")]},
                // RunnableConfig
                config,
        );
        console.log(messages[messages.length - 1].content);
    };
    
    runGraph();
    
    // Logs the following:
    // Why couldn't the bicycle stand up by itself?
    //
    // Because it was two tired!
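
    Because the MemorySaver checkpointer stores state per thread_id, a follow-up invocation with the same config picks the conversation back up. Here's a quick sketch of what that could look like, reusing the compiledGraph and config from above (the second joke request is my own addition):

    const continueConversation = async () => {
        const config = {configurable: {thread_id: "123"}} as RunnableConfig;
        // The messages reducer appends this to the history saved for thread "123"
        const {messages} = await compiledGraph.invoke(
                {selectedModel: "Anthropic", messages: [new HumanMessage("Tell me another one!")]},
                config,
        );
        console.log(messages[messages.length - 1].content);
    };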

    There you have it! With that, you're off and away on developing with AI! 🚴


    A Little Tear

    Listen on Youtube

    The Sarah Vaughan recording of this is absolute magic, my goodness.


    Snow Trek Home

    ☃️🏠🏔️

    ❄️


    Mostly Cloudy

    🌤️

    Clear water, cloudy sky. Squeezing this in between moving this weekend!


    Kyle Webster and Why the Work Matters

    Why Bother? From Kyle Webster's newsletter:

    It’s not that your work, itself, will change the world; no, only a few people in history will create something that resonates so strongly that it forces people to stand up, pay attention, and actually act on the feelings your masterpiece has stirred within them.

    Instead, it’s the mini-milestones you achieve while doing your work that matter because each of these little ‘wins’ makes you feel good. Feeling good is the foundation for doing good. Positive emotions facilitate cooperation, unity and understanding in a community.

    It's not explicitly said, but I'll add how wonderful it is that skill does not necessarily make a difference. The doing and growing are what give life and buoyancy. Engagement > Product.


    Dindi

    Listen on Youtube

    Sky, so vast is the sky, with far away clouds just wandering by~
    Where do they go? Oh I don't know...


    Twilight View From the Park

    🌅

    ☁️


    How Long Is a Piece of String?

    How much time does it really take to get fulfillment out of a creative practice?

    I love taking on big projects. It's thrilling to see something grow over time, a song come together note by note, or a drawing take shape stroke by stroke.

    There have been times in my life when a project has been all-consuming, when my thoughts were on it night and day. Going on the hunt is a thrill of its own.

    When things get busy, though, I have my go-to's for simply keeping my hand in it.

    Some days it's just pecking at the piano. Others it's plucking strings. Sometimes it's a sketch over break. 5 minutes here, another 10 there.

    I'm grateful for that. When the days fill up with obligation, and I sit down to play a few notes right before bed, that's enough to make the magic happen. I'm continually surprised by how little it takes to tap into something bigger.


    James Gurney and Art as an Expression of Nature

    A wonderful read in its entirety on James Gurney's Substack. From "Should Art Be About Personal Expression?":

    Many of the greatest works of art have come from enigmatic individuals like Shakespeare, Vermeer, and Homer, about whom we know very little. And perhaps it doesn’t matter. The miracle of their work is that the range of their emotional expression seems to extend beyond the scope of a single person’s experience.

    Each of these creators looked into themselves, but in so doing, they saw beyond themselves.

    Ultimately, we end up starting from a place where we're trying to express what feels uniquely ours. But the further you go, the more you start to see yourself as a vessel. What pours out of the brush and pen and piano and terminal are alignments with a greater Truth.


    Deploying TypeScript to AWS Lambda

    In the early days of TypeScript, one of the larger barriers to entry was the setup required. Setting your configuration and checking if external packages ship with types took upfront work. On top of it all, neither Node nor the browser reads TypeScript directly, so transpiling to JavaScript is required for those environments.

    Much has improved since. Libraries ship with types and spinning up a project has been streamlined.

    Below I'll share some of the tooling that's helped simplify TypeScript setup.

    The Project

    I'll be working on setting up a TypeScript project that will deploy to AWS Lambda. I'll skip the details that are specific to Lambda setup and focus on TypeScript itself.

    For this to work, there are a few things we'll want to make happen:

    1. Set up Type Checking
    2. Set up a Build Process
    3. Optionally: Select a TypeScript Runtime

    Type Checking

    The biggest benefit of TypeScript comes from... well, the static type checking! An editor such as VS Code can surface type errors for you while you develop. The intended safeguard, though, comes from compile-time type checking.
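
    For a quick taste of what gets caught, here's a contrived example:

    // add.ts: a deliberate type error
    const add = (a: number, b: number): number => a + b;
    
    add(1, "2"); // Error: Argument of type 'string' is not assignable to parameter of type 'number'.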

    TypeScript comes with this out of the box. Here's how you can set it up:

    First, we'll install TypeScript globally through npm:

    npm install -g typescript

    With that comes tsc, the TypeScript Compiler.

    If you haven't already, you'll want to initialize your project with a tsconfig.json file. This command gets you started:

    tsc --init

    Here's a starting place for your ts config:

    {
      "compilerOptions": {
        "target": "es2020",
        "module": "es2020",
        "strict": true,
        "skipLibCheck": true,
      },
      "ts-node": {
        "compilerOptions": {
          "module": "commonjs"
        }
      },
      "exclude": ["node_modules", "**/*.test.ts"]
    }

    Lastly, to compile, it's as simple as this command:

    tsc index.ts

    This will spit out a corresponding JavaScript file in your project with the types stripped out.

    Worth noting: You can also check for types without compiling with the --noEmit flag.

    tsc index.ts --noEmit

    Testing Locally

    You may notice above the ts-node option in my config. ts-node is an engine for executing TS files using the node runtime — without having to transpile your code first.

    What we would have to do without ts-node is generate our JS files as we did above, such as with tsc index.ts. An index.js file would then be generated. From there, we would run node index.js.

    Instead, with ts-node, we would simply call ts-node index.ts.

    ts-node comes with many more features, but a single-command way of running TS files from the CLI is the quickest benefit.
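
    If you'd like to try it, the setup is just a dev dependency away (assuming you want it local to the project rather than global):

    npm install -D ts-node typescript
    npx ts-node index.ts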

    Bundling with ESBuild

    Typically, we reach for bundling solutions with client-side JavaScript and TypeScript to minimize our file sizes, speeding up site load times. While you wouldn't normally need to bundle server-side code, the current AWS Lambda limit is 250 MB. The node_modules directory would easily eat that up without a bundling strategy!

    The library of choice today is ESBuild, which handles TypeScript, JSX, ESM & CommonJS modules, and more.

    You might ask: If you're going to bundle your code, why did we bother looking at compiling with tsc?

    There are several tools that will run and build TypeScript without actually validating your types, and ESBuild is one of them! When developing your build pipeline, it's likely that you'll need a separate step to validate the types with tsc.

    Here is what the build script looks like using ESBuild:

    esbuild ./src/index.ts --bundle --sourcemap --platform=neutral --target=es2020 --outfile=dist/index.js

    A couple of options to explain:

    • sourcemap: This generates a .js.map file that maps the bundled output back to your original source. This makes sane debugging possible even after bundling and minifying.
    • platform=neutral: Sets default output to esm, using the export syntax.
    • target=es2020: Targets a specific JS spec for the generated output, one that supports esm modules.
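
    Since ESBuild skips type validation, one simple way to keep tsc in the pipeline is chaining both steps in your build command (just one way to wire it up):

    tsc --noEmit && esbuild ./src/index.ts --bundle --sourcemap --platform=neutral --target=es2020 --outfile=dist/index.js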

    Picking a Runtime

    If you're including esm modules in your generated JS files, be sure you're using a runtime that supports them. For example, Node 13 can handle them out of the box, while earlier versions require an experimental flag.
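
    Node also needs a hint that your generated file is an ES module: either an .mjs extension or a "type" field in your package.json, as in this minimal example:

    {
      "type": "module"
    }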

    When deploying to Lambda, Node is a first class citizen when it comes to support. While not quite as blazingly fast as Rust, a Lambda function running node will still be highly performant.

    If you're interested in delightful DX and native TypeScript support, however, you may reach for Deno or Bun.

    I'll baton pass this portion of the article to two relevant docs: The AWS Lambda Developer Guide on Building with TypeScript and the Bun Lambda Layer package. Whichever you choose, both should be great starting places for deploying your runtime of choice.


    Peg O My Heart

    Listen on Youtube

    Another swing at chord melody!


    City Sunset

    🌆

    Thinking back to a visit to Chicago...


    The Retrieval-Augmented Generation Pattern for AI Development

    Yes, ladies and gentlemen, a post about developing with AI!

    If your team is looking to incorporate an LLM into your services, the first challenge to overcome is how to do so in a cost-effective way.

    Chances are, your business is already focused on a specific product domain, with resources targeted towards building that solution. That alone will point you towards finding an off-the-shelf solution to integrate with through an API.

    With your flavor of LLM picked, the next set of challenges center around getting it to respond to questions in a way that meaningfully provides answers from your business data. LLMs need to be informed on how to respond to requests, what data to utilize when considering their answers, and even what to do if they're tempted to guess.

    The way forward is through prompt engineering, with the help of Retrieval-Augmented Generation.

    Retrieval-Augmented Generation

    The simplified procedure for RAG goes as follows:

    1. Request is made to your app with the message "How many Tex-Mex Restaurants are in Dallas?"
    2. Your application gathers context. For example, we may make a query to our DB for a summary of all restaurants in the area.
    3. We'll provide a summary of the context and instructions to the LLM with a prompt.
    4. We send along the response to the user.

    That's an overly simplified walkthrough, but it should already get you thinking about the details involved in those steps depending on your use case.
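
    To make those steps concrete, here's a rough sketch using LangChain's OpenAI wrapper. The getRestaurantSummary helper is hypothetical, standing in for whatever query your application would make against your own data:

    import {ChatOpenAI} from "@langchain/openai";
    import {SystemMessage, HumanMessage} from "@langchain/core/messages";
    
    // Hypothetical stand-in for step 2: query your own data for context
    const getRestaurantSummary = async (city: string): Promise<string> =>
        `${city} has 42 Tex-Mex restaurants.`;
    
    const answerQuestion = async (question: string) => {
        // 1. The request arrives with the user's question
        // 2. Gather context from our own data
        const context = await getRestaurantSummary("Dallas");
    
        // 3. Provide the context and instructions to the LLM through a prompt
        const model = new ChatOpenAI({temperature: 0});
        const response = await model.invoke([
            new SystemMessage(`Answer using only the context below. If the answer isn't there, say you don't know.\n\nContext: ${context}`),
            new HumanMessage(question),
        ]);
    
        // 4. Send the response along to the user
        return response.content;
    };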

    This pattern also addresses another gap: requests to an API are not inherently stateful. The chat window of an AI app will remember our previous messages, but my API request to that third party does not automatically. We have to store and retrieve that context ourselves.

    AI Agents

    It's worth noting that step 2 may even require an LLM to parse the question and then interact with an API to gather data. There's still a fair amount of complexity to work through in developing these solutions. This is where you may lean on an AI Agent: an LLM that will parse a request and determine whether a tool is required, such as pinging your internal APIs.
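
    As a sketch of what that can look like, LangChain supports binding tools to a chat model. The restaurant lookup here is a hypothetical stand-in for one of your internal APIs (and assumes zod is installed for the schema):

    import {ChatOpenAI} from "@langchain/openai";
    import {tool} from "@langchain/core/tools";
    import {z} from "zod";
    
    // Hypothetical tool wrapping an internal API the agent can choose to call
    const restaurantLookup = tool(
        async ({city}: {city: string}) => `There are 42 Tex-Mex restaurants in ${city}.`,
        {
            name: "restaurant_lookup",
            description: "Look up restaurant counts for a city.",
            schema: z.object({city: z.string()}),
        }
    );
    
    const runAgentStep = async () => {
        const model = new ChatOpenAI({temperature: 0}).bindTools([restaurantLookup]);
        const response = await model.invoke("How many Tex-Mex restaurants are in Dallas?");
    
        // If the model decides the tool is needed, tool_calls lists what to invoke next
        console.log(response.tool_calls);
    };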

    Prompt Engineering is emerging as a role and craft all its own, and there are many nuances to doing it well.

    LangChain

    The workflow is already so common that there's a framework at the ready to spin up and take care of the heavy lifting for you. LangChain (stylized as 🦜⛓️‍💥) is just that tool.

    For a hands-on experience building a RAG application on rails, their docs on building a chatbot are a good starting place.

    For a more complex agentive tool, LangGraph opens up the hood on LangChain for more control and plays nicely with LangChain when needed.


    Campfire Folk Intro

    Listen on Youtube

    Gather round, and listen to this tale...

    Just a bit of noodling between practicing longer pieces.


    Calm Sky

    🦋

    It's grey out this time of year. But behind the clouds, there's always a blue sky. 🌤️