LangGraph

Stateful agent orchestration and workflow management framework

What this skill does

LangGraph is a low-level orchestration framework for building, managing, and deploying long-running, stateful agents. Unlike higher-level agent frameworks that provide pre-built architectures, LangGraph focuses on the underlying orchestration infrastructure that enables sophisticated agent workflows.

The framework provides durable execution that persists through failures, human-in-the-loop capabilities for oversight and intervention, and comprehensive memory management for both short-term reasoning and long-term session persistence. LangGraph models workflows as state graphs where nodes represent computations and edges define transitions, inspired by Google's Pregel distributed computing framework.

Trusted by companies like Klarna, Replit, and Elastic, LangGraph enables complex agent behaviors that require state persistence, conditional branching, parallel execution, and recovery mechanisms—capabilities essential for production agent systems that must handle real-world complexity and reliability requirements.

Prerequisites

  • Node.js 18+ runtime environment
  • TypeScript knowledge for type safety (recommended)
  • Basic understanding of state machines and graph concepts
  • Familiarity with LLM models and tool-calling patterns
  • LangChain knowledge helpful but not required

Quick start

npm install @langchain/langgraph
import { StateSchema, MessagesValue, GraphNode, StateGraph, START, END } from "@langchain/langgraph";

const State = new StateSchema({
  messages: MessagesValue,
});

const mockLlm: GraphNode<typeof State> = (state) => {
  return { messages: [{ role: "ai", content: "hello world" }] };
};

const graph = new StateGraph(State)
  .addNode("mock_llm", mockLlm)
  .addEdge(START, "mock_llm")
  .addEdge("mock_llm", END)
  .compile();

const result = await graph.invoke({ messages: [{ role: "user", content: "hi!" }] });

Core concepts

State Management: LangGraph workflows center around a shared state object that flows between nodes. Each node can read from and write to this state, enabling coordination between different parts of your agent. State schemas define the structure and types of data that flows through your graph.
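The reducer idea behind channels like MessagesValue can be sketched in plain TypeScript, with no LangGraph imports; the Message type here is a simplified stand-in for the library's message objects:

```typescript
type Message = { role: string; content: string };

// A reducer merges a node's partial update into the channel's current
// value; message-style channels append rather than overwrite.
function messagesReducer(current: Message[], update: Message[]): Message[] {
  return [...current, ...update];
}

// Two nodes each contribute a message; the reducer accumulates both.
let channel: Message[] = [];
channel = messagesReducer(channel, [{ role: "user", content: "hi!" }]);
channel = messagesReducer(channel, [{ role: "ai", content: "hello world" }]);
// channel now holds both messages in arrival order
```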

Graph Structure: Workflows are modeled as directed graphs with nodes (computation units) and edges (transitions). Special START and END nodes mark entry and exit points. Conditional edges enable dynamic routing based on state content, while parallel edges allow concurrent execution paths.

Durable Execution: Unlike traditional function calls, LangGraph executions can persist across process restarts, network failures, and extended time periods. State is automatically checkpointed, allowing workflows to resume exactly where they stopped.
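The checkpoint-and-resume behavior can be illustrated with an in-memory store (real checkpointers write to durable storage; the names here are purely illustrative):

```typescript
// Save state after every node so a rerun with the same thread id
// resumes where the previous run stopped, instead of starting over.
type RunState = { step: number; log: string[] };

const checkpoints = new Map<string, RunState>();

function runNode(state: RunState, name: string): RunState {
  return { step: state.step + 1, log: [...state.log, name] };
}

function runWithCheckpoints(threadId: string, nodes: string[]): RunState {
  // Resume from the last checkpoint for this thread, if any.
  let state = checkpoints.get(threadId) ?? { step: 0, log: [] };
  for (const node of nodes.slice(state.step)) {
    state = runNode(state, node);
    checkpoints.set(threadId, state); // checkpoint after each node
  }
  return state;
}

// The first run only reaches "plan"; the second resumes at "execute"
// without re-running the earlier nodes.
runWithCheckpoints("thread-1", ["analyze", "plan"]);
const resumed = runWithCheckpoints("thread-1", ["analyze", "plan", "execute"]);
```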

Human-in-the-Loop: Agents can pause execution at any point for human review or intervention. Humans can inspect current state, modify it, and either approve continuation or redirect the workflow entirely.

Key API surface

API                                        Description
StateGraph(schema)                         Creates a graph with a typed state schema
.addNode(name, func)                       Adds a computation node to the graph
.addEdge(from, to)                         Adds an unconditional transition
.addConditionalEdges(source, condition)    Adds conditional routing
.compile()                                 Builds an executable graph
.invoke(input)                             Runs the graph and returns the final state
.stream(input)                             Streams intermediate results
.checkpointer                              Configures state persistence (compile option)
StateSchema({...})                         Defines the state structure
MessagesValue                              Built-in message-list reducer
interrupt()                                Pauses execution for human intervention
.updateState()                             Modifies state during execution

Common patterns

Sequential Agent Chain: Chain multiple LLM calls with state accumulation.

const agent = new StateGraph(State)
  .addNode("analyze", analyzeInput)
  .addNode("plan", createPlan) 
  .addNode("execute", executePlan)
  .addEdge(START, "analyze")
  .addEdge("analyze", "plan")
  .addEdge("plan", "execute")
  .addEdge("execute", END);

Conditional Routing: Route based on agent decisions or external conditions.

const router = (state) => {
  if (state.confidence > 0.8) return "confident_path";
  return "uncertain_path";
};

graph.addConditionalEdges("classifier", router, {
  "confident_path": "direct_answer",
  "uncertain_path": "research_more"
});

Human-in-the-Loop Review: Pause execution for human oversight.

const reviewNode: GraphNode<typeof State> = (state) => {
  if (state.requiresReview) {
    // Execution pauses here; interrupt() returns the value supplied
    // when the run is resumed.
    const decision = interrupt("Please review the proposed action");
    return { approved: decision === "approve" };
  }
  return { approved: true };
};

Parallel Execution: Run multiple operations concurrently.

graph
  .addNode("fetch_data", fetchData)
  .addNode("process_a", processA)
  .addNode("process_b", processB)
  .addEdge("fetch_data", "process_a")
  .addEdge("fetch_data", "process_b")
  .addEdge(["process_a", "process_b"], "combine");

Configuration

Checkpointer Setup: Configure state persistence backend.

import { SqliteSaver } from "@langchain/langgraph-checkpoint-sqlite";

const checkpointer = SqliteSaver.fromConnString("agent_state.db");
const graph = workflow.compile({ checkpointer });

Thread Management: Control execution threads for parallel workflows.

const config = {
  configurable: { thread_id: "user_123" },
  recursionLimit: 50
};

Memory Configuration: Set up different memory types.

const State = new StateSchema({
  messages: MessagesValue,        // Short-term conversation history
  facts: new Set(),               // Long-term accumulated knowledge
  working_memory: {}              // Scratch space for intermediate results
});

Best practices

State Design: Keep state minimal and well-typed. Use reducers for accumulating data like messages or results. Avoid storing large objects that don't need persistence.

Error Handling: Implement error recovery nodes rather than relying on exceptions. Use conditional edges to route to error handling paths when operations fail.
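One way to sketch this pattern, with hypothetical node and router names in plain TypeScript:

```typescript
// Record failures in state and route on them, instead of letting
// exceptions escape the node and kill the run.
type TaskState = { error?: string; result?: string };

function riskyNode(_state: TaskState): Partial<TaskState> {
  try {
    throw new Error("upstream timeout"); // stand-in for a failing operation
  } catch (e) {
    return { error: (e as Error).message }; // captured, not rethrown
  }
}

// A conditional edge then sends failed runs down a recovery path.
function errorRouter(state: TaskState): "recover" | "continue" {
  return state.error ? "recover" : "continue";
}
```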

Node Granularity: Make nodes focused and atomic. Each node should have a single responsibility and clear input/output contract with the state.

Conditional Logic: Use conditional edges for routing rather than embedding complex logic inside nodes. This makes workflows more readable and debuggable.

Memory Efficiency: Clear temporary state between major workflow phases. Use working_memory patterns for intermediate computations that don't need long-term persistence.

Testing Strategy: Test individual nodes in isolation first, then integration test the full workflow. Use mock nodes to simulate expensive operations during development.

Gotchas and common mistakes

State Mutation: Nodes must return new state objects, not mutate the input state directly. LangGraph uses immutable state patterns for consistency.
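For example, in plain TypeScript (the `Partial` return type mirrors how nodes report partial updates):

```typescript
type CounterState = { count: number };

// Wrong: mutating the input corrupts checkpoints and any parallel
// branch that reads the same state object.
function badNode(state: CounterState): CounterState {
  state.count += 1; // in-place mutation
  return state;
}

// Right: leave the input alone and return a fresh partial update for
// the framework to merge.
function goodNode(state: CounterState): Partial<CounterState> {
  return { count: state.count + 1 };
}

const input: CounterState = { count: 0 };
const update = goodNode(input);
// input.count is still 0; update.count is 1
```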

Async Operations: All node functions should be async if they perform I/O operations. Forgetting async/await can cause race conditions in complex workflows.
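A minimal sketch of an async node (the fetch here is a stand-in for a real API call):

```typescript
// I/O-bound nodes should be declared async; the runtime awaits each
// node before following its edges, so downstream nodes never read
// half-finished state.
type FetchState = { data: string };

async function fetchNode(_state: FetchState): Promise<Partial<FetchState>> {
  const data = await Promise.resolve("fetched"); // stands in for real I/O
  return { data };
}
```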

Circular Dependencies: Adding edges that create cycles without proper exit conditions will cause infinite loops. Always ensure graphs have clear termination paths.
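A safe loop pairs the cycle with an explicit exit check; in a real graph the "done" branch would map to END via addConditionalEdges (names here are illustrative):

```typescript
type LoopState = { quality: number; iterations: number };

// Exit when quality is good enough OR an iteration cap is hit, so a
// refine loop can never spin forever.
function loopRouter(state: LoopState): "refine" | "done" {
  if (state.quality >= 0.9 || state.iterations >= 5) return "done";
  return "refine";
}

// Simulate the cycle the router would drive.
let loop: LoopState = { quality: 0.5, iterations: 0 };
while (loopRouter(loop) === "refine") {
  loop = { quality: loop.quality + 0.25, iterations: loop.iterations + 1 };
}
```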

Concurrent State Updates: When branches run in parallel, two nodes that write the same state key will conflict. Write to disjoint keys, or attach a reducer to the shared key so concurrent updates merge deterministically.

Checkpointer Lifecycle: Checkpointers must be properly closed to avoid database connection leaks. Use try/finally blocks or proper cleanup.

Conditional Edge Coverage: Conditional edge functions must handle all possible state values. Missing cases will cause runtime errors when unexpected states occur.
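A switch with an explicit default keeps the router total (the intent values are hypothetical):

```typescript
type RouteTarget = "search" | "answer" | "fallback";

// Every expected intent maps to a target; anything unexpected takes
// the fallback edge instead of returning undefined at runtime.
function intentRouter(state: { intent: string }): RouteTarget {
  switch (state.intent) {
    case "search": return "search";
    case "answer": return "answer";
    default: return "fallback";
  }
}
```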

Memory Limits: Long-running agents can accumulate large state objects. Implement state pruning strategies for production deployments.
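A simple pruning helper illustrates the idea (plain TypeScript, names illustrative):

```typescript
type Msg = { role: string; content: string };

// Keep only the newest `limit` messages when writing state back, so a
// long-lived thread's checkpoint stays bounded.
function pruneMessages(messages: Msg[], limit: number): Msg[] {
  return messages.length > limit ? messages.slice(-limit) : messages;
}

const history: Msg[] = Array.from({ length: 10 }, (_, i) => ({
  role: i % 2 === 0 ? "user" : "ai",
  content: `message ${i}`,
}));
const pruned = pruneMessages(history, 4);
// pruned holds messages 6..9 only
```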

Interrupt Handling: Interrupts pause the entire workflow. Don't use them in performance-critical paths where latency matters.

Recursion Limits: Default recursion limits may be too low for complex workflows. Set appropriate limits based on your workflow depth.

State Schema Changes: Changing state schemas breaks existing checkpoints. Implement migration strategies for production systems with persistent state.
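One migration strategy is version-aware loading, sketched here with illustrative field names:

```typescript
// Upgrade old checkpoints on read instead of crashing the graph when
// the schema gains a field.
type StateV1 = { messages: string[] };
type StateV2 = { messages: string[]; summary: string };

function migrateCheckpoint(raw: StateV1 | StateV2): StateV2 {
  // V1 checkpoints lack `summary`; fill a safe default on load.
  return "summary" in raw ? raw : { ...raw, summary: "" };
}

const legacy: StateV1 = { messages: ["hi"] };
const migrated = migrateCheckpoint(legacy);
```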