# LangChain Deep Agents (JS)

Complex multi-step AI agents with built-in task planning, memory, and subagent delegation.

```shell
$ npx docs2skills add langchain-deep-agents-js
```
## What this skill does
Deep Agents provides an "agent harness" - a pre-built framework for creating AI agents that can handle complex, multi-step tasks requiring planning, context management, and delegation. Unlike basic LLM wrappers, Deep Agents includes built-in capabilities for task decomposition, file system tools for managing large contexts, spawning specialized subagents for isolated work, and persistent memory across conversations.
The library sits on top of LangChain's core agent building blocks and uses LangGraph for production-ready agent execution. It's designed for scenarios where simple tool calling isn't enough - when you need agents that can break down complex requests, maintain state across multiple interactions, delegate specialized work to subagents, and manage large amounts of contextual information through file system operations.
Deep Agents bridges the gap between simple chatbots and complex agentic systems, providing the infrastructure needed for agents that can handle enterprise-level workflows, multi-session projects, and tasks requiring sophisticated planning and coordination.
## Prerequisites
- Node.js 18+ (ES modules support required)
- TypeScript support recommended
- An API key for a supported LLM provider (OpenAI, Anthropic, etc.)
- Basic familiarity with LangChain tool calling patterns
- Understanding of async/await patterns in JavaScript
## Quick start
```shell
npm install deepagents langchain @langchain/core
```
```typescript
import * as z from "zod";
import { createDeepAgent } from "deepagents";
import { tool } from "langchain";

const getWeather = tool(
  ({ city }) => `It's always sunny in ${city}!`,
  {
    name: "get_weather",
    description: "Get the weather for a given city",
    schema: z.object({
      city: z.string(),
    }),
  },
);

const agent = createDeepAgent({
  tools: [getWeather],
  system: "You are a helpful assistant",
});

console.log(
  await agent.invoke({
    messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
  }),
);
```
## Core concepts
**Agent Harness Architecture**: Deep Agents implements the core tool-calling loop with enhanced capabilities. Unlike raw LangChain agents, it provides built-in tools for file management, memory persistence, and subagent orchestration. The harness manages the execution flow while allowing custom tool integration.

**Task Planning and Decomposition**: Agents automatically break down complex requests into manageable subtasks. The planning system analyzes user requests and creates execution strategies, handling dependencies and sequencing without manual workflow definition.

**File System Context Management**: Built-in file system tools allow agents to read, write, and manage files for context persistence. This enables handling of large documents, code projects, and multi-session workflows where context exceeds token limits.

**Subagent Spawning**: Agents can create specialized subagents for specific tasks, providing context isolation and parallel processing capabilities. Subagents inherit relevant context while maintaining separation of concerns.

**Persistent Memory**: Memory systems persist across conversations and sessions, enabling agents to learn user preferences, remember project details, and maintain continuity across multiple interactions.
## Key API surface
| Function | Description |
|---|---|
| `createDeepAgent(config)` | Creates a new deep agent with the specified tools and system prompt |
| `agent.invoke({ messages })` | Executes the agent with a message array and returns the final response |
| `agent.stream({ messages })` | Streams agent execution for real-time partial responses |
| `config.tools` | Array of LangChain tools available to the agent |
| `config.system` | System prompt defining agent behavior and role |
| `config.memory` | Memory configuration for persistence across sessions |
| `config.fileSystem` | File system tool configuration for context management |
| `config.subagents` | Subagent spawning and delegation settings |
| `config.llm` | Language model configuration and provider settings |
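The `invoke` and `stream` entry points above differ in how results arrive. The sketch below shows one way to consume a stream; the chunk shape and async-iterable behavior of `agent.stream` are assumptions here, and `mockStream` stands in for a real Deep Agent so the loop runs without an LLM call.

```typescript
// Sketch: consuming a streamed agent run. The chunk shape and async-iterable
// behavior of agent.stream are assumptions; mockStream stands in for a real
// Deep Agent so the loop runs without an LLM call.
type AgentChunk = { messages: { role: string; content: string }[] };

async function* mockStream(): AsyncGenerator<AgentChunk> {
  yield { messages: [{ role: "assistant", content: "Checking the weather..." }] };
  yield { messages: [{ role: "assistant", content: "It's always sunny in Tokyo!" }] };
}

async function collectStream(stream: AsyncIterable<AgentChunk>): Promise<string[]> {
  const parts: string[] = [];
  for await (const chunk of stream) {
    // With a real agent, render each partial update as it arrives.
    parts.push(chunk.messages[chunk.messages.length - 1].content);
  }
  return parts;
}
```

With a real agent the same loop applies to whatever iterable `agent.stream({ messages })` returns; check the exact return type against the installed version.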
## Common patterns
Multi-step workflow agent:

```typescript
const workflowAgent = createDeepAgent({
  tools: [analyzeData, generateReport, sendEmail],
  system: "Break down complex data analysis tasks into steps: analyze, report, notify",
});
```

File-based context agent:

```typescript
const codeAgent = createDeepAgent({
  tools: [readFile, writeFile, executeCode],
  system: "You can read and modify files in the current directory for coding tasks",
  fileSystem: { enabled: true, basePath: "./workspace" },
});
```

Memory-persistent assistant:

```typescript
const personalAgent = createDeepAgent({
  tools: [searchWeb, saveNote],
  system: "Remember user preferences and project details across conversations",
  memory: { persist: true, sessionId: "user-123" },
});
```

Subagent delegation pattern:

```typescript
const managerAgent = createDeepAgent({
  tools: [createSubagent, delegateTask],
  system: "Delegate specialized tasks to expert subagents when needed",
  subagents: { enabled: true, maxConcurrent: 3 },
});
```
## Configuration
| Option | Default | Description |
|---|---|---|
| `tools` | `[]` | Array of LangChain tools for agent capabilities |
| `system` | `""` | System prompt defining agent role and behavior |
| `llm` | Auto-detected | Language model instance or configuration |
| `memory.persist` | `false` | Enable persistent memory across sessions |
| `memory.sessionId` | `null` | Unique identifier for memory isolation |
| `fileSystem.enabled` | `false` | Enable built-in file system tools |
| `fileSystem.basePath` | `"./"` | Base directory for file operations |
| `subagents.enabled` | `false` | Allow spawning of specialized subagents |
| `subagents.maxConcurrent` | `1` | Maximum number of concurrent subagents |
| `planning.enabled` | `true` | Enable automatic task planning and decomposition |
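Putting the table together, a combined configuration might look like the sketch below. The field shapes follow the table above, but option names can change between releases, so check them against the installed version of deepagents.

```typescript
// Sketch of a combined configuration using the options documented above.
// Field shapes follow the configuration table; verify option names against
// the installed version of deepagents.
const agentConfig = {
  tools: [],                                              // add LangChain tools here
  system: "You are a research assistant that plans before acting.",
  memory: { persist: true, sessionId: "user-123" },       // isolate memory per user
  fileSystem: { enabled: true, basePath: "./workspace" }, // sandbox file tools
  subagents: { enabled: true, maxConcurrent: 2 },         // cap parallel subagents
  planning: { enabled: true },                            // automatic decomposition
};
```

Passing such an object to `createDeepAgent(agentConfig)` follows the same pattern as the Quick start.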
## Best practices
**Design system prompts for planning**: Write system prompts that encourage breaking down complex tasks. Include phrases like "analyze the request step-by-step" and "break complex tasks into manageable parts".

**Use file system tools for large contexts**: When working with codebases, documents, or multi-file projects, enable file system tools rather than trying to fit everything in message context.

**Implement proper error handling for subagents**: Subagent failures should be handled gracefully with fallback strategies. Always implement timeout and retry logic for delegated tasks.

**Structure tools for composability**: Design custom tools that work well together and can be chained by the planning system. Include clear descriptions and schema validation.

**Leverage memory for user preferences**: Store user coding style preferences, project conventions, and frequently used patterns in persistent memory to improve agent effectiveness over time.

**Set appropriate concurrency limits**: Start with low `maxConcurrent` values for subagents to avoid resource exhaustion, especially in serverless environments.

**Use session isolation**: Always set unique `sessionId` values when serving multiple users to prevent memory cross-contamination.
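For the session-isolation practice above, one approach is to derive a stable per-user `sessionId`. The helper and naming scheme below are illustrative, not part of the deepagents API.

```typescript
import { createHash } from "node:crypto";

// Hypothetical helper: derive a stable, collision-resistant sessionId per
// user so persistent memory never crosses between users. The naming scheme
// (app prefix + truncated sha256 of the user id) is an illustration only.
function sessionIdFor(userId: string, appName = "my-app"): string {
  const digest = createHash("sha256").update(userId).digest("hex").slice(0, 16);
  return `${appName}:${digest}`;
}
```

The derived value would then go into the memory configuration, e.g. `memory: { persist: true, sessionId: sessionIdFor(currentUserId) }`.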
## Gotchas and common mistakes
**Memory persistence requires explicit session IDs**: Forgetting to set `sessionId` causes memory to leak between different conversations and users.

**File system tools respect `basePath` restrictions**: Agents cannot access files outside the configured `basePath` for security. Plan your directory structure accordingly.

**Subagent spawning has overhead**: Each subagent creates new LLM calls. Don't enable subagents for simple tasks that the main agent can handle.

**Tool descriptions affect planning quality**: Vague or incomplete tool descriptions lead to poor task decomposition. Write detailed, action-oriented descriptions.

**`stream` vs `invoke` usage patterns differ**: `stream()` requires handling partial responses and state management, while `invoke()` returns complete responses. Don't mix the patterns carelessly.

**Large file operations can exceed token limits**: Even with file system tools, loading massive files can cause context overflow. Implement chunking strategies for large documents.
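A minimal chunking sketch for the large-file gotcha above. Character counts stand in for tokens here; swap in a real tokenizer for accurate limits.

```typescript
// Minimal chunking sketch: split a large document into (optionally
// overlapping) chunks under a character budget so each chunk fits the model
// context. Characters approximate tokens; use a tokenizer for exact limits.
function chunkText(text: string, maxChars: number, overlap = 0): string[] {
  if (maxChars <= overlap) throw new Error("maxChars must exceed overlap");
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + maxChars));
    start += maxChars - overlap; // step back by `overlap` to keep continuity
  }
  return chunks;
}
```

A file-reading tool could apply this before handing content to the agent, feeding chunks one at a time or summarizing each chunk first.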
**Memory growth requires management**: Persistent memory grows over time. Implement cleanup strategies for long-running applications.

**Planning can create unnecessary complexity**: For simple tool calling scenarios, Deep Agents may be overkill. Use standard LangChain agents for straightforward tasks.

**Async tool execution needs proper error handling**: Tools that perform I/O operations must handle network failures, permission errors, and timeouts gracefully.
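One way to harden async tools, sketched here as generic wrappers (not part of the deepagents API), is to combine a cleared timeout with simple retries:

```typescript
// Illustrative helpers (not part of the deepagents API): wrap tool I/O with
// a timeout and simple retries so network failures and hangs surface as
// clean errors instead of stalling the agent loop.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`tool timed out after ${ms}ms`)), ms);
  });
  // Clearing the timer prevents a stray rejection after the race settles.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

async function withRetry<T>(
  fn: () => Promise<T>,
  { retries = 2, timeoutMs = 5_000 } = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await withTimeout(fn(), timeoutMs);
    } catch (err) {
      lastError = err; // failed or timed out; try again if attempts remain
    }
  }
  throw lastError;
}
```

A tool body can then call `withRetry(() => fetchSomething(args))` so transient failures are retried and hangs are bounded rather than propagating raw errors into the agent loop.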
**System prompt conflicts with built-in behaviors**: Overly restrictive system prompts can interfere with built-in planning and delegation capabilities. Keep system prompts focused on role and domain rather than execution constraints.