langchain javascript
Open-source framework for building custom agents and LLM applications in TypeScript
$ npx docs2skills add langchain-js
What this skill does
LangChain provides a standardized interface for building AI agents and applications that work across multiple LLM providers. Instead of writing provider-specific code for OpenAI, Anthropic, Google, or other models, LangChain abstracts these differences behind a unified API. This helps avoid vendor lock-in and lets you switch models with minimal code changes.
The framework centers on agents—autonomous systems that can reason, use tools, and maintain conversation state. LangChain handles the orchestration between models and tools, message formatting, streaming responses, and execution persistence. Built on top of LangGraph for durability and human-in-the-loop workflows, it provides production-ready agent infrastructure out of the box.
LangChain sits between your application logic and raw LLM APIs, providing higher-level abstractions for common patterns like tool calling, structured output, context management, and multi-turn conversations. It's designed for developers who need to prototype quickly but require production flexibility.
Prerequisites
- Node.js 18+ (ES modules support required)
- TypeScript 4.9+ recommended
- API keys for chosen model providers (OpenAI, Anthropic, etc.)
- Zod for schema validation (peer dependency)
- Works with any TypeScript/JavaScript runtime (Node.js, Deno, Edge Runtime)
Quick start
npm install langchain @langchain/anthropic zod
import * as z from "zod";
import { createAgent, tool } from "langchain";

const getWeather = tool(
  ({ city }) => `It's always sunny in ${city}!`,
  {
    name: "get_weather",
    description: "Get the weather for a given city",
    schema: z.object({
      city: z.string(),
    }),
  },
);

const agent = createAgent({
  model: "claude-sonnet-4-5-20250929",
  tools: [getWeather],
});

const result = await agent.invoke({
  messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
});

console.log(result);
Core concepts
Agents are the primary abstraction—they coordinate between models and tools to accomplish tasks. An agent receives messages, decides what actions to take, executes tools, and formulates responses. LangChain agents are built on LangGraph for durable execution and state management.
Models represent the standardized interface to LLM providers. LangChain normalizes different provider APIs so you can swap between OpenAI, Anthropic, Google, and others without changing your code. Models handle tokenization, streaming, and response formatting consistently.
Tools are functions that agents can call to interact with external systems—APIs, databases, file systems, or any custom logic. Tools are defined with Zod schemas for automatic validation and structured calling.
Messages follow a standardized format with roles (user, assistant, system, tool) that works across all providers. LangChain handles the translation between its message format and each provider's specific requirements.
Middleware allows you to intercept and modify requests/responses at various points in the agent execution flow, enabling logging, authentication, rate limiting, and custom processing.
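The exact middleware API differs across LangChain versions, but the intercept pattern itself can be illustrated with a plain wrapper around an invoke-style function. Nothing below is a LangChain export; `withLogging` is a hand-rolled sketch of what middleware does conceptually:

```typescript
// Hypothetical sketch, not a LangChain API: middleware as a function that
// wraps an invoke-style call, running logic before and after execution.
type Invoke = (input: { messages: unknown[] }) => Promise<unknown>;

function withLogging(next: Invoke): Invoke {
  return async (input) => {
    console.log(`[agent] invoked with ${input.messages.length} message(s)`);
    const start = Date.now();
    const result = await next(input);
    console.log(`[agent] finished in ${Date.now() - start} ms`);
    return result;
  };
}

// Usage idea: const loggedInvoke = withLogging((i) => agent.invoke(i));
```

The same shape supports authentication checks (reject before calling `next`) or rate limiting (await a token bucket before calling `next`).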
Key API surface
| Function/Class | Signature | Description |
|---|---|---|
| createAgent() | (config: AgentConfig) => Agent | Creates a new agent with model and tools |
| tool() | (fn: Function, config: ToolConfig) => Tool | Defines a tool with schema validation |
| agent.invoke() | (input: { messages: Message[] }) => Promise<Response> | Executes agent with message history |
| agent.stream() | (input: { messages: Message[] }) => AsyncIterator<Chunk> | Streams agent responses |
| withStructuredOutput() | (schema: ZodSchema) => Agent | Forces structured JSON responses |
| ChatAnthropic | new ChatAnthropic({ apiKey, model }) | Anthropic model instance |
| ChatOpenAI | new ChatOpenAI({ apiKey, model }) | OpenAI model instance |
| HumanMessage | new HumanMessage(content) | User message constructor |
| AIMessage | new AIMessage(content) | Assistant message constructor |
| SystemMessage | new SystemMessage(content) | System prompt constructor |
Common patterns
Tool-using agent with multiple functions:
const calculator = tool(
  ({ a, b, op }) => {
    if (op === 'add') return a + b;
    if (op === 'multiply') return a * b;
    return 'Invalid operation';
  },
  {
    name: 'calculator',
    description: 'Perform basic math',
    schema: z.object({
      a: z.number(),
      b: z.number(),
      op: z.enum(['add', 'multiply'])
    })
  }
);

const agent = createAgent({
  model: 'gpt-4',
  tools: [calculator, getWeather],
  systemMessage: 'You are a helpful assistant.'
});
Structured output for data extraction:
const extractionAgent = createAgent({
  model: 'claude-3-sonnet-20240229'
}).withStructuredOutput(
  z.object({
    name: z.string(),
    age: z.number(),
    skills: z.array(z.string())
  })
);
Multi-turn conversation with memory:
const conversationHistory = [];

const response1 = await agent.invoke({
  messages: [...conversationHistory, { role: 'user', content: 'My name is Alice' }]
});

conversationHistory.push(
  { role: 'user', content: 'My name is Alice' },
  { role: 'assistant', content: response1.content }
);

const response2 = await agent.invoke({
  messages: [...conversationHistory, { role: 'user', content: 'What is my name?' }]
});
Streaming responses:
const stream = await agent.stream({
  messages: [{ role: 'user', content: 'Write a story' }]
});

for await (const chunk of stream) {
  if (chunk.content) {
    process.stdout.write(chunk.content);
  }
}
Configuration
- model: Model identifier string or model instance (required)
- tools: Array of tool definitions (optional)
- systemMessage: System prompt string (optional)
- temperature: Response randomness 0-1 (default: provider-specific)
- maxTokens: Maximum response length (default: provider-specific)
- streaming: Enable streaming responses (default: false)
- middleware: Array of middleware functions (optional)
- memory: Conversation persistence config (optional)
Environment variables:
- OPENAI_API_KEY: OpenAI authentication
- ANTHROPIC_API_KEY: Anthropic authentication
- GOOGLE_API_KEY: Google AI authentication
- LANGSMITH_API_KEY: LangSmith tracing (optional)
Best practices
Use provider-specific packages (@langchain/openai, @langchain/anthropic) instead of the generic langchain package for better tree-shaking and type safety.
Define tool schemas carefully—the quality of tool descriptions and schemas directly impacts agent performance. Be specific about parameters and expected outputs.
Implement proper error handling around tool execution, as external APIs can fail. Tools should return error messages rather than throwing exceptions.
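One way to follow this rule is to wrap the tool body so any thrown error comes back as a readable string. The `safeToolBody` helper below is illustrative, not a LangChain utility; the wrapped function is what you would pass as the first argument to `tool()`:

```typescript
// Illustrative helper (not a LangChain export): converts thrown errors
// into strings the model can read, instead of crashing the agent run.
function safeToolBody<T>(fn: (input: T) => Promise<string> | string) {
  return async (input: T): Promise<string> => {
    try {
      return await fn(input);
    } catch (err) {
      return `Tool failed: ${err instanceof Error ? err.message : String(err)}`;
    }
  };
}

// Usage idea:
// const getWeather = tool(safeToolBody(async ({ city }) => fetchWeather(city)), { ... });
```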
Use streaming for long responses to improve user experience. Handle partial responses and connection interruptions gracefully.
Implement conversation memory management—don't let message histories grow unbounded. Consider summarization or sliding window approaches for long conversations.
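A minimal sliding-window approach keeps the system message (if any) plus the most recent N messages. This is a sketch assuming the plain `{ role, content }` message shape used in the examples above:

```typescript
interface Msg {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
}

// Keep system messages plus the last `keep` non-system messages,
// so the prompt stays bounded as the conversation grows.
function trimHistory(messages: Msg[], keep: number): Msg[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  return [...system, ...rest.slice(-keep)];
}
```

For higher fidelity than a hard cutoff, the dropped prefix can be replaced by a model-generated summary message instead of being discarded.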
Structure system messages clearly with specific instructions, examples, and behavioral guidelines. System messages significantly influence agent behavior.
Use middleware for cross-cutting concerns like logging, authentication, and rate limiting rather than embedding this logic in individual tools.
Gotchas and common mistakes
Tool naming conflicts: Multiple tools with the same name will cause unpredictable behavior. Ensure unique tool names across your agent.
Schema validation failures: Tools with invalid Zod schemas will fail at runtime. Test tool schemas thoroughly with various inputs.
Provider-specific limitations: Not all models support tool calling or structured output. Check provider capabilities before assuming features work.
Message format inconsistencies: Some providers have strict message alternation requirements (user/assistant/user/assistant). Use provider-specific adapters when needed.
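When a provider enforces strict alternation, one workaround is to merge consecutive same-role messages before sending. This is a hand-rolled sketch, not a LangChain adapter:

```typescript
interface ChatMsg {
  role: string;
  content: string;
}

// Merge runs of messages sharing a role into a single message,
// joining contents with newlines, to satisfy strict alternation rules.
function mergeConsecutiveRoles(messages: ChatMsg[]): ChatMsg[] {
  const out: ChatMsg[] = [];
  for (const m of messages) {
    const last = out[out.length - 1];
    if (last && last.role === m.role) {
      last.content += "\n" + m.content;
    } else {
      out.push({ ...m });
    }
  }
  return out;
}
```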
Memory leaks with streaming: Always close streaming iterators properly or use for await loops to prevent memory leaks in long-running applications.
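When you stop reading a stream early (outside a `for await` loop, which does this automatically), a defensive pattern is to call the iterator's `return()` in a `finally` block. This sketch works over any `AsyncIterator`, not just LangChain streams:

```typescript
// Generic sketch: read up to `limit` chunks, then release the iterator
// so the producer can clean up even on early exit or error.
async function takeChunks<T>(iter: AsyncIterator<T>, limit: number): Promise<T[]> {
  const chunks: T[] = [];
  try {
    for (let i = 0; i < limit; i++) {
      const { value, done } = await iter.next();
      if (done) break;
      chunks.push(value);
    }
  } finally {
    await iter.return?.(); // signal early termination to the producer
  }
  return chunks;
}
```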
Tool execution timeouts: Long-running tools can cause agent timeouts. Implement timeouts and cancellation in tool functions.
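A simple way to bound tool latency is to race the tool's promise against a timer. This is a plain-Promise sketch; the 5-second value in the usage comment is only an example:

```typescript
// Race a tool call against a timer; on timeout, resolve with a fallback
// value the model can act on instead of letting the agent hang.
function withTimeout<T>(work: Promise<T>, ms: number, onTimeout: T): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(onTimeout), ms);
  });
  return Promise.race([work, timeout]).finally(() => clearTimeout(timer));
}

// Usage idea inside a tool body:
// const data = await withTimeout(fetchSlowApi(), 5_000, "Tool timed out; try again later.");
```

Note that `Promise.race` abandons the slow promise rather than cancelling it; for true cancellation, pass an `AbortSignal` into the underlying request as well.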
Environment variable precedence: Provider packages check for API keys in specific environment variable names. Setting the wrong variable name will cause authentication failures.
Model identifier formats: Different providers use different model naming conventions. Use exact model identifiers from provider documentation.
Concurrent agent execution: Agents maintain state during execution. Don't share agent instances across concurrent requests—create separate instances or use proper state isolation.
Token limit exceeded: Large conversation histories can exceed model context limits. Implement conversation summarization or truncation strategies.
Tool response format: Tools should return strings or serializable objects. Returning complex objects or functions will cause serialization errors.