
Vercel AI SDK

TypeScript toolkit for AI-powered applications

What this skill does

The Vercel AI SDK is a unified TypeScript toolkit that standardizes AI model integration across 20+ providers (OpenAI, Anthropic, Google, AWS Bedrock, etc.). Instead of learning each provider's unique API, you use one consistent interface for text generation, structured data extraction, tool calling, streaming, and building AI agents.

The SDK has two main libraries: AI SDK Core (server-side LLM operations) and AI SDK UI (framework-agnostic React hooks for chat interfaces and generative UI). It eliminates vendor lock-in and reduces integration complexity while providing advanced features like streaming, tool calling, and multi-step agent workflows.

Prerequisites

  • Node.js 18+ or a compatible runtime
  • TypeScript recommended (the SDK ships with full type definitions)
  • API keys for your chosen providers (OpenAI, Anthropic, etc.)
  • Framework: React, Next.js, Vue, Svelte, or Node.js

Quick start

npm install ai @ai-sdk/openai

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-4'),
  prompt: 'What is TypeScript?',
});

console.log(text);

For chat interfaces (React):

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  
  return (
    <div>
      {messages.map(m => <div key={m.id}>{m.content}</div>)}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}

Core concepts

Unified Model Interface: All providers share one API (generateText(), streamText(), and so on); switch providers by changing only the model parameter.

Streaming by Default: Built-in support for real-time streaming responses with backpressure handling.
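
A minimal sketch of how such a stream is consumed. The fakeStream generator below is an illustrative stand-in for a real model stream, not an SDK API; the consuming loop has the same shape as iterating streamText's result.textStream.

```typescript
// fakeStream stands in for a real model token stream.
async function* fakeStream(): AsyncGenerator<string> {
  for (const token of ['Type', 'Script', ' is', ' great']) {
    yield token;
  }
}

async function collect(stream: AsyncIterable<string>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    text += chunk; // each chunk is pulled on demand, giving natural backpressure
  }
  return text;
}
```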

Tools & Agents: LLMs can call functions (tools) and chain multiple calls automatically for complex workflows.

Structured Generation: Generate typed objects with Zod schemas instead of parsing text.

Framework Adapters: Provider-agnostic hooks for React, Vue, Svelte that handle state management and streaming.

Key API surface

AI SDK Core

  • generateText(options) - Generate text with a model
  • streamText(options) - Stream text generation
  • generateObject(options) - Generate structured data with schema
  • streamObject(options) - Stream object generation
  • generateImage(options) - Generate images
  • embed(options) - Generate embeddings
  • experimental_createAgent(options) - Create AI agents

AI SDK UI (React)

  • useChat(options) - Chat interface with message state
  • useCompletion(options) - Text completion interface
  • useObject(options) - Stream structured object generation
  • useAssistant(options) - Assistant-style conversations

Framework Imports

  • 'ai' - Core functions
  • 'ai/react' - React hooks
  • 'ai/vue' - Vue composables
  • 'ai/svelte' - Svelte stores

Common patterns

Multi-provider setup

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

const model = process.env.NODE_ENV === 'development' 
  ? openai('gpt-3.5-turbo') 
  : anthropic('claude-3-sonnet-20240229');

const { text } = await generateText({
  model,
  prompt: 'Hello world',
});

Tool calling

import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { text } = await generateText({
  model: openai('gpt-4'),
  prompt: 'What is the weather like in San Francisco?',
  tools: {
    getWeather: tool({
      description: 'Get weather for a location',
      parameters: z.object({
        location: z.string().describe('The city name'),
      }),
      execute: async ({ location }) => {
        return `Weather in ${location}: 72°F and sunny`;
      },
    }),
  },
});

Structured data generation

import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { object } = await generateObject({
  model: openai('gpt-4'),
  schema: z.object({
    name: z.string(),
    age: z.number(),
    occupation: z.string(),
  }),
  prompt: 'Generate a random person profile',
});
// object is fully typed based on schema

Chat with persistence

import { useChat } from 'ai/react';

const { messages, append } = useChat({
  api: '/api/chat',
  initialMessages: [
    { id: '1', role: 'user', content: 'Hello' }
  ],
  onFinish: (message) => {
    // Save to database
    saveMessage(message);
  },
});

Streaming with custom data

import { streamText, StreamData } from 'ai';
import { openai } from '@ai-sdk/openai';

const data = new StreamData();
data.append({ timestamp: Date.now() });

const result = await streamText({
  model: openai('gpt-4'),
  prompt: 'Tell me about TypeScript',
  onFinish() {
    data.close();
  },
});

return result.toDataStreamResponse({ data });

Configuration

Environment variables

OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_GENERATIVE_AI_API_KEY=...

Provider setup

import { openai, createOpenAI } from '@ai-sdk/openai';

// Custom base URL and API key via a provider instance
const customOpenAI = createOpenAI({
  baseURL: 'https://api.custom.com/v1',
  apiKey: process.env.CUSTOM_API_KEY,
});

// Call settings like temperature and maxTokens are passed to
// generateText, not to the model factory
const { text } = await generateText({
  model: openai('gpt-4'),
  temperature: 0.7,
  maxTokens: 1000,
  prompt: 'Hello',
});

Middleware

import { experimental_wrapLanguageModel as wrapLanguageModel } from 'ai';

const wrappedModel = wrapLanguageModel({
  model: openai('gpt-4'),
  middleware: {
    transformParams: async ({ params }) => ({
      ...params,
      headers: { 'Custom-Header': 'value' }
    }),
  },
});

Best practices

  1. Use TypeScript: Leverage full type safety for schemas and tool parameters
  2. Handle streaming properly: Always handle stream errors and cleanup
  3. Implement rate limiting: Use middleware or edge functions for API protection
  4. Cache embeddings: Store expensive embedding generations in vector databases
  5. Validate tool outputs: Always validate external API responses in tool functions
  6. Use structured generation: Prefer generateObject() over text parsing for data extraction
  7. Implement proper error boundaries: Handle model failures gracefully in UI components
  8. Optimize token usage: Use appropriate models for tasks (GPT-3.5 for simple, GPT-4 for complex)
  9. Stream for better UX: Always use streaming for user-facing text generation
  10. Secure API routes: Never expose API keys in client code, use server-side endpoints
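
For point 4, a minimal in-memory sketch of embedding caching. cachedEmbed and the embed parameter are illustrative names, not SDK APIs; production setups would typically persist vectors in a vector database instead.

```typescript
// In-memory cache keyed by input text.
const embeddingCache = new Map<string, number[]>();

async function cachedEmbed(
  text: string,
  embed: (t: string) => Promise<number[]>,
): Promise<number[]> {
  const hit = embeddingCache.get(text);
  if (hit) return hit; // skip the expensive call on repeat inputs
  const vector = await embed(text);
  embeddingCache.set(text, vector);
  return vector;
}
```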

Gotchas and common mistakes

Model string format: Use provider-specific format - openai('gpt-4') not 'gpt-4' directly

Streaming requires proper setup: Must use toDataStreamResponse() in API routes for streaming to work with UI hooks

Tool execution is automatic: Tools execute immediately when called by model - implement proper validation and error handling

Schema validation failures: generateObject() will retry if model output doesn't match schema, potentially using extra tokens
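
A sketch of the retry-until-valid behavior, to make the token cost concrete. generateWithRetry, callModel, and validate are stand-ins for a real model call and a Zod parse, not SDK APIs.

```typescript
type Checked<T> = { ok: true; value: T } | { ok: false };

async function generateWithRetry<T>(
  callModel: () => Promise<unknown>,
  validate: (raw: unknown) => Checked<T>,
  maxAttempts = 2,
): Promise<T> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const raw = await callModel(); // every retry spends additional tokens
    const checked = validate(raw);
    if (checked.ok) return checked.value;
  }
  throw new Error(`no schema-valid output after ${maxAttempts} attempts`);
}
```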

Framework-specific imports: Import from 'ai/react', 'ai/vue', etc. - not from 'ai' for UI hooks

API route streaming: Next.js App Router requires specific response format - use result.toDataStreamResponse()

Environment variables: Some providers require specific env var names - check provider docs

Token limits: Different models have different context windows - monitor token usage in long conversations

Concurrent requests: Rate limits apply per API key - implement proper queuing for high-volume apps
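
A tiny sketch of client-side queuing. createLimiter is illustrative, not an SDK API; libraries such as p-limit do this more robustly.

```typescript
// Caps how many tasks run at once; extra callers queue up and are
// woken one at a time as running tasks finish.
function createLimiter(maxConcurrent: number) {
  let active = 0;
  const waiting: Array<() => void> = [];
  return async function run<T>(task: () => Promise<T>): Promise<T> {
    while (active >= maxConcurrent) {
      await new Promise<void>((resolve) => waiting.push(resolve));
    }
    active++;
    try {
      return await task();
    } finally {
      active--;
      waiting.shift()?.(); // wake one queued caller, if any
    }
  };
}
```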

Type inference: Zod schema types must be properly inferred - use z.infer<typeof schema> for complex types

Memory management: Long chat histories consume tokens - implement conversation pruning
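
A rough sketch of pruning: keep system messages plus the newest messages that fit a token budget. pruneHistory and the 4-characters-per-token estimate are illustrative; a real implementation would use the model's tokenizer.

```typescript
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Rough heuristic, not a real tokenizer.
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

function pruneHistory(messages: ChatMessage[], budget: number): ChatMessage[] {
  const system = messages.filter((m) => m.role === 'system');
  const rest = messages.filter((m) => m.role !== 'system');
  let used = system.reduce((n, m) => n + estimateTokens(m.content), 0);
  const kept: ChatMessage[] = [];
  for (let i = rest.length - 1; i >= 0; i--) { // walk newest to oldest
    const cost = estimateTokens(rest[i].content);
    if (used + cost > budget) break;
    used += cost;
    kept.unshift(rest[i]);
  }
  return [...system, ...kept];
}
```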

Error propagation: Streaming errors need special handling - use error boundaries in React components
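
Why streaming errors are special, as a sketch: a stream can fail after the first chunks have already been delivered, so the error must be caught inside the consuming loop (consumeStream is an illustrative helper, not an SDK API).

```typescript
async function consumeStream(
  stream: AsyncIterable<string>,
  onChunk: (chunk: string) => void,
  onError: (err: unknown) => void,
): Promise<void> {
  try {
    for await (const chunk of stream) onChunk(chunk);
  } catch (err) {
    onError(err); // partial output was already shown; report, don't retry blindly
  }
}
```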

CORS issues: API routes must handle CORS properly for cross-origin requests

Tool calling loops: Agents can get stuck in tool calling loops - implement max iteration limits
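
A sketch of the iteration cap, analogous to the SDK's maxSteps option. step() stands in for one model turn that either requests another tool call ('continue') or finishes with text; runAgent and AgentStep are illustrative names.

```typescript
type AgentStep = { type: 'continue' } | { type: 'done'; text: string };

async function runAgent(
  step: () => Promise<AgentStep>,
  maxSteps = 5,
): Promise<string> {
  for (let i = 0; i < maxSteps; i++) {
    const result = await step();
    if (result.type === 'done') return result.text;
    // otherwise the model asked for another tool call; loop again
  }
  throw new Error(`agent exceeded ${maxSteps} steps`);
}
```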

Provider switching: Different providers have different capabilities - check feature support before switching