
Trigger.dev

Background jobs and scheduled tasks framework for reliable long-running workflows

What this skill does

Trigger.dev is an open source background jobs framework that lets you write reliable workflows in plain async code. It handles long-running AI tasks, complex background jobs, and scheduled work with built-in queuing, automatic retries, and real-time monitoring. Unlike traditional job queues, you write tasks as regular async functions without timeouts, get elastic scaling, and zero infrastructure management.

The framework provides everything needed for production background processing: a CLI and SDK for writing tasks in your existing codebase, support for regular and scheduled tasks, full observability through a dashboard, and a Realtime API with React hooks for showing task status in your frontend. You can use Trigger.dev Cloud or self-host on your own infrastructure.

Prerequisites

  • Node.js 18+ or Bun
  • TypeScript support recommended
  • Trigger.dev account (cloud) or self-hosted instance
  • API key from Trigger.dev dashboard

Quick start

# Initialize in an existing project (the v3 CLI runs via npx, no global install needed)
npx trigger.dev@latest init

# Create your first task
// src/trigger/example.ts
import { task } from "@trigger.dev/sdk/v3";

export const helloWorldTask = task({
  id: "hello-world",
  run: async (payload: { message: string }) => {
    console.log("Hello World!", payload.message);
    
    // Simulate work
    await new Promise(resolve => setTimeout(resolve, 5000));
    
    return {
      message: "Task completed successfully!",
    };
  },
});

# Start dev server
npx trigger.dev@latest dev

# Trigger from your app
// In your application
import { helloWorldTask } from "./trigger/example";

const handle = await helloWorldTask.trigger({
  message: "This is a test"
});

console.log("Task triggered:", handle.id);

Core concepts

Tasks are the fundamental unit - async functions that can be triggered, queued, retried, and monitored. Tasks run in isolated environments with automatic error handling and observability.

Runs are task executions. Each trigger creates a run with a unique ID, status tracking, logs, and metadata. Runs can be queued, retried, cancelled, or replayed.

Triggering happens from your application code using the task's trigger() method. Tasks can also be scheduled with cron expressions or triggered by webhooks.

Queues and Concurrency control how many runs execute simultaneously. Configure global limits, per-task concurrency, and priority queues for different workload patterns.

Environments separate dev, staging, and production. Each environment has its own API keys, runs, and configuration.

Key API surface

// Task definition
task(options: {
  id: string;
  run: (payload, { ctx }) => Promise<any>;
  queue?: { concurrencyLimit: number };
  retry?: { maxAttempts: number };
  machine?: { preset: string };
})

// Triggering
task.trigger(payload, options?)
task.batchTrigger(items)
task.triggerAndWait(payload, options?)

// Scheduled tasks
schedules.task(options: {
  id: string;
  cron: string;
  run: (payload) => Promise<any>;
})

// Waiting and delays
wait.for({ seconds: 30 })
wait.until({ date: new Date() })
wait.forEvent("user.signup")
wait.forToken({ tokenId: "approval-123" })

// Logging and context
logger.info("Processing started", { userId: payload.userId })
ctx.run.id
ctx.run.isTest
ctx.environment.slug

// Runs API (management) and Realtime
runs.retrieve(runId)
runs.cancel(runId)
runs.replay(runId)
runs.subscribeToRun(runId) // Realtime: stream run updates as they happen

Common patterns

Long-running AI workflows:

import { task } from "@trigger.dev/sdk/v3";

export const processVideoTask = task({
  id: "process-video",
  machine: { preset: "large-1x" },
  run: async (payload: { videoUrl: string }) => {
    // Download video
    const video = await downloadVideo(payload.videoUrl);
    
    // Process with AI
    const transcript = await transcribeAudio(video.audioTrack);
    const summary = await generateSummary(transcript);
    
    // Upload results
    await uploadResults({ transcript, summary });
    
    return { processed: true, summary };
  },
});

Email sequences with delays:

import { task, wait } from "@trigger.dev/sdk/v3";

export const welcomeSequence = task({
  id: "welcome-sequence",
  run: async (payload: { userId: string, email: string }) => {
    // Send welcome email
    await sendEmail({
      to: payload.email,
      template: "welcome"
    });
    
    // Wait 3 days
    await wait.for({ days: 3 });
    
    // Send follow-up
    await sendEmail({
      to: payload.email,
      template: "tips"
    });
    
    // Wait 1 week
    await wait.for({ weeks: 1 });
    
    // Send final email
    await sendEmail({
      to: payload.email,
      template: "upgrade"
    });
  },
});

Batch processing with concurrency:

import { logger, task } from "@trigger.dev/sdk/v3";

export const processBatch = task({
  id: "process-batch",
  queue: { concurrencyLimit: 5 },
  run: async (payload: { items: string[] }) => {
    const results = [];
    
    for (const item of payload.items) {
      const result = await processItem(item);
      results.push(result);
      
      // Log progress
      logger.info(`Processed ${results.length}/${payload.items.length}`);
    }
    
    return results;
  },
});

Scheduled data sync:

import { logger, schedules } from "@trigger.dev/sdk/v3";

export const syncDataTask = schedules.task({
  id: "sync-external-data",
  cron: "0 */6 * * *", // Every 6 hours
  run: async () => {
    const data = await fetchExternalAPI();
    
    for (const record of data) {
      await database.upsert(record);
    }
    
    logger.info(`Synced ${data.length} records`);
  },
});

Human-in-the-loop approval:

import { task, wait } from "@trigger.dev/sdk/v3";

export const approvalWorkflow = task({
  id: "approval-workflow",
  run: async (payload: { requestId: string }) => {
    // Process initial request
    const request = await processRequest(payload.requestId);
    
    // Wait for approval
    await wait.forToken({
      tokenId: `approval-${payload.requestId}`,
      timeoutInSeconds: 86400 // 24 hours
    });
    
    // Continue after approval
    await finalizeRequest(request);
  },
});

Configuration

trigger.config.ts:

import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "proj_1234567890",
  logLevel: "info",
  retries: {
    enabledInDev: true,
    default: {
      maxAttempts: 3,
      minTimeoutInMs: 1000,
      maxTimeoutInMs: 10000,
      factor: 2,
      randomize: true,
    },
  },
  machine: {
    preset: "small-1x", // small-1x, medium-1x, large-1x
  },
  build: {
    extensions: [
      // each extension is imported from its build package,
      // e.g. import { prismaExtension } from "@trigger.dev/build/extensions/prisma";
      prismaExtension(),
      pythonExtension(),
    ],
  },
});

Environment variables:

TRIGGER_SECRET_KEY=tr_dev_1234567890
TRIGGER_API_URL=https://api.trigger.dev
TRIGGER_PROJECT_ID=proj_1234567890

Best practices

  1. Use descriptive task IDs - They appear in logs, URLs, and monitoring. Use kebab-case like send-welcome-email or process-payment.

  2. Structure payload types - Define TypeScript interfaces for task payloads to catch errors early and improve developer experience.

  3. Log meaningful progress - Use structured logging with context objects. Logs are searchable in the dashboard.

  4. Handle idempotency - Tasks may retry. Use idempotency keys for external API calls and database operations.

  5. Set appropriate concurrency limits - Don't overwhelm external APIs. Use queue concurrency to control resource usage.

  6. Use machine presets wisely - Match compute resources to task requirements. Memory-heavy AI and media tasks may need large-1x; most tasks run fine on small-1x.

  7. Implement proper error handling - Catch and handle expected errors. Let unexpected errors bubble up for automatic retries.

  8. Use wait strategically - Prefer wait.for() over setTimeout() for long delays. It's more reliable and visible.

  9. Test with triggerAndWait - In tests, use triggerAndWait() to get results synchronously.

  10. Monitor run usage - Check the dashboard for run duration, costs, and failure patterns.

Gotchas and common mistakes

Task IDs must be unique within a project. Changing a task ID creates a new task - existing runs continue under the old ID.

Payload size limits - Maximum 10MB per payload. For large data, use file uploads or database references instead.

Environment separation - Dev and prod environments are completely separate. API keys, runs, and schedules don't cross over.

Retry behavior - Any uncaught error triggers an automatic retry up to your configured maxAttempts, unless you throw AbortTaskRunError, which fails the run immediately. Deterministic logic errors will fail on every attempt, so abort early when a retry cannot succeed.

Wait vs setTimeout - Never use setTimeout() in tasks. Use wait.for() which survives process restarts and is visible in the dashboard.

Concurrency limits - Without a queue.concurrencyLimit, runs are bounded only by your environment's overall concurrency limit. Set per-task limits to avoid overwhelming external services.

Machine presets - Tasks run on shared infrastructure by default. Heavy workloads need explicit machine configuration.

Import restrictions - Tasks run in isolated environments. Avoid importing large dependencies or native modules without proper build extensions.

Deployment timing - Changes to task code require a deploy to take effect in production. Run npx trigger.dev@latest deploy.

Token expiration - wait.forToken() tokens can expire. Handle timeout scenarios gracefully.

Real-time limitations - Realtime API has connection limits. Don't subscribe to too many runs simultaneously.

Cron timezone - Scheduled tasks use UTC by default. Specify timezone explicitly for local scheduling.

Development vs production - Dev mode uses local execution. Production runs on Trigger.dev infrastructure with different performance characteristics.