Upstash Redis

Serverless Redis database with global edge replication and automatic scaling.

What this skill does

Upstash Redis provides a fully managed, serverless Redis-compatible database that scales from zero to billions of operations without infrastructure management. Unlike traditional Redis deployments, it offers durable persistence, automatic backups, and multi-region replication while maintaining sub-millisecond latency.

Developers use it for session storage, caching, real-time analytics, rate limiting, and pub/sub messaging when they need Redis performance without operational overhead. It's particularly valuable for serverless applications, global apps requiring low latency, and teams wanting enterprise Redis features without self-hosting complexity.

Prerequisites

  • Upstash account (free tier available)
  • Redis client library or redis-cli
  • Environment variables for connection credentials
  • TLS support (required for connections)

Quick start

npm install @upstash/redis
import { Redis } from '@upstash/redis'

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL,
  token: process.env.UPSTASH_REDIS_REST_TOKEN,
})

// Basic operations
await redis.set('key', 'value')
const value = await redis.get('key')
await redis.incr('counter')

Using redis-cli:

redis-cli --tls -a PASSWORD -h ENDPOINT -p PORT
> set user:123 "john@example.com"
> get user:123
> hset profile:123 name "John" age 30

Core concepts

REST API First: Unlike traditional Redis, Upstash prioritizes HTTP REST API over TCP connections, making it serverless-friendly and eliminating connection pooling issues.
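The REST protocol itself is simple: each command and its arguments become URL path segments, authenticated with a bearer token. A minimal sketch of calling it directly (the `buildCommandUrl`/`restCommand` helper names are illustrative, not part of any SDK):

```typescript
// Sketch of Upstash's REST protocol: command + args as URL path segments,
// e.g. GET https://xxx.upstash.io/set/foo/bar with an Authorization header.
function buildCommandUrl(baseUrl: string, ...parts: string[]): string {
  // Each command token and argument becomes one encoded path segment.
  return [baseUrl, ...parts.map((p) => encodeURIComponent(p))].join("/");
}

async function restCommand(
  baseUrl: string,
  token: string,
  ...parts: string[]
): Promise<unknown> {
  const res = await fetch(buildCommandUrl(baseUrl, ...parts), {
    headers: { Authorization: `Bearer ${token}` },
  });
  const body = await res.json(); // responses look like { "result": ... }
  return body.result;
}
```

Because every command is a single stateless HTTP request, there is no connection to pool or keep alive, which is exactly why this model suits serverless runtimes.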

Global Replication: Primary region handles writes, read regions serve local reads with eventual consistency. Choose primary region closest to write traffic.

Automatic Scaling: Database scales compute and storage automatically. No memory limits or connection limits to manage.

Durable by Default: Data persists automatically with point-in-time backups, unlike traditional Redis's optional persistence.

Key API surface

// Basic operations
await redis.set(key, value, { ex: 3600 }) // with TTL
await redis.get(key)
await redis.del(key)
await redis.exists(key)

// Data structures
await redis.hset('hash', { field1: 'value1', field2: 'value2' })
await redis.hget('hash', 'field1')
await redis.lpush('list', 'item1', 'item2')
await redis.sadd('set', 'member1', 'member2')

// Atomic operations
await redis.incr('counter')
await redis.incrby('counter', 5)
await redis.expire('key', 300)

// Advanced
await redis.pipeline().set('key1', 'val1').get('key2').exec()
await redis.eval(script, keys, args)

Common patterns

Session storage with TTL:

// Store session with auto-expiry
await redis.setex(`session:${userId}`, 3600, JSON.stringify(sessionData))

// Extend session on activity
await redis.expire(`session:${userId}`, 3600)

Rate limiting:

const key = `rate_limit:${userId}:${window}`
const current = await redis.incr(key)
if (current === 1) {
  await redis.expire(key, windowSeconds)
}
return current <= maxRequests
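Packaged as a reusable function, the fixed-window logic above looks like this. The `CounterClient` shape and function names are illustrative; any client exposing `incr`/`expire` (including `@upstash/redis`) satisfies it:

```typescript
// Fixed-window rate limiter built on INCR + EXPIRE, as described above.
type CounterClient = {
  incr(key: string): Promise<number>;
  expire(key: string, seconds: number): Promise<unknown>;
};

// Derive the current window id so all requests in the same window share a key.
function windowId(nowMs: number, windowSeconds: number): number {
  return Math.floor(nowMs / 1000 / windowSeconds);
}

async function allowRequest(
  client: CounterClient,
  userId: string,
  windowSeconds: number,
  maxRequests: number,
  nowMs = Date.now(),
): Promise<boolean> {
  const key = `rate_limit:${userId}:${windowId(nowMs, windowSeconds)}`;
  const current = await client.incr(key);
  if (current === 1) {
    // First hit in this window: arm the TTL so the counter self-cleans.
    await client.expire(key, windowSeconds);
  }
  return current <= maxRequests;
}
```

Note that fixed windows allow short bursts at window boundaries; a sliding-window design trades a little complexity for smoother limits.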

Caching with fallback:

// The client JSON-serializes values automatically, so objects can be
// stored and read back without manual stringify/parse
const cached = await redis.get(cacheKey)
if (cached !== null) return cached

const fresh = await fetchFromDatabase()
await redis.set(cacheKey, fresh, { ex: 300 }) // cache for 5 minutes
return fresh
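The cache-aside pattern generalizes into a small helper. The `getOrSet` name and the `Cache` shape below are illustrative; the `@upstash/redis` client satisfies this interface:

```typescript
// Cache-aside helper: return the cached value if present, otherwise load,
// cache with a TTL, and return. `Cache` is the minimal client surface needed.
type Cache = {
  get(key: string): Promise<unknown>;
  set(key: string, value: unknown, opts?: { ex: number }): Promise<unknown>;
};

async function getOrSet<T>(
  cache: Cache,
  key: string,
  ttlSeconds: number,
  load: () => Promise<T>,
): Promise<T> {
  const hit = await cache.get(key);
  if (hit !== null && hit !== undefined) return hit as T; // cache hit
  const fresh = await load(); // cache miss: fall back to the source
  await cache.set(key, fresh, { ex: ttlSeconds });
  return fresh;
}
```

Usage: `const user = await getOrSet(redis, cacheKey, 300, fetchFromDatabase)`.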

Pub/Sub messaging:

// Publisher
await redis.publish('notifications', JSON.stringify(message))

// Subscriber (requires a long-lived process; the client delivers
// messages over SSE rather than a blocking TCP connection)
const subscriber = redis.subscribe(['notifications'])
subscriber.on('message', ({ channel, message }) => {
  console.log('Received:', JSON.parse(message))
})

Configuration

Environment variables:

UPSTASH_REDIS_REST_URL=https://xxx.upstash.io
UPSTASH_REDIS_REST_TOKEN=your_token_here
UPSTASH_REDIS_URL=rediss://:password@host:port # for TCP clients

Client options:

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL,
  token: process.env.UPSTASH_REDIS_REST_TOKEN,
  retry: {
    retries: 3,
    backoff: (retryCount) => Math.min(Math.exp(retryCount) * 50, 1000), // ms
  },
  cache: 'no-store', // fetch cache mode; disables framework fetch caching
})
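The retry backoff is a function from attempt number to delay in milliseconds; a capped exponential schedule is the usual choice. A sketch (the exact formula and defaults here are assumptions, not the client's built-ins):

```typescript
// Capped exponential backoff: 50ms, 100ms, 200ms, ... up to capMs,
// so transient failures retry quickly without hammering the server.
function expBackoff(retryCount: number, baseMs = 50, capMs = 2000): number {
  return Math.min(capMs, baseMs * 2 ** retryCount);
}
```

Adding random jitter on top of this curve helps avoid synchronized retry storms across many serverless invocations.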

Best practices

  • Use REST API in serverless: HTTP connections work better than TCP in Lambda/Edge functions
  • Choose primary region carefully: Place it closest to write-heavy traffic, not read traffic
  • Leverage read regions: Add read regions near users for lower latency reads
  • Set appropriate TTLs: Use expiration times to prevent memory bloat and enable auto-cleanup
  • Use pipeline for batching: Group multiple operations to reduce round trips
  • Enable TLS everywhere: All connections require TLS, never use plain Redis protocol
  • Monitor usage patterns: Use Upstash dashboard to optimize region placement and identify hot keys
  • Handle eventual consistency: Read regions may lag behind primary by milliseconds
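The round-trip savings from batching come from the REST API's pipeline endpoint, which accepts many commands in one POST body. A sketch of building that payload (the `pipelinePayload` helper is illustrative; the SDK's `pipeline()` does this for you):

```typescript
// One POST to /pipeline carries an array of command arrays, so N commands
// cost one HTTP round trip instead of N.
type Command = (string | number)[];

function pipelinePayload(commands: Command[]): string {
  return JSON.stringify(commands);
}

const payload = pipelinePayload([
  ["SET", "key1", "val1"],
  ["INCR", "counter"],
  ["GET", "key2"],
]);
// POST this to https://xxx.upstash.io/pipeline with the usual bearer token;
// the response is an array of result objects, one per command, in order.
```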

Database setup and connection gotchas

  • TLS is mandatory: All connections must use TLS. Traditional Redis clients need --tls flag or SSL config
  • Connection pooling unnecessary: REST API doesn't need persistent connections, avoid traditional Redis pooling patterns
  • Read region lag: Read regions have eventual consistency, not suitable for read-your-writes scenarios
  • Command compatibility: Blocking commands (e.g. BLPOP, BRPOP) aren't available over the REST API, and Lua scripting has limitations - check the command reference for your client
  • Token vs password auth: REST API uses tokens, TCP connections use passwords - don't mix them up
  • Serverless cold starts: First request may be slower due to connection establishment
  • Case sensitivity: Database names and regions are case-sensitive during creation
  • Backup timing: Automatic backups happen during low-traffic periods, not on-demand initially