Redis is not a database you reach for as a last resort — it's the one you reach for first. Once you understand what it does, you'll wonder how you shipped anything without it.
Redis (Remote Dictionary Server) is an open-source, in-memory data structure store. It can function as a database, cache, message broker, and streaming engine — often simultaneously. Unlike traditional databases that live on disk, Redis keeps its entire dataset in RAM, making reads and writes blazingly fast: we're talking sub-millisecond latency at scale.
In a Node.js backend context, Redis is most commonly used for:
- **Caching** expensive database queries or API responses
- **Session management** (storing user sessions instead of keeping them in-memory per process)
- **Rate limiting** (tracking request counts per user per time window)
- **Job queues** (deferring background work with libraries like BullMQ)
- **Pub/Sub messaging** (broadcasting events between services)
- **Leaderboards and counters** (atomic increment operations)
This article walks you through setting up Redis with Node.js from scratch, covering the most important patterns you'll actually encounter in production.
## Prerequisites
Before diving in, make sure you have:
- Node.js v18 or later
- npm or pnpm
- Redis running locally or a cloud instance (e.g., Redis Cloud, Upstash)
## Running Redis Locally with Docker
The fastest way to get Redis running locally is with Docker:
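A minimal invocation looks like this. The `redis-dev` container name matches the debugging commands later in this article; the specific image tag is an assumption:

```shell
# Start Redis 7 in the background, exposed on the default port 6379
docker run -d --name redis-dev -p 6379:6379 redis:7-alpine

# Verify it responds
docker exec -it redis-dev redis-cli ping
```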
## Installing a Client

A popular, battle-tested client for Node.js is `ioredis`. It supports TypeScript natively, has built-in retry logic, and handles clustering and Sentinel.
```shell
npm install ioredis
```
**Why not `redis` (the npm package)?** The `redis` npm package (v4+) is the official Node Redis client and is perfectly valid. `ioredis` tends to be preferred in production for its richer API, better pipeline support, and more mature cluster handling. Both are solid choices.
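The snippets throughout this article import a shared client from `./redis`. A minimal sketch of that module, assuming the connection URL lives in a `REDIS_URL` environment variable:

```typescript
// lib/redis.ts
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379", {
  // Back off between reconnect attempts, capped at 2 seconds
  retryStrategy: (times) => Math.min(times * 100, 2000),
});

// Log errors instead of letting them crash the process unhandled
redis.on("error", (err) => {
  console.error("Redis error:", err);
});

export default redis;
```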
## Data Structures and Basic Operations

Redis is a key-value store at heart. Before diving into patterns, let's get familiar with the fundamental operations.
### Strings — the simplest structure
```ts
// SET a value
await redis.set("greeting", "Hello, Redis!");

// GET a value
const value = await redis.get("greeting");
console.log(value); // "Hello, Redis!"

// SET with expiration (TTL in seconds)
await redis.set("session:abc123", JSON.stringify({ userId: 42 }), "EX", 3600);

// Check if a key exists
const exists = await redis.exists("greeting"); // 1 or 0

// Delete a key
await redis.del("greeting");

// Atomic increment
await redis.set("page:views", 0);
await redis.incr("page:views"); // 1
await redis.incrby("page:views", 5); // 6
```
### Hashes — objects without serialization overhead
```ts
// Store a user object
await redis.hset("user:42", {
  name: "Ada Lovelace",
  email: "ada@example.com",
  role: "admin",
});

// Get a single field
const name = await redis.hget("user:42", "name"); // "Ada Lovelace"

// Get all fields
const user = await redis.hgetall("user:42");
// { name: "Ada Lovelace", email: "ada@example.com", role: "admin" }

// Update a single field
await redis.hset("user:42", "role", "superadmin");

// Delete a field
await redis.hdel("user:42", "role");
```
### Lists — queues and stacks
```ts
// Push to the right (enqueue)
await redis.rpush("tasks", "send-email", "generate-report");

// Pop from the left (dequeue)
const task = await redis.lpop("tasks"); // "send-email"

// Peek without removing
const all = await redis.lrange("tasks", 0, -1);

// Blocking pop (waits up to 5 seconds; resolves to [key, value] or null)
const item = await redis.blpop("tasks", 5);
```
### Sets — unique collections
```ts
// Add members
await redis.sadd("online-users", "user:1", "user:2", "user:3");

// Check membership
const isMember = await redis.sismember("online-users", "user:2"); // 1

// Get all members
const users = await redis.smembers("online-users");

// Remove a member
await redis.srem("online-users", "user:1");

// Set size
const count = await redis.scard("online-users");
```
### Sorted Sets — leaderboards and time-series
```ts
// Add members with scores
await redis.zadd("leaderboard", 9500, "player:alice");
await redis.zadd("leaderboard", 8200, "player:bob");
await redis.zadd("leaderboard", 11000, "player:carol");

// Get top 3 (highest scores first)
const top3 = await redis.zrevrange("leaderboard", 0, 2, "WITHSCORES");

// Get a player's rank (0-indexed)
const rank = await redis.zrevrank("leaderboard", "player:alice"); // 1

// Get a player's score
const score = await redis.zscore("leaderboard", "player:carol"); // "11000"
```
## Pattern 1: Response Caching
This is the most common Redis use case in web backends. The idea is simple: before hitting your database or a slow external API, check if the result is already cached.
```ts
// lib/cache.ts
import redis from "./redis";

type CacheOptions = {
  ttl?: number; // seconds, default 60
};

export async function withCache<T>(
  key: string,
  fetcher: () => Promise<T>,
  options: CacheOptions = {}
): Promise<T> {
  const { ttl = 60 } = options;

  // 1. Check the cache (compare against null so cached falsy values still count as hits)
  const cached = await redis.get(key);
  if (cached !== null) {
    return JSON.parse(cached) as T;
  }

  // 2. Cache miss — fetch from source
  const data = await fetcher();

  // 3. Store in cache with TTL
  await redis.set(key, JSON.stringify(data), "EX", ttl);

  return data;
}
```
Using it in a route handler (Express):
```ts
// routes/products.ts
import { withCache } from "../lib/cache";
import { db } from "../lib/db";

app.get("/products", async (req, res) => {
  const products = await withCache(
    "products:all",
    () => db.query("SELECT * FROM products"),
    { ttl: 300 } // cache for 5 minutes
  );
  res.json(products);
});
```
### Cache Invalidation
When the underlying data changes, you need to bust the cache:
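A minimal sketch: delete the affected keys inside the write path. The `db.update` helper and key names are assumptions matching the earlier example:

```typescript
import redis from "../lib/redis";
import { db } from "../lib/db";

// After any write that changes the products table,
// drop the cached query results so the next read refetches
async function updateProduct(id: number, data: Record<string, unknown>) {
  await db.update("products", id, data);
  await redis.del("products:all", `product:${id}`);
}
```

Deleting is usually safer than updating the cached value in place: the next read repopulates the cache from the source of truth.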
## Pattern 3: Rate Limiting

Protect your API from abuse by limiting how many requests a client can make in a given time window. Redis's atomic `INCR` and `EXPIRE` make this safe under concurrency.
```ts
// lib/rateLimiter.ts
import redis from "./redis";

type RateLimitOptions = {
  windowSeconds: number;
  maxRequests: number;
};

type RateLimitResult = {
  allowed: boolean;
  remaining: number;
  resetAt: number; // Unix timestamp
};

export async function checkRateLimit(
  identifier: string, // e.g., IP address or user ID
  options: RateLimitOptions
): Promise<RateLimitResult> {
  const { windowSeconds, maxRequests } = options;
  const key = `ratelimit:${identifier}`;
  const now = Math.floor(Date.now() / 1000);

  // Pipeline: increment and read the TTL in one round-trip
  const pipeline = redis.pipeline();
  pipeline.incr(key);
  pipeline.ttl(key);
  const results = await pipeline.exec();

  const count = results![0][1] as number;
  let ttl = results![1][1] as number;

  // Set TTL on first request
  if (count === 1) {
    await redis.expire(key, windowSeconds);
    ttl = windowSeconds;
  }

  const resetAt = now + ttl;
  const remaining = Math.max(0, maxRequests - count);

  return {
    allowed: count <= maxRequests,
    remaining,
    resetAt,
  };
}
```
Express middleware wrapping this:
```ts
// middleware/rateLimiter.ts
import type { Request, Response, NextFunction } from "express";
import { checkRateLimit } from "../lib/rateLimiter";

export function rateLimiter(maxRequests: number, windowSeconds: number) {
  return async (req: Request, res: Response, next: NextFunction) => {
    const identifier = req.ip ?? "unknown";
    const result = await checkRateLimit(identifier, { maxRequests, windowSeconds });

    res.set({
      "X-RateLimit-Limit": String(maxRequests),
      "X-RateLimit-Remaining": String(result.remaining),
      "X-RateLimit-Reset": String(result.resetAt),
    });

    if (!result.allowed) {
      return res.status(429).json({
        error: "Too Many Requests",
        retryAfter: result.resetAt,
      });
    }

    next();
  };
}
```

```ts
// Apply to a route
app.post("/api/auth/login", rateLimiter(10, 60), loginHandler);
```
## Pattern 4: Pub/Sub for Real-Time Events
Redis Pub/Sub lets you broadcast messages between services or processes in real-time. This is useful for invalidating caches across multiple server instances, pushing notifications, or building simple event buses.
> **Important:** ioredis requires a separate connection for Pub/Sub. A client in subscriber mode can't send regular commands.
```ts
// lib/pubsub.ts
import Redis from "ioredis";

// Publisher — uses the regular connection
export const publisher = new Redis(process.env.REDIS_URL!);

// Subscriber — dedicated connection
export const subscriber = new Redis(process.env.REDIS_URL!);
```
Publishing an event:
```ts
// When a product is updated
async function updateProduct(id: number, data: Partial<Product>) {
  await db.update("products", id, data);

  // Notify all instances to invalidate their cache
  await publisher.publish(
    "cache:invalidate",
    JSON.stringify({ pattern: `product:${id}` })
  );
}
```
Subscribing in another process or server instance:
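A sketch of the receiving side, assuming the `cache:invalidate` channel and payload shape used by the publisher above:

```typescript
import redis from "./redis";
import { subscriber } from "./pubsub";

await subscriber.subscribe("cache:invalidate");

subscriber.on("message", async (channel, message) => {
  if (channel !== "cache:invalidate") return;
  const { pattern } = JSON.parse(message) as { pattern: string };

  // Drop the matching key from this instance's cache
  await redis.del(pattern);
});
```

Note that the deletion runs on the regular `redis` connection, not the subscribed one, which can only issue subscription commands.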
## Pattern 5: Distributed Locks

When multiple server instances might try to do the same thing simultaneously (e.g., running a cron job, processing a payment), you need a distributed lock to ensure only one wins.
```ts
// lib/lock.ts
import redis from "./redis";
import crypto from "crypto";

type Lock = {
  release: () => Promise<void>;
};

export async function acquireLock(
  resource: string,
  ttlSeconds: number = 10
): Promise<Lock | null> {
  const lockKey = `lock:${resource}`;
  const lockValue = crypto.randomUUID(); // unique per attempt

  // SET NX (only set if not exists) — atomic
  const acquired = await redis.set(lockKey, lockValue, "EX", ttlSeconds, "NX");
  if (!acquired) return null; // lock is held by someone else

  return {
    release: async () => {
      // Only delete if we still own the lock (Lua script for atomicity)
      const script = `
        if redis.call("GET", KEYS[1]) == ARGV[1] then
          return redis.call("DEL", KEYS[1])
        else
          return 0
        end
      `;
      await redis.eval(script, 1, lockKey, lockValue);
    },
  };
}
```
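Typical usage wraps the critical section in try/finally so the lock is always released. The job name and `generateReport` body are illustrative:

```typescript
import { acquireLock } from "./lib/lock";

async function runDailyReport() {
  const lock = await acquireLock("daily-report", 60);
  if (!lock) {
    // Another instance is already running the job; skip
    return;
  }
  try {
    await generateReport(); // hypothetical job body
  } finally {
    await lock.release();
  }
}
```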
For production codebases, loose JSON.parse calls are fragile. Use a small abstraction with a schema validator (like Zod) to make Redis reads type-safe:
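A sketch of that idea, assuming Zod as the validator; the schema and key names are illustrative:

```typescript
import { z } from "zod";
import redis from "./lib/redis";

const UserSchema = z.object({
  name: z.string(),
  email: z.string().email(),
  role: z.string(),
});

// Parse instead of blindly casting: bad or stale cache entries
// fail loudly here rather than deep inside business logic
async function getCached<T>(key: string, schema: z.ZodType<T>): Promise<T | null> {
  const raw = await redis.get(key);
  if (raw === null) return null;
  return schema.parse(JSON.parse(raw));
}

// Usage: the return type is inferred from the schema
const user = await getCached("user:42", UserSchema);
```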
## Debugging with redis-cli

```shell
# Connect to Redis CLI
docker exec -it redis-dev redis-cli

# Server info (memory, connections, keyspace)
INFO

# Real-time command monitor (use carefully in production!)
MONITOR

# Keyspace stats
INFO keyspace

# Memory usage of a specific key
MEMORY USAGE session:abc123

# Slow log (commands that took > 10ms)
SLOWLOG GET 10
```
## Key eviction policies
When Redis runs out of memory, it needs to decide what to evict. Set the policy in your redis.conf or via command:
```shell
# Evict least recently used keys when maxmemory is reached
CONFIG SET maxmemory-policy allkeys-lru

# Set a memory limit (e.g., 256 MB)
CONFIG SET maxmemory 256mb
```
Common policies:

| Policy | Description |
| --- | --- |
| `noeviction` | Return errors when memory is full (default) |
| `allkeys-lru` | Evict least recently used keys (recommended for caching) |
| `volatile-lru` | Evict LRU keys with an expiry set |
| `allkeys-lfu` | Evict least frequently used keys |
| `volatile-ttl` | Evict keys with shortest TTL first |
For a pure cache use case, `allkeys-lru` is almost always the right choice.
## Common Mistakes to Avoid
### 1. Using `KEYS *` in production

`KEYS` is O(N) and blocks the single-threaded Redis server while it scans the entire keyspace. Use `SCAN` with a cursor for any pattern-matching operation in a live environment.
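With ioredis, `scanStream` wraps the SCAN cursor in a Node stream; the key pattern here is illustrative:

```typescript
import redis from "./lib/redis";

// Iterate over matching keys in batches without blocking the server
const stream = redis.scanStream({ match: "ratelimit:*", count: 100 });

stream.on("data", (keys: string[]) => {
  // SCAN may return empty batches; that's normal
  if (keys.length > 0) {
    console.log("found batch:", keys);
  }
});

stream.on("end", () => {
  console.log("scan complete");
});
```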
### 2. Storing huge values in Redis

Redis lives in RAM. Storing multi-megabyte blobs defeats the purpose. If a value exceeds ~100 KB, reconsider whether it belongs in Redis, or in object storage with just a metadata key in Redis.
### 3. Not setting TTLs

Every cache key should have a TTL unless you have a very explicit reason not to. Keys without TTLs accumulate indefinitely and will eventually fill your memory.
### 4. Sharing a connection for Pub/Sub and regular commands

As noted earlier, a subscribed connection can't run regular Redis commands. Always use two separate connections.
### 5. Forgetting that Redis replication is asynchronous

In replicated setups, a write to the primary may not have propagated to replicas yet when you read. For reads that must reflect the absolute latest write, always read from the primary.
### 6. Using `JSON.stringify` on circular references

If your data object has circular references, serialization will throw. Validate your data before caching it.
## Production Checklist
Before shipping Redis to production, verify:
- [ ] Connection uses TLS (`rediss://` scheme or `tls: {}` option)
- [ ] `maxmemory` and `maxmemory-policy` are configured
- [ ] All cached keys have TTLs set
- [ ] Retry strategy is configured with exponential backoff
- [ ] Pub/Sub uses a dedicated subscriber connection
- [ ] No `KEYS` commands — only `SCAN`
- [ ] Passwords/URLs are stored in environment variables, never hardcoded
- [ ] Health check endpoint verifies Redis connectivity
- [ ] Error events on the Redis client are logged (not silently swallowed)
- [ ] Graceful shutdown closes the Redis connection cleanly
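The last two items can be sketched as follows, assuming an existing Express `app` and the shared ioredis client:

```typescript
import redis from "./lib/redis";
import { app } from "./app"; // hypothetical Express app module

// Health check: PING round-trips to Redis and fails fast if it's down
app.get("/healthz", async (_req, res) => {
  try {
    await redis.ping();
    res.status(200).json({ redis: "ok" });
  } catch {
    res.status(503).json({ redis: "unreachable" });
  }
});

// Graceful shutdown: quit() waits for pending replies before closing
process.on("SIGTERM", async () => {
  await redis.quit();
  process.exit(0);
});
```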
Redis transforms what's possible in a Node.js backend. What starts as "I'll add caching to this one slow query" tends to grow into sessions, rate limiting, queues, and pub/sub — because once Redis is in your stack and you understand its data structures, it starts solving problems you didn't even know you had.
The patterns in this article — caching with TTLs, session management, rate limiting, distributed locks, pub/sub — cover 90% of what most production applications need from Redis. Master them, understand the trade-offs (consistency vs. speed, memory constraints, eviction policies), and you'll have a backend that scales with considerably less pain.
Redis has been around since 2009 and is still one of the most depended-upon pieces of infrastructure in the industry. There's a reason for that.