
Production Logging with Pino: Component-Based Structured Logging
Learn how to implement production-grade logging with Pino using component-based architecture. Achieve 5x faster performance while maintaining observability.
Your Node.js application is running in production, and something goes wrong. You open the logs and see a wall of unstructured text messages. No context. No correlation. No way to trace the flow of a single request through your system.
Sound like a nightmare? It doesn't have to be.
Structured logging transforms this chaos into queryable, analyzable data. And when it comes to performance and developer experience, Pino stands out as the clear winner in the Node.js ecosystem.
In this guide, we'll build a production-ready logging system using Pino's component-based architecture. We'll cover environment-aware configuration, structured logging patterns for job queues, and integration with cloud logging platforms like Azure Monitor and AWS CloudWatch.
Why Pino Over Winston and Bunyan
Let's start with the elephant in the room: why choose Pino when Winston has been the de facto standard for years?
Performance Benchmarks
Pino is 5-10x faster than Winston and 2-3x faster than Bunyan. Here's why it matters:
Winston (traditional approach):
- Synchronous serialization blocks the event loop
- Heavy object cloning overhead
- Complex plugin architecture adds latency
Pino (optimized approach):
- Asynchronous I/O, with transports offloaded to worker threads
- Minimal overhead (uses fast-json-stringify under the hood)
- Child logger bindings are reused, not cloned
The performance difference becomes critical in high-throughput applications. In a typical Node.js API handling 1000 req/sec, logging overhead can consume 15-30% of CPU time with Winston. With Pino, it drops to 2-5%.
// Performance comparison (operations per second)
// Pino: ~70,000 ops/sec
// Winston: ~8,000 ops/sec
// Bunyan: ~30,000 ops/sec
Developer Experience
Beyond raw performance, Pino offers:
- Small production dependency footprint (pino-pretty is dev-only)
- TypeScript-first API with excellent type inference
- Automatic JSON serialization for objects, errors, and custom types
- Built-in redaction for sensitive data (passwords, tokens, PII)
According to the official Pino documentation, it's designed to be "very low overhead" and "extremely fast" while maintaining simplicity.
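Redaction in particular deserves a closer look, because it's a one-line option that's easy to miss. Here's a minimal sketch of the built-in redact option (the paths are illustrative, not a recommended set):
// redaction-example.ts
import pino from 'pino';

// Redact sensitive fields by path; nested paths and wildcards are supported
const logger = pino({
  redact: {
    paths: ['password', 'user.ssn', 'req.headers.authorization'],
    censor: '[REDACTED]',
  },
});

logger.info({ user: { name: 'Ada', ssn: '123-45-6789' } }, 'User loaded');
// => {"level":30,...,"user":{"name":"Ada","ssn":"[REDACTED]"},"msg":"User loaded"}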
Environment-Aware Configuration
The first rule of production logging: never use pretty printing in production. Pretty printers add significant overhead and produce output that's hard for log aggregators to parse.
Here's how to configure Pino for both development and production:
// logger.ts
import pino from 'pino';
const isDevelopment = process.env.NODE_ENV !== 'production';
const logLevel = process.env.LOG_LEVEL || (isDevelopment ? 'debug' : 'info');
/**
 * Detect if we're in a safe environment for pino-pretty
 *
 * pino-pretty uses worker threads (via thread-stream) which don't work in:
 * - Production builds (bundlers can't resolve dynamic worker imports)
 * - Serverless environments (limited worker thread support)
 * - Browser contexts (worker_threads is Node.js server-only)
 *
 * @see https://github.com/pinojs/pino/issues/1794 - Next.js worker_threads resolution
 * @see https://github.com/vercel/next.js/discussions/46987 - pino-pretty with Next.js app directory
 */
const canUsePretty = isDevelopment && typeof process !== 'undefined';

const baseConfig: pino.LoggerOptions = {
  level: logLevel,
  formatters: {
    level: (label) => {
      return { level: label };
    },
  },
  timestamp: pino.stdTimeFunctions.isoTime,
  base: {
    service: 'my-api',
    env: process.env.NODE_ENV || 'development',
  },
};

export const logger = canUsePretty
  ? pino({
      ...baseConfig,
      transport: {
        target: 'pino-pretty',
        options: {
          colorize: true,
          ignore: 'pid,hostname',
          translateTime: 'HH:MM:ss Z',
          messageFormat: '{component} {worker} - {msg}',
          singleLine: false,
        },
      },
    })
  : pino(baseConfig);
Why pino-pretty Breaks in Production Builds
When you use transport: { target: 'pino-pretty' }, Pino spawns a worker thread via the thread-stream package. This architecture provides excellent performance in standard Node.js environments but causes problems in bundled and serverless contexts.
The issues stem from how modern bundlers (webpack, esbuild, SWC) handle dynamic imports. Pino's transport system loads modules dynamically at runtime, and bundlers can't statically analyze these imports. As documented in pino issue #1794, this leads to "Cannot resolve worker_threads" errors because the bundler doesn't include the necessary modules.
In Next.js specifically, this is compounded by the fact that code may run in both browser and server contexts. The worker_threads module is a Node.js server-only feature - it simply doesn't exist in browser environments. When Next.js tries to bundle your logger for client-side use, it fails to resolve the worker thread dependencies.
For serverless platforms like Vercel, AWS Lambda, and Cloudflare Workers, there are additional constraints around worker thread spawning and module resolution that make pino-pretty unreliable.
The solution is simple: only use pino-pretty in development, and output raw JSON in production. This is actually better for production anyway - JSON logs are what your log aggregation tools expect, and skipping the pretty-printing worker thread improves performance.
What's Happening Here?
Base configuration (always applied):
- level: Controls which log levels are emitted (trace, debug, info, warn, error, fatal)
- formatters.level: Customizes how log levels appear in output
- timestamp: Uses ISO 8601 format for easy parsing
- base: Adds service name and environment to every log entry
Development mode:
- Uses pino-pretty transport for human-readable colored output
- Custom message format shows component and worker info
- Ignores pid and hostname to reduce noise
Production mode:
- Outputs raw JSON (one object per line)
- No transport overhead
- Optimized for log aggregation tools
// Production log output
{
  "level": "info",
  "time": "2026-01-03T14:23:45.123Z",
  "service": "my-api",
  "env": "production",
  "component": "http",
  "method": "POST",
  "url": "/api/users",
  "statusCode": 201,
  "duration": 45,
  "msg": "Request completed"
}
Component-Based Child Loggers
The power of structured logging comes from context. Instead of parsing log messages to figure out which part of your system logged something, you can query by component.
Child loggers add persistent context without repetitive code:
// logger.ts (continued)
/**
 * Create a child logger with additional context
 *
 * @example
 * const workerLogger = createLogger({ component: 'worker', workerId: '123' });
 * workerLogger.info('Processing job');
 */
export function createLogger(bindings: Record<string, unknown>) {
  return logger.child(bindings);
}

/**
 * Pre-configured component loggers
 */
const components = ['http', 'queue', 'database', 'service', 'cron'] as const;
type Component = (typeof components)[number];

// Create child loggers for each component
export const componentLoggers = Object.fromEntries(
  components.map((name) => [name, logger.child({ component: name })]),
) as Record<Component, pino.Logger>;
// Named exports for convenience
export const { http: httpLogger, queue: queueLogger, database: dbLogger } = componentLoggers;
// ... serviceLogger, cronLogger follow same pattern
Why Use Child Loggers?
Without child loggers (anti-pattern):
// ❌ BAD: Repeating context in every log call
logger.info({ component: 'queue', jobId: '123' }, 'Job started');
logger.info({ component: 'queue', jobId: '123' }, 'Job processing');
logger.info({ component: 'queue', jobId: '123' }, 'Job completed');
With child loggers (best practice):
// ✅ GOOD: Context is bound once
const jobLogger = queueLogger.child({ jobId: '123' });
jobLogger.info('Job started');
jobLogger.info('Job processing');
jobLogger.info('Job completed');
Each log automatically includes component: 'queue' and jobId: '123'. This makes it trivial to filter logs in your observability platform:
// Azure Log Analytics query
logs
| where component == "queue"
| where jobId == "123"
| order by timestamp asc
According to the Pino child loggers documentation, child loggers are extremely efficient because bindings are stored as references, not copied.
Structured Logging for Job Queues
Job queues (BullMQ, Bull, Bee-Queue) are a common source of production issues. Tracking job lifecycle with structured logs makes debugging significantly easier.
Here's a complete pattern for job queue logging:
// logger-utils.ts
import type { Logger } from 'pino';
// Simplified Job interface (works with BullMQ, Bull, etc.)
interface Job {
  id?: string;
  name: string;
  attemptsMade: number;
  timestamp: number;
}

// Extract common job fields to avoid repetition
function jobContext(job: Job) {
  return { jobId: job.id, jobName: job.name };
}

export function logJobStart(logger: Logger, job: Job, extraData?: Record<string, unknown>) {
  logger.info(
    { ...jobContext(job), attemptsMade: job.attemptsMade, timestamp: job.timestamp, ...extraData },
    'Job started',
  );
}

export function logJobComplete(logger: Logger, job: Job, duration: number, extraData?: Record<string, unknown>) {
  logger.info({ ...jobContext(job), duration, ...extraData }, 'Job completed successfully');
}

export function logJobError(logger: Logger, job: Job, error: Error, duration: number, extraData?: Record<string, unknown>) {
  logger.error(
    { ...jobContext(job), attemptsMade: job.attemptsMade, duration, err: error, stack: error.stack, ...extraData },
    'Job failed',
  );
}

export function logJobProgress(logger: Logger, job: Job, progress: number, message: string, extraData?: Record<string, unknown>) {
  logger.debug({ ...jobContext(job), progress, ...extraData }, message);
}
Using Job Lifecycle Logging
// worker.ts
import type { Job } from 'bullmq'; // or the equivalent Job type from your queue library
import { queueLogger } from './logger';
import { logJobStart, logJobComplete, logJobError, logJobProgress } from './logger-utils';

async function processJob(job: Job) {
  const startTime = Date.now();

  logJobStart(queueLogger, job, {
    payload: job.data, // Include relevant job data
  });

  try {
    // Step 1: Fetch data
    logJobProgress(queueLogger, job, 25, 'Fetching data from API');
    const data = await fetchData(job.data.url);

    // Step 2: Process data
    logJobProgress(queueLogger, job, 50, 'Processing data');
    const processed = await processData(data);

    // Step 3: Store results
    logJobProgress(queueLogger, job, 75, 'Storing results');
    await storeResults(processed);

    const duration = Date.now() - startTime;
    logJobComplete(queueLogger, job, duration, {
      recordsProcessed: processed.length,
    });

    return processed;
  } catch (error) {
    const duration = Date.now() - startTime;
    logJobError(queueLogger, job, error as Error, duration, {
      failureReason: (error as Error).message,
    });
    throw error;
  }
}
Benefits of This Pattern
Consistent structure across all jobs:
- Every job logs the same fields (jobId, jobName, duration)
- Easy to create dashboards and alerts
Queryable metadata:
// Find all failed jobs in the last hour
logs
| where component == "queue"
| where level == "error"
| where timestamp > ago(1h)
| summarize count() by jobName
Progress tracking:
- Debug-level progress logs don't clutter production
- Enable debug logging temporarily when investigating issues
Performance insights:
// Average job duration by job name
logs
| where component == "queue"
| where msg == "Job completed successfully"
| summarize avg(duration) by jobName
Log Levels: When to Use Each
Choosing the right log level is crucial for production observability. Here's a practical guide:
trace (10)
When to use: Extremely verbose debugging, disabled by default
logger.trace({ sql: query, params }, 'Executing database query');
Don't use in production - generates massive log volume
debug (20)
When to use: Detailed debugging information, useful during development and troubleshooting
logger.debug({ userId, permissions }, 'Checking user permissions');
logger.debug({ cacheKey, hit: true }, 'Cache hit');
Production usage: Enable temporarily for specific components when debugging
info (30)
When to use: Normal application flow, significant events
logger.info({ jobId, duration }, 'Job completed successfully');
logger.info({ userId, action: 'login' }, 'User authenticated');
logger.info({ service: 'payment-api', status: 200 }, 'External API call succeeded');
Production default - captures important business events
warn (40)
When to use: Recoverable errors, deprecated API usage, resource pressure
logger.warn({ memoryUsage: 0.85 }, 'High memory usage detected');
logger.warn({ retryCount: 3 }, 'API request failed, will retry');
logger.warn('Deprecated function called, migrate to new API');
Set up alerts for repeated warnings
error (50)
When to use: Application errors that need attention
logger.error({ err: error, userId, action }, 'Failed to process user request');
logger.error({ jobId, err: error }, 'Job processing failed');
Always include:
- Error object (Pino auto-serializes with stack trace)
- Context needed to reproduce the issue
- User/job/request identifier for tracing
fatal (60)
When to use: Unrecoverable errors, application shutdown
logger.fatal({ err: error }, 'Database connection pool exhausted, shutting down');
process.exit(1);
Trigger immediate alerts and on-call notifications
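One caveat: Pino's default stdout destination writes synchronously, so logging a fatal and then calling process.exit(1) as shown above is safe. If you've switched to an asynchronous destination, flush it before exiting. A minimal sketch, assuming an async destination (the shutdown helper is illustrative):
// shutdown.ts - only relevant when using an asynchronous destination
import pino from 'pino';

const destination = pino.destination({ sync: false }); // async writes via sonic-boom
const logger = pino({ level: 'info' }, destination);

function shutdown(error: Error) {
  logger.fatal({ err: error }, 'Database connection pool exhausted, shutting down');
  destination.flushSync(); // drain buffered log lines before the process dies
  process.exit(1);
}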
Practical Configuration
# Development: see everything
LOG_LEVEL=debug npm run dev

# Production: info and above
LOG_LEVEL=info npm start

# Troubleshooting: enable debug for a specific component (see the sketch below)
LOG_LEVEL=info DEBUG_COMPONENTS=queue npm start
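Note that DEBUG_COMPONENTS is not a built-in Pino feature; it's a convention assumed here. A sketch of how the component loggers defined earlier could honor it (reusing the components and componentLoggers names from logger.ts above):
// logger.ts (continued) - hypothetical DEBUG_COMPONENTS handling
const debugComponents = (process.env.DEBUG_COMPONENTS ?? '')
  .split(',')
  .map((name) => name.trim())
  .filter((name): name is Component => (components as readonly string[]).includes(name));

for (const name of debugComponents) {
  // Child loggers can carry their own level, independent of the parent logger
  componentLoggers[name].level = 'debug';
}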
Integration with Cloud Logging
Pino's JSON output integrates seamlessly with cloud logging platforms. Here's how to configure for popular services:
Azure Monitor (Log Analytics)
Azure Monitor ingests JSON logs automatically. No special configuration needed:
// Azure Container Apps automatically captures stdout/stderr
// Just log to console with Pino
logger.info({ userId, action }, 'User action completed');
Querying in Log Analytics:
ContainerAppConsoleLogs_CL
| where Log_s has "\"component\":\"queue\""
| extend logData = parse_json(Log_s)
| project
timestamp = TimeGenerated,
level = logData.level,
component = logData.component,
jobId = logData.jobId,
duration = logData.duration,
message = logData.msg
| where component == "queue"
| summarize avg(duration) by bin(timestamp, 5m)
AWS CloudWatch
For Lambda functions, use the pino-lambda destination so logs are formatted for CloudWatch:
// Lambda function
import pino from 'pino';
import { pinoLambdaDestination } from 'pino-lambda';
export const logger = pino(
  {
    level: 'info',
    formatters: {
      level: (label) => ({ level: label }),
    },
  },
  pinoLambdaDestination(),
);
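pino-lambda also provides a request tracker that stamps each log line with the invocation's request ID. A minimal sketch of wiring it into a handler (the handler shape and loose event types are illustrative):
// lambda-handler.ts
import pino from 'pino';
import { lambdaRequestTracker, pinoLambdaDestination } from 'pino-lambda';

const destination = pinoLambdaDestination();
const withRequest = lambdaRequestTracker();
const logger = pino({ level: 'info' }, destination);

// Event/context types depend on your trigger; kept loose for the sketch
export const handler = async (event: any, context: any) => {
  withRequest(event, context); // binds the AWS request ID to subsequent log lines
  logger.info({ action: 'process-event' }, 'Handling Lambda invocation');
  return { statusCode: 200 };
};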
For EC2/ECS, standard JSON output works with CloudWatch Logs:
// No special configuration needed
const logger = pino({ level: 'info' });
logger.info({ requestId, duration }, 'Request completed');
CloudWatch Insights query:
fields @timestamp, component, jobId, duration
| filter component = "queue"
| stats avg(duration) by bin(5m)
Google Cloud Logging
Cloud Run and GKE automatically parse JSON logs from stdout, but Cloud Logging expects specific field names for severity and message:
// Map Pino levels to Cloud Logging severity
const logger = pino({
formatters: {
level(label, number) {
// Map to Cloud Logging severity
const severity =
{
10: 'DEBUG',
20: 'DEBUG',
30: 'INFO',
40: 'WARNING',
50: 'ERROR',
60: 'CRITICAL',
}[number] || 'INFO';
return { severity };
},
},
messageKey: 'message', // Cloud Logging expects 'message' field
});
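With that configuration, a call like logger.info({ userId: 'user-456' }, 'User loaded') emits something along these lines (exact fields depend on your base and timestamp settings; shown here only to illustrate the severity and message mapping):
{"severity":"INFO","time":1767450225123,"userId":"user-456","message":"User loaded"}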
Correlation IDs for Distributed Tracing
Add correlation IDs to trace requests across services:
// Middleware to add request ID to all logs
app.use(async (c, next) => {
  const requestId = c.req.header('x-request-id') || crypto.randomUUID();
  c.set('requestId', requestId);

  // Create request-scoped logger
  const requestLogger = httpLogger.child({ requestId });
  c.set('logger', requestLogger);

  await next();
});

// Usage in handlers
app.post('/api/jobs', async (c) => {
  const logger = c.get('logger');
  logger.info({ action: 'create-job' }, 'Creating new job');

  // External service call includes request ID
  const response = await fetch(externalApi, {
    headers: { 'x-request-id': c.get('requestId') },
  });

  logger.info({ status: response.status }, 'External API call completed');
  return c.json({ status: response.status });
});
Advanced Patterns: Utility Functions
Let's expand our logging utilities with additional lifecycle patterns:
// logger-utils.ts (continued)
// Generic operation logger factory - reduces boilerplate for similar logging patterns
function createOperationLogger<T extends Record<string, unknown>>(
  defaultMessage: string,
  level: 'debug' | 'info' | 'error' = 'info',
) {
  return (logger: Logger, context: T, extraData?: Record<string, unknown>, message = defaultMessage) => {
    logger[level]({ ...context, ...extraData }, message);
  };
}

// Database operations
export const logDbOperation = createOperationLogger<{
  operation: string;
  table: string;
  duration: number;
}>('Database operation', 'debug');

// External API calls - start and response share service/endpoint context
export const logExternalApiCall = createOperationLogger<{
  service: string;
  endpoint: string;
}>('External API call started');

export function logExternalApiResponse(
  logger: Logger,
  service: string,
  endpoint: string,
  duration: number,
  status: number,
  extraData?: Record<string, unknown>,
) {
  const level = status >= 400 ? 'error' : 'info';
  logger[level]({ service, endpoint, duration, status, ...extraData }, 'External API call completed');
}

// Cron jobs - start and summary share cronName context
export const logCronExecution = (logger: Logger, cronName: string, extraData?: Record<string, unknown>) =>
  logger.info({ cronName, timestamp: new Date().toISOString(), ...extraData }, 'Cron job execution started');

export const logCronSummary = (logger: Logger, cronName: string, summary: Record<string, unknown>) =>
  logger.info({ cronName, ...summary }, 'Cron job completed');
Complete Example: API Endpoint with Full Logging
// api/users.ts
import { httpLogger, dbLogger, serviceLogger } from './logger';
import { logDbOperation, logExternalApiCall, logExternalApiResponse } from './logger-utils';
app.post('/api/users', async (c) => {
  const logger = c.get('logger'); // Request-scoped logger with requestId
  const startTime = Date.now();

  try {
    const userData = await c.req.json();
    logger.info({ action: 'create-user' }, 'Creating new user');

    // Database operation
    const dbStart = Date.now();
    const user = await db.users.create(userData);
    logDbOperation(dbLogger, { operation: 'INSERT', table: 'users', duration: Date.now() - dbStart }, { userId: user.id });

    // External API call (e.g., send welcome email)
    const apiStart = Date.now();
    logExternalApiCall(serviceLogger, { service: 'email-service', endpoint: '/send' }, { userId: user.id, template: 'welcome' });

    const response = await fetch('https://email-api.example.com/send', {
      method: 'POST',
      body: JSON.stringify({ to: user.email, template: 'welcome' }),
    });

    logExternalApiResponse(serviceLogger, 'email-service', '/send', Date.now() - apiStart, response.status, {
      userId: user.id,
    });

    logger.info(
      {
        userId: user.id,
        duration: Date.now() - startTime,
      },
      'User created successfully',
    );

    return c.json({ user }, 201);
  } catch (error) {
    logger.error(
      {
        err: error,
        duration: Date.now() - startTime,
      },
      'Failed to create user',
    );

    return c.json({ error: 'Internal server error' }, 500);
  }
});
Output Example
In production, this generates structured logs that tell the complete story:
{"level":"info","time":"2026-01-03T14:23:45.100Z","component":"http","requestId":"req-123","action":"create-user","msg":"Creating new user"}
{"level":"debug","time":"2026-01-03T14:23:45.120Z","component":"database","operation":"INSERT","table":"users","duration":20,"userId":"user-456","msg":"Database operation"}
{"level":"info","time":"2026-01-03T14:23:45.125Z","component":"service","service":"email-service","endpoint":"/send","userId":"user-456","template":"welcome","msg":"External API call started"}
{"level":"info","time":"2026-01-03T14:23:45.350Z","component":"service","service":"email-service","endpoint":"/send","duration":225,"status":200,"userId":"user-456","msg":"External API call completed"}
{"level":"info","time":"2026-01-03T14:23:45.355Z","component":"http","requestId":"req-123","userId":"user-456","duration":255,"msg":"User created successfully"}
You can trace the entire request flow by searching for requestId: "req-123".
Key Takeaways
- Pino is 5-10x faster than Winston with minimal overhead, making it ideal for high-throughput production systems
- Environment-aware configuration separates pretty printing (dev) from JSON output (production) without code changes
- Component-based child loggers eliminate repetitive context and make logs queryable by service area
- Structured logging for job queues provides consistent lifecycle tracking with jobId, duration, and progress
- Choose log levels strategically: debug for troubleshooting, info for business events, warn for recoverable issues, error for failures
- Cloud logging integration is seamless with JSON output - works with Azure Monitor, AWS CloudWatch, and Google Cloud Logging
- Utility functions ensure consistent logging patterns across your codebase and team
Ready to level up your observability? Start by replacing your current logger with Pino using the component-based patterns shown here. Your future debugging sessions will thank you.
For more information, check out the official Pino documentation or explore Pino's child loggers guide for advanced patterns.