
Drizzle ORM with JSONB: Flexible Schema for AI Analytics
Master PostgreSQL JSONB columns with Drizzle ORM for flexible, type-safe data modeling. Learn when to use JSONB over normalized tables and how to query, index, and validate complex data structures.
Your AI analytics platform receives complex, nested data structures that don't fit neatly into traditional SQL columns. Analysis results have varying schemas. User configurations change frequently. You need flexibility without sacrificing type safety.
Enter PostgreSQL JSONB with Drizzle ORM.
While normalized tables remain the gold standard for relational data, JSONB columns provide a powerful tool for semi-structured data. Combined with Drizzle's type-safe API and TypeScript's static typing, you get the best of both worlds: schema flexibility where you need it and rock-solid type safety everywhere else.
In this guide, we'll explore production-ready patterns for using JSONB columns with Drizzle ORM, including when to use them, how to query efficiently, and how to maintain type safety. We'll examine real-world examples from AI analytics platforms handling complex data structures at scale.
When to Use JSONB vs Normalized Tables
The choice between JSONB and normalized tables isn't about "modern" vs "traditional" - it's about picking the right tool for your data model.
Use Normalized Tables When:
- Data has stable, well-defined relationships
- You need referential integrity constraints
- Queries frequently join across entities
- Individual fields require different access patterns
- You need column-level indexes on multiple fields
Use JSONB Columns When:
- Schema varies by record (polymorphic data)
- Data arrives pre-structured from external APIs
- Reading entire objects is more common than filtering
- Storing snapshots of complex state
- Rapid schema evolution during development
- Nested structures map poorly to normalized tables
The Reality: Most applications use both. Core entities like users, products, and orders live in normalized tables. Their dynamic attributes, analysis results, and event metadata go in JSONB columns.
Drizzle Schema Definition with JSONB
Drizzle makes JSONB columns first-class citizens with full TypeScript support. Let's build an analytics platform schema that stores AI analysis results.
import { pgTable, uuid, timestamp, jsonb, text } from 'drizzle-orm/pg-core';
// Define your TypeScript types first
export type AnalysisConfig = {
model: string;
temperature: number;
maxTokens: number;
customPrompts?: string[];
};
export type AnalysisResult = {
score: number;
confidence: number;
findings: {
category: string;
severity: 'low' | 'medium' | 'high';
description: string;
recommendations?: string[];
}[];
metadata: {
processingTime: number;
modelVersion: string;
timestamp: string;
};
};
// Define your table with JSONB columns
export const analysisReports = pgTable('analysis_reports', {
id: uuid().primaryKey().defaultRandom(),
// Regular columns for frequently queried fields
userId: text('user_id').notNull(),
status: text('status').notNull(),
// JSONB columns for complex, nested data
config: jsonb().$type<AnalysisConfig>().notNull(),
results: jsonb().$type<AnalysisResult>(),
// Timestamps
createdAt: timestamp('created_at').defaultNow().notNull(),
updatedAt: timestamp('updated_at')
.defaultNow()
.$onUpdate(() => new Date())
.notNull(),
});
// Infer types for inserts and selects
export type InsertAnalysisReport = typeof analysisReports.$inferInsert;
export type SelectAnalysisReport = typeof analysisReports.$inferSelect;
Key Pattern: Type Safety with .$type()
The .$type<T>() method tells Drizzle what TypeScript type your JSONB column stores. This gives you:
- Compile-time type checking on inserts
- Autocomplete in your IDE
- Type safety when reading JSONB data
- Refactoring safety across your codebase
Note that .$type<T>() is a compile-time assertion only: Drizzle does not check the data at runtime, so malformed JSONB can still reach the database. Pair it with runtime validation (for example, Zod) for data arriving from external sources.
TypeScript Type Safety for JSONB Data
Once you've defined your JSONB types, Drizzle enforces them throughout your application. Here's how type safety flows from schema to queries:
import { eq } from 'drizzle-orm';
import { db } from './db';
import { analysisReports } from './schema';
// ✅ Type-safe insert
async function createAnalysis(userId: string) {
const report = await db
.insert(analysisReports)
.values({
userId,
status: 'processing',
config: {
model: 'gpt-4',
temperature: 0.7,
maxTokens: 2000,
customPrompts: ['Analyze technical accuracy'],
},
// 'results' is optional, can be null initially
})
.returning();
return report[0];
}
// ✅ Type-safe update
async function updateAnalysisResults(reportId: string, results: AnalysisResult) {
await db
.update(analysisReports)
.set({
results,
status: 'completed',
})
.where(eq(analysisReports.id, reportId));
}
// ✅ Type-safe reads with full autocomplete
async function getAnalysis(reportId: string) {
const report = await db.query.analysisReports.findFirst({
where: eq(analysisReports.id, reportId),
});
if (!report) return null;
// TypeScript knows the exact shape of these objects
const modelUsed = report.config.model; // string
const score = report.results?.score; // number | undefined
const firstFinding = report.results?.findings[0]; // auto-complete available
return report;
}
Runtime Validation with Zod
TypeScript provides compile-time safety, but runtime validation ensures data integrity from external sources. Use Zod schemas alongside Drizzle types:
import { z } from 'zod';
// Define Zod schema for runtime validation
export const analysisConfigSchema = z.object({
model: z.string(),
temperature: z.number().min(0).max(2),
maxTokens: z.number().positive(),
customPrompts: z.array(z.string()).optional(),
});
export const analysisResultSchema = z.object({
score: z.number().min(0).max(100),
confidence: z.number().min(0).max(1),
findings: z.array(
z.object({
category: z.string(),
severity: z.enum(['low', 'medium', 'high']),
description: z.string(),
recommendations: z.array(z.string()).optional(),
}),
),
metadata: z.object({
processingTime: z.number(),
modelVersion: z.string(),
timestamp: z.string().datetime(),
}),
});
// Use in API endpoints or service functions
async function createAnalysisFromAPI(input: unknown) {
// Validate and parse input
const config = analysisConfigSchema.parse(input);
// TypeScript now knows 'config' matches AnalysisConfig
return await db
.insert(analysisReports)
.values({
userId: 'user-123',
status: 'processing',
config, // Type-safe insert
})
.returning();
}
This pattern gives you:
- Compile-time safety from TypeScript
- Runtime validation from Zod
- Database schema from Drizzle
- Single source of truth for your data model
Querying JSONB with Drizzle's Fluent API
Drizzle provides SQL-like operators for querying JSONB columns. Let's explore common patterns:
Basic JSONB Queries
import { eq, sql } from 'drizzle-orm';
import { db } from './db';
import { analysisReports } from './schema';
// Query by top-level JSONB field
async function findByModel(model: string) {
return await db
.select()
.from(analysisReports)
.where(sql`${analysisReports.config}->>'model' = ${model}`);
}
// Query nested JSONB fields
async function findHighSeverityFindings() {
return await db
.select()
.from(analysisReports)
.where(sql`${analysisReports.results}->'findings' @> '[{"severity": "high"}]'`);
}
// Extract JSONB field as column in SELECT
async function getModelUsageStats() {
return await db
.select({
model: sql<string>`${analysisReports.config}->>'model'`,
count: sql<number>`count(*)`,
})
.from(analysisReports)
.groupBy(sql`${analysisReports.config}->>'model'`);
}
JSONB Operators Reference
PostgreSQL provides specialized operators for JSONB:
- -> : get JSON object field by key (returns JSONB)
- ->> : get JSON object field as text
- @> : does the left JSON value contain the right one?
- <@ : is the left JSON value contained in the right one?
- ? : does the JSON object contain the key?
- ?& : does the JSON object contain all of the keys?
- ?| : does the JSON object contain any of the keys?
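For intuition, the containment operator (@>) can be approximated in plain TypeScript. This is a rough sketch for illustration only; the real operator is evaluated inside PostgreSQL and handles additional edge cases, such as scalar-in-array containment:

```typescript
// Approximate sketch of JSONB @> (containment) semantics, for intuition only.
// The real operator runs inside PostgreSQL and covers more edge cases.
function jsonbContains(left: unknown, right: unknown): boolean {
  if (Array.isArray(right)) {
    // An array contains another array when every element on the right
    // is contained in some element on the left
    return Array.isArray(left) && right.every((r) => left.some((l) => jsonbContains(l, r)));
  }
  if (right !== null && typeof right === 'object') {
    // An object contains another object when every key/value pair matches
    if (left === null || typeof left !== 'object' || Array.isArray(left)) return false;
    const obj = left as Record<string, unknown>;
    return Object.entries(right).every(([k, v]) => k in obj && jsonbContains(obj[k], v));
  }
  // Scalars must match exactly
  return left === right;
}

// Mirrors: results->'findings' @> '[{"severity": "high"}]'
const findings = [
  { severity: 'high', category: 'SEO' },
  { severity: 'low', category: 'UX' },
];
console.log(jsonbContains(findings, [{ severity: 'high' }])); // true
```

Containment is also the operator accelerated by jsonb_path_ops GIN indexes, which is why it appears so often in filter clauses.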
Advanced Querying Patterns
// Find reports where config contains specific keys
async function findReportsWithCustomPrompts() {
return await db
.select()
.from(analysisReports)
.where(sql`${analysisReports.config} ? 'customPrompts'`);
}
// Range queries on JSONB numeric fields
async function findHighConfidenceResults() {
return await db
.select()
.from(analysisReports)
.where(sql`(${analysisReports.results}->>'confidence')::numeric > 0.8`);
}
// Array containment queries
async function findReportsByFindingCategory(category: string) {
return await db
.select()
.from(analysisReports)
.where(
sql`EXISTS (
SELECT 1 FROM jsonb_array_elements(${analysisReports.results}->'findings') finding
WHERE finding->>'category' = ${category}
)`,
);
}
// Combine JSONB and regular column conditions
async function findUserReportsWithHighScores(userId: string) {
return await db
.select()
.from(analysisReports)
.where(
sql`${analysisReports.userId} = ${userId}
AND (${analysisReports.results}->>'score')::numeric > 80`,
);
}
Indexing JSONB for Performance
JSONB columns need indexes just like regular columns. PostgreSQL offers specialized index types for JSONB data.
GIN Indexes for JSONB
GIN (Generalized Inverted Index) indexes are ideal for JSONB columns when you query multiple keys or nested paths.
import { pgTable, uuid, jsonb, index } from 'drizzle-orm/pg-core';
import { sql } from 'drizzle-orm';
export const analysisReports = pgTable(
'analysis_reports',
{
id: uuid().primaryKey().defaultRandom(),
config: jsonb().$type<AnalysisConfig>().notNull(),
results: jsonb().$type<AnalysisResult>(),
// ... other columns
},
(table) => ({
// GIN index on entire JSONB column (supports all operators)
configGinIdx: index('config_gin_idx').using('gin', sql`${table.config}`),
// GIN index with jsonb_path_ops (faster, smaller, containment only)
resultsGinPathIdx: index('results_gin_path_idx').using('gin', sql`${table.results} jsonb_path_ops`),
}),
);
When to Use Each GIN Index Type:
Default GIN Index (gin):
- Supports all JSONB operators (@>, ?, ?&, ?|)
- Larger index size
- Slightly slower queries
- Use when you need full operator support
jsonb_path_ops GIN Index:
- Supports only the @> (containment) operator
- Smaller index size (3x smaller in some cases)
- Faster containment queries
- Use for containment-only queries on JSONB structure
Expression Indexes for Specific Paths
When you frequently query specific JSONB paths, create expression indexes:
export const analysisReports = pgTable(
'analysis_reports',
{
id: uuid().primaryKey().defaultRandom(),
config: jsonb().$type<AnalysisConfig>().notNull(),
results: jsonb().$type<AnalysisResult>(),
},
(table) => ({
// Index specific JSONB field extraction
modelIdx: index('model_idx').on(sql`(${table.config}->>'model')`),
// Index computed JSONB value (cast to appropriate type)
scoreIdx: index('score_idx').on(sql`((${table.results}->>'score')::numeric)`),
// Partial index for common filter + JSONB condition
highScoreIdx: index('high_score_idx')
.on(sql`((${table.results}->>'score')::numeric)`)
.where(sql`(${table.results}->>'score')::numeric > 80`),
}),
);
Index Performance Guidelines
According to PostgreSQL JSONB documentation:
- Use GIN indexes when querying JSONB keys, containment, or existence
- Use expression indexes for frequently filtered specific paths
- Use partial indexes for commonly filtered subsets
- Avoid indexing large JSONB values that change frequently
- Monitor index size: GIN indexes can grow large on high-cardinality data
Testing Index Performance
-- Explain analyze to check index usage
EXPLAIN ANALYZE
SELECT * FROM analysis_reports
WHERE config->>'model' = 'gpt-4';
-- Check index size
SELECT pg_size_pretty(pg_relation_size('config_gin_idx'));
-- List JSONB indexes
SELECT schemaname, tablename, indexname, indexdef
FROM pg_indexes
WHERE indexdef LIKE '%jsonb%';
Migrations with JSONB Columns
Drizzle Kit handles JSONB migrations seamlessly. Here's how to evolve your schema safely:
Initial Schema Creation
// schema.ts - Initial version
export const analysisReports = pgTable('analysis_reports', {
id: uuid().primaryKey().defaultRandom(),
config: jsonb().$type<AnalysisConfig>().notNull(),
createdAt: timestamp('created_at').defaultNow().notNull(),
});
Generate migration:
npx drizzle-kit generate
This creates a migration file:
-- 0001_create_analysis_reports.sql
CREATE TABLE "analysis_reports" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid(),
"config" jsonb NOT NULL,
"created_at" timestamp DEFAULT now() NOT NULL
);
Adding New JSONB Columns
// schema.ts - Add results column
export const analysisReports = pgTable('analysis_reports', {
id: uuid().primaryKey().defaultRandom(),
config: jsonb().$type<AnalysisConfig>().notNull(),
results: jsonb().$type<AnalysisResult>(), // New column
createdAt: timestamp('created_at').defaultNow().notNull(),
});
npx drizzle-kit generate
-- 0002_add_results_column.sql
ALTER TABLE "analysis_reports"
ADD COLUMN "results" jsonb;
Evolving JSONB Structure
The beauty of JSONB is schema evolution happens in TypeScript, not SQL:
// Old type
export type AnalysisResult = {
score: number;
findings: string[];
};
// New type with backward compatibility
export type AnalysisResult = {
score: number;
confidence?: number; // New optional field
// Old records have string[], new records have object[]
findings: (string | {
category: string;
description: string;
})[];
};
// Handle both formats in application code
function normalizeFindings(results: AnalysisResult) {
return results.findings.map((finding) => {
// Handle old string format
if (typeof finding === 'string') {
return {
category: 'general',
description: finding,
};
}
// Handle new object format
return finding;
});
}
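As a quick check, here is the normalizer in action on both formats, restated compactly so the sketch is self-contained:

```typescript
// Compact restatement of the union type and normalizer above
type Finding = string | { category: string; description: string };

function normalizeFindings(findings: Finding[]) {
  return findings.map((finding) =>
    typeof finding === 'string'
      ? { category: 'general', description: finding } // old string format
      : finding, // new object format
  );
}

// An old-format record is upgraded on read
console.log(normalizeFindings(['Slow page load']));
// [ { category: 'general', description: 'Slow page load' } ]

// A new-format record passes through unchanged
console.log(normalizeFindings([{ category: 'Performance', description: 'Slow page load' }]));
```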
Data Migrations for JSONB
When you need to transform existing JSONB data:
// migration-script.ts
import { db } from './db';
import { analysisReports } from './schema';
import { sql } from 'drizzle-orm';
async function migrateResultsStructure() {
// Update JSONB structure using PostgreSQL's jsonb functions
await db.execute(
sql`
UPDATE analysis_reports
SET results = jsonb_set(
results,
'{metadata}',
jsonb_build_object(
'processingTime', 0,
'modelVersion', 'v1.0',
'timestamp', now()::text
)
)
WHERE results IS NOT NULL
AND results->'metadata' IS NULL
`,
);
}
Migration Best Practices
- Make JSONB changes additive: add optional fields, don't remove them
- Handle null/missing gracefully: use TypeScript's optional types
- Version your JSONB schemas: include a version field when structure varies
- Test migrations on copies: JSONB updates can be expensive at scale
- Use transactions: wrap data migrations in transactions for safety
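The "version your JSONB schemas" practice can be sketched with a discriminated union; the names here (ResultsV1, ResultsV2, upgrade) are hypothetical, not part of the schema in this guide:

```typescript
// Hypothetical versioned JSONB payloads, sketching the "version field" practice
type ResultsV1 = { version: 1; score: number; findings: string[] };
type ResultsV2 = {
  version: 2;
  score: number;
  findings: { category: string; description: string }[];
};
type VersionedResults = ResultsV1 | ResultsV2;

// Upgrade old payloads on read so the rest of the app sees only the latest shape
function upgrade(results: VersionedResults): ResultsV2 {
  if (results.version === 1) {
    return {
      version: 2,
      score: results.score,
      findings: results.findings.map((f) => ({ category: 'general', description: f })),
    };
  }
  return results;
}

console.log(upgrade({ version: 1, score: 72, findings: ['Missing alt text'] }));
```

Because the version field is a literal type, TypeScript narrows the union inside the if branch, so each shape is handled with full type safety.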
Learn more about Drizzle migrations in the official documentation.
Validation with Zod for JSONB Data
Runtime validation is critical for JSONB columns since TypeScript can't prevent invalid data at runtime. Zod provides elegant validation that mirrors your TypeScript types.
Schema Definition Pattern
import { z } from 'zod';
// Define Zod schema
export const analysisConfigSchema = z.object({
model: z.enum(['gpt-4', 'gpt-3.5-turbo', 'claude-2']),
temperature: z.number().min(0).max(2),
maxTokens: z.number().int().positive().max(10000),
customPrompts: z.array(z.string().min(1)).optional(),
});
// Infer TypeScript type from Zod schema (single source of truth)
export type AnalysisConfig = z.infer<typeof analysisConfigSchema>;
// Use in Drizzle schema
export const analysisReports = pgTable('analysis_reports', {
id: uuid().primaryKey().defaultRandom(),
config: jsonb().$type<AnalysisConfig>().notNull(),
});
Input Validation
import { eq } from 'drizzle-orm';
import { db } from './db';
import { analysisReports } from './schema';
import { analysisConfigSchema, analysisResultSchema } from './schemas';
async function createAnalysisReport(input: unknown) {
// Validate input data
const validatedConfig = analysisConfigSchema.parse(input);
// Type-safe database insert
return await db
.insert(analysisReports)
.values({
userId: 'user-123',
status: 'processing',
config: validatedConfig, // Guaranteed to match schema
})
.returning();
}
// With error handling
async function updateAnalysisResults(reportId: string, input: unknown) {
const parseResult = analysisResultSchema.safeParse(input);
if (!parseResult.success) {
throw new Error(`Invalid analysis results: ${parseResult.error.message}`);
}
await db
.update(analysisReports)
.set({
results: parseResult.data,
status: 'completed',
})
.where(eq(analysisReports.id, reportId));
}
API Integration Pattern
Combine Drizzle, Zod, and TypeScript for end-to-end type safety:
import { Router } from 'express';
import { z } from 'zod';
const router = Router();
// Request validation schema
const createAnalysisRequestSchema = z.object({
userId: z.string().uuid(),
config: analysisConfigSchema,
});
router.post('/analysis', async (req, res) => {
try {
// Validate request body
const { userId, config } = createAnalysisRequestSchema.parse(req.body);
// Type-safe database operation
const report = await db
.insert(analysisReports)
.values({
userId,
status: 'processing',
config,
})
.returning();
res.json({ success: true, reportId: report[0].id });
} catch (error) {
if (error instanceof z.ZodError) {
res.status(400).json({
error: 'Validation failed',
details: error.errors,
});
} else {
res.status(500).json({ error: 'Internal server error' });
}
}
});
Complex Validation Rules
Zod supports advanced validation for complex JSONB structures:
export const analysisResultSchema = z
.object({
score: z.number().min(0).max(100),
confidence: z.number().min(0).max(1),
findings: z
.array(
z.object({
category: z.string(),
severity: z.enum(['low', 'medium', 'high']),
description: z.string().min(10),
recommendations: z.array(z.string()).optional(),
}),
)
.min(1) // Require at least one finding
.refine(
(findings) => {
// Custom validation: high severity must have recommendations
return findings.every(
(f) => f.severity !== 'high' || (f.recommendations && f.recommendations.length > 0),
);
},
{ message: 'High severity findings must include recommendations' },
),
metadata: z.object({
processingTime: z.number().positive(),
modelVersion: z.string().regex(/^v\d+\.\d+$/),
timestamp: z.string().datetime(),
}),
})
.refine(
(data) => {
// Cross-field validation: low confidence should have lower scores
if (data.confidence < 0.5 && data.score > 80) {
return false;
}
return true;
},
{ message: 'High scores require high confidence' },
);
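The cross-field rule in the final .refine() boils down to a small predicate. Restating it as a standalone helper (a hypothetical extraction, separate from the Zod schema) makes the logic easy to unit-test:

```typescript
// The cross-field consistency rule from the refine above, as a plain predicate.
// Hypothetical helper; the Zod .refine() remains the enforcement point.
type Scored = { score: number; confidence: number };

function isConsistent(r: Scored): boolean {
  // A low-confidence result must not claim a high score
  return !(r.confidence < 0.5 && r.score > 80);
}

console.log(isConsistent({ score: 90, confidence: 0.9 })); // true
console.log(isConsistent({ score: 90, confidence: 0.3 })); // false
```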
Validation Performance
For high-throughput APIs, consider validation strategies:
// Define Zod schemas once at module scope so they are constructed a single time
// Validate only on write, trust on read
async function safeInsert(report: InsertAnalysisReport, rawResults: unknown) {
const validated = analysisResultSchema.parse(rawResults); // Validate at the write boundary
return await db.insert(analysisReports).values({ ...report, results: validated });
}
async function safeRead(reportId: string) {
const report = await db.query.analysisReports.findFirst({
where: eq(analysisReports.id, reportId),
});
// Skip validation on read - trust database
return report;
}
Real Use Case: Storing AI Analysis Results
Let's build a complete example: an AI content analysis system that stores complex, varying results using JSONB.
The Challenge
An AI platform analyzes content across multiple dimensions:
- SEO optimization scores
- Technical audit findings
- Sentiment analysis results
- Competitor comparisons
Each analysis type has a different structure. Results evolve as AI models improve. Traditional normalized tables would require constant schema changes.
The Solution: JSONB with Drizzle ORM
import { pgTable, uuid, timestamp, jsonb, text, index } from 'drizzle-orm/pg-core';
import { sql } from 'drizzle-orm';
import { z } from 'zod';
// Define schemas with Zod (single source of truth)
export const seoAnalysisSchema = z.object({
overallScore: z.number().min(0).max(100),
metrics: z.object({
technicalSEO: z.number(),
contentQuality: z.number(),
userExperience: z.number(),
}),
improvements: z.array(
z.object({
priority: z.enum(['low', 'medium', 'high', 'critical']),
category: z.string(),
description: z.string(),
estimatedImpact: z.number(),
}),
),
timestamp: z.string().datetime(),
});
export const sentimentAnalysisSchema = z.object({
distribution: z.object({
positive: z.number(),
neutral: z.number(),
negative: z.number(),
}),
breakdown: z.array(
z.object({
source: z.string(),
sentiment: z.enum(['positive', 'neutral', 'negative']),
confidence: z.number(),
excerpt: z.string(),
}),
),
});
export const competitorAnalysisSchema = z.object({
competitors: z.record(
z.string(),
z.object({
mentions: z.number(),
percentage: z.number(),
trends: z.array(
z.object({
period: z.string(),
value: z.number(),
}),
),
}),
),
});
// Infer TypeScript types
export type SEOAnalysis = z.infer<typeof seoAnalysisSchema>;
export type SentimentAnalysis = z.infer<typeof sentimentAnalysisSchema>;
export type CompetitorAnalysis = z.infer<typeof competitorAnalysisSchema>;
// Combined analysis result
export type AnalysisResults = {
seo?: SEOAnalysis;
sentiment?: SentimentAnalysis;
competitors?: CompetitorAnalysis;
};
// Drizzle schema
export const contentReports = pgTable(
'content_reports',
{
id: uuid().primaryKey().defaultRandom(),
// Core fields (normalized)
websiteUrl: text('website_url').notNull(),
status: text('status').notNull().default('processing'),
// Flexible analysis results (JSONB)
results: jsonb().$type<AnalysisResults>(),
// Timestamps
createdAt: timestamp('created_at').defaultNow().notNull(),
updatedAt: timestamp('updated_at')
.defaultNow()
.$onUpdate(() => new Date())
.notNull(),
},
(table) => ({
// Index for querying by URL
websiteUrlIdx: index('website_url_idx').on(table.websiteUrl),
// GIN index for JSONB queries
resultsGinIdx: index('results_gin_idx').using('gin', sql`${table.results}`),
// Expression index for SEO score
seoScoreIdx: index('seo_score_idx').on(sql`((${table.results}->'seo'->>'overallScore')::numeric)`),
}),
);
export type InsertContentReport = typeof contentReports.$inferInsert;
export type SelectContentReport = typeof contentReports.$inferSelect;
Service Implementation
import { db } from './db';
import { contentReports } from './schema';
import { eq, sql } from 'drizzle-orm';
import { seoAnalysisSchema, sentimentAnalysisSchema, competitorAnalysisSchema } from './schemas';
export class ContentAnalysisService {
async createReport(websiteUrl: string) {
const [report] = await db
.insert(contentReports)
.values({
websiteUrl,
status: 'processing',
})
.returning();
return report;
}
async updateSEOAnalysis(reportId: string, data: unknown) {
// Validate incoming data
const seoAnalysis = seoAnalysisSchema.parse(data);
// Update specific JSONB field
await db
.update(contentReports)
.set({
results: sql`
COALESCE(${contentReports.results}, '{}'::jsonb) ||
jsonb_build_object('seo', ${JSON.stringify(seoAnalysis)}::jsonb)
`,
})
.where(eq(contentReports.id, reportId));
}
async updateSentimentAnalysis(reportId: string, data: unknown) {
const sentimentAnalysis = sentimentAnalysisSchema.parse(data);
await db
.update(contentReports)
.set({
results: sql`
COALESCE(${contentReports.results}, '{}'::jsonb) ||
jsonb_build_object('sentiment', ${JSON.stringify(sentimentAnalysis)}::jsonb)
`,
})
.where(eq(contentReports.id, reportId));
}
async updateCompetitorAnalysis(reportId: string, data: unknown) {
const competitorAnalysis = competitorAnalysisSchema.parse(data);
await db
.update(contentReports)
.set({
results: sql`
COALESCE(${contentReports.results}, '{}'::jsonb) ||
jsonb_build_object('competitors', ${JSON.stringify(competitorAnalysis)}::jsonb)
`,
})
.where(eq(contentReports.id, reportId));
}
async completeReport(reportId: string) {
await db.update(contentReports).set({ status: 'completed' }).where(eq(contentReports.id, reportId));
}
async getReport(reportId: string) {
return await db.query.contentReports.findFirst({
where: eq(contentReports.id, reportId),
});
}
async findHighScoringReports(minScore: number) {
return await db
.select()
.from(contentReports)
.where(sql`(${contentReports.results}->'seo'->>'overallScore')::numeric >= ${minScore}`);
}
async getCompetitorMentions(competitorName: string) {
return await db
.select()
.from(contentReports)
.where(sql`${contentReports.results}->'competitors'->'competitors' ? ${competitorName}`);
}
}
Usage Example
const service = new ContentAnalysisService();
// Create new report
const report = await service.createReport('https://example.com');
// Update with SEO analysis (validated)
await service.updateSEOAnalysis(report.id, {
overallScore: 85,
metrics: {
technicalSEO: 90,
contentQuality: 80,
userExperience: 85,
},
improvements: [
{
priority: 'high',
category: 'Performance',
description: 'Optimize image loading',
estimatedImpact: 15,
},
],
timestamp: new Date().toISOString(),
});
// Update with sentiment analysis
await service.updateSentimentAnalysis(report.id, {
distribution: {
positive: 65,
neutral: 25,
negative: 10,
},
breakdown: [
{
source: 'Product Reviews',
sentiment: 'positive',
confidence: 0.92,
excerpt: 'Great product, highly recommend',
},
],
});
// Mark complete
await service.completeReport(report.id);
// Query results
const topReports = await service.findHighScoringReports(80);
Benefits of This Approach
- Flexible Schema: Add new analysis types without schema migrations
- Type Safety: Full TypeScript support from API to database
- Validation: Runtime checks prevent invalid data
- Performance: Indexed JSONB queries remain fast
- Maintainability: Clear separation between stable (URLs) and dynamic (results) data
Key Takeaways
- Choose wisely: Use JSONB for polymorphic, evolving data; normalized tables for stable entities
- Type safety matters: Drizzle's $type<T>() brings TypeScript to JSONB columns
- Validate at runtime: Combine Zod schemas with Drizzle types for bulletproof validation
- Index strategically: GIN indexes enable fast JSONB queries; expression indexes optimize specific paths
- Evolve gracefully: JSONB schema changes happen in code, not migrations
- Think hybrid: Most apps benefit from normalized tables AND JSONB columns together
JSONB with Drizzle ORM gives you schema flexibility without sacrificing the type safety and query power you expect from TypeScript and SQL. For AI analytics, dynamic configurations, and evolving data models, it's a production-ready pattern that scales.
Ready to dive deeper into Drizzle ORM? Check out the official documentation or explore PostgreSQL JSONB capabilities.