
JSON Repair for LLM Responses: Fixing Malformed AI Output

January 3, 2026
Stefan Mentović
llm · json · error-handling · typescript · ai

Learn how to handle invalid JSON from LLMs using jsonrepair and Zod. Production-tested techniques for building resilient AI applications.


You've crafted the perfect prompt. Your LLM returns exactly the data you need. Everything works beautifully... until the API suddenly returns truncated JSON, missing closing braces, or trailing commas that break JSON.parse().

If you've worked with AI APIs (OpenAI, Anthropic, Google Gemini, or others), you've encountered this. LLMs are incredibly powerful at generating structured data, but they're not perfect. Even with careful prompting, you'll occasionally receive malformed JSON that crashes your application.

The good news? This is a solved problem. In this guide, we'll explore production-tested techniques for handling malformed JSON from LLMs, using the jsonrepair library combined with schema validation via Zod.

#The Problem: Why LLMs Generate Invalid JSON

LLMs generate text token-by-token, which creates several failure modes for JSON output:

1. Truncation (Max Token Limits)

{
  "results": [
    { "id": 1, "name": "First" },
    { "id": 2, "name": "Seco

The response was cut off mid-generation after hitting the token limit.

2. Missing Quotes

{
  "status": success,
  "message": "Operation complete"
}

The model "forgot" quotes around the success value.

3. Trailing Commas

{
	"items": [1, 2, 3],
	"total": 3
}

Valid in JavaScript, but illegal per the JSON specification.

4. Comments (JSON5 Syntax)

{
	"result": "success", // API call completed
	"timestamp": 1234567890
}

The model mixed JSON with comments, which JSON.parse() rejects.

5. Mixed Quotes

{
	"name": "John",
	"age": 30
}

Inconsistent quote types that violate JSON specification.

#Real-World Frequency

In production systems handling thousands of LLM responses daily, malformed JSON occurs in approximately 2-5% of responses. This varies by:

  • Model provider: Some models are more prone to errors than others
  • Response length: Longer responses increase truncation risk
  • Prompt complexity: Complex structured outputs are harder to format correctly
  • Temperature setting: Higher temperatures increase formatting errors

Even a 2% failure rate means 20 crashed requests per 1,000 API calls. That's unacceptable in production.

#Common JSON Errors by LLM Provider

Different AI providers exhibit different error patterns based on their training and implementation:

#OpenAI (GPT-4, GPT-3.5)

  • Most common: Truncation at token limits
  • Less common: Missing closing braces in nested objects
  • Rare: Quote inconsistencies

#Anthropic Claude

  • Most common: Wrapping JSON in markdown code blocks
  • Less common: Extra explanatory text before/after JSON
  • Rare: Truncation (good token management)

#Google Gemini

  • Most common: Mixed text and JSON in single response
  • Less common: JSON5 syntax (trailing commas, comments)
  • Rare: Missing quotes on property values

#Open-Source Models (Llama, Mistral)

  • Most common: All of the above, especially with smaller models
  • Less common: Valid JSON but incorrect structure
  • Rare: Completely garbled output

Understanding these patterns helps you design better prompts and more robust error handling.

#Introducing jsonrepair: Automatic JSON Fixing

The jsonrepair library by Jos de Jong is purpose-built to fix common JSON errors automatically. It handles:

  • Missing or mismatched quotes
  • Trailing commas
  • Comments
  • Escape character issues
  • Truncated structures
  • Concatenated JSON strings
  • MongoDB data types (ObjectId, Date)
  • Python literals (True/False/None)
  • Special numeric values (NaN, Infinity)

#Installation

npm install jsonrepair

The library is lightweight (8KB minified), has zero dependencies, and works in both Node.js and browsers.

#Basic Usage

import { jsonrepair } from 'jsonrepair';

// Malformed JSON from LLM
const malformedJson = `{
  status: 'success',
  items: [1, 2, 3,],
  total: 3,
}`;

// Repair and parse
const repaired = jsonrepair(malformedJson);
const parsed = JSON.parse(repaired);

console.log(parsed);
// Output: { status: 'success', items: [1, 2, 3], total: 3 }

That's it. jsonrepair transforms invalid JSON into valid JSON that JSON.parse() can handle.
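A couple of the more exotic repairs from the feature list are worth seeing in action. Here's a minimal sketch; the expected outputs follow the library's documented behavior, though exact handling of edge cases may vary by version:

import { jsonrepair } from 'jsonrepair';

// Python-style constants, as a model trained on Python-heavy data might emit them
const pythonish = `{ "ok": True, "value": None, "enabled": False }`;
console.log(JSON.parse(jsonrepair(pythonish)));
// { ok: true, value: null, enabled: false }

// Concatenated strings get merged into a single string
const concatenated = `{ "message": "Hello, " + "world" }`;
console.log(JSON.parse(jsonrepair(concatenated)));
// { message: 'Hello, world' }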

#Extracting JSON from Mixed Responses

LLMs often return JSON wrapped in markdown code blocks or mixed with explanatory text. You need to extract the JSON before repairing it.

#Pattern 1: JSON in Code Blocks

function extractJsonFromCodeBlock(text: string): string | null {
	// Match ```json ... ``` code blocks
	const match = text.match(/```json\s*\n([\s\S]*?)\n```/);

	if (match && match[1]) {
		return match[1];
	}

	return null;
}

// Example usage
const response = `
Here's the data you requested:

\`\`\`json
{
  "name": "Example",
  "value": 42
}
\`\`\`

Let me know if you need more details!
`;

const extracted = extractJsonFromCodeBlock(response);
// Returns: '{\n  "name": "Example",\n  "value": 42\n}'

#Pattern 2: JSON Anywhere in Text

function extractJsonFromText(text: string): string | null {
	// First, try code block extraction
	const codeBlockMatch = text.match(/```json\s*\n([\s\S]*?)\n```/);
	if (codeBlockMatch && codeBlockMatch[1]) {
		return codeBlockMatch[1];
	}

	// Fallback: find anything that looks like JSON
	const jsonMatch = text.match(/\{[\s\S]*\}/);
	if (jsonMatch && jsonMatch[0]) {
		return jsonMatch[0];
	}

	return null;
}
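A quick usage sketch, mirroring the Pattern 1 example:

// Example usage: JSON embedded in plain prose, with no code fence
const reply = 'Sure! Here you go: {"name": "Example", "value": 42} Hope that helps.';

const extracted = extractJsonFromText(reply);
// Returns: '{"name": "Example", "value": 42}'

Note that the fallback regex is greedy: it captures everything from the first { to the last }, which works when the response contains a single object but over-captures when several separate objects appear. Pattern 3 addresses that case.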

#Pattern 3: Multiple JSON Objects

Some responses contain multiple JSON objects. Handle this based on your use case:

function extractAllJsonObjects(text: string): string[] {
	const results: string[] = [];

	// Match all {...} structures (this regex handles one level of nesting)
	const regex = /\{[^{}]*(?:\{[^{}]*\}[^{}]*)*\}/g;
	const matches = text.match(regex);

	if (matches) {
		for (const match of matches) {
			try {
				// Validate it's actually JSON by attempting repair + parse
				const repaired = jsonrepair(match);
				JSON.parse(repaired);
				results.push(match);
			} catch {
				// Not valid JSON, skip
				continue;
			}
		}
	}

	return results;
}
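A quick check of the helper (the regex above handles one level of nesting):

const reply = 'First: {"id": 1} and second: {"id": 2, "tags": {"a": true}}';

console.log(extractAllJsonObjects(reply));
// ['{"id": 1}', '{"id": 2, "tags": {"a": true}}']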

#Validation with Zod After Repair

Fixing malformed JSON is only half the battle. You also need to validate that the repaired JSON matches your expected structure. This is where Zod shines.

#Why Validate After Repair?

Even after repair, you might have:

  1. Valid JSON with wrong structure: {"foo": "bar"} when you expected {"name": "...", "age": ...}
  2. Missing required fields: The LLM omitted a property you need
  3. Wrong types: String instead of number, etc.
  4. Extra fields: Unexpected properties that might indicate misunderstanding

Zod provides type-safe validation with excellent error messages.
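To make the first point concrete, here's a minimal sketch with a hypothetical UserSchema; the string parses as JSON just fine but fails schema validation:

import { z } from 'zod';

const UserSchema = z.object({
	name: z.string(),
	age: z.number(),
});

// Syntactically valid JSON, but not the shape we asked for
const parsed = JSON.parse('{"foo": "bar"}');

const result = UserSchema.safeParse(parsed);
console.log(result.success);
// false: Zod reports that "name" and "age" are required but missing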

#Complete Repair + Validation Pipeline

import { jsonrepair } from 'jsonrepair';
import { z } from 'zod';

// Define your expected structure
const ResponseSchema = z.object({
	status: z.enum(['success', 'error']),
	data: z.object({
		items: z.array(
			z.object({
				id: z.number(),
				name: z.string(),
				score: z.number().min(0).max(100),
			}),
		),
		total: z.number(),
	}),
	message: z.string().optional(),
});

type ValidatedResponse = z.infer<typeof ResponseSchema>;

function parseAndValidateLLMResponse(rawText: string): ValidatedResponse {
	// Step 1: Extract JSON from text
	const extracted = extractJsonFromText(rawText);

	if (!extracted) {
		throw new Error('No JSON found in LLM response');
	}

	// Step 2: Repair malformed JSON
	const repaired = jsonrepair(extracted);

	// Step 3: Parse to object
	const parsed = JSON.parse(repaired);

	// Step 4: Validate structure with Zod
	const validated = ResponseSchema.parse(parsed);

	return validated;
}

// Usage
try {
	const result = parseAndValidateLLMResponse(llmResponseText);

	// TypeScript knows the exact type here
	console.log(result.data.items[0].name);
} catch (error) {
	if (error instanceof z.ZodError) {
		console.error('Validation failed:', error.issues);
		// Handle schema mismatch
	} else {
		console.error('Parsing failed:', error);
		// Handle JSON repair failure
	}
}

#Validation Error Handling

Zod provides detailed error information when validation fails:

import { z } from 'zod';

function parseWithDetailedErrors(rawText: string) {
	try {
		const extracted = extractJsonFromText(rawText);
		if (!extracted) throw new Error('No JSON found');

		const repaired = jsonrepair(extracted);
		const parsed = JSON.parse(repaired);
		const validated = ResponseSchema.parse(parsed);

		return { success: true, data: validated };
	} catch (error) {
		if (error instanceof z.ZodError) {
			// Format Zod errors for logging/debugging
			const issues = error.issues.map((issue) => ({
				path: issue.path.join('.'),
				message: issue.message,
				code: issue.code, // `received` exists only on some issue types, so report the code instead
			}));

			return {
				success: false,
				error: 'VALIDATION_ERROR',
				issues,
			};
		}

		return {
			success: false,
			error: 'PARSE_ERROR',
			message: error instanceof Error ? error.message : 'Unknown error',
		};
	}
}

#Fallback Strategies When Repair Fails

Even jsonrepair can't fix everything. You need fallback strategies for truly broken responses.

#Strategy 1: Retry with Modified Prompt

async function queryWithRetry(prompt: string, maxAttempts: number = 3): Promise<ValidatedResponse> {
	for (let attempt = 1; attempt <= maxAttempts; attempt++) {
		try {
			const response = await callLLMAPI(prompt);
			return parseAndValidateLLMResponse(response);
		} catch (error) {
			console.warn(`Attempt ${attempt} failed:`, error);

			if (attempt === maxAttempts) {
				throw new Error('Max retry attempts reached');
			}

			// Modify prompt for retry
			if (error instanceof z.ZodError) {
				// Validation failed - remind the model of the required structure.
				// Note: JSON.stringify on a Zod schema produces unusable output; to embed
				// the actual schema, convert it with zod-to-json-schema (shown later).
				prompt = `${prompt}\n\nIMPORTANT: Return ONLY valid JSON matching the requested structure, with all required fields and correct types.`;
			} else {
				// JSON parsing failed - emphasize format
				prompt = `${prompt}\n\nIMPORTANT: Return ONLY valid JSON with no extra text. Ensure all braces are closed.`;
			}
		}
	}

	throw new Error('Should never reach here');
}

#Strategy 2: Partial Success Handling

Sometimes you can salvage partial data even when full validation fails:

import { z } from 'zod';

// Make all fields recursively optional for partial parsing
// (deepPartial is deprecated in recent Zod releases but available in v3)
const PartialResponseSchema = ResponseSchema.deepPartial();

function parseWithPartialFallback(rawText: string) {
	try {
		// Try full validation first
		return {
			complete: true,
			data: parseAndValidateLLMResponse(rawText),
		};
	} catch (error) {
		// Attempt partial parse
		try {
			const extracted = extractJsonFromText(rawText);
			if (!extracted) throw error;

			const repaired = jsonrepair(extracted);
			const parsed = JSON.parse(repaired);
			const partial = PartialResponseSchema.parse(parsed);

			return {
				complete: false,
				data: partial,
				warning: 'Partial data only - some fields missing or invalid',
			};
		} catch {
			// Complete failure
			throw error;
		}
	}
}

#Strategy 3: Default Values with safeParse

Use Zod's safeParse to gracefully handle failures with defaults:

function parseWithDefaults(rawText: string): ValidatedResponse {
	try {
		const extracted = extractJsonFromText(rawText);
		if (!extracted) throw new Error('No JSON found');

		const repaired = jsonrepair(extracted);
		const parsed = JSON.parse(repaired);

		// Use safeParse instead of parse
		const result = ResponseSchema.safeParse(parsed);

		if (result.success) {
			return result.data;
		}

		// Return safe defaults if validation fails
		console.warn('Using default response due to validation failure');
		return {
			status: 'error',
			data: {
				items: [],
				total: 0,
			},
			message: 'LLM returned invalid response structure',
		};
	} catch {
		// Return defaults for any error
		return {
			status: 'error',
			data: {
				items: [],
				total: 0,
			},
			message: 'Failed to parse LLM response',
		};
	}
}
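On recent Zod releases (3.20+), .catch() offers another option: per-field defaults that replace only the invalid values instead of discarding the whole response. A sketch:

import { z } from 'zod';

// Each field falls back to its default when its value fails validation
const ResilientSchema = z.object({
	status: z.enum(['success', 'error']).catch('error'),
	data: z.object({
		items: z.array(z.object({ id: z.number(), name: z.string() })).catch([]),
		total: z.number().catch(0),
	}),
});

console.log(ResilientSchema.parse({ status: 'weird', data: { items: 'oops', total: NaN } }));
// { status: 'error', data: { items: [], total: 0 } }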

#Prompt Engineering to Reduce JSON Errors

Prevention is better than cure. Good prompts significantly reduce malformed JSON.

#Technique 1: Explicit Format Instructions

const systemPrompt = `
You are a data extraction assistant. You MUST respond with ONLY valid JSON.

RULES:
1. Return ONLY JSON - no explanations, no markdown code blocks
2. Ensure all braces {} and brackets [] are properly closed
3. Use double quotes for all strings and property names
4. No trailing commas
5. No comments

Example response format:
{"status": "success", "data": {"items": []}}
`;

#Technique 2: Schema in Prompt

Include your Zod schema directly in the prompt:

import { zodToJsonSchema } from 'zod-to-json-schema';

const jsonSchema = zodToJsonSchema(ResponseSchema);

const prompt = `
Extract the following information and return as JSON matching this exact schema:

${JSON.stringify(jsonSchema, null, 2)}

Remember: Valid JSON only, no extra text.
`;

#Technique 3: Few-Shot Examples

Provide examples of correct JSON formatting:

const fewShotPrompt = `
Extract data and return as JSON.

Example 1:
Input: "John is 30 years old"
Output: {"name": "John", "age": 30}

Example 2:
Input: "Sarah scored 95 on the test"
Output: {"name": "Sarah", "score": 95}

Now process:
Input: "${userInput}"
Output:
`;

#Technique 4: Temperature and Token Settings

Configure API parameters to improve consistency:

const apiConfig = {
	temperature: 0.2, // Lower = more consistent formatting
	max_tokens: 2000, // Ensure enough room to complete JSON
	top_p: 0.9, // Reduce randomness
	stop: ['\n\n'], // Optional: stop sequences can truncate pretty-printed output, so use with care
};

#Technique 5: Structured Output APIs

Some providers offer structured output guarantees:

// OpenAI with response_format (GPT-4 Turbo+)
const response = await openai.chat.completions.create({
  model: 'gpt-4-turbo-preview',
  messages: [...],
  response_format: { type: 'json_object' }, // Constrains output to valid JSON (truncation can still break it)
});

// Anthropic Claude with specific instructions
const response = await anthropic.messages.create({
  model: 'claude-3-opus-20240229',
  max_tokens: 1024, // required by the Messages API
  messages: [...],
  system: 'You must respond with valid JSON only.',
});
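Anthropic also supports prefilling the assistant turn, which nudges Claude into continuing a bare JSON object instead of adding preamble or markdown fences. A sketch of that documented technique (verify parameter details against the current Anthropic docs):

// Prefill the assistant message so the model continues from the opening brace
const prefilled = await anthropic.messages.create({
  model: 'claude-3-opus-20240229',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Extract the data and return it as JSON.' },
    { role: 'assistant', content: '{' }, // response text continues from here
  ],
});

// Remember to prepend the '{' to the returned text before repair/parse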

#Complete Production Pipeline

Here's a production-ready implementation combining all techniques:

import { jsonrepair } from 'jsonrepair';
import { z } from 'zod';

// Define schema
const LLMResponseSchema = z.object({
	answers: z.array(
		z.object({
			question: z.string(),
			answer: z.string(),
			confidence: z.number().min(0).max(1),
			sources: z.array(z.string()).optional(),
		}),
	),
	metadata: z
		.object({
			model: z.string(),
			timestamp: z.number(),
		})
		.optional(),
});

type LLMResponse = z.infer<typeof LLMResponseSchema>;

// Extract JSON from mixed content
function extractJSON(text: string): string | null {
	// Try code block first
	const codeBlockMatch = text.match(/```json\s*\n([\s\S]*?)\n```/);
	if (codeBlockMatch?.[1]) {
		return codeBlockMatch[1];
	}

	// Try plain JSON
	const jsonMatch = text.match(/\{[\s\S]*\}/);
	if (jsonMatch?.[0]) {
		return jsonMatch[0];
	}

	return null;
}

// Parse and validate with comprehensive error handling
function parseLLMResponse(rawText: string): { success: true; data: LLMResponse } | { success: false; error: string } {
	try {
		// Step 1: Extract JSON
		const extracted = extractJSON(rawText);
		if (!extracted) {
			return {
				success: false,
				error: 'No JSON found in response',
			};
		}

		// Step 2: Repair malformed JSON
		let repaired: string;
		try {
			repaired = jsonrepair(extracted);
		} catch (repairError) {
			return {
				success: false,
				error: `JSON repair failed: ${repairError}`,
			};
		}

		// Step 3: Parse to object
		let parsed: unknown;
		try {
			parsed = JSON.parse(repaired);
		} catch (parseError) {
			return {
				success: false,
				error: `JSON parsing failed: ${parseError}`,
			};
		}

		// Step 4: Validate with Zod
		const result = LLMResponseSchema.safeParse(parsed);

		if (!result.success) {
			const issues = result.error.issues.map((issue) => `${issue.path.join('.')}: ${issue.message}`).join(', ');

			return {
				success: false,
				error: `Validation failed: ${issues}`,
			};
		}

		return {
			success: true,
			data: result.data,
		};
	} catch (error) {
		return {
			success: false,
			error: error instanceof Error ? error.message : 'Unknown error',
		};
	}
}

// Main LLM query function with retry
async function queryLLMWithRetry(
	prompt: string,
	options: {
		maxAttempts?: number;
		onRetry?: (attempt: number, error: string) => void;
	} = {},
): Promise<LLMResponse> {
	const { maxAttempts = 3, onRetry } = options;
	let currentPrompt = prompt;

	for (let attempt = 1; attempt <= maxAttempts; attempt++) {
		try {
			// Call your LLM API
			const rawResponse = await callYourLLMAPI(currentPrompt);

			// Parse and validate
			const result = parseLLMResponse(rawResponse);

			if (result.success) {
				return result.data;
			}

			// Parsing failed
			if (attempt < maxAttempts) {
				onRetry?.(attempt, result.error);

				// Enhance prompt for next attempt
				currentPrompt = `${prompt}

IMPORTANT: Your previous response had errors (${result.error}).
Please return ONLY valid JSON with all required fields. Ensure proper formatting.`;
			} else {
				throw new Error(`Failed after ${maxAttempts} attempts: ${result.error}`);
			}
		} catch (error) {
			if (attempt === maxAttempts) {
				throw error;
			}

			onRetry?.(attempt, error instanceof Error ? error.message : 'Unknown error');
		}
	}

	throw new Error('Should never reach here');
}

// Example usage
async function processUserQuery(userQuery: string): Promise<LLMResponse> {
	const prompt = `
Answer the following questions and return as JSON:

${userQuery}

Format: {"answers": [{"question": "...", "answer": "...", "confidence": 0.95}]}
`;

	return await queryLLMWithRetry(prompt, {
		maxAttempts: 3,
		onRetry: (attempt, error) => {
			console.warn(`Retry attempt ${attempt} due to: ${error}`);
		},
	});
}

// Placeholder for actual LLM API call
async function callYourLLMAPI(prompt: string): Promise<string> {
	// Replace with your actual LLM API call
	// Examples: OpenAI, Anthropic, Google Gemini, etc.
	throw new Error('Not implemented - use your LLM API here');
}

#Monitoring and Metrics

Track JSON repair success rates to identify patterns and optimize prompts:

interface RepairMetrics {
	totalAttempts: number;
	repairSuccesses: number;
	repairFailures: number;
	validationSuccesses: number;
	validationFailures: number;
	averageRetries: number;
}

class LLMResponseMonitor {
	private metrics: RepairMetrics = {
		totalAttempts: 0,
		repairSuccesses: 0,
		repairFailures: 0,
		validationSuccesses: 0,
		validationFailures: 0,
		averageRetries: 0,
	};

	recordAttempt(result: { repaired: boolean; validated: boolean; retries: number }) {
		this.metrics.totalAttempts++;

		if (result.repaired) {
			this.metrics.repairSuccesses++;
		} else {
			this.metrics.repairFailures++;
		}

		if (result.validated) {
			this.metrics.validationSuccesses++;
		} else {
			this.metrics.validationFailures++;
		}

		// Update rolling average
		const n = this.metrics.totalAttempts;
		this.metrics.averageRetries = (this.metrics.averageRetries * (n - 1) + result.retries) / n;
	}

	getMetrics(): RepairMetrics {
		return { ...this.metrics };
	}

	getSuccessRate(): number {
		if (this.metrics.totalAttempts === 0) return 0;
		return this.metrics.validationSuccesses / this.metrics.totalAttempts;
	}
}

// Global monitor instance
const monitor = new LLMResponseMonitor();

// Use in your pipeline
async function monitoredQuery(prompt: string): Promise<LLMResponse> {
	let retries = 0;
	let repaired = false;
	let validated = false;

	try {
		const response = await queryLLMWithRetry(prompt, {
			onRetry: () => retries++,
		});

		repaired = true;
		validated = true;

		return response;
	} finally {
		monitor.recordAttempt({ repaired, validated, retries });
	}
}

// Periodically log metrics
setInterval(() => {
	const metrics = monitor.getMetrics();
	console.log('LLM Response Metrics:', {
		successRate: `${(monitor.getSuccessRate() * 100).toFixed(2)}%`,
		totalProcessed: metrics.totalAttempts,
		averageRetries: metrics.averageRetries.toFixed(2),
	});
}, 60000); // Every minute

#Key Takeaways

  • LLMs produce malformed JSON 2-5% of the time in production, requiring robust error handling
  • Use jsonrepair to automatically fix common issues like missing quotes, trailing commas, and truncation
  • Always validate repaired JSON with Zod to ensure it matches your expected structure
  • Extract JSON from mixed responses before attempting repair (handle code blocks and surrounding text)
  • Implement retry logic with modified prompts when initial parsing fails
  • Improve prompts to reduce errors through explicit format instructions, schemas, and examples
  • Monitor success rates to identify patterns and optimize your pipeline
  • Have fallback strategies for complete failures (defaults, partial success handling)

The combination of extraction → repair → validation → retry gives you a production-ready pipeline that handles the messy reality of LLM responses. With these techniques, you can build reliable AI applications that gracefully handle malformed output without crashing.

Want to dive deeper into working with LLM APIs? Check out our guide on optimizing AI response times or learn about building resilient API integrations.
