February 24, 2026 · 12 min read
Before scaling your Next.js application, you need reliable logging. Learn why unified logging libraries fail in serverless environments and how to build a production-ready logging system with BetterStack that handles both server and client contexts correctly.
You've built a Next.js application. Your components render correctly, your API routes work, and your data fetching is optimized. But when something goes wrong in production, can you actually see what happened?
More specifically: should your server-side logs and client-side logs use the same logging library? Does it matter if logs are sent immediately or batched?
The short answer: yes, it matters significantly. The execution model of serverless functions is fundamentally different from long-lived browser sessions. A logging strategy that works perfectly in the browser can silently drop every server-side log in production.
This article explores logging strategies for Next.js applications using BetterStack (Logtail), focusing on why server and client need different approaches and how to build a reliable logging system. Your rendering patterns don't change based on how you log, but your ability to debug production issues absolutely does.
Before diving into architecture, let's establish why BetterStack (formerly Logtail) is the recommended logging service for Next.js applications:
BetterStack scores well on developer experience, Next.js-specific benefits, and production readiness. Most important for Next.js: it ships dedicated packages for the browser (@logtail/browser) and for Node.js (@logtail/node).
The key insight: BetterStack provides different packages for different environments (@logtail/browser vs @logtail/node), which aligns perfectly with Next.js's split execution model. Other logging services try to use one package everywhere, which leads to the problems we'll explore.
Next.js applications run in two completely different environments, and this difference is critical for understanding logging:
Client-side (Browser): a long-lived session. The page stays open for minutes or hours, so there is plenty of time to batch logs and send them in the background.

Server-side (Serverless Functions): a short-lived process. Each request spins up a function that does its work and terminates within milliseconds of returning a response.
For a traditional server application, this isn't a problem. Everything runs in a long-lived process with plenty of time to flush logs. But in a serverless Next.js deployment, every Server Component, API Route, and Server Action runs in a function that terminates in milliseconds.
The guiding principle for logging is simple: match your logging strategy to the execution model of each environment. Client-side code can afford to batch logs for efficiency. Server-side code cannot afford to lose logs when functions terminate.
Think of it like sending mail from two different locations. In a city with daily postal pickup, you can collect letters in a mailbox and send them once a day. But if you're at a rest stop on a highway, you need to hand your letter directly to someone before you leave, or it won't get sent.
When developers first add logging to Next.js applications, they naturally reach for a single logging library that claims to work "everywhere." This seems elegant: one import, one configuration, consistent API across your entire codebase.
Here's why this fails in practice:
Most logging libraries batch logs for efficiency. They collect multiple log entries in memory, then send them to the logging service in a single HTTP request. This makes perfect sense for browsers:
```typescript
// Browser batching (works great)
logger.info('User clicked button');    // Queued
logger.info('Form submitted');         // Queued
logger.info('Success message shown');  // Queued
// ... 5 seconds pass
// All three logs sent in one HTTP request ✓
```

But in a serverless function:
```tsx
// Server Component (logs lost)
export default function Page() {
  logger.info('Page rendering started'); // Queued
  logger.info('Data fetched');           // Queued
  return <div>Hello</div>;               // Component returns
}
// Function terminates → Batch never sent! ✗
```

The function terminates before the batch interval completes. Your logs sit in memory, unsent, and are discarded when the function shuts down.
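To make the failure mode concrete, here is a toy simulation of my own (not the Logtail internals): a batch queue that only delivers when something flushes it, and a simulated serverless invocation that ends before any flush happens.

```typescript
// Toy batcher: logs accumulate in a queue and are only delivered
// when a flush happens (in a browser, a timer does this eventually).
const queue: string[] = [];
const sent: string[][] = [];

function log(msg: string) {
  queue.push(msg);
}

function flushBatch() {
  if (queue.length) sent.push(queue.splice(0)); // deliver and clear the queue
}

// Simulated browser session: the batch timer fires before the tab closes
log('User clicked button');
log('Form submitted');
flushBatch(); // delivered as one batch

// Simulated serverless invocation: the handler returns before any flush
log('Page rendering started');
log('Data fetched');
// ...invocation terminates here; nothing flushed

console.log(sent.length);  // 1 batch delivered (the browser one)
console.log(queue.length); // 2 logs stranded in memory, never sent
```

The stranded queue is exactly what happens inside a batching logger when the function terminates first.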
Some developers discover they can manually flush logs:
```tsx
export default async function Page() {
  logger.info('Page rendering');
  await logger.flush(); // Force send
  return <div>Hello</div>;
}
```

This works, but it has serious limitations:
Every code path must remember to call await logger.flush(); forget it once and you lose logs. You end up with a fragile system where logs are dropped unless every developer remembers the magic incantation.
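One way to reduce the reliance on memory is to centralize the flush in a wrapper. This is a sketch of my own (withFlush is not part of any library), using a stub logger in place of the real server logger from this article:

```typescript
// Sketch: guarantee the flush by putting it in a finally block.
// `logServer` here is a minimal stub standing in for the real logger.
const flushCalls: string[] = [];

const logServer = {
  flush: async () => {
    flushCalls.push('flush'); // the real implementation sends queued logs
  },
};

function withFlush<T>(handler: () => Promise<T>): () => Promise<T> {
  return async () => {
    try {
      return await handler();
    } finally {
      await logServer.flush(); // runs on success, early return, or throw
    }
  };
}

// Wrapped handlers flush even when they throw:
const renderOk = withFlush(async () => 'rendered');
const renderFail = withFlush(async () => {
  throw new Error('render failed');
});
```

Because the flush sits in a finally block, a handler wrapped this way cannot forget it, whether it returns normally or throws.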
The server-side logger needs to handle the unique constraints of serverless functions. Here's the approach:
Don't initialize the logger at module load time. In Next.js 15+ with Turbopack, environment variables might not be ready when modules load, causing crashes.
```typescript
import { Logtail } from '@logtail/node';

// ❌ BAD: Crashes in Next.js 15+ (token may be undefined at module load)
const logger = new Logtail(process.env.LOGTAIL_SERVER_SOURCE_TOKEN);

// ✅ GOOD: Initialize on first use
let _instance: Logtail | null = null;

function getLogger() {
  if (_instance) return _instance;
  const token = process.env.LOGTAIL_SERVER_SOURCE_TOKEN;
  if (!token) {
    console.warn('Server logging disabled - no token');
    // No-op logger (including flush) so call sites never have to null-check
    return { info: () => {}, warn: () => {}, error: () => {}, flush: async () => {} };
  }
  _instance = new Logtail(token);
  return _instance;
}
```

Different log levels have different delivery requirements:
Info/Warn (Fire-and-Forget): queued in memory and delivered when the code path flushes. A lost info log is an acceptable trade-off for keeping the request path fast.

Error (Immediate Delivery): flushed the moment they are logged, so the log reaches BetterStack even if the function terminates or crashes right after.
```typescript
export const logServer = {
  info(message: string, meta?: Record<string, any>) {
    console.log(`[INFO] ${message}`, meta);
    getLogger().info(message, meta);
    // No flush - developer calls logServer.flush() manually
  },

  async error(message: string, error?: unknown, meta?: Record<string, any>) {
    console.error(`[ERROR] ${message}`, error, meta);
    const logger = getLogger();
    logger.error(message, { ...meta, error });
    await logger.flush(); // ✅ Immediate delivery
  },

  flush: async () => await getLogger().flush(),
};
```

Catch unhandled errors that would otherwise crash the function silently:
```typescript
// Singleton pattern to prevent duplicate handlers
if (typeof window === 'undefined' && !(global as any)._loggingInitialized) {
  process.on('unhandledRejection', async (reason) => {
    const logger = getLogger();
    logger.error('Unhandled Promise Rejection', { reason: String(reason) });
    await logger.flush();
  });

  process.on('uncaughtException', async (err) => {
    const logger = getLogger();
    logger.error('Uncaught Exception', {
      message: err.message,
      stack: err.stack,
    });
    await logger.flush();
  });

  (global as any)._loggingInitialized = true;
}
```

Immediate error flushing adds 50-150ms to error responses, but this is acceptable: errors are rare, the request has already failed by the time you log one, and guaranteed delivery of the error context is worth far more than the extra latency.

Info/warn batching, by contrast, has essentially zero performance cost: each call just appends to an in-memory queue, and a whole request's worth of logs goes out in one flush instead of one HTTP round-trip per log.
The client-side logger is simpler because browsers are long-lived:
```typescript
import { Logtail } from '@logtail/browser';

const browserLogger = new Logtail(
  process.env.NEXT_PUBLIC_LOGTAIL_SOURCE_TOKEN || '',
  {
    batchSize: 25,       // Send after 25 logs
    batchInterval: 5000, // OR send every 5 seconds
  }
);

export const logClient = {
  info(message: string, meta?: Record<string, any>) {
    console.log(`[INFO] ${message}`, meta);
    browserLogger.info(message, meta);
    // Automatic batching - no flush needed
  },

  warn(message: string, meta?: Record<string, any>) {
    console.warn(`[WARN] ${message}`, meta);
    browserLogger.warn(message, meta);
  },

  error(message: string, error?: unknown, meta?: Record<string, any>) {
    console.error(`[ERROR] ${message}`, error, meta);
    browserLogger.error(message, { ...meta, error });
  },
};
```

Key differences from the server logger: no flush calls anywhere, no async error method, and no lazy initialization guard, because the NEXT_PUBLIC_ token is inlined at build time and a browser session never terminates mid-batch.
Batching dramatically reduces HTTP overhead: with the settings above, up to 25 logs travel in a single request instead of 25 separate ones.
It also helps with network resilience: logs queued in memory aren't tied to any single request's success, so a slow or flaky connection delays delivery rather than silently dropping entries one by one.
Both loggers should sanitize data the same way. Extract this to a shared module:
```typescript
// lib/logging/sanitize.ts
function sanitizeValue(value: any): any {
  if (value == null) return null;
  if (value instanceof Date) return value.toISOString();
  if (value instanceof Error) {
    return {
      name: value.name,
      message: value.message,
      stack: value.stack,
    };
  }
  if (['string', 'number', 'boolean'].includes(typeof value)) {
    return value;
  }
  try {
    return JSON.parse(JSON.stringify(value));
  } catch {
    // Circular structures and other non-serializable values
    return String(value);
  }
}

export function sanitizeForLogging(obj: any): Record<string, any> {
  if (obj == null) return {};
  return Object.fromEntries(
    Object.entries(obj)
      .filter(([_, v]) => v !== undefined)
      .map(([k, v]) => {
        if (Array.isArray(v)) return [k, v.map(sanitizeValue)];
        // Recurse only into plain objects; null, Dates, and Errors go
        // through sanitizeValue so they aren't flattened to {}
        if (v && typeof v === 'object' && !(v instanceof Date) && !(v instanceof Error)) {
          return [k, sanitizeForLogging(v)];
        }
        return [k, sanitizeValue(v)];
      })
  );
}

export function buildLogContext(meta?: Record<string, any>) {
  return sanitizeForLogging({
    environment: typeof window === 'undefined' ? 'server' : 'client',
    deployment: process.env.NODE_ENV,
    timestamp: new Date().toISOString(),
    ...meta,
  });
}
```

This ensures consistent serialization in both environments: Dates become ISO strings, Errors become plain objects with name/message/stack, undefined values are stripped, and circular structures degrade to strings instead of throwing.
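A quick self-contained demo of the edge cases this handles, re-declaring sanitizeValue so the snippet runs on its own:

```typescript
// Re-declared from the shared module above so this snippet is standalone
function sanitizeValue(value: any): any {
  if (value == null) return null;
  if (value instanceof Date) return value.toISOString();
  if (value instanceof Error) {
    return { name: value.name, message: value.message, stack: value.stack };
  }
  if (['string', 'number', 'boolean'].includes(typeof value)) return value;
  try {
    return JSON.parse(JSON.stringify(value));
  } catch {
    return String(value); // circular structures end up here
  }
}

// Dates serialize deterministically instead of relying on locale output
console.log(sanitizeValue(new Date('2026-02-24T00:00:00Z'))); // "2026-02-24T00:00:00.000Z"

// Errors become plain, JSON-safe objects
console.log(sanitizeValue(new Error('boom')).message); // "boom"

// A circular reference would make JSON.stringify throw; we degrade to a string
const circular: any = { name: 'request' };
circular.self = circular;
console.log(sanitizeValue(circular)); // "[object Object]"
```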
With the split logger architecture in place, the key to reliable logging is understanding when to flush. Server-side code requires manual flushing before functions terminate, while client-side code handles batching automatically. Let's look at the patterns that work and the mistakes that cause silent log loss.
```tsx
import { logServer } from '@/lib/logging/logger.server';

// ✅ CORRECT: Flush before returning
export default async function BlogPost({ params }: { params: { slug: string } }) {
  logServer.info('Blog post requested', { slug: params.slug });
  const post = await fetchPost(params.slug);
  await logServer.flush(); // Critical: ensures logs are sent
  return <article>{post.content}</article>;
}

// ❌ WRONG: No flush - logs probably lost
export default function BlogPost({ params }: { params: { slug: string } }) {
  logServer.info('Page rendering');
  return <div>Hello</div>;
} // Function terminates → logs lost
```

Pitfall: Forgetting to flush is the #1 cause of missing server logs. Add a code review checklist or lint rule.
```typescript
import { NextResponse } from 'next/server';
import { logServer } from '@/lib/logging/logger.server';

export async function POST(request: Request) {
  try {
    logServer.info('API request received');
    // ... process request
    await logServer.flush(); // ✅ Flush non-error logs
    return NextResponse.json({ success: true });
  } catch (error) {
    await logServer.error('API error', error); // ✅ Auto-flushes
    return NextResponse.json({ error: true }, { status: 500 });
  }
}
```

Pitfall: If you don't flush errors immediately, critical logs might be lost when functions crash:
```typescript
// ❌ WRONG: Error might be lost
export const logServer = {
  error(message: string, error?: unknown) {
    getLogger().error(message, { error });
    // No flush! Error lost if function crashes
  },
};

// ✅ CORRECT: Guaranteed delivery
export const logServer = {
  async error(message: string, error?: unknown) {
    getLogger().error(message, { error });
    await getLogger().flush(); // Immediate send
  },
};
```

In Client Components, logging is fire-and-forget:

```tsx
'use client';

import type { FormEvent } from 'react';
import { logClient } from '@/lib/logging/logger.client';

export default function ContactForm() {
  const handleSubmit = async (e: FormEvent) => {
    e.preventDefault();
    logClient.info('Form submission started');
    try {
      await submitForm();
      logClient.info('Form submitted successfully');
    } catch (error) {
      logClient.error('Form submission failed', error);
    }
    // No flush needed - automatic batching
  };

  return <form onSubmit={handleSubmit}>...</form>;
}
```

Pitfall: Don't accidentally import the client logger in server code:
```tsx
// ❌ WRONG: Will crash on server
import { logClient } from '@/lib/logging/logger.client';

export default function ServerComponent() {
  logClient.info('This crashes'); // ReferenceError
}
```

Solution: Use file naming conventions (*.server.ts vs *.client.ts) to make it obvious which logger to import.
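Naming conventions can also be enforced mechanically. This is a hypothetical ESLint flat-config fragment (the file globs and the alias path are assumptions about your project layout), using the built-in no-restricted-imports rule:

```javascript
// eslint.config.js (flat config) - hypothetical project layout
export default [
  {
    // Server-side code: Server Components, route handlers, server actions
    files: ['app/**/*.{ts,tsx}'],
    rules: {
      'no-restricted-imports': ['error', {
        paths: [{
          name: '@/lib/logging/logger.client',
          message: 'Server code must use @/lib/logging/logger.server.',
        }],
      }],
    },
  },
  {
    // Files following the *.client.* convention may use the client logger
    files: ['**/*.client.{ts,tsx}'],
    rules: { 'no-restricted-imports': 'off' },
  },
];
```

Adjust the globs to match where your Client Components actually live; the point is that the mistake fails at lint time instead of at runtime.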
Keep tokens separate for better security:
```shell
# .env.local
NEXT_PUBLIC_LOGTAIL_SOURCE_TOKEN=your_client_token_here  # Public
LOGTAIL_SERVER_SOURCE_TOKEN=your_server_token_here       # Private
```

This allows you to rotate server tokens independently, set different rate limits, and track server/client logs separately.
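To catch misconfiguration early rather than discovering it through missing logs, you can add a small startup check. This helper is a sketch of my own (checkLoggingEnv is not part of BetterStack):

```typescript
// Hypothetical startup check: report missing logging tokens instead of
// letting logging silently disable itself at runtime.
function checkLoggingEnv(
  env: Record<string, string | undefined> = process.env
): string[] {
  const required = [
    'NEXT_PUBLIC_LOGTAIL_SOURCE_TOKEN', // client token (public)
    'LOGTAIL_SERVER_SOURCE_TOKEN',      // server token (private)
  ];
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    console.warn(`Logging disabled - missing env vars: ${missing.join(', ')}`);
  }
  return missing;
}

// Example: with an empty environment, both tokens are reported missing
console.log(checkLoggingEnv({}).length); // 2
```

Calling this once at startup turns a silent "no logs arrived" mystery into an immediate warning during development.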
Key Patterns:

- Server: lazy initialization, manual flush() before the function returns, immediate flush inside error()
- Client: automatic batching, no flush calls needed
- Both: shared sanitization and log context from a single module
Logging is infrastructure that should just work. Get your logging architecture right from the start: split server and client loggers, flush appropriately for each environment, and you'll have reliable visibility into production issues.
The patterns we've covered work because they respect the fundamental differences between browser and serverless execution: long-lived sessions can batch, short-lived functions must flush.
Start with the recommended setup:

- @logtail/browser for client-side with automatic batching
- @logtail/node for server-side with manual flush control

Verify your logs actually reach your logging service (test both client and server), then trust that when production issues arise, you'll have the data to debug them.
Your logging strategy should support your development workflow, not complicate it. Split loggers, each optimized for its environment, are the default best practice for Next.js applications and the foundation for reliable production debugging.
Questions about scaling your headless architecture? Reach out via the contact form or connect on LinkedIn!