March 3, 2026 · 9 min read
You've added BetterStack logging to your Next.js frontend. Now extend the same visibility to your Wagtail backend without blocking your request thread or losing logs on deploy.
In a previous article, we covered why Next.js server and client logging need fundamentally different approaches. The core insight was simple: match your logging strategy to your execution model.
Next.js serverless functions terminate in milliseconds, so logs must be flushed immediately or they're lost. The browser is long-lived, so batching works safely.
What about a Django/Wagtail backend? The execution model is different again: a long-lived process running behind Gunicorn, not ephemeral functions. This changes the strategy considerably, and getting it wrong doesn't cause silent log loss like in serverless. Instead, it causes something subtler: your logging blocks your request thread, adding latency to every response while logs are delivered over HTTP.
This article covers a production-ready logging setup for Wagtail using BetterStack (Logtail), deployed on Render.
In Next.js serverless, the problem was logs disappearing because functions terminated too fast. In Django, the problem is the opposite: your process lives long enough that you might be tempted to send logs synchronously, inline with the request.
Here's what synchronous logging looks like in practice:
```python
# ❌ Synchronous logging — blocks request thread
def listing_view(self, request):
    handler = LogtailHandler(source_token=token)
    handler.emit(record)  # HTTP request to BetterStack — 50-150ms
    return super().listing_view(request)
```

Every log call waits for BetterStack's HTTP endpoint to respond before your view continues. Under load, this compounds quickly.
The solution is a background queue: your request thread writes to an in-memory queue in microseconds, and a separate daemon thread drains the queue and handles HTTP delivery asynchronously.
```
Your view               Queue (in-memory)       Daemon thread
──────────────          ─────────────────       ─────────────────
logger.info(...)   ──►  [record]
                        [record]           ──►  QueueListener
logger.error(...)  ──►  [record]                     │
                        [record]                     ▼
(response returns)                             BetterStack API
```

Your request returns immediately. BetterStack delivery happens in the background.
Think of it like a restaurant kitchen. When a waiter takes your order, they don't stand at the table waiting for the chef to confirm the dish is understood before moving on. They drop the ticket on the pass and immediately go serve the next table. The chef processes tickets at their own pace. In other words, the waiter's job and the chef's job are decoupled. Your request thread is the waiter. The QueueListener daemon is the chef. BetterStack is the kitchen display system they're both working toward.
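The handoff can be shown with nothing but the standard library. The `SlowHandler` below is a stand-in of my own for the Logtail handler, purely so the sketch runs offline; in production it would be making the HTTP call:

```python
import logging
import logging.handlers
import queue


class SlowHandler(logging.Handler):
    """Stand-in for a slow network handler (e.g. BetterStack delivery)."""

    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        # In production this would be an HTTP call taking 50-150ms.
        self.records.append(record.getMessage())


log_queue = queue.Queue(maxsize=10_000)
slow = SlowHandler()
listener = logging.handlers.QueueListener(log_queue, slow)
listener.start()  # the "chef": a daemon thread draining the queue

logger = logging.getLogger("kitchen")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(logging.handlers.QueueHandler(log_queue))  # the "waiter"

logger.info("order received")  # returns in microseconds: just a queue.put()
listener.stop()                # drains remaining records before returning

print(slow.records)  # ['order received']
```

`listener.stop()` here is the same call the production config registers with `atexit`: it guarantees the queue is drained before the process exits.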
Create a new source in the BetterStack dashboard:
```
Source type: HTTP Source
```

That's it: no platform selection is needed, and the free plan gives you visibility over the past few days of logs. The server auto-detects the Python log record format from the JSON payload that logtail-python sends. Copy the source token and add it as a secret environment variable in your Render dashboard:
```
LOGTAIL_SERVER_SOURCE_TOKEN  your_token_here
```

Install the Python package:
```shell
pip install logtail-python
```

Then build the logging configuration. The key components are a LogtailHandler (which delivers to BetterStack), a Queue (the in-memory buffer), and a QueueListener (the daemon thread that bridges them):
```python
# mysite/settings/logging.py
import atexit
import queue
import logging
import logging.handlers

import logtail


def create_logging_config(source_token: str) -> dict:
    betterstack_handler = logtail.LogtailHandler(source_token=source_token)
    betterstack_handler.setFormatter(logging.Formatter())

    log_queue = queue.Queue(maxsize=10_000)  # Cap memory usage

    listener = logging.handlers.QueueListener(
        log_queue,
        betterstack_handler,
        respect_handler_level=True,
    )
    listener.start()

    # Flush remaining logs on clean shutdown (SIGTERM from Render)
    atexit.register(listener.stop)

    return {
        "version": 1,
        "disable_existing_loggers": False,
        "formatters": {
            "standard": {
                "format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s",
            },
        },
        "handlers": {
            "console": {
                "class": "logging.StreamHandler",
                "formatter": "standard",
            },
            "betterstack_queue": {
                "class": "logging.handlers.QueueHandler",
                "queue": log_queue,
            },
        },
        "root": {
            "handlers": ["console", "betterstack_queue"],
            "level": "INFO",
        },
        "loggers": {
            "django": {
                "handlers": ["console", "betterstack_queue"],
                "level": "WARNING",
                "propagate": False,
            },
            "django.request": {
                "handlers": ["console", "betterstack_queue"],
                "level": "ERROR",
                "propagate": False,
            },
            "urllib3": {"level": "WARNING"},
            "PIL": {"level": "WARNING"},
        },
    }
```

The maxsize=10_000 cap is important. Without it, a BetterStack outage could cause unbounded memory growth as logs pile up. When the queue is full, new records are dropped rather than blocking your app, a deliberate trade-off that protects availability.
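The drop-rather-than-block behavior is easy to see in isolation: QueueHandler enqueues with put_nowait, so once a bounded queue is full the record is discarded (routed through handleError) instead of stalling the caller. A sketch, with a listener deliberately never started so nothing drains the queue:

```python
import logging
import logging.handlers
import queue

bounded = queue.Queue(maxsize=2)
handler = logging.handlers.QueueHandler(bounded)
handler.handleError = lambda record: None  # silence the queue.Full report

logger = logging.getLogger("drop-demo")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(handler)

for i in range(5):
    logger.info("event %d", i)  # never blocks, even with no consumer

print(bounded.qsize())  # 2: the three overflow records were dropped
```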
The atexit.register(listener.stop) call is the Django equivalent of the Next.js flush-before-terminate pattern. When Render sends SIGTERM on deploy, Gunicorn's graceful shutdown triggers atexit, which drains any remaining queue entries before the worker exits. No logs lost on rolling deploys.
Wire it into your settings:
```python
# mysite/settings/base.py
from mysite.settings.logging import create_logging_config

LOGTAIL_TOKEN = env("LOGTAIL_SERVER_SOURCE_TOKEN", default=None)

if LOGTAIL_TOKEN:
    LOGGING = create_logging_config(source_token=LOGTAIL_TOKEN)
else:
    # Console-only fallback for local development
    LOGGING = {
        "version": 1,
        "disable_existing_loggers": False,
        "formatters": {
            "standard": {"format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s"},
        },
        "handlers": {
            "console": {"class": "logging.StreamHandler", "formatter": "standard"},
        },
        "root": {"handlers": ["console"], "level": "INFO"},
    }
```

The if LOGTAIL_TOKEN guard means local development works with no BetterStack dependency when you're debugging on your local machine. Logs go to the console, and everything else behaves identically.
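One detail worth verifying: dictConfig happily accepts a live Queue object for the QueueHandler's "queue" key, since non-string values are passed straight to the handler constructor. The sketch below swaps the Logtail handler for an in-memory capture handler of my own, purely so it runs offline:

```python
import logging
import logging.config
import logging.handlers
import queue

captured = []


class CaptureHandler(logging.Handler):
    def emit(self, record):
        captured.append(record.getMessage())


log_queue = queue.Queue(maxsize=10_000)
listener = logging.handlers.QueueListener(log_queue, CaptureHandler())
listener.start()

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        # A live Queue instance works here; dictConfig does not
        # require the value to be a string reference.
        "capture_queue": {
            "class": "logging.handlers.QueueHandler",
            "queue": log_queue,
        },
    },
    "root": {"handlers": ["capture_queue"], "level": "INFO"},
})

logging.getLogger("mysite.api").info("hello from dictConfig")
listener.stop()  # drains the queue, exactly as atexit does on deploy
print(captured)  # ['hello from dictConfig']
```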
The difference between synchronous and queue-based logging isn't theoretical. Here's what each approach costs per request:
```
Synchronous logging (no queue)
──────────────────────────────
logger.info(...)   → DNS lookup + TCP + TLS + HTTP = 50–150ms added latency
logger.error(...)  → same = 50–150ms added latency
Multiple log calls → costs stack up per call

Queue-based logging
───────────────────
logger.info(...)   → in-memory queue.put() = ~1–5μs (microseconds)
logger.error(...)  → same = ~1–5μs
Multiple log calls → flat cost regardless of count
```

Under normal load this might seem acceptable: one extra log call adding 100ms isn't catastrophic. But under sustained traffic, every concurrent request pays that cost simultaneously, and database or network slowdowns on BetterStack's side multiply the impact across your entire response time.
The queue eliminates this entirely. Log as much as you want with no measurable effect on response latency.
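A quick micro-benchmark makes the asymmetry concrete. The 50ms sleep stands in for BetterStack's HTTP round-trip; the figure is illustrative, not measured against the real endpoint:

```python
import logging
import logging.handlers
import queue
import time


class FakeNetworkHandler(logging.Handler):
    def emit(self, record):
        time.sleep(0.05)  # pretend HTTP delivery takes 50ms


def timed(logger, n=10):
    start = time.perf_counter()
    for i in range(n):
        logger.info("request %d", i)
    return time.perf_counter() - start


# Synchronous: the handler runs inline with the caller.
sync_logger = logging.getLogger("sync")
sync_logger.setLevel(logging.INFO)
sync_logger.propagate = False
sync_logger.addHandler(FakeNetworkHandler())

# Queued: the caller only does queue.put(); a daemon thread delivers.
q = queue.Queue()
listener = logging.handlers.QueueListener(q, FakeNetworkHandler())
listener.start()
queued_logger = logging.getLogger("queued")
queued_logger.setLevel(logging.INFO)
queued_logger.propagate = False
queued_logger.addHandler(logging.handlers.QueueHandler(q))

sync_cost = timed(sync_logger)      # roughly 10 x 50ms = ~0.5s
queued_cost = timed(queued_logger)  # typically well under a millisecond
listener.stop()
```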
The usage pattern is always the same: logging.getLogger(__name__). The __name__ convention gives each module its own logger name, which becomes directly queryable in BetterStack.
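As a sanity check, the logger name and every extra={} key both land as attributes on the LogRecord, which is what logtail-python has available to serialize into the JSON payload. The capture handler and module name below are mine, so the sketch runs without BetterStack:

```python
import logging

captured = []


class CaptureHandler(logging.Handler):
    def emit(self, record):
        captured.append(record)


# What logging.getLogger(__name__) resolves to inside blog/views.py:
logger = logging.getLogger("blog.views")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(CaptureHandler())

logger.info("Filtering posts", extra={"slug": "my-post"})

rec = captured[0]
print(rec.name)  # blog.views
print(rec.slug)  # my-post
```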
One thing that catches developers out: Wagtail's WagtailAPIRouter does not use DRF's standard routing methods. Requests go through Wagtail's own methods:
```
/api/v2/pages/?slug=my-post → listing_view()
/api/v2/pages/123/          → detail_view()
```

This means that any logging logic or additional information you want to track on your backend needs to go in the listing_view and detail_view methods. The example below shows the custom version of the API viewset I implemented.
```python
# core/views.py
import logging

from django.db import OperationalError
from wagtail.api.v2.views import PagesAPIViewSet

logger = logging.getLogger(__name__)


class CustomPagesAPIViewSet(PagesAPIViewSet):
    def listing_view(self, request):
        logger.info("Pages API listing from CustomPagesAPIViewSet", extra={
            "slug": request.GET.get("slug"),
            "type": request.GET.get("type"),
            "child_of": request.GET.get("child_of"),
            "ip": request.META.get("REMOTE_ADDR"),
        })
        try:
            return super().listing_view(request)
        except OperationalError as e:
            logger.error("Database error in Pages API listing from CustomPagesAPIViewSet", extra={
                "path": request.path,
                "query_params": dict(request.GET),
                "error": str(e),
            })
            raise

    def detail_view(self, request, pk):
        logger.info("Pages API detail from CustomPagesAPIViewSet", extra={
            "page_id": pk,
            "ip": request.META.get("REMOTE_ADDR"),
        })
        try:
            return super().detail_view(request, pk)
        except OperationalError as e:
            logger.error("Database error in Pages API detail from CustomPagesAPIViewSet", extra={
                "page_id": pk,
                "error": str(e),
            })
            raise
```

In Wagtail page models, clean() runs on editor saves through the Wagtail admin, meaning whenever you modify existing content or add new pages in your CMS. It's the right place to log validation failures triggered by content editors, such as an invalid hero image upload.
```python
# blog/models.py
import logging

from wagtail.models import Page

logger = logging.getLogger(__name__)


class BlogPage(Page):
    def clean(self):
        super().clean()
        if self.hero_image:
            try:
                validate_hero_image(self.hero_image)  # project-specific validator
            except Exception as e:
                logger.warning("Hero image validation failed", extra={
                    "page_id": self.id,
                    "page_slug": self.slug,
                    "image_id": self.hero_image.id,
                    "error": str(e),
                    "owner": str(self.owner) if self.owner else None,
                })
                raise
```

Note: serve() is never called in a headless setup, since your frontend fetches data through the API rather than visiting page URLs directly. Don't add logging there expecting it to fire; if you need request-level logging, use the custom view approach shown above.
Because __name__ is used as the logger name throughout, every log is automatically tagged with its origin:
```
logger_name: core.views              → Wagtail API v2 calls
logger_name: blog.views              → blog filtering and pagination
logger_name: blog.models             → editor validation in Wagtail admin
logger_name: core.exception_handlers → API error handling
```

The structured extra={} dict maps directly to BetterStack's JSON fields, making every key queryable through its SQL-like interface. A query like logger_name = 'blog.views' AND level = 'warning' gives you all requests where invalid parameters were corrected, including exactly what was requested and what was applied.
Combined with the frontend sources explained in the last post, you have full end-to-end visibility: client errors in the browser, server component logs from Next.js, and now structured logs from every layer of your Wagtail backend. Different sources keep the domains separate, but everything is searchable in one place when something happens.
- A QueueListener daemon thread decouples log emission from HTTP delivery with zero request overhead
- atexit handles clean queue draining on graceful shutdowns, so no logs are lost on deploys
- Wagtail's API requests route through listing_view and detail_view, allowing you to log additional context if needed
- clean() fires on Wagtail admin saves only, in case you want to track content validation
- __name__ as the logger name gives you automatic filtering by app and module in BetterStack
- The if LOGTAIL_TOKEN guard keeps local development dependency-free

Logging is infrastructure that should be invisible when things work and invaluable when they don't. The queue-based approach we've covered gives you exactly that: zero cost during normal operation, and full structured visibility when something goes wrong in production.

The patterns here mirror the philosophy from the Next.js article: match your strategy to your execution model. Django's long-lived process means you don't have to fight against function termination, but you do need to protect your request thread. A background queue solves both concerns cleanly, and __name__-based logger names give you a queryable map of your entire backend without any extra configuration.
Start logging, ship with confidence, and let BetterStack do the detective work when you need it.
Questions about logging in your Wagtail application? Reach out via the contact form or connect on LinkedIn!