Learn how to stop escaped JSON in Middleware logs when using Docker. Covers both Fluentd driver and Fluent Bit bridge methods — with configs, troubleshooting, and verification steps.
Two reliable fixes with copy‑paste configs
Tested end‑to‑end with Docker Compose, Middleware Host Agent, and Fluent Bit / fluentd.
Audience: Devs/SREs shipping app logs to Middleware from Docker
Problem: Logs show up as a giant escaped JSON string (full of \" backslashes), so fields don’t parse/index.
Goal: Get clean JSON in the Middleware UI with minimal moving parts.
TL;DR
- Escaping happens because Docker’s default `json-file` driver wraps your JSON line inside another JSON envelope.
- Fix A (simplest): change the app’s logging driver to `fluentd` and point it at the Middleware agent’s Fluent Forward port. No file tailing, no escapes.
- Fix B (bridge): keep `json-file` but run Fluent Bit to tail `*-json.log`, unescape the payload, and forward it to the agent.
Pick A when you can change the app’s logging driver. Pick B if policy/tooling requires `json-file`.
Why logs look escaped
Docker’s default logging driver (`json-file`) stores each stdout line like:
{"log":"{\"message\":\"...\"}", "time":"..."}
When the agent reads that file, your original JSON is inside the log string → hence all the backslashes. Middleware can’t parse fields inside a quoted string.
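The double encoding is easy to reproduce with nothing but the standard library; a short sketch (the sample record is made up):

```python
import json

# What the app prints to stdout: one JSON object per line.
original = {"message": "user logged in", "level": "INFO"}
app_line = json.dumps(original)

# What Docker's json-file driver then writes to *-json.log:
# the whole app line becomes the string value of "log", so every
# inner quote picks up a backslash.
file_line = json.dumps({"log": app_line + "\n", "stream": "stdout",
                        "time": "2024-01-01T00:00:00.000000000Z"})
assert '\\"message\\"' in file_line  # the escapes you see in the UI

# Recovering the original fields takes two decodes, which is exactly
# the work a log pipeline has to do (or avoid, as in Fix A):
assert json.loads(json.loads(file_line)["log"]) == original
```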
Pre‑flight checklist (both fixes)
- A Middleware API key for the Host Agent.
- Docker/Compose installed.
- Confirm the agent’s Fluent Forward port 8006 is available on the host (or choose another host port to map, e.g., 8007).
- If you previously filtered by File Path in the Middleware UI, clear it when using the `fluentd` driver (there’s no `*-json.log` path in that flow).
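The port check can be scripted before you pick a mapping; a minimal stdlib sketch (`host_port_free` is a hypothetical helper, not part of the agent):

```python
import socket

def host_port_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 only when something is listening there
        return s.connect_ex((host, port)) != 0

# e.g. use 8006 if free, otherwise fall back to 8007 for the host-side mapping
host_port = 8006 if host_port_free(8006) else 8007
```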
Fix A — Point Docker directly to the agent (fluentd driver)
Fastest and most robust. The app writes logs straight to the agent via the Fluent Forward protocol.
Compose (port‑mapped agent)
services:
  app:
    image: python:3.12-alpine
    container_name: demo-json-app
    working_dir: /app
    volumes:
      - ./app.py:/app/app.py:ro
    command: ["python", "-u", "/app/app.py"]
    logging:
      driver: fluentd
      options:
        fluentd-address: 127.0.0.1:8006 # if 8006 is busy, map 8007:8006 below and use 127.0.0.1:8007
        tag: "docker.{{.Name}}"
        fluentd-retry-wait: "1s"
        fluentd-max-retries: "3"
  middleware-agent:
    image: ghcr.io/middleware-labs/mw-host-agent:master
    container_name: middleware-agent
    environment:
      - MW_API_KEY=<YOUR_API_KEY>
      - MW_TARGET=https://<your-org>.middleware.io:443
      - MW_HOST_TAGS=env:demo,project:json-test
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    ports:
      - "8006:8006" # change to "8007:8006" if 8006 is occupied
Why `127.0.0.1`? Docker resolves the logging endpoint from the host’s perspective, so localhost plus a mapped port is the most predictable choice.
Verify
- `docker inspect demo-json-app --format '{{json .HostConfig.LogConfig}}'` should show `"Type":"fluentd"` and your `fluentd-address`.
- In Middleware → Logs, entries are clean (no `\"`), and fields are visible under Parsed / Indexed Attributes.
Alternative: run the agent with `network_mode: host` and keep `fluentd-address: 127.0.0.1:8006`. Port mapping is usually safer to avoid clashes.
Fix B — Keep `json-file`, add a tiny Fluent Bit bridge
Use this when you can’t change the app’s logging driver. Fluent Bit tails Docker files, unescapes the payload, and forwards to the agent.
Compose
services:
  app:
    image: python:3.12-alpine
    container_name: demo-json-app
    working_dir: /app
    volumes:
      - ./app.py:/app.py:ro
    command: ["python", "-u", "/app.py"] # default json-file logging stays enabled
  middleware-agent:
    image: ghcr.io/middleware-labs/mw-host-agent:master
    container_name: middleware-agent
    environment:
      - MW_API_KEY=<YOUR_API_KEY>
      - MW_TARGET=https://<your-org>.middleware.io:443
      - MW_HOST_TAGS=env:demo,project:json-test
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    logging:
      driver: "none" # optional: keep the agent’s own stdout out of your logs
  fluent-bit:
    image: cr.fluentbit.io/fluent/fluent-bit:2.2.0
    container_name: fluent-bit
    depends_on: [middleware-agent]
    volumes:
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf:ro
      - ./parsers.conf:/fluent-bit/etc/parsers.conf:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - ./state:/fluent-bit/state
      - /var/run/docker.sock:/var/run/docker.sock
fluent-bit.conf
[SERVICE]
    Parsers_File      /fluent-bit/etc/parsers.conf
    Log_Level         info
    Storage.path      /fluent-bit/state

[INPUT]
    Name              tail
    Path              /var/lib/docker/containers/*/*-json.log
    Tag               docker.*
    Parser            docker_json_unescape
    DB                /fluent-bit/state/docker.db
    Refresh_Interval  5
    Skip_Long_Lines   On
    Read_from_Head    Off
    Path_Key          filename

# Optional: enrich with Docker metadata (enable only if your image includes the plugin)
# [FILTER]
#     Name       docker
#     Match      docker.*
#     Unix_Path  /var/run/docker.sock
#     Labels     On
#     Env        On

# Keep noise out: drop the agent container’s own logs
[FILTER]
    Name     grep
    Match    docker.*
    Exclude  filename /var/lib/docker/containers/.*/middleware-agent-.*-json.log

# Nice-to-have: a stable service name for grouping in the UI
[FILTER]
    Name   modify
    Match  docker.*
    Add    service.name demo-json-app

[OUTPUT]
    Name   forward
    Host   middleware-agent
    Port   8006
    Match  docker.*
parsers.conf
[PARSER]
    Name             docker_json_unescape
    Format           json
    Time_Key         time
    Time_Format      %Y-%m-%dT%H:%M:%S.%LZ
    # If the "log" field contains a JSON string, decode it into objects
    Decode_Field_As  json log
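What this parser does can be mirrored in a few lines of Python: parse the tailed line as JSON (`Format json`), then replace the escaped string under `log` with the structured object it contains (`Decode_Field_As json log`). A rough functional sketch, not Fluent Bit's actual implementation (the exact record shape Fluent Bit emits may differ slightly):

```python
import json

def decode_docker_line(raw: str) -> dict:
    """Decode one *-json.log line: outer envelope first, then the
    'log' field, so its contents become objects instead of an
    escaped string."""
    record = json.loads(raw)
    try:
        record["log"] = json.loads(record["log"])
    except (KeyError, json.JSONDecodeError):
        pass  # missing or non-JSON log line: leave it as plain text
    return record

raw = ('{"log":"{\\"message\\":\\"tick 1\\",\\"level\\":\\"DEBUG\\"}\\n",'
       '"stream":"stdout","time":"2024-01-01T00:00:00Z"}')
decoded = decode_docker_line(raw)
assert decoded["log"]["level"] == "DEBUG"  # fields again, no backslashes
```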
Verify
- `docker logs fluent-bit` should show `flush chunk ... succeeded` messages once the agent is up.
- In Middleware → Logs, fields are parsed; no more escaped payloads.
If you see `section 'docker' tried to instance a plugin name that don't exists`, your image lacks the docker filter plugin. Keep that filter commented out; the bridge still works without it.
Minimal demo app (for reproducible screenshots)
# app.py
import json, time, datetime

i = 0
while True:
    i += 1
    rec = {
        "@timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "@version": "1",
        "message": f"Running tick {i}",
        "logger_name": "com.ch.ServerApplicationKt",
        "thread_name": "main",
        "level": "DEBUG",
        "level_value": 10000,
    }
    print(json.dumps(rec), flush=True)
    time.sleep(2)
Troubleshooting quick hits
- Port busy (8006): map another host port (e.g., `8007:8006`) and point `fluentd-address` (Fix A) or the Fluent Bit output (Fix B) accordingly.
- No logs arriving: confirm the agent is listening: `sudo ss -ltnp | grep 8006` (or your mapped port).
- UI still shows escaped strings: verify the app is truly on `fluentd` (Fix A), or that `Decode_Field_As json log` is active (Fix B). Also clear any File Path filter in the UI when using Fix A.
- Too much noise: either set `logging: { driver: "none" }` on the agent (Fix A) or keep the grep `Exclude` in Fluent Bit (Fix B).
Which fix should I choose?
- Fix A (fluentd driver): fewer moving parts, most direct path, best default.
- Fix B (Fluent Bit): use when compliance/tooling dictates `json-file`, or when you want to enrich and route logs at the file layer.
Final note
Rotate any test API keys used during screenshots before sharing publicly. Add org‑specific hostnames and tags as needed.
Credits
Thanks to the middleware community and middleware.io devs who surfaced and validated the fluentd-driver approach.

