Logging Architecture
Key Takeaways for AI & Readers
- Ephemeral Logs: Container logs are not persistent; they are lost when a container is deleted, necessitating a centralized logging solution.
- Node-Level Collection: The standard practice is to run a Log Collector (DaemonSet) on each node to tail container logs from /var/log/pods and forward them.
- Centralized Storage: Collected logs are shipped to persistent backends like Elasticsearch, Loki, or Splunk for long-term storage and analysis.
- Sidecar for Legacy Apps: For applications that cannot write to stdout, a sidecar container is used to stream file-based logs to stdout for the node agent to collect.
📦 App stdout ➡️ 📄 /var/log/pods ➡️ 🐝 FluentBit ➡️ 📂 Elastic/Loki
Kubernetes containers write to stdout/stderr. A logging agent (running as a DaemonSet) picks up those files from the Node's filesystem and ships them to a central store.
In Kubernetes, logs are ephemeral. If a container is deleted, its logs are deleted with it.
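Before a container disappears, its output lives on the node's disk. The container runtime writes each container's stream to a per-pod directory whose layout looks roughly like this (a sketch; the exact naming is runtime-dependent):

```
/var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/0.log
```

This is the directory tree the node-level agent described below tails.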
2. Standard Pattern: Node-level Logging
- stdout/stderr: Your app writes logs to the console.
- File Store: The container engine (containerd) saves these to /var/log/pods on the Node.
- Agent: A Log Collector (like FluentBit or Promtail) runs as a DaemonSet on every node.
- Backend: The agent reads the local files and ships them to Elasticsearch, Loki, or Splunk.
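The node-level pattern above can be sketched as a DaemonSet that mounts the node's log directory read-only. This is a minimal illustration, assuming Fluent Bit; the image tag, namespace, and labels are placeholders, and a real deployment also needs a ConfigMap with input/output configuration and RBAC for Kubernetes metadata enrichment.

```yaml
# Minimal sketch of a node-level log collector (Fluent Bit assumed).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging        # illustrative namespace
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:2.2   # pin a real version in practice
          volumeMounts:
            - name: varlogpods           # logs written by the container runtime
              mountPath: /var/log/pods
              readOnly: true
      volumes:
        - name: varlogpods
          hostPath:
            path: /var/log/pods          # node directory the agent tails
```

Because it is a DaemonSet, the scheduler places exactly one collector Pod on every node, so every container's log files have a local reader.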
3. Sidecar Logging
If your app cannot write to stdout (e.g. a legacy app that only writes to a local file), run a Sidecar Container in the same Pod that reads that file and streams it to stdout, so the node agent can pick it up like any other container log.
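The sidecar pattern can be sketched as a Pod where the app and the sidecar share an emptyDir volume. The image names and the log path /var/app/app.log are illustrative assumptions, not part of any real app:

```yaml
# Sketch of the sidecar logging pattern: the app writes to a file on a
# shared emptyDir; the sidecar tails the file to its own stdout.
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
    - name: app
      image: legacy-app:1.0            # hypothetical image; writes /var/app/app.log
      volumeMounts:
        - name: logs
          mountPath: /var/app
    - name: log-streamer               # the sidecar
      image: busybox:1.36
      command: ["sh", "-c", "tail -n +1 -F /var/app/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/app
  volumes:
    - name: logs
      emptyDir: {}                     # shared scratch space; deleted with the Pod
```

From the node agent's point of view, the sidecar's stdout is just another container log under /var/log/pods, so no special collector configuration is needed.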