
Logging

Shipping

Prerequisites

A cluster-wide logging collector has been set up by the cluster administrator

To ingest your application's logs, your pods need to be annotated with co.elastic.logs/enabled: "true". This can be done either at the Namespace level or in your workload's .spec.template.metadata.annotations field.
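For example, assuming a namespace named my-app (the name is illustrative), collection can be enabled for every workload in it:

apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    co.elastic.logs/enabled: "true"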

Filtering

To prevent overly verbose messages from being collected, add one or more drop_event processors:

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
         co.elastic.logs/processors.0.drop_event.when.contains.message: DEBUG
         co.elastic.logs/processors.1.drop_event.when.contains.message: TRACE

Parsing

To tokenize unstructured messages into structured fields before ingestion, use the dissect processor.

Given the following log line:

2021-02-14 07:35:46.222  INFO 1 --- [           main] o.h.h.i.QueryTranslatorFactoryInitiator  : HHH000397: Using ASTQueryTranslatorFactory

Using the tokenization string %{date} %{time}  %{level} %{} %{} [%{entry}] %{class}: %{message} (note the two spaces before %{level}, matching the log format) parses it into:

{
  "class": "o.h.h.i.QueryTranslatorFactoryInitiator ",
  "date": "2021-02-14",
  "entry": "           main",
  "level": "INFO",
  "message": "HHH000397: Using ASTQueryTranslatorFactory",
  "time": "07:35:46.222"
}

Tip

dissect-tester can help with testing tokenization strings against sample logs

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
         co.elastic.logs/processors.0.dissect.tokenizer: '%{date} %{time}  %{level} %{} %{} [%{entry}] %{class}: %{message}'
         co.elastic.logs/processors.0.dissect.ignore_failure: "true"
         co.elastic.logs/processors.0.dissect.target_prefix: ""
         co.elastic.logs/processors.0.dissect.overwrite_keys: "true"

JSON

JSON-formatted logs can be decoded into structured fields using the decode_json_fields processor:

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
         co.elastic.logs/processors.0.decode_json_fields.fields.0: message
         co.elastic.logs/processors.0.decode_json_fields.target: ""
         co.elastic.logs/processors.0.decode_json_fields.overwrite_keys: "true"
         co.elastic.logs/processors.0.decode_json_fields.add_error_key: "true"
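For instance, given a container that writes JSON lines such as the following (the line itself is illustrative):

{"level":"INFO","msg":"server started","port":8080}

the processor decodes the message field so that level, msg, and port become top-level fields of the event.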

Multiline

Multi-line log messages, such as Java stack traces, can be combined into a single event using the multiline settings:

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        co.elastic.logs/multiline.pattern: '^[[:space:]]+(at|\.{3})[[:space:]]+\b|^Caused by:'
        co.elastic.logs/multiline.negate: "false"
        co.elastic.logs/multiline.match: after
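For example, in a stack trace like the following (the trace itself is illustrative), the indented at ... frames and the Caused by: line are continuation lines that get appended to the initial exception line, producing a single event:

java.lang.RuntimeException: request failed
    at com.example.Handler.handle(Handler.java:42)
    at com.example.Server.run(Server.java:10)
Caused by: java.io.IOException: connection reset
    at com.example.Client.read(Client.java:17)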

Example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic
  namespace: petclinic
spec:
  selector:
    matchLabels:
      app: petclinic
  template:
    metadata:
      labels:
        app: petclinic
      annotations:
        # first turn on logging
        co.elastic.logs/enabled: "true"
        # parse multi-line messages
        co.elastic.logs/multiline.pattern: '^[[:space:]]+(at|\.{3})[[:space:]]+\b|^Caused by:'
        co.elastic.logs/multiline.negate: "false"
        co.elastic.logs/multiline.match: after
        # tokenize log messages into structured fields
        co.elastic.logs/processors.0.dissect.tokenizer: "%{date} %{time}  %{level} %{} %{} [%{entry}] %{class}: %{message}"
        co.elastic.logs/processors.0.dissect.ignore_failure: "true"
        # overwrite existing fields, do not nest new fields under `dissect`
        co.elastic.logs/processors.0.dissect.target_prefix: ""
        co.elastic.logs/processors.0.dissect.overwrite_keys: "true"
        # trim whitespace from extracted fields
        co.elastic.logs/processors.0.dissect.trim_values: "all"
    spec:
      containers:
        - name: petclinic
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
          image: docker.io/arey/springboot-petclinic
          resources:
            limits:
              memory: 1Gi
              cpu: "500m"
          ports:
            - containerPort: 8080

Realtime Tailing

Prerequisites

stern is installed

Tail the gateway container running inside the envvars pod in the staging context

stern envvars --context staging --container gateway

Tail the staging namespace, excluding logs from the istio-proxy container

stern -n staging --exclude-container istio-proxy .

Show auth activity from the last 15 minutes, with timestamps

stern auth -t --since 15m

Tail pods matching the run=nginx label selector across all namespaces

stern --all-namespaces -l run=nginx

Follow the frontend pods in the canary release

stern frontend --selector release=canary

Pipe the log message to jq:

stern backend -o json | jq .

Only output the log message itself:

stern backend -o raw

Output using a custom template:

stern --template '{{.Message}} ({{.Namespace}}/{{.PodName}}/{{.ContainerName}})' backend