Collectord

Forwarding Kubernetes audit logs to CloudWatch Logs

March 14, 2019

The Kubernetes API Server can produce audit logs for all calls made to the API. By default, auditing is disabled.

If you are running a managed EKS cluster on AWS, see the documentation page Logging Amazon EKS API Calls with AWS CloudTrail on how to get the audit logs from the Kubernetes masters.

If you are using a self-provisioned Kubernetes cluster (built with kops, kubeadm, or any other tool), this guide shows you how to get audit logs into CloudWatch Logs.

We assume you already have Collectord installed with the CloudWatch Logs output. If you don't, please follow our installation instructions; Collectord is very easy to install.

Enable and Forward Audit Logs on Masters

You can read the official Kubernetes documentation on Auditing. The following steps need to be performed only on the master nodes.

Create audit-policy.yaml file

Create the Audit Policy file. Use our example as a reference and save the file as /etc/kubernetes/policies/audit-policy.yaml. This file defines which calls are recorded in the audit logs. If you enable everything by default, system components can generate a very large number of records in the audit log. Our example excludes some of the log messages from the system components, which helps you focus on calls made by Kubernetes operators and users.

Another good example of an audit-policy.yaml file is the audit profile used by GCE.

apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # Do not log from kube-system accounts
  - level: None
    userGroups:
    - system:serviceaccounts:kube-system
  - level: None
    users:
    - system:apiserver
    - system:kube-scheduler
    - system:volume-scheduler
    - system:kube-controller-manager
    - system:node

  # Do not log from collector
  - level: None
    users:
    - system:serviceaccount:collectorforkubernetes:collectorforkubernetes

  # Don't log nodes communications
  - level: None
    userGroups:
    - system:nodes

  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
    - /healthz*
    - /version
    - /swagger*

  # Log configmap and secret changes in all namespaces at the metadata level.
  - level: Metadata
    resources:
    - resources: ["secrets", "configmaps"]

  # A catch-all rule to log all other requests at the request level.
  - level: Request

Enable Audit Logs on Kubernetes Masters

You need to enable the audit log only on the masters. To do that, edit the definition of the Kubernetes API Server. For clusters bootstrapped with kubeadm, you can find the definition of the Kubernetes API Server in the file /etc/kubernetes/manifests/kube-apiserver.yaml. In other cases, the Kubernetes API Server Pod definition may be stored in /etc/kubernetes/manifests/apiserver.json.

Writing audit logs to files

We will modify the definition of the Kubernetes API Server as follows:

  • The cloudwatch.collectord.io/volume.1-logs-* annotations tell Collectord to pick up logs written inside the containers to the volume audit-logs, and forward them to the LogGroup /kubernetes/{{cluster}}/audit and LogStream /{{host}}/{{pod_name}}/{{container_name}}/{{container_id}}/{{volume_name}}/{{file_path}}. You can find more information in our documentation about annotations for application logs.
  • The --audit-log-* flags tell the Kubernetes API Server to write audit logs to /var/log/audit/, keep a maximum of 3 backup files, and rotate each file when it reaches 10 MB.
  • The policies volume and its mount make the audit-policy.yaml we created available inside the Pod, so it can be read by kube-apiserver.
  • The audit-logs emptyDir volume and its mount provide the location where the audit logs are written.
...
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
    cloudwatch.collectord.io/volume.1-logs-name: 'audit-logs'
    cloudwatch.collectord.io/volume.1-logs-loggroup: '/kubernetes/{{cluster}}/audit'
    cloudwatch.collectord.io/volume.1-logs-logstream: '/{{host}}/{{pod_name}}/{{container_name}}/{{container_id}}/{{volume_name}}/{{file_path}}'
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
...
    - --audit-policy-file=/etc/kubernetes/policies/audit-policy.yaml
    - --audit-log-path=/var/log/audit/json.log
    - --audit-log-maxbackup=3
    - --audit-log-maxsize=10
    - --audit-log-format=json
...
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/kubernetes/policies
      name: policies
      readOnly: true
    - mountPath: /var/log/audit/
      name: audit-logs
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/kubernetes/policies
      type: DirectoryOrCreate
    name: policies
  - emptyDir: {}
    name: audit-logs
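The rotation flags above also bound how much disk the audit logs can consume inside the emptyDir volume. A quick back-of-the-envelope check (values copied from the manifest above):

```python
# Rough upper bound on audit-log disk usage in the emptyDir volume;
# values are taken from the --audit-log-* flags above.
max_backup_files = 3   # --audit-log-maxbackup
max_file_size_mb = 10  # --audit-log-maxsize
# rotated backups plus the active json.log file
upper_bound_mb = (max_backup_files + 1) * max_file_size_mb
print(upper_bound_mb)  # 40
```

Roughly 40 MB per master, which is safe for an emptyDir volume; increase the limits if you need a longer local buffer.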

Restart kubelet

sudo systemctl restart kubelet

You can now verify that the files are in the pod (replace the pod name with the name of your API server pod):

kubectl exec -it kube-apiserver-master1.k8s-cluster-1-13.local.outcold.solutions -n kube-system -- sh -c 'ls -l /var/log/audit/'

You should see at least one file in the output.

Go to CloudWatch and find the audit logs under the LogGroup /kubernetes/{{cluster}}/audit.

CloudWatch Audit Logs

Writing audit logs to standard output

Another option is to write the audit logs directly to the standard output of the Pod. You might choose this option if you also need access to the audit logs with the kubectl logs ... command.

Because the Pod also produces other logs, we want to be able to separate the audit logs from the rest of the Pod logs. You can match the events and redirect them to a different LogGroup and LogStream using override annotations.

  • The cloudwatch.collectord.io/logs-override.1-* annotations tell Collectord to find events that match the regular expression pattern ^{"kind":"Event","apiVersion":"audit\.k8s\.io\/v1" and forward them to the LogGroup /kubernetes/{{cluster}}/audit and LogStream /{{host}}/{{pod_name}}/{{container_name}}/{{container_id}}. You can find more information in our documentation about overriding annotations.
  • The --audit-log-path=- flag tells the Kubernetes API Server to write audit logs to standard output (-).
  • The policies volume and its mount make the audit-policy.yaml we created available inside the Pod, so it can be read by kube-apiserver.
...
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
    cloudwatch.collectord.io/logs-override.1-match: '^{"kind":"Event","apiVersion":"audit\.k8s\.io\/v1"'
    cloudwatch.collectord.io/logs-override.1-loggroup: '/kubernetes/{{cluster}}/audit'
    cloudwatch.collectord.io/logs-override.1-logstream: '/{{host}}/{{pod_name}}/{{container_name}}/{{container_id}}/'
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
...
    - --audit-policy-file=/etc/kubernetes/policies/audit-policy.yaml
    - --audit-log-path=-
    - --audit-log-format=json
...
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/kubernetes/policies
      name: policies
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/kubernetes/policies
      type: DirectoryOrCreate
    name: policies
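The match pattern used in the override annotation can be sanity-checked outside the cluster. A minimal Python sketch (the sample log lines below are illustrative, not real cluster output):

```python
import re

# The pattern from the logs-override annotation above: it recognizes a line
# that starts a JSON audit Event, while plain API Server log lines don't match.
AUDIT_RE = re.compile(r'^{"kind":"Event","apiVersion":"audit\.k8s\.io\/v1')

audit_line = '{"kind":"Event","apiVersion":"audit.k8s.io/v1beta1","level":"Request"}'
plain_line = 'I0314 12:00:00.000000 1 controller.go:105] Starting workers'

print(bool(AUDIT_RE.match(audit_line)))  # True
print(bool(AUDIT_RE.match(plain_line)))  # False
```

Only lines matching the pattern are redirected to the audit LogGroup; everything else stays in the Pod's standard LogGroup.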

To apply these changes, restart the kubelet.

sudo systemctl restart kubelet

You can verify that audit events are going to standard output (replace the pod name):

kubectl logs -n kube-system kube-apiserver-master1.k8s-cluster-1-13.local.outcold.solutions

Now you will find the audit logs in the LogGroup /kubernetes/{{cluster}}/audit, while the other logs from the API Server go to the standard LogGroup for the Pod.

CloudWatch Audit Logs

Querying Audit Logs

Because audit logs are JSON objects, you can use CloudWatch Logs Insights to extract the fields easily.

For example, we can find how the user kubernetes-admin accessed the API:

fields sourceIPs.0, user.username, verb, responseStatus.code, objectRef.namespace, objectRef.name, requestURI, userAgent |
filter user.username = 'kubernetes-admin' and stage = 'ResponseComplete' and objectRef.name like /.+/
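If you want an overview of activity instead of individual calls, aggregation works as well. A sketch of a query that counts requests by verb and user (field names assume the same JSON audit events as above):

stats count(*) as requests by verb, user.username |
sort requests desc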

CloudWatch Insights for Audit Logs

Conclusion

As you can see, Collectord provides simple and flexible ways to get logs from inside a Pod, and it can separate the stream into multiple LogGroups. You can find more about the various annotations that Collectord supports in the documentation about annotations.

And of course, you can similarly upload the audit logs to S3 to store them for a much longer period and perform more complex analysis of these logs.

collectord, kubernetes, eks, aws, cloudwatch, audit, cloudwatch logs

About Outcold Solutions

Outcold Solutions provides solutions for building centralized logging infrastructure and monitoring Kubernetes, OpenShift and Docker clusters. We provide easy-to-set-up centralized logging infrastructure with AWS services. We offer Splunk applications, which give you insights across all container environments. We help businesses reduce the complexity of logging and monitoring by providing easy-to-use and easy-to-deploy solutions for Linux and Windows containers.