This blog was written by an independent guest blogger.
More and more organizations are adopting Kubernetes, but they’re encountering security challenges along the way. In the fall 2020 edition of its “State of Container and Kubernetes Security” report, for instance, StackRox found that nearly all (91%) of surveyed organizations had adopted Kubernetes, with a majority (75%) of participants revealing that they had deployed the container orchestration platform into their production environments. Even so, nine in 10 respondents said that they had experienced a security incident involving a misconfiguration, vulnerability or runtime error in their container and Kubernetes environments over the last 12 months. Nearly half (44%) went on to say that they had delayed moving an application into production as a result of their security concerns.
These findings highlight the need for organizations to ensure their Kubernetes configurations complement their security requirements. As part of this process, administrators can focus on protecting their clusters, the core of the Kubernetes architecture. After defining what a cluster is, this blog post will explore the two sets of components that make up a cluster and provide guidance on how organizations can secure each of them.
Understanding the Kubernetes cluster
On its website, Kubernetes explains that customers get a cluster—a set of one or more worker machines called “nodes” that run containerized applications—whenever they deploy Kubernetes. These nodes host pods, groups of one or more containers that form the components of the application workload. Kubernetes also provides a control plane, which administrators use to manage the worker nodes and the pods in the cluster and to respond to cluster events.
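For administrators who want to see these pieces on a running cluster, a few read-only kubectl commands (assuming kubectl is already configured against the cluster) show the nodes, the pods they host and where the control plane components typically live:

```sh
# List the machines (nodes) that make up the cluster
kubectl get nodes -o wide

# List the pods hosted on those nodes, across all namespaces
kubectl get pods --all-namespaces

# Control plane components usually run as pods in the kube-system namespace
kubectl get pods -n kube-system
```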
Administrators can secure a Kubernetes cluster by directing their efforts at two areas: the control plane and the worker nodes.
The control plane
Within the control plane, administrators can focus their security measures on five components: kube-apiserver, etcd, kube-scheduler, kube-controller-manager and cloud-controller-manager.
kube-apiserver
The kube-apiserver is the main implementation of the Kubernetes API server. It scales horizontally: administrators can deploy additional instances of kube-apiserver and balance traffic between them. As the front end for the Kubernetes control plane, the API server exposes the Kubernetes API. Administrators can secure this component by upgrading to the newest version of Kubernetes and applying updates promptly, thereby closing known security gaps. From there, they can restrict access to the API server by requiring authentication for all Kubernetes API clients and by ensuring all API traffic is encrypted using TLS.
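As a rough illustration, those settings usually appear as flags on the kube-apiserver command. The sketch below assumes a kubeadm-style static pod manifest and certificate paths; adjust both to your own environment:

```yaml
# Excerpt from a kube-apiserver static pod manifest (kubeadm-style paths assumed)
spec:
  containers:
  - command:
    - kube-apiserver
    - --anonymous-auth=false                              # reject unauthenticated requests
    - --client-ca-file=/etc/kubernetes/pki/ca.crt         # verify client certificates
    - --authorization-mode=Node,RBAC                      # authorize every authenticated request
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt   # serve the API over TLS
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
```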
etcd
A consistent, highly available key-value store, etcd functions as the backing store for all Kubernetes cluster data, so administrators should have a backup plan for that data. As with the kube-apiserver, they can turn to encryption, authentication and access control to restrict who can read from and write to that data store.
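For the backup side, a periodic `etcdctl snapshot save` is a common starting point. For encryption, the API server can be pointed at an EncryptionConfiguration file via its `--encryption-provider-config` flag so that secrets are encrypted before they reach etcd. A minimal sketch (the key below is a placeholder, not a real secret):

```yaml
# EncryptionConfiguration referenced by kube-apiserver's --encryption-provider-config flag
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>   # placeholder; generate your own
      - identity: {}                                 # fallback for reading not-yet-encrypted data
```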
kube-scheduler
Within the control plane, the kube-scheduler watches for newly created pods that don’t yet have an assigned node and selects a node for those pods to run on based on resource requirements, policy constraints, data locality and other factors. To optimize the security of this component, administrators can restrict the file permissions on the kube-scheduler’s configuration files, configure the service to serve HTTPS only and bind it to a localhost interface.
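In a kubeadm-style deployment (paths assumed; adjust for your distribution), that typically means tightening the manifest’s file permissions and binding the scheduler to the loopback interface:

```yaml
# Excerpt from /etc/kubernetes/manifests/kube-scheduler.yaml (kubeadm layout assumed);
# the file itself should be owned by root:root with permissions of 644 or stricter.
spec:
  containers:
  - command:
    - kube-scheduler
    - --bind-address=127.0.0.1                    # serve (HTTPS) only on the loopback interface
    - --kubeconfig=/etc/kubernetes/scheduler.conf
```

On older releases the insecure HTTP port can also be disabled explicitly with `--port=0`; in current releases that port has been removed entirely.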
kube-controller-manager
This control plane component runs the controller processes. These include the node controller, which responds when nodes go down; the replication controller, which ensures that the correct number of pods is running in the system; and other controllers. StackRox recommends that administrators secure the kube-controller-manager by following all of the same security guidelines used for the kube-scheduler, with the addition that administrators “ensure that an individual service account credential is configured per controller in conjunction with Kubernetes RBAC to ensure the control loops run with minimum required permissions.”
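Under the same assumptions (kubeadm-style manifest and certificate paths), that guidance might look like the excerpt below; `--use-service-account-credentials=true` is the flag that makes each control loop run with its own service account so RBAC can scope it down:

```yaml
# Excerpt from /etc/kubernetes/manifests/kube-controller-manager.yaml (kubeadm layout assumed)
spec:
  containers:
  - command:
    - kube-controller-manager
    - --bind-address=127.0.0.1                  # serve only on localhost, as with the scheduler
    - --use-service-account-credentials=true    # individual service account credential per controller
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
```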
cloud-controller-manager
Last but not least in the control plane is the cloud-controller-manager, the component that links a cluster into a cloud provider’s API. Administrators can use the cloud-controller-manager to run controllers that are specific to their cloud provider. To secure this component, they can follow the same guidelines identified for the kube-controller-manager above.
The worker nodes
Administrators need to secure three parts of a Kubernetes worker node: the kubelet, kube-proxy and container runtime.
kubelet
Found on each node of a cluster, the kubelet makes sure that the containers described in a pod’s specifications (PodSpecs) are running and healthy. It manages only containers created by Kubernetes. To protect this component, administrators can apply available patches to remediate vulnerabilities identified in the kubelet, and they can use strong authentication and authorization to limit who can access it.
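One common way to apply that guidance is through the kubelet’s own configuration file, which can disable anonymous access, require client certificates and delegate authorization to the API server. A minimal sketch (the CA path is an assumption; match it to your cluster):

```yaml
# Kubelet configuration file (passed to the kubelet with --config)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false                             # reject unauthenticated requests to the kubelet API
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt   # path assumed; use your cluster's CA
authorization:
  mode: Webhook                                # ask the API server whether each request is allowed
readOnlyPort: 0                                # disable the unauthenticated read-only port
```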
kube-proxy
Like the kubelet, kube-proxy runs on each node in the cluster. This component maintains network rules on nodes; those rules allow network communication to pods from sessions inside or outside the cluster. Administrators can secure kube-proxy by restricting the permissions on its kubeconfig file (if it uses a file-based kubeconfig) and by ensuring it communicates with the API server only over a secured (TLS) port.
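A quick check along those lines might look like this on a node; the kubeconfig path is an assumption and varies by distribution:

```sh
# Tighten ownership and permissions on kube-proxy's kubeconfig (path assumed; adjust as needed)
chown root:root /var/lib/kube-proxy/kubeconfig
chmod 644 /var/lib/kube-proxy/kubeconfig

# Confirm kube-proxy talks to the API server over its TLS port (typically 6443), not plain HTTP
grep "server:" /var/lib/kube-proxy/kubeconfig
```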
Container runtime
The container runtime lives up to its name: it’s the software responsible for running containers, and Kubernetes supports several runtimes, including Docker, containerd and CRI-O. Administrators can secure the runtime by looking more broadly at the security of their containers. This includes reviewing their containers’ privileges and disallowing SSH services from running inside a container.
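At the workload level, much of that review comes down to each pod’s security context. The sketch below (the pod name and image are placeholders) drops extra privileges and blocks privilege escalation, which also makes it harder to run stray services such as SSH inside the container:

```yaml
# Example pod with a restrictive security context (name and image are placeholders)
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0
    securityContext:
      privileged: false                  # no privileged access to the host
      allowPrivilegeEscalation: false    # processes can't gain more privileges than their parent
      runAsNonRoot: true                 # refuse to start if the image runs as root
      readOnlyRootFilesystem: true       # container filesystem is read-only
      capabilities:
        drop: ["ALL"]                    # drop all Linux capabilities
```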