Getting started with Kubernetes is really easy. In just a matter of minutes you can set up a new cluster with minikube, kops, Amazon EKS, Google Kubernetes Engine, or Azure Kubernetes Service. What isn’t so easy is knowing what to do after you set up your cluster and run a few apps. One of the most important parts of setting up a Kubernetes cluster is to make sure your cluster is secure. In this blog post, we will go over some of the strategies you can use to help secure your Kubernetes cluster. This is by no means an exhaustive list of security items to check, but should get you started on the right path.
Kubernetes has over 2,000 individual contributors and is updated frequently. With more eyes on it, security vulnerabilities are also being discovered and patched more frequently. It is important to stay reasonably up-to-date on Kubernetes versions, especially as the project matures. How you upgrade your cluster depends on the tool or service you used to create it, so consult that tool's documentation for the recommended upgrade procedure.
Try to stay no more than one or two minor versions behind the current Kubernetes release, and take advantage of existing tools to help you upgrade often and without service disruption.
Most cloud implementations of Kubernetes already restrict access to the Kubernetes API for your cluster using IAM (Identity & Access Management), RBAC (Role-Based Access Control), or AD (Active Directory). If your cluster does not use these methods, you can usually set one up using open source projects that integrate with various authentication providers. I also recommend restricting API access by IP address if at all possible, allowing access only from trusted IPs such as a VPN or bastion host.
Another easy and essential security policy for your new cluster is to restrict SSH access to your Kubernetes nodes. Ideally you would not have port 22 open on any node, but you may need it to debug issues at some point. You can configure your nodes via your cloud provider to block all access to port 22 except via your organization's VPN or a bastion host. This way you can get SSH access quickly, but outside attackers cannot.
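If you manage your cluster with kops, for example, both the API and SSH restrictions can be declared directly in the cluster spec. Here is a minimal sketch, assuming a hypothetical VPN CIDR of 10.10.0.0/16; on a managed service you would configure the equivalent firewall or security group rules instead:

# Excerpt from a kops cluster spec (kops edit cluster).
# The CIDR is a hypothetical VPN range; substitute your own trusted networks.
spec:
  kubernetesApiAccess:
  - 10.10.0.0/16
  sshAccess:
  - 10.10.0.0/16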
If your cluster acts as a multi-tenant environment, you can and should use Namespaces to restrict access to resources within the cluster. Namespaces, together with RBAC, let you create accounts that have access only to particular resources. In this example, we create a service account named my-dev-user (Kubernetes object names must be lowercase) that only has access to resources in the development namespace:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-dev-user
  namespace: development
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-dev-user
  namespace: development
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-dev-user
  namespace: development
subjects:
- kind: ServiceAccount
  name: my-dev-user
  namespace: development
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: my-dev-user
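After applying these manifests, you can verify the account's scope with kubectl auth can-i by impersonating the service account:

kubectl auth can-i list pods --namespace development \
  --as system:serviceaccount:development:my-dev-user

This should print yes, while the same check against any other namespace should print no.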
You can also configure your namespaces to restrict the amount of memory and CPU that are allowed to run in that namespace. This can help prevent rogue deployments in development or QA from affecting the available resources in production.
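A minimal sketch of such a restriction using a ResourceQuota, with limits chosen purely for illustration:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: development
spec:
  hard:
    # Caps on the total resources of all pods in the namespace
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi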
Network policies let you restrict access to services within your Kubernetes cluster, and you can also use them to block access to your cloud's metadata API from pods in your cluster. The Kubernetes documentation on network policies covers how to set one up.
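As a sketch, the following policy blocks egress from all pods in the development namespace to the metadata API at 169.254.169.254 while leaving other outbound traffic alone; it assumes your cluster runs a CNI plugin that actually enforces NetworkPolicy, such as Calico:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-access
  namespace: development
spec:
  # An empty podSelector matches every pod in the namespace
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32  # the cloud metadata API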
One of the most overlooked security issues is running the containers in your Pods as the root user. In Kubernetes, the UID of the user running a container is mapped directly to the host. This means that if your container runs as UID 0 (root) it will also appear as root on the node it is running on. Kubernetes has built-in protections to prevent escalation of privileges with this mechanism, but there is always the risk of a security vulnerability or exploit where a container could escalate privileges this way.
The way around this is usually quite simple: do not run your containers as root. You can accomplish this by modifying the Dockerfile for your built containers to create and use a user with a known UID. For example, here is the beginning of a Dockerfile that adds a user named user with UID 1000 to an image for Java 8:
FROM openjdk:8-jre-slim-stretch
USER root
# Create a group and user with a fixed UID/GID of 1000
RUN groupadd --gid "1000" user \
    && adduser --home "/home/user" --gid "1000" --disabled-password \
       --disabled-login --gecos '' --shell "/bin/bash" --uid "1000" user \
    && chown -R user:user /home/user
# Switch to the new user by numeric UID (see below)
USER 1000
Notice that we use USER 1000 instead of USER user to declare which user is active going forward. We do this for consistency with Kubernetes, which can only verify a numeric UID: when you write your Kubernetes manifest, you can specify the UID the container must run as to enforce that the correct user is used. This is especially useful for larger teams, where cluster security may be enforced by a different team than the one writing the Dockerfiles. Simply add these lines to your spec.containers entries to enforce that the container runs as UID 1000:
securityContext:
  runAsUser: 1000
  allowPrivilegeEscalation: false
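In context, the snippet sits under each container in your pod or deployment manifest. A minimal sketch with a hypothetical image name; the added runAsNonRoot: true makes the kubelet refuse to start the container at all if it would run as root:

apiVersion: v1
kind: Pod
metadata:
  name: example-app
  namespace: development
spec:
  containers:
  - name: app
    image: example/app:latest  # hypothetical image
    securityContext:
      runAsUser: 1000
      runAsNonRoot: true
      allowPrivilegeEscalation: false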
You can also enforce the use of non-root users cluster-wide with PodSecurityPolicies, a feature that is in beta as of Kubernetes v1.18.
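Here is a sketch of such a policy. The seLinux, supplementalGroups, and fsGroup stanzas are required fields left permissive for brevity, and you would still need an RBAC rule granting the use verb on this policy before it takes effect:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: require-non-root
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot  # reject any container running as UID 0
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'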
One of the benefits of running Kubernetes on AWS, GCP, or Azure is the ability to use their managed services for your DNS, databases, load balancing, and monitoring. You will likely need to both grant and restrict access to these services from your Kubernetes cluster to fully integrate it.
Google Cloud uses Cloud IAM to control access to its services, and this is integrated with GKE via RBAC as described in Google's documentation. You can restrict your GCP users and roles to certain access within your Kubernetes cluster, but there is no built-in way to assign an IAM role to a pod and restrict its access to services; a pod has the same access as the node it runs on.
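For example, on GKE you can bind a Google account directly in RBAC to a built-in role within a single namespace. A sketch, with the email address purely illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-user-view
  namespace: development
subjects:
- kind: User
  name: dev-user@example.com  # hypothetical Google account
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view  # built-in read-only role
  apiGroup: rbac.authorization.k8s.io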
Azure's AKS uses Active Directory to manage access to resources. Microsoft's documentation describes how you can use AD not only to restrict user access to your cluster, but also to assign Pod Identities for fine-grained control over how pods access other Azure services.
Amazon’s EKS by default uses IAM to restrict user access to your EKS cluster. There is no built-in method for restricting pod access to other AWS services, but the open-source projects kiam and kube2iam provide this functionality. On EKS clusters, kiam is more difficult to set up because of the client-server model that project uses, but both solutions will work on a kops-managed cluster. For an in-depth look at managing IAM permissions for Kubernetes in AWS specifically, check out our blog series.
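From a pod's perspective, kiam and kube2iam work the same way: you annotate the pod with the IAM role it should assume. A sketch for kube2iam, with the role and image names purely illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: example-app
  annotations:
    iam.amazonaws.com/role: my-app-role  # hypothetical IAM role
spec:
  containers:
  - name: app
    image: example/app:latest  # hypothetical image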
It can be easy to forget about one of the most mundane security tasks: getting an external security review. It is extremely important to validate the work you've done on your cluster with a third party if your application will handle any sensitive user data. Even if it does not, it is good practice to do at least annual security reviews to make sure you are on top of all of the issues mentioned above.
Staying ahead of security issues can be a daunting task. Just remember that by implementing multiple layers of defense, you will greatly reduce your risk as an organization. Kubernetes has a very active community that shares many of the same concerns your organization does; leverage the community and contribute back when possible to make Kubernetes more secure for everybody.
However, it can be hard to monitor Kubernetes with traditional tools. If you are looking for a monitoring solution, consider Blue Matador. Blue Matador automatically checks for over 25 Kubernetes events out-of-the-box. We also monitor over 20 AWS services in conjunction with Kubernetes, providing full coverage for your entire production environment with no alert configuration or tuning required.