Kubernetes has been gaining more and more traction since its release five years ago (wow, time flies). And with more and more companies adopting K8S for their production workloads, security is a core focus. A key aspect of Kubernetes security is restricting user access: only allowing people to do what they should be doing. The worst thing you can do is make everyone a cluster admin! Let’s see how to achieve this using RBAC, a.k.a. Role-Based Access Control.
Note: Coming back to this post for the Git repo? Got you covered — it’s linked at the bottom of this post.
One cluster, multiple teams
Once you’re past the “how to get started with GKE (Google Kubernetes Engine)” tutorials, you will probably realise it’s time to get a bit more serious about security in Kubernetes. When you set up your cluster, you are a cluster admin and own the key to the kingdom.
Now let’s imagine you’re guiding a workshop where you are teaching Kubernetes to a class of 20 students. You want them to touch the real thing, so you prefer GKE over minikube. However, spinning up 20 clusters might be a bit too expensive for this class, so you decide to divide them into groups of 5, where every group shares a single cluster. You provide one namespace per group of students. You set up a challenge, and the first group to accomplish it wins a great K8S t-shirt.
Preventing Vincent from destroying your pods
There is however one student, let’s call him Vincent, who loves trolling people. You expect Vincent to start terminating random pods from other teams. Therefore, you want to limit each group of students to their own namespace. RBAC to the rescue!
Using RBAC, Role-Based Access Control, you have the power to grant specific roles (either system roles or self-defined roles) to specific people, groups or service accounts, limiting them to exactly the permissions they need.
Let’s cover some best practices to achieve better security with RBAC on GKE.
Least privileged access
It all starts with the widely adopted principle of least privilege. This strategy demands that you only grant the access rights a user needs to fulfil their task, not a single permission more. On GKE, this can be a bit tricky to reason about: there is Google Cloud Platform IAM, and then there are Kubernetes Roles.
GCP IAM defines what actions you are allowed to take on GCP (for example create Cloud Storage Buckets, deploy an App Engine app). Kubernetes Roles define permissions you have within a single cluster. Some GCP IAM roles actually propagate down to the GKE clusters running in that project. For example, the GCP IAM role Kubernetes Engine Developer will give edit access to every cluster in the project this IAM role is granted on. That’s of course way too broad.
Because Kubernetes itself does not have the concept of users, GKE relies on GCP IAM permissions to authenticate users (who are you?). However, you should rely as much as possible on Kubernetes Roles for authorization (what are you allowed to do?). Therefore, we will grant minimal roles on the GCP IAM side that can be used to identify yourself to a cluster. The GKE cluster itself will then decide what you are allowed to do, based on a Kubernetes Role.
Google Cloud Platform IAM
The minimal set of permissions you need on the GCP IAM side are the following:
- container.apiServices.get
- container.apiServices.list
- container.clusters.get
- container.clusters.getCredentials
So let’s create a custom role for these permissions.
gcloud iam roles create $ROLE_NAME --project $PROJECT --file iam.yaml
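For reference, iam.yaml follows the standard custom-role definition format that `gcloud iam roles create --file` accepts. A minimal sketch — the title and description below are illustrative; the permission list is the one from above:

```yaml
# iam.yaml — custom role definition consumed by `gcloud iam roles create --file`
title: GKE Cluster Authenticator        # illustrative title
description: Minimal permissions to authenticate against GKE clusters
stage: GA
includedPermissions:
- container.apiServices.get
- container.apiServices.list
- container.clusters.get
- container.clusters.getCredentials
```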
Group membership over individual access
Another best practice is to rely on group membership for access, rather than granting individual users specific access. One advantage is maintainability.
Imagine a user leaves a team that is working on 20 projects. If you grant individual access, you have to check 20 locations to revoke the user’s access. If you grant permissions to a group instead, you just remove the user from the group, implicitly revoking their access to all 20 projects. Quite convenient.
So let’s give all members from team 1 our custom role, by granting it to the group.
gcloud projects add-iam-policy-binding $PROJECT --member group:$DEV_GROUP --role projects/$PROJECT/roles/$ROLE_NAME
Caveat alert! For GKE, there’s a bit of a special setup needed. For Google Kubernetes Engine to respect group membership, you need that group to be a member of a dedicated group called gke-security-groups@[your-domain], as explained in the GKE documentation. We can then create our cluster with this specific security group as a root group that will contain all other groups.
gcloud beta container clusters create $CLUSTER_NAME --project $PROJECT --zone $ZONE --security-group=$GKE_DOMAIN_GROUP
So that’s great. We have the students of team 1 as members of the Google Group team_1@fourcast.io, and any member of this group is allowed to authenticate against our Kubernetes cluster on GKE. Done? Not quite yet.
While Kubernetes will now know who you are, it will not allow you to perform any meaningful operation, as we still need to get through the second part: authorization. For completeness, there’s also the concept of Admission Control, which we’ll not cover here; see the Kubernetes documentation for full details.
Role Based Access Control
Now we’ll tell this specific cluster it should create a namespace ‘team-1’ for the first group and allow members of the dedicated Google Group ‘team_1@fourcast.io’ pod-level access in this namespace. All students of the first team are members of this Google Group. We do this by creating a Role and a RoleBinding. Note the `subjects` element, which specifies which subjects to bind to the specified Role.
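The exact rbac.yaml lives in the repo linked at the end of this post; a minimal sketch of what it could look like follows — the role and binding names and the verb list are illustrative assumptions:

```yaml
# rbac.yaml — namespace, Role and RoleBinding for team 1 (sketch)
apiVersion: v1
kind: Namespace
metadata:
  name: team-1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-1-dev            # illustrative name
  namespace: team-1
rules:
- apiGroups: ["", "apps", "extensions"]
  resources: ["pods", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-1-dev-binding    # illustrative name
  namespace: team-1
subjects:
- kind: Group                 # the Google Group; its parent must be gke-security-groups@[your-domain]
  name: team_1@fourcast.io
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-1-dev
  apiGroup: rbac.authorization.k8s.io
```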
Caveat alert! When running this on a cluster we had up for a while, we lost quite some time on a nasty thing that’s easily overlooked: Legacy Authorization needs to be disabled on your cluster. New clusters have it disabled automatically, but older clusters might still be running with Legacy Authorization enabled. You can disable it on an existing cluster with gcloud container clusters update $CLUSTER_NAME --no-enable-legacy-authorization.
As a cluster admin, we first get our credentials:
gcloud container clusters get-credentials $CLUSTER_NAME --project $PROJECT --zone $ZONE
Then we create the Namespace, Role and RoleBinding:
kubectl apply -f rbac.yaml
And there you have it! Now any member of the Google Group team_1@fourcast.io will be able to manage Deployments in the team-1 namespace of this cluster, leaving other namespaces and other clusters in the same GCP project out of their reach.
All they need to do is configure their kubectl using:
gcloud container clusters get-credentials $CLUSTER_NAME --project $PROJECT --zone $ZONE
If they then try to list deployments in the default namespace, they are denied:
$ kubectl get deployments
Error from server (Forbidden): deployments.extensions is forbidden: User "X.Y@fourcast.io" cannot list resource "deployments" in API group "extensions" in the namespace "default"
But they will be able to access the deployments in the team-1 namespace:
$ kubectl get deployments -n team-1
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
first-deployment   1         1         1            0           4s
For a full code sample please refer to this GitHub repo: https://github.com/Fourcast/GKE-rbac
Keeping Vincent in line
Thanks to RBAC on Kubernetes, combined with Google Groups and least-privilege access rights, we have kept Vincent in line, preventing him from tampering with any of the other teams’ namespaces. Our troll will have to find another cluster to mess up!
Also check out our other K8S blog posts, or contact us if you have any questions!