New Developers joining
Three new developers have been added to our team. They won’t need IAM permissions or access to the AWS Console, but since they will be collaborators, they will need programmatic access and admin rights to our EKS cluster.
In this lab, we’ll create additional users and provide them RBAC permissions to access our Kubernetes clusters. This lab has four parts:
Part 1: Create the Users
Part 2: Provide Cluster Admin Access
Part 3: Provide Admin Access for dedicated namespace
Part 4: Provide Read-only Access for dedicated namespace
We’ll also be using the ap-southeast-1 region (Singapore).
We need to do the following before we can perform EKS operations.
For the IAM User and Group, you can use the values below. Make sure to add the user to the group.
NOTE: I would give k8s-admin the AdministratorAccess policy since you might run into some issues later on.
Once you’ve created the k8s-admin user, log in to the AWS Management Console using this IAM user.
To avoid confusion, we’ll label the user accounts as:
k8s-admin - main admin user that we’ll use
k8s-user-2 - a second admin user that we’ll create
k8s-user-prodadmin - admin on prod
k8s-user-prodviewer - a read-only user on prod
We also need to install the following CLI tools: AWS CLI, kubectl, and eksctl.
Once you’ve installed AWS CLI, add the access key to your credentials file. It should look like this:
# /home/user/.aws/credentials
[k8s-admin]
aws_access_key_id = AKIAxxxxxxxxxxxxxxxxxxx
aws_secret_access_key = ABCDXXXXXXXXXXXXXXXXXXXXXXX
region = ap-southeast-1
output = json
You can use a different profile name. To use the profile, export it as a variable.
$ export AWS_PROFILE=k8s-admin
To verify, we can run the commands below:
$ aws configure list
$ aws sts get-caller-identity
Although the region is already set in the profile, we’ll also be using the region in many of the commands. We can save it as a variable.
$ export AWSREGION=ap-southeast-1
To use as an example later on, we can launch a simple cluster. But before we do that, let’s first verify that we’re using the main admin’s access keys.
$ aws sts get-caller-identity
{
"UserId": "AIDxxxxxxxxxxxxxx",
"Account": "1234567890",
"Arn": "arn:aws:iam::1234567890:user/k8s-admin"
}
For the cluster, we can reuse the eksops.yml file from the previous labs. Launch the cluster.
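The eksops.yml file itself comes from the previous labs and isn’t reproduced here; a minimal sketch of what it might look like, assuming a single managed nodegroup (the instance type and sizes below are assumptions), is:

```yaml
# eksops.yml - a minimal sketch; nodegroup values are assumptions
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eksops
  region: ap-southeast-1
managedNodeGroups:
  - name: eksops-ng
    instanceType: t3.medium
    desiredCapacity: 1
    minSize: 1
    maxSize: 2
```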
$ time eksctl create cluster -f eksops.yml
Check the nodes and pods.
$ kubectl get nodes
$ kubectl get pods -A
Let’s start with creating the new users in the IAM console. Note that we’ll be using our own k8s-admin user, which has AdministratorAccess.
Create the k8s-user-2 with no IAM permissions.
On the next page, set the following:
Click Next: permissions > Next: tags
Create the k8s-user-prodadmin with no IAM permissions. Repeat the same steps, but change the values. Make sure to download the CSV files and save the ARN.
For the username:
For the tags:
Do the same for k8s-user-prodviewer. Make sure to download the CSV files and save the ARN.
For the username:
For the tags:
Before we give the new IAM users cluster rights, let’s first test cluster access. Set a profile in the AWS credentials file, then add the access key ID and secret access key from the CSV file downloaded in the first step.
$ vim ~/.aws/credentials
[k8s-user-2]
aws_access_key_id = AKIAxxxxxxxxxxxxxxxxxxx
aws_secret_access_key = ABCDXXXXXXXXXXXXXXXXXXXXXXX
region = ap-southeast-1
output = json
[k8s-user-prodadmin]
aws_access_key_id = AKIAxxxxxxxxxxxxxxxxxxx
aws_secret_access_key = ABCDXXXXXXXXXXXXXXXXXXXXXXX
region = ap-southeast-1
output = json
[k8s-user-prodviewer]
aws_access_key_id = AKIAxxxxxxxxxxxxxxxxxxx
aws_secret_access_key = ABCDXXXXXXXXXXXXXXXXXXXXXXX
region = ap-southeast-1
output = json
To use the new profile, export it as a variable then check the identity again.
$ export AWS_PROFILE=k8s-user-2
$ aws sts get-caller-identity
{
"UserId": "AIDxxxxxxxxxxxxxx",
"Account": "1234567890",
"Arn": "arn:aws:iam::1234567890:user/k8s-user-2"
}
Test that the new user account still doesn’t have cluster access.
$ kubectl get nodes
error: You must be logged in to the server (Unauthorized)
$ kubectl get svc
error: You must be logged in to the server (Unauthorized)
Repeat the same for k8s-user-prodviewer.
$ export AWS_PROFILE=k8s-user-prodviewer
$ aws sts get-caller-identity
{
"UserId": "AIDxxxxxxxxxxxxxx",
"Account": "1234567890",
"Arn": "arn:aws:iam::1234567890:user/k8s-user-prodviewer"
}
$ kubectl get nodes
error: You must be logged in to the server (Unauthorized)
$ kubectl get svc
error: You must be logged in to the server (Unauthorized)
$ eksctl get nodegroup --cluster eksops
Error: unable to describe cluster control plane: operation error EKS: DescribeCluster, https response error StatusCode: 403, RequestID: 7778c7b0-e3ef-41e5-9b92-14c5b558ba22, api error AccessDeniedException: User: arn:aws:iam::848587260896:user/k8s-user-2 is not authorized to perform: eks:DescribeCluster on resource: arn:aws:eks:ap-southeast-1:848587260896:cluster/eksops
We now have three IAM users with no permissions to the AWS Console and no admin rights to the EKS cluster.
Switch back to our main k8s-admin admin account.
$ export AWS_PROFILE=k8s-admin
$ aws sts get-caller-identity
Verify that the ConfigMap exists in our cluster. This should return aws-auth.
$ kubectl -n kube-system get cm
Next is to edit the ConfigMap. We can edit it directly using the command below:
$ kubectl edit configmap aws-auth -n kube-system
Another approach is to print the ConfigMap in YAML format and store it in a file which we can edit later. We’ll proceed with this approach.
$ kubectl -n kube-system get configmap aws-auth -o yaml > aws-auth-configmap.yml
Edit the file and populate the mapUsers block. Replace userarn with the ARN of k8s-user-2.
$ vim aws-auth-configmap.yml
mapUsers: |
  - username: k8s-user-2
    userarn: arn:aws:iam::1234567890:user/k8s-user-2
    groups:
      - system:masters
Apply the changes.
$ kubectl -n kube-system apply -f aws-auth-configmap.yml
Check if the user was saved in the Configmap.
$ kubectl -n kube-system describe cm aws-auth
In the CLI, switch over to the profile of k8s-user-2. Check if the new user can now access the cluster.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-12-34.ap-southeast-1.compute.internal Ready <none> 80m v1.22.12-eks-ba74326
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.1.2 <none> 443/TCP 91m
Let’s now create a manifest that will deploy NGINX in the default namespace.
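The manifest isn’t shown in the lab; a minimal sketch matching the pod name used in the output below (nginx-demo) could be:

```yaml
# main-nginx.yml - a simple NGINX pod in the default namespace (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
```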
Apply the NGINX file.
$ kubectl apply -f main-nginx.yml
Verify that the pod was created.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-demo 1/1 Running 0 16s
Switch back to our main k8s-admin admin account.
$ export AWS_PROFILE=k8s-admin
$ aws sts get-caller-identity
Create the new namespace.
$ kubectl create ns prod
Verify.
$ kubectl get ns
Create the role-prodadmin.yml. Make sure to set the namespace field to prod.
Create the rolebind-prodadmin.yml. Add the username k8s-user-prodadmin in the name field of the subjects block.
Under roleRef, add the name of the role.
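The exact rules aren’t shown in the lab; a sketch of the two files, assuming full admin rights scoped to the prod namespace, might look like this:

```yaml
# role-prodadmin.yml - full access to all resources, scoped to prod (rules are an assumption)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-prodadmin
  namespace: prod
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
---
# rolebind-prodadmin.yml - binds the user mapped in aws-auth to the Role above
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rolebind-prodadmin
  namespace: prod
subjects:
  - kind: User
    name: k8s-user-prodadmin
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: role-prodadmin
  apiGroup: rbac.authorization.k8s.io
```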
Apply the role and rolebindings.
$ kubectl apply -f role-prodadmin.yml
$ kubectl apply -f rolebind-prodadmin.yml
Add the k8s-user-prodadmin to the Configmap.
$ kubectl edit configmap aws-auth -n kube-system
mapUsers: |
  - userarn: arn:aws:iam::848587260896:user/k8s-user-2
    username: k8s-user-2
    groups:
      - system:masters
  - userarn: arn:aws:iam::848587260896:user/k8s-user-prodadmin
    username: k8s-user-prodadmin
    groups:
      - role-prodadmin
Check if the user was saved in the Configmap.
$ kubectl -n kube-system describe cm aws-auth
Switch over to the new profile.
$ export AWS_PROFILE=k8s-user-prodadmin
$ aws sts get-caller-identity
Let’s test if the user is able to list the cluster’s nodes. This should return an error, since the user’s access is scoped to the prod namespace.
$ kubectl get nodes
error: You must be logged in to the server (Unauthorized)
Let’s now create a manifest that will deploy NGINX in the prod namespace.
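As with the earlier manifest, the file isn’t shown in the lab; a minimal sketch could be the following (the namespace is supplied with -n prod when applying):

```yaml
# prod-nginx.yml - same simple NGINX pod (sketch); namespace set via -n prod on apply
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
```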
Apply the manifest in the prod namespace.
$ kubectl apply -f prod-nginx.yml -n prod
pod/nginx-demo created
We now have NGINX running in two namespaces: in the default and in prod. k8s-user-prodadmin should only be able to access pods in the prod namespace.
$ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "k8s-user-prodadmin" cannot list resource "pods" in API group "" in the namespace "default"
$ kubectl get pods -n prod
NAME READY STATUS RESTARTS AGE
nginx-demo 1/1 Running 0 40s
Switch back to our main k8s-admin admin account.
$ export AWS_PROFILE=k8s-admin
$ aws sts get-caller-identity
Create the role-prodviewer.yml. Make sure to set the namespace field to prod.
Create the rolebind-prodviewer.yml. Add the username k8s-user-prodviewer in the name field of the subjects block.
Under roleRef, add the name of the role.
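Again, the exact rules aren’t shown; a sketch of the two files, assuming read-only verbs on common resources in the prod namespace (the apiGroups and verbs below are assumptions), might look like this:

```yaml
# role-prodviewer.yml - read-only access scoped to prod (rules are an assumption)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-prodviewer
  namespace: prod
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
---
# rolebind-prodviewer.yml - binds the user mapped in aws-auth to the Role above
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rolebind-prodviewer
  namespace: prod
subjects:
  - kind: User
    name: k8s-user-prodviewer
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: role-prodviewer
  apiGroup: rbac.authorization.k8s.io
```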
Apply the role and rolebindings.
$ kubectl apply -f role-prodviewer.yml
$ kubectl apply -f rolebind-prodviewer.yml
Next, edit the ConfigMap.
$ kubectl edit configmap aws-auth -n kube-system
Add k8s-user-prodviewer.
mapUsers: |
  - userarn: arn:aws:iam::848587260896:user/k8s-user-2
    username: k8s-user-2
    groups:
      - system:masters
  - userarn: arn:aws:iam::848587260896:user/k8s-user-prodadmin
    username: k8s-user-prodadmin
    groups:
      - role-prodadmin
  - userarn: arn:aws:iam::848587260896:user/k8s-user-prodviewer
    username: k8s-user-prodviewer
    groups:
      - role-prodviewer
Check if the user was saved in the Configmap.
$ kubectl -n kube-system describe cm aws-auth
We now have another user that has read access to the prod namespace only. Switch over to the new profile.
$ export AWS_PROFILE=k8s-user-prodviewer
$ aws sts get-caller-identity
Let’s test if this user is able to list the cluster’s nodes.
$ kubectl get nodes
Error from server (Forbidden): nodes is forbidden: User "k8s-user-prodviewer" cannot list resource "nodes" in API group "" at the cluster scope
Checking the pods in the default namespace also returns an error.
$ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "k8s-user-prodviewer" cannot list resource "pods" in API group "" in the namespace "default"
The user should be able to access pods in the prod namespace.
$ kubectl get pods -n prod
NAME READY STATUS RESTARTS AGE
nginx-demo 1/1 Running 0 10m52s
Recall that this user has read-only access. Let’s try to delete the pod.
$ kubectl delete pod nginx-demo
Error from server (Forbidden): pods "nginx-demo" is forbidden: User "k8s-user-prodviewer" cannot delete resource "pods" in API group "" in the namespace "default"
Right, we need to specify the namespace.
$ kubectl delete pod nginx-demo -n prod
Error from server (Forbidden): pods "nginx-demo" is forbidden: User "k8s-user-prodviewer" cannot delete resource "pods" in API group "" in the namespace "prod"
How about if we try to delete it by running the command below? This should delete all the NGINX pods (if there’s more than one pod) in the prod namespace.
$ kubectl delete -f prod-nginx.yml
Error from server (Forbidden): error when deleting "prod-nginx.yml": pods "nginx-demo" is forbidden: User "k8s-user-prodviewer" cannot delete resource "pods" in API group "" in the namespace "prod"
As we can see, k8s-user-prodviewer cannot perform any update or delete action on the running pods because it only has read access to the namespace.
Whew, that was a lot! Switch over to the main admin account.
$ export AWS_PROFILE=k8s-admin
$ aws sts get-caller-identity
Before we officially close this lab, make sure to destroy all resources to prevent incurring additional costs.
$ time eksctl delete cluster -f eksops.yml
Note that when you delete your cluster, make sure to double-check in the AWS Console that the CloudFormation stacks (which were created by eksctl) are dropped cleanly.