In this lab, we’ll use a sample counter application that increments and prints a counter. The application is split into four containers across three tiers:
Application tier - a Node.js application server that accepts POST requests to increment the counter and GET requests to retrieve its value
Data tier - a Redis database that stores the counter data
Support tier - a poller that continuously makes GET requests and a counter that continuously makes POST requests
Before we start, let’s first verify that we’re using the correct IAM user’s access keys. This should be the user we created in the prerequisites section above.
$ aws sts get-caller-identity
{
"UserId": "AIDxxxxxxxxxxxxxx",
"Account": "1234567890",
"Arn": "arn:aws:iam::1234567890:user/k8s-admin"
}
For the cluster, we can reuse the eksops.yml file from the previous labs.
Launch the cluster.
time eksctl create cluster -f eksops.yml
Check the nodes.
kubectl get nodes
Save the cluster name, region, and AWS account ID in variables. We’ll use these in many of the commands later.
MYREGION=ap-southeast-1
MYCLUSTER=eksops
MYAWSID=$(aws sts get-caller-identity | python3 -c "import sys,json; print(json.load(sys.stdin)['Account'])")
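To make the one-liner’s parsing concrete, here is the same Account extraction as a standalone Python sketch, fed with the sample JSON shown earlier (the IDs are placeholder values, not a real account):

```python
import json

# Sample output of `aws sts get-caller-identity` (placeholder values)
caller_identity = """
{
    "UserId": "AIDxxxxxxxxxxxxxx",
    "Account": "1234567890",
    "Arn": "arn:aws:iam::1234567890:user/k8s-admin"
}
"""

# The shell one-liner does exactly this: parse the JSON and print the Account field
account_id = json.loads(caller_identity)["Account"]
print(account_id)  # 1234567890
```
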
Namespaces separate resources according to users, environments, or applications. To secure access to a namespace, we can use Role-based access control (RBAC).
All the YAML files are inside the manifests directory.
cd manifests
We’ll use namespace.yml to create the microservices namespace which will be tagged with the “app: counter” label.
apiVersion: v1
kind: Namespace
metadata:
  name: microservices
  labels:
    app: counter
Apply.
kubectl apply -f namespace.yml
To get the namespaces:
kubectl get ns
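To illustrate the RBAC mentioned above, here is a hedged sketch of a Role and RoleBinding granting read-only Pod access in the microservices namespace. The pod-reader name and the k8s-admin subject are illustrative assumptions; adjust them to your own users.

```yaml
# Illustrative sketch: read-only access to Pods in the microservices namespace
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: microservices
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: microservices
  name: pod-reader-binding
subjects:
- kind: User
  name: k8s-admin            # illustrative subject; bind your own user or group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```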
Next, we’ll use multi-containers.yml to create a Pod with three containers.
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: microservices
spec:
  containers:
  - name: redis
    image: redis:latest
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 6379
  - name: server
    image: lrakai/microservices:server-v1
    ports:
    - containerPort: 8080
    env:
    - name: REDIS_URL
      value: redis://localhost:6379
  - name: counter
    image: lrakai/microservices:counter-v1
    env:
    - name: API_URL
      value: http://localhost:8080
  - name: poller
    image: lrakai/microservices:poller-v1
    env:
    - name: API_URL
      value: http://localhost:8080
The first container is a Redis container that pulls its image from Docker Hub. Because it uses the latest tag, Kubernetes would by default pull the image every time the Pod starts (imagePullPolicy: Always). That can silently introduce changes or bugs between restarts, so as a precaution we set imagePullPolicy: IfNotPresent to pull the image only if it doesn’t already exist on the node.
The second container uses a public image and exposes port 8080. It uses the REDIS_URL environment variable to connect to the backend database, so the server can find the Redis database on port 6379. Since a Pod has a single IP address shared by all of its containers, the containers talk to each other over localhost.
The third and fourth containers, the counter and the poller, use different tags of the same lrakai/microservices image. Their API_URL environment variables tell each of them to find the server on port 8080 over localhost.
Launch the Pod.
kubectl apply -f multi-containers.yml
To check the Pod:
kubectl get pod -n microservices
Not scalable
Scaling the application would require scaling out the Pods. Since every replica of this Pod contains all four containers, the count of every container grows in lockstep as the Pod scales, even for containers that don’t need more capacity. This is not ideal.
A much better approach is to have each container scaled independently by putting them in different Pods.
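As a hypothetical sketch (not part of this lab’s manifests), the server container could run in its own Deployment and scale on its own. The replica count and the redis Service name in REDIS_URL are assumptions for illustration:

```yaml
# Illustrative sketch: the server container in its own Deployment,
# scaling independently of redis, counter, and poller
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
  namespace: microservices
spec:
  replicas: 3
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
      - name: server
        image: lrakai/microservices:server-v1
        ports:
        - containerPort: 8080
        env:
        - name: REDIS_URL
          # Assumes a Service named redis in this namespace (illustrative)
          value: redis://redis:6379
```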
Logs are events written to standard output or standard error in the container. They let us verify that the application is working as expected.
To view the logs of the counter container:
kubectl logs app -c counter \
--tail 10 \
-n microservices
It should return:
Incrementing counter by 2 ...
Incrementing counter by 3 ...
Incrementing counter by 5 ...
Incrementing counter by 8 ...
Incrementing counter by 10 ...
Incrementing counter by 9 ...
Incrementing counter by 2 ...
Incrementing counter by 3 ...
Incrementing counter by 8 ...
Incrementing counter by 2 ...
To see the logs for the poller container, we can also use the “-f” flag, which streams the logs in real time.
kubectl logs -f app -c poller \
-n microservices
Make sure to delete the cluster after the lab to save costs.
$ time eksctl delete cluster -f eksops.yml
When you delete your cluster, double-check in the AWS Console that the CloudFormation stacks created by eksctl were deleted cleanly.