EKS complete with eksctl / Sprint 1

Marc Enschede
4 min read · Jul 27, 2022


Setting up a Kubernetes cluster is a lot of work. Thanks to the guys and gals at Amazon we have EKS, a managed Kubernetes-in-the-cloud solution. Using EKS saves us a whole lot of time and headaches. But EKS still has to be configured.

In this set of stories I’m going to share my EKS configuration with you. I’m going to use Flux v2, an Elastic Load Balancer, Crossplane, a cloud database, Route53, Cluster Autoscalers, Serverless containers, certificate management, eksctl, AWS IAM integration and a whole lot more.

Sprint 1

The sprint goal of the first story is to create a Spring Boot application running on an EKS cluster, just saying “Hello” to the world.

Prerequisites

I assume that you have basic knowledge of Kubernetes and Bash/Zsh shells, and that you know how to handle Git and Git submodules.

Setting up the project repo

In this article we are going to use two repos on GitHub: the parent repo that will contain the EKS scripts, and a sub repo that will contain the Spring Boot app.

git clone https://github.com/enschede/eks-demo.git
cd eks-demo
git checkout tags/sprint1
git submodule update --init

The commands above clone the repo, check out the tag “sprint1” and pull in the submodule.

The eks-demo-app

The app is a Spring Boot app using Kotlin and Maven. It contains the web and actuator dependencies and has a GET endpoint /hello that says hello to everybody in this world.
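The app could look something like this (a minimal sketch of my own; the class names are assumptions, the actual code lives in the eks-demo-app repo):

import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController

@SpringBootApplication
class EksDemoApplication

@RestController
class HelloController {

    // GET /hello returns a plain-text greeting
    @GetMapping("/hello")
    fun hello(): String = "Hello, world!"
}

fun main(args: Array<String>) {
    runApplication<EksDemoApplication>(*args)
}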

The pom file makes sure that a Docker image can be built with Maven.
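The relevant part is the Spring Boot Maven plugin. A minimal sketch, assuming the image name we push below:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <image>
            <!-- name of the image produced by spring-boot:build-image -->
            <name>enschede/eks-demo-app:sprint1</name>
        </image>
    </configuration>
</plugin>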

Building and pushing the app is done with the following commands:

mvn spring-boot:build-image
docker push enschede/eks-demo-app:sprint1

The Spring Boot app is stored in the repo enschede/eks-demo-app and is configured as a submodule of the eks-demo project.

Starting my EKS cluster with eksctl

To start, stop and configure the cluster we need the eksctl tool. It can be installed (on a Mac) using brew install eksctl.

Configuring and starting an EKS cluster is fairly simple. Let’s first go through the configuration file.
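A minimal sketch of such a file; the cluster name, instance type and node count are my assumptions, the region matches the node address we will see later on:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-demo        # assumed cluster name
  region: us-west-2

nodeGroups:
  - name: ng-1
    instanceType: t3.medium   # assumed instance type
    desiredCapacity: 1
    ssh:
      allow: true             # allows SSH access to the node
    iam:
      withAddonPolicies:
        cloudWatch: true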

The iam.withAddonPolicies.cloudWatch setting tells eksctl that the nodes will have permission to write to CloudWatch.

Starting and stopping the cluster is done with two small scripts.

start.sh
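A sketch of what the start script could contain (the configuration file name is an assumption):

#!/bin/sh
# create the cluster from the eksctl configuration file
eksctl create cluster -f cluster.yaml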

stop.sh
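And its counterpart, with the same assumption on the file name:

#!/bin/sh
# tear the cluster down again
eksctl delete cluster -f cluster.yaml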

The cluster can now be started using the start script.

Using k9s

After starting the cluster using the start.sh command, we will use k9s to see what pods are running in the cluster.

k9s is a text-oriented tool to dive into a Kubernetes cluster very easily. It can be installed (on a Mac) using brew install k9s and started by typing k9s. k9s is inspired by vi, so to select the pods screen type

:pod

This will show a screen like this

K9s pod screen

Running the app

Now that we have pushed the Spring Boot Docker image and started the EKS cluster, it is time to run the app on EKS. For this we have a deployment and a service descriptor.

Here is the deployment.yaml descriptor.
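A minimal sketch of what it could look like, assuming the image we pushed earlier and the Spring Boot default port 8080:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: eks-demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: eks-demo-app
  template:
    metadata:
      labels:
        app: eks-demo-app
    spec:
      containers:
        - name: eks-demo-app
          image: enschede/eks-demo-app:sprint1
          ports:
            - containerPort: 8080   # Spring Boot default port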

And the service.yaml descriptor
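Again a sketch; the nodePort 30000 matches what we will see in k9s, the other ports are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: eks-demo-app
spec:
  type: NodePort
  selector:
    app: eks-demo-app
  ports:
    - port: 80          # assumed service port
      targetPort: 8080  # container port of the Spring Boot app
      nodePort: 30000   # the port we will open up later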

Starting the app

kubectl apply -f deployment/deployment.yaml
kubectl apply -f deployment/service.yaml

Now that our app is deployed, we can watch it running using k9s. By typing :svc we can see the service running on nodePort 30000.

k9s :svc screen

By typing :node in k9s we can see the node running, and by pressing d (for describe) we can find more information on the node.

From the node screen in k9s we copy the external DNS name, from the service screen we pick the nodePort, and from the @GetMapping in the controller we take the endpoint path. We can compose these into a URL:

http://ec2-34-219-77-37.us-west-2.compute.amazonaws.com:30000/hello

Are we there? Not quite: we cannot reach the node from the outside world yet. Therefore I altered the security group that allows SSH access from the outside world, so that it allows TCP traffic to port 30000 as well.

Adding TCP traffic to port 30000 in the security group
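If you prefer the CLI over the console, the same rule could be added like this (the security group ID is hypothetical; yours will differ):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 30000 \
  --cidr 0.0.0.0/0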

And finally, we can reach our app from the outside world.
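For example, using curl (the hostname will be different for your cluster):

curl http://ec2-34-219-77-37.us-west-2.compute.amazonaws.com:30000/hello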

Retro

Plus: the app is running and we can reach it.

Minus: looking up the address of the node and altering a security group is not a solution to be proud of. Using an ingress router and a DNS name is a better solution.

Minus: applying the descriptor files by hand is not a nice solution. We have to go GitOps.

In the next sprint we will implement an ingress router, based on an AWS load balancer.

In later sprints we will go on with GitOps, Crossplane, IPv6 and all other cool stuff.

Useful commands

eksctl

  • eksctl create cluster
  • eksctl delete cluster
  • eksctl get cluster
  • eksctl get cluster <clustername>

k9s

  • :pod
  • :svc
  • :deploy
  • :quit
