How to Deploy a Dockerized App to Amazon EKS
Introduction

New month, new challenge. This month I will be learning how to deploy a Dockerized app to Amazon EKS, using the AWS EKS Workshop as a guide and the AWS CLI to complete the challenge. Note that this is meant to be a high-level overview of the steps involved in deploying a Dockerized app to Amazon EKS; for more details, please refer to the AWS EKS Workshop and AWS CLI documentation. Now let's get started.

Overview

The diagram below shows the steps involved in deploying a Dockerized app to Amazon EKS.

[Diagram: Steps to deploy a Dockerized app to Amazon EKS]

Now let's highlight some of the key terms and concepts involved in deploying a Dockerized app to Amazon EKS:

Docker

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. This ensures the application runs on any other machine, provided that machine runs Docker.

Dockerfile

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession.

Docker Hub

Docker Hub is a cloud-based registry service that allows you to link to code repositories, build and test your images, store manually pushed images, and deploy images to your hosts. It provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.

Amazon EKS

Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service. Customers such as Intel, Snap, Intuit, GoDaddy, and Autodesk trust EKS to run their most sensitive and mission critical applications because of its security, reliability, and scalability.

Kubernetes

Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

Kubernetes Manifests

Kubernetes manifests are files that describe the desired state of a Kubernetes resource. Manifests can be written in YAML or JSON. They can be used to create, modify, and delete Kubernetes resources such as pods, deployments, and services.

kubectl

The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.

Amazon CloudWatch

Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.

Amazon CloudWatch Logs

Amazon CloudWatch Logs is a service for ingesting, storing, and accessing your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources. You can then retrieve the associated log data from CloudWatch Logs using the CloudWatch console, CloudWatch Logs commands in the AWS CLI, CloudWatch Logs API, or CloudWatch Logs SDK.

Amazon CloudWatch Metrics

Amazon CloudWatch Metrics are the fundamental concept in CloudWatch. A metric represents a time-ordered set of data points that are published to CloudWatch. Think of a metric as a variable to monitor, and the data points as representing the values of that variable over time. For example, the CPU usage of a particular EC2 instance is one metric provided by Amazon EC2. The data points themselves can come from any application or business activity from which you collect data.

Amazon Load Balancer

Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. Elastic Load Balancing offers three types of load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault tolerant.

Now that we have a high-level overview of the steps involved in deploying a Dockerized app to Amazon EKS, let's dive into the details.

Step 1 - Build and push Docker image to Docker Hub

Create a Dockerfile

The first step is to create a Dockerfile that defines the Docker image to build. This includes specifying a base image, copying application code into the image, installing dependencies, exposing ports, and defining the startup command. The Dockerfile allows Docker to automate building repeatable and consistent images for your application.
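As a minimal sketch, a Dockerfile for a hypothetical Node.js app might look like the following (the base image, port, and startup command are assumptions; adapt them to your runtime):

```dockerfile
# Base image (assumption: a Node.js app; swap for your runtime)
FROM node:18-alpine
WORKDIR /app
# Copy dependency manifests first to take advantage of Docker layer caching
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the application code into the image
COPY . .
# Port the app listens on (assumption)
EXPOSE 3000
# Startup command
CMD ["node", "server.js"]
```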

Push image to registry

Once the Dockerfile is created, you can build the Docker image locally and push it to a registry like Docker Hub. This allows your EKS cluster nodes to pull down the image when deploying the application later. Make sure to tag your image appropriately and push to a repository you have access to.
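The build, tag, and push steps above might look like this on the command line (the image name my-app and the Docker Hub username myuser are placeholders):

```shell
# Build the image locally (run from the directory containing the Dockerfile)
docker build -t my-app:1.0 .

# Tag it for your Docker Hub repository (replace "myuser" with your username)
docker tag my-app:1.0 myuser/my-app:1.0

# Log in and push to Docker Hub
docker login
docker push myuser/my-app:1.0
```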

Step 2 - Create an EKS cluster on AWS

Provision EKS control plane

Use the AWS console or AWS CLI to create a new Amazon EKS cluster. This provisions the Kubernetes control plane and underlying EC2 instances needed to run the master nodes. Make sure to specify the region, instance type and size for the cluster based on your application needs.
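One common way to do this is with eksctl, the CLI tool used throughout the EKS Workshop. A sketch, with the cluster name and region as placeholders:

```shell
# Create an EKS control plane only; worker nodes are added in the next step
eksctl create cluster \
  --name my-cluster \
  --region us-east-1 \
  --without-nodegroup
```

This provisions the Kubernetes control plane and updates your local kubeconfig so kubectl can talk to the new cluster.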

Add worker nodes

Once the control plane is created, you need to add worker nodes to run your Docker containers. Create a node group using EC2 instances that can connect to the EKS master endpoints. The worker nodes will join the cluster automatically using the provided configuration.
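With eksctl, adding a managed node group could look like this (instance type and node counts are placeholders; size them for your workload):

```shell
# Add a managed node group of EC2 worker nodes to the cluster
eksctl create nodegroup \
  --cluster my-cluster \
  --region us-east-1 \
  --name my-workers \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 2 \
  --nodes-max 4
```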

Step 3 - Deploy Kubernetes manifests to EKS

Create Kubernetes deployment manifests

Once the EKS cluster is up and running, Kubernetes deployment manifests need to be created to deploy the Dockerized application. The manifests define Kubernetes resources like Deployments, Services, Ingresses etc that run and expose the application containers. Use config files to specify resource details like container image, replicas, ports and more.
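A minimal Deployment manifest for the app pushed earlier might look like this (the image name, labels, and port are placeholders that must match your own setup):

```yaml
# deployment.yaml - a minimal Deployment for the Dockerized app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: myuser/my-app:1.0   # must match the image pushed to the registry
          ports:
            - containerPort: 3000    # must match the port the app listens on
```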

Apply manifests for app deployment

With the manifests defined, use the kubectl tool to apply them to the EKS cluster. This will create the resources on the worker nodes and pull the application Docker image from the registry to run. Make sure the container image path and tags match in the manifests. Check that pods and services are created successfully.
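Applying and verifying the manifests is a couple of kubectl commands (assuming a deployment.yaml like the sketch above):

```shell
# Apply the manifests to the cluster
kubectl apply -f deployment.yaml

# Verify that the pods and services were created successfully
kubectl get pods
kubectl get services
```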

Step 4 - Setup load balancing and auto scaling

Configure LoadBalancer service

To distribute traffic across the application pods in the EKS cluster, you can create a LoadBalancer service. This will provision an Elastic Load Balancer in AWS that targets all pods of the deployment across worker nodes. Configure load balancing by exposing the necessary application ports.
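A sketch of such a Service manifest, assuming the Deployment labels and container port used earlier:

```yaml
# service.yaml - exposes the Deployment through an AWS Elastic Load Balancer
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer    # tells Kubernetes to provision an ELB in AWS
  selector:
    app: my-app         # must match the Deployment's pod labels
  ports:
    - port: 80          # external port on the load balancer
      targetPort: 3000  # container port the traffic is forwarded to
```

Once applied, kubectl get service my-app shows the external DNS name of the provisioned load balancer.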

Set up cluster autoscaling

To handle varying traffic levels automatically, enable the cluster autoscaler for EKS. This allows worker nodes to scale out and in based on pod resource demands. Make sure node machine types can be provisioned quickly to adapt to traffic spikes. Set autoscaling thresholds appropriately for cost optimization. Monitor scale events in CloudWatch.
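One way to enable this is to deploy the upstream Kubernetes Cluster Autoscaler into the cluster; a sketch, assuming your node group already has the IAM permissions and Auto Scaling group tags the autoscaler requires:

```shell
# Deploy the Cluster Autoscaler from the upstream AWS example manifest
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

# Edit the deployment to point auto-discovery at your cluster's node groups
kubectl -n kube-system edit deployment cluster-autoscaler
```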

Step 5 - Monitor logs and metrics

View application and system logs

It is important to monitor your EKS cluster by viewing logs and metrics after deploying the application. Application logs provide visibility into functionality, errors and performance. Check CloudWatch Logs for output from the pods and nodes. Debug issues by searching for error codes and messages. Understand usage by analyzing request volume and types.
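For example, you can tail pod logs directly with kubectl, or query CloudWatch Logs with the AWS CLI (the log group name below assumes Container Insights is enabled for a cluster named my-cluster):

```shell
# Tail logs for the application pods directly
kubectl logs -f deployment/my-app

# Or, with Container Insights enabled, tail the application log group in CloudWatch
aws logs tail /aws/containerinsights/my-cluster/application --follow
```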

Set up custom metrics with CloudWatch

In addition to logs, make use of CloudWatch Metrics to track custom statistics for your application and cluster resources. Set up custom metrics for application response times, error rates, and throughput. Monitor EKS metrics like pod utilization, node capacity and more. Set CloudWatch alarms to notify if critical metrics exceed thresholds, indicating a need to scale or investigate issues.
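As a sketch, publishing a custom metric and alarming on it with the AWS CLI might look like this (the namespace, metric name, thresholds, and SNS topic ARN are all placeholders):

```shell
# Publish a custom metric data point for the application
aws cloudwatch put-metric-data \
  --namespace "MyApp" \
  --metric-name ErrorRate \
  --value 0.5

# Alarm when the 5-minute average error rate exceeds the threshold
aws cloudwatch put-metric-alarm \
  --alarm-name my-app-error-rate \
  --namespace "MyApp" \
  --metric-name ErrorRate \
  --statistic Average \
  --period 300 \
  --threshold 5 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 2 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-topic
```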

Conclusion

In this challenge, I learned how to deploy a Dockerized app to Amazon EKS. I learned how to create a Dockerfile, push a Docker image to Docker Hub, create an EKS cluster on AWS, add worker nodes, deploy Kubernetes manifests to EKS, set up load balancing and auto scaling, and monitor logs and metrics. I hope you found this challenge useful. If you have any questions or feedback, please leave a comment below. You can also reach out to me on Twitter or LinkedIn. If you found this article helpful, feel free to share it with others.

Buy me a coffee here.


References