Everything You Need to Get Started With Kubernetes on AWS

Historically, the best-known community project for provisioning and managing Kubernetes clusters on AWS was kops. However, operating a Kubernetes control plane yourself isn’t an easy task; there are many things you need to manage. That’s why, in 2018, AWS announced the general availability of EKS, its managed Kubernetes service.

Initially, creating a Kubernetes cluster in EKS was difficult, so the folks at Weaveworks released a CLI tool called eksctl. Some time has passed since then, and it’s getting easier to create a Kubernetes cluster in EKS. For instance, you can get started using the AWS console, CloudFormation, or Terraform.

In this post, I include everything you need to know to get started with Kubernetes in AWS using EKS. I’ll start with how to create a cluster using the AWS console, as I believe it’s the easiest way to understand all the concepts. Then I’ll create a similar cluster using Terraform.

Let’s have some kubefun.


What’s EKS?

I don’t want to spend too much time here. As I explained previously, EKS is AWS’s managed Kubernetes offering. AWS takes care of the control plane, and you take care of the worker nodes. Nothing new here compared to the main AWS competitors, Azure and Google Cloud. However, in AWS, you pay $0.10 per hour for each cluster’s control plane, or around $75 per month. And of course, you continue paying the normal rate for all the resources you create, like load balancers, storage, or EC2 instances.

EKS integrates very well with other AWS services: IAM to manage users, the VPC for native networking, and the AWS ALB for ingress objects. Additionally, you can integrate EKS with Fargate to create pods on demand without having to provision EC2 worker nodes. There are many more topics I could cover, but today I’ll focus only on how to get started with EKS. You can learn more about EKS on its official docs site.

Let’s get into the details on how to get started creating Kubernetes clusters using EKS.

Prerequisites

You need to spend some time preparing your environment for this guide, especially when configuring the IAM user or role that will create the EKS cluster. But don’t worry. I’ll give you the details of everything you need to do in your AWS account. However, you do need some basic knowledge of AWS services like VPC, CloudFormation, and IAM. You can find every template I’ll use on GitHub in case something isn’t crystal clear.

So here’s the list of steps you need to follow before you can start with this guide:

  1. Create or use a VPC with internet access. I’d recommend you use the CloudFormation template on GitHub because it already includes the subnet tags EKS needs to create load balancers.
  2. Have an IAM user or role with the minimum permissions to administer an EKS cluster. You can find the policy template you need on GitHub. This step is super important because AWS adds this IAM identity to the Kubernetes RBAC authorization table as the administrator. No other entity will have access to the cluster initially. You can use this identity to add other administrator users to the cluster later.
  3. Install the latest version of the AWS CLI and configure the credentials with the IAM identity from the previous step. I had a few problems when trying to connect to the cluster with kubectl, so make sure you’re using the same IAM identity to create and connect to the cluster.
  4. Create an IAM role that the EKS cluster will use to create AWS resources like an ALB. You can find the CloudFormation template on GitHub that includes the minimum permissions for this role.
  5. Install the latest version of kubectl.
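Before moving on, it’s worth a quick sanity check of the tooling. Assuming the AWS CLI and kubectl are already on your PATH, the following commands print the installed versions and confirm which IAM identity your credentials resolve to (it should be the EKS administrator from step 2):

aws --version
aws sts get-caller-identity
kubectl version --client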

Once you have everything from the above list ready, you can continue. You can use the AWS console, Terraform, CloudFormation, or even the CLI to provision a cluster. In this guide, I’ll start with the console to explain critical concepts graphically. Then I’ll use Terraform to provision a cluster in an automated fashion.

Create an EKS Cluster With the AWS Console

1. Create the EKS Cluster

Head over to the EKS console, and make sure you’re in the “Amazon EKS” section (1 in the graphic below). Then type the name you want to use for the cluster (2), and click on the “Next step” button (3).

2. General Configuration

This page is where you reference the resources you created in the prerequisites section. By default, AWS selects the latest available Kubernetes version, but you can change it if you want. You can also change the Kubernetes version later, but upgrading isn’t simple, so choose wisely. Then, from the “Role name” list, choose the IAM role you created previously for the EKS cluster.

3. Networking Configuration

Scroll down a little bit, and you’ll see the “Networking” section. Make sure you pick the VPC you created previously, or choose an existing one. Then select the subnets, both public and private. EKS uses these to create resources in the proper subnets when you create public or private load balancers. If you expect to create only private resources, then choose only private subnets. You can learn more about the VPC considerations in the AWS documentation.

4. Security Groups Configuration

The CloudFormation template for the VPC on GitHub creates a security group for the cluster. The minimum requirement is to allow all outbound access; you don’t need to configure any inbound rules. AWS assigns this security group to the control plane servers. Use it later as the source for the worker nodes’ security group rules. AWS guarantees that initially only the control plane can communicate with the worker nodes. You can change this behavior for the worker nodes; more on this later.

Additionally, here’s where you specify whether the Kubernetes API endpoint has public or private access. My recommendation is not to enable public access to the control plane, for security reasons. You can use a bastion host, a VPN, or Direct Connect to communicate with the cluster privately.
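By the way, everything you’ve configured so far in the console can also be expressed as a single AWS CLI call. The following is only a sketch: the subnet IDs, security group ID, and role ARN are placeholders you’d swap for the values from the prerequisites, and it creates a private-only endpoint like the one I described above:

aws eks create-cluster --name eks-101 \
  --kubernetes-version 1.15 \
  --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc,endpointPublicAccess=false,endpointPrivateAccess=true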

5. Configure Encryption and Logging

You can configure a KMS key to encrypt the secrets you create in the cluster. I highly recommend it, but I’m not covering it in this guide. Additionally, you can enable different types of logging, and AWS will send them to CloudWatch. For instance, you can use these logs to troubleshoot connection problems to the cluster. For now, I won’t enable any logs, but you can change this later. You can then centralize all of these logs, together with logs from your other resources, in our Scalyr platform for a wider perspective.
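And if you change your mind about logging after the cluster exists, you don’t have to recreate anything. As a rough example, assuming the cluster name and region from this guide, you can toggle the control plane logs with the AWS CLI:

aws eks update-cluster-config --name eks-101 --region eu-west-1 \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator"],"enabled":true}]}'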

6. Configure Tags

Finally, you have the option to add any tag to the cluster to allocate costs or find resources quickly.

To create the cluster, click on the “Create” button. AWS will take around 10 to 15 minutes to finish.
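You can watch the cluster state from the console, or query it with the CLI (using your own cluster name and region); the status changes from CREATING to ACTIVE when it’s ready:

aws eks describe-cluster --name eks-101 --region eu-west-1 --query cluster.status --output text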

7. Connect to Cluster

Once the cluster is in the “Active” state, you can connect to it. Notice that I created a cluster with private access. So in my case, I had to create a bastion host: a simple t2.micro instance running Amazon Linux 2 in a public subnet that I can SSH into from my workstation. There, I installed the latest version of the AWS CLI and configured it with the credentials of the IAM user I used to create the cluster.

To connect to a Kubernetes cluster, you need to generate a kubeconfig file. You can generate it with the AWS CLI by running the following command (but make sure you change the region and cluster name to the ones you used):

aws eks --region eu-west-1 update-kubeconfig --name eks-101

To confirm that everything is working, you can run a kubectl command:
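For example, listing the services in the default namespace should return the built-in kubernetes service, even though the cluster has no worker nodes yet:

kubectl get svc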

8. Create Managed Worker Nodes

The cluster alone won’t be enough. You need worker nodes so Kubernetes can schedule pods. I’m going to create managed worker nodes from the AWS console. Managed worker nodes mean that AWS takes care of provisioning the EC2 instances in your account. You can still access the nodes or configure the user data, but AWS takes care of all the “hard” parts for you. If you prefer, you can launch your own worker nodes with CloudFormation or Terraform, but if you do it that way, you won’t see them in the EKS console. I use this self-managed approach with Terraform later in this guide.

8.1 Create an IAM Role

Before you start, you need to create an IAM role for the worker nodes. You can use the CloudFormation template on GitHub, which includes the minimum set of permissions the worker nodes need. Take note of the role name. You’ll need it when creating the worker node group.

8.2 Add a Node Group From the EKS Console

Head over to the EKS console, and click on the cluster you created previously. Scroll down a little bit, and you’ll see a section for “Node Groups.” Then click on the “Add Node Group” button (1).

8.3 Configure the Name, IAM Role, Subnet, and SSH Access

On the following page, you can configure the general details for the worker nodes like the name, subnets, and the IAM role you created before.

You can also enable SSH access to the worker nodes in case you need to troubleshoot a problem and SSH into the instances. You need to select an SSH key pair and the security groups you want to allow SSH access from. Then click on the “Next” button.

8.4 Configure Instance Details

On this screen, you can configure the AMI to use, the instance type, and the disk size for each node, which needs to be big enough to store container images and temporary logs. Click on the “Next” button.

8.5 Configure Scaling Parameters

Under the hood, AWS creates an Auto Scaling group, so you can configure the minimum, maximum, and desired size. However, you’ll need to configure the Kubernetes Cluster Autoscaler if you want the worker nodes to scale automatically. Once you’ve entered the details, click on the “Next” button.

8.6 Review Node Group Details

Confirm that you’ve entered the proper configuration, and at the bottom of the page, click on the “Create” button.
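If you’d rather script this part, the console steps above map roughly to a single AWS CLI call. Treat this as a sketch: the node role ARN, subnet IDs, instance type, and key pair name are placeholders you’d replace with your own values:

aws eks create-nodegroup --cluster-name eks-101 \
  --nodegroup-name workers \
  --node-role arn:aws:iam::111122223333:role/eks-node-role \
  --subnets subnet-aaaa subnet-bbbb \
  --instance-types t3.medium \
  --disk-size 20 \
  --scaling-config minSize=1,maxSize=3,desiredSize=2 \
  --remote-access ec2SshKey=my-key-pair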

AWS will take around 10 minutes to have the worker nodes ready. You can always check the status of the instances on the EC2 console page, or monitor the progress on the EKS page.

Once the worker nodes are running, you can confirm that they’re registered with the Kubernetes cluster. If you configured SSH access, you should be able to connect to the instances as well.
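For example, the nodes should appear with a Ready status:

kubectl get nodes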

And that’s it. You can start using your Kubernetes cluster and deploy your applications in it.
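If you want a quick smoke test, and assuming your subnets are tagged for load balancers as described in the prerequisites, you could deploy something simple and expose it through an AWS load balancer:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=LoadBalancer --port=80
kubectl get svc nginx

After a few minutes, the EXTERNAL-IP column shows the DNS name of the load balancer AWS created for the service.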

Create an EKS Cluster Using Terraform

Another way to create an EKS cluster is by using Terraform. If you’ve followed the previous section and created the cluster using the AWS console, this section will be pretty straightforward. All the Terraform templates I’m using are on GitHub as well. In this case, I’ll use self-managed worker nodes. You won’t see the node group in the EKS console, but you’ll have more control over the nodes through the Terraform templates. Also, I’m going to cover only the concepts you need to get started. If you want to go deeper, take a look at the official Terraform site.

I include the steps you need to follow below. Let’s get started.

1. Prerequisites

You’ll still need to follow the general prerequisites from the beginning of this guide. Here are the additional prerequisites specific to the Terraform approach.

  • Download and install the latest version of the Terraform binary.
  • Attach the IAM permissions Terraform needs to the user or role you created to administer EKS clusters. To do so, create a new IAM policy. You can find the JSON policy template you need on GitHub. Then attach this policy to the EKS administrator user or role.
  • Make sure you configure the AWS CLI with the credentials of that user or role.

In the set of templates I’m using, I include the template to create the VPC in case you want to do everything from Terraform. I believe it’s easier to remove things than to find out how to add specific configurations. You can adapt the templates as you see fit for your use case.

2. Initialize Terraform

To follow this guide smoothly, it’s better if you clone the GitHub repository and change directory (cd) to the Terraform folder that includes all the templates you need. To do so, run the following commands:

git clone https://github.com/christianhxc/kubernetes-aws-eks
cd kubernetes-aws-eks/terraform

Then run the command below to download all the dependencies, like the Terraform EKS module:

terraform init

You should see something like this:

3. Create the EKS Cluster and Worker Nodes

Notice that the main template to create the EKS cluster, including the worker nodes, is as simple as this:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = local.cluster_name
  cluster_version = "1.15"
  vpc_id          = module.vpc.vpc_id
  subnets         = module.vpc.private_subnets

  worker_groups = [
    {
      name                          = "worker-group-1"
      instance_type                 = "t2.small"
      additional_userdata           = "echo foo bar"
      asg_desired_capacity          = 2
      additional_security_group_ids = [aws_security_group.worker_group_mgmt_one.id]
    }
  ]
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}
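Before creating anything, you can optionally preview the execution plan Terraform builds from these templates:

terraform plan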

To create all the resources, including the Kubernetes cluster, the worker nodes, the VPC, subnets, and security groups, simply run the following command and confirm that you want to create all 47 resources:

terraform apply

You should see something like this:

AWS will take around 10 to 15 minutes to have everything ready, so keep calm and relax.

When it’s done, you should be able to see the outputs of the template, like this:
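If you need those values again later, for instance, the generated cluster name for the next step, you can print them at any time with:

terraform output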

4. Connect to the Cluster

You should be able to connect to the cluster now. In this case, I created a cluster with public access, even though I wouldn’t recommend it for security reasons, not even for a development environment. However, for the sake of starting quickly and learning, it’s OK. I’m assuming you’re going to use the same workstation you used to create the cluster, so there’s no need to configure the AWS CLI again; it’s already using the credentials for the EKS admin IAM user or role.

You need a kubeconfig file to connect to the cluster. To generate one, use the AWS CLI. If you already have a kubeconfig file, the CLI will append the data and switch the context to the new cluster. Simply run the following command (and make sure you use the region and cluster name from the Terraform outputs in the previous step):

aws eks --region eu-west-1 update-kubeconfig --name eks-terraform-HgHCjhRM

You should now be able to run any kubectl command, like this:
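For example, you can list the worker nodes that the worker group created:

kubectl get nodes -o wide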

And that’s it. You can now deploy your applications to Kubernetes in AWS.

What’s Next?

There are other ways to provision EKS clusters. For instance, I didn’t cover CloudFormation, but there’s a GitHub repository with all the templates you need to create a cluster, including a bastion host, among other resources. There’s also the CLI called eksctl from Weaveworks, which AWS references several times in their documentation.

If you want to know how other things such as ingress or persistent storage work in EKS, I’d recommend you visit the official EKS documentation. The AWS team is working hard on including as much information as possible, and the guides or blog posts are very informative. Finally, there’s a complete workshop you can follow not only to get started with EKS but also to configure more complicated things like a service mesh using Istio.