Setting up EKS with Terraform 

Provisioning Amazon EKS (Elastic Kubernetes Service) using Terraform automates the process of building scalable Kubernetes clusters on AWS. Terraform acts as an Infrastructure as Code (IaC) tool that enables the definition, provisioning, and management of EKS clusters through declarative configurations. 

To create a functional EKS cluster, you need to define a few key components such as the VPC, subnets, IAM roles, security groups, and node groups. You describe these components in .tf configuration files, which Terraform then applies to provision the infrastructure.


How to Set Up EKS with Terraform?

Before diving into the configuration files, it’s essential to have a few prerequisites in place. First, install the AWS CLI, which will allow you to interact with AWS services. Additionally, the Terraform binary needs to be installed on your system.

1. Install AWS CLI

# For macOS
brew install awscli
# For Ubuntu/Debian-based Linux
sudo apt-get update
sudo apt-get install awscli

The AWS CLI lets you authenticate and configure access to your AWS environment by running the aws configure command and providing your access key, secret key, and default region.
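For example, a first-time configuration session looks roughly like this (the prompts are shown as comments, and the values you enter are your own credentials and preferred region):

# Configure credentials and defaults for the AWS CLI
aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: us-west-2
# Default output format [None]: json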

2. Install Terraform

Terraform can be downloaded and installed using the package manager of your choice. Note that Terraform is not in the default Ubuntu/Debian repositories, so HashiCorp's apt repository must be added first.

# For macOS using Homebrew (HashiCorp tap)
brew tap hashicorp/tap
brew install hashicorp/tap/terraform

# For Ubuntu/Debian (add HashiCorp's apt repository first)
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update
sudo apt-get install -y terraform

Once both are installed, you are ready to configure the infrastructure.
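Although not strictly required for the examples below, it is good practice to declare the AWS provider and region explicitly in your configuration. A minimal sketch, assuming us-west-2 as the region used throughout this guide (the file name providers.tf is just a convention):

# providers.tf - pin the AWS provider and set the default region
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-west-2"
}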


3. Setting Up VPC and Subnets

The foundation of any EKS cluster is a VPC (Virtual Private Cloud) with public and private subnets, where your EKS cluster and nodes will reside.


To create these networking components, you will define Terraform resources in a .tf configuration file.

# VPC definition
resource "aws_vpc" "eks_vpc" {
  cidr_block = "10.0.0.0/16"
  enable_dns_support = true
  enable_dns_hostnames = true
}
# Public Subnet
resource "aws_subnet" "public_subnet" {
  vpc_id            = aws_vpc.eks_vpc.id
  cidr_block        = "10.0.1.0/24"
  map_public_ip_on_launch = true
  availability_zone = "us-west-2a"
}

# Private Subnet (EKS requires subnets in at least two Availability Zones)
resource "aws_subnet" "private_subnet" {
  vpc_id            = aws_vpc.eks_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-west-2b"
}

The code block defines the VPC and subnets. The VPC (aws_vpc) is the private network for your cluster, while the public and private subnets are where different services and nodes will be hosted. Note that EKS requires the subnets passed to the cluster to span at least two Availability Zones, which is why the two subnets are placed in different zones.
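For the public subnet to actually route traffic to the internet (for example, so nodes can pull container images), you would typically also attach an internet gateway and a public route table. A minimal sketch of those additional resources (the resource names here are illustrative):

# Internet gateway for the VPC
resource "aws_internet_gateway" "eks_igw" {
  vpc_id = aws_vpc.eks_vpc.id
}

# Public route table sending outbound traffic through the internet gateway
resource "aws_route_table" "public_rt" {
  vpc_id = aws_vpc.eks_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.eks_igw.id
  }
}

# Associate the public subnet with the public route table
resource "aws_route_table_association" "public_assoc" {
  subnet_id      = aws_subnet.public_subnet.id
  route_table_id = aws_route_table.public_rt.id
}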

4. Creating IAM Roles

For EKS to function properly, certain AWS IAM (Identity and Access Management) roles need to be created. These roles give the EKS control plane the necessary permissions to communicate with other AWS services and manage worker nodes.

# EKS Cluster IAM Role

resource "aws_iam_role" "eks_role" {
  name = "eks_cluster_role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "eks.amazonaws.com"
        },
        Action = "sts:AssumeRole"
      }
    ]
  })
}

# Attach EKS policies

resource "aws_iam_role_policy_attachment" "eks_policy_attachment" {
  role       = aws_iam_role.eks_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

Here, an IAM role is created with a trust policy that allows EKS to assume the role. This role is also associated with necessary AWS-managed policies for EKS.
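The node group defined later in this guide also references a separate worker node role (aws_iam_role.worker_role), which must be assumable by EC2 and carry the standard worker node policies. A minimal sketch of that role, using the name the node group expects:

# Worker node IAM role (assumed by the EC2 instances in the node group)
resource "aws_iam_role" "worker_role" {
  name = "eks_worker_node_role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "ec2.amazonaws.com"
        },
        Action = "sts:AssumeRole"
      }
    ]
  })
}

# Standard managed policies for EKS worker nodes
resource "aws_iam_role_policy_attachment" "worker_node_policy" {
  role       = aws_iam_role.worker_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "worker_cni_policy" {
  role       = aws_iam_role.worker_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "worker_ecr_policy" {
  role       = aws_iam_role.worker_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}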

5. Provisioning EKS Cluster

After configuring the VPC, subnets, and IAM roles, the EKS cluster can be created. In Terraform, the aws_eks_cluster resource manages the control plane, while the aws_eks_node_group resource defines the worker nodes.

# EKS Cluster Definition

resource "aws_eks_cluster" "my_cluster" {
  name     = "my-eks-cluster"
  role_arn = aws_iam_role.eks_role.arn
  vpc_config {
    subnet_ids = [
      aws_subnet.public_subnet.id,
      aws_subnet.private_subnet.id
    ]
  }
}

This creates an EKS cluster using the IAM role and subnets defined earlier. The control plane of EKS will be deployed, and the subnets will allow the nodes to communicate with each other.
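If you would like Terraform to print the cluster's API endpoint once provisioning finishes, you can optionally add an output block, for example:

# Expose the cluster API endpoint after apply
output "eks_cluster_endpoint" {
  value = aws_eks_cluster.my_cluster.endpoint
}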

6. Setting up Worker Nodes

Worker nodes are the actual EC2 instances running Kubernetes workloads. Terraform helps define node groups to manage these instances.

# Worker Node Group

resource "aws_eks_node_group" "worker_nodes" {
  cluster_name    = aws_eks_cluster.my_cluster.name
  node_group_name = "eks-worker-nodes"
  node_role_arn   = aws_iam_role.worker_role.arn
  subnet_ids      = [aws_subnet.public_subnet.id]
  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }
}

The worker nodes in the cluster are configured using the aws_eks_node_group resource. Its scaling_config block sets how many EC2 instances should be running at any time (desired, minimum, and maximum), giving the node group room to scale.


7. Applying Terraform Configuration

Once all the configuration files are ready, they need to be applied using Terraform commands. First, initialize Terraform to download the necessary provider plugins, optionally preview the planned changes, and then apply the configuration to provision the infrastructure.

# Initialize Terraform
terraform init
# (Optional) Preview the changes before applying
terraform plan
# Apply the configuration
terraform apply

Terraform will prompt for approval before it proceeds to create the EKS cluster, IAM roles, and other resources. Once complete, your EKS cluster will be ready, and you can start deploying Kubernetes applications.

8. Connecting to the Cluster

After provisioning the cluster, use the AWS CLI and kubectl to interact with your EKS environment.

aws eks --region us-west-2 update-kubeconfig --name my-eks-cluster

This command configures kubectl to communicate with your newly created EKS cluster, allowing you to manage it using standard Kubernetes commands.
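As a quick sanity check, you can then list the worker nodes and the default service to confirm that kubectl can reach the cluster:

# Verify connectivity to the new cluster
kubectl get nodes
kubectl get svc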

Frequently Asked Questions 

Can we use eksctl with Terraform?

You can create an EKS cluster in several ways: by using the AWS Management Console, a command-line tool called eksctl, or by writing code using a tool like Terraform. Each method has its own advantages and disadvantages, so the best choice for you will depend on your specific needs and preferences.

How do I scale my EKS cluster using Terraform?

Scaling the EKS cluster is as simple as modifying the desired number of nodes in the aws_eks_node_group resource’s scaling_config section. After changing the desired, min, or max sizes, reapply the Terraform configuration.
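For example, increasing the capacity from the values used earlier might look like this (the numbers are illustrative), followed by running terraform apply again:

  # Updated scaling_config inside the aws_eks_node_group resource
  scaling_config {
    desired_size = 4
    max_size     = 6
    min_size     = 2
  }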

Can I integrate Terraform with CI/CD for EKS deployment?

Yes, Terraform integrates well with CI/CD pipelines. You can add Terraform commands within your pipeline configuration to automatically provision and update EKS clusters when changes are committed.
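As a sketch, a pipeline stage would typically run the same commands you use locally, made non-interactive so no manual approval is needed (adapt these to your CI system of choice):

# Typical non-interactive Terraform commands in a CI/CD stage
terraform fmt -check
terraform init -input=false
terraform plan -input=false
terraform apply -input=false -auto-approve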

Conclusion 

And that’s about it. In this article, we have demonstrated the complete process of setting up EKS with Terraform. With the setup complete, your EKS cluster is ready to run containerized workloads, providing a scalable and efficient environment for Kubernetes in the AWS ecosystem.
