One way to provision AWS EKS is with Terraform, integrating EKS provisioning into your CI/CD pipeline workflows.
When managing EKS, you may then want to use the kubectl CLI, so you'll need to update your kubeconfig file.
Here's how to do it using Terraform:
1) In your Terraform outputs file, output two values that the kubeconfig update command will need:
output "region" {
description = "AWS region"
value = var.region
}
output "cluster_name" {
description = "Kubernetes Cluster Name"
value = local.cluster_name
}
These two values are used to update the kubeconfig file in the next step.
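After an apply, you can confirm the values with terraform output (a quick check; the region and cluster name shown below are placeholders for whatever your variables resolve to):

$ terraform output region
"us-west-2"
$ terraform output cluster_name
"my-eks-cluster"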
2) In your Terraform files, create a "null_resource" with a "local-exec" provisioner to run a command on the machine that applies the Terraform configuration (in my case, my MacBook Air). This assumes the AWS CLI is installed and configured on that machine.
resource "null_resource" "kubectl" {
provisioner "local-exec" {
command = "aws eks --region ${var.region} update-kubeconfig --name ${local.cluster_name}"
}
}
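One caveat: a local-exec provisioner only runs when the null_resource is created, so you'll want it to run after the cluster actually exists. Here's one way to wire that up (a sketch; aws_eks_cluster.main is a placeholder for your actual cluster resource address, and the triggers block is optional):

# Sketch: depends_on makes the kubeconfig update wait for the cluster,
# and triggers re-runs the command if the cluster name ever changes.
# "aws_eks_cluster.main" is a placeholder - use your actual resource address.
resource "null_resource" "kubectl" {
  depends_on = [aws_eks_cluster.main]

  triggers = {
    cluster_name = local.cluster_name
  }

  provisioner "local-exec" {
    command = "aws eks --region ${var.region} update-kubeconfig --name ${local.cluster_name}"
  }
}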
That should be it. Now when you run a kubectl command, you'll see your AWS EKS objects.
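For example, a quick sanity check (the node name and version below are illustrative; yours will differ):

$ kubectl get nodes
NAME                                       STATUS   ROLES    AGE   VERSION
ip-10-0-1-123.us-west-2.compute.internal   Ready    <none>   5m    v1.27.1-eks-2f008fe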