Easy AWS EKS cluster provisioning and user access
While working on Gloo (our hybrid application gateway) and SuperGloo (our multi-mesh control fabric), we often need access to Kubernetes clusters to test out features and to have a deployment target from which we can demo and run proofs of concept for our customers. I typically use Google Cloud GKE, but for this project we needed to use Amazon Web Services. I followed the Getting Started docs for AWS EKS and was a little surprised at how tedious the whole process was. Not only was it time consuming (waiting for nodes to spin up), it was also error prone, with all of the manual steps and copy/pasting of ARNs, AMIs, and so on.
After my third attempt to get an AWS EKS cluster up and running, hitting errors, debugging, and getting nowhere, I tweeted the following:
So creating a Kubernetes cluster on EKS is surprisingly error prone and incredibly complicated/manual. Might as well just boot the EC2 nodes myself and use something like kops — oh my
— Christian Posta (@christianposta) January 8, 2019
To which the wonderful community empathized with me, gave suggestions, and even pointed out some interesting alternatives. One of those was a wonderful tool from Weaveworks called eksctl. This tool makes it very easy to bootstrap EKS clusters without the separate EKS/IAM/VPC/CloudFormation hoops you otherwise have to jump through with the official AWS documentation. Check out the eksctl documentation for your developer configuration, but on my Mac it was as simple as:
$ brew install weaveworks/tap/eksctl
$ eksctl create cluster --name solo-test-cluster
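If the defaults aren't what you want, eksctl also accepts flags for the region, node instance type, and node count. A sketch; the region, instance type, and count below are just example values, not recommendations:

```shell
# Hypothetical example values; pick your own region/size/count
$ eksctl create cluster \
    --name solo-test-cluster \
    --region us-west-2 \
    --node-type m5.large \
    --nodes 3
```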
Of course, you should already have the AWS CLI installed and configured for your user.
The eksctl tool will set up your kubectl config file automatically so that you can then run kubectl get pod and carry on. One note: you'll need to make sure you have the aws-iam-authenticator binary installed, which kubectl invokes (via its exec credential mechanism) to translate AWS IAM identities for authentication purposes.
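A quick way to sanity-check that the authenticator is wired up (the cluster name matches the eksctl example above; on a Mac the binary can be installed with Homebrew):

```shell
# Confirm the binary is on the PATH
$ aws-iam-authenticator version

# Ask it to mint a token for the cluster; if this succeeds,
# kubectl's exec-based credential lookup should work too
$ aws-iam-authenticator token -i solo-test-cluster
```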
The next challenge was to expose this cluster to other people. In other words, when I bootstrapped the cluster using eksctl, I did so with my own AWS IAM identity, which has quite a few permissions in my AWS account; I cannot just hand those credentials over to prospective users of the cluster. What I needed to do was hook into AWS's IAM system and set up the aws-iam-authenticator flow in kubectl for a different user than my own.
Setting up kubectl access for an alternative user
To create a new user, we need to define a new user in AWS IAM (preferably with programmatic access only; this user will not need access to the web console) and attach a policy that allows the user to read/list EKS clusters. We also need to update the AWS authentication properties in the aws-auth configmap (in the kube-system namespace) to give that AWS user access to our cluster: the configmap must list our new user along with any roles/cluster roles we want to associate with it. For example:
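Something like the following, as a sketch: the account ID and user name are placeholders, and granting system:masters makes the user a full cluster admin, so for real users you would bind a narrower group via RBAC instead:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <node instance role ARN created by eksctl; leave as-is>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/demo-user   # placeholder ARN
      username: demo-user
      groups:
        - system:masters   # full admin; use a narrower RBAC group in practice
```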
Lastly, we need to generate the kubeconfig file for our new user with the appropriate cluster URL, CA, ARN, and Kubernetes context.
We can generate the kubeconfig with the AWS CLI tooling for our new user like this:
aws eks update-kubeconfig --name cluster-name-here
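To run this as the new user, point the AWS CLI at their credentials; the profile name demo-user here is a placeholder I'm assuming, and the cluster name matches the eksctl example above:

```shell
# Write a kubeconfig entry that authenticates as the new IAM user
$ AWS_PROFILE=demo-user aws eks update-kubeconfig --name solo-test-cluster

# kubectl now authenticates as that identity against the cluster
$ kubectl get pods --all-namespaces
```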
At this point we would be logged into the cluster and able to run kubectl commands like we would on any Kubernetes cluster.
In this video, we'll walk step by step through all of these things (setting up the cluster and configuring it for user access) in about 5 minutes (time scrubbed so we don't wait on AWS EKS node provisioning, which at the moment still takes a while):