
005: Managed Kubernetes with EKS

AWS Elastic Kubernetes Service (EKS) is a managed Kubernetes service for container orchestration. In our organisation it is secured behind Okta/MFA.

The daily mood

I am motivated. My goal is to move simple assets from my local cluster (deployment source) to our remote/shared one (deployment target). For this I need to request access to the GitHub Enterprise instance hosting our configuration repository, and to an AWS Elastic Kubernetes Service (EKS) cluster running development workloads.
Note: We also have cluster instances running on AWS virtual machines (EC2) and containers (ECS), which we manage with Kubespray, as well as cluster instances in Azure Kubernetes Service (AKS). As far as I know, we do not use KOPS, another popular tool for installing and launching Kubernetes clusters on different cloud infrastructures.

Access authorization

GitHub Enterprise was pretty straightforward. I already had a personal account which just needed to be mapped to our organisation. Then I was able to log in interactively via Okta, create an SSH key, and use it with the standard Git client to bypass the interactive login.
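For later reference, the SSH part boils down to a few standard commands; the hostname, e-mail address and repository below are placeholders, not our real ones.
# generate a dedicated key pair for GitHub Enterprise
ssh-keygen -t ed25519 -C "me@example.com" -f ~/.ssh/id_github_enterprise
# print the public key, to be pasted into the GitHub account (Settings > SSH and GPG keys)
cat ~/.ssh/id_github_enterprise.pub
# load the key into the agent so git picks it up, then check the connection and clone
ssh-add ~/.ssh/id_github_enterprise
ssh -T git@github.example.com
git clone git@github.example.com:my-org/config-repo.git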

AWS behind Okta/MFA was definitely trickier, because accounts, groups, roles and shared access keys are managed by our IT.
Together with our Helpdesk we got stuck at the point where, since I was not yet officially part of the R&D organisation, I could not get the required permissions.

I also set up the command-line tool okta-awscli (not an official one but a community project) in order to fetch the AWS EKS configuration and then use the standard AWS CLI. This only covers interactive authorization via MFA. For a service running in the background I would instead need an IAM user from the SRE team, along with its temporary access keys.
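For the record, such a non-interactive setup would just be a static profile in ~/.aws/credentials; the profile name and key values below are made up for illustration.
# ~/.aws/credentials -- hypothetical profile for a background service
[svc-deployer]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
aws_session_token = xxxxxxxx
A quick aws sts get-caller-identity --profile svc-deployer would then confirm that the profile resolves to the expected identity.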

Remote connection

The next day IT finally granted me a new role allowing access to the right AWS account ID and resources. Then I was able to go through the following configuration steps.
# okta-awscli authenticates through Okta/MFA and writes temporary keys into section myprofile of ~/.aws/credentials
okta-awscli --profile myprofile
# aws configure adds region and output format for myprofile in ~/.aws/config
aws configure --profile myprofile
# verify identity
aws sts --profile myprofile get-caller-identity
# verify cluster
aws eks --profile myprofile list-clusters
# write the cluster context into ~/.kube/awsdev.config
aws eks --profile myprofile update-kubeconfig \
    --name cluster_name \
    --role-arn arn:aws:iam::aws_account_id:role/role_name \
    --kubeconfig ~/.kube/awsdev.config
# temporary environment
export KUBECONFIG=$KUBECONFIG:~/.kube/microk8s.config:~/.kube/awsdev.config
# persistent environment (single quotes keep $KUBECONFIG unexpanded inside ~/.profile)
echo 'export KUBECONFIG=$KUBECONFIG:~/.kube/microk8s.config:~/.kube/awsdev.config' >> ~/.profile
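Since both files are now listed in KUBECONFIG, kubectl merges them into a single view; a quick way to check that the new context actually showed up:
# list all contexts from the merged kubeconfig files
kubectl config get-contexts
# show which context is currently active
kubectl config current-context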
Now we can connect to the cluster using a local client.
# switch to the EKS context, which update-kubeconfig names after the cluster ARN
kubectx arn:aws:eks:[region]:[account_id]:cluster/[cluster_name]
# list the available namespaces (kubens <namespace> then switches to one)
kubens
# smoke test: list the services in the current namespace
kubectl get svc
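With the two contexts side by side, moving a simple asset from the local cluster to the shared one is mostly a matter of pointing kubectl at the right context. A sketch of the idea, assuming the local context is named microk8s and using a made-up ConfigMap and namespace:
# export a simple asset from the local cluster
kubectl --context microk8s -n dev get configmap my-asset -o yaml > my-asset.yaml
# apply it to the remote EKS cluster (after stripping cluster-specific metadata such as resourceVersion and uid)
kubectl --context arn:aws:eks:[region]:[account_id]:cluster/[cluster_name] -n dev apply -f my-asset.yaml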
