EKS Tutorial
Install AWS CLI
My environment
- WSL
- Python is installed with pyenv.
Install aws-cli.
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
Check.
$ aws --version
aws-cli/2.0.61 Python/3.7.3 Linux/4.19.104-microsoft-standard exe/x86_64.ubuntu.20
Issue AWS access key
- Log in to Dashboard.
- Navigate to IAM.
- In the left pane, “Access management” -> “Users”
- Click a user, and go to the “Security credentials” tab.
- Access keys -> Create access key. You get your credentials here; don’t forget to save the “Secret access key” or to download the CSV. As AWS says, “This is the only time that the secret access keys can be viewed or downloaded.”
Configure CLI
$ aws configure
AWS Access Key ID [None]: AKIxxxxxxxxxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: eu-central-1
Default output format [None]: json
The API credentials are stored in ~/.aws/credentials.
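For reference, the two files that aws configure writes look roughly like this (with the values you entered):
$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKIxxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxx
$ cat ~/.aws/config
[default]
region = eu-central-1
output = json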
Check.
$ aws configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key ******************** shared-credentials-file
secret_key ******************** shared-credentials-file
region eu-central-1 config-file ~/.aws/config
Install eksctl
https://docs.aws.amazon.com/de_de/eks/latest/userguide/getting-started-eksctl.html
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
Check.
$ eksctl version
0.30.0
Install kubectl
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.9/2020-08-04/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/
$ which kubectl
/usr/local/bin/kubectl
(Skipped) Verify the checksum of the downloaded kubectl binary.
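If you don’t want to skip it, a minimal sketch, assuming AWS publishes a sibling .sha256 file next to the binary (the EKS docs of this era did):
curl -o kubectl.sha256 https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.9/2020-08-04/bin/linux/amd64/kubectl.sha256
openssl sha1 -sha256 kubectl
Compare the digest printed by openssl with the contents of kubectl.sha256; they should match.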
Create EKS Cluster
cf.) The key must be RSA. When I tried with my ECDSA key, it returned the following error:
Error: computing fingerprint for key ".ssh/id_ecdsa.pub": error computing fingerprint for SSH public key: Unexpected type of SSH key ("*ssh.ecdsaPublicKey"); AWS can only import RSA keys
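If you only have an ECDSA key, generating an RSA key pair fixes this (ssh-keygen writes to ~/.ssh/id_rsa by default):
$ ssh-keygen -t rsa -b 4096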
Create
eksctl create cluster \
--name {{ your_cluster_name }} \
--version 1.18 \
--region eu-central-1 \
--nodegroup-name {{ your_nodegroup_name }} \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--ssh-access \
--ssh-public-key .ssh/id_rsa.pub \
--managed
Here is my full command log.
$ eksctl create cluster --name {{ my_cluster_name }} --version 1.18 --region eu-central-1 --nodegroup-name {{ my_nodegroup_name }} --nodes 3 --nodes-min 1 --nodes-max 4 --ssh-access --ssh-public-key .ssh/id_rsa.pub --managed
[ℹ] eksctl version 0.30.0
[ℹ] using region eu-central-1
[ℹ] setting availability zones to [eu-central-1a eu-central-1b eu-central-1c]
[ℹ] subnets for eu-central-1a - public:192.168.0.0/19 private:192.168.***.0/19
[ℹ] subnets for eu-central-1b - public:192.168.32.0/19 private:192.168.***.0/19
[ℹ] subnets for eu-central-1c - public:192.168.64.0/19 private:192.168.***.0/19
[ℹ] using SSH public key ".ssh/id_rsa.pub" as "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}-42:3a:75:ad:45:bb:3f:bf:a4:bd:ca:a5:17:e2:44:aa"
[ℹ] using Kubernetes version 1.18
[ℹ] creating EKS cluster "{{ my_cluster_name }}" in "eu-central-1" region with managed nodes
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-central-1 --cluster={{ my_cluster_name }}'
[ℹ] CloudWatch logging will not be enabled for cluster "{{ my_cluster_name }}" in "eu-central-1"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=eu-central-1 --cluster={{ my_cluster_name }}'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "{{ my_cluster_name }}" in "eu-central-1"
[ℹ] 2 sequential tasks: { create cluster control plane "{{ my_cluster_name }}", 2 sequential sub-tasks: { no tasks, create managed nodegroup "{{ my_nodegroup_name }}" } }
[ℹ] building cluster stack "eksctl-{{ my_cluster_name }}-cluster"
[ℹ] deploying stack "eksctl-{{ my_cluster_name }}-cluster"
[ℹ] building managed nodegroup stack "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}"
[ℹ] deploying stack "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}"
[ℹ] waiting for the control plane availability...
[✔] saved kubeconfig as "/home/atlex/.kube/config"
[ℹ] no tasks
[✔] all EKS cluster resources for "{{ my_cluster_name }}" have been created
[ℹ] nodegroup "{{ my_nodegroup_name }}" has 3 node(s)
[ℹ] node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[ℹ] node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[ℹ] node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[ℹ] waiting for at least 1 node(s) to become ready in "{{ my_nodegroup_name }}"
[ℹ] nodegroup "{{ my_nodegroup_name }}" has 3 node(s)
[ℹ] node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[ℹ] node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[ℹ] node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[ℹ] kubectl command should work with "/home/atlex/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "{{ my_cluster_name }}" in "eu-central-1" region is ready
In my case, it took around 20 mins.
Create a cluster from a YAML config file
You can also create a cluster from a YAML file.
$ cat cluster.yml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster-name
  region: eu-central-1
  version: "1.18"
managedNodeGroups:
  - name: my-nodegroup-name
    instanceType: t2.nano
    desiredCapacity: 3
    minSize: 1
    maxSize: 4
    ssh:
      publicKeyPath: .ssh/id_rsa.pub
$ eksctl create cluster -f cluster.yml
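A nice side effect: the same config file can also be passed to eksctl delete cluster later:
$ eksctl delete cluster -f cluster.yml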
Check VPC and networks
- The new VPC vpc-************ / eksctl-{{ my_cluster_name }}-cluster/VPC was created.
- 6 subnets were created: eksctl-{{ my_cluster_name }}-cluster/SubnetPrivateEUCENTRAL1(A|B|C) and eksctl-{{ my_cluster_name }}-cluster/SubnetPublicEUCENTRAL1(A|B|C).
- The Internet Gateway eksctl-{{ my_cluster_name }}-cluster/InternetGateway was created.
- The Elastic IP eksctl-{{ my_cluster_name }}-cluster/NATIP was created. It has a public IP.
- The NAT gateway eksctl-{{ my_cluster_name }}-cluster/NATGateway was created.
  - The Elastic IP address (global IP) above was assigned to it.
  - Its subnet is one of the 3 public subnets.
- 4 route tables were created.
  - eksctl-{{ my_cluster_name }}-cluster/PublicRouteTable: contains the 3 subnets eksctl-{{ my_cluster_name }}-cluster/SubnetPublicEUCENTRAL1(A|B|C). Destination 0.0.0.0/0 is targeted at the Internet gateway. For example, a subnet CIDR is 192.168.0.0/19 while the local destination entry is 192.168.0.0/16 local.
  - eksctl-{{ my_cluster_name }}-cluster/PrivateRouteTableEUCENTRAL1(A|B|C): destination 0.0.0.0/0 is targeted at the NAT gateway.
- 4 security groups were created.
  - eksctl-{{ my_cluster_name }}-cluster/ClusterSharedNodeSecurityGroup: communication between all nodes in the cluster.
  - eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}/SSH: allows SSH (port 22) access.
  - eks-cluster-sg-{{ my_cluster_name }}-**********: EKS-created security group applied to the ENIs attached to the EKS control plane master nodes, as well as any managed workloads.
  - eksctl-{{ my_cluster_name }}-cluster/ControlPlaneSecurityGroup: communication between the control plane and worker nodegroups.
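You can verify these from the CLI instead of the console. A sketch, assuming eksctl of this era tags its resources with the alpha.eksctl.io/cluster-name tag key:
$ aws ec2 describe-subnets --filters "Name=tag:alpha.eksctl.io/cluster-name,Values={{ my_cluster_name }}" --query "Subnets[].[SubnetId,CidrBlock,AvailabilityZone]" --output table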
Check Nodes
- The instance type is m5.large by default.
  - EKS nodegroup instance types cannot be changed after creation. You’ll have to create a new node group every time you’d like a new instance type.
  - https://stackoverflow.com/questions/61038956/aws-eks-nodegroup-update-instance-types
  - Set it at creation time with --node-type t2.nano.
- The AMI was (as of 13 Mar. 2021) ami-0f85d2eeb0bea62a7, “EKS Kubernetes Worker AMI with AmazonLinux2 image” (k8s: 1.18.9, docker: 19.03.13ce-1.amzn2, containerd: 1.4.1-2.amzn2).
- Global IPv4 addresses are assigned to the nodes by default.
  - The global IP is not the same as the NAT gateway’s global IP.
- Auto Scaling was configured.
  - Two Launch Templates: eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }} and eks-********-****-****-****-************.
  - One Auto Scaling group: eks-********-****-****-****-************, the same name as above.
    - 100% On-Demand, 0% Spot.
    - Spot allocation strategy: lowest price, diversified across the 2 lowest priced pools.
- The key pair is registered.
- Two private IPs are created: eth0 and eth1, in the same subnet.
  - The subnet is one of the public subnets, like eksctl-{{ my_cluster_name }}-cluster/SubnetPublicEUCENTRAL1(A|B|C).
  - Checked by ssh ec2-user@{{ the_instance_IP }} -i .ssh/id_rsa.
- No CloudWatch by default.
- 2 CloudFormation stacks.
  - eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}: EKS Managed Nodes (SSH access: true), created by eksctl.
  - eksctl-{{ my_cluster_name }}-cluster: EKS cluster (dedicated VPC: true, dedicated IAM: true), created and managed by eksctl.
  - “CloudFormation Designer” shows you comprehensive diagrams.
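eksctl can also summarize both CloudFormation stacks (this command is suggested in the creation log above):
$ eksctl utils describe-stacks --region=eu-central-1 --cluster={{ my_cluster_name }}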
Private cluster
Note: if you want outbound traffic with a fixed public IP, it seems you need a private cluster + NAT.
https://stackoverflow.com/questions/56974480/communicating-with-a-ip-whitelisted-service-with-eks
To do so, I add the --node-private-networking option at the end of the eksctl create cluster command. For example (the same command as above plus the flag):
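$ eksctl create cluster --name {{ my_cluster_name }} --version 1.18 --region eu-central-1 --nodegroup-name {{ my_nodegroup_name }} --nodes 3 --nodes-min 1 --nodes-max 4 --ssh-access --ssh-public-key .ssh/id_rsa.pub --managed --node-private-networking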
Same as the public cluster:
- VPC created.
- 6 subnets created.
- 4 Route tables created.
- Internet gateway created.
- An Elastic IP created.
- A NAT gateway created, with the same global IP as the Elastic IP.
Differences:
- No global IP is assigned to the instances.
- The instances are connected to the private subnets eksctl-{{ my_cluster_name }}-cluster/SubnetPrivateEUCENTRAL1(A|B|C).
- An ENI with “Interface type: nat_gateway” is created and connected to the subnet.
  - The public IP of that ENI is the Elastic IP’s.
Run a sample deployment and get a shell with kubectl exec --stdin --tty {{ your_Nginx_pod_name }} -- /bin/bash. Now check your global IP, e.g. with curl ifconfig.me, and it returns the Elastic IP! (Note: no Service is deployed yet.)
Try applying the Service (LoadBalancer) YAML from the Nginx section below. An ELB is created, and its AZs are the 3 public subnets. Access the ELB domain and it returns the Nginx welcome page!
How to delete
From the AWS console:
- Detach {{ your_cluster_name }}: the instances are terminated.
- Delete the cluster.
- Delete the VPC.
Or use a single command:
$ eksctl delete cluster --region=eu-central-1 --name={{ my_cluster_name }}
[ℹ] eksctl version 0.30.0
[ℹ] using region eu-central-1
[ℹ] deleting EKS cluster "{{ my_cluster_name }}"
[✔] kubeconfig has been updated
[ℹ] 2 sequential tasks: { delete nodegroup "{{ my_nodegroup_name }}", delete cluster control plane "{{ my_cluster_name }}" [async] }
[ℹ] will delete stack "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}"
[ℹ] waiting for stack "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}" to get deleted
[ℹ] will delete stack "eksctl-{{ my_cluster_name }}-cluster"
[✔] all cluster resources were deleted
How I created my test EKS environment in the end
I wanted to deploy with node type t2.nano. Here is my command memo.
eksctl create cluster --name {{ my_cluster_name }} --version 1.18 --region eu-central-1 --nodegroup-name {{ my_nodegroup_name }} --nodes 3 --nodes-min 1 --nodes-max 4 --node-type t2.nano --ssh-access --ssh-public-key .ssh/id_rsa.pub --managed
Check deploy status
$ eksctl get cluster
NAME REGION
{{ my_cluster_name }} eu-central-1
$ eksctl get nodegroup --cluster {{ my_cluster_name }}
CLUSTER NODEGROUP CREATED MIN SIZE MAX SIZE DESIRED CAPACITY INSTANCE TYPE IMAGE ID
{{ my_cluster_name }} {{ my_nodegroup_name }} 2020-10-30T14:42:51Z 1 4 3
Scale down.
$ eksctl scale nodegroup --cluster {{ my_cluster_name }} --name {{ my_nodegroup_name }} --nodes 1
[ℹ] scaling nodegroup stack "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}" in cluster eksctl-{{ my_cluster_name }}-cluster
[ℹ] scaling nodegroup, desired capacity from 3 to 1
Created Kubernetes cluster
$ kubectl get ns
NAME STATUS AGE
default Active 3d18h
kube-node-lease Active 3d18h
kube-public Active 3d18h
kube-system Active 3d18h
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-***-***.eu-central-1.compute.internal Ready <none> 3d18h v1.18.8-eks-7c9bda
Total Cost at the moment
- EKS: 0.10 USD per hour
- EC2:
  - Amazon Elastic Compute Cloud NatGateway (almost free)
    - $0.052 per GB Data Processed by NAT Gateways
    - $0.052 per NAT Gateway Hour
  - EC2 instance: $0.0067 per On Demand Linux t2.nano Instance Hour
  - EBS: $0.119 per GB-month of General Purpose SSD (gp2) provisioned storage
- Data Transfer: depends on your usage.
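As a rough baseline while idling with 3 nodes: 0.10 (EKS) + 0.052 (NAT gateway hour) + 3 × 0.0067 (t2.nano) ≈ 0.172 USD per hour, i.e. about 124 USD per 30-day month, before EBS, NAT data processing, and data transfer.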
Deploy simple Nginx container
https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/
nginx-deployment.yml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.3
        ports:
        - containerPort: 80
Deploy it with kubectl apply -f nginx-deployment.yml.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-6b474476c4-8cjv4 1/1 Running 0 8m3s
nginx-deployment-6b474476c4-xldh2 0/1 Pending 0 8m3s
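One of the two pods stays Pending here. kubectl describe shows the scheduler’s reasoning (see also the memo at the end of this post):
$ kubectl describe pod nginx-deployment-6b474476c4-xldh2
Look for FailedScheduling events such as “Insufficient cpu” or “Insufficient memory”.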
Create a service (ELB) as well
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx
  type: LoadBalancer
Deploy with kubectl apply -f nginx-loadbalancer.yml, and check.
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 3d18h
nginx-service LoadBalancer 10.100.***.*** ******-***.eu-central-1.elb.amazonaws.com 80:30660/TCP 5m55s
At this point
- An ELB was created automatically.
  - **********-*******.eu-central-1.elb.amazonaws.com is an auto-generated domain. You can access this domain, and it returns the Nginx welcome page.
  - The IP of the domain is not the VPC Elastic IP created above.
  - No additional Elastic IP was assigned.
  - Traffic is automatically passed to the pods.
- The subnets of the service are in three AZs (public subnets).
- Listener: the load-balancer port is 80, and the instance port is assigned automatically (by Kubernetes). The port 30660 is random.
- If you check your global IP from inside the pods, it returns your node’s global IP.
  - kubectl exec -it nginx-deployment-*********-***** -- /bin/bash
service (ELB) cost
- $0.008 per GB Data Processed by the LoadBalancer
- $0.030 per LoadBalancer-hour
Public
- Route53
- Add an A record s.t. yourdomain.com A ******-***.eu-central-1.elb.amazonaws.com, via “Create record” -> Simple routing -> Define simple record -> Value/Route traffic to: “Alias to Application and Classic Load Balancer” + select the region + choose the load balancer.
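A sketch of the same alias record via the CLI; the zone ID and ELB values are placeholders, and the ELB’s canonical hosted zone ID can be read from aws elb describe-load-balancers:
$ aws route53 change-resource-record-sets --hosted-zone-id {{ your_zone_id }} --change-batch '{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "yourdomain.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "{{ elb_canonical_hosted_zone_id }}",
        "DNSName": "{{ your_elb_domain }}",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'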
What’s next?
- “Creating your own container” is your task.
kubectl doesn’t build a container. Create your own container (from a Dockerfile).
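Once the image exists, the usual flow is to push it to ECR and reference it in the Deployment. A sketch with placeholder names:
$ docker build -t {{ your_image_name }} .
$ aws ecr create-repository --repository-name {{ your_image_name }}
$ aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin {{ account_id }}.dkr.ecr.eu-central-1.amazonaws.com
$ docker tag {{ your_image_name }}:latest {{ account_id }}.dkr.ecr.eu-central-1.amazonaws.com/{{ your_image_name }}:latest
$ docker push {{ account_id }}.dkr.ecr.eu-central-1.amazonaws.com/{{ your_image_name }}:latest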
Memo: not enough resources
Deployment does not have minimum availability.
This can be caused by insufficient CPU or memory, but that is not always the case.
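To dig into the actual cause, standard kubectl inspection helps:
$ kubectl describe deployment nginx-deployment
$ kubectl get events --sort-by=.metadata.creationTimestamp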