EKS Tutorial

Install AWS CLI

My environment

  • WSL
  • Python is installed with pyenv.

Install aws-cli.

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

Check.

$ aws --version
aws-cli/2.0.61 Python/3.7.3 Linux/4.19.104-microsoft-standard exe/x86_64.ubuntu.20

Issue AWS access key

  1. Log in to Dashboard.
  2. Navigate to IAM.
  3. In the left pane, “Access management” -> “Users”
  4. Click a user and go to the “Security credentials” tab.
  5. Access keys -> Create access key. This gives you the credentials. Don’t forget to save the “Secret access key” or download the CSV; as AWS says, “This is the only time that the secret access keys can be viewed or downloaded.” (A CLI alternative is sketched below.)
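
If you prefer doing this from the CLI instead of the console, something like the following should work; the user name is a placeholder, and it assumes your current credentials already have the necessary IAM permissions.

# Hypothetical user name; requires credentials with IAM permissions.
aws iam create-access-key --user-name {{ your_iam_user_name }}
# The JSON response contains AccessKeyId and SecretAccessKey.
# Save the secret now; it cannot be retrieved again later.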

Configure CLI

$ aws configure
AWS Access Key ID [None]: AKIxxxxxxxxxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: eu-central-1
Default output format [None]: json

The API credentials are stored in ~/.aws/credentials.
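
For reference, the stored files look roughly like this (values are placeholders):

$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKIxxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxx
$ cat ~/.aws/config
[default]
region = eu-central-1
output = json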

Check.

$ aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ******************** shared-credentials-file
secret_key     ******************** shared-credentials-file
    region             eu-central-1      config-file    ~/.aws/config

Install eksctl

https://docs.aws.amazon.com/de_de/eks/latest/userguide/getting-started-eksctl.html

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

Check.

$ eksctl version
0.30.0

Install kubectl

curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.9/2020-08-04/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

$ which kubectl
/usr/local/bin/kubectl

(Skipped) Verify the checksum of the downloaded kubectl binary.
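
If you do want to verify it, AWS publishes a SHA-256 checksum next to the binary; the URL below assumes the same path pattern as the kubectl download above.

curl -o kubectl.sha256 https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.9/2020-08-04/bin/linux/amd64/kubectl.sha256
sha256sum kubectl   # compare the hash with the value in kubectl.sha256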

Create EKS Cluster

Note: the SSH key must be an RSA key. When I tried with my ECDSA key, it returned the following error.

Error: computing fingerprint for key ".ssh/id_ecdsa.pub": error computing fingerprint for SSH public key: Unexpected type of SSH key ("*ssh.ecdsaPublicKey"); AWS can only import RSA keys
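
If you only have an ECDSA key, you can generate a separate RSA key just for this; the file name below is only an example, and you would then pass its .pub path to --ssh-public-key.

ssh-keygen -t rsa -b 4096 -f ~/.ssh/eks_rsa -C "eks nodegroup key"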

Create

eksctl create cluster \
--name {{ your_cluster_name }} \
--version 1.18 \
--region eu-central-1 \
--nodegroup-name {{ your_nodegroup_name }} \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--ssh-access \
--ssh-public-key .ssh/id_rsa.pub \
--managed

Here is my full command log.

$ eksctl create cluster --name {{ my_cluster_name }} --version 1.18 --region eu-central-1 --nodegroup-name {{ my_node_name }} --nodes 3 --nodes-min 1 --nodes-max 4 --ssh-access --ssh-public-key .ssh/id_rsa.pub --managed
[]  eksctl version 0.30.0
[]  using region eu-central-1
[]  setting availability zones to [eu-central-1a eu-central-1b eu-central-1c]
[]  subnets for eu-central-1a - public:192.168.0.0/19 private:192.168.***.0/19
[]  subnets for eu-central-1b - public:192.168.32.0/19 private:192.168.***.0/19
[]  subnets for eu-central-1c - public:192.168.64.0/19 private:192.168.***.0/19
[]  using SSH public key ".ssh/id_rsa.pub" as "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_node_name }}-42:3a:75:ad:45:bb:3f:bf:a4:bd:ca:a5:17:e2:44:aa"
[]  using Kubernetes version 1.18
[]  creating EKS cluster "{{ my_cluster_name }}" in "eu-central-1" region with managed nodes
[]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
[]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-central-1 --cluster={{ my_cluster_name }}'
[]  CloudWatch logging will not be enabled for cluster "{{ my_cluster_name }}" in "eu-central-1"
[]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=eu-central-1 --cluster={{ my_cluster_name }}'
[]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "{{ my_cluster_name }}" in "eu-central-1"
[]  2 sequential tasks: { create cluster control plane "{{ my_cluster_name }}", 2 sequential sub-tasks: { no tasks, create managed nodegroup "{{ my_node_name }}" } }
[]  building cluster stack "eksctl-{{ my_cluster_name }}-cluster"
[]  deploying stack "eksctl-{{ my_cluster_name }}-cluster"

[]  building managed nodegroup stack "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}"
[]  deploying stack "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}"
[]  waiting for the control plane availability...
[]  saved kubeconfig as "/home/atlex/.kube/config"
[]  no tasks
[]  all EKS cluster resources for "{{ my_cluster_name }}" have been created
[]  nodegroup "{{ my_nodegroup_name }}" has 3 node(s)
[]  node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[]  node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[]  node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[]  waiting for at least 1 node(s) to become ready in "{{ my_nodegroup_name }}"
[]  nodegroup "{{ my_nodegroup_name }}" has 3 node(s)
[]  node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[]  node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[]  node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[]  kubectl command should work with "/home/atlex/.kube/config", try 'kubectl get nodes'
[]  EKS cluster "{{ my_cluster_name }}" in "eu-central-1" region is ready

In my case, it took around 20 mins.
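
eksctl already saved the kubeconfig (see the log above). If you ever need to regenerate it, for example on another machine, something like this should work:

aws eks update-kubeconfig --region eu-central-1 --name {{ my_cluster_name }}
kubectl get nodes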

Create a cluster from YAML config file

You can also create a cluster from a YAML file.

$ cat cluster.yml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster-name
  region: eu-central-1
  version: "1.18"
managedNodeGroups:
  - name: my-nodegroup-name
    instanceType: t2.nano
    desiredCapacity: 3
    minSize: 1
    maxSize: 4
    ssh:
        publicKeyPath: .ssh/id_rsa.pub
$ eksctl create cluster -f cluster.yml

Check VPC and networks

  • The new VPC vpc-************ / eksctl-{{ my_cluster_name }}-cluster/VPC was created.
  • 6 subnets are created.
    • eksctl-{{ my_cluster_name }}-cluster/SubnetPrivateEUCENTRAL1(A|B|C) and eksctl-{{ my_cluster_name }}-cluster/SubnetPublicEUCENTRAL1(A|B|C)
  • The Internet Gateway eksctl-{{ my_cluster_name }}-cluster/InternetGateway was created.
  • The Elastic IP eksctl-{{ my_cluster_name }}-cluster/NATIP was created.
    • It has a public IP.
  • The NAT gateway eksctl-{{ my_cluster_name }}-cluster/NATGateway was created.
    • The Elastic IP address (Global IP) above was assigned.
    • The connected subnet is one of 3 public subnets.
  • 4 Route tables were created.
    • eksctl-{{ my_cluster_name }}-cluster/PublicRouteTable: Containing 3 subnets eksctl-{{ my_cluster_name }}-cluster/SubnetPublicEUCENTRAL1(A|B|C). Destination 0.0.0.0/0 is targeted to igw. For example, subnet CIDR is 192.168.0.0/19 while local destination table is 192.168.0.0/16 local.
    • eksctl-{{ my_cluster_name }}-cluster/PrivateRouteTableEUCENTRAL1(A|B|C). Destination 0.0.0.0/0 is targeted to NAT.
  • Four Security Groups are created.
    • eksctl-{{ my_cluster_name }}-cluster/ClusterSharedNodeSecurityGroup: Communication between all nodes in the cluster.
    • eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}/SSH: Allows SSH (port 22) access.
    • eks-cluster-sg-{{ my_cluster_name }}-**********: EKS created security group applied to ENI that is attached to EKS Control Plane master nodes, as well as any managed workloads.
    • eksctl-{{ my_cluster_name }}-cluster/ControlPlaneSecurityGroup: Communication between the control plane and worker nodegroups. (A CLI cross-check is sketched after this list.)
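
If you prefer checking from the CLI instead of the console, a sketch like the following should list the VPC and its subnets; the Name-tag value is taken from the list above, and the VPC ID placeholder is whatever the first command returns.

aws ec2 describe-vpcs \
  --filters "Name=tag:Name,Values=eksctl-{{ my_cluster_name }}-cluster/VPC" \
  --query "Vpcs[].VpcId"
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values={{ the_vpc_id }}" \
  --query "Subnets[].[SubnetId,CidrBlock,Tags[?Key=='Name']|[0].Value]" \
  --output table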

Check Nodes

  • The instance type is m5.large by default.
  • The AMI was (as of 13 Mar. 2021) ami-0f85d2eeb0bea62a7, EKS Kubernetes Worker AMI with AmazonLinux2 image, (k8s: 1.18.9, docker: 19.03.13ce-1.amzn2, containerd: 1.4.1-2.amzn2).
  • Global IPv4 addresses are assigned to the nodes by default.
    • The global IP is not the same as the NAT gateway's global IP.
  • Auto Scaling was configured.
    • Two Launch Templates. eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }} and eks-********-****-****-****-************
    • One Auto Scaling group. eks-********-****-****-****-************, same name as above.
    • 100% On-Demand, 0% Spot
    • Spot allocation strategy: Lowest price - diversified across the 2 lowest priced pools
  • Key Pair is registered.
  • Two private IPs are assigned (eth0 and eth1), both in the same subnet.
    • The subnet is one of public subnets like eksctl-{{ my_cluster_name }}-cluster/SubnetPublicEUCENTRAL1(A|B|C)
    • Checked by ssh ec2-user@{{ the_instance_IP }} -i .ssh/id_rsa
  • CloudWatch logging is not enabled by default.
  • 2 CloudFormation templates.
    • eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}: EKS Managed Nodes (SSH access: true) created by eksctl
    • eksctl-{{ my_cluster_name }}-cluster: EKS cluster (dedicated VPC: true, dedicated IAM: true) created and managed by eksctl.
    • “CloudFormation Designer” shows you comprehensive diagrams. (A quick kubectl cross-check follows this list.)
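
A quick way to cross-check several of the items above from the command line; the output includes INTERNAL-IP, EXTERNAL-IP, OS-IMAGE and the kubelet VERSION per node.

$ kubectl get nodes -o wide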

Private cluster

Note: if you want outbound traffic to go out with a fixed public IP, it seems you need a private cluster + NAT.

https://stackoverflow.com/questions/56974480/communicating-with-a-ip-whitelisted-service-with-eks

To do so, I added the --node-private-networking option at the end of the eksctl create cluster command.
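
Concretely, the full command is the same create command as above with the extra flag appended:

eksctl create cluster \
--name {{ my_cluster_name }} \
--version 1.18 \
--region eu-central-1 \
--nodegroup-name {{ my_nodegroup_name }} \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--ssh-access \
--ssh-public-key .ssh/id_rsa.pub \
--managed \
--node-private-networking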

Compared with the public cluster above, these are the same:

  • VPC created.
  • 6 subnets created.
  • 4 Route tables created.
  • Internet gateway created.
  • An Elastic IP created.
  • A NAT gateway created, with the same global IP as the Elastic IP.

Differences:

  • No global IP is assigned to the instances.
  • The instances are connected to the private subnets eksctl-{{ my_cluster_name }}-cluster/SubnetPrivateEUCENTRAL1(A|B|C).
  • An ENI with “Interface type: nat_gateway” is created, attached to one of the subnets.
    • The public IP of this ENI is the Elastic IP.

Run a sample deployment and get a shell with kubectl exec --stdin --tty {{ your_Nginx_pod_name }} -- /bin/bash. Then check your global IP with curl ifconfig.me; it returns the Elastic IP! (Note: I have not deployed a Service yet.)
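
As a command sequence (the pod name is a placeholder):

kubectl exec --stdin --tty {{ your_Nginx_pod_name }} -- /bin/bash
# inside the pod:
curl ifconfig.me   # prints the Elastic IP of the NAT gateway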

Then apply the Service (LoadBalancer) YAML from the “Create service (ELB) also” section below. An ELB is created, and its subnets are the 3 public subnets (one per AZ). Access the ELB domain and it returns the Nginx welcome page!

How to delete

From AWS console,

  • Detach {{ your_cluster_name }}
    • instances are terminated.
  • Delete cluster
  • Delete VPC

Or use a single command.

$ eksctl delete cluster --region=eu-central-1 --name={{ my_cluster_name }}
[]  eksctl version 0.30.0
[]  using region eu-central-1
[]  deleting EKS cluster "{{ my_cluster_name }}"
[]  kubeconfig has been updated
[]  2 sequential tasks: { delete nodegroup "{{ my_nodegroup_name }}", delete cluster control plane "{{ my_cluster_name }}" [async] }
[]  will delete stack "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}"
[]  waiting for stack "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}" to get deleted
[]  will delete stack "eksctl-{{ my_cluster_name }}-cluster"
[]  all cluster resources were deleted

How I created my test EKS environment in the end

I wanted to deploy with the node type t2.nano. Here is my command memo.

eksctl create cluster --name {{ my_cluster_name }} --version 1.18 --region eu-central-1 --nodegroup-name {{ my_nodegroup_name }} --nodes 3 --nodes-min 1 --nodes-max 4 --node-type t2.nano --ssh-access --ssh-public-key .ssh/id_rsa.pub --managed

Check deploy status

$ eksctl get cluster
NAME                    REGION
{{ my_cluster_name }}   eu-central-1
$ eksctl get nodegroup --cluster {{ my_cluster_name }}
CLUSTER                 NODEGROUP                   CREATED                 MIN SIZE        MAX SIZE        DESIRED CAPACITY            INSTANCE TYPE    IMAGE ID
{{ my_cluster_name }}   {{ my_nodegroup_name }}     2020-10-30T14:42:51Z    1               4               3

Scale down.

$ eksctl scale nodegroup --cluster {{ my_cluster_name }} --name {{ my_nodegroup_name }} --nodes 1
[]  scaling nodegroup stack "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}" in cluster eksctl-{{ my_cluster_name }}-cluster
[]  scaling nodegroup, desired capacity from 3 to 1

Created Kubernetes cluster

$ kubectl get ns
NAME              STATUS   AGE
default           Active   3d18h
kube-node-lease   Active   3d18h
kube-public       Active   3d18h
kube-system       Active   3d18h
$ kubectl get nodes
NAME                                              STATUS   ROLES    AGE     VERSION
ip-192-168-***-***.eu-central-1.compute.internal   Ready    <none>   3d18h   v1.18.8-eks-7c9bda

Total Cost at the moment

  • EKS: 0.10 USD per hour
  • EC2:
    • Amazon Elastic Compute Cloud NatGateway (almost free)
      • $0.052 per GB Data Processed by NAT Gateways
      • $0.052 per NAT Gateway Hour
    • EC2 instance: $0.0067 per On Demand Linux t2.nano Instance Hour
    • EBS: $0.119 per GB-month of General Purpose SSD (gp2) provisioned storage
  • Data Transfer: depends on your usage. (A rough monthly estimate follows below.)
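
As a rough back-of-envelope estimate for this 3-node t2.nano setup, assuming roughly 730 hours per month and the 20 GB default volume size of managed nodegroups (both are assumptions, not measured values):

  • EKS control plane: 0.10 USD × 730 h ≈ 73 USD
  • NAT gateway: 0.052 USD × 730 h ≈ 38 USD, plus data processing
  • EC2: 0.0067 USD × 730 h × 3 nodes ≈ 15 USD
  • EBS: 0.119 USD × 20 GB × 3 volumes ≈ 7 USD
  • Total: roughly 130–135 USD per month, before data transfer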

Deploy simple Nginx container

https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/

nginx-deployment.yml

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.3
        ports:
        - containerPort: 80

Deploy it with kubectl apply -f nginx-deployment.yml.

$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6b474476c4-8cjv4   1/1     Running   0          8m3s
nginx-deployment-6b474476c4-xldh2   0/1     Pending   0          8m3s

Create service (ELB) also

nginx-loadbalancer.yml

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx
  type: LoadBalancer

Deploy with kubectl apply -f nginx-loadbalancer.yml, and check.

$ kubectl get services
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP                                 PORT(S)        AGE
kubernetes      ClusterIP      10.100.0.1      <none>                                      443/TCP        3d18h
nginx-service   LoadBalancer   10.100.***.***  ******-***.eu-central-1.elb.amazonaws.com   80:30660/TCP   5m55s

At this point

  • An ELB was created automatically.
  • **********-*******.eu-central-1.elb.amazonaws.com is an auto-generated domain. You can access this domain, and it returns the Nginx welcome page.
    • The IP of the domain is not your VPC Elastic IP created above.
    • No additional Elastic IP was assigned.
  • Traffic is automatically passed to the pods.
  • The subnet of the service is in three AZs (public subnet).
  • Listener: the load-balancer port is 80, and the instance port is assigned automatically (by Kubernetes).
  • If you check your global IP from inside the Pods, it returns your node’s Global IP.
    • kubectl exec -it nginx-deployment-*********-***** -- /bin/bash
  • The port 30660 is a randomly assigned NodePort. (A quick check is sketched below.)
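
To confirm the listener mapping and reach the service from outside, something like this should do; the hostname is the auto-generated one shown by kubectl get services above.

$ kubectl describe service nginx-service   # shows Port, NodePort and LoadBalancer Ingress
$ curl http://{{ the_ELB_hostname }}       # returns the Nginx welcome page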

service (ELB) cost

  • $0.008 per GB Data Processed by the LoadBalancer
  • $0.030 per LoadBalancer-hour

Public

  1. Go to Route 53.
  2. Add an A record so that yourdomain.com points to ******-***.eu-central-1.elb.amazonaws.com: Create record -> Simple routing -> Define simple record -> Value/Route traffic to: Alias to Application and Classic Load Balancer -> select the region -> choose the load balancer. (A quick check is sketched below.)
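
Once the record has propagated, it can be checked from the shell; the domain is a placeholder.

dig +short yourdomain.com    # should resolve to the ELB's IP addresses
curl http://yourdomain.com   # should return the Nginx welcome page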

What’s next?

  • “Creating your own container” is your next task. kubectl does not build containers; build your own image from a Dockerfile.

Memo: not enough resources

Deployment does not have minimum availability.

https://devops.stackexchange.com/questions/3980/what-does-does-not-have-minimum-availability-in-k8s-mean

This can be caused by insufficient CPU or memory, but that is not always the case.