Install aws-cli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
Check.
$ aws --version
aws-cli/2.0.61 Python/3.7.3 Linux/4.19.104-microsoft-standard exe/x86_64.ubuntu.20
$ aws configure
AWS Access Key ID [None]: AKIxxxxxxxxxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: eu-central-1
Default output format [None]: json
The API credentials are stored in ~/.aws/credentials.
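For reference, the file uses the standard INI layout (values masked here, default profile assumed):
$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKIxxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxx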
Check.
$ aws configure list
      Name                    Value             Type        Location
      ----                    -----             ----        --------
   profile                <not set>             None        None
access_key     ********************   shared-credentials-file
secret_key     ********************   shared-credentials-file
    region             eu-central-1      config-file        ~/.aws/config
https://docs.aws.amazon.com/de_de/eks/latest/userguide/getting-started-eksctl.html
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
Check.
$ eksctl version
0.30.0
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.9/2020-08-04/bin/linux/amd64/kubectl
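The downloaded binary is not executable and not on the PATH yet; the usual follow-up (matching the location reported by which below) is:
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl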
$ which kubectl
/usr/local/bin/kubectl
(Pass) Check the downloaded kubectl binary's checksum.
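If you do want to verify it: the EKS docs publish a SHA-256 checksum next to the binary (same URL with .sha256 appended), so a check could look like this:
curl -o kubectl.sha256 https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.9/2020-08-04/bin/linux/amd64/kubectl.sha256
sha256sum kubectl
(Compare the printed hash with the contents of kubectl.sha256.)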
cf) The SSH key must be RSA. When I tried with my ECDSA key, it returned the following error.
Error: computing fingerprint for key ".ssh/id_ecdsa.pub": error computing fingerprint for SSH public key: Unexpected type of SSH key ("*ssh.ecdsaPublicKey"); AWS can only import RSA keys
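If you only have an ECDSA key, generating a dedicated RSA key works around this (the file name here is my own choice):
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_eks
Then pass --ssh-public-key ~/.ssh/id_rsa_eks.pub to eksctl.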
Create
eksctl create cluster \
--name {{ your_cluster_name }} \
--version 1.18 \
--region eu-central-1 \
--nodegroup-name {{ your_nodegroup_name }} \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--ssh-access \
--ssh-public-key .ssh/id_rsa.pub \
--managed
Here is my full command log.
$ eksctl create cluster --name {{ my_cluster_name }} --version 1.18 --region eu-central-1 --nodegroup-name {{ my_nodegroup_name }} --nodes 3 --nodes-min 1 --nodes-max 4 --ssh-access --ssh-public-key .ssh/id_rsa.pub --managed
[ℹ] eksctl version 0.30.0
[ℹ] using region eu-central-1
[ℹ] setting availability zones to [eu-central-1a eu-central-1b eu-central-1c]
[ℹ] subnets for eu-central-1a - public:192.168.0.0/19 private:192.168.***.0/19
[ℹ] subnets for eu-central-1b - public:192.168.32.0/19 private:192.168.***.0/19
[ℹ] subnets for eu-central-1c - public:192.168.64.0/19 private:192.168.***.0/19
[ℹ] using SSH public key ".ssh/id_rsa.pub" as "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}-42:3a:75:ad:45:bb:3f:bf:a4:bd:ca:a5:17:e2:44:aa"
[ℹ] using Kubernetes version 1.18
[ℹ] creating EKS cluster "{{ my_cluster_name }}" in "eu-central-1" region with managed nodes
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-central-1 --cluster={{ my_cluster_name }}'
[ℹ] CloudWatch logging will not be enabled for cluster "{{ my_cluster_name }}" in "eu-central-1"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=eu-central-1 --cluster={{ my_cluster_name }}'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "{{ my_cluster_name }}" in "eu-central-1"
[ℹ] 2 sequential tasks: { create cluster control plane "{{ my_cluster_name }}", 2 sequential sub-tasks: { no tasks, create managed nodegroup "{{ my_nodegroup_name }}" } }
[ℹ] building cluster stack "eksctl-{{ my_cluster_name }}-cluster"
[ℹ] deploying stack "eksctl-{{ my_cluster_name }}-cluster"
[ℹ] building managed nodegroup stack "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}"
[ℹ] deploying stack "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}"
[ℹ] waiting for the control plane availability...
[✔] saved kubeconfig as "/home/atlex/.kube/config"
[ℹ] no tasks
[✔] all EKS cluster resources for "{{ my_cluster_name }}" have been created
[ℹ] nodegroup "{{ my_nodegroup_name }}" has 3 node(s)
[ℹ] node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[ℹ] node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[ℹ] node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[ℹ] waiting for at least 1 node(s) to become ready in "{{ my_nodegroup_name }}"
[ℹ] nodegroup "{{ my_nodegroup_name }}" has 3 node(s)
[ℹ] node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[ℹ] node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[ℹ] node "ip-192-168-***-***.eu-central-1.compute.internal" is ready
[ℹ] kubectl command should work with "/home/atlex/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "{{ my_cluster_name }}" in "eu-central-1" region is ready
In my case, it took around 20 mins.
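Before going further, a quick sanity check that kubectl is talking to the new cluster (commands only, output omitted):
$ kubectl config current-context
$ kubectl get nodes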
You can also create the cluster from a YAML file.
$ cat cluster.yml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster-name
  region: eu-central-1
  version: "1.18"
managedNodeGroups:
  - name: my-nodegroup-name
    instanceType: t2.nano
    desiredCapacity: 3
    minSize: 1
    maxSize: 4
    ssh:
      publicKeyPath: .ssh/id_rsa.pub
$ eksctl create cluster -f cluster.yml
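A side benefit of the config-file approach is that the same file can be reused for cleanup later; as far as I know, eksctl accepts -f for deletion as well:
$ eksctl delete cluster -f cluster.yml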
After creation you can find the following resources in the AWS console.

VPC and networking:
- vpc-************ / eksctl-{{ my_cluster_name }}-cluster/VPC was created, together with the subnets eksctl-{{ my_cluster_name }}-cluster/SubnetPrivateEUCENTRAL1(A|B|C) and eksctl-{{ my_cluster_name }}-cluster/SubnetPublicEUCENTRAL1(A|B|C).
- eksctl-{{ my_cluster_name }}-cluster/InternetGateway, eksctl-{{ my_cluster_name }}-cluster/NATIP (an Elastic IP) and eksctl-{{ my_cluster_name }}-cluster/NATGateway were created.
- eksctl-{{ my_cluster_name }}-cluster/PublicRouteTable: contains the 3 public subnets eksctl-{{ my_cluster_name }}-cluster/SubnetPublicEUCENTRAL1(A|B|C). Destination 0.0.0.0/0 is targeted at the internet gateway. For example, a subnet CIDR is 192.168.0.0/19, while the local route is 192.168.0.0/16 local.
- eksctl-{{ my_cluster_name }}-cluster/PrivateRouteTableEUCENTRAL1(A|B|C): destination 0.0.0.0/0 is targeted at the NAT gateway.

Security groups:
- eksctl-{{ my_cluster_name }}-cluster/ClusterSharedNodeSecurityGroup: communication between all nodes in the cluster.
- eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}/SSH: allows SSH (port 22) access.
- eks-cluster-sg-{{ my_cluster_name }}-**********: EKS-created security group applied to the ENIs attached to the EKS control plane master nodes, as well as any managed workloads.
- eksctl-{{ my_cluster_name }}-cluster/ControlPlaneSecurityGroup: communication between the control plane and worker nodegroups.

EC2 worker nodes:
- Instance type is m5.large by default; override it with --node-type t2.nano.
- AMI is ami-0f85d2eeb0bea62a7, the EKS Kubernetes Worker AMI with AmazonLinux2 image (k8s: 1.18.9, docker: 19.03.13ce-1.amzn2, containerd: 1.4.1-2.amzn2).
- eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }} and eks-********-****-****-****-************ were created, plus eks-********-****-****-****-************ (same name as above).
- Each node has network interfaces eth0 and eth1 in the same subnet, eksctl-{{ my_cluster_name }}-cluster/SubnetPublicEUCENTRAL1(A|B|C).
- You can SSH into a node: ssh ec2-user@{{ the_instance_IP }} -i .ssh/id_rsa

CloudFormation stacks:
- eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}: EKS Managed Nodes (SSH access: true) created by eksctl
- eksctl-{{ my_cluster_name }}-cluster: EKS cluster (dedicated VPC: true, dedicated IAM: true) created and managed by eksctl.
Note: if you want outbound traffic to leave with a fixed public IP, it seems we need a private nodegroup plus NAT.
https://stackoverflow.com/questions/56974480/communicating-with-a-ip-whitelisted-service-with-eks
To do so, I add the --node-private-networking option at the end of the eksctl create cluster command.
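If you go the config-file route instead, the equivalent switch (as far as I understand the eksctl schema) is privateNetworking on the nodegroup:
managedNodeGroups:
  - name: my-nodegroup-name
    instanceType: t2.nano
    desiredCapacity: 3
    minSize: 1
    maxSize: 4
    privateNetworking: true
    ssh:
      publicKeyPath: .ssh/id_rsa.pub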
Compared with the default setup, most of the created resources are the same. The differences: the worker nodes are now placed in the private subnets eksctl-{{ my_cluster_name }}-cluster/SubnetPrivateEUCENTRAL1(A|B|C), and their outbound route (0.0.0.0/0) goes through the NAT gateway.
Run a sample deployment and get a shell with kubectl exec --stdin --tty {{ your_Nginx_pod_name }} -- /bin/bash. Now check your global IP, for example with curl ifconfig.me, and it returns the Elastic IP (the NATIP created above)! (Note: no Service has been deployed yet.) Then apply the Service of type LoadBalancer (the nginx-loadbalancer.yml shown later in this post). An ELB is created whose availability zones are the 3 public subnets. Access the ELB domain and it returns the Nginx welcome page!
Delete
You can delete {{ your_cluster_name }} from the AWS console, or use a single command.
$ eksctl delete cluster --region=eu-central-1 --name={{ my_cluster_name }}
[ℹ] eksctl version 0.30.0
[ℹ] using region eu-central-1
[ℹ] deleting EKS cluster "{{ my_cluster_name }}"
[✔] kubeconfig has been updated
[ℹ] 2 sequential tasks: { delete nodegroup "{{ my_nodegroup_name }}", delete cluster control plane "{{ my_cluster_name }}" [async] }
[ℹ] will delete stack "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}"
[ℹ] waiting for stack "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}" to get deleted
[ℹ] will delete stack "eksctl-{{ my_cluster_name }}-cluster"
[✔] all cluster resources were deleted
I want to deploy with node type t2.nano. Here is my command memo.
eksctl create cluster --name {{ my_cluster_name }} --version 1.18 --region eu-central-1 --nodegroup-name {{ my_nodegroup_name }} --nodes 3 --nodes-min 1 --nodes-max 4 --node-type t2.nano --ssh-access --ssh-public-key .ssh/id_rsa.pub --managed
$ eksctl get cluster
NAME REGION
{{ my_cluster_name }} eu-central-1
$ eksctl get nodegroup --cluster {{ my_cluster_name }}
CLUSTER                NODEGROUP                 CREATED               MIN SIZE  MAX SIZE  DESIRED CAPACITY  INSTANCE TYPE  IMAGE ID
{{ my_cluster_name }}  {{ my_nodegroup_name }}   2020-10-30T14:42:51Z  1         4         3
Scale down.
$ eksctl scale nodegroup --cluster {{ my_cluster_name }} --name {{ my_nodegroup_name }} --nodes 1
[ℹ] scaling nodegroup stack "eksctl-{{ my_cluster_name }}-nodegroup-{{ my_nodegroup_name }}" in cluster eksctl-{{ my_cluster_name }}-cluster
[ℹ] scaling nodegroup, desired capacity from 3 to 1
$ kubectl get ns
NAME STATUS AGE
default Active 3d18h
kube-node-lease Active 3d18h
kube-public Active 3d18h
kube-system Active 3d18h
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-***-***.eu-central-1.compute.internal Ready <none> 3d18h v1.18.8-eks-7c9bda
https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/
nginx-deployment.yml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.3
        ports:
        - containerPort: 80
Deploy it with kubectl apply -f nginx-deployment.yml.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-6b474476c4-8cjv4 1/1 Running 0 8m3s
nginx-deployment-6b474476c4-xldh2 0/1 Pending 0 8m3s
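One of the two replicas stays Pending here. To see the scheduler's reasoning, describe the pod (name taken from the listing above) and read the Events section at the end of the output:
$ kubectl describe pod nginx-deployment-6b474476c4-xldh2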
Create a Service (ELB) as well, nginx-loadbalancer.yml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx
  type: LoadBalancer
Deploy with kubectl apply -f nginx-loadbalancer.yml, and check.
$ kubectl get services
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP                                 PORT(S)        AGE
kubernetes      ClusterIP      10.100.0.1       <none>                                      443/TCP        3d18h
nginx-service   LoadBalancer   10.100.***.***   ******-***.eu-central-1.elb.amazonaws.com   80:30660/TCP   5m55s
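A quick check from the shell (the ELB may need a minute or two before it starts answering):
$ curl -I http://******-***.eu-central-1.elb.amazonaws.com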
At this point, **********-*******.eu-central-1.elb.amazonaws.com is an auto-generated domain. You can access this domain and it returns the Nginx welcome page.
You can also get a shell inside a pod with kubectl exec -it nginx-deployment-*********-***** -- /bin/bash.
The node port 30660 in the PORT(S) column is assigned randomly.
To serve the page under your own domain, point a DNS record at the ELB, e.g. yourdomain.com A ******-***.eu-central-1.elb.amazonaws.com. In Route 53: Create record -> Simple routing -> Define simple record -> Value/Route traffic to: Alias to Application and Classic Load Balancer + select the region + choose the load balancer.
About the Pending pod above: this can be caused by insufficient CPU or memory, but that is not always the case.
Finally, kubectl does not build containers. Create your own container image (from a Dockerfile).