My environment

Pop!_OS

$ uname -a
Linux system76 5.11.0-7620-generic #21~1624379747~20.10~3abeff8-Ubuntu SMP Wed Jun 23 02:23:59 UTC x86_64 x86_64 x86_64 GNU/Linux

Set up prerequisites for kind

Install the latest Go

cd /usr/local/src
sudo curl -L -O https://golang.org/dl/go1.16.5.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.16.5.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin

To make the change persistent, add this to /etc/profile:

PATH=/usr/local/go/bin:$PATH

Rootless Docker

Refer to my post.
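With Go on the PATH and rootless Docker configured, kind itself can be installed with the Go toolchain. A minimal sketch (the kind version v0.11.1 is my assumption, not from the original post):

# Install kind via the Go toolchain (version is an assumption)
go install sigs.k8s.io/kind@v0.11.1
export PATH=$PATH:$(go env GOPATH)/bin

# Create a test cluster; once kubectl is installed (next section) you can inspect it
kind create cluster
kubectl cluster-info --context kind-kind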
Install kubectl

https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-using-native-package-management
sudo apt install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
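The linked page then finishes the installation; the remaining steps are roughly:

sudo apt-get update
sudo apt-get install -y kubectl

# Verify the install
kubectl version --client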
My environment

Pop!_OS

$ uname -a
Linux system76 5.11.0-7620-generic #21~1624379747~20.10~3abeff8-Ubuntu SMP Wed Jun 23 02:23:59 UTC x86_64 x86_64 x86_64 GNU/Linux

Configure rootless Docker

https://docs.docker.com/engine/security/rootless/
$ dockerd-rootless-setuptool.sh install
[ERROR] Missing system requirements. Run the following commands to
[ERROR] install the requirements and run this tool again.
########## BEGIN ##########
sudo sh -eux <<EOF
# Install newuidmap & newgidmap binaries
apt-get install -y uidmap
EOF
########## END ##########

OK... Run the command above and try again.
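After installing uidmap and re-running the setup tool, the Docker CLI still has to be pointed at the rootless daemon. A sketch of the usual follow-up steps, assuming UID 1000 (the socket path depends on your user):

# Run the rootless daemon as a user service and keep it alive after logout
systemctl --user enable --now docker
sudo loginctl enable-linger $(whoami)

# Point the client at the rootless socket (UID 1000 is an assumption)
export DOCKER_HOST=unix:///run/user/1000/docker.sock
docker run --rm hello-world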
Motivation

I want to mount an Nginx configuration as a ConfigMap.
The simplest example

Original

Here is the default /etc/nginx/conf.d/default.conf in the Nginx Docker image (comments were removed).
server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

Overwrite - return a message

I've changed location / to return the message "Have a nice day!"
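The exact manifest is not reproduced here, but a minimal sketch of the idea (the resource names and the Pod spec below are my own assumptions): the modified default.conf is stored in a ConfigMap and mounted over /etc/nginx/conf.d, so Nginx returns the message instead of the default page.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  default.conf: |
    server {
        listen 80;
        server_name localhost;
        location / {
            return 200 "Have a nice day!\n";
        }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: conf
      mountPath: /etc/nginx/conf.d   # replaces the directory contents with the ConfigMap keys
  volumes:
  - name: conf
    configMap:
      name: nginx-conf

After kubectl apply -f and a kubectl port-forward to the Pod, curl against the forwarded port should return the message.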
What is Helm

https://www.youtube.com/watch?v=fy8SHvNZGeE
Install

https://helm.sh/docs/intro/install/
From the pre-built binary (I personally prefer this way)

curl https://get.helm.sh/helm-v3.8.2-linux-amd64.tar.gz -o helm.tar.gz
tar -xzvf helm.tar.gz
sudo mv ./linux-amd64/helm /usr/local/bin/
rm -rf ./linux-amd64

Via apt

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

Use

Quickstart
https://helm.sh/docs/intro/quickstart/
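Following the linked quickstart, a typical first run could look like this (the chart and the release name my-nginx are my own choices, not from the original post):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a chart as a release, list it, then clean up
helm install my-nginx bitnami/nginx
helm list
helm uninstall my-nginx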
What I did

Install a k8s worker node on an Ubuntu 20.04 VM server. Most parts are similar to the master node installation instructions.
Environments

Ubuntu 20.04

Requirement

2 CPUs required. Swap must be disabled:

sudo swapoff -a

Install Docker

sudo apt update
sudo apt install -y docker.io
sudo systemctl enable docker

Set up worker node

Network configuration before installing
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
Change kernel parameters and open the required ports for the worker node.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
sudo modprobe br_netfilter
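The worker-side port list is cut off above; based on the linked "check required ports" page, a sketch of the remaining steps for a worker, after installing kubeadm and kubelet as in the master-node section below (the join parameters are placeholders printed by kubeadm init on the master):

sudo ufw allow 10250/tcp        # Kubelet API
sudo ufw allow 30000:32767/tcp  # NodePort Services

# Join the cluster using the token from the master
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>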
What I did

Install a k8s master node on an Ubuntu 20.04 VM server.
Environments

Ubuntu 20.04

Requirement

2 CPUs required. Swap must be disabled:

sudo swapoff -a

Install Docker

sudo apt update
sudo apt install -y docker.io
sudo systemctl enable docker

Set up master node

Network configuration before installing
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
Change kernel parameters and open the required ports for the master node.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
sudo modprobe br_netfilter

sudo ufw allow 8080/tcp         # Kubernetes API server
sudo ufw allow 64430:64439/tcp  # Kubernetes API server
sudo ufw allow 2379:2380/tcp    # etcd server client API
sudo ufw allow 10250/tcp        # Kubelet API
sudo ufw allow 10251/tcp        # kube-scheduler
sudo ufw allow 10252/tcp        # kube-controller-manager
sudo ufw allow 6443/tcp         # Kubernetes API server

Install k8s

I use the kubernetes-xenial repository on focal, but as of 2020/09/17 and 2020/06/01 I can't find any issue with it.
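The package-install commands themselves are not shown above; a sketch following the official kubeadm install doc (the pod-network CIDR is my assumption, matching Flannel's default):

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Initialize the control plane (CIDR is an assumption)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16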
Concepts

We can regard a Secret as an encrypted ConfigMap. Strictly speaking, Secret values are only base64-encoded by default; actually enabling encryption requires configuring encryption at rest on the API server.

Future scope: how to encrypt Secrets at rest.

Secret

We should store the value base64-encoded. Suppose we want to store the secret value this_is_value with the key key_1.
First, we should encode the secret value as follows.
$ echo -n "this_is_value" | base64 dGhpc19pc192YWx1ZQ== dGhpc19pc192YWx1ZQ== is base64 encoded this_is_value.
Now, make a YAML file.
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  key_1: dGhpc19pc192YWx1ZQ==

Even if we try to read the value with kubectl describe secrets, it doesn't return the credential.
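A quick way to check that behavior (the file name is my own; output abbreviated, shown as I would expect it):

$ kubectl apply -f my-secret.yaml
$ kubectl describe secrets my-secret
...
Data
====
key_1:  13 bytes        # only the size is shown, not the value

The value is still only base64-encoded, though, so anyone with read access can decode it:

$ kubectl get secret my-secret -o jsonpath='{.data.key_1}' | base64 -d
this_is_value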
Intro: override the ENTRYPOINT of a Docker image in k8s

We can override the ENTRYPOINT of a Docker image with the command field. CMD can't be overridden by command; we have to use the args field instead.
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
Here is an official sample.
pods/commands.yaml

apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
  restartPolicy: OnFailure

Inject environment variable

In the containers section, add env.
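A sketch of what that could look like (the variable name and value are my own, not from the original):

apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  containers:
  - name: env-demo-container
    image: debian
    command: ["printenv"]
    args: ["GREETING"]
    env:
    - name: GREETING          # injected environment variable
      value: "Hello from env"
  restartPolicy: OnFailure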
ReplicationController

Nothing to add here beyond the official document.
https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/
It enables HA, auto-scaling, and multi-node control (managing Pods across nodes). This is the first fancy feature in k8s!!
ReplicaSet

ReplicaSet is, roughly, a newer version of ReplicationController.
https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#replicaset
Instead of ReplicationController or ReplicaSet, we create a "Deployment" object to manage Pods. Actually, a Deployment uses a ReplicaSet under the hood: when we create a Deployment, it creates a ReplicaSet automatically.
Labels and Selectors

Through labels, a selector picks out which Pods should be monitored and managed by the controller that owns the selector.
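A minimal Deployment sketch that ties these ideas together (name, image, and replica count are my own choices): the selector's matchLabels must match the Pod template's labels, and the Deployment creates a ReplicaSet for these Pods automatically.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx        # must match the template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80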
Service type

https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
ClusterIP
NodePort
LoadBalancer
ExternalName

ClusterIP

https://d33wubrfki0l68.cloudfront.net/e351b830334b8622a700a8da6568cb081c464a9b/13020/images/docs/services-userspace-overview.svg
A cluster-internal address, not reachable from outside the cluster. Use it first, e.g. for an ingress test.
NodePort

Binds a port on the node and forwards it to the Service.
targetPort: the port on the Pod.
port: the Service's own port (toward the Deployment's Pods).
nodePort: the port opened on the node.
selector: the labels of the Pods to route to.

It is called "Node"Port, but a NodePort can proxy to Pods on other nodes: the port is opened on every worker node.
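A NodePort Service sketch using those fields (the names and port numbers are my own; nodePort must fall in the default 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx          # labels of the Pods to route to
  ports:
  - port: 80            # the Service's own port
    targetPort: 80      # the port on the Pod
    nodePort: 30080     # opened on every worker node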
LoadBalancer

For cloud providers.