$ uname -a
Linux system76 5.11.0-7620-generic #21~1624379747~20.10~3abeff8-Ubuntu SMP Wed Jun 23 02:23:59 UTC x86_64 x86_64 x86_64 GNU/Linux
cd /usr/local/src
sudo curl -L -O https://golang.org/dl/go1.16.5.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.16.5.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
To make this persistent, add Go to the PATH in /etc/profile as well:
PATH=/usr/local/go/bin:$PATH
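Quick check that the toolchain is picked up:
$ go version
# should report go1.16.5 linux/amd64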
sudo apt install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubectl
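A quick sanity check of the kubectl install (client only, since there is no cluster yet):
$ kubectl version --client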
# check (this is run after installing kind below)
sudo kind create cluster

Install kind following the quick-start guide: https://kind.sigs.k8s.io/docs/user/quick-start/
cd /usr/local/src
sudo curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.12.0/kind-linux-amd64
sudo chmod +x ./kind
sudo mv ./kind /usr/bin
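Check that the binary is on the PATH:
$ kind version
# should print the kind version, e.g. v0.12.0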
kind works when run as the root user, but as a regular user it fails:
$ kind create cluster
ERROR: failed to create cluster: running kind with rootless provider requires cgroup v2, see https://kind.sigs.k8s.io/docs/user/rootless/
There was no /etc/default/grub in Pop!_OS, so the kernel parameter for cgroup v2 cannot be added the usual GRUB way…
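For reference, on a GRUB-based distribution the parameter would normally be appended to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and activated with update-grub, roughly:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash systemd.unified_cgroup_hierarchy=1"
$ sudo update-grub
That route is not available here, since Pop!_OS boots with systemd-boot.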
Check whether the system supports cgroup v2.
$ grep cgroup /proc/filesystems
nodev cgroup
nodev cgroup2
OK, cgroup2 is supported.
https://github.com/opencontainers/runc/blob/master/docs/cgroup-v2.md
$ sudo apt install -y dbus-user-session
$ systemctl --user start dbus
https://github.com/opencontainers/runc/blob/master/docs/cgroup-v2.md#am-i-using-cgroup-v2
Am I using cgroup v2? Yes, if /sys/fs/cgroup/cgroup.controllers is present.
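A quick way to run that check:
$ test -f /sys/fs/cgroup/cgroup.controllers && echo "cgroup v2" || echo "cgroup v1"
cgroup v1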
Still no change… the file is not present, so this system is still on cgroup v1.
https://www.kernel.org/doc/html/v5.5/admin-guide/cgroup-v2.html
$ df -h
Filesystem             Size  Used Avail Use% Mounted on
tmpfs                  1.6G  2.0M  1.6G   1% /run
/dev/mapper/data-root  450G  408G   19G  96% /
tmpfs                  7.8G  459M  7.3G   6% /dev/shm
tmpfs                  5.0M  4.0K  5.0M   1% /run/lock
tmpfs                  4.0M     0  4.0M   0% /sys/fs/cgroup
/dev/nvme0n1p1         511M  347M  165M  68% /boot/efi
/dev/nvme0n1p2         4.0G  2.3G  1.8G  56% /recovery
tmpfs                  1.6G  160K  1.6G   1% /run/user/1000
So a cgroup filesystem is mounted on /sys/fs/cgroup, but it is still the v1 layout.
https://wiki.archlinux.org/title/cgroups
I tried putting the parameter into /etc/sysctl.conf:
systemd.unified_cgroup_hierarchy=1
$ sudo sysctl -p
sysctl: cannot stat /proc/sys/systemd/unified_cgroup_hierarchy: No such file or directory
Of course: systemd.unified_cgroup_hierarchy is a kernel command-line parameter, not a sysctl.
Next, I edited the kernel command-line file directly: /boot/efi/EFI/Pop_OS-ee80fe8b-8d46-4f70-816c-b9619e87c840/cmdline
# From
# initrd=\EFI\Pop_OS-ee80fe8b-8d46-4f70-816c-b9619e87c840\initrd.img root=UUID={{ My_UUID }} ro quiet loglevel=0 systemd.show_status=false splash
# To
initrd=\EFI\Pop_OS-ee80fe8b-8d46-4f70-816c-b9619e87c840\initrd.img root=UUID={{ My_UUID }} ro quiet loglevel=0 systemd.show_status=false systemd.unified_cgroup_hierarchy=1 splash
Reboot, then check:
# cat /proc/cmdline
initrd=\EFI\Pop_OS-ee80fe8b-8d46-4f70-816c-b9619e87c840\initrd.img root=UUID={{ My_UUID }} ro quiet loglevel=0 systemd.show_status=false splash
No change…
$ bootctl status
I rebooted the PC again, but still no change…
Pop!_OS manages the kernel's command-line parameters with kernelstub (https://github.com/isantop/kernelstub).
$ sudo vim /etc/kernelstub/configuration
Add systemd.unified_cgroup_hierarchy=1 to the user.kernel_options array in the JSON file, then run kernelstub.
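A sketch of the relevant part of my /etc/kernelstub/configuration after the edit (other keys unchanged and omitted here; the surrounding options are the ones already on the command line):
  "user": {
    "kernel_options": [
      "quiet",
      "loglevel=0",
      "systemd.show_status=false",
      "systemd.unified_cgroup_hierarchy=1",
      "splash"
    ],
    ...
  }
$ sudo kernelstub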
Reboot the PC and check /proc/cmdline. Changed!
/sys/fs/cgroup/cgroup.controllers now exists!! My laptop uses cgroup v2.
cgroup v2 is enabled, but kind still fails:
$ kind create cluster
ERROR: failed to create cluster: running kind with rootless provider requires setting systemd property "Delegate=yes", see https://kind.sigs.k8s.io/docs/user/rootless/
Follow https://kind.sigs.k8s.io/docs/user/rootless/#host-requirements.
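One way to create the Delegate=yes drop-in for user sessions:
$ sudo mkdir -p /etc/systemd/system/user@.service.d
$ printf '[Service]\nDelegate=yes\n' | sudo tee /etc/systemd/system/user@.service.d/delegate.conf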
$ cat /etc/systemd/system/user@.service.d/delegate.conf
[Service]
Delegate=yes
$ sudo systemctl daemon-reload
$ cat /etc/modules-load.d/iptables.conf
iptable_nat
ip6table_nat
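modules-load.d only takes effect at boot; to load the modules immediately without rebooting:
$ sudo modprobe iptable_nat
$ sudo modprobe ip6table_nat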
Done!!
Check that kind works:
$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Thanks for using kind! 😊
$ kind delete cluster
Next, a multi-node cluster. kind-example-config.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
$ kind create cluster --config kind-example-config.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c898ef1dcb6f kindest/node:v1.21.1 "/usr/local/bin/entr…" About a minute ago Up About a minute kind-worker2
a1a775dd026f kindest/node:v1.21.1 "/usr/local/bin/entr…" About a minute ago Up About a minute 127.0.0.1:35205->6443/tcp kind-control-plane
94d8f85eaeea kindest/node:v1.21.1 "/usr/local/bin/entr…" About a minute ago Up About a minute kind-worker
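The same three nodes are visible from the Kubernetes side:
$ kubectl get nodes
# expect kind-control-plane, kind-worker and kind-worker2, all reaching Ready after a short while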
https://kind.sigs.k8s.io/docs/user/ingress/
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
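The controller takes a little while to start; the ingress guide suggests waiting for the controller pod to become ready:
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s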
Test
kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml
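Once the pods from usage.yaml are running, the routes can be exercised from the host (the /foo and /bar paths come from that manifest):
curl localhost/foo   # answered by the foo app
curl localhost/bar   # answered by the bar app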
$ kind create cluster --config kind-example-config.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✗ Preparing nodes 📦 📦 📦
ERROR: failed to create cluster: docker run error: command "docker run --hostname kind-control-plane --name kind-control-plane --label io.x-k8s.kind.role=control-plane --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --detach --tty --label io.x-k8s.kind.cluster=kind --net kind --restart=on-failure:1 --init=false --publish=0.0.0.0:80:80/TCP --publish=0.0.0.0:443:443/TCP --publish=127.0.0.1:44365:6443/TCP -e KUBECONFIG=/etc/kubernetes/admin.conf kindest/node:v1.21.1@sha256:69860bda5563ac81e3c0057d654b5253219618a22ec3a346306239bba8cfa1a6" failed with error: exit status 126
Command Output: 20f156defa5d6991674662c3284a6f07de6c957fc1c328fe131b1650ae2b0db9
docker: Error response from daemon: driver failed programming external connectivity on endpoint kind-control-plane (480c59fb678256886d547a0fbda23c9afb90693dbe78f6bce0b39aa5b5bd8f99): Error starting userland proxy: error while calling PortManager.AddPort(): cannot expose privileged port 443, you can add 'net.ipv4.ip_unprivileged_port_start=443' to /etc/sysctl.conf (currently 1024), or set CAP_NET_BIND_SERVICE on rootlesskit binary, or choose a larger port number (>= 1024): listen tcp4 0.0.0.0:443: bind: permission denied.
As the message says, add the following line to /etc/sysctl.conf (using 80 rather than 443, so that both port 80 and 443 can be bound without privileges):
net.ipv4.ip_unprivileged_port_start=80
and reload it with sysctl -p /etc/sysctl.conf.
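Concretely:
$ echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p /etc/sysctl.conf
$ kind create cluster --config kind-example-config.yaml   # retry; publishing ports 80/443 should no longer be rejected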
This change could be insecure. Make sure your kind environment is local or otherwise protected.
For the record, the node containers all run the kindest/node image.