What’s lsyncd
My understanding is that it’s a kind of rsync daemon: it watches local directories (via inotify) and syncs changes to destination servers.
Install
On Ubuntu. Install on the source server (no need to install on destination servers).

```shell
apt update
apt install lsyncd
```

Set the kernel parameter fs.inotify.max_user_watches.
/etc/sysctl.conf:

```
# for lsyncd
fs.inotify.max_user_watches = 8192000
```

Before running
We should configure the firewall so that the source server can SSH to the destination servers.
Here are my SSH snippets.
Use (sync)
Configuration
Here is a configuration example.

```shell
mkdir /etc/lsyncd
vim /etc/lsyncd/lsyncd.
```
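Since the file name above is cut off, here is a minimal sketch of what the config could look like, written via a heredoc. The file name `lsyncd.conf.lua`, the source path, and the destination host/user are all assumptions for illustration — adjust them to your setup.

```shell
# Minimal lsyncd config sketch.
# ASSUMPTIONS: file name lsyncd.conf.lua, source /var/www/,
# destination host deploy@192.168.100.20 -- replace with your values.
sudo tee /etc/lsyncd/lsyncd.conf.lua > /dev/null <<'EOF'
settings {
    logfile    = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd-status.log",
}

sync {
    default.rsyncssh,
    source    = "/var/www/",             -- directory to watch on the source server
    host      = "deploy@192.168.100.20", -- assumed destination user@host
    targetdir = "/var/www/",             -- path on the destination server
}
EOF
```

Because `default.rsyncssh` works over SSH, this is why the source server needs SSH access to the destinations.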
Overview of the task
Here is the official tutorial page.
Here is an overview of the tasks.
- Create a role that contains two policies:
  - AmazonAPIGatewayPushToCloudWatchLogs
  - AmazonS3ReadOnlyAccess
- Create paths with {} braces. This part is treated as a variable, e.g. {folder}.
- Create method requests for the paths.
  - Authorization: AWS_IAM
  - Request path: {folder}

Pre-setting up
You should make your own IAM role which contains the two policies below.
AmazonAPIGatewayPushToCloudWatchLogs

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents",
        "logs:GetLogEvents",
        "logs:FilterLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```

AmazonS3ReadOnlyAccess
What is Helm https://www.youtube.com/watch?v=fy8SHvNZGeE
Install https://helm.sh/docs/intro/install/
From the prebuilt binary - I prefer this way personally

```shell
curl https://get.helm.sh/helm-v3.8.2-linux-amd64.tar.gz -o helm.tar.gz
tar -xzvf helm.tar.gz
sudo mv ./linux-amd64/helm /usr/local/bin/
rm -rf ./linux-amd64
```

Via apt

```shell
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
```

Use
Quickstart
https://helm.sh/docs/intro/quickstart/
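As a quick smoke test after installing, the quickstart flow looks roughly like this. The repo and chart names are the ones used in the official quickstart, not anything specific to this setup, and `helm install` needs a working kube context:

```shell
# Add a chart repository and install a chart (per the Helm quickstart)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

helm install my-nginx bitnami/nginx   # deploys into the current kube context
helm list                             # verify the release is deployed
helm uninstall my-nginx               # clean up
```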
Added 20.
Install and setup NFS server on Ubuntu 20.04

```shell
sudo su
apt update
apt upgrade -y
apt install -y nfs-kernel-server
```

Create the export directory (NFS directory).
```shell
sudo mkdir -p /mnt/nfs_dir
chown {{ your_user }}:{{ your_user_group }} /mnt/nfs_dir
chmod 777 /mnt/nfs_dir
```

Configuring the NFS server.
```shell
vim /etc/exports
# Add the following line
/mnt/nfs_dir 192.168.100.0/24(rw,sync,no_subtree_check)
```

Open the port for NFS (TCP 2049).
```shell
ufw allow from 192.168.100.0/24 to any port nfs
```

Apply the configuration and start NFS.
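The apply/start step can be sketched as follows, along with a mount from a client. The server IP 192.168.100.10 is an assumption for illustration; the systemd unit name is the one Ubuntu 20.04 ships:

```shell
# On the server: re-export and restart NFS
sudo exportfs -ra                         # re-read /etc/exports
sudo systemctl restart nfs-kernel-server
sudo exportfs -v                          # verify the export list

# On a client in 192.168.100.0/24 (server IP 192.168.100.10 is assumed)
sudo apt install -y nfs-common
sudo mount -t nfs 192.168.100.10:/mnt/nfs_dir /mnt
```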
Set up in GitLab
Create a user token at your GitLab (https://{{ your_GitLab_server_domain }}/profile/personal_access_tokens). The token expires in around 14 days by default.
Enable the Container Registry as well.
Here is my note.
Create a Secret Create a Secret to access GitLab Container Registry.
```shell
kubectl create secret docker-registry my-reg \
  --docker-server={{ your_GitLab_server_domain }}:5050 \
  --docker-username={{ your-GitLab-name }} \
  --docker-password={{ token_you_issued_at_your_GitLab_or_your_password }}
```

Docker login
On the worker nodes, add the GitLab server's SSL certificate to the trust chain. Copy the CA cert under /etc/ssl/certs/.
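To actually pull images with that Secret, reference it from the pod spec via `imagePullSecrets`. A sketch — the image path and pod name are assumptions; `my-reg` is the Secret name created above:

```yaml
# Pod sketch pulling from the GitLab registry using the Secret above
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # assumed pod name
spec:
  containers:
    - name: my-app
      image: your-gitlab-server:5050/your-group/my-app:latest  # assumed image path
  imagePullSecrets:
    - name: my-reg        # the Secret created with `kubectl create secret` above
```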
What I did
Installed a k8s worker node on an Ubuntu 20.04 VM server. Most parts are similar to the master node installation instructions.
Environments
Ubuntu 20.04

Requirement
2 CPUs required.

```shell
sudo swapoff -a
```

Install Docker

```shell
sudo apt update
sudo apt install -y docker.io
sudo systemctl enable docker
```

Set up worker node
Network configuration before installing
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
Change kernel parameters and open ports for the worker node.
```shell
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
sudo modprobe br_netfilter
```
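After the node-level setup, the worker joins the cluster with the token that `kubeadm init` prints on the master. The IP, token, and hash below are placeholders, not real values:

```shell
# Run on the worker. <token>, <hash>, and the master IP are placeholders
# printed by `kubeadm init` on the master node.
sudo kubeadm join 192.168.100.10:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# If you lost the token, regenerate the full join command on the master:
# kubeadm token create --print-join-command
```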
What I did
Installed a k8s master node on an Ubuntu 20.04 VM server.
Environments
Ubuntu 20.04

Requirement
2 CPUs required.

```shell
sudo swapoff -a
```

Install Docker

```shell
sudo apt update
sudo apt install -y docker.io
sudo systemctl enable docker
```

Set up master node
Network configuration before installing
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
Change kernel parameters and open ports for master node.
```shell
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
sudo modprobe br_netfilter

sudo ufw allow 8080/tcp        # Kubernetes API server
sudo ufw allow 64430:64439/tcp # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd server client API
sudo ufw allow 10250/tcp       # Kubelet API
sudo ufw allow 10251/tcp       # kube-scheduler
sudo ufw allow 10252/tcp       # kube-controller-manager
sudo ufw allow 6443/tcp        # Kubernetes API server
```

Install k8s
I use kubernetes-xenial on focal, but as of 2020/06/01 and 2020/09/17 I haven't found any issues.
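Once the packages are installed, initializing the control plane looks roughly like this. The pod CIDR is an assumption — it has to match whatever CNI plugin you deploy afterwards:

```shell
# Initialize the control plane.
# ASSUMPTION: 10.244.0.0/16 is the CIDR Flannel expects; use your CNI's CIDR.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Set up kubectl for your user, as kubeadm's own output instructs
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

`kubeadm init` also prints the `kubeadm join` command that the worker nodes need.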
Send a POST request

```shell
curl -X POST https://yourdomain.com -d @data.txt
```

The @ is necessary.
Enable redirect following

```shell
curl -L yourdomain.com
```

Ignore TLS certificate errors in cURL
Frequently used in test environments.
```shell
curl --insecure https://yourdomain.com
# or
curl -k https://yourdomain.com
```

Download content that involves a redirect

```shell
curl -L yourdomain.com --output myfilename
```

IPv6
Force curl to resolve DNS to IPv6.
```shell
curl -6 mydomain.com
# From man:
# -6, --ipv6  Resolve names to IPv6 addresses
```

Show headers only

```shell
curl -sS -I https://yourdomain.com
```
Prep - Install Nvidia driver and cuda-toolkit
I installed on Ubuntu 20.04.
Install https://developer.nvidia.com/ffmpeg
```shell
git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
cd nv-codec-headers && sudo make install && cd -
git clone https://git.ffmpeg.org/ffmpeg.git
cd ffmpeg
#export PKG_CONFIG_PATH="/usr/local/lib/pkgconfig"
sudo apt install -y pkg-config
./configure --enable-cuda-sdk --enable-cuvid --enable-nvenc --enable-nonfree --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64
make -j 10
```

Error 1

```
/usr/local/cuda-10.2/bin/../targets/x86_64-linux/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported!
  138 | #error -- unsupported GNU version!
```
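One way around this error — CUDA 10.2's nvcc only supports gcc up to version 8 — is to install gcc-8 and point the build at it. A sketch; the `--nvccflags` GPU architecture (sm_75) is an assumption for my card, and exact configure flag support depends on your ffmpeg version:

```shell
# CUDA 10.2's nvcc rejects gcc > 8; build ffmpeg with gcc-8 instead.
sudo apt install -y gcc-8 g++-8

# ASSUMPTION: compute_75/sm_75 targets a Turing GPU -- set yours.
./configure --enable-cuvid --enable-nvenc --enable-nonfree --enable-libnpp \
    --extra-cflags=-I/usr/local/cuda/include \
    --extra-ldflags=-L/usr/local/cuda/lib64 \
    --cc=gcc-8 --cxx=g++-8 \
    --nvccflags="-gencode arch=compute_75,code=sm_75 -ccbin gcc-8"
make -j "$(nproc)"
```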
Basic information about a dataframe

```python
df.info()        # basic information about the dataframe
len(df.index)    # return the number of rows
df.count()       # return the number of non-NaN values in each column
df.head()
df.tail()
```

Count the data in a column
In this example, the column is “Product”.
```python
df["Product"].value_counts()
```

Unique values as a series.

```python
df["Product"].unique()  # the type is numpy.ndarray
```

Check distribution in a graph

```python
# Check the data distribution
# The column is Score
ax = df["Score"].value_counts().plot(kind='bar')
fig = ax.
```