for loop over all files in a directory
The following snippet shows all files in Loop_dir.
Loop_dir="path_to_dir"
for entry in "$Loop_dir"/*
do
echo "$entry"
done
How to check whether a variable is empty:
https://www.cyberciti.biz/faq/unix-linux-bash-script-check-if-variable-is-empty/
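A minimal sketch of that test (the variable name my_var is made up):
my_var=""
if [ -z "$my_var" ]; then
echo "my_var is empty"
else
echo "my_var is not empty"
fi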
ps -aef --forest
Find all lines containing 'foo' in files under the current directory
find ./ -type f -print | xargs grep 'foo'
I created a custom alias for this:
alias afind="find -type f -print | xargs grep"
Here is a reference:
https://gist.github.com/larshaendler/3c477182717d32a4fc64070c283d24ad
The following snippet removes the prefix "foo" from file names in the directory where this bash file lives.
for file in foo*
do
mv "$file" "${file#foo}"
done
As a one-liner:
for file in foo*; do mv "$file" "${file#foo}"; done;
du -h ./to/the/dir
du: disk usage.
-h: human readable.
Get the size of files in the current directory:
du -h --max-depth=1 ./
IFS=","
for v in "$@"
do
echo $v
done
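An alternative sketch that splits a single comma-separated string into an array instead (the variable csv is made up; read -a is bash-specific):
csv="a,b,c"
IFS=',' read -ra fields <<< "$csv"
for v in "${fields[@]}"; do echo "$v"; done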
Reverse DNS lookup:
dig -x {{ IP }}
Show the additional section as well:
dig +additional {{ your_domain }}
Read a file line by line:
while read -r line; do
echo "$line"
done < file.txt
Loop until a time condition is met (here: run for 3 seconds):
https://stackoverflow.com/questions/11176284/time-condition-loop-in-shell
#!/bin/bash
end=$((SECONDS+3))
while [ $SECONDS -lt $end ]; do
# Do what you want.
:
done
Find the most recently modified file under the current directory:
find . -type f -printf '%T@ %p\n' | sort -n | tail -1 | cut -f2- -d" "
eval
is useful when constructing one-liners: it concatenates its arguments into a single command and executes the result.
https://linuxhint.com/bash_eval_command/
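A minimal sketch of building a command in variables and running it with eval (the variables are made up):
cmd="ls"
opts="-la /tmp"
eval "$cmd $opts"   # runs: ls -la /tmp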
sudo su --preserve-env
# or
sudo su --preserve-environment
# or
sudo -E su -p
Get your public IP address:
curl ifconfig.me
curl ident.me
Merge lines of two files side by side:
paste file1 file2
dig with bash
#!/bin/bash
for i in $(seq -f "%04g" 1001 1050)
do
echo -n "server$i "
dig "server$i.myservice.com" +short
done
sed
sed -i 's/before/after/g' file.txt
sed: multiple commands in a single line
Separate commands:
#sed -i 1,2d test.txt
#sed -i '/NaN/d' test.txt
#sed -i 's/: /,/g' test.txt
Combined into one:
sed -i '1,2d;/nan/d;s/: /,/g' test.txt
date command
Basic: date +"{{ format }}"
Examples
$ date +"%Y-%m-%d"
2021-03-11
figlet
A cute tool.
$ figlet -c "This is my page"
_____ _ _ _
|_ _| |__ (_)___ (_)___ _ __ ___ _ _ _ __ __ _ __ _ ___
| | | '_ \| / __| | / __| | '_ ` _ \| | | | | '_ \ / _` |/ _` |/ _ \
| | | | | | \__ \ | \__ \ | | | | | | |_| | | |_) | (_| | (_| | __/
|_| |_| |_|_|___/ |_|___/ |_| |_| |_|\__, | | .__/ \__,_|\__, |\___|
|___/ |_| |___/
myfile.csv
$ cat server-if_bond0-up-d-MAX.txt
1615374900,first-line
1615374901,second-line
csv_split.sh
file_name="myfile.csv"
IFS=","
while read -r time comment; do
echo "$time"
echo "$comment"
done < "$file_name"
NCurses Disk Usage: ncdu
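A minimal usage sketch, assuming the apt package is called ncdu:
sudo apt install -y ncdu
ncdu ./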
oathtool: check TOTP codes from the shell
sudo apt install oathtool
oathtool --totp=sha256 "myhexcode"
# oathtool --totp=sha256 "c7b794........................................................9f"
# for AWS
oathtool -b --totp="SHA1" ABCD.....
tree
Exclude a directory:
tree -I dont-search-me
tree -I "complicated1|complicated2|wildcard-also-works*"
Ping a whole subnet:
apt install -y fping
fping -g -r 1 192.168.122.0/24
View an image on Ubuntu 21.04 with Eye of GNOME:
eog image.jpg
jq
The simple usage:
curl blabla.com | jq "."
A little bit deeper:
jq "." # . is the root of the JSON document
jq ".[2]" # the element at index 2
jq ".foo" # get the value of the key "foo"
jq '.results[] | {id}' # from the results list, extract only the key id
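A small worked example with made-up JSON (-c prints compact one-line output):
echo '{"results":[{"id":1,"name":"a"},{"id":2,"name":"b"}]}' | jq -c '.results[] | {id}'
# {"id":1}
# {"id":2}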
grep
Only get the HTML image tags:
img_tags=$(grep -oP "\<img[^\>]*\>" myfile.html)
Paths of the images in the HTML:
png_list=$(grep -oP "(?<=\<img\ src\=\')[\d\w\/\-\_]+\.png" myfile.html)
Cf. the non-greedy operator for the shortest match:
https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags
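A quick sketch of greedy vs. non-greedy matching with grep -P (the input string is made up):
echo "<a><b>" | grep -oP "<.*>"    # greedy: matches <a><b>
echo "<a><b>" | grep -oP "<.*?>"   # non-greedy: matches <a> and <b> separately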
Strip the directory part of a path:
basename "$path_contains_slashes"
https://stackoverflow.com/a/2973495
For foobarbarfoo (examples below):
bar(?=bar) finds the 1st bar ("bar" which has "bar" after it)
bar(?!bar) finds the 2nd bar ("bar" which does not have "bar" after it)
(?<=foo)bar finds the 1st bar ("bar" which has "foo" before it)
(?<!foo)bar finds the 2nd bar ("bar" which does not have "foo" before it)
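A minimal sketch exercising these lookarounds with grep -P:
echo "foobarbarfoo" | grep -oP 'bar(?=bar)'    # matches only the 1st bar
echo "foobarbarfoo" | grep -oP '(?<!foo)bar'   # matches only the 2nd bar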
https://stackoverflow.com/a/8303552/9923806
[\S\s] # matches any character, including newlines (unlike .)
#!/bin/bash
file=$1
shared_dir=/usr/local/shared
if [[ -f "$shared_dir/$file" ]]; then
echo "there is the file already at $shared_dir/$file."
else
echo "Downloading the file in to $shared_dir"
curl "https://localhost/myfile" -o $shared_dir/$file
fi
-f checks only that a regular file exists; -e, on the other hand, also matches directories.
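A quick check of the difference (assuming /tmp exists and is a directory):
[[ -e /tmp ]] && echo "-e sees /tmp"                      # prints, because /tmp exists
[[ -f /tmp ]] || echo "-f does not: /tmp is a directory"  # prints, because /tmp is not a regular file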
Compute a rotating group number from the ISO week number (week modulo 4):
group=$(expr $(date +%V) % 4)
Dump a server's TLS certificate:
openssl s_client -connect www.google.com:443 2>/dev/null </dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p'
All modules are stored in /lib/modules/$(uname -r).
find /lib/modules/$(uname -r) -type f -name '*.ko'
The GNOME file manager is called nautilus:
nautilus ./
To resolve paths relative to the script itself, independent of where it is executed from:
SCRIPT_DIR=$(dirname "$0")
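For example (the output directory name is made up):
mkdir -p "$SCRIPT_DIR/output"   # created next to the script, not in the caller's cwd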
export NAMESPACE=${NAMESPACE:-default}
NAMESPACE falls back to "default" when it is not already set.
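Usage sketch (myscript.sh is a made-up script containing the line above):
NAMESPACE=prod ./myscript.sh   # NAMESPACE stays "prod"
./myscript.sh                  # NAMESPACE becomes "default"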
Delete all objects in an S3 bucket:
aws s3 rm s3://unused-bucket --recursive
Back up a bucket locally first:
mkdir backup-me
cd backup-me
aws s3 sync s3://please-delete-2 ./
for file in ./*; do ls "$file" | tr '\n' ' '; echo "$file" | grep foo; done
#!/bin/zsh
# i=0
for first_dir in ./*/ ; do
echo "$first_dir"
for file in "$first_dir"* ; do
# operate on $file
:
done
# ((i++))
done
Delete all .bak files one directory level down:
rm ./*/*.bak
#for file in ./*/ ; do
# echo "$file"
#done
#!/bin/sh
my_query='{\"query\":\"query{
jobs{
job {
id
}
status
subject_id
engine {
name
}
}
}
\"}'
echo "$my_query"
curl -d "$my_query" http://beluga-graphql.default.beta.deepc-gate.de/
Discard all output by redirecting it to /dev/null:
./script > /dev/null 2>&1
or
./script &>/dev/null
Watch a directory for newly created files (inotify-tools):
#!/bin/sh
inotifywait -m ./ -e create |
while read -r directory action file; do
echo "$file"
echo "The file was just created."
done