This page is a collection of Linux-focused one-liners using open-source tools. Long flag names are used for better readability. The tested version of each tool is also documented and can be seen by hovering your mouse over the links.
💡 Depending on the configuration of your machine and your access level, some of these container tools may not be fully functional!
podman build --tag your_container_name:latest -f Dockerfile .
[The last argument in this command (.) is the context of image building.]
ch-image build --tag your_container_name:latest -f Dockerfile .
apptainer build your_container.sif your_container.def
cosign generate-key-pair
… using cosign.key private key.
cosign sign --key cosign.key your_container_name:latest
… using cosign.pub public key.
cosign verify --key cosign.pub your_container_name:latest
manifest-tool inspect your_container_name:latest
regctl image manifest your_container_name:latest
regctl image inspect your_container_name:latest
trivy image container_image:tag
… and exit with a non-zero code only when vulnerabilities with high/critical severity are found.
trivy image --exit-code 1 --severity "HIGH,CRITICAL" container_image:tag
ps
… and print the command arguments, controlling terminal, and process hierarchy.
ps fu
kill (from shell)
Most shells have their own built-in kill command for terminating processes. Depending on your shell, the command syntax may be different. Please use kill --help to read the details on how to use it.
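💡 For example, in a bash-like shell you can check whether kill resolves to the shell built-in or to an external binary:
type kill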
… with process ID of 123 by sending SIGTERM (gracefully).
kill 123
… with process ID of 123 by sending SIGKILL (forcefully).
kill --signal SIGKILL 123
… from all users and sessions by sending SIGTERM.
pkill bash
… controlled by pts/1 terminal, by sending SIGTERM and printing the process ID of matched processes.
pkill --echo --terminal pts/1 bash
💡 You can find the controlling terminal of each process using the ps fu command (in the TTY column).
… controlled by pts/1 terminal, by sending SIGKILL and printing the process ID of matched processes.
pkill --echo --terminal pts/1 --signal SIGKILL bash
… all files and directories with name matching *.md (where their names end with .md) in /your/path/, recursively.
find /your/path/ -name '*.md'
… all files and directories with name case-insensitively matching *.md (where their names end with .md or .Md or .mD or .MD) in /your/path/, recursively.
find /your/path/ -iname '*.md'
… only files with name matching *.md (where their names end with .md) in /your/path/, recursively.
find /your/path/ -type f -name '*.md'
… only directories with name matching *temp* (where their names include temp) in /your/path/, recursively.
find /your/path/ -type d -name '*temp*'
file /path/to/your_file
md5sum your_file
💡 The command is the same for other checksum tools, e.g. b2sum, sha1sum, sha256sum, sha224sum, sha384sum, sha512sum, etc.
… and fail if the calculated hash of the file differs from the value passed on the command line.
echo "9158bc19c2ab91636f4e44ab9f5e80f5 your_file" | md5sum -c
💡 The command is the same for other checksum tools, e.g. b2sum, sha1sum, sha256sum, sha224sum, sha384sum, sha512sum, etc.
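💡 As a sketch, assuming GNU coreutils, you can also store checksums in a file and verify them later (the file name checksums.md5 below is only an example):
md5sum your_file > checksums.md5
md5sum --check checksums.md5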
… following redirections and save it to ~/Downloads/302.jpg.
curl --location --output ~/Downloads/302.jpg https://http.cat/images/302.jpg
… following redirections and save it to current directory.
wget https://http.cat/images/302.jpg
… copy local_file to ~/remote_directory on remote_host. Use remote_username to login to remote_host.
scp local_file remote_username@remote_host:~/remote_directory
… copy ~/remote_directory/remote_file from remote_host to local_file. Use remote_username to login to remote_host.
scp remote_username@remote_host:~/remote_directory/remote_file local_file
… copy ~/remote_directory1/remote_file1 from remote_host1 to ~/remote_directory2/remote_file2 on remote_host2. Use remote_username1 to login to remote_host1 and remote_username2 to login to remote_host2.
scp -3 remote_username1@remote_host1:~/remote_directory1/remote_file1 remote_username2@remote_host2:~/remote_directory2/remote_file2
… synchronize local_directory with ~/remote_directory on remote_host. Do not remove extra files in ~/remote_directory. Use remote_username to login to remote_host.
rsync -av local_directory remote_username@remote_host:~/remote_directory
… synchronize local_directory with ~/remote_directory on remote_host. Remove extra files in ~/remote_directory. Use remote_username to login to remote_host.
⚠️ Using --delete with the wrong path can result in the removal of all your files from that directory!
rsync -av --delete local_directory remote_username@remote_host:~/remote_directory
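💡 To be safe, you can first preview what rsync would transfer or delete without changing anything by adding the --dry-run flag:
rsync -av --delete --dry-run local_directory remote_username@remote_host:~/remote_directory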
… only first 4 lines
head --lines=4 your_file
… full content
cat your_file
… full content with syntax highlighting
bat your_file
… only last 4 lines
tail --lines=4 your_file
… and print line-numbers and colorize all occurrences of this_string in your_file
grep --line-number --color=always this_string your_file
… read input_file, replace the first occurrence of this_string in each line with other_string and write the modified content to output_file
sed 's/this_string/other_string/' input_file > output_file
… read input_file, replace the first occurrence of this_string in each line with other_string in-place
sed -i 's/this_string/other_string/' input_file
… read input_file, replace all occurrences of this_string in each line with other_string in-place
sed -i 's/this_string/other_string/g' input_file
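💡 With GNU sed you can also give -i a backup suffix so the original content is preserved; the suffix .bak below is only an example and produces input_file.bak:
sed -i.bak 's/this_string/other_string/g' input_file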
… and print the result to standard output.
sort input_file
… and write the result to output_file.
sort --output=output_file input_file
wc --lines your_file
… and exit with a non-zero code and log differences if they are different.
diff file1 file2
… and print the content side-by-side and highlight differences.
diff --color=always --side-by-side file1 file2
… and print the content side-by-side and highlight differences.
vim -d file1 file2
vimdiff file1 file2
… and print the content side-by-side and highlight differences.
nvim -d file1 file2
numdiff --absolute-tolerance=1e-6 --relative-tolerance=1e-4 file1.txt file2.txt
… and save output to files.out
cat file1 file2 file3 > files.out
markdownlint file.md
… with syntax highlighting
xq your_file.html
stylelint --color --formatter verbose file.css
jq '.' file.json
trivy config Dockerfile
… and exit with a non-zero code only if issues with high/critical severity are found.
trivy config --exit-code 1 --severity "HIGH,CRITICAL" Dockerfile
… of all files in your_project/ directory and save them to your_project.tar without any compression.
tar --create --file your_project.tar your_project/
… and compress it with gzip.
tar --create --auto-compress --file your_project.tar.gz your_project/
tar --extract --file your_project.tar
tar --extract --file your_project.tar.gz
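💡 To inspect an archive without extracting it, you can list its contents first (--list is the long form of -t in GNU tar):
tar --list --file your_project.tar.gz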
… to a file
ln -s your_file your_symlink
… to a directory
ln -s your_directory your_symlink
unlink your_symlink
… and fix it in-place.
clang-format -i file.cpp
… but do not fix file.cpp; only verify its format and exit with a non-zero code if the file is not formatted properly.
clang-format --dry-run -Werror file.cpp
include-what-you-use file.cpp
… and fix it in-place.
black your_script.py
… but do not fix. Verify the file and exit with a non-zero code if the file is not formatted properly (--check). Also log formatting issues (--diff).
black --check --diff your_script.py
… and fix it in-place.
isort your_script.py
… but do not fix. Verify the file and exit with a non-zero code if the file is not formatted properly (--check). Also log formatting issues (--diff).
isort --check --diff your_script.py
vulture your_script.py
bandit your_script.py
… recursively, and compare the results to bandit_baseline.yml
bandit --recursive path/to/scripts -b bandit_baseline.yml
pip-audit -r ./requirements.txt
… by your script (ignoring imports).
vermin your_script.py
pylint your_script.py
… and fix it in-place.
fprettify file.f90
… but only fix the indentation.
findent file.f90 > file_fixed.f90
… but only fix the indentation in-place, recursively.
findent_batch --dir=/path/to/project_dir
time (from shell)
Most Linux shells have their own internal timing routine. You can confirm this by running which time or type time. The following example runs the ls -lhA --color command and measures its execution time.
time ls -lhA --color
time (GNU time)
/usr/bin/time ls -lhA --color
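💡 Assuming /usr/bin/time is the GNU implementation, you can also request a detailed resource-usage report (memory, page faults, context switches) with --verbose:
/usr/bin/time --verbose ls -lhA --color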
… of the ls -lhA --color command after a warmup of 10 runs.
hyperfine --warmup=10 'ls -lhA --color'
… of the ls -lhA --color command and compare it to ls after a warmup of 10 runs.
hyperfine --warmup=10 'ls -lhA --color' 'ls'
ldd /full/path/to/executable
… in current directory.
git init
… from https://gitlab.mpcdf.mpg.de/elpa/elpa.git, copying all commits from all branches to /local/path
… from https://gitlab.mpcdf.mpg.de/elpa/elpa.git, copying only the last commit of the default branch to /local/path
… from https://gitlab.mpcdf.mpg.de/elpa/elpa.git, copying all commits from the default branch to /local/path
… from https://gitlab.mpcdf.mpg.de/elpa/elpa.git, copying all commits from the test branch to /local/path
… for current git repository
git config user.name "Your Name"
… for all your local git repositories
git config --global user.name "Your Name"
… for current git repository
git config user.email "123-usr@users.noreply.gitlab.com"
… for all your local git repositories
git config --global user.email "123-usr@users.noreply.gitlab.com"
git status
… in the current branch
git log
git reflog
git push
… and overwrite the branch in remote repository
git push --force
⚠️ Using --force with unprotected remote branches will result in the deletion of all commits in the remote branch that are not present locally.
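💡 A somewhat safer alternative is --force-with-lease, which refuses to overwrite the remote branch if it contains commits you have not fetched yet:
git push --force-with-lease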
git add your_file
git commit --message="Your commit message here!"
git branch new_branch
git checkout other_branch