I’ve been setting up a new Proxmox server and messing around with VMs, and wanted to know what kind of useful commands I’m missing out on. Bonus points for a little explainer.
journalctl | grep -C 10 'foo' was useful for me when I needed to troubleshoot some fstab mount fuckery on boot. It pipes journalctl (the systemd journal; journalctl -b shows just the current boot) into grep to find 'foo', and prints 10 lines of context before and after each instance of 'foo'.
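For instance, to dig into mount problems from the current boot (the search pattern here is just an illustration):

journalctl -b | grep -C 10 'mount'    # -b limits output to this boot; -C 10 adds 10 lines of context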
I don't know if this counts as a command, but you might want to check out Atuin. It helps you find, manage, and edit the commands in your shell history, which saves you a lot of time.
The watch command is very useful. For those who don't know, it repeatedly executes whatever command you place after it, every two seconds by default.
It allows you to actively monitor systems without having to manually re-run your command.
So for instance, if you wanted to see all storage block devices and monitor what a new storage device shows up as when you plug it in, you could do:
watch lsblk

and see the drive mount in real time. Technically not "real time" because the default refresh is 2 seconds, but you can specify shorter or longer intervals.
Obviously my example is kind of silly, but you can combine this with other commands or even whole bash scripts to do some cool stuff.
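A couple of handy watch flags worth knowing (both are standard options):

watch -n 0.5 lsblk    # refresh every half second instead of the 2-second default
watch -d free -h      # -d highlights what changed between refreshes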
I’m a big enjoyer of pushd and popd
so if you're in a working dir and need to go work in a different dir, you can pushd ./, cd to the new dir and do your thing, then popd to go back to the old dir without typing the path in again
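A quick sketch of that flow (the target directory is just a hypothetical example):

pushd .            # remember the current dir on the directory stack
cd /etc/nginx      # go do your thing somewhere else
popd               # jump straight back to where you started

Note that pushd /etc/nginx on its own both saves your current dir and cds to the target, so you can skip the separate cd.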
find /path/to/starting/dir -type f -regextype egrep -regex 'some[[:space:]]*regex[[:space:]]*(goes|here)' -exec mv {} /path/to/new/directory/ \;

I routinely have to find a bunch of files that match a particular pattern and then do something with those files, and as a result, find with -exec is one of my top commands. If you're someone who doesn't know wtf that above command does, here's a breakdown piece by piece:

- find - CLI tool to find files based on lots of different parameters
- /path/to/starting/dir - the directory at which find will start looking for files, recursively moving down the file tree
- -type f - specifies I only want find to find regular files
- -regextype egrep - in this example I'm using regex to pattern match filenames, and this tells find what flavor of regex to use
- -regex 'regex.here' - the regex to pattern match against the filenames (note that find matches it against the whole path, not just the basename)
- -exec - tells find to run the command that follows once for each file it finds (it's a find action, not shell output redirection)
- mv {} /path/to/new/directory/ - mv is just an example, you can use almost any command here. The important bit is {}, which is the placeholder for the parameter coming from find, in this case a full file path. So when expanded, this would read: mv /full/path/of/file/that/matches/the/regex.file /path/to/new/directory/
- \; - this terminates the command. The semicolon is the actual terminator, but it must be escaped so that the current shell doesn't see it and try to use it as a command separator.
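A hedged variant of the same pattern (the paths are hypothetical), using -name instead of a regex and the + terminator, which batches many matched files into a single command invocation:

find /var/myapp -type f -name '*.log' -mtime +7 -exec mv -t /var/myapp/archive/ {} +

(mv -t takes the target directory first, which is what lets find append all the matched files at the end.)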
Search for github repos of dotfiles and read through people’s shell profiles, aliases, and functions. You’ll learn a lot.
ps -ef | grep <process_name>
kill -9 <process_id>
I googled it and apparently -15 is better. I'd forgotten what -9 even did, and I've been using it for years.
The number is the signal you send to the program. There are a lot of signals you can send (not just 15 and 9).
The difference between them is that 15 (called SIGTERM) tells the program to terminate by itself (so it can store its cached data, save without losing or corrupting anything, drop all its open connections gracefully, etc.). 9 (called SIGKILL) will forcefully kill a program, without waiting for it to close properly.
You should normally send signal 15 to a program to tell it to stop. If the program is frozen and not responding or stopping, you then send signal 9 and forcefully kill it. Neither signal is "better" than the other; they just have different use cases.
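For reference (the PID is a placeholder):

kill -15 1234     # SIGTERM: ask the process to shut down cleanly (also kill's default signal)
kill -TERM 1234   # the same thing, by name
kill -9 1234      # SIGKILL: the kernel ends it immediately, no cleanup
kill -l           # list all the signals your system supports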
Not a command, but the tab key for autocompletion. This made things much easier for me.
Ctrl+R in bash lets you quickly search through and execute previous commands, usually just by typing the first few characters.
It's much more of a game changer than it first appears.
And Ctrl+S will search forward through history if you're spamming Ctrl+R too fast and shoot past whatever you're looking for (you may need to run stty -ixon first, since most terminals grab Ctrl+S for flow control)
I use $_ a lot; it lets you use the last parameter of the previous command in your current command:
mkdir something && cd $_
nano file
chmod +x $_

as a simple example.
If you want to create nested folders, you can do it in one go by adding -p to mkdir
mkdir -p bunch/of/nested/folders
Good explanation here:
https://koenwoortman.com/bash-mkdir-multiple-subdirectories/

I really hope I remember this one long enough to make it a habit
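A related trick: -p also combines nicely with shell brace expansion to create sibling directories in one go (the names are just an example):

mkdir -p project/{src,tests,docs}    # creates project/src, project/tests, and project/docs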
I have my .bashrc print useful commands with a short explanation. This way I see them regularly when I start a new session. Once I use a command enough that I have it as part of my toolkit I remove it from the print.
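A minimal sketch of the idea, assuming a plain heredoc near the end of ~/.bashrc (the command list itself is just an example):

cat <<'EOF'
Commands I'm still learning:
  pushd/popd - save a dir and jump back to it later
  ctrl+r     - reverse-search shell history
EOF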
ripgrep (the rg command), a much faster grep alternative that respects .gitignore by default
cd `pwd` for when you want to stay in a dir that gets deleted and recreated (your shell keeps pointing at the old, deleted directory; re-cd'ing into the same path drops you into the new one).
cat /proc/foo/exe > program
cat /proc/foo/fd/bar > file

to undelete still-running programs and files still opened by running programs (foo is the PID, bar is the file descriptor number)
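To figure out which fd number to read, you can list the process's open descriptors first; deleted-but-still-open files are flagged:

ls -l /proc/<pid>/fd | grep deleted    # deleted targets show up as "... -> /path/to/file (deleted)"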
systemd-run lets you run a command under resource limits, e.g.

systemd-run --scope -p MemoryLimit=1000M -p CPUQuota=20% ./heavyduty.sh

ulimit can also be used to define limits, but for a user rather than for a single process. This could protect you against, e.g., a fork bomb.
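For example, a quick ulimit sketch (the limits apply to the current shell and everything it spawns; the numbers are arbitrary):

ulimit -u 2000       # cap how many processes this user can have running
ulimit -v 2097152    # cap virtual memory per process, in KiB (so ~2 GiB here)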
nc is useful. For example: if you have a disk image downloaded on computer A but want to write it to an SD card on computer B, you can run something like

user@B: nc -l 1234 | pv > /dev/$sdcard

and

user@A: nc B.local 1234 < /path/to/image.img

(I may have the syntax messed up; also, don't transfer sensitive information this way, since nc sends it unencrypted!)
Similarly, no need to store a compressed file if you're going to uncompress it as soon as you download it; just pipe wget or curl into tar or xz or whatever.
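A hedged sketch of that pipe (the URL is hypothetical):

curl -L https://example.com/release.tar.gz | tar -xzf -    # the .tar.gz never touches the disk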
I once burnt a CD of a Linux ISO by wgeting directly into cdrecord. It was actually kinda useful because it was on a laptop that was running out of HD space. Luckily the university Internet was fast and the CD was successfully burnt :)

parallel, easy multithreading right in the command line
inotifywait, for seeing what files are being accessed/modified
tail -F, for a live feed of a log file
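Quick sketches of those three (paths and filenames are just examples):

parallel gzip ::: *.log                       # one gzip job per CPU thread by default
parallel -j4 gzip ::: *.log                   # or cap it at 4 concurrent jobs
inotifywait -m -r -e modify,create,delete .   # stream file events for the current tree
tail -F /var/log/syslog                       # -F (unlike plain -f) survives log rotation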
I can also recommend tmux as an alternative to screen
It should default to the number of hardware threads available on the system
No, not at all. That is a terrible default. I do a lot of number-crunching work and sometimes I have to test stuff on my own machine. Generally I use a safe number such as 10, or if I need to do something very heavy I'll go to one less than the actual number of cores on the machine. I've been burned too many times by starting a calculation and having my machine stall because that code is eating all the CPU, and all you can do is switch it off.
Something that really improved my life was learning to properly use find, grep, xargs, and sed. Besides that, there are these two little 'hacks' that are really handy at times:

1. Find out which process is using some local port (ss is the modern netstat replacement):

$ ss -ltnp 'sport = :<port-number>'

2. Find out which process is consuming your bandwidth:

$ sudo nethogs

I always just do ss -ltnp | grep <port-number>, which filters well enough for my purposes and is a bit easier to remember…
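And since find, grep, xargs, and sed got a shout-out above, here's a hedged sketch of how they chain together (paths and pattern are hypothetical):

find /etc/myapp -name '*.conf' -print0 | xargs -0 grep -lZ 'foo' | xargs -0 sed -i 's/foo/bar/g'

(-print0, -0, and -Z keep the filenames NUL-separated, so paths with spaces survive the pipeline.)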