Linux Command Line Survival Guide ( for beginners )
This is the Linux command line survival guide ( for beginners ).
- In this guide I’m going to cover the most important and most common commands to get you up and running fast.
- If you learn the commands in this guide, you should have enough info to get by and be competent.
- This guide is meant to be concise and practical.
- I’m including as much useful info as possible without making things too complicated or advanced.
- This guide is meant for beginners. It is meant to make beginners competent.
- I had two types of people in mind when I made this guide.
- A home or desktop user who is new to Linux and just needs to get a handle on the command line.
- A developer or other IT role who needs to log in to Linux servers and be somewhat competent. ( and not embarrass yourself )
- I’m going to show you how to:
- manage files and disk space
- edit and process text
- search for files and text
- manage users and permissions
- basic network stuff including checking open ports
- install / manage packages
- start/stop and manage services
- zip and unzip files
- a whole lot more
- Check the link in the description for copy and paste examples of anything you see in this video.
- There are a lot of other really useful commands and information that I’m not including in this video because I’m trying to keep this simple for beginners.
- If you want more advanced stuff, check the playlist.
- For each command I’m showing the most common usage and options.
- If you want to go deeper and learn more about any of these commands, check the playlist for videos on each command.
- I’m assuming you’re using the Bash shell, which is usually the default on most distros. Some distros have other defaults and some companies use other shells as their standard ( ex: ksh ). Most of what I show should work in most common shells.
First 3 Commands
NOTE - Anything that comes after a pound sign ‘#’ is a comment and is not actually part of the command. It has no effect when run on the command line. Many of the command examples I use are documented this way.
ls command:
ls # list files in current dir
ls -l # long listing format
ls -lh # long listing format and human readable size
ls -la # long listing and show all ( includes hidden )
ls -ltrh # also sort by time (t) and reverse (r)
ls -R # recursive, list sub dirs recursively
ls -S # sort by size, largest first
ls dir1 # list files in a dir ( relative path )
ls /var/log # list files in a dir ( absolute path )
pwd command:
pwd # show current working directory
cd command:
cd dir1 # change directory ( relative path )
cd /var/log # change directory ( absolute path )
cd . # current dir
cd .. # parent dir
cd ../.. # up two dirs
cd ../tmp # relative path example
cd # change to home dir
cd ~ # change to home dir
cd - # change to last dir
~ | home dir |
. | current dir |
.. | parent dir |
NOTE - Any file or dir starting with a ‘.’ is hidden and isn’t shown by default.
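A quick way to see this for yourself ( using throwaway filenames in a scratch dir ):

```shell
cd "$(mktemp -d)"               # make and enter a throwaway scratch dir
touch visible.txt .hidden.txt   # create a normal file and a hidden file
ls                              # shows only: visible.txt
ls -a                           # now also shows .hidden.txt ( plus . and .. )
```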
Navigation and Getting Help
You can lookup documentation for each command like this:
man ls # standard docs for command
info ls # alternate, more detailed docs for command
help cd # docs for built in commands
tldr ls # quick summary for command
tldr cd # also works for built in command
NOTE - The TLDR command may need to be installed. The man command should always be there.
sudo apt -y install tldr # install on Debian/Ubuntu/etc.
sudo dnf -y install tldr # install on new RHEL/Fedora/CentOS
sudo yum -y install tldr # install on old RHEL/Fedora/CentOS
sudo pacman -S tldr # install on Arch
tldr -u # update doc db before use
Command history:
history # show a history of commands that have been entered
history 0 # show entire history ( might need this on some non-bash shells )
A few shortcuts:
[up] | select previous command from history, keep pressing to go back |
[tab] | auto complete path |
[ctrl] + r | search commands you’ve typed |
[ctrl] + a or [Home] | move the cursor to the start of the line |
[ctrl] + e or [End] | move the cursor to the end of the line |
Clear the screen:
clear
More Basic Commands
View contents of a text file ( more on this command later ):
cat test1.txt
cp Command
Copy files:
cp test1.txt test2.txt # copy a file
cp test1.txt dir1 # copy file into a dir
cp test1 test2 dir1 # copy 2 files into a dir
cp -p test1.txt dir1 # preserve mode, ownership, timestamps
cp -r dir1 dir2 # copy a dir ( needs recursive )
# if dir2 exists, dir1 is placed inside it
mv Command
Move files:
mv file1 file2 # rename file
mv file1 dir1 # move file into dir ( if dest is a dir )
mv dir1 dir2 # move dir into another dir
# ( or rename if dest doesn't exist )
mv *.png images/ # use a wildcard
mv /var/log/test /data/backup/test2 # move and rename
mv /var/log/test /data/backup # move to dir, use absolute path
mv /var/log/test ../../backup # move to dir, use relative dir
Links and The ln Command
Hardlinks vs Softlinks:
A soft link ( also called a symbolic link ) is just a pointer to a file name; the file name, in turn, points to the data on disk. Removing a soft link does not delete the original file. Removing the original file does not delete the soft link either, but it leaves a broken link that points to nothing. A hard link, on the other hand, creates another name for the same data on disk. It is easier to think of a hard link as an alternate name or alternate directory entry rather than a link: it is equivalent to the original name of the file. Deleting the original name does not actually delete the file as long as other hard links still point to it; the data is only removed once every hard link to it has been deleted.
Creating links:
ln test1.txt link1.txt # create hard link
ln -s test1.txt link1.txt # create symbolic link
ln -s dir1 abc # link to a dir
ln -s ../../../etc/hosts link3 # use relative path
ln -s /etc/hosts link4 # use absolute path
ln -s doesnt_exist.txt new_link.txt # create broken link
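To see the difference concretely, compare inodes and link counts with ls -li ( throwaway filenames, run in a scratch dir ):

```shell
cd "$(mktemp -d)"            # scratch dir
echo "hello" > original.txt
ln original.txt hard.txt     # hard link: same inode, link count goes to 2
ln -s original.txt soft.txt  # soft link: its own inode, points at the name
ls -li                       # first column is the inode number
rm original.txt              # the data survives via hard.txt ...
cat hard.txt                 # ... but soft.txt is now a broken link
```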
touch Command
Change the access and modification times of a file to the current time. It also creates the file if it doesn’t exist.
touch test1.txt # create the file or update its timestamps
touch -r test1.txt test2.txt # set time on test2.txt to be the same as test1.txt
touch -d 'Sun, 29 Feb 2004 16:21:42 -0800' test1.txt # specify date/time
touch -d "2020-05-23 19:14:31.692254412" test1.txt # specify date/time
stat and file Commands
Get information about a file:
stat /etc/hosts
file /etc/hosts
stat /usr/bin/nslookup
file /usr/bin/nslookup
mkdir Command
Create dirs:
mkdir dir3 # create dir
mkdir dir1/dir2 # create subdir if dir1 exists
mkdir -p dir1/dir2 # create dir and sub dir
mkdir -p /var/www/html # using an absolute path
rm Command
Remove files:
rm file1.txt # remove file
rm -rf dir1 # recursive / force remove dir and contents
rm -rf dir1/* # recursive / force remove all in dir1
rm -rf * # recursive / force remove all
rm *.txt # remove all text files
rmdir Command
Remove empty directories:
rmdir dir1
rmdir dir1/dir2/dir3
rmdir dir1/a/b/sub1
find Command
Finding files is easy. Don’t worry too much about memorizing these right away (although it wouldn’t hurt). Just save these and refer back to them as the need arises.
find . # find all files in current dir
find . -name "test1.txt" # find all files with this name
find . -iname "test1.txt" # case insensitive
find . -name "*.txt" # find all files matching pattern
find /etc -name "*.txt" # same but specify /etc dir
find . -type d # only directories
find . -type f # only files
find . -empty # empty files
find . -perm 664 # search by permission
find . -mtime -5 # anything changed in the last 5 days
find . -name "*.txt" -delete # delete matching files
Execute command for each file found:
find . -name "*.txt" -exec rm -i {} \; # confirm and delete every file found
find . -name "*.sh" -exec grep 'test' {} \; # search for string in matching files
locate Command
The locate command is an alternative to the find command. It is faster than find because it doesn’t actually search the filesystem; instead it searches a DB of files that is updated on a regular schedule ( run updatedb as root to refresh it manually ). It isn’t installed by default on many distros ( ex. Ubuntu ).
locate test1.txt # search for this file
locate "*.txt" # search based on pattern
uptime Command
Show uptime and load average:
uptime
uname Command
Show system info ( arch, kernel, OS, etc. ):
uname -a # all info
uname -r # kernel release
Pipes, Redirects, and More
Redirection
Standard file handles / streams:
0 | STDIN | input |
1 | STDOUT | output |
2 | STDERR | error output |
Redirection operators:
> | overwrite |
>> | append |
2>&1 | redirect STDERR to STDOUT |
ls > test1.txt # redirect command output ( STDOUT ) to a file and **OVERWRITE** file
ls >> test1.txt # redirect command output ( STDOUT ) to a file and **APPEND** to file
ls asdf 2> test1.txt # redirect command output ( STDERR ) to a file and **OVERWRITE** file
ls asdf 2>> test1.txt # redirect command output ( STDERR ) to a file and **APPEND** to file
ls asdf &> test1.txt # redirect command output ( STDOUT and STDERR ) to a file ( since Bash 4 )
ls asdf >> output.log 2>&1 # append STDOUT to file, also redirect STDERR to STDOUT ( both go to file )
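One gotcha worth knowing: redirections are processed left to right, so 2>&1 must come after the file redirect if you want both streams in the file. A quick demonstration ( the both function and filenames are made up for illustration ):

```shell
# a throwaway function that writes to both streams
both() { echo "to stdout"; echo "to stderr" >&2; }

both > out.txt 2>&1     # correct: both streams land in out.txt
both 2>&1 > out2.txt    # only stdout lands in the file; stderr was pointed
                        # at the old stdout ( the terminal ) before the redirect
```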
Background
You can run a command in the background by appending an ampersand to it:
./server1 &
./process_data.sh &
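A quick sketch of managing a background job ( sleep 60 stands in for any long-running command ):

```shell
sleep 60 &      # start in the background; the shell keeps going
pid=$!          # $! holds the PID of the most recent background job
jobs            # list background jobs started from this shell
kill "$pid"     # stop it ( sends SIGTERM )
wait            # block until all background jobs finish
```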
Pipes
You can pipe output from one command to another like this:
cat data1.csv | sort
ps -ef | grep -i nginx
Commands You Normally Pipe To
These commands can operate directly on files but are very commonly used with piped output, so we’re grouping them together here.
sort Command
Sort lines with the sort command:
sort file1.txt # sort lines in file in alphabetical order
sort -u file1.txt # sort and display unique lines
ps -ef | sort # sort piped output
uniq Command
Use the uniq command to filter out duplicate lines. It only removes adjacent duplicates, so the input usually needs to be sorted first: pipe to sort, then to uniq.
cat data.csv | sort | uniq
./data_gen.sh | sort | uniq
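A very common extension of this pattern counts duplicates with uniq -c and ranks the results ( illustrative input ):

```shell
# count how often each line appears, most frequent first
printf 'b\na\nb\nb\na\n' | sort | uniq -c | sort -rn
# prints ( counts are right-aligned by uniq ):
#   3 b
#   2 a
```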
grep Command - searching for text
Searching / matching lines in a file:
grep abc test.txt # search file for lines matching this string
grep -i abc test.txt # case insensitive
grep -v abc test.txt # exclude any matching line
grep -E "abc|xyz" test.txt # use a regex
grep -r abc # search for this text in every file recursively from current dir
ps -ef | grep -i nginx # match lines from piped input
wc Command
Word count and line count:
wc -l test.txt # number of lines in a file
wc -w test.txt # number of words in a file
ps -ef | grep -i nginx | wc -l # number of nginx processes
sed Command
The stream editor - sed:
sed 's/abc/xyz/' test.txt # swap first occurrence of ‘abc’ with ‘xyz’ on each line and print output
sed 's/abc/xyz/g' test.txt # swap every occurrence ( g for global )
sed 's/abc/xyz/gI' test.txt # same but case insensitive
sed -i 's/abc/xyz/g' test.txt # changes the file in place
sed -E 's/a|b/x/g' test.txt # use extended regex
ps -ef | sed 's/root/abcd/g' # with piped input
awk Command
Columns of text can be split and selected with awk. It will split on spaces and tabs by default.
awk '{print $3, $5, $7}' test1.txt # select and print columns
awk '{print "Fields: "$3" -- "$5" : "$7}' test1.txt # control formatting
awk -F/ '{print $3, $5, $7}' test1.txt # change the field separator
awk -F: '{print $3, $5, $7}' test2.txt # change the field separator
awk -F, '{print $3, $5, $7}' test3.txt # change the field separator
ps -ef | awk '{print $3, $5, $7}' # with piped input
cut Command
The cut command is used to split apart lines in a file.
- simpler than awk
- less powerful than awk
- only supports a single literal char as a delimiter
cut -f 1,2,3 test.txt # fields 1,2,3 delimit by **TABS!!**
cut -d ' ' -f 1,2,3 test.txt # fields 1,2,3 delimit by spaces
cut -f 5- test.txt # field 5 to end
cut -c 1,2,3 test.txt # chars 1,2,3
cut -d ',' -f 1 test.csv # split on comma, print field 1 of CSV file
ps -ef | cut -d ' ' -f 1 # split on space, print field 1 of piped input/output
Viewing Text
There are numerous tools for viewing text.
echo Command
Use the echo command to display strings and variables.
echo test # print a string of text
echo "test" # print a string of text
echo $VAR1 # print a variable
echo $PATH # print a variable
echo "This is a test: $VAR1" # print a string with a variable
echo "This is a test: ${VAR1}" # print a string with a variable
echo "test" > output.txt # overwrite a file
echo "test" >> output.txt # append to afile
echo -n > big_log_file # truncate a file to zero length
cat Command
Use the cat command to view or concatenate files:
cat file1.txt # print contents of file
cat file1.txt file2.txt # print contents of multiple files
cat file1.txt > file2.txt # overwrite file2.txt with file1.txt
cat file1.txt >> file2.txt # append instead of overwrite
cat one two > combined # combine 2 files into 1
head Command
Use the head command to view the beginning of a file or piped output.
head test1.txt # first 10 lines
head -n 3 test1.txt # first 3 lines
ps -ef | head # first 10 lines of output
tail Command
Use the tail command to view the end of a file or piped output:
tail test1.txt # last 10 lines of file
tail -n 5 test1.txt # last 5 lines of file
tail -f nginx.log # follow - show updates to file in real time
tail -n 100 -f nginx.log # last 100 lines and then follow
ps -ef | tail # last 10 lines of output
more Command
Display text one page at a time. Press the space bar for the next page.
more /var/log/dpkg.log # page through a file
ps -ef | more # page through piped output
less Command
This tool is similar to ‘more’ but it has a lot more features. It doesn’t need to read in an entire input file at one time so it starts faster with very large files when compared to other tools like vi.
less /var/log/dpkg.log # page through a file
ps -ef | less # page through piped output
Less commands ( use these commands inside the less tool ):
q | quit |
space | scroll forward n lines ( window size ) |
enter | scroll forward 1 line |
arrow keys | up / down / left / right |
g | go to first line |
5g | go to line 5 |
G | go to last line |
Searching
/pattern | search for pattern (regex) |
?pattern | search backwards |
During a search:
n | next match |
N | previous match |
Diff
Compare two files line by line:
diff file1.txt file2.txt
Nano
Nano is a simple terminal based text editor.
- It is super easy to use.
- You can navigate with the arrow keys in an intuitive way and the commands are listed at the bottom of the screen.
- It might not be installed on every system.
- To exit just use [ctrl] + x.
nano test1.txt
VI / VIM
VIM is a newer, improved version of the old VI editor that is common on Linux and Unix systems.
- Not intuitive, hard / frustrating for beginners who don’t know what they are doing.
- Standard - You can count on this editor to be present on almost any Unix or Linux system.
- Really powerful once you know all the shortcuts.
- Very old systems have VI installed, newer systems have VIM. The vi command is usually a shortcut to vim but not always.
vi test1.txt
vim test1.txt
VIM Modes:
- normal mode - can navigate with arrows ( or h,j,k,l ) and run commands
- insert mode - can actually type text
VIM Commands:
:w | write / save |
:wq | save and exit |
:q | exit when no changes were made |
:q! | exit without saving |
i | insert mode, so you can actually type |
[esc] | exit insert mode |
a | append after the cursor ( also enters insert mode ) |
dd | delete current line |
yy | yank - copy current line |
p | put - paste current line |
0 | beginning of line |
$ | end of line |
[shift] - g | jump to last line |
:0 | jump to beginning of file |
:$ | jump to end of file |
:5 | jump to line 5 |
VIM Search:
/abc | search for ‘abc’ |
n | next search match |
N | prev search match |
VIM Search and Replace:
:s/abc/xyz/g | replace all strings on line |
:%s/abc/xyz/g | replace all strings in file |
:%s/abc/xyz/gi | case insensitive |
:%s/abc/xyz/gc | confirm |
Permissions and Users
Users
root vs normal user:
- The root user is basically the default admin user on Linux and Unix systems. This user can do almost anything.
- Normal users will generally have much fewer permissions.
- It is generally a good idea to do most work as a normal user and switch to root or grant privileges as needed.
- Root’s home dir is here by default: /root
- Normal users have home dirs under /home for example /home/user1
whoami # show current user
who # who is logged in now
last # show list of last logged in users
id user1 # get info about this user
Changing passwords:
passwd # change password of current user
passwd user2 # change password for user2 ( you need to be user2 or root to do this )
If you need to run commands as another user or as root there are two main options. You can become that user with the su command or run commands as that user with sudo.
su - Substitute User
When using su:
- Prompted for password of target user unless you are root
- The ‘-’ is optional but commonly used to make sure you log in with the target user’s environment
- Default target user is root
su - user2 # switch user to user2
su - greg # switch user to greg
su - # switch user to root
sudo
The sudo command allows you to run commands as another user. This is usually used to run commands with elevated privileges or as a service user.
- For this to work the current user will need to be setup in the sudoers file.
- Allows for fine grained control over which commands can be run.
- Allows for accounting of who has used these permissions.
sudo apt install nginx # use root permissions to install package
sudo cat /etc/shadow # need root permissions to view this file
sudo su - # use sudo to run su and login as root
Adding Users
Do this as root or using sudo.
There are two main tools that you will use:
useradd:
- low level utility for adding users
- native binary, available on basically all systems
- more portable
- better for scripting
- no home dir created by default
- no password set by default
- sh as the default shell
- use the adduser command instead when possible
adduser:
- higher level tool for adding users
- a Perl script that wraps useradd ( on Debian / Ubuntu )
- might not be available on all systems
- creates a home dir and sets Bash by default
- prompts for a password
Add a user using adduser command:
adduser user2 # add a user ( prompts for password )
Add a user with the useradd command:
useradd -m user2 # create user, "-m" for home dir creation
passwd user2 # set the password
Permissions
In this example output snippet, column 1 is the permissions, column 2 is the link count, column 3 is the owner, and column 4 is the group.
-rw------- 1 user1 user1 86016 Apr 5 13:45 wallet.dat
drwxrwxr-x 2 user1 user1 4096 Aug 20 2023 web
-rw-rw-r-- 1 user1 user1 466 Jun 12 2023 web1.js
drwxrwxr-x 3 user1 user1 4096 Aug 20 2023 websocket
-rw-rw-r-- 1 user1 user1 2201 Aug 23 2023 websocket_notes.txt
drwxrwxr-x 2 user1 user1 4096 Nov 21 2022 Z__HOME_DIR_STUFF
drwxrwxrwt 19 root root 4096 Jul 10 11:28 /tmp
-rwsr-xr-x 1 root root 59976 Nov 24 2022 /usr/bin/passwd
Char meaning in permissions column:
first char | d for directory, - for not directory |
next 3 chars | owner permissions |
next 3 chars | group permissions |
next 3 chars | other permissions ( everyone ) |
rwx rwx rwx
user group other
Permission values:
r | read | 4 |
w | write | 2 |
x | execute | 1 |
You can represent permissions with letters ( rwx ) or numbers. Letters are easier but many people use numbers and you should be familiar with them.
Octal Value | File Permissions Set | Permissions Description |
0 | — | No permissions |
1 | –x | Execute permission only |
2 | -w- | Write permission only |
3 | -wx | Write and execute permissions |
4 | r– | Read permission only |
5 | r-x | Read and execute permissions |
6 | rw- | Read and write permissions |
7 | rwx | Read, write, and execute permissions |
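You can check a file’s octal and symbolic permissions side by side with stat ( GNU stat’s -c format option; the filename is a throwaway ):

```shell
touch perm_demo.txt
chmod 754 perm_demo.txt          # rwx for user, r-x for group, r-- for other
stat -c '%a %A' perm_demo.txt    # prints: 754 -rwxr-xr--
rm perm_demo.txt                 # clean up
```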
Sticky and setuid bits:
s | setuid or setgid bit is on ( execute bit is also on ) |
S | setuid or setgid bit is on, but the execute bit is off |
t | sticky bit is on, execute bit for others is on |
T | sticky bit is on, execute bit for others is off |
NOTE - Extended permissions ( ACLs ) also exist, but we aren’t covering those here.
NOTE - To list files in a dir and view those files, you need both r and x permissions on that dir.
Sticky bit: Permission bit normally set on directories. For any files in the dir, only the file’s owner, the directory’s owner, or root can rename or delete the file. Normally set on the /tmp dir
setuid - file can be executed with permissions of the owner
setgid - file can be executed with permissions of the group
Directory permissions ( these might not be intuitive ):
x | can access files by exact name, but no listing | not normally used |
w | nothing without x, no list, no modify | not normally used |
r | list contents ( just names not attributes ) | not normally used |
rx | list contents and attributes | normal |
wx | modify contents but no listing | not normally used |
rwx | everything | normal |
Chmod - Change Permissions
Change permissions with the chmod command. You can specify permissions using these:
a | all |
u | user (owner) |
g | group |
o | other |
= | set |
- | remove |
+ | add |
chmod a+rwx test1.txt # add rwx permissions for all
chmod u+rwx test1.txt # add rwx permissions for user ( owner )
chmod og-w test1.txt # remove w permissions for group and other
chmod ug=r test1.txt # set read permission for user and group
chmod ug=rwx,o-rwx test1.txt # set rwx for user and group, remove all for other
chmod 700 test1.txt # set rwx for user, nothing for group and other
chmod 444 test1.txt # set r for all
chmod u=rx dir1 # set rx for dir
chmod -R u=rx dir1 # set rx for dir recursively
chmod +t test1 # set sticky bit
chmod u+s test1.sh # set setuid bit
chmod g+s test1.sh # set setgid bit
chmod 1755 test1 # set sticky bit using octal
chmod 4755 test1.sh # set setuid bit using octal
chmod 2755 test1.sh # set setgid bit using octal
Chown - Ownership
Change ownership with the chown command:
chown user1 test1 # change owner to user1
chown user1:user1 test1 # change owner and group
chown -R root:nginx /var/www/ # change owner and group recursively
Processes and Resource Usage
df Command
Show file system disk space usage:
df # show disk usage for all FS
df -h # human readable
df -h . # for FS of current dir
df -h /var # for FS of specified dir
du Command
Show disk space usage in a dir and search for large files:
du -sh * # show sizes for all files and dirs in current dir in human readable format
du -sh * | sort -h # sort numerically by human readable format
Memory - free Command
Show memory and swap usage:
free
free -h # human readable
ps Command
The ps command is used to show what processes are running on a system.
ps # procs with the same user and terminal as the current shell
ps -ef # all procs, full listing
ps aux # similar but BSD options and more useful cols
ps -ef --sort=-%cpu # sort by CPU, highest first
ps -ef --sort=-rss # sort by memory, highest first
ps -eo pid,user,%cpu,rss,args --sort=-%cpu # only these fields, sort by CPU
kill Command
The kill command is generally used to kill processes. This command is actually used to send different signals to processes but most of the time those signals are used to terminate the process.
kill 150746 # kill process with this pid, gentle, allows cleanup
kill -9 150746 # force kill
kill -hup 150746 # can cause certain specific services to reload configs
kill -term 150750 # kill process with this pid, gentle, allows cleanup
kill -kill 150753 # force kill
Three most common signals:
1 | SIGHUP | (“signal hang up”) means the controlling terminal closed; causes some daemons to restart and re-read their configs |
9 | SIGKILL | Kill, terminate immediately, can’t be caught or ignored, proc can’t cleanup, exception procs: zombie, blocked, init, uninterruptibly sleeping |
15 | SIGTERM | Request process termination, can be ignored or caught allowing for cleanup, etc. |
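The difference between SIGTERM and SIGKILL is easy to see with a process that traps SIGTERM ( an illustrative sketch; the trap just prints a message to stand in for real cleanup work ):

```shell
# start a process that cleans up when it receives SIGTERM
bash -c 'trap "echo cleaning up; exit 0" TERM; sleep 60 & wait $!' &
pid=$!
sleep 1            # give it a moment to install the trap
kill "$pid"        # SIGTERM: the trap fires and "cleaning up" is printed
# kill -9 "$pid"   # SIGKILL would end it instantly: no trap, no cleanup
```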
Top Command
- The top command has a huge number of options and sub commands. We’re covering the most useful and practical details here.
Launch the top tool to show running processes:
top
Top commands:
q | quit |
f | manage fields |
Top manage fields mode commands:
d | enable this field |
s | sort by this field |
q or [esc] | finish field selection |
Htop Command
The htop command is a process viewer similar to top. It can be used with a mouse.
Launch htop:
htop
htop commands:
arrows | navigate |
space | tag or untag a process |
F9, K | “Kill” process: sends a signal which is selected in a menu |
F10, q | quit |
F6, <, > | Selects a field for sorting |
I | Invert the sort order |
F3, / | Search by command lines, highlight while typing, F3 for next, Shift-F3 for previous |
F1, h, ? | Go to the help screen |
Disks
Check:
lsblk # show block devices on system
df -h # show what is mounted
Mount / unmount:
sudo mount /dev/sdd1 /mnt # mount a device
sudo umount /mnt # unmount ( give the mount point or the device, not both )
SD Cards / USB Drives / exFat FS
exFAT is a common filesystem for SD cards and USB drives. It is a very good choice for cross platform support ( but otherwise not the best choice of FS ).
You may or may not need to install an extra package to support exfat. If you have unknown file system errors this may help.
Packages:
sudo apt -y install exfat-fuse exfat-utils # Ubuntu, Debian
sudo dnf -y install exfat-utils fuse-exfat # RHEL, CentOS, Fedora
sudo pacman -S exfat-utils # Arch
sudo zypper install fuse-exfat exfat-utils # Suse
sudo emerge --ask sys-fs/exfatprogs # Gentoo
Manually mount if it isn’t automatically mounted:
sudo mount -t exfat /dev/sdc1 /mnt/my-disk/
Shutdown and Reboot
Multiple different commands can be used to shutdown a system. These are the basics.
reboot # just reboot
halt -p # shutdown and power off
poweroff # shutdown and power off
shutdown -h now # shutdown and power off
exit # just exit out of current terminal
Compressed Files
We’re going to show you the basics of compressing and uncompressing a bunch of different types of files. You don’t need all of these but they are really handy when you do need them.
- Don’t memorize all of these compression and archiving commands. Use this as a reference. You will remember the commands that you use often.
TAR files
Tar files are a common tool used to archive files and directories into a single archive file.
tar xvf some_package.tar # unpack a tar file
tar xvfz some_package.tar.gz # unpack a gzipped tar file or tar ball
tar xvfz some_package.tgz # unpack a gzipped tar file or tar ball
tar cvf some_archive.tar some_dir1 # create a tar file
tar zcvf some_archive.tar.gz some_dir1 # create a compressed tar file
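A full round trip, including the t option to list an archive without extracting it ( throwaway names, run in a scratch dir ):

```shell
cd "$(mktemp -d)"
mkdir some_dir1 && echo data > some_dir1/file1.txt
tar czvf some_archive.tar.gz some_dir1   # create a compressed tar file
tar tzvf some_archive.tar.gz             # list contents without extracting
rm -rf some_dir1
tar xzvf some_archive.tar.gz             # unpack it again
cat some_dir1/file1.txt                  # the data is back
```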
GZip Files
GZip is the standard, most common compression tool ( not the best ).
gunzip some_data.gz # unpack a gzip file
gzip some_server.log # gzip a log file
Zip files
Zip files are also very common:
zip some_archive.zip some_text.txt # zip a file
zip -r some_archive.zip some_data_dir # zip a dir recursively
unzip some_data.zip # unzip
unzip some_data.zip -d dest_dir1 # unzip in dest dir
BZip
Bzip files are also common:
bzip2 -z data.txt # compress
bzip2 -k data.txt # compress and don't delete original
bzip2 -d data.txt.bz2 # decompress
lzma and xz
lzma and xz are common and generally compress better than gzip and bzip2:
xz data.tar # compress with lzma
lzma data.tar # compress with lzma
unxz data.tar.xz # uncompress ( still tarred )
unlzma data.tar.lzma # uncompress ( still tarred )
7zip
7zip is also popular:
7z a data.7z data.txt # compress
7z e data.7z # extract
RAR
You might also find yourself working with rar files:
rar a test1.rar test1 # compress
unrar e test1.rar # extract to current dir
Compression / Archiving Packages
These packages might help if you are on Ubuntu and you are missing any of these tools:
sudo apt install xz-utils
sudo apt install bzip2
sudo apt install zip unzip
sudo apt install p7zip-full p7zip-rar
sudo apt-get install rar unrar
Servers and Services
Managing systemd services:
systemctl list-units --type=service --all # list all services
systemctl status nginx # check status
systemctl start nginx # start
systemctl stop nginx # stop
systemctl enable nginx # enable
systemctl disable nginx # disable
Environment and Variables
Using variables:
x="test this"
echo $x # print the variable ( note the $ )
Exporting variables in your profile:
~/.bashrc
export x="test this"
Show your environment variables:
env
Your PATH
When you run a program the system searches for programs in the directories that are in your path. You might want to add new dirs to your path if those dirs contain scripts or binaries that you want to run.
echo $PATH # show current path
export PATH=$PATH:/opt/some_sw/bin # add a dir to your path temporarily
Add a line like this to your bashrc if you want the changes to persist:
~/.bashrc
export PATH=$PATH:/opt/some_sw/bin
If a script or executable binary is on your path, the system will know where to look for it and you will be able to run it just by typing the name of the script or binary. If it is not on your path you will need to specify the directory ( either relative or full path ). To run something in the current dir you can use ‘./’ to specify the current dir.
./myscript.sh # run script in current dir
/home/user1/scripts/myscript.sh # run script with full path
myscript.sh # run script that is on the path ( system can find it )
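Putting it all together ( the dir and the hi command are made-up names for illustration ):

```shell
mkdir -p "$HOME/demo_bin"                  # a dir for your own tools
printf '#!/bin/bash\necho hello\n' > "$HOME/demo_bin/hi"
chmod u+x "$HOME/demo_bin/hi"              # make it executable
export PATH="$PATH:$HOME/demo_bin"         # add the dir to your path
hi                                         # found via the path, prints: hello
```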
Scripts
This is not a guide to scripting, but we are going to show you how to create extremely basic scripts and how to run those scripts.
bash ./myscript1.sh # execute a bash script even without permissions
chmod u+x ./myscript1.sh # grant execute permission to owner
./test1.sh # execute script in current dir
/home/user1/test1.sh # use full path
python test1.py # execute a python script ( could be v2 or v3 ... )
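Here is what a minimal bash script might contain ( myscript1.sh and its contents are made up for illustration ):

```shell
# write a tiny script: shebang line, a variable, and an argument
cat > myscript1.sh <<'EOF'
#!/bin/bash
name="${1:-world}"        # first argument, defaults to "world"
echo "hello, $name"
EOF
chmod u+x myscript1.sh
./myscript1.sh            # prints: hello, world
./myscript1.sh Linux      # prints: hello, Linux
```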
Package Management ( Debian, Ubuntu, RHEL, Fedora, CentOS, Arch, Suse, Gentoo, etc. ):
update cache, install, remove, search, upgrade
dpkg | Debian / Ubuntu | install from file, remove, check |
apt | Debian / Ubuntu | install from repo, manages dependencies |
rpm | RHEL / CentOS / Fedora | install from file, remove, check |
yum | RHEL / CentOS / Fedora | install from repo, manages dependencies |
dnf | RHEL / CentOS / Fedora | install from repo, manages dependencies |
pacman | Arch | install from repo, manages dependencies |
emerge | Gentoo | install from repo, manages dependencies |
zypper | Suse | install from repo, manages dependencies |
- For Debian / Ubuntu just use apt by default for most cases.
- For RHEL / CentOS / Fedora just use DNF when available.
NOTE - You will need to be root or use sudo to install or remove packages.
dpkg
The dpkg tool is a package management tool for Debian / Ubuntu systems. It is really used for managing package files and querying what is installed on the system. It generally isn’t used for dependency management, etc. so for most tasks you will just use apt instead.
dpkg -i nginx_1.18.0-0ubuntu1.4_all.deb # install package
dpkg -r nginx # remove package
dpkg -P nginx # remove package, configs, and data ( purge )
dpkg -l # list all installed packages
dpkg -l | grep -i nginx # check if package is installed
dpkg -L nginx # list files installed by package
dpkg -S /etc/nginx/sites-available/default # which package installed this file
apt
Apt is the package manager for Debian / Ubuntu systems. It works pretty well.
apt update # update package index
apt install nginx # install a package
apt remove nginx # remove a package ( keep configs )
apt remove --purge nginx # remove package and configs
apt purge nginx # remove package and configs
apt search nginx # search
apt search --names-only nginx # search ( only in name )
apt upgrade # upgrade all packages ( won't remove anything )
apt full-upgrade # upgrade all packages
apt autoremove # remove unneeded packages
RPM
This is generally used for working with individual package files or checking what is on the system. Generally you should use YUM or DNF instead ( actually just DNF ).
rpm -i nginx-123.rpm # install package
rpm -e nginx-123 # remove package
rpm -qa # list all installed packages
rpm -qa | grep -i nginx # check if package is installed
YUM
Yum is obsolete but still very common. It has been replaced by DNF and is often just an alias for DNF. NOTE - when it is an alias for DNF, update and upgrade do the exact same thing.
yum install nginx # install package
yum update nginx # update package
yum update # update all packages
yum upgrade # update all packages and remove obsolete
yum erase nginx # remove package
yum search nginx # search for package
yum list all # list all packages ( installed and available )
DNF
DNF is the current, preferred package manager for Red Hat-based systems. It replaces YUM.
dnf install nginx # install package
dnf remove nginx # remove package
dnf update nginx # update package
dnf update # update all packages
dnf upgrade # update all packages
dnf up # update all packages
dnf upgrade --refresh # update all packages ( forces an immediate update of the repository lists )
dnf distro-sync # sync to whatever the latest distro package version is ( upgrade, downgrade, etc. )
dnf search nginx # search for package
dnf list available # list all available packages
dnf list installed # list all installed packages
dnf repolist # list enabled repositories
dnf repolist all # list all repositories
dnf config-manager --enable abc # enable abc repo
dnf autoremove # remove unneeded packages
Pacman
Pacman is a nice package manager for Arch:
pacman -Syy # update package list ( force )
pacman -Syu # update package list and upgrade all
pacman -Syu nginx # update package list and upgrade all and install/upgrade single package
pacman -S nginx # install/update package
pacman -Sy nginx # install/update package, update package list first
pacman -Rs nginx # remove package and all deps except needed by other packages or installed by user
pacman -Rss nginx # remove package and all deps except needed by other packages
pacman -Rsn nginx # remove package and all deps except needed by other packages or installed by user, don't save config files
pacman -Ss nginx # search packages
pacman -Ql nginx # list all files from package
pacman -Qe # list explicitly-installed packages
pacman -Rns $(pacman -Qdtq) # remove unneeded packages
Emerge
Gentoo is a bit different. It uses the Portage system, and emerge is its command-line front end. Here are some commands.
emerge --sync # update the package db
emerge --ask www-servers/nginx # install a package
Zypper
Suse uses the Zypper package manager:
zypper in nginx # install or update package
zypper rm nginx # remove package
zypper up nginx # update package
zypper up # update all packages
zypper se 'nginx*' # search for packages matching 'nginx*'
Network Commands
We’re going to cover some basic network commands. We’re assuming you are using DHCP and that wifi is working. We will still show how to configure a static IP temporarily. We’re not covering wifi or configuring static, persistent connections in this guide. We’re going to cover that separately in another guide.
- If you’re on a server you probably aren’t using wifi but you might need a static IP.
- If you’re on a desktop you can usually setup wifi with a GUI tool. ( that is going to depend on your distro )
Most systems will either have the older package ( net-tools ) or the newer package ( iproute2 ) or sometimes even both.
Newer commands
These commands are part of the newer package ( iproute2 ).
ip a # show interfaces with addresses
ip l # show interfaces
ip r # show routes
ip n # show ARP table
ip addr add 192.168.0.25/24 dev eth0 # assign IP to interface
ip addr del 192.168.0.25/24 dev eth0 # remove IP from interface
ip link set eth0 up # interface up
ip link set eth0 down # interface down
ip route add default via 192.168.0.1 dev eth0 # add default route
ip route add 192.168.3.0/24 via 192.168.0.1 # add route through gateway
ip route add 192.168.3.0/24 dev eth0 # add route through interface
ip route delete 192.168.3.0/24 via 192.168.0.1 # remove route
ss -ltupn # listening, TCP, UDP, show process, number
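The ss output is plain columns, so it is easy to post-process with the text tools from earlier. A minimal sketch that pulls out just the local address:port of every listening TCP socket ( assumes iproute2 is installed; column 4 is "Local Address:Port" ):

```shell
# Skip the header line ( NR > 1 ), then print column 4
ss -ltn | awk 'NR > 1 { print $4 }'
```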
Legacy Network Commands
These commands are part of the older package ( net-tools ).
ifconfig -a # show interfaces and IPs
route -n # show routes, number
arp -a # show arp table
ifconfig eth0 192.168.0.25 netmask 255.255.255.0 # add IP and netmask to interface
ifconfig eth0 delete 192.168.0.25 # remove IP
ifconfig eth0 up # interface up
ifconfig eth0 down # interface down
route add default gw 192.168.1.1 # add default route
route add -net 192.168.5.0 netmask 255.255.255.0 gw 192.168.3.1 # add route through gateway
route add -net 192.168.3.0 netmask 255.255.255.0 dev eth0 # add route through interface
route del -net 192.168.3.0 netmask 255.255.255.0 # remove route
netstat -ltupn # listening, TCP, UDP, show process, number
More Commands
Telnet
Telnet is a tool for connecting to a shell on remote hosts. It is insecure and obsolete, and you are very unlikely to use it that way in modern times; use SSH to connect to remote hosts instead. These days it is mostly used to check whether a specific port is open on a given host.
telnet 192.168.1.1 # connect to a host using telnet
telnet 192.168.1.1 80 # test if a specific port is open on a remote host
Netcat
Netcat is an incredibly flexible tool. It does a ton of stuff. These are some of the more common, useful things that it can do.
nc 192.168.3.231 1234 # open connection to send data
nc -zv 192.168.3.231 1234 # just check connection, don't send data, verbose
nc -zv google.com 443 # same, for google.com
nc -zv 10.0.2.4 1234-1240 # scan a range of ports
nc -zv 10.0.2.4 1234-1240 2>&1 | grep 'succeeded' # filter for open ports
SSH
SSH is the secure, modern way to connect to remote servers.
ssh server1.lab.net # connect to this server
ssh 192.168.3.231 # connect to this IP address
ssh user1@192.168.3.231 # specify a user
ssh -i .ssh/Key1.pem user1@192.168.3.231 # specify an ssh key
SCP
The scp command is used to transfer files to and from remote hosts using an SSH connection.
scp data1.txt 192.168.3.231:/home/user1 # copy file to remote server
scp 192.168.3.231:/home/user1/data1.txt /home/user1 # copy file from server to specified dir
scp host1:/data/file1.txt host2:/prod/info.txt # between two servers, changing file name
scp data1.txt user1@192.168.3.231:/home/user1 # specify username
scp -r data_dir1 192.168.3.231:/home/user1 # copy a dir, '-r' for recursive
scp -i .ssh/Key1.pem data1.txt user1@192.168.3.231:/home/user1 # using an SSH key
Ping and Traceroute
Check point to point connectivity:
ping google.com # ping a host to verify that it is reachable with ICMP
traceroute google.com # show the hops to a host ( check at which hop it fails )
DNS
Check your own hostname:
hostname # show hostname of current host
Query DNS servers:
host google.com # resolve host name
nslookup google.com # resolve host name
dig google.com # resolve host name
host 142.251.32.110 # resolve IP address
nslookup 142.251.32.110 # resolve IP address
dig -x 142.251.32.110 # resolve IP address ( dig needs -x for a reverse lookup )
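Note that host, nslookup, and dig all query DNS servers directly. If you want to see what the system itself would resolve ( which also honors /etc/hosts ), getent is handy. A sketch:

```shell
# localhost is answered from /etc/hosts, so this works even with no DNS available
getent hosts localhost

# same lookup, but list every address ( IPv4 and IPv6 )
getent ahosts localhost
```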
Open Ports and Files
NOTE - You might want to use sudo or login as root to use lsof or fuser to make sure you can see everything.
List processes using network ports. Specify UDP or TCP and the port number:
lsof -i TCP:8080 # TCP port 8080
lsof -i TCP:53 # TCP port 53
lsof -i UDP:53 # UDP port 53
lsof -i :53 # TCP and UDP port 53
Which files are open by which processes:
lsof # show all open file handles
lsof -p 805 # all files opened by a specific process ( using PID )
lsof -c nginx # all files opened by this command or process
lsof -u user1 # all files open by this user
lsof /usr/lib/x86_64-linux-gnu/libselinux.so.1 # Which processes have this file open
Fuser is also really useful:
fuser -n tcp 80 # TCP port 80
fuser -n udp 53 # UDP port 53
fuser /home/user1 # show all procs accessing this file
fuser -m /home/user1 # show all procs accessing any file on same FS as this file
Important Files
These are some important files that you should be aware of:
/etc/hosts | host / IP mappings, can override DNS
/etc/resolv.conf | DNS configs
/etc/passwd | users defined here
/etc/shadow | password hashes here
/etc/group | groups defined here
/etc/sudoers | sudo permissions here
/etc/fstab | filesystems and mount points
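These are all plain colon-delimited text files, so the text tools covered earlier work on them directly. For example, pulling fields out of /etc/passwd with awk ( a sketch ):

```shell
# /etc/passwd fields are colon-separated: name:password:UID:GID:comment:home:shell
awk -F: '{ print $1, $7 }' /etc/passwd       # every user and their login shell
awk -F: '$3 == 0 { print $1 }' /etc/passwd   # accounts with UID 0 ( normally just root )
```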