
Prevent accidental rm -rf /*

Sep 30, 2021 · 4 mins read

We’ve all heard about the infamous rm -rf /* command. Now, let’s see how we can prevent it from ever happening again.

Prerequisites

  • Linux bash environment
  • sudo privileges

Solution 1

Just forget about the rm -rf /* command and never use it in the first place. If you want to delete something, always make sure to specify the full path of the directory, take a deep breath and double-check before running anything.

Or, if you are feeling brave, add the verbose flag -v so you’ll buy yourself some time to interrupt the command (Ctrl + C) in case of collateral damage.
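For example, a verbose recursive delete (the path here is just a placeholder):

rm -rfv /path/to/old_builds

Each file is printed as it’s removed, so a wrong path becomes obvious within the first few lines.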

Solution 2

Use the find command as an alternative.

Step 1. List the files in the current directory you want to delete.

find . | less

Step 2. Remove everything under the directory.

find . -delete

Note(s): You can also take advantage of find’s filtering options to narrow down exactly what gets deleted.
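For instance, a sketch that deletes only .log files older than 30 days (the pattern and age are just examples):

find . -name '*.log' -mtime +30 -delete

Run the same command with -print instead of -delete first, to review exactly what would match.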

Solution 3

Probably the most important one. Never log in as the root account. There is a reason most cloud providers won’t let you SSH into a VM as the root user. Use a regular user with sudo privileges instead, and use sudo only when required.
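As a sketch, creating a regular user with sudo privileges on a Debian-based system (the username is just an example):

sudo adduser deploy
sudo usermod -aG sudo deploy

On RHEL-based systems the group is wheel instead of sudo.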

Solution 4

Replace bash with zsh, since zsh asks for confirmation before executing rm -rf /*.
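To try it out (a minimal sketch, assuming your distribution packages zsh):

sudo apt-get install -y zsh
chsh -s $(which zsh)

Log out and back in for the new shell to take effect.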

Solution 5

Use aliases. For instance:

alias rm='rm -i'

This will prompt for confirmation before every file removal (if you replace it with an uppercase i (-I), it will prompt only once before removing more than three files or when removing recursively), or

alias rm='rm --preserve-root'

Note that --preserve-root (the default in modern GNU coreutils) only refuses to operate recursively on / itself; it does not protect against /*, since the shell expands the wildcard before rm ever runs.
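Keep in mind that an alias set on the command line lasts only for the current session. To make it permanent, append it to your shell’s startup file (assuming bash and ~/.bashrc):

echo "alias rm='rm -i'" >> ~/.bashrc
source ~/.bashrc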

Solution 6

Another one is an internet-famous hack using the touch command.

Step 1. Create a couple of empty test files and a file named -i in a test directory.

mkdir /tmp/demo && cd /tmp/demo
touch {1..3}
touch -- -i

Step 2. List the directory.

ls

Output:

1   2   3   -i

Step 3. Run rm -rf * inside the test directory.

rm: remove regular empty file `1`?
rm: remove regular empty file `2`?
rm: remove regular empty file `3`?

This way, the wildcard * also expands to the file named -i, which GNU rm picks up as the interactive option, so you will be prompted before the disaster. Note that this trick only guards rm -rf * runs from inside the directory; rm -rf /* expands to /-i, which rm treats as a regular path rather than an option, so it offers no protection there.

Solution 7

There are a lot of external scripts and wrappers that can help protect against these embarrassing accidents. The most popular one is safe-rm.

Step 1. Install safe-rm.

sudo apt-get install -y safe-rm    # Debian/Ubuntu
sudo yum install -y safe-rm        # RHEL/CentOS

Step 2. Whitelist the important directories in /etc/safe-rm.conf.
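For example, to protect /home (assuming the default configuration path):

echo /home | sudo tee -a /etc/safe-rm.conf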

Step 3. Add an alias for rm.

alias rm='safe-rm'

Solution 8

Another way to protect files and directories is the chattr command line utility. I’ve already written a blog post about this one called: Protect files from being deleted in Linux, so please check that one out.
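As a quick taste, the immutable flag blocks deletion even for root until the flag is cleared (the file path is just an example):

sudo chattr +i /etc/important.conf
sudo rm -f /etc/important.conf        # fails: Operation not permitted
sudo chattr -i /etc/important.conf    # clear the flag when done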

Solution 9

Be creative: write a bash script that, instead of deleting files, moves them to a hidden .trash directory under /tmp.

Step 1. Create a new bash script under /usr/local/bin.

sudo touch /usr/local/bin/rm.sh

Step 2. Add the following content:

#!/bin/bash

# Move the given files into a hidden trash directory instead of deleting them.
DIRECTORY="/tmp/.trash"

if [ ! -d "$DIRECTORY" ]; then
    echo "creating the .trash directory..."
    mkdir -p "$DIRECTORY"
fi
mv -- "$@" "$DIRECTORY"

Step 3. Make the script executable.

sudo chmod +x /usr/local/bin/rm.sh

Step 4. Create an alias.

alias rm='/usr/local/bin/rm.sh'

Step 5. Have fun!
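A quick sanity check, assuming the alias is active in your current shell:

touch /tmp/testfile
rm /tmp/testfile
ls /tmp/.trash

The file should land in /tmp/.trash instead of being gone for good.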

Solution 10

Protect important external filesystem directories by mounting them in read-only mode.

Step 1. Create a mount point.

sudo mkdir /mnt/important_files

Step 2. Mount a volume, for instance /dev/sda2, in read-only mode:

sudo mount -o ro /dev/sda2 /mnt/important_files
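To make the read-only mount survive reboots, add a matching entry to /etc/fstab (assuming /dev/sda2 holds an ext4 filesystem):

/dev/sda2    /mnt/important_files    ext4    ro    0    2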

Solution 11

Always plan and implement a backup and recovery procedure. Use rsync or some other backup utility. Backing up disks and taking snapshots are among the best practices when managing infrastructure in the cloud.
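A minimal rsync sketch (the paths are just examples; /mnt/backup is assumed to be a mounted backup disk):

rsync -avh /home/ /mnt/backup/home/

Schedule it with cron or a systemd timer so backups run without you having to remember.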

Conclusion

If you have any other interesting approach, please share it in the comment section below, and if you find this tutorial useful, follow our official channel on Telegram.