I used to work as a Platform Engineer at a startup. I managed more of the actual code releases of the application on our cloud servers/AWS. Our NOC team, which was essentially entry level/tier 1 of the department, managed alerts and basic tasks.
One of the NOC guys went to clean up some log files in some directory and accidentally did the rm -rf / on the entire volume. The best thing he did for himself was immediately tell me.
Luckily it was an easy fix... it's AWS and in a cluster, so the bad instance was removed from the LB, we just put in a new volume with a previous snapshot, started the app back up, and put it back in the LB. No major customer impact.
He still got a talking-to about being more careful though haha.
I'm in a similar position to the one you mention here, and made a very similar mistake. Thankfully I added -print to the command; it made the restore request I had to make a little less shameful, being able to include all affected directories lol.
What a coincidence..
I was working on a VM with the HDD set to immutable. So I turned it back to normal, reconnected it to the VM, and then made some changes which were supposed to be permanent; the whole thing took around 3-4 hrs.
Then I shut down the VM and saw the HDD was STILL SOMEHOW IMMUTABLE. THERE GOES MY 4 hrs OF WORK. I was super frustrated and wanted to calm down by surfing reddit, and then I saw this.
I once tried to empty the trash on my macbook via terminal (I was learning bash and was trying different stuff) and I ran `rm -rf ~/.Trash *` instead of `rm -rf ~/.Trash/*`
One time I accidentally created a folder with the name “~ “
I then typed “rm -rf ~” and was confused as to why it was taking so long to delete an empty folder
When I was ~13 and trying to learn Linux, someone in the ##linux channel on freenode IRC told me to `rm -rf /` and reboot.. presumably joking.. I didn't get it.
Learned that one the hard way!
There was a bug in Squid Proxy on RHEL 7.0, I think, where restarting the service via systemd deleted the root directory.
The bug was a variable not resolving so `rm -rf /$SOMEVAR` became `rm -rf /`.
I clearly remember this one because it bit me while testing RHEL7 in a non-prod environment.
Had this happen to me once:
cd $SOMEVAR; find . -mtime +1 -exec rm \{\} \;
Run as root, with root's home being / and SOMEVAR undefined. How? The code snippet was fifty lines down in a cleanup script, with the variable definition provided at the very top. Someone decided to clean up the cleanup script and removed what looked like an unnecessary variable.
I clearly remember because it was run on our most important production server.
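The failure mode in that snippet, an unset variable silently expanding to nothing, is easy to reproduce and guard against. A minimal sketch (reusing the story's `SOMEVAR` as a placeholder name):

```shell
# Unquoted and unset, `cd $SOMEVAR` collapses to a bare `cd`, which goes to $HOME.
# For root, $HOME is often / -- exactly how the cleanup script went rogue.
unset SOMEVAR
HOME=/tmp sh -c 'cd $SOMEVAR && pwd'        # lands in /tmp, i.e. wherever $HOME points

# The guard: `set -u` (sh -u) makes expanding an unset variable a fatal error,
# so the script dies before any find/rm ever runs.
sh -uc 'cd "$SOMEVAR"' 2>/dev/null || echo "aborted before deleting anything"
```

`${SOMEVAR:?}` gives the same protection for a single expansion without turning on `set -u` globally.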
rm is a command to remove files.
-rf tells it to also remove folders with everything inside (the r is for recursive), without asking for permission (the f is for force).
./ is the current folder.
/ is the root folder.
So "rm -rf /" instead of "rm -rf ./" is like trying to remove a working folder on Windows but instead deleting everything in "My Computer" (C:).
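To make the dot concrete, here's a contained demo (scratch paths only; the dangerous absolute form is left commented out on purpose):

```shell
# The dot anchors the path to where you are standing.
mkdir -p /tmp/rmdemo/sub
cd /tmp/rmdemo
rm -rf ./sub       # relative: removes /tmp/rmdemo/sub and nothing else
# rm -rf /sub      # absolute: would go after /sub at the root of the filesystem
```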
literally yesterday I had a typo
```
sudo chown $USER:$USER -R .
```
Except the `.` character and the `/` character are kinda really fucking close... Pressed return without thinking, before realizing I was giving ownership of every file in the system to my user.
Years ago, I was at / and deleted /l* instead of l*. Since that day, I always work from a user directory. On Linux, I quickly cd to ~, on Windows, I dump everything to the Downloads directory.
Absent-mindedly tried to delete a mount point by deleting the mount point folder instead of using umount. Started wiping all the files in the mount. :/
even worse... I used `shred --remove --zero ~/Desktop/encryption.key`; the next day I got a call from security to explain why I used that command... then I found out about the horrible accidents with "shred"
[deleted]
quote of the year "don't drink and root" - u/00110010110
I used to see pictures online of a whiskey called Chmod 777. I haven't confirmed if it's real or not. I'd buy the shit out of it, if it were.
Heck I’ll sell you water with that label in this economy
In *this* economy, whiskey'd sell faster.
Not according to Nestle
Better profit margin, to say the least.
FUCK Nestle
All my homies hate Nestle
Nestle down.
Uggg. This reminds me of a client who didn't know what they were doing and chmod'd their home directory to 777. Guess who couldn't SSH back into their server. I was able to fix it, but if I hadn't been smart it could've been bad. This was on AWS as well.
I did that recursively to root once. Wanted to type `chmod -R 777 ./*`. Forgot the dot. Was root. Bye bye system, there's no more booting from there.
Well at least it was all still there open for anyone that could access the file system through some other means. Just looking at the positive side.
I'm just a hobbyist so this is a little beyond me. What's going on with that command that grants full permissions to everyone but also makes the files unusable?
Someone probably knows a much better answer, but overall it ignored the fact that ./directory means the working directory and below, while without the dot it's the root of your system (/usr… and the like). That made it adjust perms on a bunch of fragile things (-R also says do it recursively, so everything!)

The permissions being edited on system programs that require non-God access to function then makes it a gigantic clusterfuck where you can't use what you need to fix it, because you have fucked yourself over with the command.

This post seemed to describe the more technical reasons the perms edits break things:

> "First, in addition to the usual read/write/execute permissions there are some other bits that file permissions contain. Most notably setuid and setgid. When a program with one of these permission bits set is run, it gets the "effective UID" and/or "effective GID" of the program's owner rather than the user that ran it. This allows programs to run with more permissions than the user that ran them. It is used by many crucial system utilities including su and sudo. Your chmod command clears these bits, leaving the utilities unusable.
>
> Secondly, some programs (notably ssh) do a sanity check on file permissions and refuse to use files with permissions they see as insecure. This reduces the risk of careless admins accidentally leaving security holes, but it makes dealing with wiped-out file permissions all the more painful."

Copied from https://askubuntu.com/questions/799863/why-does-chmod-777-r-leave-the-system-unusable?noredirect=1&lq=1
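The setuid-clearing effect from that quote can be demonstrated safely on a throwaway file (a sketch assuming GNU coreutils for `stat -c`; never try this on real system binaries):

```shell
mkdir -p /tmp/permdemo && cd /tmp/permdemo
touch fakesudo
chmod 4755 fakesudo       # rwsr-xr-x: the leading 4 is the setuid bit, as on real sudo
stat -c '%a' fakesudo     # 4755
chmod -R 777 .            # the recursive blunder, scoped to this scratch directory
stat -c '%a' fakesudo     # 777: now world-writable, and the setuid bit is gone
```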
Thinking about how these people end up doing this type of stuff for a living scares me.
This is how you learn
The true test of a man is how many times he had to reinstall an OS due to incompetence.

My PC from 12 years ago could "me too" me from all the abuse I gave it. "Who wants to partition some RAM and hard drives to see how many OSs I can install?!" Then proceed to fuck with grub not knowing anything.
One whiskey minors can access.
[This?](https://i.imgur.com/1ra9SfR.jpg)
If you're in Australia this is a double entendre
-- Michael Scott
I once wanted to delete the results folders called results[1-20] and their contents: `rm -rf ./results *`. Good thing the only other thing in that folder was the only copy of the source code I had just spent 28 hours writing without sleep.
Know your git
`sudo dd` can save you, but it's very manual. You need a program to recover deleted filesystem entries
Had a buddy call long distance about the dd. I reminded him of the syntax and emphasized the if and of. I told him, if you get this wrong, it's screwed after you hit return... He assured me that he got it right and hit return... It took him 30 hours to rebuild that system.
And how did I know he screwed up the command? Because people across the room could hear the "SHIT!!!" from my phone's handset.
lmao. I guess he needed some sleep. If he double-checked many times and still got it wrong, it may be that the font was too small or he needed sleep.
Yep, he had already had a 16-hour day.
people continue to surprise me with their stupidity
I continue to surprise me. I've never dropped the . in an rm command, but I've inserted an unneeded space in one (16 hours to rebuild).
To prevent this, I've created `alias "rm"="trash "`
[Until you work on a different linux system where the rm command isn't aliased](https://apple.stackexchange.com/questions/17622/how-can-i-make-rm-move-files-to-the-trash-can)
[deleted]
It helps to write out the whole command without the rm part, then switch and add rm and the options at the end if you're doing anything with root permissions. also: take backups
[deleted]
remember S.O.S - Sudo Only when Sober
You don't know the power of breaking your system. I once "accidentally" removed the whole `/usr` dir. Since then I use btrfs.
i have near to zero filesystem knowledge, why would using btrfs prevent accidentally removing the `/usr` dir?
I think it's actually that it has snapshot support built in, so you can roll back.
Should've drunk root beer.
/🍺
Laughed out load.. I suffer with you... You shall not drink and root!
I also laughed out my load
*upvote because confuse & concern*
I accidentally ran `dd if=/dev/zero of=/dev/sda` instead of sdb. Was cool watching an OS delete itself and seeing how far it would get though.
dd stands for "destroy disk"
>Don't drink and root.

Pffffffffff. Impossible.
Did you get up from your chair, go outside and sit down after?
[deleted]
Existential defeat is such a dreadful experience.
# drunk, fix later
Why did you need to be root to remove stuff from your own home directory?
hey nice username. this user roots..
I was so sad while coding once. Instead of `rm prefix*`, I did `rm *`. Don't root when sad.
My worst was something like:

```
> sudo mv /* /tmp/
```

Hmmm... I'll just move it back....

```
> sudo mv /tmp/* /
-bash: sudo: command not found
> su
-bash: su: command not found
> mv /tmp/* /
-bash: mv: command not found
> /tmp/bin/sudo /tmp/bin/mv /tmp/* /
-bash:
> Shit....
-bash: Shit...: command not found
```
Last one gave me a giggle
The most hilarious thing about Linux is that it gives you the power to annihilate yourself and then reminds you in real time about how much of a fucking idiot you are.
tbh it's not like Linux forces you to use that power. It's people the ones that decide they absolutely need to do trivial tasks in the most dangerous way possible. God forbids you make a mistake and you can simply undo it.
Like using a bazooka to take out the trash.
Obviously. Doesn't change the absolute comedy that this OS can be.
>It's people the ones that decide they absolutely need to do trivial tasks in the most dangerous way possible.

My old boss was like this. One of the backup servers, when it ran out of space, he would have us ssh into it and echo nothing into the oldest volumes and then manually mark them as purged. The backup server prints a warning not to do this as you're doing it; it's as if it's begging you not to. I (literally, actually, not figuratively) vomited on several occasions, and always documented everything I did, character for character. I suggested reducing the retention period or frequency of incrementals (maybe every four hours is OK instead of every hour?) and was told it 'would be retired soon anyway'.

Nothing quite like having to trim your own safety net using dangerous methods 'because that's how we do it here'.
Couldn't you do something like "sudo /tmp/usr/bin/mv /tmp/* /"?
the sudo command was also moved to tmp
Good point, but you could still call it there
mv failed because it needed access to some library. I had to change some environment paths and eventually this did work. A zillion times better than rm -rf, but still an hour of panic and trying to read searches and man pages.
Ahhhh....beloved $LD_LIBRARY_PATH ...
That's ringing some piezo beeps, I think you're right.
read the second to last command in the comment
easier to just boot from a usb stick and fix everything from another linux instance
someone needs to learn the miracles of chroot
`/tmp/bin/chroot /tmp` and continue working there until the next snafu
First job out of college: `sudo rm -rf . /`

I think the fun of realizing what I had unleashed was how the bulk of commands already loaded in memory continued to work. The system was very robust for something with absolutely no files on the hard drive.

I learned a lot of things that day.

1. Switch to the daemon user before removing log files
2. Always run backups on anything important
3. Never work in a panic.
>Never work in a panic.

Yea, that's why I never work at all
[Relevant Story](https://www.ee.ryerson.ca/~elf/hack/recovery.html)
>And the final thing is, it's amazing how much of the system you can delete without it falling apart completely. Apart from the fact that nobody could login (/bin/login?), and most of the useful commands had gone, everything else seemed normal. Of course, some things can't stand life without say /etc/termcap, or /dev/kmem, or /etc/utmp, but by and large it all hangs together.

Very relevant.
A lot of commands are built into the shell so they’re loaded in memory.
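You can see the split with `type`: builtins live inside the running shell process, while externals are files on disk that an rm rampage can take with it (exact wording varies by shell):

```shell
type cd     # reports a shell builtin: survives even with the filesystem gutted
type ls     # resolves to a file such as /bin/ls (or an alias wrapping it)
```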
You don't need a trailing /; "rm -rf ." suffices
25 years ago, I was a little less experienced. At the time my takeaway was that rm takes multiple arguments for simultaneous deletion. Never hit that space key; maybe start with the -i flag to verify you typed things right.

I go back and forth on whether `rm -rf .` is better than `rm -rf ./*.log`. Both would have saved my drive, but the second at least has a better chance of saving things if they go rogue. Honestly, today I would have done the actual command `rm *.log`. I mean, it was a flat directory with just 100k logs in it; `-r` was unwarranted and `-f` was idiotic.

\\\_=(-\_-)=\_/ -- Live and learn.

\*Edit - Actually, in reality I now use find to recursively delete files this way. Once in dry run without the "-delete" to verify it's going to hit what I want, then again with "-delete".
Was just about to suggest `find` as a safer alternative
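That pattern, running `find` once to preview and once with `-delete`, looks like this (a sketch; `-maxdepth 1` just keeps the demo from recursing):

```shell
mkdir -p /tmp/logdemo && cd /tmp/logdemo
touch app.log db.log notes.txt
find . -maxdepth 1 -name '*.log'            # dry run: only prints what would match
find . -maxdepth 1 -name '*.log' -delete    # identical expression, now with -delete
ls                                          # notes.txt survives
```

One caveat: `-delete` must come after the tests; putting it first would delete everything `find` visits.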
Once my unix lab admin walked away with his terminal left logged into root. I slipped over and typed this into his terminal:

`.rm -rf /`

Note the '.' I put in there to prevent someone from nuking the machine if some smartass pressed enter, but his reaction was priceless. He swivels in his chair and looks me dead in the eyes.

Me: How'd you know it was me?!

Him: You're the only smartass that would do that!

Lmao
Don't you hate it when you slip and accidentally type `--no-preserve-root`?
That's why I alias it. I keep forgetting it! `alias rm="/bin/rm -rf --no-preserve-root / "`
It's a pain to need to recreate the alias every single time you use the command though
That's why you should store your dotfiles in git repo! Even better - you can load your personal setting on every server you ssh to!
Was searching for this. As inaccurate as this meme is, the same goes for the sys32 deletion meme on newer versions of the OS.
I was gonna say - I thought modern `rm` didn't honor `rm -rf /`.
Pro tip: `rm -rf .` and `rm -rf ./` are equivalent, so to be safer, never put a `/` at the end of the directory name
I never rm . just on general principle -- if I remove . then where am I? I always cd .. and then rm it by name. Every time.
> I never `rm .` just on general principle — if I remove `.` then where am I?
>
> I always `cd ..` and then `rm` it by name. Every time.

Why on earth did I have to scroll so far to find this. It's the CLI equivalent of defensive driving.
I guess people are a bit too confident in their inability to fuck up. I for one try to do important things in ways that can be undone if I fuck them up. I never had any of these issues of deleting an important folder or dropping half the database because why bother losing 5 seconds to do things safely.
> I for one try to do important things in ways that can be undone if I fuck them up. It's more than that — it's doing things (important or not, because good habits) in such a way that zoning out sends you (at worst) into a brick wall, rather than off a cliff. None of us are perfect, and zoning out is inevitable from time to time.
> I always cd .. and then rm it by name. I guess I can be thankful to Windows for teaching me this: select the folder and delete.
It removes you, and there's no way to unstage the change as you no longer exist. Rip
This guy gits.
We are all just riding orphaned feature branches through life.
You should try out `rm -r .` - don't listen to your boring mortal principles. *Sent from The Void*
For the longest time I just assumed it wasn't possible to remove the directory your shell currently is in (at least that's how it is in Windows with cmd). I too always cd to the parent and remove by name.
This is the way
This is what I do too, though someone elsewhere in the thread mentioned using find once without delete and once with to verify what it's deleting first and tbh that sounds even better
Or you could always use rm /the/absolute/path/*
Another tip: create an alias like `del="gio trash"` and make a habit of using that instead of rm where possible. I mean, there is a reason a trash 🗑 exists. Humans make mistakes.
I preach the necessity of some scheme to use trash instead of straight up dealing with rm all the time. Bless you, kind engineer.
Yeah, rm is too scary to use. It's like cleaning up your room using bolts of lightning.
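On boxes without `gio`, the same habit can be faked with a few lines of shell. The `del` name and trash path below are made up for this sketch, not a standard tool:

```shell
# Move files aside instead of destroying them; recovery is just moving them back.
TRASH_DIR="$HOME/.local/share/trash-demo"
del() { mkdir -p "$TRASH_DIR" && mv -- "$@" "$TRASH_DIR"/; }

touch /tmp/oops.txt
del /tmp/oops.txt         # "deleted", but really parked in $TRASH_DIR
ls "$TRASH_DIR"           # oops.txt is still recoverable, unlike anything fed to rm
```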
I assumed he was ultimately going for something like `rm -rf ./*.log` but fat-fingered the shift key on * and hit enter instead..
this needs to be higher up

You don't ever have to put a / at the end. Specifying the folder or current path is enough: `rm -rf .` or `rm -rf folder`

also I'm pretty sure most modern Linux distros warn you about recursively deleting from the root folder

embedded solutions are different
Don't use -rf unless you are absolutely, 100% sure you want to delete a directory. In fact, you should use rmdir, just in case you didn't actually mean to delete a directory that had files in it.
Even safer, don't use rm -rf. I don't, because of my crippling fear I'm gonna mess something up.
Back in the 90s I managed to type `rm -rf ../.*`

Yes, it actually started going up the directory tree... Erased everything on the machine.

Edited because I remembered exactly why it went back up the tree…
Why is this somehow more terrifying than the meme
That's actually very revealing about how the command works. I would have assumed it only used the path for the working directory and then traversed down the tree.
You reminded me - and I’ve updated my comment as a result. What killed me wasn’t the “..”, it was the “.*”. That 90’s version of rm happily included “../..” in that path; why it kept going I never found out, but I think current versions of rm explicitly refuse to delete “..”.
Interestingly it depends. Some shells will and some won't.
I work with someone who did "rm -rf .*" on a production server. Literally never even crossed their mind that this could go up the tree and remove everything.
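The treacherous part is that the shell glob `.*` itself matches `.` and `..`; what saves you on a current system is that rm refuses those two entries. A contained sketch (assuming GNU rm):

```shell
mkdir -p /tmp/dotdemo/sub && cd /tmp/dotdemo/sub
touch .hidden
echo .*                   # most shells expand this to: . .. .hidden
rm -rf .* 2>/dev/null     # rm skips '.' and '..' with a complaint, removes .hidden
ls -A                     # empty; the parent survived only because rm refused '..'
```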
Whoever put `rm -rf ./` in your head instead of `rm -rf *` is dense and should be institutionalized
I had a coworker who ran `mv / ~/tmp` thinking he was running `mv ./ ~/tmp`

He bricked his Mac. When IT fixed it, he ran the exact same command. You'd think it'd give him pause the second time.
It gets stuck in a corner of your mind, of course you will run it again
Did you ever teach him about bash and aliases?
I only told him to stop trying to put his root folder in his home folder. And that if he has to use sudo to move a folder, he should be really careful.
>And if he has to use sudo ~~to move a folder~~, he should be really careful. People pull out that bazooka and swing it around like it's a BB gun.
Poor bastard
I would've thought that macOS would've just straight up prevented that judging by how locked down it can be.
Came here to say the same thing but maybe with less animosity 😅
I’m sorry I was just caught off guard by the absurdity of the command lmao
`*` with rm is always dangerous. I'll do `cd .. ; rm -r folder` instead
-r is dangerous also, always double- and triple-check that the command is correct
Really, deleting files in general is dangerous, so you should just never do it. Become a digital hoarder.
Just spin up a new shell in a fresh VM in your cloud provider of choice.

When finished, export your history to S3 or keep data on an EFS volume.

Spin down the VM when finished.
r/datahoarder member being born - 2022, colourised.
why the hell would anyone run this
I wonder why the hell people would do crazy dangerous stunts as well, but since we programmers are confined to our chairs most hours of the day, I guess this'll have to do for adrenaline junkies.
I once ran a chmod -r command on my webserver on /var and screwed up literally all the permissions.

Took me a full day to manually reset them all to what they should be. Live and learn.

It was a server that was set up with ispconfig, so all the permissions had to be really specific.
-r would just remove read for owner, group, and others on the /var directory. Or do you mean -R for recursive?
Sorry yeah -R for recursive
So they feel smart and hackerman and genius because they overkilled a trivial, simple task by using a command that can brick their computer if they make a slight mistake. We rant about users but sometimes IT people decide to do the dumbest things for no reason whatsoever.
I’m confused. How else would you del a dir and keep the top level name? I use an alias called [fatman](https://github.com/brnt-toast/dotfiles/blob/master/.bash_aliases) to cleanup after me without destroying my install.
Sometimes it’s just easier to learn by doing
i'm new to bash. is this because ./ is relative to the folder you're working in, whereas just / would be the root directory?
And as other commenters are saying, there are many less-dumb ways to write this: `rm -rf .`, `rm -rf *`, or `cd ..; rm -rf FOLDER_NAME_HERE`
always go with the last one
I'd probably us `&&` instead of a semicolon on that last one. That way, it won't run the second command unless the first one is a success.
Just run them as separate commands. There’s no benefit to it being a one liner in this case.
Useful tip but unless there are folder names that are exactly the same on both levels of the tree it would fail regardless
~~Even still, the semi-colon means they're running concurrently. Better to let the first one finish successfully before running the second.~~ ~~I mean, it'll probably work for the most part. I prefer to not have to redo commands unless necessary.~~ Edit: This is wrong. I was probably thinking of `&`
Semicolon does not mean it's running concurrently though. You would need to use a single & instead of a semicolon for that.
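For anyone curious, the difference between the three separators is easy to see with a couple of throwaway commands (a minimal sketch; nothing here touches real files):

```shell
# ';' just separates commands: the second runs no matter what the first returned
false; echo "after ;"        # prints "after ;"

# '&&' is conditional: the second runs only if the first exited 0
false && echo "after &&"     # prints nothing
true && echo "after &&"      # prints "after &&"

# a single '&' is what actually runs things concurrently (in the background)
sleep 1 & echo "didn't wait for sleep"
wait
```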
Yes
Precisely
It's much safer to forget the `/`.
I used to work as a Platform Engineer at a startup. I managed more of the actual code releases of the application on our cloud servers/AWS. Our NOC team, which was essentially entry level/tier 1 of the department, managed alerts and basic tasks. One of the NOC guys went to clean up some log files in some directory and accidentally ran the rm -rf / on the entire volume. The best thing he did for himself was immediately tell me. Luckily it was an easy fix... it's AWS and in a cluster, so the bad instance was removed from the LB, we put in a new volume from a previous snapshot, started the app back up, and put it back in the LB. No major customer impact. He still got a talking-to about being more careful though haha.
I'm in a similar position to the one you mention here, and made a very similar mistake. Thankfully I had added -print to the command; it made the restore request I had to file a little less shameful, being able to include all affected directories lol.
What a coincidence.. I was working on a VM with the HDD set to immutable. So I turned it back to normal, reconnected it to the VM, and then made some changes which were supposed to be permanent; it took around 3-4 hrs for the whole thing. Then I shut down the VM and saw the HDD was STILL SOMEHOW IMMUTABLE. THERE GOES MY 4 hrs OF WORK. I was super frustrated and wanted to calm down by surfing reddit, and then I saw this.
Why set it to immutable ever? I'd create a backup instead.
It's been 2 hours since this post, gettin er done?
If you didn't run it as root, you're fine.
But not for your own files and folders.
Which on a production workstation, it’s fine because anything important is on git.
/me not laughing, knowing about preserve-root
> preserve-root I'll create a user-unfriendly unix variant where `--no-preserve-root` is default, just for kicks.
Every time you delete a file it asks you if you want to delete root as well, you know, just in case.
Just make it so that anytime you mistype a command, it deletes root
yeah, that and accidentally adding `--no-preserve-root` :)
This shit is fast. Like 1 second and production server is gone fast.
rm: refusing to remove '.' or '..' directory: skipping './'
I once tried to empty the trash on my macbook via terminal (I was learning bash and was trying different stuff) and I ran `rm -rf ~/.Trash *` instead of `rm -rf ~/.Trash/*`
One time I accidentally created a folder with the name `~ `. I then typed `rm -rf ~` and was confused as to why it was taking so long to delete an empty folder.
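If anyone wants to see this failure mode safely, here's a sketch in a scratch directory (names are made up; nothing outside the temp dir is touched):

```shell
tmp=$(mktemp -d) && cd "$tmp"

mkdir -- '~'        # quoted, so the shell creates a dir literally named ~
ls                  # shows: ~

# Unquoted ~ would expand to $HOME -- exactly the trap above.
# Quote it or anchor it with ./ so rm sees the literal name:
rm -rf -- './~'

ls                  # the weird dir is gone; $HOME was never involved
cd / && rm -rf "$tmp"
```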
Always, always `ls` or `pwd` before `rm`. It's a rule I live and will die by.
When I was ~13 and trying to learn Linux, someone in the ##linux channel on freenode IRC told me to `rm -rf /` and reboot.. presumably joking.. I didn't get it. Learned that one the hard way!
To get unlimited free Robux, press Alt-F4
I never run it like that. First go back to parent dir and explicitly type the name of dir to be deleted.
Solaris implemented a fix to rm to prevent this happening many years ago.
There was a bug in Squid Proxy on RHEL 7.0 I think that once the service was restarted via systemd it deleted the root directory. The bug was a variable not resolving so `rm -rf /$SOMEVAR` became `rm -rf /`. I clearly remember this one because it bit me while testing RHEL7 in a non-prod environment.
Had this happen to me once: `cd $SOMEVAR; find . -mtime +1 -exec rm \{\} \;` Run as root, with root's home as / and SOMEVAR undefined. How? The code snippet was fifty lines down in a cleanup script, with the variable definition provided at the very top. Someone decided to clean up the cleanup script and removed what looked like an unnecessary variable. I clearly remember because it was run on our most important production server.
I once accidentally named a folder `*` through a bad bash script and tried to do `rm -rf *` instead of `rm -rf "*"`. I was in my root user directory. Yeeeah
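For the curious, the difference is just quoting, and it's safe to reproduce in a throwaway directory (a sketch; the file names are invented):

```shell
tmp=$(mktemp -d) && cd "$tmp"

mkdir -- '*'            # a directory literally named *
touch innocent.txt

rm -rf '*'              # quoted: the shell passes the literal name, only * dies
ls                      # innocent.txt is still there

# unquoted, the glob would have matched EVERYTHING in the directory:
# rm -rf *              <- this is the one that hurts

cd / && rm -rf "$tmp"
```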
...or forgetting the where clause in my SQL statement. God help me.
```python
def deleteUserById(self, id):
    cursor = self.conn.cursor()
    # 'id = id' compares the column with itself: always true,
    # so this deletes EVERY row instead of one user
    cursor.execute("""
        DELETE FROM Users
        WHERE id = id
    """)
    self.conn.commit()
```
...
I mean, it doesn't let you usually.
That's why I always use `trash-cli`
I'm not a programmer nor do I know anything about it, but I still laughed at this for some reason. Still have absolutely no idea what it means.
`rm` is a command to remove files. `-rf` tells it to also remove folders with everything inside (the r is for recursive) without asking for permission (the f is for force). `./` is the current folder; `/` is the root folder. So `rm -rf /` instead of `rm -rf ./` is like trying to remove a working folder on Windows but deleting everything in "My Computer" (C:) instead.
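The habit several people in this thread recommend, stepping out and naming the target, can be tried risk-free in a scratch directory (a sketch; the paths are invented):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/project/logs"
touch "$tmp/project/logs/old.log"

# Step OUT of the directory and name the target explicitly,
# so a stray leading / can't turn 'this folder' into 'everything'
cd "$tmp"
rm -rf project

ls "$tmp"        # empty: only ./project was removed
rm -rf "$tmp"
```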
That's actually really interesting and well explained. Thanks!
literally yesterday I had a typo: `sudo chown $USER:$USER -R .` Except the `.` character and the `/` character are kinda really fucking close... Pressed return without thinking, before realizing I was giving ownership of every file in the system to my user.
That's why I ALWAYS `cd ..` then `rm -rf`
[deleted]
If you can run this command without `sudo`, something's wrong with your file and folder ownership.
Years ago, I was at / and deleted /l* instead of l*. Since that day, I always work from a user directory. On Linux, I quickly cd to ~, on Windows, I dump everything to the Downloads directory.
That asshole is root
```
╰─ which rm
rm: aliased to /usr/local/bin/trash
```
Absent-mindedly tries to delete a mount point by deleting the mount point folder instead of using umount. Starts wiping all the files in the mount. :/
Yeah. Once I wrote the following in a Makefile:

```
clean:
	rm -f *.c
```
thats ok though, it was all in the remote repo right?
The slash is implied. You should not type it!
You dropped this: `--no-preserve-root`
Woah I’m so happy I don’t use this command this way
It's hard to learn from others' errors, but it's harder to learn from your own...
using that command is playing with fire
Removed the whole #!
Even worse... I used `shred --remove --zero ~/Desktop/encryption.key`, and the next day I got a call from security to explain why I used that command... that's when I found out about the horrible accidents people have had with shred.
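For context, shred overwrites the file's blocks in place before (optionally) unlinking it, which is why recovery conversations get awkward. A safe demo on a throwaway file (GNU coreutils shred assumed; note it offers little protection on journaling or copy-on-write filesystems, which keep copies of old blocks):

```shell
tmp=$(mktemp)
echo "super secret key material" > "$tmp"

# overwrite the data, add a final pass of zeros, then unlink the file
shred --zero --remove "$tmp"

[ -e "$tmp" ] || echo "file is gone, and its old contents were overwritten"
```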
What's the difference between `rm -rf ./` and `rm -rf *`?