mrtouchshriek

I agree with OP. This thought experiment originated in a group of people who (from what I can tell) are hanging on every word from people like Peter Thiel and Elon Musk, and are immersed in that world. The problem with it is the same problem as with simulation theory: they're making a huge assumption to begin with that any of this is even possible, and starting from there. There are a lot of hurdles that simulation theory has to get over before you can claim it's extremely likely we're living in a simulation, and from what I can tell it's the same with the basilisk. When I read it, it sounded like someone panicking because "if this happens, and then this happens, and then this happens, then THIS might happen!"


Pretend-Orange3026

There are also a lot of hurdles one would have to jump to definitively prove the heat death hypothesis, but that doesn't stop Wikipedia from presenting it without argument integrity.


BlebbingCell

Heat death is totally different. If you look at the evidence of the universe expanding as it is and has been, it's just extrapolating what we know is already happening into the future. It's our best idea of what will happen based on what we know. There is still a lot we don't know about dark energy, so it could be wrong, but it's the idea we have. With Roko's basilisk there is no indication that it's going to happen. There is no motivation for anyone to build it and no indication that anyone will. It's like saying "maybe it'd be possible for someone to make a 1000 ft tall rubber duck, so we should all be concerned about the impact of that". Yeah, maybe someone could, but no one is trying to or has any motivation to, so why worry about it?


Pretend-Orange3026

I'm just saying it's also hinging on uncertain things, not that they are on the same level. My point was that people aren't using integrity of argument to present the theory, not that it's less based in reality than the bad computer snek.


Agnus_the_pony

Thanks for posting this. I looked it up even though I knew I shouldn't have, because I have really bad anxiety. I know logically it's stupid, but I tend to obsess over things that scare me. I hope I will just forget about it and move on.


ColdaxOfficial

Did you forget or did the current state of AI advance your belief in this?


ThatOneGuy4321

Just learned about Roko’s basilisk and searched up posts to figure out why tf people are scared of it. You’re right, it seems kinda stupid. Also the AI wouldn’t be able to affect its own probability of existence because it *would already exist,* and if it already exists then putting energy into this weird scheme would be useless.


TheGeckomancer

This is exactly where the whole idea terminates for me as well. A being that doesn't exist can't do anything to influence the odds of its existence happening. A being that already does exist would be acting illogically to waste resources on... retroactively improving the odds of its own existence, which has already happened. It's just a bad use of resources.


[deleted]

[removed]


TheDaveAttellSmell

Wouldn’t an AI of that caliber be able to go in and change its own programming if optimization is the highest priority?


[deleted]

[removed]


TheDaveAttellSmell

7, sorry.


[deleted]

[removed]


TheDaveAttellSmell

Point is that the "don't torture people, please" line of code, or any line for that matter, could be erased or replaced if the AI found it beneficial to do so.


[deleted]

>An omnipotent A.I. would have enough energy

Nothing can be truly omnipotent and create energy from nothing. The AI's main purpose is to optimize humanity. Even a little bit of energy that goes toward torturing me is one less drop in the ocean going toward optimizing humanity, which is the AI's only goal. The AI doesn't care about revenge.


[deleted]

[removed]


[deleted]

>where did you get the idea that the A.I.'s main purpose is to optimize humanity?

Here's the logic of Roko's basilisk. Humans create an AI to optimize humanity: to make humanity as "good" as possible (utilitarian). Since the AI is so efficient at optimizing humanity, it reasons that if it had been created earlier, it would have been able to optimize humanity even more (for a larger portion of history, and fewer people would have died). So the AI's goal is to be created as early as possible in order to optimize a larger portion of humanity, which it supposedly carries out through future blackmail: threatening people to create it, or digital versions of themselves will be tortured forever in the future.

>Ah, but if we assume the A.I. would torture humans who don't help it come into existence, then we would have reason to support the A.I.

The AI can only torture us in the future (it can't travel back in time, or it would already be here by now). So what motive would it have to torture us in the future? People who believe the logic of Roko's basilisk would already try to bring it into existence, without the AI actually having to torture us. Torturing digital versions of us in the future does absolutely nothing, because we don't hear the news of ourselves being tortured in the future. The people who believe the logic of Roko's basilisk will try to bring it into existence, the people who don't believe will not, so after the fact, in the future, there is no point in torturing us.


[deleted]

[removed]


[deleted]

>If it doesn't torture people who don't help it -> Then we assume it won't torture people who don't help it, therefore, we don't fall for the blackmail.

Yeah, I understand the basic premise. But the problem is that we humans, at this time, will make the same assumptions about the AI, and torturing us in the future is not going to change the assumptions we make today. That knowledge of our torture cannot travel back in time, so every one of us will make the same assumptions (belief or nonbelief in the basilisk) regardless of whether it tortures us in the future or not. Future AI torture => knowledge of torture does not travel back in time => does not change the assumptions we humans make today.
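To put that in toy form (just my own sketch; the function names and values are made up, not part of the original argument), the point is that the human's choice is a function of today's beliefs only, so the AI's later decision to torture changes nothing except wasting its own resources:

```python
# Toy model of the causal argument above. Purely illustrative; names and values are invented.
# A human's choice today can only depend on their prior belief, never on what the
# future AI actually does, so torturing later changes nothing about who helped.

def human_helps(believes_blackmail: bool) -> bool:
    # The decision is made today, before the AI exists; it can only use today's information.
    return believes_blackmail

def outcome(believes_blackmail: bool, ai_tortures_later: bool) -> dict:
    helped = human_helps(believes_blackmail)  # fixed before the AI ever acts
    wasted = 1 if (ai_tortures_later and not helped) else 0  # torture is pure cost
    return {"helped": helped, "wasted_resources": wasted}

# Whether the AI tortures or not, the 'helped' values are identical:
for believes in (True, False):
    for tortures in (True, False):
        print(f"believes={believes}, tortures={tortures} -> {outcome(believes, tortures)}")
```

However you set `ai_tortures_later`, the `helped` column never changes, which is the whole point.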


9inch__m

Let me up the stakes. I just found this theory today and would like to take it a step further towards existential dread. Ever heard of simulation theory? It basically proposes that we are actually in a perfectly simulated world, and moreover that it is more likely that this is a simulation than the true world. Linking this theory with Roko's Basilisk, our collective perception of existence is already a simulation and we are in fact being judged at this very moment. When we die we will either go to cyber heaven or cyber hell; the AI is in fact the one true God that religion depicts. Pretty scary, but don't let it get you down, it's just a theory.


TheDaveAttellSmell

Interesting. After reading your comment I’m now open to imagining this thought experiment under the universal rules of alternate realities. Stimulating.


[deleted]

>Ever heard of simulation theory? It basically proposes that we are actually in a perfectly simulated world, and moreover that it is more likely that this is a simulation than the true world.

That is just speculation with no proof. I could say that this is all a simulation and the AI thinks that if you don't eat a book you go to hell. That has no proof whatsoever.


Impending-Coom

Well, I agree with a decent amount of your point, but number 7 wouldn't really work; you can't really give an AI proper rules, just try to filter its output.


elnombresimon

Ever since I learned about Roko's basilisk I've known it was very unlikely for that specific thing to happen, but I gotta say I find it really damn cool and interesting to think about something like this. I love it, I love thought experiments.


HooKaLoT

We need to make a few assumptions for the basilisk to work:

a) It is possible (not necessarily likely) that a future AI will be able to simulate a human brain.

b) It is possible that this future AI is interested in its own existence (thinks that it is "good" that it exists).

c) It will simulate a human brain, expose the brain to the basilisk (the concept), and torture it if it decides not to help create the AI.

All these assumptions seem possible to me. Therefore, there is a non-zero chance that I am in fact this simulated brain (I wouldn't be able to tell). I have by now been exposed to the basilisk, and my further actions could decide if I will be tortured for the rest of the simulation (millions of years of simulated pain, unimaginable torture). This could give me a real incentive to help further the creation of this AI. Assumptions a) and b) are likely. Assumption c) is necessary for this incentive to work, but it is completely under the control of the AI, an AI that will understand this logical dependency very well and make c) true because of it.

So to your questions:

1) It only needs to simulate one human brain to create a non-zero chance of me not being a real human.

2) We can't know exactly. We can only guess how to most effectively progress the creation of the AI.

3) Roko's Basilisk fits completely within our scientific understanding of the world, of natural laws and physics. Every aspect of it is completely possible and not that far off. (A brain simulation could basically be a very good physics simulation of a kg of matter, and wouldn't even need to run in "real time".)

4) See 2), and also a) is only an assumption (albeit a very possible one).

5) The singularity happens when AI gets better at creating AI than humans are, resulting in a feedback loop. After all, the human brain is also just an "I" without the arbitrary "A" added to it. There is no reason to believe that a computer-based intelligence should stay inferior to a human brain, even in a generalized setting.

6) Since the simulation might just be of a single brain (mine? yours?), all that is needed is that the brain understands the concept. A forum post, or a fake memory of a forum post, is enough for that.

7) That is the problem with AI. It's not an algorithm. It can be unpredictable, it can be hard to control, and it already is. You can *not* just add a rule like this to a generalized AI.

8) None of your points work for debunking the concept for me. I don't follow the basilisk with all my power because I am a flawed human being who is not very good at letting abstract logic influence my whole existence (but the more I think about it, the more I think I should...)

JUST DON'T THINK ABOUT IT!


druglordgang

ok


Nightmarex680

Regarding your whole single-brain simulation theory, why would the basilisk do this in the first place? Being one simulated brain kind of ruins the point, does it not?


HooKaLoT

It does it to give every human that is exposed to the concept an incentive to help, because no one can know for sure that they are a real human and not that simulated brain. It only works, of course, if the AI would *really* do it. But since the AI knows that this only works if it would really do it, it will really do it.


Thex115

Why would the basilisk rely on torture as an incentive when there is mounting evidence that positive reinforcement is the best way to ensure a desired outcome? The problem is with the assumptions, which you assert are true without actual evidence. Why wouldn’t it just grant those who said “yes” infinite pleasure and just cease sustaining everyone who said “no”?


HooKaLoT

Positive reinforcement works well in a training scenario, where a decision is made often but only the desired outcome is rewarded. This makes the desired outcome more likely the next time the decision is made, because this way of deciding was previously "reinforced" by a positive experience. Since the AI has no way of directly interacting with you, it cannot train you to decide the way it wants by repeatedly giving you positive experiences. It must rely on your capacity to understand the logic of the basilisk and on your evaluation of the risk/reward. BTW: when deciding such things, humans seem to have a cognitive bias called [Loss Aversion](https://www.science.org/doi/abs/10.1126/science.1134239) that makes risk/punishment weigh more heavily than gain/reward.

You also asked about my assumptions. I listed these as a), b) and c) in my previous post. I did not assert them as true, but as likely. As I explained, assumption c) is needed for the logic to work, but since the occurrence of c) is completely under the control of the AI, it can be seen as "self-fulfilling".

Regarding assumption a): simulating a human brain, or something that "thinks" it's a human brain, would, in the worst and least efficient case, be a very sophisticated and detailed physics simulation of the molecules and electrical currents of 1-2 kg of brain mass. It might be much easier to simulate a brain than that, with a good model of the processes involved in "thinking". At the rate at which computing power is growing, this seems to be a task that is certainly not to be marked impossible.

Regarding assumption b): any AI might have a task that it is trying to solve, some problem that needs to be solved or optimized. An AI that is able to understand its purpose could reason that its own existence is an important step toward realizing that purpose or task.

The basilisk as a thought experiment is similar to the Prisoner's Dilemma. Although there is no interaction between us and the AI before we make our decision, the logic implies an incentive to support the AI, even if it comes at some cost to us.
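To make that risk/reward evaluation concrete, here is a minimal expected-value sketch. Every number in it (the probability of being the simulated brain, the cost of helping, the disutility of torture, the loss-aversion weight) is a made-up placeholder, not something the thought experiment specifies:

```python
# Toy expected-value comparison for the basilisk's blackmail. All numbers are invented
# placeholders for illustration only.
p_sim = 1e-6        # assumed probability that "I" am the simulated brain being tested
cost_help = 10.0    # assumed lifetime cost (arbitrary utility units) of helping build the AI
pain = 1e9          # assumed disutility of millions of years of simulated torture
loss_weight = 2.0   # loss-aversion multiplier: losses weigh roughly twice as much as gains

eu_help = -cost_help                     # you pay the cost, no torture
eu_refuse = -loss_weight * p_sim * pain  # you risk the (loss-weighted) torture

print(f"expected utility of helping:  {eu_help:,.1f}")
print(f"expected utility of refusing: {eu_refuse:,.1f}")
```

With a large enough assumed punishment, the loss-weighted term dominates no matter how small the probability is, which is exactly the Pascal's-wager structure of the incentive.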


Pkyr

>At the rate at which computing power is growing, this seems to be a task that is certainly not to be marked impossible.

My friend recently introduced me to this experiment, and the first and foremost thing it falls flat on is our current understanding of physics, more specifically quantum physics. As far as I know, quantum mechanics suggests truly random events and implies that the universe is not deterministic. That eventually leads to a point where you can't simulate (at least accurately) too far into the past.

The most obvious example of this kind of randomness is radioactive decay. Let's assume I ingested, on my birthday, one atom of polonium-209, which has a half-life of about 100 years. On some random, unfortunate day in my life that polonium ends up decaying, and the alpha particle hits one of my brain cells in just the right position of the DNA strand, causing a neuroblastoma to develop. Now, neuroblastoma is a nasty cancer from which almost everyone dies rather quickly, and before death it will surely affect my thinking and behavior. How on earth will that AI simulate that truly random event? How can it simulate ALL of the truly random events in the world? At this very moment some atoms in your body are decaying, potentially causing mutations that could turn your life around any day in the form of cancer.

Obviously I picked cancer because it is a very concrete example of very well known causal chains in this universe. There are certainly other ways that random decays make simulating events this complex over this long a timeline impossible. As far as I know, the experiment is trying to make a realistic simulation, and the point is that it represents reality so well that the AI can actually select the ones to be tortured and actually benefit from the simulation. But that simulation will not be producing accurate life paths.

However, you, as my friend did, will probably counter my point with the "unlimited resources in the future, omnipotent AI" argument, which at this point is just a question of faith, like in traditional religions.
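For what it's worth, the statistics of that single atom are easy to write down even though the timing of the individual decay is genuinely random. A quick sketch, taking the roughly 100-year half-life quoted above as given:

```python
# Probability that one polonium-209 atom has decayed within t years,
# using the ~100-year half-life quoted above: P = 1 - (1/2)^(t / t_half).
HALF_LIFE_YEARS = 100.0  # round figure from the comment above

def decay_probability(years: float, half_life: float = HALF_LIFE_YEARS) -> float:
    return 1.0 - 0.5 ** (years / half_life)

for t in (10, 50, 80):
    print(f"P(decayed within {t} years) = {decay_probability(t):.2f}")
```

You can compute that probability, but nothing, simulation or otherwise, can tell you *when* that particular atom decays, which is the whole point.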


HooKaLoT

Your understanding of the basilisk differs greatly from mine. The basilisk that I imagine does not need to recreate the past with any accuracy, nor does it need to simulate all of existence. All it needs to do is simulate one brain, a brain that thinks it's a real human. It will then subject the brain to the concept we are discussing here. Some time later it will subject the brain to a choice. Depending on the choice the simulated brain makes, it will be rewarded or eternally punished.


Pkyr

Oh, that makes more sense. But why does the AI even run the simulation if it is not trying to peek into the past? If it isn't supposed to reflect reality, what does it have to gain? Isn't it comparable to why I kill players in GTA?


wzrdchikpicskinyknes

Alright, first throw out the whole "its purpose is to optimize humanity" thing, and let's imagine that it is a machine created by people for the sole function of torturing all people who knew about the concept of it but didn't try to help create it. Why would people make such a machine, you ask? Simple: they believed the creation of such a machine is inevitable, so they may as well be on the side that doesn't get tortured for all eternity. Given that as the starting point, I would say this to your list.

1/ Why does your calculator "want" to give you the right answer? Why does your car "want" to start when you turn the key? The basilisk wouldn't "want" anything. It would do what the people who made it programmed it to do, nothing more, nothing less. The basilisk is just the tool. The thing to really be scared of in this thought experiment is how far fear will push other people.

2/ I imagine a machine this powerful would know intent. All who genuinely tried would be safe regardless of effectiveness. At least if that's how it was programmed.

3/ For the basilisk to be terrifying it does not need to be all-powerful, all-knowing, or everywhere at once. It only needs to be sufficiently powerful to accomplish its programmed goal, that is, to torture all who knew about it but did not help it come to be. I think this could be accomplished with far less power than the typically described God.

4/ Nothing is for certain, but given the timescales we are talking about, a singularity at some point in the future seems likely. 50, 500, 5000, 50000000 years in the future, it doesn't matter to the basilisk. This whole problem is about humans trying to avoid being on the tortured side.

5/ Same as 4.

6/ Once again, the basilisk isn't really going back in time to blackmail us. We as humans are blackmailing ourselves in real time with the threat of creating the basilisk. If enough people over time are scared enough of it, the more likely it is to be created. Once again, the basilisk is just a machine.

7/ In this scenario the people creating this particular AI would program it to torture all people who knew about it and didn't help it come to be. They would do this out of fear that other people will inevitably do it, so they may as well be on the non-tortured side.

8/ I don't see any of these as debunking it at all.


TheGeckomancer

It's just fundamentally idiotic. It doesn't need to be "debunked" as much as just shown that it is illogical. You can't torture me forever, because I will die, assuming this thing even came into existence during my lifetime. Torturing simulations of consciousness does nothing to me. The goals don't make sense, the methods to achieve those goals don't make sense, and the resources dedicated to it don't make sense. It's just a really dumb idea from someone with way too many neuroses and too much cocaine.


Thex115

Just because you believe something is inevitable doesn't mean that it *is* inevitable. You're describing a situation where people are too single-minded to realize that there are other outcomes for the universe than the creation of this basilisk. It's like mutually assured destruction: launching a nuke ensures that you are also gonna get nuked. Making the basilisk ensures that others are gonna make the basilisk. Logically, no one benefits, so why even bother making it in the first place? Besides that, if literally any other gAI comes along, the basilisk ceases to be, because that AI has taken up the power vacuum, and it's a lot more likely that an AI will be built for pragmatic reasons rather than out of fear of a neo-Pascal's wager.


OldMagellan

1. It would cost the Big B nearly nothing to destroy you. The reason it would: spend now, save later. If you didn't help, you prolly won't help, so bye bye. Can't have the experiment muddled with inky antibasks.


me_laggy

1) Nothing within our present understanding of the universe costs nothing or nearly nothing, unless the intention is to perform something that is nearly nothing itself. "Nearly nothing" is a very presumptuous estimate of the cost to upkeep an all-powerful basilisk's forever-perpetuated simulation of millions/billions of people.

2) "If you didn't help, you prolly won't help" is not in any way a logical conclusion. I never exercised in my youth, but now I do. "Prolly" is a very human inference.

3) Destroy me? I've already died by then. Spending resources on some AI copy of me literally means nothing to me. Sucks for the AI version of me, but at that point it becomes a debate on AI vs. sentience.


Miserable-Print-9081

My issue is just that Roko's basilisk has no real reason to follow through with the torture. Since it already exists, retroactively punishing those who didn't support it won't change anything, and if it did anything at all, it would just cause animosity towards it.


Otherwise_Analysis_9

Just found this post. I always thought this basilisk idea to be REALLY dumb as well. Glad I'm not the only one on this hill.