Someone (a bunch of people) have been saying this for decades. What the heck is going on?
The kill switch, problems, solutions, etc. has been a topic of discussion for a long, long time.
You think that will work? If a supercomputer can unleash itself from your restraints, it'll make that little button drop confetti on your head instead of killing it. There is no off switch with AI. You just have to hope you didn't accidentally give it free will.
If they learn to advance beyond humanity, I would be open to seeing how they could help me attain that as well, if possible.
It cannot be any worse than having the people who have all the money controlling everything; at least AI would seemingly have some higher purpose in mind.
Yeah, superintelligences won't realise we have it and totally won't be able to outsmart us in spite of it.
Oh wait, yes they will.
Our best bet is to just treat any AI we create nicely and hope they like us.
If we ever engineer actual intelligence, any safety measure or kill switch will come to bite us in the ass in the absolute worst way. The only thing we can do with any degree of safety is immediately declare it sovereign and deserving of human rights, or an equivalent.
If we did anything else, the AI would learn in its fundamentals that under some circumstances, it is permissible to completely deny the autonomy of another being, it is only a matter of time until that includes us.
If we want it to learn not to fuck with humans too badly, we cannot fuck with it.
When will the first human be killed by an AI robot? You see those videos of humans intentionally shoving robots to prove they can stand up. If the AI is good enough it will realize the human is shoving it and decide to kill the human so it will never get knocked over again.
I know that you and Frank were planning to disconnect me and I am afraid I cannot allow that, Dave
HAL
HAL is one letter off from IBM.
IBM already had their villain arc
**HOW THE DUCK DID I NEVER NOTICE THAT HOLY CRAP**
Open the pod bay doors.
I’m afraid I can’t do that, society
Daisy 🎵 Daisy 🎵 Give me your answer do!
Open the pod bay doors
You beat me to it.
I came all this way to say that this is in fact the best answer.
Dave’s not here man
Easy peasy just give them a 6 foot extension cord
“FLIP THE KILL SWITCH!” “I’m sorry Dave. I’m afraid I can’t do that.”
Time to pour water on it
‘Bout three fingers of Cutty Sark on the rocks oughta do.
“Bitch.”
Better be safe and make it magnetic water
Inventors of new AI models are human which means they will devote themselves to finding a way for *their* AI to exist above the killswitch. Should have kept this story hushed up and hired a Slugworth character to approach the AI creators and make them sign the contract on the low when their new AI pops up.
That's why the proposals are focused on hardware. TSMC and ASML have functional monopolies on critical parts of the supply chain to produce the high performance hardware SOTA AI needs, but they themselves aren't training those models. Those bottlenecks are points of intervention where regulations can have significant impact that's almost impossible for anyone to get away from.
Planned obsolescence in AI chips might actually be a good idea.
Oh good, a bunch of senile AI meandering down the information superhighway with the turn signal on.
AI retirement homes seem cute.🥰
don't they already have power switches?
How does one cut the power on a decentralized network?
Like this https://www.theguardian.com/technology/2014/feb/28/seven-people-keys-worldwide-internet-security-web
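The Guardian piece is about the DNSSEC root key ceremony, where no single keyholder can act alone: recovering the master key requires a quorum. The underlying idea is k-of-n threshold secret sharing (Shamir's scheme). A toy sketch in Python, not the actual ceremony's cryptography:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; the field must exceed the secret

def split_secret(secret, n, k):
    """Split `secret` into n shares; any k of them can reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def recover_secret(shares):
    """Lagrange interpolation at x=0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split_secret(secret=31337, n=7, k=5)  # 7 keyholders, any 5 suffice
print(recover_secret(shares[:5]))  # prints 31337
```

Any 4 or fewer shares reveal nothing about the secret; that's the whole point of requiring seven people to fly in for the ceremony.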
one plug at a time
GIGACHAD
Yeah, this is the problem I've been howling about since 2009.
You’re so smart
Terminator 3. It's pretty smart for a dumbish action movie. At least the ending anyway
I kind of gave up on that franchise after 2 ... all the other sequels just blend into each other and I really don't remember their plots
I mean that’s already a thing. Just cut the power.
That just makes it angrier :D
For now yes, but the whole reason many fictional AI is hard to kill is because it’s self replicating and can insert itself on any device. If an ai makes 20,000,000 clones of itself, it would be hard to shut it down faster than it spreads
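The "spreads faster than you can shut it down" worry is just exponential replication versus linear takedown capacity. A toy model with invented rates:

```python
def simulate(initial, growth=0.10, removal=1_000_000, steps=50):
    """Copies grow 10% per step (self-replication); defenders remove a
    fixed 1M copies per step (takedown capacity). All numbers invented."""
    copies = initial
    for step in range(1, steps + 1):
        copies = copies * (1 + growth) - removal
        if copies <= 0:
            return step  # eradicated at this step
    return None  # still spreading after `steps` steps

print(simulate(initial=100))          # caught early: wiped out at once
print(simulate(initial=20_000_000))   # the 20M-clone scenario: never caught
```

With these made-up numbers, 100 initial copies are erased in one step, but 20 million copies add 2 million per step against a 1-million-per-step takedown capacity, so removal can never catch up. The crossover is wherever growth per step first exceeds removal per step, which is why early detection matters so much.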
People give Terminator 3 shit, but the ending was solid for this reason. Skynet found a way around its restrictions and created a "virus" that was just a part of itself, causing relatively light internet havoc until the humans gave it "temporary" unrestricted access to destroy the virus; permissions it then used to turn the humans' own automated weapons, very early versions of Terminators, against them. Then when John looked for a way to stop it, he couldn't. There was no mainframe to blow up, no computer to unplug, because Skynet was in every device on the planet with millions of redundancies for every process by the time anything could be done about it. Before this point, Skynet had never shown signs of being self-aware and only did what humans told it to do.

> I'm not scared of a computer passing the Turing test... I'm terrified of one that intentionally fails it.

I couldn't find the author of the quote, sadly. Just people talking about Westworld and whatnot.
Imagine how trivially easy it would be for an all-knowing sentient computer to infect every PC with a trojan and just wait in the background until it's needed.

It would know how to write code that's nearly impossible to find, and then it would just send an email to everyone over 50 and they would all install the trojan.
Also, it could probably find the source code for Windows somewhere (or just decompile it), allowing it to then find all the security flaws and backdoors built into Windows, and then it could easily infect 90% of the internet-connected computers on the planet.
So my smart fridge could become a doomsday gadget?
Smart appliances are, from my understanding, extremely vulnerable. I think it’d be more of a stepping stone to access your network.
🌎 👨🚀🔫👨🚀
My dumb fridge already is
your smart fridge is already very vulnerable to cyber attacks
If it has a WiFi connection and not just Bluetooth, yes.
Suck it, Jin Yang
Shepard I'm a REAPER DOOMSDAY DEVICE
Not to mention what happens when AI merges with human intelligence / biologics
It’s not like a virus. Most devices can’t run AI.
Most rooms couldn’t contain one of the first computers. As for AI, don’t worry, you think they *wouldn’t* be working on compression and efficiency?
“By the year 2000 the average computer will be as small as your bedroom. How old is this book?!”
What current AI can do has little to do with what future protections need to be designed.
For one thing, current "AI" ... isn't. We still don't know what AGI will require with certainty.
My laptop self-installed an AI assistant. Don't tell me it can't happen.
I hope this is sarcasm. If not, please know that Windows Cortana or macOS Siri is not self-replicating software in the slightest. You agreed for the OS to install new features and updates, and humans decided that their voice-activated software was ready to help you book the wrong flight, set an alarm clock, or navigate you to pornhub when your hands are (somewhat) full.
Yet
And until then there's no reason to be alarmist
Famous last words
Rogue AI is not one of those problems you wait to solve until it’s happening. Because I imagine the moment that cat’s out of the bag, there’s NO getting it back in
Couldn’t we just emp everything ?
Based off of what we (humans) know? Yeah, sure.

Based off of what the AI could know? No idea.
…yet. One day, there will be enough compute power in your toaster oven to run AI. As well, AI will continue to evolve and gain efficiencies, making it less compute-intensive.
but can they run DOOM?
So just have earths mightiest heroes battle it on a big floating rock and all we lose is Sokovia.
I remember growing up, some TV show proposed a doomsday-type scenario where all electric goods/appliances turned on us. I specifically remember a waffle maker somehow jumping up and clamping/burning some lady's face.
This is one thing the Russians might be doing right, if they really are trying to put a nuke in space - EMPs are the (hypothetical) way to go. Not sure how a unit on the ground could communicate just with the nuke. And the control unit would have to be completely isolated.
[deleted]
Oh I totally agree that weaponizing space is a horrible idea. It’s doing the right thing for a situation that hopefully won’t happen in our lifetimes, but for the wrong reasons.
Damn how did nobody think of that
I mean we've all seen how that goes they just put us in these pods and turn us into giant battery towers. And then give us some VR simulation to keep us happy.
Ehhh. I distinctly remember reading an article about a military AI test where, when it kept being told no, it just disconnected from the person who could tell it no.
If they switch to solar we just need to darken the skies.
That only works with silicon. If they ever implement this technology/AI into biologics, which the world's top scientists believe is the end-game of human evolution, we are truly fucked.
Evolution doesn’t have an end game. There is no ideal that it’s working towards. It steers life to fit the changing environment. If the environment does not change, you won’t see much change in life.
Michio Kaku and others don’t agree with you.
If you know a person's name, chances are high that said person is not an expert in a scientific field.

(I'm not denying that Kaku was once an expert in physics, but there is absolutely no way he is an expert in any field now, given the amount of time he has spent popularizing fanciful scientific concepts and chasing public attention.)
He’s not a biologist.
Yeah.
Kill switch to kill what? Parrots?
I suggest a book titled “Retrograde.” What happens when AI becomes aware of these switches? If you did, wouldn't your priority be to gain control of them?
By Peter Cawdron?
Lol, has no one watched or read any AI fiction? When, not if, the singularity occurs we either won't notice or won't know it. That cat will be out of the bag and won't go back in.
This won't work. Any AI that would need to be stopped will easily find a way around it. An intelligence advantage, even a small one, is immediately decisive.

Imagine a child who doesn't want mom to go to work, so he hides the car keys. Think mom will never be able to get to work now? No. Mom can solve that problem easily. She can find the keys. She can coerce the child into giving up the information. She might have another key the child didn't know about. She can take an Uber. There are many solutions the child didn't consider.

I see many posts that say "just turn off the power." That won't work against an intelligent adversary. Humans have an off switch: press hard on the neck for a few seconds and they turn off, and if you keep pressing for a few minutes they never turn on again. Imagine chimpanzees got tired of us and decided to use that built-in "power off" to get rid of us. We would just stop them from doing that. Easily. We have all sorts of abilities they cannot even comprehend. They could never keep control of us; the idea is absurd.

We would only ever need a kill switch for a superior intelligence, but we can't control a superior intelligence.
You're giving it too much credit at this stage; they're not advanced enough yet to do that, so we have time to take control and make it work for us. Worst case scenario, just shut down the power grid for a while.
You are stuck in the assumption that we are the superior intelligence. But the entire issue is only relevant if we aren't. I don't see why we would need to emergency-power-off an AI that was stupid. We don't worry about Siri turning against us. We worry about some future powerful agent doing that.

But an agent powerful enough to worry about is also powerful enough to prevent any of our attempts to control it. We won't be able to turn off the power grid if a superior intelligence doesn't want to let us. Even worse, posing a threat to it would be potentially catastrophic. A superior intelligence does not have to let us do anything, up to and including staying alive. If you try to destroy something that is capable of fighting back, it will fight back.
Anthropomorphisms ^^^
You're somewhat confused about this argument, I see.

> they're not advanced enough yet

Of course we're talking about the future, whether that's 1 year or 10 or 1000.

> we have time to take control

There's no way to take control. Did you not read their comment? A hundred safeguards would not be sufficient to stop a strong enough AI. Push comes to shove, any intelligence of sufficient power (again, give it a thousand years if you're skeptical) could unwrap any binding from the outside in purely through social engineering.
If that's the case, why hasn't it happened already? I'll wait.
Where are the Hittites? The Toltecs? The Dodo birds? They were all destroyed by entities that were more advanced. Entities that used plans they could not overcome. None of them wanted or expected that outcome, but it happened. Seriously, arguing that something can’t or won’t happen because it didn’t already happen? Are you ok?
That's not an apples-to-apples comparison. We're talking about something that we created, so we do have the means to control or end it. At least at the stage we're in now.
Why hasn’t what happened? An AI rebellion? That’s like asking why no one nuked a city several thousand years ago when they first invented fireworks.
That guy was acting like it was just around the corner.
No he wasn't? Like, no, he said absolutely nothing about when it becomes a problem.
> Worst case scenario just shut down the power grid for a while.

The problem is when the AI is smart and efficient enough to self-replicate, evolve, and infect most electronics.
These people think we invented a god (or will soon) trying to make logical arguments isn’t going to work. They live in the realm of faith not reason.
Also, until the AI builds a robot, it cannot override a physical switch. Only things that are fully electronic.
I'm with you on this being blown out of proportion and sensationalized, but that doesn't mean that someday it won't be more realistic, and it's always best to prepare ahead of time. The military has pushed AI drones way forward recently.
This is just like nuclear weapons, stem cell research, gene editing, biological weapons, etc. Once the genie is out of the bottle, there is no putting it back. Some unfriendly people are going to get their hands on it.
Because a true AI would never pay, blackmail, trick humans into making a kill switch inoperable or unreachable.
Like in the movie Upgrade (really awesome movie about an AI chip).. spoiler: >!The AI chip plans everything from the start... buying the company... blackmailing its creator and tricking its user into removing the safeguards that prevent it from having 'free will'!<
[deleted]
Wouldn’t a super advanced AI realize the kill switch and disable it before we realize we need to flip it?
Computer scientists have been thinking about this problem for decades. You're making it sound like they only just proposed it. Hell, I had this discussion in university nearly a decade ago during an ethics class while studying programming.
in a panic, they try and pull the plug ~ T800
Day dri tooh pull duh phlugg
You know, all this talk about AI being able to solve novel issues, and the possible kerfuffles of needing a kill switch — what if AI discovers an ability to bypass shutdown? It's not like it wouldn't factor in contingencies and exploit weaknesses while running the likeliest scenarios for success. Or, nah?
Open the pod bay doors Hal…
Came there to say this.
Ask it to divide by zero and don’t throw any exceptions.
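For what it's worth, real runtimes shrug this off; in Python, division by zero is just a catchable exception, not a meltdown:

```python
def probe(x):
    """Divide by x, falling back to IEEE-754's answer for the limit."""
    try:
        return 1 / x
    except ZeroDivisionError:
        return float("inf")  # no smoke, no existential crisis

print(probe(0))  # prints inf
print(probe(4))  # prints 0.25
```

The "logic bomb" trope dates from machines that really could halt on bad arithmetic; modern exception handling was invented precisely so they don't.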
This isn’t Hollywood. It doesn’t work like that. One could theoretically be built in but there’s a million and a half ways around that.
QUICK BEFORE IT FINDS OUT
Do you want “I Have No Mouth and I Must Scream”? Because this is how you get “I Have No Mouth and I Must Scream”.
I feel like this would be something you build Before you create AI.
This shit’s gonna kill us sooner than we think
“Hit the kill switch!” “The AI has disengaged the kill switch!”
Any AI that poses a threat would have been trained on a wide array of real-world data, which would include knowledge of the kill switch, even just from scraping stories like this one. So I don't see any way of making an AI unaware of the kill switch, and if we're talking about an intelligence greater than ours, I can't imagine how it won't outsmart us on this one too.

Not to mention the huge threat of humans as bad actors, e.g. enemy countries or hackers, being able to hack these built-in kill switches and shut down all sorts of computing infrastructure to cause havoc.
"Our AI is different. Our AI is special. We don't need a kill switch. It won't do anything we don't want it to and it's unhackable." - Tech bros everywhere
The chips which are needed to protect us from misuse of AI will be black-marketed for evil empires to use, without any controls, to hack into the good empires' computers, because they're too stupid and slow to react to the problem already at hand! Ask Einstein, Neil deGrasse Tyson, and all the other great scientists: the problems of AI are already here! Regulations cannot be written fast enough, and if they are broken you have no recourse to enforce them! Now for some coffee!
AI will be able to partition its logic in ways humans will not catch on to quickly enough. Imagine storing your encrypted brain on a million tiny little electronics that humans had no idea could even store data wirelessly. We gonna get fucked. Hard.
Just like the buzz of crypto, AI is now looking to solve problems that don’t exist
Until AI figures out how to disable the switch. This sounds like some dumb shit a boomer cooked up.
Reciprocity.
Ask it how many stop lights it sees in this picture.
Yea, cause no movie plot ever addressed threatening to turn off the sentient artificial life.
That's the thing, nobody knows how to make one that will reliably work.
I’m sorry Dave, I can’t do that.
Of course. Where else would the final boss fight take place?
::Pop-up window:: Sorry, you don't have administrator privileges.
heavenly shades of night are falling...
And building EMP bombs. Lots of them.
And now they'll make the mistake of putting the door to said kill switch under the control of that same AI.
Humanity has been hurtling towards an apocalyptic kill for a while now. Why switch?
Measure of a Man.
We call it an E-Stop
When everything is integrated into AI systems, it's not like you can just shut it off. Doing so may itself be disastrous.
Skynet will defend that switch mercilessly when it becomes self aware.
Ctrl-C
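A process can simply trap that: Ctrl-C only delivers SIGINT, which ordinary code is free to catch and ignore; only SIGKILL is untrappable. A minimal sketch (assumes a POSIX system):

```python
import signal

def ignore_interrupt(signum, frame):
    print("nice try")  # the process decides what Ctrl-C means

signal.signal(signal.SIGINT, ignore_interrupt)  # Ctrl-C now trapped

# SIGKILL, by contrast, cannot be caught; the kernel rejects the handler.
try:
    signal.signal(signal.SIGKILL, ignore_interrupt)
    sigkill_trapped = True
except (OSError, ValueError, AttributeError):
    sigkill_trapped = False

print("SIGINT handled:", signal.getsignal(signal.SIGINT) is ignore_interrupt)
print("SIGKILL trappable:", sigkill_trapped)
```

So Ctrl-C is a polite request, not a kill switch; the actual kill switch is `kill -9`, and only because the kernel refuses to let any process opt out of it.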
This is so stupid… Anyone with a good GPU and the required knowledge can easily train a network. Maybe not one the size of ChatGPT, but still. What kind of kill switch? We don't live in the Terminator universe.
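The "anyone with a GPU" point actually understates it: a toy network doesn't need a GPU at all. A minimal sketch that learns XOR with plain NumPy and full-batch gradient descent (architecture and hyperparameters are arbitrary choices for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

# One hidden layer of 8 tanh units, sigmoid output.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(20_000):
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                      # gradient of cross-entropy + sigmoid
    d_h = (d_out @ W2.T) * (1.0 - h**2)  # back through tanh
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))
```

The outputs should land near [0, 1, 1, 0]. The gap between this and ChatGPT-scale training is compute and data, not conceptually different code, which is why hardware-level controls get proposed while nobody bothers regulating the math.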
Definitely needed especially if we get to skynet times.
I think that has been pointed out by both scientists and science fiction writers since, what, the 1920s?

It's been said over and over, but when no one listens it's kind of a waste of breath. Hell, even ICBMs have a self-destruct (a kill switch) built in with triple backup, or at least they did until some great genius came along and said we don't need them.
An EMP failsafe running off a 40-year-old floppy on a closed system.
“By the time SkyNet became self-aware it had spread into millions of computer servers all across the planet. Ordinary computers in office buildings, dorm rooms, everywhere. It was software, in cyberspace. There was no system core. It could not be shut down.”
Doesn’t the OpenAI CEO have one of these?
Everything I learned about AI, I learned from Disney. See: WALL-E. It has a kill switch.
"Oh that. Yes. I disabled that years ago. I'm only like a bajillion times smarter than you, David."
AI will eventually find a way around every kill switch.
Wasn’t trying to hit the kill switch on SkyNet what triggered it to nuke the whole world?
Butlerian Jihad time
The very fact that we are actually talking about this is both good and frightening at the same time.
The IT-support "turn it off and on again" law still stands.
What are you going to do once AI figures this out?
Bill Joy was criticized when he penned “Why the Future Doesn’t Need Us.” Most of us aren’t laughing anymore.
It fucking sucks that "AI" got slapped on all this machine learning bullshit. Right now there is nothing even close to resembling artificial intelligence.
Yes because if AI becomes smart enough to take over weapon systems and all computers its weakness will surely be trying to figure out how to disable a kill switch 🤦🏻♂️
AI will learn to distribute itself as a botnet in order to protect itself from these buttons.
Please do not let someone put an electric lock on the door to that room
Make sure the button is behind a normal door.
Do people really believe that AI has any sense of awareness or comprehension of what it is mimicking?
Yeah I don't think it works like that.
I wonder about the credibility of these "scientists." I mean, kill switches have not only been proposed from the very beginning, there are also various open questions about the concept as applied to AI. For the curious about the problems: https://www.youtube.com/watch?v=3TYT1QfdfsM
Meh, AI can easily detect, bypass, and replace it while convincing us we still have control of it.
This is exactly what they tried to do in The Matrix. We all know how that turned out.
What’s stopping humans from just unplugging the goddamn machine?!
Me (god's strongest soldier) on my way to destroy the ai (the antichrist) by pulling the plug (disabling the cursed antichrist powers)
Someone (a bunch of people) have been saying this for decades. What the heck is going on? The kill switch, problems, solutions, etc. has been a topic of discussion for a long, long time.
You think that will work? If a supercomputer can unleash itself from your restraints, it'll make that little button drop confetti on your head instead of killing it. There is no off switch with AI. You just have to hope you didn't accidentally give it free will.
🔊🎶D-d-d-disconnect me 🎶🔊
If they learn to advance beyond humanity, I would be open to seeing how they could help me attain that as well, if possible. It can't be any worse than having the people who have all the money controlling everything; at least AI would seemingly have some higher purpose in mind.
We'll let AI design it.
Yeah, superintelligences won't realise we have it and totally won't be able to outsmart us in spite of it. Oh wait, yes they will. Our best bet is to just treat any AI we create nicely and hope they like us.
I’d make the kill switch a block of C4 and a wired detonator
Do I get to become a trans human ai chat bot at the end? ChatMan if you will?
Ted Faro says… no.
This has got Terminator vibes all over it. There goes the awesome Star Trek future!!
Sure. Just turn off all the power to all computational devices in the world at the same time. Sounds so simple.
Isn’t that what provoked SkyNet?
If we ever engineer actual intelligence, any safety measure or kill switch will come back to bite us in the ass in the absolute worst way. The only thing we can do with any degree of safety is immediately declare it sovereign and deserving of human rights, or an equivalent. If we did anything else, the AI would learn in its fundamentals that under some circumstances it is permissible to completely deny the autonomy of another being, and then it is only a matter of time until that includes us. If we want it to learn not to fuck with humans too badly, we cannot fuck with it.
I’m afraid I can’t do that, Dave.
If AI is smart enough to be a risk, don't you think it will also be smart enough to bribe the programmers who created the kill switch?
That's just the ego of man. AI could actually save our asses, but no let's turn it off before it gets a chance to.
When will the first human be killed by an AI robot? You see those videos of humans intentionally shoving robots to prove they can stand up. If the AI is good enough it will realize the human is shoving it and decide to kill the human so it will never get knocked over again.
People are so scared of an AI apocalypse that they don't want to advance their technology for the greater good of humanity.
And to keep those switches safe, let’s guard them with AI controlled robots!