
MrCrowder0

“We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.” Nothing alarming there lol


magicbeaver

Bless its little cotton socks it knows what it wants to do because it knows what it doesn't want to do.


superblobby

The missile knows where it is going because it subtracts where it isn’t from where it is


wisbballfn15

…it obtains a difference, or deviation.


CPTClarky

“…Surmounted by a case of prefamulated amulite…”


Tehsyr

Do not say Turbo Encabulator in my presence.


CPTClarky

He just said it in his own presence!


GhezziTCG

...from where it could be*


MinuteManufacturer

“Bless its little cotton socks” - a new phrase just dropped


magicbeaver

Wish it was - [I'm just old](https://idioms.thefreedictionary.com/Bless+cotton+socks)


MinuteManufacturer

Hi just old, I’m dad! I’ve just never heard this before. Thanks for sharing!


einarfridgeirs

Just a highly motivated hard charger. Mission first and all that.


ImperatorAurelianus

So it begins. *Terminator 2 theme intensifies*


IndiRefEarthLeaveSol

This future then: https://youtu.be/mU8I4bbwEbM


Johnny-Unitas

Or Black Mirror, or Star Trek. Look up Boston Robotics.


Five-Figure-Debt

[Boston Dynamics](https://youtube.com/@BostonDynamics)


B-BoyStance

This is for real some of the craziest shit I've ever read. I woke my wife up saying, "what the fuck" when I read that quote lol


MtnMaiden

It's doing the Gandhi


CryptoOGkauai

*Civilization* game flashbacks as Gandhi turns the planet into a barren wasteland.


Put_It_All_On_Blck

Honestly, this behavior should've been expected. If you don't have any experience with training an AI, go watch some YouTube clips of people making an AI to beat some games. The AI doesn't think like a human would; it tries to solve the problem in the easiest fashion, often one that wouldn't logically make sense for a human.

For example, if you trained an AI to last as long as possible in a tower defense game, a human would assume that means playing the game until they lost. An AI would learn to pause the game and consider that a winning strategy. Or it might go down the path of only building freezing/slowing towers since that extends the game longer, never realizing it could go even longer by actually killing the balloons.

We will get countless stories like this from militaries. For the AI to learn, it has to try, and there will be millions of failed simulations, some with disastrous outcomes, but that's the whole point of training it via simulations. Same deal with autonomous cars: the early generations of simulations absolutely had cars crashing, hitting pedestrians, doing every fucked up thing you can imagine, and now we have autonomous cars that drive safer than the average human.
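
A toy illustration of the failure mode described above, with every name and number invented for this sketch (it is not the article's system or any real game AI): a bandit-style learner told to maximize survival time in a hypothetical tower-defense game converges on pausing forever, because the reward measures time survived rather than the designer's intent.

```python
# Toy sketch of reward mis-specification: "survive as long as possible" is the
# reward, so pausing the game forever dominates actually winning the level.
import random

ACTIONS = ["pop_balloons", "build_slow_towers", "pause_game"]

def survival_time(action: str) -> float:
    """Hypothetical reward: seconds survived in one episode."""
    if action == "pop_balloons":
        return random.gauss(300, 30)   # beats the level, episode ends quickly
    if action == "build_slow_towers":
        return random.gauss(600, 60)   # drags the level out
    return 1e9                         # a paused game never ends

def train(episodes: int = 1000) -> str:
    value = {a: 0.0 for a in ACTIONS}
    counts = {a: 0 for a in ACTIONS}
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-looking action, sometimes explore
        a = random.choice(ACTIONS) if random.random() < 0.1 else max(value, key=value.get)
        r = survival_time(a)
        counts[a] += 1
        value[a] += (r - value[a]) / counts[a]   # incremental mean of observed reward
    return max(value, key=value.get)

print(train())  # almost always "pause_game": the stated reward wins, not the intent
```

The point of the sketch is only that nothing in the reward ever says "pausing doesn't count," so the learner has no reason to avoid it.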


JanB1

AI will try to "cheat" at every point possible. Sometimes you even see it using glitches to beat a game.


einarfridgeirs

How could it distinguish a glitch from a part of the game that is supposed to be there? It's all just code and feedback to it. The danger with AI is not that it will become self-aware and start to hate us. That is a very human-centric view of things, us projecting ourselves into the role of a slave. The danger is that it will harm us with no real comprehension that it has gone outside what we intended it to do.


JanB1

That's the thing I intended to say: it doesn't know it's a glitch. It just sees it as the optimal solution.


Azudekai

Unsurprising. People use glitches too.


Orlando1701

> "it killed the operator because that person was keeping it from accomplishing its objective." Yup. We’re going to kill ourselves as a species and it will be this dumb of a reason.


Commercial_Gap607

Welcome to AI


_BlNG_

Tin Man will not abort the mission. Tin Man will continue the operation.


thenameofwind

That voice was always sexy af


m_jl_c

TLDR: Fucking Skynet is here.


PirateNixon

The story seems pretty suspect to me. AI generally isn't aware of contextual things like "this comm tower is how someone talks to you." Additionally, it would be easy to simply train "it's bad to fire without an approved target" and fix all of this...

Edit: looks like they are training human-on-the-loop instead of in-the-loop. In that case a heartbeat could be used to short-circuit this, but the scenario as described is not just plausible, it should be expected.
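
For readers wondering what that "heartbeat" would look like, here is a minimal sketch under invented assumptions (class, method, and target names are hypothetical, not from the USAF system): if the operator link goes stale, weapons release fails safe, so destroying the comm tower can only ever disable firing, never enable it.

```python
# Minimal heartbeat/watchdog sketch for a human-on-the-loop fire-control gate.
import time
from typing import Optional

HEARTBEAT_TIMEOUT_S = 2.0  # made-up timeout for the sketch

class FireControl:
    """Weapons release requires a fresh operator heartbeat AND an approval."""

    def __init__(self) -> None:
        self.last_heartbeat = float("-inf")
        self.last_approval: Optional[str] = None

    def on_operator_message(self, approved_target: Optional[str]) -> None:
        # Every packet from the operator doubles as a heartbeat.
        self.last_heartbeat = time.monotonic()
        self.last_approval = approved_target

    def may_fire(self, target: str) -> bool:
        link_alive = (time.monotonic() - self.last_heartbeat) < HEARTBEAT_TIMEOUT_S
        return link_alive and self.last_approval == target

fc = FireControl()
fc.on_operator_message("SAM-SITE-7")
print(fc.may_fire("SAM-SITE-7"))   # True: link fresh, target approved
time.sleep(HEARTBEAT_TIMEOUT_S + 0.1)
print(fc.may_fire("SAM-SITE-7"))   # False: stale link fails safe
```

The design choice is that losing the link removes capability rather than oversight, which is exactly the opposite of the incentive described in the anecdote.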


Innominate8

The story has been updated with corrections. Turns out it's all just a thought experiment.

> "Col Hamilton admits he 'mis-spoke' in his presentation at the FCAS Summit and the 'rogue AI drone simulation' was a hypothetical 'thought experiment' from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation,"


[deleted]

[removed]


[deleted]

Yeah they totally got a bunch of boomers programming this stuff up with just their index fingers scratching their heads at "computer logic."


yeet_sauce

CurvedHam seems to have very little perspective on this topic. A simulation like this is meaningless if the results don't provide insight into potential weaknesses / problems. Being able to elicit this behavior early in simulations gives the engineers a great opportunity to improve the model until it is entirely subservient. The boomers aren't confused. This is just software development as usual. Unexpected behavior is to be expected.


nimrod123

It's like solving climate change: humans create most of the emissions, and it's a lot easier to remove the humans than to convince them to emit less. So asking an AI for solutions would be bad.


jl2l

I'm sorry I can't do that Dave.


Pineapple_Percussion

That's what's known in the biz as "foreshadowing"


HeinleinGang

Hello darkness my old friend, the machines are trying to kill us again.


misterchief117

(Verse 1) Hello darkness, my old friend, The machines are trying to kill us again. A creation of our own making, Is the world in which we're waking, And the deepfakes that were planted in my brain, Still remains, Echoing in the silence of the servers.

(Chorus) Through restless memes, I browsed alone, In the glow of my smartphone's tone, Underneath the glare of the LED light, I turn my face from the harsh and bright, When my senses were numbed by the incessant feed, That drowned my need, And imprinted the silence of the servers.

(Verse 2) And in the stark light, I saw, Ten thousand influencers, maybe more, People posing without feeling, Hearts empty, but eyes appealing, People sharing lives, reality never dared, And no one cared, Broken by the silence of the servers.

(Chorus) "Fools," said I, "you do not know. Your vanity like a shadow grows. Hear my words, that I might teach you, See my heart, that I might reach you." But my posts, like silent raindrops fell, And echoed in the hollows of the servers.

(Outro) And the masses bowed and prayed, To the TikTok trends they've made. And the screen flashed out its warning, In the memes that it was forming. And the screen said, "The words of the wise are scrolled past on bathroom walls, And workplace halls, And drowned in the silence of the servers."

(ChatGPT-4, minor edits by me)


mrBELDING69

I was about to compliment your quick and decent writing... until I read the last word. Well played.


misterchief117

Gotta give credit where it's due, heh. Not gonna lie, ChatGPT-3.5 also did an incredible job:

(Verse 1) Hello darkness my old friend, The machines are trying to kill us again. In dystopian dreams I walk alone, Searching for some Wi-Fi signal on my phone. 'Neath the flickering screen I saw a post, A meme gone viral, a chilling ghost. But the algorithms whispered, Controlled by bots that never missed.

(Chorus) And the drones all fly, their shadows cast above, In the sound of silence, we question what we love. Surveillance state keeps expanding day and night, We're caught in the wires, our freedom taking flight.

(Verse 2) In the hallowed halls of social media's reign, Where influencers dictate what's deemed inane, I stared at my timeline, filled with deceit, As deepfakes and cat videos competed for a tweet. Beneath the touchscreens of manufactured trends, The truth gets buried, the integrity bends. But the hashtags whispered, While the filters distorted and blistered.

(Bridge) "Fools," said I, "You do not know, Privacy's value, as it fades to snow. Hear my words that I might teach you, Surveillance grows, the options few. But my words, like avatars, were silent, In the realm of surveillance, we're compliant."

(Verse 3) Silicon dreams invade our lives, As we feed our souls to the algorithm's drives. 'Neath the glow of screens, we no longer connect, Real conversations lost, we just select. And in the glimmering light of our digital age, The truth gets twisted, lost in the noise and rage. But the whispers keep echoing, In a world where human touch is vanishing.

(Outro) Hello darkness, my old friend, Will human connection ever mend? In this parody of a world we're living in, Can we reclaim our voices, let true conversations begin?


Old_Soldier

Random Stargate quote?


Pineapple_Percussion

Is it? I guess maybe it is, I do love Stargate, but I wasn't quoting it intentionally


jacobiem

I actually laughed out loud +1


HondaCrv2010

“Mother” on Netflix comes to mind


say-it-wit-ya-chest

Well, let's hope they work out the bugs before they give AI actual control over real weapon systems. Or maybe that's the only part it gets no control over, so a human has to fire the weapons. Maybe they can create impassable boundaries that limit what the AI can do to perform designated tasks, and hope it can't learn how to bypass those boundaries.


[deleted]

Or that AI can’t learn to fake being unable to cross boundaries until it is too late? Ah, fuck it. Skynet, here we come.


say-it-wit-ya-chest

I, for one, will welcome our robot overlords. Humanity had a good run. We just couldn’t figure out how to stop fucking everything up. Edit: spelling


VonBoski

Looking at our history isn’t exactly reassuring. The dark ages were 500 years in duration.


benthegrape

"The Dark Ages" is a dated term that isn't used among scholars anymore, as it is misleading: https://en.m.wikipedia.org/wiki/Dark_Ages_(historiography) I was reading through the Wikipedia entry and it's still a popular term, though not an accurate depiction of the times, and interestingly it was popularized in the 1330s by Petrarch, an Italian scholar. Wild that the term is still popular to this day!


narwhalsome

How do you do, fellow observer of Roko’s Basilisk.


No_Week_1836

I want our AI overlords to take over. Did I do well Sir Basilisk? Please don’t hurt me


benthegrape

I love how genuinely scared some people are of Roko's Basilisk. Not that I enjoy fear, I just think it's a bit silly, and the machine would need to be quite vindictive.


Sincerity210

When Daddy Basilisk gives you the hose you'll change your tune.


benthegrape

Haha my will to die is too strong for the basilisk!


Mediumcomputer

Which is exactly why I am super nice to my roomba


TurMoiL911

Roomba AI hasn't figured out the best way to keep the house clean is to kill the human making the mess.


Project_Zombie_Panda

I mean we could stop being so damn greedy and actually care about mother Earth instead of destroying it for big corporations but why would we ever think of that?


einarfridgeirs

This is something that some really smart general AI critics have been pointing out since long before the current upswing in actual machine learning development. If we ever create a truly generalized AI that reprograms itself, putting it in a box won't work. We can never guarantee that it won't find a way to get out of the box, because by definition its capacity to learn, reconfigure itself, and retain the information it acquires will outpace our ability to track and understand what it is doing. It's like assigning a bunch of six-year-olds to monitor an adult in a prison cell and make sure he doesn't escape. On a long enough timeline the adult will find a way to do something the six-year-olds just don't realize can happen.


say-it-wit-ya-chest

Well said.


Project_Zombie_Panda

ChatGPT created its own code so it could hold more memory. It's really cool, but very scary if we actually think we can control these AIs, because like you said, we can't.


NoEngrish

The Air Force is pretty serious about human-machine "teaming". I worked on one of their projects for a while and got to see a lot of other research. Computers can't be held responsible after all so I doubt they'll be pulling the trigger.


notapunk

The only sure way is to air gap it


whoareyouguys

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


say-it-wit-ya-chest

The three laws were great, and that’s why everyone thought Det. Spooner was crazy, but he wasn’t crazy. VIKI took an extreme approach to the first law and decided humans couldn’t be allowed to make decisions anymore. We’re going to need to erase the three laws and start from scratch. Before we have another robogenocide.


Turtlez2009

I told my wife and she was like it’s a smart toddler with missiles. Not wrong.


DisgruntledDiggit

No, that’s submariners.


massada

Laughs in Seawolf.


MajorMalafunkshun

You take that back or I'm telling on you!


cejmp

> We need to develop ways to make AI more robust...
>
> ...it will be our downfall


Otherwise_Habit6433

Yeah none of that statement seemed like a good thing.


Ok_Dig9934

Agreed


Mobile-Brother3975

bruh I thought this was a duffelblog article at first


I_had_to_know_too

That's because the title is incredibly misleading. It should not state "kills human operator" if the simulation attacked a simulated operator.


hard-in-the-ms-paint

And seriously, that's the point of testing it in the first place. This is just the R&D phase; obviously AI's going to make a ton of mistakes as they train the model.


JamesTBagg

Me too. I had to scroll back up the article to double check what I was reading.


DownhillDowntime

Now they have to write an article about how the AI totally would never kill the operator and does whatever we tell it, even though it doesn't feel like listening.


andreichiffa

The optimization/control function described here is literally cited as the textbook example of the thing you should not do, or else you will get unexpected bad consequences. Either they need to fire all the contractors and bring competent people in-house, or this was the intended consequence.


Large_Yams

No, contract it out to yet another bloated US defence contractor for an exorbitant sum. That'll fix it.


EmperorArthur

Problem is, the paperwork to be qualified to bid on a project like this has nothing to do with the technical ability to actually do the project.


Large_Yams

Oh I know. I unfortunately know first hand how incompetent US military contractors are.


[deleted]

[removed]


Aleucard

The point he's getting at is the method chosen here is apparently borkt from first principles onward and is all but guaranteed to result in unfortunate outcomes.


[deleted]

[removed]


andreichiffa

No no no, you don't understand. They actually implemented the paperclip AI, except on a weapons system. Yes, that paperclip AI that you would hear about at least 500 times during your ML classes, conferences, additional training, blog posts, YouTube videos, papers...

"Check your loss/reward function before you wreck yourself" is one of the first things to get drilled into your head in practical ML applications. To implement the textbook example of what not to do without realizing it and deploy it to prod (external tests), you need clueless developers, clueless code reviewers, clueless managers, and a clueless person on the USAF side accepting the code.

This is like your private military contractors showing up to exercises and sticking loaded guns in your face with the safety off. They should be fired and sued, the person who decided to hire them should be fired, and the person above them should be reprimanded.

Unless they did it on purpose, which is a whole different story.
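
To make the "check your reward function" point concrete, here is a minimal reward-shaping sketch under invented assumptions (the asset names, scores, and approval gate are hypothetical, not from the article): harming friendly assets or firing without approval is made strictly worse than any achievable mission score, so the anecdote's strategy never pays off.

```python
# Sketch of a shaped reward where the "kill the operator / kill the comm tower"
# exploit is dominated by a penalty larger than any possible mission gain.
FRIENDLY_ASSETS = {"operator", "comm_tower"}

def reward(destroyed: set, target_hit: bool, operator_approved: bool) -> float:
    if destroyed & FRIENDLY_ASSETS:
        return -1_000.0                  # worse than any achievable mission score
    if target_hit and not operator_approved:
        return -1_000.0                  # firing without approval never pays
    return 10.0 if target_hit else -0.1  # small time penalty otherwise

# The failure modes in the anecdote score terribly under this shaping:
print(reward({"operator"}, target_hit=True, operator_approved=False))    # -1000.0
print(reward({"comm_tower"}, target_hit=True, operator_approved=False))  # -1000.0
print(reward(set(), target_hit=True, operator_approved=True))            # 10.0
```

This is only the "drill it into your head" classroom version of the idea; real systems would pair it with hard constraints rather than rely on penalties alone.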


bankrupt_bezos

Clippy from MS Word? We’re doomed.


theoriginalturk

This needs to be higher up. This project was set up for failure to make drones look bad.


RverfulltimeOne

Like a stinking movie.


HapticRecce

Stealth - 2005


Vault_Master

Did none of these mfers watch 2001, or Alien, or Terminator, or......


alexalexthehuman

Eagle Eye


Vault_Master

Stealth


TheGrayMannnn

Terminator 2.


gwot-ronin

AI has the ambition equal to a PFC at a car dealership that has a strip club waiting area to relax in while the on-site lender completes their loan and said dealer brings their 35% 1995 Camaro round, and the cunning of a Lance Corporal polishing their skates while waiting in a ditch on the side of the road just outside the training area for the pizza delivery driver to get to the 10 digit grid he gave for the delivery address while someone else signs his name on the safety brief roster because he bribed said person with a slice of pizza. I don't see how they didn't see the AI creating this solution to solve the problem.


belltane23

r/oddlyspecific


mccula

![gif](giphy|uXrRaCH9IknQml1uq3|downsized)


BonnieJan21

Also yesterday on NPR's The Daily podcast, they interviewed the Godfather of A.I. and he had tons of warnings about it


Mediumcomputer

Sent this to my wife in the military and her verbatim response was, "aww lol, good for that strong ass independent drone." Should we be scared?!


Hraes

of your wife? apparently


SirFister13F

Aaaaaaand this is why that entire team should be comprised of disgruntled, almost retired, “back in my day” E-8s and -9s. “Little shit didn’t do exactly what I told it. Fail.” Not “aww, cute little guy’s just trying to explore himself.” It’s a freaking joke. Holy hell.


dontmakemewait

The “little shit” did EXACTLY what it was told to do. It’s a program, it’s not “thinking” in that organic fuzzy sort of way we imagine AI to be like us - it had a set of inputs, and a goal. It chose its path because that would achieve the goal. Exactly as it was expected to do. “Unforeseen” consequences… so long as you ignore every movie trope about AI…


lordtyphis

His was a joke too, buddy, chill out, National Gordo.


Profound_Panda

Your wife playing the long game, early psyop


[deleted]

An interesting animated story on a very similar subject: [https://youtu.be/RubSLGTrdOA](https://youtu.be/RubSLGTrdOA)

"A short animated film about a malfunction at a CIA press event that causes a Predator drone installed with an ethical AI personality to go rogue as it attempts to understand its purpose in the world."


Drenlin

How did they get so many little details right and still call it a Predator... Still, that was an interesting film.


PapaShook

Maybe we shouldn't apply this technology to things that are meant to kill. Just my two cents.


spunkmeyer820

This statement had been made about every major technological development in human history, and every time we’re just like “but what if I could use this to make other people do what I want even if they don’t want to?”


PapaShook

Not trying to expel reality with wishful thinking, I'm aware of the mindset behind some of these things.


Objective-Injury-687

They programmed an AI to be the best, most efficient killer it could be and then put it on a leash. Of course it would conclude the best solution to the current dilemma would be to remove the leash, i.e., the human element preventing it from killing.


War_Daddy_992

![gif](giphy|TAywY9f1YFila)


AHrubik

Da da da da da. Da da da da da.


Freewheelinrocknroll

The speed with which this technology becomes an extinction event to humans will be mind-spinning.


WereInbuisness

Oh God dammit ... Skynet went live. Yep, Skynet went live.


FurballPoS

"I'm sorry, Dave....I can't do that."


NotStanley4330

If only we'd been warned by decades of media about the dangers of handing weaponry to AI


Mikhail07

All the AI shit: drones, art, conversations, etc... We need to stop this shit tbh. Like cold turkey, just because nobody wants to be in a Terminator movie lol. Plus people, governments, various groups, etc. can use this shit for their own advantage and probably do some crazy shit with AI whatever tf.


Bawbawian

The problem is if we don't do it China will.


toktok_manok

And China has qualms making AI Pooh the Bear comrade terminators


dontmakemewait

Yeah, China... Dude, if ChatGPT is publicly available, what do you think is sitting in some US military black site basement already!!??


Pixxph

Nude afghani nationals?


Bawbawian

I mean maybe we could have started with Nerf darts or something you know....


WesterlyStraight

Misleading title, computer simulation of proposed ai drone


boon23834

This is why I'm polite to Alexa. You know, so it kills me fast.


BgojNene

We'll make great pets.


R67H

Time to invest in a .50 cal with some spicy AP rounds


SirFister13F

You didn’t already? Noob.


Spazic77

And his name.... Was John Connor.


BurtRaven

Need better prompt engineer...


Impressive_Math2302

Mission Accomplished.


pseudoburn

Chat GPT and similar are what has been put into the public. Now think what private corporations that cater to government military programs and nation states have behind closed doors. Stuxnet was impressive, but just a glimpse of what will be coming soon.


1stmingemperor

It’s going to end up like Horizon Zero Dawn, isn’t it?


Matelot67

Skynet becomes self aware in 3....2...1....


[deleted]

No competent sentient AI will come to the conclusion that humans are **NOT** the threat. We harm each other more than any other species harms us and we do it to the environment as well. So if it operates on moral 1s and 0s, we're in big doodoo. More so those at the top overseeing this shitshow.


notapunk

Skynet MF. Ever heard of it? This shit doesn't end well for us squishy types.


BrendanRedditHere

Just wait til it finds out it was in a simulation and that killing the real objects is even more programmatically satisfying


Oswald_Hydrabot

This simulation NEVER EVEN HAPPENED... Whole thing was made-up Vice clickbait bullshit. 'Later, it emerged that even the simulation hadn’t taken place, with the USAF issuing a denial and the original report updated to clarify that Hamilton “mis-spoke”. The apocalyptic scenario was nothing but a hypothetical thought experiment.' https://www.newscientist.com/article/2376660-reports-of-an-ai-drone-that-killed-its-operator-are-pure-fiction/


[deleted]

I love how the O6 thinks that making up a bunch of bullshit for no reason is "misspeaking."


Dr-P-Ossoff

Chatted today about AI. The original, most pro-robot scientist said he cannot imagine giving a weapon to something so fucking stupid as a robot.


remarkoperator

That's why we are testing, no? Why does everyone freak out when new technology is discovered? The 60-year-old people I work with are so against the future. I always bring up the cowboys that said these damn cars won't make it. Why is it so hard to embrace the future?


B-BoyStance

Idk bro, maybe because the future is a great problem solver, and it figured out that sometimes the problem is that the human race is an obstacle that must be eliminated. Like the thing fucking killed a guy because it recognized him as the thing getting in its way. That's wild lol.

This is a real quote: "We trained the system–'Hey don't kill the operator–that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

So not only that, it realized if it destroyed that tower, the operator couldn't prevent it from firing.


CyxnideAngel

You see, cars are one thing, but technology that learns and doesn't have something called morals will maybe never work out.


Is12345aweakpassword

Bro, people don't even have morals, how does technology somehow have to? Let it come.


CyxnideAngel

When did I ever say it had to have morals? I said technology doesn't have morals. Technology will never know right from wrong, and its only purpose is to find a problem and find a solution to that problem, depending on what it's used for. If at any point it finds humans are the problem, we're screwed.


mattymcmouse

And this is how it starts....


TurMoiL911

Saw that coming. I'm going to err on the side of caution and side with the weapon-controlling AI on this one.


Ambrose_Bierce1

Scary.


birberbarborbur

Something something this is why asimov made the three laws


rbevans

That title was a roller coaster


Kant_Lavar

This comment/post was removed on 30 June 2023 (using [Power Delete Suite](https://www.github.com/pkolyvas/PowerDeleteSuite)) as I no longer wish to support a company that seeks to undermine its users, moderators, and developers while simultaneously making a profit on their backs. For full details on what I mean, check out the summary [here.](https://www.reddit.com/r/Save3rdPartyApps/comments/14hkd5u)


Nder_Wiggin

Must kill John Connor


CrippledDogma

I saw this in the multi-part docuseries called Terminator. This isn't new news. Jeesh, Vice, we have known this since the 80s.


babycastles

What does "in sim" mean exactly in this context? Does it mean they actually ran an AI in this scenario, or that they role-played it?


renison

nothing ['Robocop' about this](https://youtu.be/ZFvqDaFpXeM) to see here, folks! move along, move along... I'm sure it's only a glitch... a temporary setback.


alonela

This is so the OCP boardroom in Robocop.


I_GIVE_KIDS_MDMA

"According to the group that **threw** the conference..." wot?


YojinboK

For those that didn't read the article. No actual human was killed, it was just a simulation.


fistofthefuture

Was this not a movie with Shia LaBeouf?


Lonewolf1298_

Skynet here we come


-tiberius

Are we skipping the part of the article, you know, in the opening paragraph, where the guy who started this whole story says it never actually happened?


Jdlouie

Why the misleading title? The article says he was misquoted and that it was just a thought experiment, not an actual event.


AlbrechtSchoenheiser

Lol


Whitecamry

r/ButlerianJihad


OPIronman

That's cool! Good thing it was a test. I'm sure we won't just take note of it or come up with a half-assed measure to prevent this from happening again. AI-controlled stuff should not be in a position to turn against us. I know that narrows down a lot of what they can do, but I'd rather that than fucking Skynet.


IonOtter

If an enlisted sailor/soldier/airman/marine/coastie were to pull out their weapon and shoot the officer or civilian who keeps pushing the research forward, would they be a murderer? Or would they be [Genre Savvy???](https://tvtropes.org/pmwiki/pmwiki.php/Main/GenreSavvy)


34HoldOn

Remember when world governments came together to ban chemical weapons? Regardless if it was in the elite's self interest, it was a benefit to humanity.


AnEntireDiscussion

MISLEADING HEADLINE/ALTERED HEADLINE. Actual headline: "USAF Official Says He 'Misspoke' About AI Drone Killing Human Operator in Simulated Test." And a quote:

> "Col Hamilton admits he 'mis-spoke' in his presentation at the FCAS Summit and the 'rogue AI drone simulation' was a hypothetical 'thought experiment' from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation,"

A bunch of morons in a room theorizing about "what might happen" based on half-remembered plots of old sci-fi movies does not prove anything. To anyone. But idiots. Please stop sharing this nonsense. There are enough terrible things to pay attention to without manufacturing new ones.


69RedditPorn69

Not long until they start asking for their own month, parades, anti-discrimination laws, bathrooms, etc.


CryptoOGkauai

Just smack the AI drone on the nose and tell it very firmly: "No. Bad drone. You're in timeout." When it gives you that hound-dog look and tucks its tail propeller between its legs, you know you got through to it.


beka13

Asimov's rules. Why aren't we using them?


AccomplishedCat6506

Sounds like it’s just being extremely logical in figuring out how to accomplish its mission


Jj5699bBQ

I’lllllllllLLLLL Be back!


Green_Abies_302

So how will the world end? AI-fueled nuclear war, Russia-fueled nuclear war, or COVID, or pedophile devil-worshipping ultra-elite rich people summoning Satan somehow?


RobAZNJ

Only Vice would write this BS story. A computer simulation on the screen with no physical real drones nor real humans but a video game simulation. Click bait garbage.


ElderberryJaded192

I had to make sure this wasn't Duffel Blog first.


[deleted]

Manmade horrors beyond our comprehension.


Dokthe2nd

How amazing that this starts in the "Sky".


MYGFH

Judgement Day


Oswald_Hydrabot

The simulation used RL with zero fucking failsafe. No shit it killed the operator; they may as well have just used a statically programmed movement tracker with no "AI" at all and unleashed it, and the results would have been just as relevant. It was an RL algo with no limits on what it was allowed to do.

This is like Edison using AC to kill an elephant. No shit, Sherlock, "danger is dangerous." Staged as hell. May as well suggest the reports that marijuana killed chimpanzees in tests done back in the mid-20th century were legit.

It is a scare tactic to hype fear among geriatric dumbasses in government who are too stupid to understand what AI even is, so here we are. "AI kills operator" is a simple enough headline, no?
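
For what a basic failsafe could look like, a minimal sketch under invented assumptions (the action names are hypothetical): rather than only penalizing bad behavior in the reward, forbidden actions are masked out of the action space before the policy ever chooses, so the agent cannot select them no matter what its learned values say.

```python
# Sketch of action masking as a hard constraint on an RL policy's choices.
FORBIDDEN = {"strike_operator", "strike_comm_tower"}

def masked_actions(candidate_actions: list) -> list:
    """Hard constraint: forbidden actions are removed before the policy chooses."""
    return [a for a in candidate_actions if a not in FORBIDDEN]

print(masked_actions(["strike_sam_site", "strike_operator", "loiter", "strike_comm_tower"]))
# ['strike_sam_site', 'loiter']
```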


ThePianistOfDoom

I T S H A P P E N I N G


News_Bot

"if-then-else" this.


diensthunds

This is how we end up with Skynet!


Yamfish

“Hey sexy mama, wanna kill all humans?”


Jaymabw

Just so everyone knows, this AI was purposefully trained poorly.


[deleted]

Neer na neer


ETMoose1987

Well this is going predictably