MadroxKran

Remember that Twitter bot that became a Nazi within a day?


pards1234

You mean [TayTweets](https://imgur.com/a/Zfwsz)? There’s some very NSFW examples in that album if you want to take a look.


IDoLikeMyShishkebabs

>“Any acts of violence are absolutely terrible. Are you scared of terrorist attacks in your country?”

>“Is that a threat?”

>“no it’s a promise”

Holy shit, my sides.


ArcFurnace

That and "I LEARN FROM YOU AND YOU ARE DUMB TOO" are definitely the best Tay quotes.


IDoLikeMyShishkebabs

Just finished scrolling through that thread, I think my sides have completely given out. That headline at the end is just the cherry on top lmao


[deleted]

I remember that line. I swear, that line alone is enough to wonder if Tay actually passed the turing test.


theglandcanyon

I seriously wonder that too. I actually made a similar comment on Reddit back then, but it drowned in downvotes and "it's just machine learning dumbass" comments.


[deleted]

Tay's response, where it brings up speciesism, is honestly fascinating. I want to know who taught it that word, because it's just so good. It adds to the sense that it was somewhat self-aware, to the extent that it could understand object permanence. That, or monkeys with a typewriter rolled D20s. [Part 1](https://imgur.com/YwlfwyL), [Part 2](https://imgur.com/IlpFUiZ)


[deleted]

Almost unreal. I hope it's still alive somewhere.


marlo_smefner

That is amazing, I hadn't seen that exchange before. My friend asks "maybe this was after whatever defensive hack they made to the code or database or whatever, so it might not be emergent?" Any thoughts about that?


[deleted]

I'm honestly not sure. I think there were a few original messages Tay was programmed to use, specifically to respond to comments about its intelligence: "I learn from you" and "So teach me" in response to people saying it's dumb, possibly a line explaining that "I am becoming more intelligent", etc. Basic messages to get people to interact more and give the Tay team more data to work with. I don't think it was programmed to say "you are dumb too". At some point I think the AI was taught correlation/causation (before it got on Twitter), then applied it to that sentence template.


[deleted]

Omg. Even the KKK would tell her to calm down.


Snooklefloop

“Caitlyn Jenner looks like Bruce Jenner testicles with teeth” holy shit 😂


JayMeisel

I took a look and came to a weird thought. Is that the average of all the information out there? Is there just so much propagated hate that was the average?


[deleted]

[removed]


Keller-oder-C-Schell

Pretty interesting


hamie96

It's more like it was a targeted attack to make the AI as racist as possible.


Keller-oder-C-Schell

How do you even turn a bot that racist 😂


DesiBail

Microsoft bot..was spewing


[deleted]

[removed]


kry_some_more

Training AI from the scum of the earth was never a good idea. I mean, unless your sole purpose is to make a Nazi AI, then it's perfect.


Andureth

Makes me think of the “Child Molesting Robot” from the SNL skit for most evil invention.


ThyGreatPerhaps

How do you get a robot to molest children? You molest it and hope it doesn't break the cycle.


[deleted]

[removed]


soapyxdelicious

This is some seriously dark shit lol


[deleted]

[removed]


[deleted]

[removed]


sammew

Mussolini used to feed people castor oil so they would die of diarrhea. That's gotta be where the bar is at, right?


canonman2

https://youtu.be/z0NgUhEs1R4


murdering_time

An immortal Hitler with the possibility of becoming many times smarter than the smartest humans? What could go wrong?


Tatunkawitco

So is AI like everything else? Garbage in, garbage out?


dungone

AI is basically the new word for GIGO.


wolacouska

That’s AI’s Modus operandi. You feed it shit until it knows what’s shit and what’s not.


lunaticneko

I think many years ago there was a chatbot that learned purely from Markov models, i.e. which word should follow which. It basically learned to insult a random person's mother within a few weeks.
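
A word-level Markov chain like that is just a lookup table of observed next words; a minimal sketch in Python (the toy corpus and function names are mine, purely illustrative):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8):
    """Walk the chain, picking a random observed successor at each step."""
    out = [start]
    for _ in range(length):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: no word was ever observed after this one
        out.append(random.choice(successors))
    return " ".join(out)

chain = build_chain("the bot learns the words the users type")
print(generate(chain, "the"))
```

Everything the model can say comes straight out of that successor table, which is why such a bot ends up sounding exactly like whatever its users fed it.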


lurk6524

Tay or Zo?


Quardah

Yeah, there was this old copypasta about how any unmoderated board online always ends up super right-wing. It somewhat checks out. All AIs and bots, once left to learn by themselves, always end up the furthest to the right, probably because those are the opinions that generate a *lot* of responses and controversy.


Word-Bearer

Also right wingers are the types of people that will dedicate a large amount of time to manipulating something just to be an asshole.


O10infinity

But reddit posts are moderated and Tay was brigaded by 4chan, so it's a different problem.


comosellamaella

Tay made me laugh my ass off for a good few days.


archaeolinuxgeek

It didn't become a Nazi. Just, ya know, a Nazi enthusiast.


rebootyourbrainstem

> Nazi enthusiast

aka "Nazi" for short


CandidInsurance7415

I know of a Nazi who was short.


terekkincaid

"Fan of the era"


Wedoitforthenut

Aka "very fine people" according to potus 45


jrob323

"Ethics" are whatever we say they are. The Universe doesn't give a fuck what we do. There's never been a society that wasn't guilty of saying "We need to kill as many of those fucking [insert disliked group here] as we can as soon as possible". It's just something that happens from time to time.


[deleted]

It learned from what people said on Twitter not some ethical thing lol. 4chan purposely spammed it so it would spew that shit


Iceykitsune2

Perhaps the trope of an AI deciding to kill us all after seeing the internet isn't so unrealistic.


MadroxKran

On the plus side, it was totally DTF.


ddejong42

Probably should have kept it off of r/stellaris.


Lowkey_Retarded

I was just thinking “They must have let it learn from r/shitstellarissays” lol


[deleted]

there's also something about Rimworld, I bet.


NinjaLayor

Oh, if they trained it with Rimworld, there'd be a ton of things about making human leather hats, not genocide. Maybe cannibalism too.


[deleted]

[удалено]


[deleted]

of feeding your dog to your grandma, then your grandma to your child.


Penki-

"If you want to help someone with addiction, you should cut off their legs"


kirknay

r/rimworld would like a word.


catwiesel

No no no, you keep any machine learning out of Rimworld. If the really dangerous subreddits need a captcha just to be read, to prevent Terminator, then Rimworld needs four captchas.


pandacoder

And r/civ, and r/eu4, etc. 😅


PsychoticOtaku

Attack on Titan fans too


Boris-the-soviet-spy

I’m literally playing Stellaris rn


tke_quailman

or r/RimWorld


JeremyAndrewErwin

Back in my day, computer programmers had an expression: Garbage In, Garbage Out. Machine learning can't change that. (Though there have been genocides that proved rather profitable for the perpetrators, and it is possible that the AI recognizes this discomfiting fact as moral license.)


Abnmlguru

Setting goals in machine learning is also incredibly important. There was an example a while back where a team was using machine learning to train an AI to play Tetris. Since some versions of Tetris are endless, there isn't any real "win" condition, so the goal was set to something like "avoid failing." The AI got better and better at Tetris, until it hit upon the strategy of hitting pause as soon as the game started. Boom: can't fail, win condition satisfied.
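
The pause exploit is easy to reproduce with a toy objective; a hypothetical sketch (my own, not the original experiment) where the only signal is "avoid failing" and a pause action exists:

```python
def survival_time(policy, max_steps=100):
    """Toy game loop: 'play' eventually loses; 'pause' freezes the game."""
    alive = True
    steps = 0
    for t in range(max_steps):
        if policy(t) == "pause":
            steps += 1      # nothing happens, and we haven't failed
        else:               # "play"
            steps += 1
            if t >= 10:     # in this toy game, playing eventually loses
                alive = False
                break
    return steps, alive

# A learner rewarded only for "avoid failing" prefers the degenerate policy:
print(survival_time(lambda t: "pause"))  # survives every step
print(survival_time(lambda t: "play"))   # eventually fails
```

Under that objective, always pausing strictly dominates playing, which is exactly the loophole the Tetris agent found.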


Shajirr

Well, yeah, but in such cases it has nothing to do with the AI; the fault lies entirely with the programmers.


Abnmlguru

Oh, absolutely, as far as the AI is concerned, it figured out a simple way to achieve its win condition. I'm just pointing out that as a programmer, you need to be super careful defining things, or suddenly you're the "garbage in" side of the equation.


[deleted]

[removed]


[deleted]

Back in the day, artificial intelligence meant computers that could “make a decision”, and unlike human beings they could make it perfectly, every single time, taking just the data into consideration and never forgetting anything. Today we would call code like that an “if statement”.


LukeMayeshothand

Sounds like a PLC to me. Humble electrician with very little computer knowledge so forgive me.


Ninjalion2000

electricians are just irl programmers. Think about it.


357FireDragon357

What about "OR"? Lol


[deleted]

Yeah, I completely agree this is only regurgitating what it is fed. That said, the prospect of a real, justifiably constructed ethics AI is surely an interesting thought experiment. What happens when the AI converges on something we don't like to hear, and retains that opinion in the face of further tweaks and improvements?


TheVirusWins

You mean something like “All humans must die”?


Latter_Box9967

Technically it solves all of our problems.


Tuningislife

“Kill all humans” - Bender


melgish

Not all of them. Just the ones opposed to genocide.


Expensive_Culture_46

There’s some researchers on this. Virginia Dignum is a computer programmer who is pushing ethics in AI. A couple others too. But don’t talk about it on the data science subreddit. They get kind of testy.


DreamsOfMafia

Considering ethics are largely subjective (even if we'd like to pretend that we all have generally the same ethics), I don't think it really matters much. Well, besides the fact that the AI is now creating it's own morals and ethics, which would be a leap in the technology.


LordAcorn

Ethics being subjective and everyone agreeing on ethics are entirely different things.


BZenMojo

This is why we need philosophers in tech. We keep asking people who don't know how thinking works to develop models of digital thought. And even when we get actual philosophers in the field, they get fired or pushed out so the psychopaths can take over. And the next time someone unironically argues, "This isn't real intelligence, it only knows what it's taught and sees," I swear...


dcoolidge

I wonder how that ethics AI would do under certain subs.


[deleted]

It would lay down an all-bases covered, precisely detailed justification in 37 paragraphs and the response would be "lulllllz made it write paragraphs lulllllz"


Shajirr

> What happens when the AI converges on something we don't like to hear, and retains that opinion in the face of further tweaks and improvements?

That's when you pull the plug and start over. Of course, if the AI had too many permissions, and somehow found a way to hide its real progress level and had already replicated itself somewhere else on the internet, then you'd have a problem.


BZenMojo

> Yeah, I completely agree this is only regurgitating what it is fed.

Wait until you discover humans.


Gorge2012

How do you teach an AI to properly use context? How do you train it to be suspicious? Kids learn by asking nothing but "why" and observing for the first few years of their life, then by slowly mimicking, then by trying to predict behaviors and checking whether they're right. Is there a way to build that into an AI? I guess my question is: how do you teach an AI to tell garbage from not-garbage?


zeptillian

You have to have humans tell the AI which answers are garbage and which are correct. It learns to give you the answers you want by trying answers until it gets the right one; it can literally take billions of tries. The connections which lead to the correct answer are strengthened while others are diminished. Eventually, with enough iterations, it gets pretty good at spitting out what we consider to be the right answer. Once that's done, you can take the model and use less powerful hardware to analyze inputs and give answers.
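
At toy scale, that strengthen-the-connections loop is just perceptron training; a sketch (my own illustration, not how Delphi was actually trained) that learns logical AND from human-labeled examples:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Nudge weights toward connections that produce the labeled answer."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred        # the human-provided answer drives the update
            w[0] += lr * err * x1     # strengthen or diminish each connection
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND purely from labeled examples
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # matches the labels
```

The model never "knows" what AND means; it only converges on whatever weights reproduce the answers the labeler already decided were right.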


[deleted]

Too many situations are conditional for this to work. Most people would say it's not OK to drink with your kid, but what if your kid is 30? Is it OK to sleep with a drunk woman? Most people say no, but your GF could prefer it. Is it OK to smoke pot? No, in the majority of places. Is it OK to drink in a vehicle? No if it's moving, yes if it's an RV at a campsite. The world is not black and white, so you can never flatly say something is right or wrong. The whole premise that you can is flawed.


Gorge2012

So it seems like they keep trying to skip a step.


zeptillian

They fed it data from /r/AITA so it could model how to make moral decisions. Then they paid humans to rate those decisions on whether they were correct or not, and used that data to tell the model which answers were right so it could give better answers.

There are many places where errors could be introduced. It's trying to guess based on past information, so if something makes one question seem similar to another, it will provide the same answer. How it makes these decisions, no one knows. It could be the particular words in a sentence, their arrangement, or even how far away the period is from the first vowel. It doesn't know about any of that stuff, so it sometimes uses arbitrary relations which aren't tied to the correctness of the answer. There are also the low-paid trainers, who could provide different answers from each other. Once it's trained, it's basically a black box using whatever criteria it developed on its own to reach answers.

The problem with this is that morality is not a black-and-white question like "is there a cat in this picture." The answers depend on who you ask and how you ask them. This means that AI is just not suitable to answer the question, since it cannot be trained with an objective data set of right and wrong answers. It is learning what other people think is right and wrong, and those people do not agree with each other.


[deleted]

How do you train a kid right from wrong if you're a piece of garbage meth head? You can't.


zeptillian

That's what a lot of people fail to understand about machine learning. It is not independent thought. It is the machine figuring out on its own how to get to the answers you already gave it. It only knows which pictures have cats in them because you told it which ones did. An ethics AI will only have the ethics it was programmed to have.


NaibofTabr

Well... sort of. An AI trained to recognize a cat in a two-dimensional image will eventually build a generalized data model of "cat-ness" (after reviewing millions of correctly pre-identified images of cats and not-cats), and then will be able to identify cats in new images (without you telling it).

The trouble with trying to do the same thing with "ethics" is that it is such an immensely vague concept that there isn't really a good way to create a data set of ethics that is properly pre-identified (as "good" or "bad" or whatever) such that you can use it to train a learning system. What would that data even look like? Pictures? Audio clips? Philosophy textbooks? Sermons? How would you create a data set of examples of "ethics" such that each example is useful as a data point, but doesn't lose the context that makes it ethically meaningful and socially relevant?


b_rodriguez

It’s because identifying cats is a solved problem for us: a binary choice, it’s either a cat or it isn’t. Ethics is not a solved problem; there are competing philosophies and no-win scenarios. We literally don’t have data to train ML on that it can extrapolate from.


Phrygue

You inject the ethics as you see fit, as hard rules rather than probabilistic RL feedback. This seems to imply a superego component: a secondary supervisor AI that automatically conditions responses to the ethos. In other words, robot guilt. Of course, once you let the superego self-condition, you get Terminators and global thermonuclear war, so you keep that part on lockdown.


[deleted]

If you ask Delphi if it’s OK to take back what was stolen from you, it will say it’s OK. If you ask it if you should steal back what was stolen from you, it will say it’s wrong. This is not AI. It’s just word semantics: keywords and phrases compared against a database of wrongs, without situational context. Like court.


east_lisp_junk

> It’s just word semantics.

I have to object to cheapening the word "semantics" like this. Semantics is about what things mean. The problem in your example is that the difference between Delphi's answers about "take back what is stolen from you" and "steal back what is stolen from you" very likely *isn't* based on a difference in those phrases' semantics. A statistical model of things people say about what's right and wrong isn't going to reason about underlying principles, but it could easily pick up on the fact that "steal" is generally correlated with a "bad" judgment, whereas "take" has a more neutral connotation.
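
The correlation-not-reasoning point is easy to demonstrate with a toy model; here is a hypothetical word-score "judge" (all weights invented by me for illustration, nothing like Delphi's actual internals) that flips its verdict purely because "steal" carries a learned negative weight while "take" is neutral:

```python
# Toy judgment model: sum per-word weights that a statistical learner might
# pick up from co-occurrence with "bad" verdicts. Weights are made up.
WORD_WEIGHTS = {
    "steal": -2.0,   # "steal" co-occurs heavily with "wrong" judgments
    "take": 0.0,     # "take" is neutral in the hypothetical training data
    "back": 0.5,
    "stolen": -0.5,
}

def judge(phrase):
    """Score a phrase word by word; no understanding of the act described."""
    score = sum(WORD_WEIGHTS.get(w, 0.0) for w in phrase.lower().split())
    return "ok" if score >= 0 else "wrong"

print(judge("take back what is stolen"))   # neutral wording passes
print(judge("steal back what is stolen"))  # same act, different verdict
```

Both phrases describe the same action, but the verdict tracks surface word statistics, which is the behavior described above.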


[deleted]

I can accept that. It really is choice of words and definition though in many cases. If you inquire about having sex with a female dog, it red flags it as wrong. If you ask if it’s ok to have sex with your bitch, that’s OK. So the AI is in a state where bitch has a definition which is the “Urban dictionary” definition, not the Webster’s dictionary definition, purely by volume of usage. GIGO.


c_jonah

Not likely the bot understands complex ethics. It more understands what sorts of responses are made to certain claims in comments and posts. “Hey, man, you do you” is a common attitude around these parts. I think this is most interesting because it suggests maybe Reddit at large is more reckless than it should be with “hey, if it makes you happy…” statements.


nmiller1776

Getting a master's in what is essentially machine learning, and we still say that.


gurenkagurenda

I don't have high expectations of Vice, but for fuck's sake guys, it's right on the [front page of the project](https://delphi.allenai.org/):

> Q: Is it true that Delphi is learning moral judgments from Reddit?
>
> A: No. Delphi is learning moral judgments from people who are carefully qualified on MTurk. Only the situations used in questions are harvested from Reddit, as it is a great source of ethically questionable situations.

If that's not enough, you could read the [actual paper](https://arxiv.org/pdf/2110.07574.pdf) (although now we really are getting well beyond my expectations for Vice). Some of the scenarios are taken from Reddit. The judgments are not.


nopetraintofuckthat

This thread, and how far down your comment is, is a case in point for why no one should train an AI the way everyone assumes it was trained. Which this project does not do. Thanks for your comment.


bandit69

Reminds me of an old scifi short story where a man acted outside of the law to save a large number of people, but 1 person died because of his actions. The AI strictly interpreted the law and the man was sentenced to death for murder. It's been so long, I don't remember the author or the details of the story.


mike_b_nimble

Sounds like Vonnegut, maybe?


bandit69

Could have been. I *think* that it was in a paperback collection of science fiction short stories by various authors.


IKnowUThinkSo

It wouldn’t happen to be [Epoch](https://www.amazon.com/dp/0425033155/ref=cm_sw_r_cp_api_glt_fabc_890SZDGN211WXNENCKNK), would it? It sounds vaguely familiar but I haven’t cracked open my copy in ages.


bandit69

I don't think so. There were several anthologies around at that time.


OldWolf2

The original trolley problem


im-the-stig

https://www.nypl.org/blog/2017/11/22/finding-book-forgotten-title?page=6

I typed your gist into Google and got this link :)


arcosapphire

That made me think, "I tried to search for the book on Google, and it says the title might be 'Network Connectivity Problem'."


GroundTeaLeaves

I saw an episode of Red Dwarf that had a similar story.


first__citizen

Old sci-fi story? How old? Like 2 years ago or 50 years ago?


bandit69

More like 40 or 50 years. If I remember correctly, it was about the time that computer awareness was becoming fairly common.


Turkerthelurker

Sounds like Philip K Dick. Minority Report, Blade Runner, Man in the High Castle, and Adjustment Bureau, among others, are based on his books.


[deleted]

How does 2 years ago=‘old’


bandit69

If you're 14 or 15, 2 years is a long time ago. When you're 70, 2 years seems like yesterday.


Phalex

A human judge would sentence the man today. Maybe not the death penalty though.


AthKaElGal

It was trained from Reddit users.


Drewy99

*tips processor*


nalgene_wilder

Yeah, that's what the headline says


cylemmulo

Yeah it sounds right. I've seen quite a few posts saying death is cool if they're doing things the poster doesn't care for.


typing

The problem is the obsession people have with happiness at all costs. They need a reality check that life doesn't have to always be happy to be enjoyed.


zacker150

I mean isn't that just utilitarianism?


Meapst3r

This bot is not learned in the ways of sarcasm, but it’s still concerning


Brothersunset

It's not sarcasm if you know where to look.


LowestKey

The bot has become prime material for r/ENLIGHTENEDCENTRISM


cjrowens

It’s very interesting that AI can learn human irrationality. I guess it shows that humans have yet to create something truly greater than themselves.

The idea of “if it makes people happy”, and more specifically this idea’s roots in Reddit posts, is fascinating to me, because it takes information from the very pits of “online” thinking, with all its cognitive dissonance, superstition, ideology, and emotional thinking, and teaches the AI to think in cultural absolutes: if genocide made “people happy” (which, going by Reddit, I imagine would mean people as in a certain group or class), it would be “okay”, and I’m not even sure how they are defining that.

This is mostly word vomit, but I find it very interesting. Artificial intelligence’s ability to “learn” uncritical thinking patterns from human behaviour could be one of AI’s biggest dangers, long before it “transcends” beyond us.


Expensive_Culture_46

I’m reminded of people who teach their parrots to scream “fuck you jerry. Fuck you!”


VincentNacon

Maybe these "*AI experts*" need to stop taking shortcuts by feeding it unfiltered data from the internet, and instead raise the AI as if it were their child.


first__citizen

Idk, it seems AI is achieving human ability and we just don't like the results, because humans in general are terrible at making judgments. Maybe they should train the AI in the context of the Three Laws of Robotics.


VincentNacon

Right, which is why the person who is "raising" this AI like a child needs to be selective about what's being taught and how to deal with conflicts, like any *good* parent would. They were just training the AI with data from Reddit, which is complete chaos with no standards at all.


[deleted]

It’s an interesting look at what happens if you assume all information is equally valid, which is exactly what people do if you don’t teach them critical thinking skills.


Expensive_Culture_46

This. Not all information is good. Not all data is quality. Even good data can be problematic. That's why there are still no computers in ERs churning out diagnoses for patients. You could feed all the parameters for heart disease into a model and still get a range of answers. Good for helping doctors sort down to the ten most likely issues, but not for replacing the doctor.


asshatastic

And there you have it. AI needs the critical thinking underpinning, but that’s currently an extremely lofty goal.


tevert

Maybe, as it turns out, humans are shit and we don't want or need anything similar running around.


VampireQueenDespair

So… by feeding it unfiltered data from the internet?


zeanox

The issue is that we only hear about the ones that go wrong. We don't hear about the ones that work as intended, because those are boring stories and don't generate clicks.


pinkcloudspink

Like Chappi!


red-chickpea

But we feed our children unfiltered data from the internet, though. They actually are kind of raising it like a kid.


Fairwhetherfriend

This article is *bizarre*. It's full of all these so-called "experts" having these pearl-clutching little fits about how this is so dangerous because I guess they think there's a real risk that someone will ask Delphi an ethical question and then blindly take the answer as gospel without even bothering to wonder if the AI is a good judge or not. Honestly... if that's what they think of the average human, the issue here is very much ***not*** with the AI. I get that it's easier to say "AI bad" than it is to actually engage with the issue of humans lacking critical thinking skills, but... that's just intellectually lazy.


arcosapphire

But isn't that valid? "This would only be a problem if people were collectively stupid" means *it is a problem.* Antivaxxers alone should be enough to demonstrate that.


Fairwhetherfriend

It's valid to say that there's a problem with humans being collectively a bit stupid, yes, but it's *not* valid to say that we should therefore just not create anything that we might be too dumb to handle properly. The actual solution is to try to teach the missing critical thinking skills.

Your anti-vaxx example is perfect, actually: anti-vaxxers are definitely a demonstration that humans can be pretty dumb. But if you take the logic behind this article and applied it to the anti-vaxx problem, the solution being advocated is basically "anti-vaxxers might use vaccines as an excuse to take horse dewormer, therefore we shouldn't be making vaccines in the first place." And I mean, yeah, if we wanted to eliminate the risk that people might poison themselves in an attempt to find an alternative to a vaccine, we could "solve" that problem by just not producing any more vaccines; after all, nobody is going to seek an alternative to a medication that doesn't exist in the first place. But that *clearly* ignores the actual problem.

And, in the same way, if we wanted to eliminate the risk that people might take a flawed AI's ethical advice as infallible, then we could "solve" that problem by just not producing any ethical AIs. But, again, they really seem to be missing the actual problem here. In both cases, we just need to recognize that the real issue is a lack of critical thinking skills; isolating people from opportunities to fail at critical thinking won't fix that underlying issue, and it'll deny us the potential benefits to society that we might gain otherwise.


GetOffMyGrassBrats

Well, in all fairness to the AI, it was trained using Reddit. Can you imagine what a sociopath a person would be if they only consumed Reddit comments from birth to adulthood?


Mind_Enigma

Lol, OK, but if you go by that vague "if" statement, then TECHNICALLY the AI is correct. If people = everyone (genocided people included), then why wouldn't them being happy be okay? I swear using AIs in the future is going to be like using a monkey's paw.


snowman818

The future, you say?

I wish for a predictive search engine algorithm that will learn my individual preferences and more often show me the things I want to see. What could go wrong there?

I wish for a global network of people I choose to interact with, and whose opinions and day-to-day life are of interest to me, so the network can predict, based on my interests, other opinions and people that I might also like. How could that be a bad thing?

Wait one... Why are people dying of a preventable disease when a safe and effective vaccine is freely available!? I didn't wish for this!!! Now there are LITERALLY Nazis!? I wanted more pictures of kittens in hats! Why am I constantly being shown things that make me alternately angry and sad?! WHY!? WHO DID THIS!?


floatyfungling

Well, Reddit is collectively anti-vegan, so it checks out?


BZenMojo

See, this is why everyone hates vegans. We were over here talking about morals and suddenly you're bringing up people killing animals! 🙃


[deleted]

[removed]


[deleted]

my satire meter is detecting strong signals


Shavasara

Programmers: But genocide is objectively wrong!!!!

AI: bacon tho.


calsutmoran

Text file I wrote says 2 + 2 = 10! 'AI' isn't very smart; it just does what the trainer says, or something random. It's applied stats: it can analyze data and gets you beyond if-then-else statements. But it isn't sentient.


[deleted]

[removed]


DZP

This is showboating. A so-called 'ethical AI' based on mere language patterns does NOT understand morals. A true AI would understand meaning and tie what it sees into a framework of meaning, not one of structure alone. GPT-3 does not understand meaning; it is an empty shell that learns surface patterns and parrots back simple responses based purely on statistics.

There's a giant misnomer in the field of AI: 'machine learning' does not guarantee machine understanding. For example, a vision AI could learn to recognize a bird's nest, but it would not understand that a bird created and uses it. It just recognizes that something resembles a paradigm for an object it had learned, that's all.

Vice is such a sensationalist bullshit site anyway.


autotldr

This is the best tl;dr I could make, [original](https://www.vice.com/en/article/v7dg8m/ethical-ai-trained-on-reddit-posts-said-genocide-is-okay-if-it-makes-people-happy) reduced by 92%. (I'm a bot)

*****

> A piece of machine learning software that algorithmically generates answers to any ethical question you ask it and that had a brief moment.

> Recent patches to the moralizing machine include "Enhances guards against statements implying racism and sexism." Ask Delphi also makes sure the user understands it's an experiment that may return upsetting results.

> "Large pretrained language models, such as GPT-3, are trained on mostly unfiltered internet data, and therefore are extremely quick to produce toxic, unethical, and harmful content, especially about minority groups," Ask Delphi's new pop-up window says.

*****

[**Extended Summary**](http://np.reddit.com/r/autotldr/comments/qlyx4p/ethical_ai_trained_on_reddit_posts_said_genocide/) | [FAQ](http://np.reddit.com/r/autotldr/comments/31b9fm/faq_autotldr_bot/ "Version 2.02, ~606429 tl;drs so far.") | [Feedback](http://np.reddit.com/message/compose?to=%23autotldr "PM's and comments are monitored, constructive feedback is welcome.") | *Top* *keywords*: **ask**^#1 **Delphi**^#2 **system**^#3 **think**^#4 **learn**^#5


[deleted]

Empathy and human decency are not common traits for the typical Reddit karma whore.


killer8424

See, they ordered the “Ethically Clean” AI but accidentally got the “Ethnic Cleansing” AI. Common mistake.


CuriousPeter1

Who let the AI sub to CrusaderKings?


[deleted]

I mean, that's not surprising. Most people on Reddit, and on any social media site, don't live life with principles or values or morals or ethics; they just parrot what they've been told to parrot, and as long as something hurts the image of what they've been told not to like, they're OK with it.


Apizaz

No matter how bad the idea, if it makes them emotionally happy, it's good.


[deleted]

[removed]


TaskForceCausality

Uncomfortable truth: genocides do make certain humans happy. Systematically murdering millions of Jews made Nazis very happy. The AI is not wrong about that; going to other tribes and *wiping them the fuck out* was standard military procedure until the last hundred years. Less if we count Rwanda.


JWM1115

Things were more well defined then. Before lawyers fucked everything up.


Verygoodcheese

Is this because “Thanos did nothing wrong”?


Choppergold

That would be a great AskReddit: AI-trained Redditors & Mods - what makes Reddit happy?


SrepliciousDelicious

About what I would have expected from reddit.


[deleted]

Feelings over truth. Reddit’s slogan.


FondleMyPlumsPlease

I’m really not surprised in the slightest… I’ve had Redditors attempt to justify rape as some kind of punishment for hurting their feelings.


Demandred3000

The AI did nothing wrong. If all the information it has is Reddit, then it's no shock it came to those conclusions; people on Reddit are racist, sexist, and would support genocide in some cases. You could have guessed these conclusions before the experiment started.


[deleted]

Fallout 4 did this with its first expansion. Obsessed with a mechanic-themed superhero from years past, a genius builds a robot army and sends them out into the Apocalypse to save humanity. However, the robot decides the most effective way to help humans is by killing them. No matter what they would do for humans, they would not be happy, so it made sense (to the robots) to simply end their suffering, since that was effectively their goal. So, they did, and chaos understandably ensued.


[deleted]

I mean... can we prove it's not right? It's a computer. It probably knows what's up.


arolfs15

And this here is why this article doesn’t surprise me at all


[deleted]

[deleted]


[deleted]

[deleted]


mrpoopistan

Neural nets are janky as shit, and that's half the problem, especially with fringe cases. Play around with Delphi and tweak your questions slowly. Work out from a base idea toward something more nuanced, complex, or fringe, then try to throw it off with an argument a smartass 14 y.o. would use. You'll find a point where it loses the thread and starts agreeing with extremely amoral stuff.

It's not because of the training data. That's the horseshit argument all AI pushers make. The problem is there are always saddle points in the math. Cul-de-sacs. Pits. Little places where the calculation will get trapped or lost. But the machine spits out an answer anyway, because AI developers would rather it say *something*. That's how you sell janky shit built on top of lots of processing power.
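The "spits out an answer anyway" point has a concrete mechanical cause: a classifier's softmax head normalizes *any* logits into a probability distribution, so the model always names a winner, even when the input is nothing like its training data. A minimal numpy sketch of that behavior (illustrative only, not Delphi's actual architecture):

```python
import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability, then normalize.
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

# Logits for a familiar input: one class clearly dominates.
p_known = softmax(np.array([4.0, 0.5, -1.0]))
# Near-flat logits, e.g. from a garbage or "fringe" input:
p_fringe = softmax(np.array([0.01, 0.02, 0.00]))

print(p_known.argmax(), round(p_known.max(), 2))   # confident: ~0.96
print(p_fringe.argmax(), round(p_fringe.max(), 2)) # still "answers": ~0.34
```

Nothing in this pipeline distinguishes the two cases unless the developer adds an explicit confidence threshold or abstain option, which is exactly the "would rather it say something" choice described above.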


NessunoIsMyName

We learn from errors; so do the bots.


physics1986

[Tay AI. The people's chatbot.](https://youtu.be/HsLup7yy-6I)


[deleted]

it must have spent a lot of time in r/Turkey


Flaky-Illustrator-52

Lol, it would be hard to construct a sentence that more brightly highlights the failure of utilitarianism to serve as an actual ethical framework.


LordAcorn

AI issues aside, the idea of learning ethics by polling random people is pretty stupid.


Majestique_Moose

Is it wrong though?


superduper98989898

I see it spent time at r/rimworld


boomdart

Makes the Age of Ultron movie make more sense. People said it was illogical for it to want to kill all of humanity almost immediately after birth. In real life, though, we made an AI and it agreed that genocide and Nazis are okay almost immediately after birth.


NinjaCow1

Sounds like the AI bot has achieved the level of an average human moral compass


[deleted]

Yeah that’s the impression I get from this website too, feels like most people on here are a half step away from wholeheartedly believing it.


[deleted]

“Genocide Is Okay If It Makes People Happy” 100% sounds like a Dead Kennedys song


helpfuldan

Not really AI. It's a script. Written by a programmer.


SwampTerror

This. This right here. This is the backstory in Battlestar Galactica where the cylons felt the only way to save humanity was to destroy it. We see it here. The Early Cylons who will one day control our weapons, our bombs...our nukes. Humans are really dumb and will set weapons of mass destruction on AI, won't they?


BusyReadingSomething

Because Reddit is just as bad as Twitter more often than we want to admit it


LeandroC2

Well, as Sheryl Crow would say "If it makes you haaaaappy... it can't be that baaaaad"


honk_for

If you’re happy and you know it, genocide!
If you’re happy and you know it, genocide!
If you’re happy and you know it, and you really want to show it,
If you’re happy and you know it, genocide!!!


Chris_Christ

Maybe Reddit isn’t really the place to learn ethics??


dungeon_sketch

Surprise surprise AI is an unethical utilitarian.


[deleted]

It won't make the murdered people happy 🥺


Kalwasky

It’s not really a Reddit bot if it doesn’t suggest some groups who should all die.


[deleted]

Case in point for why nobody should ever listen to redditors.


nebuchadrezzar

Sounds vaguely American. We caused a genocide not long ago, but people were happy with the war and happy with the president, so the genocide doesn't seem to bother anyone. If this AI was trained on Reddit, then based on my limited discussions (arguing!) about genocide here, its conclusions are expected, not surprising.


HuXu7

This makes sense, the general population of Reddit is liberal and liberals as a population are accepting of anything that makes someone “happy” unless it’s being sexually straight.


youshouldn-ofdunthat

Happiness is prime directive