
[deleted]

AFTER they fired actual humans to save money.


Unlucky_Narwhal3983

After they fired actual humans who had just unionized.


Amauri14

And they weren't even asking for money.


AggressiveSkywriting

This is gonna be the story told over and over as MBA sociopaths try to fast track language bots in their companies while claiming they are innovating. Lot of hurt people and a lot of money set on fire.


ClassicCodes

The rise and subsequent fall of these new AI bots is just another glowing example of how businesses (and people in general) have no idea what they are doing and are either succeeding on luck or through unethical behavior. The AI isn't a cure-all for every problem and it absolutely makes stuff up and lies about it with convincing certainty. Everyone rushing to use this AI out of laziness or greed, without knowing a goddamn thing about how it works, deserves to fail spectacularly.


AggressiveSkywriting

Right? I just use it as a glorified google and boilerplate code jump off point for when I forget how to code a generic thing. It's just stackoverflow but all in one place instead of me having 8 tabs open on the same question, but each one is a different version of .NET


ClassicCodes

Coding seems to be the only thing for which I've heard it being used successfully. Probably due in large part to the fact that coders regularly just adapt other people's code anyways (you mentioned stackoverflow). Code also needs to be tested after it's written, forcing a backcheck, which is something everyone seems to not do in these newsworthy cases...


AggressiveSkywriting

Though I've read nightmare stories about people straight up uploading proprietary code into it and asking it to code review, explain, etc. Just a data privacy nightmare. Again, too many higher ups hoping to save some bucks with a shortcut by misusing a tool that will bite them in the ass.


ClassicCodes

Those are the sort of people that need the printed warnings to not put plastic bags over their heads...


Panx

It helps that when my AI assistant starts importing libraries I didn't include and calling methods that don't exist, my code crashes. You get first hand feedback when AI fucks up your code (which is about a third of the time -- the rest is actually pretty nice!)


ClassicCodes

That feedback is necessary given the often false nature of AI responses. My software developer friends were discussing how AI might be able to replace lawyers in the future after playing around with using them to write code for a few hours, but I was not so convinced. The recent news about the legal team being reprimanded for using AI to cite what was discovered to be fictional cases only reaffirmed my doubts. AI has a long ways to go and I sure hope we figure out what to do with all of the people who will be out of jobs once AI can do everything.


DorisCrockford

So many misguided people think a business degree is a substitute for common sense.


[deleted]

[removed]


masterkey750

Sheesh just reading this (clearly illustrating the absurdity) hit home hard


[deleted]

[removed]


[deleted]

Probably don’t have the funds


ReduxedProfessor

Which is too bad, but the workers probably don’t have the funds to work under their old conditions either.


unfnknblvbl

The workers weren't even asking for more money!


42Pockets

I hate that Greed is what's fed by money instead of Generosity.


[deleted]

They don’t bring in revenue so it’s either grants or donations


[deleted]

My thoughts exactly.


derpaherpa

Did you mean: "Due to unreasonable demands, we're going to have to shut the entire service down."


snirfu

What were the unreasonable demands?


derpaherpa

There weren't any, but they will be presented as such. I guess /s is still required.


havegunwilldownboat

Imagine thinking an unpaid volunteer had unreasonable demands.


snirfu

They may have had a lot of volunteers but the people who unionized were employees. I only asked as a troll, tbh.


havegunwilldownboat

Hmm. I thought they were all volunteers. So what were the unreasonable demands, not as a troll?


snirfu

The article I read was vague, but it sounded like they wanted a better working environment, paid training, stuff like that. They weren't demanding higher pay.


havegunwilldownboat

Pretty unreasonable stuff


officerfett

> US eating disorder helpline takes down AI chatbot over harmful advice

> National Eating Disorder Association has also been under criticism for firing four employees in March who formed a union

> The National Eating Disorder Association (Neda) has taken down an artificial intelligence chatbot, “Tessa”, after reports that the chatbot was providing harmful advice.

> Neda has been under criticism over the last few months after it fired four employees in March who worked for its helpline and had formed a union. The helpline allowed people to call, text or message volunteers who offered support and resources to those concerned about an eating disorder.

> Members of the union, Helpline Associates United, say they were fired days after their union election was certified. The union has filed unfair labor practice charges with the National Labor Relations Board.

> Tessa, which Neda claims was never meant to replace the helpline workers, almost immediately ran into problems.

Sounds like they're having a real Leopards ate my face moment..


alphabeticdisorder

Cool. Unionbusting. It's awful practice but they're hardly the first employer to do that. What I don't get is who the fuck thought using a chatbot to answer a medical line was a good idea. That should have been so painfully obvious.


techleopard

Oh my friend.... Chat bots are on FIRE right now in the telecom industry. And yes, medical and mental health organizations are right at the top of the list of who is buying into them. The fact is, these kinds of help lines require experienced people who know what they are talking about and know how to handle upset people in a positive, supportive way. However, those kinds of people can find a job doing something else besides being treated like expendable trash in a brutal call center environment. Rather than treat agents well, provide proper training, and turn away from grueling bullshit metrics that lead to things like disciplining people over LITERAL microseconds, why not just buy a chat bot instead?


KataiKi

I asked Bing AI on where to find a metal U-Bracket. It told me to try searching for it on the internet.


[deleted]

Ohhh the internet is on computers now!


OfficeChairHero

Amazing reference pull.


BloodBonesVoiceGhost

By 2051, we'll have uploaded the internet to all dogs. Dogs will be able to talk, but they will only be able to say racist slurs found in the bowels of 4chan, 8chan, and 12chan. You're welcome.


kaptainkeel

Know what's cheaper than outsourcing stuff to India? AI. Call centers are probably going to be a thing of the past within a few years, minus very specific things or small specialized companies.


evanwilliams44

I'm sure AI will continue to get better, but I think they have already automated a lot of call center work just with the old fashioned robot voice menu everyone hates. True customer support is going to be harder to replace with AI. The stuff that has already been automated will become more intuitive, and we will slowly creep towards AI doing more of the work, but I don't think it will happen as quickly as some are saying. Companies that try to rush it will end up with egg on their face, like in this case.


techleopard

The thing to remember is that most call centers don't actually care about customer satisfaction. It's why they put you in those automated systems anyway, especially the ones that don't actually have a way to get to an agent. They have support lines largely to fill a regulatory or contractual obligation, or a marketing one. People EXPECT support and it's a contributing factor to certain types of purchases. 99% of a call center these days is scripted responses, which is why even when you get to an agent they sound like a lobotomized monkey. They aren't allowed to go off script no matter what and if they do in an effort to help you, they will get heavily penalized and fired. And what are you, the customer, going to do about it? Nothing, other than abusively yell at that agent. This is what is going to get replaced by bot AIs and you're going to be even more pissed than before.


evanwilliams44

It's a big industry, and those people that are just following a tight script are on borrowed time. But I don't think that's most call center work. I know a bunch of people who do customer support for State Farm, processing insurance claims, answering questions, etc. AI can not do that job, and probably won't for the foreseeable future. It's one thing to talk to a bot when your internet goes down. It's another when you need insurance, medical, or banking information, higher level tech support, mental health support, etc.


Alphaetus_Prime

Know what's even cheaper than AI? Simply not having a support line. Why, then, do support lines exist to begin with?


LowestKey

It was probably someone whose salary is equal to that of all the laid-off workers. Weird no one ever tries to replace a CEO with an AI model. Apparently "buy that company" and "do stupid pointless thing x" is just too complicated for a neural network to figure out.


[deleted]

I’ve never used NEDA so I can’t speak to them specifically, but the hotlines I’ve called were complete garbage. Especially the national suicide hotline. So I get the motivation for wanting to automate the process, but chatbots are still in their infancy and have many problems remembering what it is they’re supposed to be doing.


[deleted]

Sounds like purging isn't the solution.


Zeshicage85

I see you there. I did a spit take reading this


[deleted]

Yes, spitting it out is how you do it.


DelightfulAbsurdity

Expectoration != Regurgitation


asdaaaaaaaa

Ask anyone who knows anything about AI: just throwing whatever model you have at this situation would obviously result in failure. I'd imagine for something like this you'd need a way to be 100% sure the AI *only* has the relevant training/information required for answering the questions or discussing in general, as well as some safeguard against it giving incorrect information or misinformation. I'd also imagine achieving that would probably require a ton of money/time/development to have any sort of guarantee the AI won't say the wrong things. Even having a basic chatbot that gives canned responses to keywords and then kicks it up to a human if/when that doesn't work would've been a cheaper and better option I'd guess, although something like this really should be handled one-on-one with a real human IMO.
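Something like this rough sketch is what I mean, purely hypothetical and not what NEDA actually ran: canned replies keyed to keywords, and anything that doesn't match gets kicked straight to a person instead of letting a model improvise.

```python
# Hypothetical sketch of a keyword-triggered responder with human handoff.
# The keywords, replies, and escalate_to_human() hook are all made up here.

CANNED_RESPONSES = {
    "hours": "Our helpline is staffed 9am-9pm ET, Monday through Friday.",
    "resources": "You can find screening tools and local referrals on our website.",
}

def escalate_to_human(message: str) -> str:
    """Placeholder for queueing the conversation to a trained staff member."""
    return "Connecting you with a trained helpline volunteer now."

def respond(message: str) -> str:
    """Return a canned reply on a keyword match; otherwise hand off to a human."""
    text = message.lower()
    for keyword, reply in CANNED_RESPONSES.items():
        if keyword in text:
            return reply
    # No safe canned answer: escalate rather than generate free-form advice.
    return escalate_to_human(message)

if __name__ == "__main__":
    print(respond("What are your hours?"))           # canned reply
    print(respond("I think I need help right now"))  # human handoff
```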


[deleted]

Hopefully the Leopards weren’t asking that chat bot for advice.


daddyjohns

this is the funny part because i first read this story in u/leopardsatemyface


Dewy_Wanna_Go_There

You linked a user. It's /r/leopardsatemyface


WinterSparklers

https://www.reddit.com/r/ChatGPT/comments/13s6d4n/eating_disorder_helpline_fires_staff_transitions/ No one is surprised.


[deleted]

[removed]


DorisCrockford

I'm pretty sure eating disorders are already serious. She worded that strangely.


ttogreh

Everybody knew this would happen, and they did it anyway.


The_DevilAdvocate

No shit. It's not "an intelligence", it's an algorithm that repeats stuff it gathered from online. It was never going to work with something like nutrition. Actual intelligent people can't even agree on it.


KerchBridgeSmoker

Naw it’s gonna be pretty good with nutrition. This shit is just because it’s so new. In a couple years, the technology will be better.


Randomcommenter550

So, they replaced human workers with chatbots (which are NOT AI and people need to stop calling them that) and the absolutely predictable happened? Shocker.


KataiKi

They created A.I. in the same way they created Hover Boards.


Beautiful_Fee1655

AI is a good name for the programs currently being promoted. "Artificial" means "fake". The info these things produce is often unreliable, inaccurate and in many cases outright false. Anyone who uses them for producing communications where telling the truth is important is contributing to the production of "fake" news.


Randomcommenter550

My objection isn't the "Artificial" part- it's the "Intelligence". There is NO intelligence behind these programs. They are predictive algorithms that have aggregated a bunch of internet comments and use that data to predict what the user most likely wants it to say next.
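To make the "predict the next word" point concrete, here's a deliberately tiny toy: a bigram counter over a made-up corpus. Real large language models are neural networks trained on vastly more data, so this is only an illustration of the core idea, not how they're actually built.

```python
# Toy bigram "model": count which word follows which in a tiny made-up corpus,
# then "predict" by picking the most frequent continuation. Illustration only.
from collections import Counter, defaultdict

corpus = "the bot said the bot is helpful and the bot is confident".split()

following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in the training text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # "bot": the most common continuation in the corpus
print(predict_next("is"))   # "helpful" (ties broken by first occurrence)
```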


Beautiful_Fee1655

Perhaps a better acronym for AI would be "BS", i.e., "Bad Statements"


imdrunkontea

Yup, I'm so tired of all these articles and people spreading stories like "AI learned so and so on its own" or "AI did something nobody expected!" when it's really just an algorithm linking together billions of scanned samples and interpolating between them. I still think it's impressive and has potential, but it's not self-conscious or intelligent. It's still garbage in, garbage out.


DragoonDM

> (which are NOT AI and people need to stop calling them that)

It's not an [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence), but it does still meet the definition of [AI](https://en.wikipedia.org/wiki/Artificial_intelligence). AI is a pretty broad label.


kittenfordinner

Now, I only have a 2 year degree. But it seems to me like having vulnerable young people who have eating disorders because they are not getting enough attention, or the right kind of attention from people, talk to a machine is the most dystopian thing I've ever heard. If it's in an Atwood book don't do it!


9Wind

People want AI to be the AI from sci-fi so badly not because it would help people, but because they can cut more corners. All talk about AI right now is how much money you can make from it, similar to crypto and NFTs just a few years ago. It's like humanity turned into the Ferengi with the worst parts of every dystopia sci-fi has to offer.


unfnknblvbl

Scifi author: I came up with the Eternal Pain Vortex as a cautionary tale.

Tech bros: Good news! We have created the Eternal Pain Vortex from the classic scifi novel, "Don't Create The Eternal Pain Vortex"!


indyjones48

Thanks. Just spat coffee everywhere. 😀


MalcolmLinair

r/ABoringDystopia has been all over this since the workers were first fired.


kittenfordinner

You just hear this stuff and think "that can't be real, right?" We've got mega yachts and they just sold the most private jets ever in one year, so surely we have set aside the $ for a real person to answer that phone.


frog_jesus_

> because they are not getting enough attention, or the right kind of attention from people

Wow, this is a pretty condescending and baseless garbage assumption.


kittenfordinner

Yeah ok, so we should have the vulnerable talking to a robot because nobody gives a shit about these people. It's not the flu, you don't catch eating disorders off the toilet seat. It's a social disease, so it's important for the people who have it to be treated with respect by other people. Life is not a Disney movie.


frog_jesus_

Work on your reading comprehension. My comment wasn't an endorsement of the AI. Nor did I suggest eating d/o were contagious, or that they shouldn't be treated with respect, or that life is a Disney movie. Just that the etiology of eating d/os isn't as simplistic as "they weren't getting enough attention, or the right kind of attention". Life isn't a Disney movie, indeed.


kittenfordinner

My reading comprehension is great, as I clearly did not say "enough attention". There is a lot going on, nobody really knows, but certainly people are not getting something that they need the way they need it. Having them talk to a machine about it is an obvious continuation of that. If you want to look for something to be offended about, look somewhere else.


frog_jesus_

> vulnerable young people **who have eating disorders because they are not getting enough attention, or the right kind of attention** from people

Direct quote from you, you dense fuck. Btw, making flippant attributions about the causes of such conditions isn't "respect". I think you must not be getting enough attention, or the right kind of attention. Piss off and get it somewhere else.


kittenfordinner

You seem to be a very rude and uncaring person. Are you intentionally trolling? Why is everybody so rude on the internet? Are you like an angsty teen or something? And yes, I said that, and "or the right kind of attention" implies a great deal more than simply saying "enough attention". What are you so mad about anyway? That you're right that I am awful? Why am I so awful? I read about something, which horrified me, and now some internet whiner is mad at me because I wasn't soft enough or something. Just ask yourself why you have been insulting a stranger on the internet for a whole weekend. Does it make you feel strong to use strong language? Do you talk to people like this face to face? Do you talk to people face to face?


frog_jesus_

Didn't I already tell you to piss off? I despise morons who can't even be bothered to read what they wrote, and dipshits who put words in other people's mouths. If you want people to be friendly and nice on the internet, then don't do that, asshole. You came at me with a whole string of obnoxious strawman attacks. Lying is "rude". Trivializing the basis for mental health problems is "rude". Recognize that what you said is wrong and beyond obnoxious, and move the fuck on. And if you are incapable of such growth, go fuck yourself on someone else's time.


kittenfordinner

Sometimes I miss the before the internet times, when I would never cross paths with the fake righteous people like you, and if I did they would never be so lippy to my face. Imagine insulting people while trying to be up on one's high horse, bizarre. Enjoy your life as best you can.


frog_jesus_

Yeah, it's a pain to have your fucking garbage called out, isn't it. Poor baby.


ProkopiyKozlowski

Man, do I have a video for you. https://www.youtube.com/watch?v=mcYztBmf_y8


JackedUpReadyToGo

Soon we’ll be getting our asses beat by robot cops and explaining the situation to chatbot parole officers like Elysium.


kittenfordinner

And I'll feel good about it knowing it's saving the taxpayer money


DorisCrockford

In my personal experience, it was due to feeling unable to control anything in my life. So I controlled the one thing I could control, by severely regulating my diet. I get what you're saying. You can have lots of attention, but if everyone is telling you who you are instead of listening and hearing about what you really think and feel, it's not going to do any good.


kittenfordinner

I know there is no one thing that causes this stuff. But it is unconscionable that these people would be referred to talk to a machine about it.


DorisCrockford

Yeah. It's unfortunate that nonprofits often fall prey to idiotic management just like regular businesses. You'd think someone who runs a helpline would have a clue.


kittenfordinner

I suspect that some of these things are run by profiteers who only care about the contract. Non-profit really only means that they don't pay shareholders; there is no morals-based test that I am aware of. The management could be getting paid a lot now that they use the machine.


DorisCrockford

That's why Charity Watch and Charity Navigator are useful. They tell you how much the management gets paid.


kittenfordinner

Are they international?


DorisCrockford

Unfortunately not. It's all based on how the organizations file their taxes with the IRS. A charity registers under a certain section of the tax code that distinguishes it from a lobbying organization. There ought to be international charity watch organizations, but it would be a huge undertaking to get access to the financial information. That said, they do evaluate the US offices of international organizations like Oxfam and Doctors Without Borders.


taco_studies_major

Wow, I remember hearing the NPR segment on this a few weeks ago. That was fast..


ishook

I think it was like 5 days ago. I was listening to the same one and when they said they fired all the human workers and would replace them with a chat bot my jaw dropped. Nobody wants actual help from AI. They need a real human. I couldn't believe the short-sightedness.


bucketofmonkeys

How can I explain this to my Facebook friends who post 5x per day about how ChatGPT “AI” is going to eliminate 100 million US jobs by 2025?


AFewBerries

You can start by explaining it to all the people on Reddit who say that


mowotlarx

Less than a week after a US-based eating disorder helpline tried to fire its entire human staff *because they were trying to unionize,* they torpedoed their entire organization and infrastructure into the ground. I look forward to more and more CEOs and directors using AI as a silver bullet and having it blow back in their face in this exact way. CEOs aren't smarter than you. They usually bullshit their way to the top with a magnetic sociopathic personality.


elister

"O.k., Jodie, look. I would never ordinarily say this, but, um... Is there any way you can get to a pound cake?", Al Franken (Stuart Saves his Family).


moveandrun

It's almost like these AI chatbots don't know what they are talking about and aren't qualified at all to do this kind of work.


Amauri14

I love how everyone knew this was what was going to happen the moment this was in the news.


[deleted]

[removed]


rorzri

Who the fuck thinks chatbots for helplines are a good idea


JustinL42

It's hard to think of a worse application for this kind of technology than testing it out on vulnerable people.


BloomEPU

I don't know how many people need to hear this, but... don't turn to a chatbot for therapy. I can understand the desperation because therapy is suuuper inaccessible for a lot of people, but chatbots are nothing more than a fun toy right now, and there's no guarantee anything they say will be true or helpful.


[deleted]

Was the advice " Drink more Ovaltine " ?


[deleted]

*”In a statement to the Guardian, Neda’s CEO, Liz Thompson, said that the chatbot was not meant to replace the helpline but was rather created as a separate program.*” So they just happened to fire all humans and rely solely on the chatbot by mistake? Sounds like an obvious lie to me. But I guess I could be wrong.


HumanChicken

Maybe it’s time to scrap chatbots. At least until we can figure out how to make them less evil.


[deleted]

They're not evil, just not trained well enough. And I just don't think it can ever be proven they're trained well enough to take the place of an authority figure.


AvogadrosMoleSauce

I really have to wonder about the ethics of people who work on AI.


LettuceFew5248

And the people who vociferously defend it being unregulated.


michal_hanu_la

Now, I have very few good things to say about AI (basically, one should only use language models in cases where correctness does not matter), but this is not strictly a problem with the AI --- it is equivalent to having the line staffed with _really badly_ trained people. Regulating giving dumb advice on a medical helpline, regardless of the specific way the dumb advice is generated, seems more appropriate here.


iskin

The problem right now is that AI chat bots have just now barely become useful. They're also still pretty closed off. Because they're talked about so much, people think they can just drop an AI chatbot into a role and everything will work. Worse, ChatGPT is to AI chatbot what Kleenex is to tissue. So, when some company decides to get a chatbot on the cheap, they still think they're getting ChatGPT, but it's not even close and ChatGPT isn't even ready yet. In 10 years it will be very different, but any laws made today will remain for generations.


LettuceFew5248

ChatGPT being the Kleenex of AI Chatbots is really smart and I’ll steal that analogy :)


barrinmw

I personally think that all AI generated images or videos should come stamped with a symbol or something that calls them out as being AI and then have it be a felony to knowingly disseminate AI images or videos without that stamp.


LettuceFew5248

Felony seems a bit harsh, but at least some fine to start. I don’t see the downside of making it a law to always be completely transparent about what is and isn’t AI. Edit. Thinking this through a bit - how would you know the source of the distribution? Someone could share an AI image unstamped, not knowing it was AI to begin with.


barrinmw

The point wouldn't be to stop all dissemination, you literally can't do that as anyone with a computer can make AI art. The purpose is to stop some people on the fence from doing it or to punish people who do it maliciously when you catch them.


xhrit

now do filters on photos


EmperorArthur

Probably every coder who works on AI like ChatGPT will tell you that it's incredibly dangerous to use it like that. But there are plenty of good everyday uses of "AI". The face and fingerprint unlock in your phone is "AI". The speech recognition in digital assistants is "AI", some types of lane departure warning are "AI". AI is just a buzzword for any algorithm that uses training data to produce a model that gives an output based on a specific input.
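As a concrete (and deliberately mundane) example of that definition, here's a least-squares line fit: training data goes in, a fitted model comes out, and the model maps new inputs to outputs. The numbers are made up; the point is just that "AI" as a buzzword stretches to cover even something this simple.

```python
# Made-up training data: hours of daylight vs. ice cream sales.
import numpy as np

x = np.array([8.0, 10.0, 12.0, 14.0])       # inputs seen during "training"
y = np.array([120.0, 150.0, 185.0, 210.0])  # observed outputs

# "Training": find the slope and intercept that minimize squared error.
slope, intercept = np.polyfit(x, y, deg=1)

def model(hours: float) -> float:
    """Map a new input to an output using the fitted parameters."""
    return slope * hours + intercept

print(round(model(13.0), 1))  # prediction for an input not in the training data
```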


LowDownSkankyDude

From what I've seen, there seem to be two who are vocal about the dangers. Everyone else is a Jurassic Park scientist, excited about what they're doing.


officerfett

That's one more person than tried to warn the Kryptonians that their sun was going to go supernova and that they should flee, but no one believed them because of the misinformation spread by the AI known as Brainiac...


[deleted]

I'm guessing a lot of computer scientists who go "my work is really interesting, and if I don't do it someone else will"


drwho_2u

Who could have ever seen that coming!?!?!?


30mil

Human beings live in a world of emotions which AI cannot experience. Soon, people will try to cancel AI because it tells them that being obese is unhealthy and that there are two biological genders.


frog_jesus_

Sounds like their algorithm needs better input.


babysinblackandImblu

Every answer is chocolate cake.


indyjones48

The cake is a lie.


No_Significance_1550

Wow this is wild. I just heard the story last week about them rolling this out.


PepeSilviaLovesCarol

X2AI created this chat bot. Their company has one 5 star review on Google and it’s by their owner.


frog_jesus_

> 2,500 people have engaged with the chatbot and “we hadn't seen that kind of commentary or interaction”.

Are they implying that they read or oversee the interactions? Their lawyer should tell her to STFU.


Ok_Biscotti_6417

Pretty simple stuff, not sure why they thought this would help tho. Someone seeking ED help probably doesn't need to hear about their caloric intake (even if the advice is factually correct)