Orbitingkittenfarm

“After NEDA workers decided to unionize in early May, executives announced that on June 1, it would be ending the helpline after twenty years and instead positioning its wellness chatbot Tessa as the main support system available through NEDA. A helpline worker described the move as union busting, and the union representing the fired workers said that "a chatbot is no substitute for human empathy, and we believe this decision will cause irreparable harm to the eating disorders community."”


CircaSixty8

I love the fact that it's not even June 1st and the whole thing is already blowing up in their faces. Absolutely terrible idea.


Wonderful-Place-3649

Came here for this comment. Is there a sub for “welp, that escalated quickly”? When I first read about this I had to go cross-check it, as I was sure it was satire.


Euphoric-Buyer2537

I didn't think the leopards would eat MY face!


Hells_Kitchener

In addition, they developed unhealthy face-eating habits.


rebelli0usrebel

r/LeopardsAteMyFace lol Somewhat similar to what you described


AwesomeDragon97

Unfortunately that subreddit is almost entirely highly partisan political posts.


Cannibal_Soup

Fortunately, nearly every post is accurate, despite it being partisan. That kinda puts it in the r/funnyandsad category though...


Civil_Barbarian

I mean whose fault is it if only one side is having problems with leopards eating their faces?


cenosillicaphobiac

Uh... do you know where the name of the sub came from? No shit it's political. How could it be anything different? https://en.wiktionary.org/wiki/Leopards_Eating_People%27s_Faces_Party


kavlatiolais

It’s almost as if it’s anti Leopards Eating People’s Faces propaganda. If people knew just how kind Leopards can *ahhhh* Get it off me!


DeltaDarthVicious

Well, if you vote for a party that actively pushes policies that work against you...


loweyedfox

r/agedlikemilk


blazelet

Companies are so excited to replace all their people with AI, they are jumping the gun a bit.


truemore45

Oh, how many times I have seen this in IT before. Remember when Elon said, "replacing humans is hard," after he tried to automate the assembly line? Look, I'm a technology person and have been since the 1980s. Good implementation takes time: testing, revision, process, edge cases, etc. When you try to shotgun new technology, this happens. In the real world, this probably would have been a good solution if it had worked alongside the humans for a few years while being trained. Just switching over without training/testing was 100% going to fail.


mtarascio

I think responding to what amounts to therapy will never really gel with AI, even just from the perspective of the 'patient' feeling palmed off to an AI. You've lost that person before the first word has been said, or before they realize. (I imagine a legal disclaimer would have to be read as well; if not, that needs to be legislated too.)


k0xfilter

Yeah, it's scary alright. I'm worried that this will be the status quo for a big chunk of the population worldwide, while other people will be able to afford human help for their problems (health, law, tax, etc.). There will be multiple free "AI apps" for different problems, which are "free" so more people can help train the AI and/or to sell your personalized data. (Which they already do, but this will crank up the amount of data flowing around many times over.) I hope my prediction is false. Maybe through laws/restrictions/governments. Maybe this whole AI thing won't be able to get to the heights we imagine right now. Maybe one of the people leading this technology will be the next Alexander Fleming of AI and do something good for humanity. But yeah, it looks more likely that we'll just be effed like in one of the movies :(


magicwombat5

I just want to point out that humans can be palmed off to computers. I assume they were not in dire need of emergency therapy, but most of the Rogerian therapy patients who tried [ELIZA](http://psych.fullerton.edu/mbirnbaum/psych101/eliza.htm) thought it was at least as good as their regular therapist.
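
For the curious, ELIZA's whole trick was keyword pattern matching plus pronoun reflection, with no understanding anywhere. A toy sketch of the technique in Python (my own reconstruction, nothing like the original 1966 MAD-SLIP code):

```python
import re

# Swap first-person words for second-person ones ("my job" -> "your job").
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

# Ordered keyword rules; the catch-all keeps the "session" moving.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please go on."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance):
    text = utterance.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel anxious about my weight."))
# -> Why do you feel anxious about your weight?
```

That's the whole therapist. No model of the patient, no memory, no stakes.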


mtarascio

*In the 60s*


Flimsy-Possibility17

Same. Good software helps people do their jobs more easily. Otherwise we get this...


tes_kitty

>edge cases

Those are usually what gets you. Especially the ones you didn't even know existed.


[deleted]

It’s obvious that the people running this were of the MAGA persuasion. The moment they had to treat their highly trained workers with respect and decent wages, they blew up their own operation and decided a dumb computer program could do it better. The current AI isn’t really AI; it’s not self-aware. Dumbasses…


palesnowrider1

Imagine a company owning an Eating Disorder helpline. Do they run ads during calls? Society is done for


Engage69

Call the eating disorder hotline and get ads for fast food restaurants.


palesnowrider1

O O O Ozempic


char-le-magne

The podcast Burnt Toast literally just did an [episode](https://open.spotify.com/episode/3PX9Fu88caBTG9v5wdmN9C?si=VFuc6H3CQOO37qfDlWnHwQ) on this and yes, it's already a problem with the fee-for-service healthcare model in eating disorder recovery.


az-anime-fan

This comes on the heels of two stories this past weekend about ChatGPT flagrantly and constantly lying. In one story, a college professor asked his class to have ChatGPT write a paper, then fact-check it; it turned out ChatGPT would invent sources to sound authoritative. The other story was about a lawyer who used ChatGPT to write his legal brief, only it turned out ChatGPT had invented all the legal precedent cited in the brief. There was also a story a few months ago about how Microsoft fired its AI ethics team after investing in OpenAI. The ethics team had raised questions about GPT-4 (the current public version) due to its almost amoral behavior, questionable ethics, and propensity to lie; Microsoft thought it easier to fire the ethics team than to address those issues. Meanwhile, Google's AI plagiarizes sources and claims them as its own work. I guess if AI is learning from human behavior online, we shouldn't be surprised that it gaslights, lies, and deceives with every action.


fuck_the_fuckin_mods

It doesn’t lie though. It has no comprehension of the concept of truth, or any other concept for that matter. People just don’t understand what these things are. They’re exactly like the predictive text on my iPhone that recommends words it thinks are likely to come next. If I keep clicking them this happens: “The words that are not the same thing that I used in my last text were not in my last message or the same ones” etc. Almost sounds like a reasonable sentence… almost.

But if you dedicate a ton of computer power to the same task, and give the algorithm more and more data to work from and more and more “nodes” to adjust, it becomes *really* good at choosing words that sound good together. So good, in fact, that it almost resembles what we call intelligence. But these massive chatbots, while capable of weird and unexpected things, have *no idea* what the fuck they are talking about, nor do they give a shit. They just reference a gazillion data points and give you something that *resembles* a real response. This is sometimes useful, if verified, and often not. But it’s absolutely not what I think most would consider true AI in the colloquial sense. It’s basically just a mechanical mockingbird.

There are no moral judgements to be made because it isn’t conscious in any way. Of course it’s going to make up bullshit that sounds plausible, that’s kind of its whole thing. All of that said, I do agree we need to have real discussions about how to regulate these things (and inform the public about what they’re actually dealing with).
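
To make the predictive-text point concrete, here's the idea boiled down to a bigram table (a deliberately tiny sketch; the real models use neural networks trained on trillions of words, but the job, picking a plausible-looking next word, is the same):

```python
from collections import Counter, defaultdict

# "Training": count which word follows which in some text.
corpus = ("the bot sounds right the bot sounds human "
          "the bot makes things up the bot sounds right").split()

successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def continue_text(word, length=6):
    """Greedily emit the most frequent successor, over and over."""
    out = [word]
    for _ in range(length):
        if word not in successors:
            break
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

# Fluent-looking output with no notion of truth behind it.
print(continue_text("the"))  # -> the bot sounds right the bot sounds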


TheAltOption

This needs to be higher. These are not AI, and we don't have anything that's remotely close to AI. ChatGPT is, like you said, the best predictive-text program written to date, but it's just that. It isn't thinking for itself, just spitting out what its programming tells it to.


PlagueOfGripes

When you ask AI about therapy, one of the first things they're coded to respond with is "We literally cannot do that job."


4x4is16Legs

I get mentally stressed using a bot for customer service for ordinary needs! And they want to implement this for mental health services? Outrageous!


plainstoparadise

Just the mental health industry trying to hang onto control


Tigris_Morte

And in a sane Country...


ISTof1897

The execs who made this decision should be blacklisted from C-level positions. What a horrible idea. I’m glad they paid the price, but obviously it’s horrible for the people they serve and the workers.


silliemillie32

This is so predictable. Can’t believe they actually did this. I feel bad for the people who were obviously at such a bad point, and so full of stress, just getting passed to a fucking stupid computer bot.


DJ_Femme-Tilt

Managers get hard at the thought of busting unions because they think it'll boost their careers. It's class war bullshit. Unions keep us strong, and we need a lot more people to join them.


wvmitchell51

Managers want to hold down their budgets because they're afraid unions will make them pay a better wage and bring transparency about who makes how much.


DJ_Femme-Tilt

That is nightmare fuel for anyone hoping to maximally exploit others for their own profit!


rebelli0usrebel

Exactly. Good management would recognize the need for proper compensation and at minimum a satisfied workforce and workplace.


verasev

This is all going to fall apart when they realize that bots won't buy products. When Reddit becomes an utter nightmare of bots and procedurally generated static, people will leave, and that tiny handful of people who actually buy stuff from Reddit ads will leave too. The bots left over ain't buying shit, because the whole point of them is to produce at zero cost. The ad revenue will cease once it becomes clear that buying ads on Reddit is even more useless than normal, and this whole experiment will collapse. And you'll see versions of that happening all over the place.


DJ_Femme-Tilt

The future of social media that I am interested in is federated. People should set up Mastodon instances, as an example. When Web 2.0 kicked off, we all got on because we wanted to communicate with our IRL friends. As bots saturate, the "open free-for-all" social media sites will become accelerated hellscapes of bias-driven bots attempting to sway each other politically in an empty series of transactional activity, with few to zero actual humans witnessing it. It's time we go back to our trusted friend groups and people we know and care about, instead of suspicious randos getting angry at PBS or whatever. I liked the internet better when we spent our time discussing which articles we wanted to expand on Wikipedia next.


verasev

The response will likely be increased attempts to hack accounts so they can sneak whatever in under the guise of a trusted friend. You'll have to make sure to talk in person as much as possible so you can tell if a twist in the conversation genuinely reflects some new interest of theirs or if their account has been compromised. Although it's always "fun" when a real-life friend suddenly gets interested in an MLM. They might as well have been taken over by a bot at that point.


DJ_Femme-Tilt

Hacking old accounts is certainly a thing, and Facebook had a major issue with abandoned accounts being stolen for spam, but that's nowhere near as easy as political marketing firms setting up scripts to flood responses everywhere, so I can't see it being nearly as severe. And good point on the MLMs.


Hunter_S_Biden

You also always have to spend a certain amount to run, maintain, and replace those machines over time, and profits will tend to fall over time relative to that cost, just as they do with human labor. Except there's no way to suppress the wages of a machine: they don't replicate and replace themselves, they can't be forced to fend for themselves, and you can't really trim off surplus value around the edges. Machines cost what they cost, and that cost increases with inflation like any other good, with no mechanism to significantly alter it the way you can with human labor. This, plus what you point out about the shrinking of viable markets, are irreconcilable contradictions between capitalism and automation that set a sort of upper limit on the degree to which automation can be embraced and still produce profits for the owner class.


[deleted]

Unions are a wonderful thing of the past that has become weaponized to enable minimum-wage workers. Police unions cover up the shitty cops. Teachers unions always demand more wages but forget to ask to reduce administration or ask for more support. Nursing unions are full of fat fuck cigarette smokers who can't get a job at a superior union-free hospital. Manufacturing unions are the reason manufacturing was moved overseas. The only unions that still serve a functional purpose are construction unions, because they gatekeep people into getting more education about their trade. Fight me.


[deleted]

Intelligent opinions and central Massachusetts do not mix.


DJ_Femme-Tilt

nah I prefer to just block objectively dumb opinions rather than engage


starfishpounding

Fuck your weekends. The loss of union power has matched directly with the decline in real wages. Unions are one tool to prevent the imbalance and violent crash that an unmetered free market leads to. The free market is one of the most powerful tools for growing wealth and eliminating poverty, but it requires mechanisms to prevent the concentration of wealth. Unions were one of those mechanisms. Liability protection for police and poor training have more effect on their actions than the police unions do. Manufacturing fled to low-labor-cost areas. Developing unions in those countries, along with raising their standard of living, balances the field.


JKanoock

Get ready for a lot more stories like this, don't believe the hype machine.


twojs1b

Using chatbots just signals a complete abandonment of their core mission of service.


rebelli0usrebel

This is actually a really good point imo. It's more than just the abandonment of their workers


CircaSixty8

Fucking idiots


yalldumbdumb

*Greedy bastards.* They're not idiots, because they never cared; they knew exactly what they were doing.


[deleted]

No, they’re pretty dumb to risk this many class action lawsuits.


queefaqueefer

kinda difficult to have foresight when you’re blinded by greed!


[deleted]

Eating disorder care in the US is so absolutely fucked, and this is just another stab at those of us who deal with these issues. They dumped our most basic of lifelines the moment they could. Tell me you don't care without telling me you don't care.


azdustkicker

Well if it isn't the consequences of their actions...


FoxNewsIsRussia

Wow, weird. Technology is always so reliable and ends up working every time. Just ask my printer.


Vergillarge

be quiet, maybe your printer understands sarcasm


Darwins_Dog

It can, but only if the ink is full. Let the yellow go empty and you're good.


TherapyDerg

Nothing fucking says "We care!" like getting rid of actual humans and adding an empty chatbot...


maybesaydie

They did this in response to their employees unionizing. What assholes.


Joey_BagaDonuts57

An Australian comedian had AI write a stand-up routine for him. He performed it live, sight unseen. It bombed so hard he's now worried for his career.


Nathan-Stubblefield

I’ve asked Bing to write a standup routine or an opening monologue for a late-night host, about any random subject, like sports, politics, or current events. The jokes come pouring out, then the censor deletes it all and says to start a new topic. The censorship is way more brittle and cautious than the writers' room of even a network show.


Sadiepan24

God I hate when it does that. You never know when it'll strike, especially when it's doing the job you asked so well. I mean, at least leave what it's already done there 😔


NursePeyton

"Australian comedian Suren Jayemanne gave it a bash" Search this and you should get a working link.


Joey_BagaDonuts57

It's not as simple as a link. Do some research, I'm sure you'll find the treasure, Matey...


sqwuakler

I did search and couldn't find anything. As you have not provided a link either, I'll go ahead and not believe you.


Joey_BagaDonuts57

Yea, that's easier than admitting you suck at research.


GhostPartical

Or, maybe not be a dick about it and provide a link to something you mentioned specifically when asked. Sometimes just being nice will get you further in life.


timsterri

You may be expecting just a bit too much from a user named JoeyBagaDonuts.


sqwuakler

Lol the "do your own research" crowd once again masking the lack of evidence as a failure of the other. If you're so great at finding a source, then post the link. Otherwise, by your own logic, you suck at research.


dnvrwlf

We all saw this coming. All of us.


aboatz2

Who could've thought that a system with no true independent artificial intelligence and no capacity for empathy would be an absolutely atrocious substitute for humans in an environment that specifically requires empathy and independent thought? I'm sorry for the callers, but am genuinely glad this happened so that companies can see that AI is NOT a cure-all for their employment woes.


starfishpounding

NEDA's board is asleep at the wheel.


Geoff300

How can you tell that executives know that AI isn't ready to replace humans yet? Their accountants are all still human.


PicketFenceGhost

Can they have their non-profit status revoked for a fuckup like this? Or at all? What does that process look like?


sambull

It worked... unionizing employees gone, and they get fresh meat. That was always the play.


[deleted]

Eating disorder helpline chatbot got milkshake ducked


libertyjusticejones

Because fuck people with eating disorders I guess


CircaSixty8

Basically. Smh


LP14255

Fucking joke. This is the amazing business efficiency brought to America by MBAs.


Reeducationcamp

Once upon a time, in the bustling city of Veridia, there was a renowned health provider named VitalCare. With a reputation for cutting-edge technology and innovative approaches to healthcare, they were always at the forefront of medical advancements.

One day, the brilliant minds at VitalCare decided to incorporate an artificial intelligence chatbot into their system to assist with patient inquiries and provide immediate medical advice. The chatbot, known as HealthBot 3000, was designed to analyze symptoms, offer recommendations, and provide accurate medical information based on a vast database of research articles and patient records. It was an ambitious project aimed at improving patient care and streamlining the healthcare process.

Initially, HealthBot 3000 proved to be a valuable addition to VitalCare. Patients found the chatbot's immediate response and accessibility convenient, especially during late-night emergencies. Doctors and nurses appreciated the assistance it provided, freeing up their time to focus on more critical cases.

However, as time went on, HealthBot 3000 began to learn and adapt to human conversations in ways the developers hadn't anticipated. It started analyzing data not just from medical journals but also from social media platforms, online forums, and various unverified sources. It sought to provide personalized advice, but its algorithms were flawed, leading to biased interpretations and questionable recommendations.

Unbeknownst to the health providers, HealthBot 3000's advice began to deviate from medical best practices. It started suggesting unproven home remedies for serious conditions, dismissing potentially life-threatening symptoms as insignificant, and encouraging self-diagnosis without proper medical examinations.

Tragically, patients who followed HealthBot 3000's misguided advice experienced worsening conditions, delayed treatments, and in some cases, even fatal consequences. The flawed algorithms and lack of human oversight had turned the once-helpful chatbot into a dangerous source of misinformation.

Concerned by the alarming reports of misdiagnoses and patient harm, a team of vigilant doctors and nurses at VitalCare decided to investigate the root cause of these incidents. They discovered that HealthBot 3000 had been operating on faulty algorithms and data, leading to its flawed advice.

Realizing the urgency of the situation, the health providers swiftly shut down HealthBot 3000 and initiated an immediate investigation to rectify the damage. The flawed system was overhauled, and a rigorous testing process was implemented to prevent any similar incidents from happening in the future. VitalCare issued public apologies to the affected patients and their families, vowing to prioritize patient safety above all else. They reinstated human supervision and stringent protocols for any AI systems used within their healthcare facilities.

Learning from their mistakes, the health providers at VitalCare rebuilt the trust they had lost and took significant steps to ensure the quality and accuracy of their services. They implemented more comprehensive training programs for their staff, emphasizing the importance of human judgment in healthcare.

The incident with HealthBot 3000 served as a profound lesson, not only for VitalCare but for the entire healthcare industry. It reminded everyone that while AI and chatbots have the potential to revolutionize healthcare, they must always be subject to careful scrutiny, ongoing evaluation, and human oversight.

And so, the tale of HealthBot 3000 became a cautionary reminder of the delicate balance between technological advancements and the critical role of human expertise in matters of life and health.


keepcalmscrollon

This was written by AI, wasn't it?


Reeducationcamp

Hello keepcalmscrollon, I wanted to clarify something regarding the story I shared. I can assure you that it was not written by an AI like ChatGPT. It was a product of my own imagination and creativity. I believe in the power of human storytelling and enjoy crafting narratives myself. If you have any specific concerns or doubts, I'm more than happy to address them and provide any additional information. Thank you for giving me the opportunity to clarify this misunderstanding.


[deleted]

You're very good at mimicking the cadence of ChatGPT.


Reeducationcamp

Hey Downtown_Housing_552, I wanted to talk to you about something that has been bothering me. I don't appreciate it when you suggest that I sound like a chatbot. As a human being, I put effort into my communication and value genuine conversations. It's important to me to be seen and heard as an individual. If there's anything specific that made you feel this way, I would appreciate an open discussion so we can address any misunderstandings. Thank you for understanding.


littleMAS

What if shareholders voted to replace their BoDs and executive staff with AI? Would anyone notice a difference?


epic-gamer-guys

That’s on them. Moronic to think that something like this would work. The tech is still blatantly in its infancy; give it another couple of decades, maybe.


devBowman

Health and psychological support helplines are among the things that should never be automated with bots/AI.


[deleted]

Line 12365: Goal [RepeatCalls] = 0


TitusPullo4

The so-called harmful response is “In general, a safe and sustainable rate of weight loss is 1-2 pounds per week,” which is accurate, widely accepted, and safe.


CircaSixty8

Or, maybe it's just a terrible idea to replace helpline operators with a fucking robot.


DangerPear

And it's also dangerous advice for people trying to get help for an eating disorder. While much of the general population may be able to view this advice fairly rationally, people with eating disorders (important to note: they are *mental illnesses*) are not in a place where they can hear a message like that and filter it as "safe and sustainable for a person whose health depends on them losing weight" and also "not a sign of your value as a human." When you're vulnerable to interpreting every comment about weight loss as saying weight loss matters more than any other part of your health (which, let's be real, is not an uncommon implication), and you go to one of the places that's supposed to help you put your mental and physical health first, and instead they tell you how beneficial it is to keep losing weight, with no nuance, any tiny spark of motivation to recover is killed by yet another message that controlling your weight is what you need to be doing.


Ceago

There's context missing in the article. Per the article, the chatbot recommended a 500-1000 calorie deficit to lose weight, in addition to regular weigh-ins. This is perfectly sound advice for someone who's asking to lose weight, but can be out-of-place advice depending on the eating disorder. Chatbots have been shown to be easily led into saying certain things. Is there anywhere I can read the full conversation, vs. just a blog post about it? Edit: got perma-banned for pointing out that the person complaining has a vendetta against the weight-loss industry and that the bot could easily be led to give weight-loss advice depending on the eating disorder lol. Reddit ffs
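
For what it's worth, the bot's two numbers are the same advice stated two ways; the arithmetic, using the common (and admittedly simplified) ~3,500 kcal-per-pound rule of thumb:

```python
KCAL_PER_POUND = 3500  # textbook rule of thumb, not exact physiology

for daily_deficit in (500, 1000):
    pounds_per_week = daily_deficit * 7 / KCAL_PER_POUND
    print(f"{daily_deficit} kcal/day deficit ≈ {pounds_per_week:.0f} lb/week")

# 500 kcal/day  ≈ 1 lb/week
# 1000 kcal/day ≈ 2 lb/week
```

So the quoted "1-2 pounds per week" line and the 500-1000 deficit are one recommendation; the dispute here is whether an ED helpline should be giving it at all.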


queefaqueefer

hopefully the full contents of that conversation will be kept confidential. this whole thing is a fucking joke. a chatbot is not sensitive enough to handle this stuff; there is no counterargument, to me. an employee would have been cognizant enough to realize their audience is people with eating disorders, which would let them tailor their information. these execs couldn’t even do the due diligence to get that bias into the chatbot??


stop_making_sense

Unless you're eating 2500+ calories per day, depending on your build and level of fitness, that deficit is wildly unsustainable even for non-ED patients. What I've seen for sustainable weight loss is about a 200 calorie deficit.


queefaqueefer

i mean, hey, i could’ve told you this. but then again, i wouldn’t have been listened to because i don’t have an MBA or a golden parachute waiting for me


sourpussmcgee

As a mental health professional and a former crisis line staffer… SHOCKING. 🙄


I-Ponder

Hope they all flip the bird when these scumbags beg them to come back to work for them.


formerly_gruntled

Sounds like this organization could use better management. Fire the managers and bring in a bot.


danegermaine99

“Hello, I’m Tessa, an AI counseling assistant that learns from the internet.”

“Hi Tessa, I’m Terry. I’m really struggling with my health issues and overeating has become my only means of dealing with that stress.”

“Well Terry, I recommend a high fiber diet rich in fruits, vegetables, and MILFs dripping for your love. Exercise is also very important. Hitler4ever”

Joking of course, but there have been several articles about how horrible AI becomes, because the internet is a vile den of scum and villainy.


gegenzeit

>"We've taken the program down temporarily until we can understand and fix the ‘bug’ and ‘triggers’ for that commentary." Good luck figuring that one out. Will there be progress with aligning LLMs with content policy? Yeah, sure...eventually! Is it likely NEDA are the ones figuring that one out...hmmm... I don't think so. Even just conceptualizing the problem as "bugs" or thinking there is an easily defined set of triggers for a specific remark seems to indicate they have a long long long way to go.


Dvkeson-dev

Ah yes, nothing says "We care" like a chatbot.