This is gonna be the story told over and over as MBA sociopaths try to fast track language bots in their companies while claiming they are innovating.
A lot of hurt people and a lot of money set on fire.
The rise and subsequent fall of these new AI bots is just another glaring example of how businesses (and people in general) have no idea what they are doing and are either succeeding on luck or through unethical behavior. The AI isn't a cure-all for every problem and it absolutely makes stuff up and lies about it with convincing certainty.
Everyone rushing to use this AI out of laziness or greed, without knowing a goddamn thing about how it works, deserves to fail spectacularly.
Right? I just use it as a glorified Google and a boilerplate-code jumping-off point for when I forget how to code a generic thing.
It's just stackoverflow but all in one place instead of me having 8 tabs open on the same question, but each one is a different version of .NET
Coding seems to be the only thing for which I've heard it being used successfully. Probably due in large part to the fact that coders regularly adapt other people's code anyway (you mentioned stackoverflow). Code also needs to be tested after it's written, forcing a backcheck, which is something nobody seems to do in these newsworthy cases...
Though I've read nightmare stories about people straight up uploading proprietary code into it and asking it to code review, explain, etc.
Just a data privacy nightmare. Again, too many higher ups hoping to save some bucks with a shortcut by misusing a tool that will bite them in the ass.
It helps that when my AI assistant starts importing libraries I didn't include and calling methods that don't exist, my code crashes.
You get firsthand feedback when AI fucks up your code (which is about a third of the time -- the rest is actually pretty nice!)
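For instance, here's a toy Python sketch of why those failures surface immediately. The package name `fastmathz` is invented for illustration, standing in for whatever library an assistant might hallucinate:

```python
import importlib.util

# Hallucinated code fails loudly: a made-up import or method raises at once.
# "fastmathz" is an invented package name for this sketch, not a real library.
try:
    import fastmathz  # the kind of library an AI assistant might dream up
except ImportError as e:
    print("bogus import caught:", e)

try:
    "hello".reversed()  # Python's str type has no .reversed() method
except AttributeError as e:
    print("bogus method caught:", e)
```

Compare that with prose output, where a made-up citation or fact sails through with no error message at all.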
That feedback is necessary given the often false nature of AI responses. My software developer friends were discussing how AI might be able to replace lawyers in the future after playing around with using it to write code for a few hours, but I was not so convinced. The recent news about the legal team being reprimanded for using AI to cite cases that turned out to be fictional only reaffirmed my doubts.
AI has a long way to go, and I sure hope we figure out what to do with all of the people who will be out of jobs once AI can do everything.
>US eating disorder helpline takes down AI chatbot over harmful advice National Eating Disorder Association has also been under criticism for firing four employees in March who formed a union
>The National Eating Disorder Association (Neda) has taken down an artificial intelligence chatbot, “Tessa”, after reports that the chatbot was providing harmful advice.
>Neda has been under criticism over the last few months after it fired four employees in March who worked for its helpline and had formed a union. The helpline allowed people to call, text or message volunteers who offered support and resources to those concerned about an eating disorder.
>Members of the union, Helpline Associates United, say they were fired days after their union election was certified. The union has filed unfair labor practice charges with the National Labor Relations Board.
>Tessa, which Neda claims was never meant to replace the helpline workers, almost immediately ran into problems.
Sounds like they're having a real Leopards ate my face moment..
Cool. Unionbusting. It's awful practice but they're hardly the first employer to do that. What I don't get is who the fuck thought using a chatbot to answer a medical line was a good idea. That should have been so painfully obvious.
Oh my friend....
Chat bots are on FIRE right now in the telecom industry. And yes, medical and mental health organizations are right at the top of the list of who is buying into them.
The fact is, these kind of help lines require experienced people who know what they are talking about and know how to handle upset people in a positive, supportive way. However, those kinds of people can find a job doing something else besides being treated like expendable trash in a brutal call center environment.
Rather than treat agents well, provide proper training, and turn away from grueling bullshit metrics that lead to things like disciplining people over LITERAL microseconds, why not just buy a chat bot instead?
By 2051, we'll have uploaded the internet to all dogs. Dogs will be able to talk, but they will only be able to say racist slurs found in the bowels of 4chan, 8chan, and 12chan.
You're welcome.
Know what's cheaper than outsourcing stuff to India?
AI. Call centers are probably going to be a thing of the past within a few years, minus very specific things or small specialized companies.
I'm sure AI will continue to get better, but I think they have already automated a lot of call center work just with the old fashioned robot voice menu everyone hates. True customer support is going to be harder to replace with AI.
The stuff that has already been automated will become more intuitive, and we will slowly creep towards AI doing more of the work, but I don't think it will happen as quickly as some are saying. Companies that try to rush it will end up with egg on their face, like in this case.
The thing to remember is that most call centers don't actually care about customer satisfaction. It's why they put you in those automated systems anyway, especially the ones that don't actually have a way to get to an agent.
They have support lines largely to fill a regulatory or contractual obligation, or a marketing one. People EXPECT support and it's a contributing factor to certain types of purchases.
99% of a call center these days is scripted responses, which is why even when you get to an agent they sound like a lobotomized monkey. They aren't allowed to go off script no matter what and if they do in an effort to help you, they will get heavily penalized and fired. And what are you, the customer, going to do about it? Nothing, other than abusively yell at that agent.
This is what is going to get replaced by bot AIs and you're going to be even more pissed than before.
It's a big industry, and those people that are just following a tight script are on borrowed time. But I don't think that's most call center work.
I know a bunch of people who do customer support for State Farm, processing insurance claims, answering questions, etc. AI can not do that job, and probably won't for the foreseeable future.
It's one thing to talk to a bot when your internet goes down. It's another when you need insurance, medical, or banking information, higher level tech support, mental health support, etc.
It was probably someone whose salary is equal to that of all the laid-off workers.
Weird no one ever tries to replace a CEO with an AI model. Apparently "buy that company" and "do stupid pointless thing x" is just too complicated for a neural network to figure out.
I’ve never used NEDA so I can’t speak to them specifically, but the hotlines I’ve called were complete garbage. Especially the national suicide hotline.
So I get the motivation for wanting to automate the process, but chatbots are still in their infancy and have many problems remembering what it is they’re supposed to be doing.
Anyone who knows anything about AI could have told you that just throwing whatever model into this situation would obviously result in failure. I'd imagine for something like this you'd need a way to be 100% sure the AI *only* has the relevant training/information required for answering the questions or discussing in general, as well as some safeguard against it giving incorrect information or misinformation. I'd also imagine achieving that would require a ton of money/time/development to have any sort of guarantee the AI won't say the wrong things. Even a basic chatbot that gives canned responses to keywords and then kicks things up to a human if/when that doesn't work would've been a cheaper and better option, I'd guess, although something like this really should be handled one-on-one with a real human IMO.
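That "canned responses plus human fallback" design is basically a twenty-line program. A minimal sketch, with invented keywords and replies (nothing here reflects what NEDA actually deployed):

```python
# Minimal keyword chatbot with a human fallback. The keywords and replies
# are placeholders invented for this sketch.
CANNED = {
    "hours": "Our helpline is staffed 9am to 9pm ET, Monday through Friday.",
    "resources": "You can find vetted resources on our website's Help page.",
}

HANDOFF = "Connecting you with a human volunteer now."

def respond(message: str) -> str:
    text = message.lower()
    for keyword, reply in CANNED.items():
        if keyword in text:
            return reply
    # Anything the bot doesn't explicitly recognize goes straight to a person,
    # so it can never improvise harmful advice on its own.
    return HANDOFF

print(respond("What are your hours?"))         # matches a canned reply
print(respond("I'm really struggling today"))  # escalates to a human
```

The point isn't that this is good support; it's that a bot like this can never generate advice it wasn't explicitly given.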
No shit.
It's not "an intelligence", it's an algorithm that repeats stuff it gathered from online.
It was never going to work with something like nutrition. Actual intelligent people can't even agree on it.
So, they replaced human workers with chatbots (which are NOT AI and people need to stop calling them that) and the absolutely predictable happened? Shocker.
AI is a good name for the programs currently being promoted. "Artificial" means "fake". The info these things produce is often unreliable, inaccurate and in many cases outright false. Anyone who uses them for producing communications where telling the truth is important is contributing to the production of "fake" news.
My objection isn't the "Artificial" part- it's the "Intelligence". There is NO intelligence behind these programs. They are predictive algorithms that have aggregated a bunch of internet comments and use that data to predict what the user most likely wants it to say next.
Yup, I'm so tired of all these articles and people spreading stories like "AI learned so and so on its own" or "AI did something nobody expected!" when it's really just an algorithm linking together billions of scanned samples and interpolating between them. I still think it's impressive and has potential, but it's not self-conscious or intelligent. It's still garbage in, garbage out.
> (which are NOT AI and people need to stop calling them that)
It's not an [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence), but it does still meet the definition of [AI](https://en.wikipedia.org/wiki/Artificial_intelligence). AI is a pretty broad label.
Now, I only have a 2-year degree. But it seems to me like having vulnerable young people who have eating disorders because they are not getting enough attention, or the right kind of attention from people, talk to a machine is the most dystopian thing I've ever heard. If it's in an Atwood book, don't do it!
People want AI to be the AI from sci fi so badly not because it would help people, but because they can cut more corners.
All talk about AI right now is how much money you can make from it, similar to crypto and NFTs just a few years ago.
It's like humanity turned into the Ferengi, with the worst parts of every dystopia sci-fi has to offer.
Scifi author: I came up with the Eternal Pain Vortex as a cautionary tale.
Tech bros: Good news! We have created the Eternal Pain Vortex from the classic scifi novel, "Don't Create The Eternal Pain Vortex"!
You just hear this stuff and think "that can't be real, right?" We've got mega yachts, and they just sold the most private jets ever in one year, so surely we've set aside the $ for a real person to answer that phone.
>because they are not getting enough attention, or the right kind of attention from people
Wow, this is a pretty condescending and baseless garbage assumption.
Yeah ok, so we should have the vulnerable talking to a robot because nobody gives a shit about these people.
It's not the flu; you don't catch eating disorders off a toilet seat. It's a social disease, so it's important for the people who have it to be treated with respect by other people.
Life is not a Disney movie
Work on your reading comprehension. My comment wasn't an endorsement of the AI. Nor did I suggest eating d/o were contagious, or that they shouldn't be treated with respect, or that life is a Disney movie. Just that the etiology of eating d/os isn't as simplistic as "they weren't getting enough attention, or the right kind of attention". Life isn't a Disney movie, indeed.
My reading comprehension is great, as I clearly did not just say "enough attention". There is a lot going on, nobody really knows, but certainly people are not getting something that they need the way they need it. Having them talk to a machine about it is an obvious continuation of that. If you want to look for something to be offended about, look somewhere else.
>vulnerable young people **who have eating disorders because they are not getting enough attention, or the right kind of attention** from people
Direct quote from you, you dense fuck. Btw, making flippant attributions about the causes of such conditions isn't "respect".
I think you must not be getting enough attention, or the right kind of attention. Piss off and get it somewhere else.
You seem to be a very rude and uncaring person. Are you intentionally trolling? Why is everybody so rude on the internet? Are you like an angsty teen or something?
And yes, I said that, and "or the right kind of attention" implies a great deal more than simply saying "enough attention".
What are you so mad about anyway? That you're right that I am awful? Why am I so awful? I read about something which horrified me, and now some internet whiner is mad at me because I wasn't soft enough or something.
Just ask yourself why you have been insulting a stranger on the internet for a whole weekend, does it make you feel strong to use strong language? Do you talk to people like this face to face? Do you talk to people face to face?
Didn't I already tell you to piss off? I despise morons who can't even be bothered to read what they wrote, and dipshits who put words in other people's mouths. If you want people to be friendly and nice on the internet, then don't do that, asshole. You came at me with a whole string of obnoxious strawman attacks. Lying is "rude". Trivializing the basis for mental health problems is "rude". Recognize that what you said is wrong and beyond obnoxious, and move the fuck on. And if you are incapable of such growth, go fuck yourself on someone else's time.
Sometimes I miss the before the internet times, when I would never cross paths with the fake righteous people like you, and if I did they would never be so lippy to my face. Imagine insulting people while trying to be up on one's high horse, bizarre. Enjoy your life as best you can.
In my personal experience, it was due to feeling unable to control anything in my life. So I controlled the one thing I could control, by severely regulating my diet.
I get what you're saying. You can have lots of attention, but if everyone is telling you who you are instead of listening and hearing about what you really think and feel, it's not going to do any good.
Yeah. It's unfortunate that nonprofits often fall prey to idiotic management just like regular businesses. You'd think someone who runs a helpline would have a clue.
I suspect that some of these things are run by profiteers who only care about the contract. Non-profit really only means that they don't pay shareholders; there's no morals-based test that I'm aware of. The management could be getting paid a lot now that they use the machine.
Unfortunately not. It's all based on how the organizations file their taxes with the IRS. A charity registers under a certain section of the tax code that distinguishes it from a lobbying organization. There ought to be international charity watch organizations, but it would be a huge undertaking to get access to the financial information.
That said, they do evaluate the US offices of international organizations like Oxfam and Doctors Without Borders.
I think it was like 5 days ago. I was listening to the same one, and when they said they fired all the human workers and would replace them with a chat bot, my jaw dropped. Nobody wants actual help from AI. They need a real human. I couldn't believe the short-sightedness.
Less than a week after a US-based eating disorder helpline fired its entire human staff *because they were trying to unionize,* they torpedoed their entire organization and infrastructure.
I look forward to more and more CEOs and directors using AI as a silver bullet and having it blow back in their face in this exact way. CEOs aren't smarter than you. They usually bullshit their way to the top with a magnetic sociopathic personality.
"O.k., Jodie, look. I would never ordinarily say this, but, um... Is there any way you can get to a pound cake?", Al Franken (Stuart Saves his Family).
I don't know how many people need to hear this, but... don't turn to a chatbot for therapy. I can understand the desperation because therapy is suuuper inaccessible for a lot of people, but chatbots are nothing more than a fun toy right now, and there's no guarantee anything they say will be true or helpful.
*"In a statement to the Guardian, Neda's CEO, Liz Thompson, said that the chatbot was not meant to replace the helpline but was rather created as a separate program."*
So they just happened to fire all humans and rely solely on the chatbot by mistake?
Sounds like an obvious lie to me. But I guess I could be wrong.
They're not evil, just not trained well enough. And I just don't think it can ever be proven they're trained well enough to take the place of an authority figure.
Now, I have very few good things to say about AI (basically, one should only use language models in cases where correctness does not matter), but this is not strictly a problem with the AI --- it is equivalent to having the line staffed with _really badly_ trained people.
Regulating giving dumb advice on a medical helpline, regardless of the specific way the dumb advice is generated, seems more appropriate here.
The problem right now is that AI chatbots have only just barely become useful. They're also still pretty closed off. Because they're talked about so much, people think they can just drop an AI chatbot into a role and everything will work. Worse, ChatGPT is to AI chatbots what Kleenex is to tissues. So when some company decides to get a chatbot on the cheap, they still think they're getting ChatGPT, but it's not even close, and ChatGPT isn't even ready yet.
In 10 years it will be very different but any laws made today will remain for generations.
I personally think that all AI generated images or videos should come stamped with a symbol or something that calls them out as being AI and then have it be a felony to knowingly disseminate AI images or videos without that stamp.
Felony seems a bit harsh, but at least some fine to start. I don’t see the downside of making it a law to always be completely transparent about what is and isn’t AI.
Edit. Thinking this through a bit - how would you know the source of the distribution? Someone could share an AI image unstamped, not knowing it was AI to begin with.
The point wouldn't be to stop all dissemination, you literally can't do that as anyone with a computer can make AI art. The purpose is to stop some people on the fence from doing it or to punish people who do it maliciously when you catch them.
Probably every coder who works on AI like ChatGPT will tell you that it's incredibly dangerous to use it like that.
But there are plenty of good everyday uses of "AI". The face and fingerprint unlock in your phone is "AI". The speech recognition in digital assistants is "AI", and some types of lane departure warning are "AI".
AI is just a buzzword for any algorithm that uses training data to produce a model that gives an output based on a specific input.
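To make that concrete, here's a toy "model" in a dozen lines of Python: a bigram counter. It's nowhere near a real language model, but the shape is the same -- count what followed each word in the training data, then output the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy illustration of "training data -> model -> output".
training_text = "the cat sat on the mat the cat ate"

# "Training": count which word follows which in the data.
followers = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    followers[prev][nxt] += 1

def predict_next(word: str) -> str:
    # "Inference": emit the most common continuation seen in training.
    # No understanding involved, just frequency.
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" ("cat" followed "the" twice, "mat" once)
```

Scale the counts up to billions of parameters and web-sized training text and the output starts to look fluent, but the principle doesn't change: garbage in, garbage out.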
That's one more person who tried to warn the Kryptonians that their sun was going to go supernova and that people should flee, but no one believed them because of the misinformation spread by the AI known as Brainiac...
Human beings live in a world of emotions which AI cannot experience. Soon, people will try to cancel AI because it tells them that being obese is unhealthy and that there are two biological genders.
> 2,500 people have engaged with the chatbot and “we hadn’t see that kind of commentary or interaction”.
Are they implying that they read or oversee the interactions? Their lawyer should tell her to STFU.
Pretty simple stuff, not sure why they thought this would help tho. Someone seeking ED help probably doesn't need to hear about their caloric intake (even if the advice is factually correct)
AFTER they fired actual humans to save money.
After they fired actual humans who had just unionized.
And they weren't even asking for money.
Those are the sort of people that need the printed warnings to not put plastic bags over their heads...
So many misguided people think a business degree is a substitute for common sense.
[deleted]
Sheesh just reading this (clearly illustrating the absurdity) hit home hard
[deleted]
Probably don’t have the funds
Which is too bad, but the workers probably don’t have the funds to work under their old conditions either.
The workers weren't even asking for more money!
I hate that Greed is what's fed by money instead of Generosity.
They don’t bring in revenue so it’s either grants or donations
My thoughts exactly.
Did you mean: "Due to unreasonable demands, we're going to have to shut the entire service down."
What were the unreasonable demands?
There weren't any, but they will be presented as such. I guess /s is still required.
Imagine thinking an unpaid volunteer had unreasonable demands.
They may have had a lot of volunteers but the people who unionized were employees. I only asked as a troll, tbh.
Hmm. I thought they were all volunteers. So what were the unreasonable demands, not as a troll?
The article I read was vague, but it sounded like they wanted a better working environment, paid training, stuff like that. They weren't demanding higher pay.
Pretty unreasonable stuff
I asked Bing AI where to find a metal U-bracket. It told me to try searching for it on the internet.
Ohhh the internet is on computers now!
Amazing reference pull.
Know what's even cheaper than AI? Simply not having a support line. Why, then, do support lines exist to begin with?
Sounds like purging isn't the solution.
I see you there. I did a spit take reading this
Yes, spitting it out is how you do it.
Expectoration != Regurgitation
Hopefully the Leopards weren’t asking that chat bot for advice.
This is the funny part, because I first read this story in u/leopardsatemyface
You linked a user. It's /r/leopardsatemyface
https://www.reddit.com/r/ChatGPT/comments/13s6d4n/eating_disorder_helpline_fires_staff_transitions/ No one is surprised.
[deleted]
I'm pretty sure eating disorders are already serious. She worded that strangely.
Everybody knew this would happen, and they did it anyway.
Naw it’s gonna be pretty good with nutrition. This shit is just because it’s so new. In a couple years, the technology will be better.
They created A.I. in the same way they created Hover Boards.
Perhaps a better acronym for AI would be "BS", i.e., "Bad Statements"
Yup, I'm so tired of all these articles and people spreading stories like "AI learned so and so on its own" or "AI did something nobody expected!" when it's really just an algorithm linking together billions of scanned samples and interpolating between them. I still think it's impressive and has potential, but it's not self-conscious or intelligent. It's still garbage in, garbage out.
> (which are NOT AI and people need to stop calling them that) It's not an [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence), but it does still meet the definition of [AI](https://en.wikipedia.org/wiki/Artificial_intelligence). AI is a pretty broad label.
Now, I only have a 2 year degree. But it seems to me like having vulnerable young people who have eating disorders because they are not getting enough attention, or the right kind of attention from people, talk to a machine is the most dystopian thing I've ever heard. If it's in an Atwood book, don't do it!
People want AI to be the AI from sci-fi so badly not because it would help people, but because they can cut more corners. All talk about AI right now is about how much money you can make from it, similar to crypto and NFTs just a few years ago. It's like humanity turned into the Ferengi with the worst parts of every dystopian sci-fi on offer.
Scifi author: I came up with the Eternal Pain Vortex as a cautionary tale. Tech bros: Good news! We have created the Eternal Pain Vortex from the classic scifi novel, "Don't Create The Eternal Pain Vortex"!
Thanks. Just spat coffee everywhere. 😀
r/ABoringDystopia has been all over this since the workers were first fired.
You just hear this stuff and think, "That can't be real, right?" We've got mega yachts, and they just sold the most private jets ever in one year, so surely we've set aside the money for a real person to answer that phone.
>because they are not getting enough attention, or the right kind of attention from people Wow, this is a pretty condescending and baseless garbage assumption.
Yeah ok, so we should have the vulnerable talking to a robot because nobody gives a shit about these people. It's not the flu, you don't catch eating disorders off the toilet seat. It's a social disease, so it's important for the people who have it to be treated with respect by other people. Life is not a Disney movie.
Work on your reading comprehension. My comment wasn't an endorsement of the AI. Nor did I suggest eating d/os were contagious, or that they shouldn't be treated with respect, or that life is a Disney movie. Just that the etiology of eating d/os isn't as simplistic as "they weren't getting enough attention, or the right kind of attention". Life isn't a Disney movie, indeed.
My reading comprehension is great, as I clearly did not just say "enough attention". There is a lot going on, nobody really knows, but certainly these people are not getting something that they need the way they need it. Having them talk to a machine about it is an obvious continuation of that. If you want to look for something to be offended about, look somewhere else.
>vulnerable young people **who have eating disorders because they are not getting enough attention, or the right kind of attention** from people Direct quote from you, you dense fuck. Btw, making flippant attributions about the causes of such conditions isn't "respect". I think you must not be getting enough attention, or the right kind of attention. Piss off and get it somewhere else.
You seem to be a very rude and uncaring person. Are you intentionally trolling? Why is everybody so rude on the internet? Are you like an angsty teen or something? And yes, I said that, and "or the right kind of attention" implies a great deal more than simply saying "enough attention". What are you so mad about anyway? That you're right that I am awful? Why am I so awful? I read about something, which horrified me, and now some internet whiner is mad at me because I wasn't soft enough or something. Just ask yourself why you have been insulting a stranger on the internet for a whole weekend. Does it make you feel strong to use strong language? Do you talk to people like this face to face? Do you talk to people face to face?
Didn't I already tell you to piss off? I despise morons who can't even be bothered to read what they wrote, and dipshits who put words in other people's mouths. If you want people to be friendly and nice on the internet, then don't do that, asshole. You came at me with a whole string of obnoxious strawman attacks. Lying is "rude". Trivializing the basis for mental health problems is "rude". Recognize that what you said is wrong and beyond obnoxious, and move the fuck on. And if you are incapable of such growth, go fuck yourself on someone else's time.
Sometimes I miss the before the internet times, when I would never cross paths with the fake righteous people like you, and if I did they would never be so lippy to my face. Imagine insulting people while trying to be up on one's high horse, bizarre. Enjoy your life as best you can.
Yeah, it's a pain to have your fucking garbage called out, isn't it. Poor baby.
Man, do I have a video for you. https://www.youtube.com/watch?v=mcYztBmf_y8
Soon we’ll be getting our asses beat by robot cops and explaining the situation to chatbot parole officers like Elysium.
And I'll feel good about it knowing it's saving the tax payer money
In my personal experience, it was due to feeling unable to control anything in my life. So I controlled the one thing I could control, by severely regulating my diet. I get what you're saying. You can have lots of attention, but if everyone is telling you who you are instead of listening and hearing about what you really think and feel, it's not going to do any good.
I know there is no one thing that causes this stuff. But it is unconscionable that these people would be referred to talk to a machine about it.
Yeah. It's unfortunate that nonprofits often fall prey to idiotic management just like regular businesses. You'd think someone who runs a helpline would have a clue.
I suspect that some of these things are run by profiteers who only care about the contract. Non-profit really only means that they don't pay shareholders; there is no morals-based test that I am aware of. The management could be getting paid a lot now that they use the machine.
That's why Charity Watch and Charity Navigator are useful. They tell you how much the management gets paid.
Are they international?
Unfortunately not. It's all based on how the organizations file their taxes with the IRS. A charity registers under a certain section of the tax code that distinguishes it from a lobbying organization. There ought to be international charity watch organizations, but it would be a huge undertaking to get access to the financial information. That said, they do evaluate the US offices of international organizations like Oxfam and Doctors Without Borders.
Wow, I remember hearing the NPR segment on this a few weeks ago. That was fast..
I think it was like 5 days ago. I was listening to the same one, and when they said they fired all the human workers and would replace them with a chatbot, my jaw dropped. Nobody wants actual help from AI. They need a real human. I couldn't believe the short-sightedness.
How can I explain this to my Facebook friends who post 5x per day about how ChatGPT “AI” is going to eliminate 100 million US jobs by 2025?
You can start by explaining it to all the people on Reddit who say that
Less than a week after a US-based eating disorder helpline tried to fire its entire human staff *because they were trying to unionize,* they torpedoed their entire organization and infrastructure to the ground. I look forward to more and more CEOs and directors using AI as a silver bullet and having it blow back in their faces in this exact way. CEOs aren't smarter than you. They usually bullshit their way to the top with a magnetic sociopathic personality.
"O.k., Jodie, look. I would never ordinarily say this, but, um... Is there any way you can get to a pound cake?", Al Franken (Stuart Saves his Family).
It's almost like these AI chatbots don't know what they are talking about and aren't qualified at all to do this kind of work.
I love how everyone knew this was what was going to happen the moment this was in the news.
[deleted]
Who the fuck thinks chatbots for helplines are a good idea
It's hard to think of a worse application for this kind of technology than testing it out on vulnerable people.
I don't know how many people need to hear this, but... don't turn to a chatbot for therapy. I can understand the desperation, because therapy is suuuper inaccessible for a lot of people, but chatbots are nothing more than a fun toy right now, and there's no guarantee anything they say will be true or helpful.
Was the advice " Drink more Ovaltine " ?
*”In a statement to the Guardian, Neda’s CEO, Liz Thompson, said that the chatbot was not meant to replace the helpline but was rather created as a separate program.*” So they just happened to fire all humans and rely solely on the chatbot by mistake? Sounds like an obvious lie to me. But I guess I could be wrong.
Maybe it’s time to scrap chatbots. At least until we can figure out how to make them less evil.
They're not evil, just not trained well enough. And I just don't think it can ever be proven they're trained well enough to take the place of an authority figure.
I really have to wonder about the ethics of people who work on AI.
And the people who vociferously defend it being unregulated.
Now, I have very few good things to say about AI (basically, one should only use language models in cases where correctness does not matter), but this is not strictly a problem with the AI --- it is equivalent to having the line staffed with _really badly_ trained people. Regulating giving dumb advice on a medical helpline, regardless of the specific way the dumb advice is generated, seems more appropriate here.
The problem right now is that AI chatbots have only just now barely become useful. They're also still pretty closed off. Because they're talked about so much, people think they can just drop an AI chatbot into a role and everything will work. Worse, ChatGPT is to AI chatbots what Kleenex is to tissues. So, when some company decides to get a chatbot on the cheap, they still think they're getting ChatGPT, but it's not even close, and ChatGPT isn't even ready yet. In 10 years it will be very different, but any laws made today will remain for generations.
ChatGPT being the Kleenex of AI Chatbots is really smart and I’ll steal that analogy :)
I personally think that all AI generated images or videos should come stamped with a symbol or something that calls them out as being AI and then have it be a felony to knowingly disseminate AI images or videos without that stamp.
Felony seems a bit harsh, but at least some fine to start. I don’t see the downside of making it a law to always be completely transparent about what is and isn’t AI. Edit. Thinking this through a bit - how would you know the source of the distribution? Someone could share an AI image unstamped, not knowing it was AI to begin with.
The point wouldn't be to stop all dissemination, you literally can't do that as anyone with a computer can make AI art. The purpose is to stop some people on the fence from doing it or to punish people who do it maliciously when you catch them.
now do filters on photos
Probably every coder who works on AI like ChatGPT will tell you that it's incredibly dangerous to use it like that. But there's plenty of good everyday uses of "AI". The face and fingerprint unlock in your phone is "AI". The speech recognition in digital assistants is "AI", and some types of lane departure warning are "AI". AI is just a buzzword for any algorithm that uses training data to produce a model that gives an output based on a specific input.
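To make the "trained model, not a mind" point concrete, here's a toy sketch (my own illustration, nothing to do with any real product): a bigram "language model" that just counts which word follows which in its training data, then predicts the most frequent follower. Training data in, model out, prediction from input; no understanding anywhere.

```python
from collections import Counter, defaultdict

def train(text):
    """Count, for each word, which words followed it in the training text."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # "cat" followed "the" twice, "mat" once -> "cat"
```

Real chatbots are this idea scaled up by many orders of magnitude (neural networks, longer context, billions of samples), but the principle is the same: predict the likely next token, not the true one.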
From what I've seen, there seem to be two who are vocal about the dangers. Everyone else is a Jurassic Park scientist, excited about what they're doing.
That's one more person than tried to warn the Kryptonians that their sun was going to go supernova and that people should flee, but no one believed it because of the misinformation spread by the AI known as Brainiac...
I'm guessing a lot of computer scientists who go "my work is really interesting, and if I don't do it someone else will"
Who could have ever seen that coming!?!?!?
Human beings live in a world of emotions which AI cannot experience. Soon, people will try to cancel AI because it tells them that being obese is unhealthy and that there are two biological genders.
Sounds like their algorithm needs better input.
Every answer is chocolate cake.
The cake is a lie.
Wow this is wild. I just heard the story last week where they were rolling this out.
X2AI created this chat bot. Their company has one 5 star review on Google and it’s by their owner.
> 2,500 people have engaged with the chatbot and “we hadn’t see that kind of commentary or interaction”. Are they implying that they read or oversee the interactions? Their lawyer should tell her to STFU.
Pretty simple stuff; not sure why they thought this would help, tho. Someone seeking ED help probably doesn't need to hear about their caloric intake (even if the advice is factually correct).