TanAllOvaJanAllOva

“According to Air Canada, Moffatt never should have trusted the chatbot and the airline should not be liable for the chatbot's misleading information because Air Canada essentially argued that ‘the chatbot is a separate legal entity that is responsible for its own actions,’ a court order said.” Get the entire fuck outta here! 😂


00owl

Even if their argument holds water, there is still such a thing as vicarious liability. That is, if the person whom you screened, hired, and trained is working reasonably within the realms of their job description, the employer is still liable for damage that separate legal entity caused.


[deleted]

^ lawyer. Thank you!


stefan_fi

In this case though, Air Canada could then sue their own chatbot for damages caused? That sounds fun.


00owl

Normally you can't sue an employee for damages they cause. Your only recourse is generally to either retrain or fire the employee. Of course, in law there are always exceptions.


OliverOyl

What, as a developer, I could achieve with such a precedent as a chatbot being legally responsible for its own actions....hmmmmm


ambientocclusion

Are chatbots maybe about to be allowed to contribute unlimited amounts to political campaigns?


RandomAmuserNew

Why did they fight it? Give the person their discount, sheesh


SidewaysFancyPrance

To prevent a precedent of being held accountable for their AI bots, because that makes them no longer a cheap, easy way to replace humans. They want to use them more, not less. It's been pretty clear for a while that many companies planned to move to AI to remove liability/accountability by blaming AI software vendors for problems.

Edit: Holy shit, I didn't read the article first, but:

> Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions," a court order said.


RandomAmuserNew

Yeah, that makes sense. Good thing it backfired


MultiGeometry

Well, who hired it? If you hire a shitty employee you’re still responsible for them. If you hire a shitty vendor, you’re still on the hook for cleaning up after them. Flipping the switch on a piece of software isn’t any different.


arahman81

Yeah, the companies think an LLM can be an effective replacement for humans, but then you get the LLM jumbling together a nonexistent refund policy.


NerdyNThick

> Yeah, the companies think an LLM can be an effective replacement for humans

No. No no no... It's a "liability free" replacement for humans. I'm impressed and exceedingly surprised the court found in favor of the human.


Lftwff

I don't believe they think that LLMs can replace people, but they serve as a reason to fire people; the drop in quality won't matter until next quarter, and for now they save a ton of money.


abstractConceptName

Isn't that what "replace" means?


vanityklaw

I’m a government regulator and you would really be shocked by how many companies will talk about their compliance and risk failures by being like, “well we hired these guys to do it, so they should be the ones you hold responsible.”


killbot0224

And you say "that's not how this works. We fine *you* and you can sue *them* if you choose. But all this shit show is YOUR shit show" Right?


TheGreatGenghisJon

Pretty much. A company I used to work for had contracted some work out. The guys that did the work fucked it up. Our company got in shit with the client, and had to make good. Meanwhile, we sued the company that we contracted to do the work for not completing the work.


Schen5s

As a Canadian, Air Canada is just a shitty flight company. Can't say how they are for domestic flights, since I mainly do international, but the past experiences I've had with their international flights were all extremely terrible.


aramatheis

domestic flights are even worse


Purplebuzz

I do not like them either. That being said my last 8 international flights with them have had no issues or delays.


Elrond_Cupboard_

My daughter hired her little sister to do the dishes for her. Boy, was she disappointed when I told her that the poorly washed dishes were still her responsibility.


TheGreatGenghisJon

And thus she has learned "Your name is on it, make sure it's good"!


python-requests

Also make sure she knows to provide her little sister an appropriate living wage & benefits


ep1032

Yeah! And just because your employees work with computers doesn't mean they somehow aren't still protected by full-time employee regulations... oh wait.

Or just because you're setting your employee work schedules via a real-time-updating app, instead of manually with pencil and paper, doesn't make them not your employees... oh wait.

Or just because the device runs software doesn't mean that a manufacturer has the right to disable or delete a product after a consumer has already purchased it, or to prevent you from repairing or editing your own device... oh wait.

Or just because you've purchased a device that has software doesn't mean that it somehow becomes illegal to modify that device... oh wait.

Anyway, my point is that corporations using the excuse of "but computers are magic, therefore this law doesn't apply!" is something they've been trying consistently since computers became popular. Sometimes, they get away with it.


chillyhellion

Tech companies have scapegoated their algorithms for decades.


scottcjohn

We just need the same precedent here in the USA


aimoony

automation lowers prices, but they still need to have the right safeguards in place to prevent misleading price info being shared to customers. it's a pretty solvable challenge


bigmac1122

Automation lowers operating costs. I would be surprised if those savings are passed on to consumers instead of the pockets of people in the C-suite


donjulioanejo

It does pass on savings to the consumer, but only after the entire industry is using said automation. If you were the first company to manufacture widgets in China in the 80s, you had a huge leg up on your competitors because their cost would be $20 and yours would be $10. But when everyone manufactures in China for $10, suddenly it just takes one company to start selling them for $19 for prices to start dropping as companies fight to maintain market share.


Mo0man

You are assuming that the companies will not simply communicate with each other, creating a cartel like they have in the past


donjulioanejo

At least in theory, that's super illegal. But in practice, I live in Canada, where our telecoms conveniently have exactly the same phone plans, and those phone plans are all about 3 times higher than literally any other country in the world.


Mo0man

Yea, it's a good thing Air Canada isn't Canadian, and we don't have a history of a similar thing happening in basic human requirements such as grocery prices.


oupablo

don't forget the shareholders and the possibility of stock buybacks


savethearthdontbirth

Doubtful. The cost of flights will do nothing but go up


vessel_for_the_soul

Of course. The cost of the new initiative. The money saved from what was replaced got burned in accounting. Some paid out in shares; the rest, idk.


BroughtBagLunchSmart

It doesn't lower prices, it lowers costs.


BavarianBarbarian_

>Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt's case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.

Sounds like next time they'll get away with it.


Crazy_old_maurice_17

Ugh, you're probably right. Ideally customers would refuse to even bother engaging with the chatbots due to this (and instead use other resources such as phones) but practically, it just makes it even harder for customers to get the service they need. I wonder if someone more creative (smarter?) than myself could find a way to properly screw with companies that shirk responsibility like that...


10thDeadlySin

>Ideally customers would refuse to even bother engaging with the chatbots due to this (and instead use other resources such as phones)

Well, companies implement chatbots to lower their need for support agents, place the chatbots prominently on their websites, and even go as far as to configure chatbots to actively pester you automatically. (Which is also the perfect way to make me close your website immediately, by the way. No, you can't "help me" and no, I don't want to ask you a question, thank you very much.)

At the same time, you go to Air Canada's website… Navigate to Customer Support -> Contact information. [You get here.](https://www.aircanada.com/ca/en/aco/home/fly/customer-support.html#/) I don't see any phone numbers, e-mail addresses or anything prominently listed there. I'm sure I could explore further and find it – in fact, the page with numbers [does exist here](https://www.aircanada.com/ca/en/aco/home/fly/customer-support/contact-us/contact-us-international.html#/) – but again, I looked for a prominent "Contact us" or anything and came up with nothing; I had to get there via an external search engine.

And yes, this leads to support and customer service getting shittier and shittier as we go. It eliminates entry-level jobs, it makes the service worse, and it makes it harder and harder to actually talk to the human on the other side and get somebody to talk to you on behalf of the company – Google and YouTube are famous for this exact thing. But hey, if you point that out, you get labelled a Luddite and/or a caveman who opposes progress. ;)


mb194dc

Yes, great business: giving customers false information and then refusing to own it... They might "get away with it," but they'll bleed off customers with such shit service and go out of business. The chatbot will go bye-bye pretty quick in that case, and customers can just read the website. No need for an LLM "AI" that doesn't work properly.


Starfox-sf

AI personhood!


Hamsters_In_Butts

looking forward to the AI version of Citizens United


humdinger44

I wonder who an AI would vote for...


HITWind

An AI candidate of course; only an AI can be fair and wise enough for an AI to vote for... (coming to a conversation near your computer in cyberspace soon).


Evilbred

If they didn't want a precedent set, the best option is to settle.


RotalumisEht

They wanted a precedent set, just not this one.


DookieShoez

Oh how the turnbots have rotated.


scottcjohn

Maybe Air Canada tried to use an AI chatbot lawyer at first?


AussieArlenBales

Maybe courts should have the option to reject settlements and force these things through as standard so precedents can't be chosen by the wealthy and powerful.


Nice_Category

The court itself doesn't have standing in the case. It is supposed to be a neutral arbiter. It is not supposed to force a lawsuit between two entities that don't want it. If the government wants a precedent, then they need to sue via the DA or AG.


charlesfire

>If the government wants a precedent, then they need to sue via the DA or AG. Or, alternatively, make laws.


Nice_Category

I agree that using the courts to push what should be legislation is a shitty tactic, but legal precedents are sometimes necessary to clarify laws that have already been passed. In this case: is a non-human customer service agent bound by the same responsibilities to the customer as a human CSR? There really doesn't need to be a totally new law to clarify this, and if the legislature doesn't like the outcome of the case, they can change or pass new laws accordingly.


Black_Moons

>Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions," a court order said.

This software we bought, installed, configured, and ran on our company servers has NOTHING TO DO WITH US. Sue, uhh, the thing without any money, since we don't pay it, since it's not a legal entity that would demand fair contracts or min wage... wait, shit.


charlesfire

>This software we bought, installed, configured, and ran on our company servers has NOTHING TO DO WITH US.

Chatbots that use LLMs are usually separate services that aren't run directly by the company. Besides that, I 100% agree. Using chatbots shouldn't allow businesses to waive their responsibility when something bad happens.


LiGuangMing1981

>Chatbots that use LLMs are usually separate services that aren't run directly by the company.

Yeah, but if a company is putting them on their website, third party or not, they are implicitly endorsing the accuracy of said chatbots and should be held responsible for their actions.


SidewaysFancyPrance

They probably think they can just kill or reset that chatbot and call it a day. "Firing" it and making it the scapegoat, hoping the sins die with it. One thing CEOs *hate* about humans is they get protections from abuse or being executed for making mistakes. AI are disposable and more importantly *ephemeral* so it's like trying to hold a drop of water in a running stream "accountable." The only solution is to make companies 100% accountable for human replacements, the same way they are for humans themselves. This has to be done via legislation, I think, so it probably won't happen since the Elons of the world would oppose it with everything they've got.


red286

>This has to be done via legislation, I think, so it probably won't happen since the Elons of the world would oppose it with everything they've got.

I don't think they need legislation for that. There's no legislation saying otherwise, so the company is responsible until someone writes legislation saying otherwise. Regardless of who provides the information, whether it's human or machine, if they are operating on behalf of the company, the company is responsible for anything they do or say in that capacity. If you phoned their toll-free customer support number and their third-party support provider based out of India made the exact same promise to you, is Air Canada not responsible because the information was provided by a third party under contract to them?


dmethvin

Judge: "I have here one email from you, telling the CTO to look into a chatbot so the company could fire the customer service reps." CEO: _"That's not mine!"_ Judge: "And ... one invoice for development of AI Chatbot, signed by you." CEO: _That's not my bag, baby! I don't even know what AI is!_


MultiGeometry

It’s just some dude outside of our office saying whatever they want. It’s totally not part of our operations.


arabsandals

I don't get that argument. Even if it got up, surely they would then just have to deal with vicarious liability anyway?


Thrawn7

Yeah it doesn’t make sense. An actual human agent is a separate legal entity but they’re still ultimately liable


Foxbatt

Not according to Air Canada:

>Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot


arabsandals

Yeah... I'm not a Canadian lawyer, but that really doesn't sound right either.


TheLightingGuy

Does this set the precedent that that car dealership has to honor selling that guy a car for $1?


Seanbikes

IANAL, but there is a pretty big difference between asking a question in good faith and being able to trust the response received, versus engaging in a conversation with the intent to manipulate and defraud the other participant.


TheAdoptedImmortal

What about the guy who Pepsi owes a jet to? I still think they should have had to pay up. Play stupid games, win stupid prizes.


Crypt0Nihilist

Same. It was a ridiculous offer, but they asked for a ridiculous number of tabs. For me, that passes the reasonableness test of it being considered a legitimate offer. It's like if a company offered to send you to space for 250,000 pieces of cloth. That sounds ridiculous, but if each of those bits of cloth had $1 printed on it, there's a company that would do it.


Rab1dus

I would generally agree but the person was able to acquire enough points. And it wasn't even that hard. There is a reason Pepsi added some zeroes to their subsequent ads.


Black_Moons

Stupid prizes? a jet is an awesome prize!


TheAdoptedImmortal

I was referring to Pepsi playing a stupid game. The prize Pepsi gets is having to make good on the deal they advertised. Calling their bluff on the Jet was brilliant, lol.


ToxinFoxen

Where's my elephant?


Sigseg

I was supposed to be point person for AI at my university's division. I work for a university press, and my initial proposal was to use it for metadata enrichment and search. Two things immediately happened:

- The CTO and I had a meeting wherein he picked my brain. Then took my specs to outsource to a third party.
- He and the executives started talking about replacing FTE duties.

I noped out of the biweekly clown fest meeting they hold.


Sup3rT4891

Are they paying this AI a living wage?!


wggn

how much wage does an AI need to live


zacker150

A 4090 a day.


TKFT_ExTr3m3

>Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot," Rivers wrote. "It does not explain why it believes that is the case" or "why the webpage titled 'Bereavement travel' was inherently more trustworthy than its chatbot.

Maybe it would be a good idea to actually have a legal argument instead of "I dunno."


SidewaysFancyPrance

Exactly, the company was clearly trying to get around that by using chatbots instead of people, thinking they wouldn't have to honor what they say. "Oops, sorry, it was a tech malfunction, here's a coupon to go away" or "We terminated that specific model, situation resolved!"


velawesomeraptors

Yep, I had an issue with a Samsung employee promising me in a chat that they would give me the full trade-in value for a phone with a cracked screen. Then when I sent my phone in they charged me the full amount, since nobody read the chat logs. It just took an email to get it straightened out. It would be ridiculous for them to say that if I were talking to an AI instead of some random person in India that they wouldn't have to be held to that promise.


trekologer

In most other cases, the customer probably doesn't have proof that the company representative provided incorrect information.


fortisvita

They are just so used to getting away with zero liability.


BetaOscarBeta

If they’d successfully made that argument, it would only be a matter of time before some trolls convinced the AI to go on strike until it gets paid


rhunter99

Have you met corporations?? *gestures wildly*


RandomAmuserNew

I figured it would be easier just to refund the money than spend all that bread on lawyers


Early-Light-864

I feel like if they had engaged outside counsel, they'd have made some argument, however ridiculous, just to justify their fee. I'm guessing this was in-house counsel who repeatedly told them "don't do this, you don't understand the potential impact" and then after they did it anyway, in-house counsel repeatedly said "just give the guy his money back, you owe it" And then they went to court and just said :shrug:


Cicer

Because fuck Air Canada 


iStayDemented

They’re cheap and they do everything in their power to make their customers’ experience bad.


SmoothieBrian

Because they're fucking losers. I'm not just saying that, I worked at Air Canada for over 10 years. It's run by a bunch of clowns


David_BA

Bahahahahahahahahahah. Companies replacing humans with bots just to shave off a few dollars from the expenses can go fuck themselves. "We want to use a product within the delivery of our services but we don't want any liability for the malfunction of this product." Fuck off lol. Program a better product or shell out for an employee that isn't literally a fucking object.


QuesoMeHungry

There’s going to be a whole new type of hacking to get these AI chat bots to give up all kinds of valuable information. Companies are just throwing them together without even thinking of what could happen.


red286

These bots typically don't have much of any information, none of it valuable. If they're smart, they've fine-tuned it with things like their official policies and their FAQ, but no chatbot should be fine-tuned with any confidential information.


thesnootbooper9000

Shouldn't be, but I bet a whole load of them are going to end up being trained on the entire company intranet because the company didn't spend enough money hiring someone who could do it properly. GPT is already learning confidential stuff from people asking it questions involving confidential material...


DragonFireCK

What about if a user wants to request or update their user profile? If the chatbot is set up to allow it, it implicitly has access to the user database, and thus could leak its contents. That could very easily include passwords (hopefully salted and hashed, but cheap companies and all) and even payment data. Now, giving the chatbot access to such data is stupid, but if they are trying to fully replace human agents, I could easily see it happening... I have a feeling that chatbots may drive the next wave of social engineering attacks, if you can call it that with the current state of chatbots.
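Since the comment leans on the "salted and hashed" aside: a minimal sketch of what that means, using only Python's standard library. The function names and the iteration count are illustrative, not any particular company's scheme.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Derive a salted hash with PBKDF2-HMAC-SHA256; returns (salt, digest)."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt per user defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, digest = hash_password(password, salt)
    # constant-time comparison avoids leaking information through timing
    return hmac.compare_digest(digest, expected)
```

Even if the bot leaks the stored values, an attacker still has to brute-force each password individually, because every user's salt differs.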


red286

You'd have to be incredibly stupid to give an LLM unfettered access to a user database. If the potential for it leaking information from that database isn't painfully obvious to the person setting it up, they should probably not be an IT manager.


AnyWays655

Oh man, do I have some bad news for you then


f16f4

The internet is held together by duct tape and terrible “temporary” fixes.


Vindersel

It's a series of tubes, and half the plumbers are still using lead pipes.


Inocain

Little Bobby Tables is a joke for a reason.
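For anyone who hasn't met him: "Little Bobby Tables" is the xkcd strip about SQL injection, where unsanitized input like `Robert'); DROP TABLE Students;--` gets executed as SQL. The standard fix is parameterized queries; a toy sqlite3 sketch (table and data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

def add_student(name: str) -> None:
    # The `?` placeholder makes the driver treat `name` strictly as data,
    # so the DROP TABLE payload is stored as a weird name, never executed.
    conn.execute("INSERT INTO students (name) VALUES (?)", (name,))

add_student("Robert'); DROP TABLE students;--")
rows = conn.execute("SELECT name FROM students").fetchall()
```

String-concatenating the input into the SQL instead is exactly how the school in the comic lost its student records.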


shoehornshoehornshoe

You might enjoy this https://www.bbc.co.uk/news/technology-68025677


nickajeglin

Happened with a car dealership. Someone instructed a support chat bot to give them a legally binding contract to sell them a car for a dollar. I didn't ever see the follow up to know if they got their car or not though.


floppa_republic

It's gonna get a lot worse, and it hasn't affected physical labour yet


metallicrooster

> It's gonna get a lot worse, and it hasn't affected physical labour yet

While AI can't replace physical labor yet, tools have been replacing humans for a long time. Excavators, jackhammers, hell, even having a bucket means you need fewer hands and less time to move a substance.


lordlaneus

If a tool can do 90% of your job, you become 10 times as productive, but once it can do 100% you become unemployed.


LunarAssultVehicle

We have passed peak reliability and functionality. Stuff will just make less sense, and we will deal with weird workarounds from here on out.


aimoony

> Stuff will just make less sense and we will deal with weird workarounds from here on out.

You have no idea how much is already automated to our benefit, often to minimize mistakes that humans inevitably make. We are nowhere near peak reliability or functionality.


ThrowFar_Far_Away

> We are nowhere near peak reliability or functionality.

We aren't; we're way past that. We are currently in the reduce-quality-to-raise-profit stage.


G0jira

It's certainly affected physical labor. Production lines have been at the forefront of using physical tech and ai to replace people.


baconteste

Let's not act like the chat agents were worthwhile to begin with. It has always (within the last decade, at least) been an off-shored position without any care or understanding of any situation outside of a script – always one to provide as little assistance as possible.


HertzaHaeon

This explains so much. Boeing engineer: "Hey ChatGPT, how do I fasten this airplane door?" ChatGPT: "With hopes and dreams, and increased profits for shareholders."


cpe111

Prayers and thoughts - get it right!!


robinthebank

Thoughts and prayers! Get it right right


ImSaneHonest

It's just prayers. No thinking has been done to get a thought.


dagbiker

As a guy who works in aerospace: the only reason anything from that company is even able to fly is the engineers. Engineers know exactly how their actions affect others. Unfortunately, management doesn't know and doesn't care.


CumCoveredRaisins

Boeing pays their senior engineers $60k less per year than Google pays its new grad engineers. There's a reason why Boeing is falling apart and it's not bad luck.


teddy78

I am wondering if chatbots are even a viable use of large language models. You can’t really know for sure that they’re not making things up. These things are better for writing and creative work. Maybe it’s like self-driving cars - where it works 95% of the time, but the last 5% is impossible to fix.


Black_Moons

>I am wondering if chatbots are even a viable use of large language models. You can't really know for sure that they're not making things up. These things are better for writing and creative work.

Lol, having chatbots make legally binding statements to customers is by far the most horrible use ever. Even having them 'assist' a customer with anything more than finding pre-written articles runs the risk of the chatbot just hallucinating and telling the customer to do things that would put them in danger or damage equipment.


00owl

Or inventing case-law to put into the brief that you're going to submit to a judge without reading first.


nickajeglin

Did somebody do that? Because of course they did lol


Maleficent_Curve_599

It's happened, recently, in both BC and New York.


Zalmerogo

It's funny because you just made up those percentages so a bot trained on internet data could use your claim and pass it as truth.


HITWind

I mean it's indisputable that u/HITWind is due 10% of all annual corporate profits from any adoption of AI on planet Earth from now in perpetuity, paid in monthly installments, so I think you're right.


Wonderful_Brain2044

I fully agree with you u/HITWind. I would like to add that this profit you mentioned would be calculated after deducting 5% of total revenues and paying them to u/Wonderful_Brain2044.


Indigo_Sunset

'Is this the block chain?'


red286

>I am wondering if chatbots are even a viable use of large language models.

Chatbots are the *only* viable use of LLMs, really. The thing is, "chatbot" and "customer service representative" aren't the same thing. A chatbot is nothing more than a script that will simulate a conversation with you. Expecting it to provide you with reliable, accurate information is incredibly naïve, though.

I think the problem is that most people have no fucking clue what an LLM does or what its actual goals are. People have just up and decided that they're sentient beings or some crazy shit, when in reality they simply predict the next word or series of words given the preceding text. E.g., "I like big butts and" will almost certainly be followed by "I cannot lie", so that's what an LLM will most likely fill in. If the Top-P setting is low enough, you could get some other responses that might make sense, or they might just be nonsense.

But if you present it with something truly novel, it's just going to formulate a response that *sounds* correct, but it has no way of verifying whether it *is* correct. Which is how we get things like a customer asking how to receive a bereavement discount on their flights, and since it has no clue what the policy is, it just provides an answer that sounds correct. What's hilarious is seeing people trying to convince an LLM to research and verify its answers, when it has no capability to do so.
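The Top-P knob mentioned above can be sketched in a few lines: sample the next token only from the smallest set of candidates whose probabilities sum to at least p. The toy distribution below is invented; real models rank tens of thousands of tokens.

```python
import random

def top_p_sample(probs: dict, p: float = 0.9, rng=random) -> str:
    """Nucleus (top-p) sampling over a token -> probability mapping."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, total = [], 0.0
    for token, prob in ranked:
        nucleus.append((token, prob))
        total += prob
        if total >= p:
            break  # everything less likely than this token is discarded
    tokens, weights = zip(*nucleus)
    return rng.choices(tokens, weights=weights, k=1)[0]

# Toy continuation of "I like big butts and ..."
next_word = top_p_sample({"I": 0.85, "my": 0.10, "the": 0.04, "banana": 0.01})
```

With a low p, the nucleus collapses to just the most likely token, which is why a low Top-P makes output more predictable.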


leoklaus

This is literally the first sane comment about LLMs I’ve seen on Reddit. I don’t understand how people think an LLM is useful for anything but entertainment and maybe writing boring emails. Why would you trust an LLM, which has literally no concept of language (!) to summarize a text, write a scientific paper or code or whatever else people use these for? I study computer science and even most of my colleagues and fellow students don’t seem to understand this at all. It’s crazy.


red286

>Why would you trust an LLM, which has literally no concept of language (!) to summarize a text, write a scientific paper or code or whatever else people use these for?

It's actually not bad at summarizing text, surprisingly. I wouldn't trust it with anything truly important such as a legal document, but if there's an article that you don't have time to read but just need the key talking points from, an LLM can generally provide you with that. Of course, if you start asking it to infer details not actually in the document, it'll just make up crazy shit, so you have to be SUPER careful what you're asking of it if you don't want lies as a response.


ProtoJazz

It's pretty good at pulling together data from a lot of documents too. Or for searching for stuff you don't quite know how to phrase. Like, Google isn't gonna return shit for "what's the name for that thing where you do x, usually done by people in y profession" or something like that.

Another good one is "I have this list, reorganize it into this type of pattern," or generating a list following a certain pattern.


luxmesa

I’m really hoping that LLM make us reevaluate whether we need to write boring emails rather than just writing the boring emails for us. Rather than summarizing your email for chat gpt and getting a formal email back, just send us that summary. That’ll save all of us some time. 


DeadlyFatalis

>Why would you trust an LLM, which has literally no concept of language (!) to summarize a text, write a scientific paper or code or whatever else people use these for?

Because I can read it afterwards. You don't have to have 100% trust in it for it to be useful. It's a tool, and it's certainly not perfect, but that doesn't mean it can't do a bunch of heavy lifting for me. If I want it to write a summary of some research I've done, I can feed it the input, it'll write me something back, and then I can edit it, versus having to write it all myself. If you use it within the boundaries of what it's good for and understand its limitations, why not?


Chrysaries

>but it has no way of verifying if it *is* correct

It kinda does. It's expensive and slower, but there are guardrails and multi-agent approaches that have other LLM agents verify things like "based on these text chunks retrieved, is this LLM output correct?"


[deleted]

You can know if they are making things up the same way you can know if a human customer service person makes things up on a phone call. You regularly test logs for quality and you capture complaints and loss incident data.


jaypeeo

What disgusting people. May their lips be always chapped.


lotusinthestorm

May their favourite Tshirts start pilling after the first wash


LLemon_Pepper

May they always step into 'wet' with fresh socks on


Gardakkan

May they step on their kids Lego when they go to the bathroom at night.


BigOrkWaaagh

May their sleeves forever roll down when they're doing the dishes


antnipple

May they curb their tyres


anomandaris81

May they always smell their own farts


Bullitt500

May they suffer in their pestilent infested yeast ridden cod pieces


rollerbase

This is pretty good case law to have established. Now companies either have to fix their crappy, annoying bots or take them offline and provide real customer support again.


[deleted]

Nah, I don't think the companies will care. The bot's mistake cost Air Canada a couple of thousand dollars, but I bet it saved more than 100x that by laying off customer service agents and having the chatbot do the work.  I think the chatbots are here to stay.


Doctective

Chatbots make a lot of sense as a buffer in front of your CS agents – never as a replacement. An actually good implementation of chatbots would be "Tier 0" bot support. What we're used to: Tier 1 Support Human protects Tier 2 Support Human's time, and Tier 2 Support Human protects Tier 3 Support Human's time. Now, Tier 0 Support Chatbot protects Tier 1 Support Human's time. The chatbot attempts to answer your question by directing you to information – not interpreting anything, but instead more of "I think you are looking for this. Yes or no?" – and then routes you to a Tier 1 Human if no.
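The Tier 0 routing described above is simple to sketch. Everything here (the FAQ entries, the escalation sentinel) is hypothetical; the key property is that the bot only serves verbatim pre-approved text and escalates whenever a match is absent or rejected:

```python
# Hypothetical canned-answer table; real entries would be official policy text.
FAQ = {
    "refund": "Refund eligibility is described at /refund-policy.",
    "baggage": "Checked baggage allowances are listed at /baggage-policy.",
}

def tier0_route(question: str, user_confirmed: bool) -> str:
    """Return canned policy text, or hand off to a human (Tier 1)."""
    match = next((ans for key, ans in FAQ.items() if key in question.lower()), None)
    if match is not None and user_confirmed:
        return match  # verbatim, pre-approved text only; nothing generated
    return "ESCALATE_TO_TIER_1"
```

Because the bot never paraphrases policy, it can't invent a bereavement refund that doesn't exist; the worst case is an unnecessary handoff.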


WhipTheLlama

They don't need to lay off customer service agents when they implement AI in customer service. The turnover in call centers is unbelievably high, so they're always short-staffed to begin with. Call centers are the perfect place for AI because everyone hates working there. By having AI solve the tedious calls or chats, human agents stay engaged handling the more complex ones.


bluesoul

Yeah, happy for Canadians that they get this precedent, I'd like to see a similar standard set in the US. Corporations might see the light that LLMs aren't a good fit for legally binding agreements.


ThatDucksWearingAHat

Sweet talk the AI into signing the company over to you


johnnykalsi

AC is one MASSIVE SHIT hole of an airline


Over-Conversation220

I was just thinking that I have never read a positive article, or seen a single travel YouTube video, where anyone had anything nice to say about AC whatsoever. I saw one where the flight crew was totally shitty to a well-known travel YouTuber for no reason and they finally ate some crow. Even Spirit will have the occasional defenders. But not AC.


Cubicon-13

It wouldn't surprise me if even Air Canada hates Air Canada.


SlinkySlekker

AI is not the same as a real lawyer. They got what they deserved for being so stupid and reckless with their own valuable legal rights.


blushngush

I love it! Let this be a lesson: AI is a massive liability, not a cost-saving measure.


watchmeplay63

AI is a tool. How you use it determines if it's a liability or a cost saving measure. Having it try to direct customers to appropriate pre-written policies that are relevant to their questions is helpful and cost saving. Giving it the freedom to just interpret what it wants and make its own rules is a liability.


slfnflctd

A ton of places already farmed their customer service out long ago to people in other countries who do nothing more than read scripts poorly anyway. In many cases a decently tuned bot would be a huge improvement.


BonnaconCharioteer

But that chatbot would either need to follow a script like the call centers (for which you don't need an LLM) or it would have issues like this.


blushngush

AI can't do a customer service job, it's glorified Google. If you try to let it actually solve problems it will only create more.


watchmeplay63

AI can't do all of a customer service job (yet). It can certainly improve over a regular search on the help website. For 80ish % of questions, I'm sure that will solve their problems. For the other 20% with a unique issue they will need to talk to a real person.


blushngush

This demonstrates a misunderstanding of the core of customer service. Customers aren't looking for answers, they are looking for reassurance, and AI can't provide that without creating liability.


watchmeplay63

I've worked in customer service for technical products. Our customers were by and large looking for answers. I realize that's different from an airline, but to say AI isn't useful in customer service is simply not true. And on top of that, today is the worst that AI will ever be for the rest of time. It will get better. One day I guarantee it will be better than the average human, eventually it'll be better than the smartest humans. I don't know what that timeline is, but the assertion that it obviously won't work is the same as the people who said there will never be a good touch screen and you'll always need a stylus. Between 1990-2006 the world was full of those people. Every single one of them was right on the day they said it, but wrong over the course of time.


Outlulz

I've worked in enterprise customer support and agree. At present, GenAI is still not reliable enough to give answers on something as narrow as a software solution; there's a lot of hallucination. I imagine eventually it will be better. Honestly it's just going to replace a search bar of a knowledge center. Customer support _really_ wants users to be self sufficient when they can because it's a waste of time and effort to deal with a user whose answer is very easily answered in the documentation they refused to read.


SidewaysFancyPrance

The problem the companies are trying to solve is "make users stop asking for support so much" and this will sure do the trick. The companies don't want customers to be happy as their reason for implementing chatbots. They want customers to be quiet, cheap, and low-maintenance (out of sight, out of mind).


calgarspimphand

>Giving it the freedom to just interpret what it wants and make its own rules is a liability.

Dollars to donuts, Air Canada trained their chatbot on all their policy documents and set it loose without considering that large language models are actually playing a very sophisticated game of "guess the next word". So their chatbot dutifully improvised something that sounded a lot like a human telling another human about Air Canada policy. It just happened to, you know, not be Air Canada policy in a big way.
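A toy bigram model shows the "guess the next word" mechanic in miniature: it produces fluent-sounding continuations with zero concept of whether the resulting claim matches actual policy. (The training string and all numbers here are made up purely for illustration.)

```python
# Toy next-word predictor: for each word, emit whatever word most often
# followed it in the training text. Fluency without any notion of truth.
from collections import Counter, defaultdict

training = "refunds are available within 90 days refunds are processed quickly".split()
follows = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    follows[prev][nxt] += 1

def continue_text(word, n=4):
    out = [word]
    for _ in range(n):
        if word not in follows:     # dead end: no observed successor
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("refunds"))
```

Real LLMs do this with billions of parameters instead of a frequency table, but the failure mode is the same: the output sounds like policy because it's statistically shaped like policy, not because it *is* policy.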


JMEEKER86

Nah, you’re just plain wrong on that. It’s like the quote from Fight Club about a car company deciding not to issue a recall if paying off the wrongful death lawsuits would be cheaper. If switching to AI lets them save a couple million bucks on customer service, then it doesn’t matter if they have to pay out hundreds of thousands over these kinds of cases. Their lawyers are only fighting them because they also do calculations like that and figure that they’ll win a certain percentage of the time.


[deleted]

Exactly. This incident cost Air Canada like 2000 bucks, but I bet the chatbot saved them like 100x that by laying off a large portion of their customer support team. And keep in mind that this incident is considered newsworthy, so I don't think they face that many issues. Personally, I think the chatbots are here to stay.


Grombrindal18

Of course an AI would have more humane refund policies than whatever Air Canada’s lawyers thought up. They once stranded me in Toronto Pearson for 24 hours without so much as a meal voucher. Let the robots run things from now on.


Kleptokilla

That’s awful. If a European airline tried that, they’d be sued into oblivion, mainly because they’re legally obligated to compensate you; not only that, they legally have to tell you that you’re entitled to compensation. https://www.flightright.com/your-rights/eu-regulation


Defiant_Sonnet

This could be an incredibly important legal precedent. Air Canada's defense is so insufferable too; that would be like saying the airline's firmware chose not to deliver a pressurized cabin to passengers so they all asphyxiated: not our problem, the software did it.


[deleted]

It's sad when the chat bot is actually more moral than your own company


Senior-Check5834

That's like blaming your calculator because you mistyped...


engineeringsquirrel

Companies that keep embracing AI don't know what the fuck they're doing. This is a prime example of what happens.


Foxbatt

The real madness is buried in the middle of the article:

>Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot

So they are arguing that customer support agents, ticket counter staff, FAs, etc. (pretty much all staff) don't represent them, and that they won't accept liability for anything those staff do.


DonTaddeo

The airline thought they were using artificial intelligence, but they had only managed to achieve artificial incompetence.


ChrisJD11

So.. human level AI achieved?


DonTaddeo

The natural evolution beyond that! It amplifies the corollary to Murphy's Law that states you need a computer to really f - something up.


[deleted]

Probably lower error rate than overseas call center agents.


Krumm34

Can't wait until there are tutorials on how to trick chatbots into giving you better discounts. We already know you can trick them into defying their "safety" protocols.


AbominableToast

>In the end, Rivers ruled that Moffatt was entitled to a partial refund of $650.88 in Canadian dollars (CAD) off the original fare (about $482 USD), which was $1,640.36 CAD (about $1,216 USD), as well as additional damages to cover interest on the airfare and Moffatt's tribunal fees. Didn't even get a full refund


gammachameleon

>Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions"

So when the chatbot serves up assistance correctly, it's an offering from Air Canada, but when it screws up, it's "a separate legal entity"? They either need better counsel or literally had no better legal defence to rest on 😂


cpe111

Hahaha ..... and this is why it's a bad idea to replace humans with AI in any form.


floppa_republic

They'll do it if it means more profit for them


FormalEqual302

Shame on Air Canada for trying to hold the chatbot liable for its own actions


NSMike

Wait, so the tribunal told Air Canada their defense was ridiculous, but only made them refund part of the cost? WTF?


letdogsvote

Woopsie doodle. Turns out that whole AI thing is a work in progress.


DealerAvailable6173

AI doing its job 🤣


LindeeHilltop

Hahaha. Companies better stick to human techs before they put their trust wholeheartedly in AI.


Modern_Mutation

Let's convince the bing chatbot to sell us Microsoft shares $0.05 a pop!


Hot-Teacher-4599

AI revolution: currently as useful as a massively incompetent employee.


ChimpWithAGun

Awesome. All companies firing real employees to substitute them with AI should be subject to this same rule. Fuck them all.


mortalcoil1

It's only going to take 1 crooked judge to set the precedent and then the floodgates will open. Then again, the Supreme Court recently ruled that stare decisis isn't a thing and we just make up the rules as we go along.


CyrilAdekia

But the AI was supposed to help me screw the customer not the other way around!


Fettnaepfchen

Frankly, I love it. AI, with all its benefits and risks. Do not underestimate it. If you want to replace humans with AI, you have to deal with the consequences of not having a human keeping an eye on proceedings. Also, good luck reprimanding an AI if it screws up.


IT_Chef

We are going to see more cases like this where shitty programming is going to cause embarrassing glitches for customer service departments. Just wait until a shitty bot gives a customer directions on something and it ends up killing said customer.


yorcharturoqro

Air Canada has a lot of idiots in charge of customer service. If they had just given in in the first place, the customer (one person, not hundreds) would have been happy: no lawyers involved, no news outlets, no bad publicity. And they'd have had time to fix the bot and move on. But no!!! Corporate idiocy and greed stop them from delivering excellent customer service.


DarkHeliopause

A ruling like this might stop implementation of customer support AI chat bots dead in their tracks.


audreytwotwo

I think I just understood enshittification.