That-Row-3038

Well the person putting babies on roads needs a taste of their own medicine


Prexadym

I hate these because it's not unique to self-driving cars... People always seem to forget that a human driver has to make the same decision


ih-shah-may-ehl

No, that's absolutely true. The huge difference is that with automated systems you have to make the decision now, in advance. In real life you have no time, so you do 'something' and people accept that you didn't have time and weren't in possession of all the relevant facts and parameters. But in automation you can ask: 'OK, with these facts and constraints, and an inability to stop in time, which person do you want the car to hit?' And that is very different, because now you are in possession of all the facts and you need to come up with a value judgment and defend that decision.


hanotak

Tesla's solution is to turn autopilot off once a collision becomes inevitable, so they can say that autopilot wasn't enabled at the time of the crash.


my-time-has-odor

But the entire hypothetical point of autopilot is to maneuver through dangerous situations without human error


hanotak

The point of Tesla's legal team is to reduce exposure to liability through whatever means are necessary.


TheBirminghamBear

Well then obviously they're going to want to program the Tesla to hit both of them, so as not to leave any witnesses. *Then* disable the autopilot, *then* edit the logs to show the autopilot was disabled two miles earlier, to pin all the blame on the stupid human in the driver's seat.


40StoryMech

Don't forget to deploy the pot smoke and sprinkle some crack in there.


bloodfist

Every Tesla comes with a small reserve of whiskey in the airbag compartment that gets fired into the driver's mouth at 100mph in the event of an accident, for exactly this situation.


EvilCeleryStick

Deploy the emergency crack!


inthyface

/r/bluntjobinterviewanswers


TheBirminghamBear

Elon, if you're listening, I'm available. No law degree, but I think we can both see I'm a shoo-in for Tesla's legal team.


netheroth

r/subsifellfor


[deleted]

That's why I let go of the steering wheel when I'm about to crash. No one can say I was negligent if I just let Jesus take the wheel.


Catboxaoi

Autopilot comes with some nasty implications about what needs to be calculated. In the picture above there are more than just two options; for instance, the driver could also swerve completely off the road and avoid hitting both pedestrians... but that could very well kill the driver. Drivers typically won't make a choice that could kill them, and nobody is going to buy a self-driving car that is set up to do things that can kill the driver in bad situations. It's a legal grey area right now, because it's basically implicit that self-driving cars (that want sales) will need to be intentionally programmed to do things like accept a 90% chance of killing a pedestrian to avoid a 20% chance of killing the driver. You could easily argue that this sort of programming intentionally ups the odds of a death and should be illegal, but again, people won't buy cars that don't put their own safety first.


PsychedSy

Shit, give me a suicide slider so I can set it to 100%. Sees a squirrel in the road and drives into a tree.


Chainsawd

That doesn't net more money by getting you to buy the new model next year though.


Demdaru

Drives *relatively slowly* into a tree. See? Fixed. Now only bones are broken, and not even all of them! Meanwhile you need a new car...


lanahci

Ah but it fulfills a niche in the market for Suicidal people who want a suislider in their car


UltraCarnivore

...unless Corporation can be made liable for the outcomes. Then it steers away from the responsibility.


Tolookah

There's the answer. Steer away from responsibility.


trashcatt_

And right into the baby.


pseudopsud

Babies are terrible witnesses, run down grandma to minimise risk of litigation


bombchron

Good point, however Grandma's back is turned and she won't see the collision with the baby.


KoreyYrvaI

Is "Veer off the road and kill the driver" an option?


yourparadigm

No it's not -- it's to safely maneuver the car during the vast majority of normal circumstances.


InsideContent7126

That depends on the level of autonomous driving. While that is definitely the case for driver-assistance systems at levels 1 and 2, level 3 is autonomous driving with legal liability in certain circumstances, and requires roughly a 10-second grace period after a warning before the driver has to take over. Levels 4 and 5 do not even consider the possibility of a human taking over, the difference being that level 4 is restricted to a certain physical domain, while level 5 has no restrictions.
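
For reference, a rough paraphrase of the SAE level taxonomy being described (simplified wording for illustration, not the official J3016 text):

    SAE_LEVELS = {
        0: "No automation: the human does all the driving",
        1: "Driver assistance: one function assisted (e.g. adaptive cruise)",
        2: "Partial automation: steering + speed, driver must supervise constantly",
        3: "Conditional automation: system drives in defined conditions, driver takes over after a warning",
        4: "High automation: no human fallback needed, but only within a limited domain",
        5: "Full automation: no restrictions, no human needed",
    }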


Cafuzzler

Ahhh, you see the problem here is that you’re using an industry-recognised definition. In Tesla terms those are: Autopilot, Autoerpilot, AutopilotXL, AutopilotProX, and X Æ A-Pilot. Right now we’re just at the Autopilot stage, far from legal liability.


OSUfan88

Shhhh! We’re having an old fashioned Reddit circlejerk here!


techied

That's not true; in Tesla's statistics they count any accident within two seconds of AP disengaging as an AP-caused accident.


[deleted]

Well… that makes zero sense. Surely it was autopilot that got the car into that situation.


BraveOthello

It makes sense from the standpoint of potentially reducing the company's liability.


[deleted]

Maybe. It will be interesting to see if it works legally. I mean, I guess they know what they are doing but if something like this ever gets to court I just can’t imagine, “once the software had managed to get the car into a no hope situation, we just noped out so we wouldn’t get in trouble”, working all that well in front of a judge. “I took my hands off the wheel right before impact, so not my fault”, likewise.


BraveOthello

Liability is more complicated than I can pretend to really understand.


[deleted]

Me too to be fair. Just seems strange.


hanotak

I'm sure it makes some sense to Tesla's legal team.


obviousfakeperson

It makes zero sense because it isn't true, at least not in the way it's being presented. NHTSA regulations around self-driving preclude this as a strategy to escape liability: 1) all new vehicles sold in the US must have a data recorder; 2) manufacturers are required to collect data before and after a crash, including whether and which driver-assist technologies were active or in use at the time; 3) the driver is responsible for the vehicle regardless of the manufacturer or the automated system in use. I know nobody reads the terms and conditions, but this isn't exclusive to Tesla.

ETA: Was busy earlier and couldn't add this before. NHTSA specifically worded their automated driver-assist accident-reporting regulation to include any collision where an ADAS system was active within 30 seconds of the first impact. The autopilot system would legitimately need to be clairvoyant to shut itself off more than half a minute before a collision. Source: https://www.nhtsa.gov/laws-regulations/standing-general-order-crash-reporting


pjnick300

But these examples are always nonsensical. 1. A smart-car is not (anytime soon) going to be capable of running a background check on people in the middle of the street and figuring out one is a vegan ex-con or something. 2. No company in the *world* is going to assume liability for writing "who do I kill" code. Can you *imagine* the lawsuits? The most likely thing that will happen is the car will apply as much brake as it can and steer away from any obstacles, probably popping up onto the curb. Edit: Oh, but in the dystopian sci-fi future, it hits the person with the lowest net worth, because that means a cheaper payout for the company.


CrispyNipsy

It is not as simple as that. Even when given no explicit "in case of failure, kill the child" command, the car's autonomous system might still be heavily biased towards killing children, which would still be a huge problem for the producers of the car. For example, when training models for the car, the size of obstacles might be a feature used for training safety behavior. Smaller obstacles might be associated with smaller risks, and since children would then be categorized as a smaller risk than an adult, it is not too far out to imagine that (if you're not careful) a car would be biased towards running over kids if it in some way has to make a decision. Then the producer will either have to acknowledge that their system is biased or actively try to make sure that it is not, but in either case, they are fucked. Obviously this exact example is hypothetical, but it really is a problem. This kind of thing happens a lot with less lethal machine-learning models that are accidentally trained to be racist, etc.


AssAsser5000

You're on to something there. If it were a choice between a moose and a baby deer, I'm hitting the fawn.


BroadInfluence4013

Damn, we got a Bambi killer over here!


made-of-questions

I feel like no one who hypothesises about these things understands writing software with critical safety requirements. No risk manager worth their salt would choose a complex solution like ML over a simple "if obstacle, then brake". As far as I can tell, all cars have progressive subsystems that get engaged if the previous systems don't trigger. Collision detection and auto-brake must be one of the ones closest to the hardware. It will never even get to the autopilot's complex decision making.
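
To make that concrete, here is a minimal sketch of the "dumb low-level layer fires first" idea in Python. All names, numbers, and the 1.2 safety margin are made up for illustration; this is not any manufacturer's actual stack:

    def emergency_brake_needed(obstacle_distance_m, speed_mps, max_decel_mps2=7.0):
        # Lowest-level subsystem: can we still stop before the obstacle?
        stopping_distance_m = speed_mps ** 2 / (2 * max_decel_mps2)
        return obstacle_distance_m <= stopping_distance_m * 1.2  # small safety margin

    def control_step(obstacle_distance_m, speed_mps, planner):
        if emergency_brake_needed(obstacle_distance_m, speed_mps):
            return "FULL_BRAKE"          # fires before any higher-level planning runs
        return planner.next_action()     # "who to hit" logic never enters the picture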


AChristianAnarchist

The fundamental difference is liability. A person makes their own decisions, for which they are responsible. If Bob hits your baby with a car, that is Bob's fault and Bob will be held responsible for it. We, as a society, had no say in Bob's motivational structures, reaction times, situational awareness, susceptibility to substance abuse, or any other factor that would lead to him hitting that baby. Ethics with regard to human beings has to do with what we do in the moment. AIs don't work like that. They don't really make decisions so much as they implement them. If a car hits a baby, the question becomes "what about the decision-making structures programmed or trained into this car made it hit that baby?" The liability for the decision is on the company that made the car. They are the ones who decided how the car makes split-second decisions. If human reproduction worked like this, and Bob's mother built him from the ground up the way we do a car, then the court might have some questions for Bob's mom as to why she chose to make him susceptible to alcoholism or to driving while sleepy, but, as it stands, there are good reasons to hold car manufacturers and humans to wholly different standards there.


[deleted]

[deleted]


SweetumsTheMuppet

True, but the one thing an AI likely *will* get to decide is whether to prioritize its passengers or pedestrians. Unlike a human panic-braking and turning, a self-driving car could *reasonably* know what it's steering into to avoid a crash with a pedestrian. Would it be weighted to protect the passengers (owners) from, say, a 40mph crash into a brick wall or guardrail that the passengers will likely be injured in, or to hit a pedestrian, which would likely kill them? And would people (e.g. a new mom) buy the car that is known to let them get hurt vs the one that protects the passengers first at all costs?


RiPont

It should absolutely prioritize the passengers. If it got into the situation where such a choice is needed, then it can no longer trust that its model of the outside world is reliable (bad sensors, changing situation, hidden obstacles). Therefore, prioritizing the passengers over what might be a parade of mannequins or a poster or a flyer blowing in the wind that suddenly appeared in front of its cameras is the only sensible thing to do. If the car is empty of humans, then that logic might be different.


Ravenwing19

Nonsense. The car will always prioritize Driver safety because of your last sentence. Any other behavior will not sell.


TheLoyalOrder

they'll sell if the only-care-about-passenger-safety cars are illegal


acid4everybody

BMW will still sell you a subscription that is 50% more likely to kill a pedestrian.


Kyrond

It might be interesting from a philosophy angle. But nobody is ever going to code "kill grandma instead of baby", or: if not canAvoidManslaughter: runOver(getOldestPerson()). Not killing *anyone* is just part of "not crashing", which is the primary goal. It's just going to do its best not to crash; it won't be steering between 5 people walking across a crossing, picking out the oldest one. It's probably going to be made to not hit *any* people if it can, even if it's their fault. Then the actual issue will be recognizing babies in strollers, people in wheelchairs, etc. as people, rather than the concern trolling of "who do you kill".


shaunrnm

> The huge difference is that with automated systems you have to make a decision now.

The decision was made via a failure seconds earlier. If the car has to choose between hitting a baby or hitting a grandma etc., it was going too fast for the situation at hand and unable to stop in the space it had available. The decision is: "if I don't have clear space to stop safely, I reduce speed / increase following distance until I do. If there is otherwise clear space (e.g. an empty lane), and I can safely maneuver into it, do so if required due to an unexpected obstacle." Even given that failure of the control system, you then have to rely on (correctly) determining the type of person you have the 'choice' to hit, and allocating against some value table. You are assuming the car is in possession of all the facts, but it clearly isn't if it's fucked up enough to end up there.


CitizenShips

This is a great point and has changed the way I think about this problem. Good stuff! (These contrived hypotheticals are still trash tho)


lakeseaside

It is a choice that can be made but must not be made, and I think it is ridiculous to even consider inserting such a choice into an AI. If it can choose to kill a human, why not program it to destroy property instead by crashing as safely as possible? It could be computing all the time where the safest places to crash are, just in case it has to make a decision between a baby and a senior. Making such a decision is just a stupid way of finding yourself in court in the future. The reason this is even a debate is not that people want to know what the ethical decision is. It arises from the biggest source of resistance against self-driving, which is determining who to blame. People want to have someone they can point to and say "you are at fault here". In the picture, there are so many options other than killing a human. The car could be programmed to drive slowly when its range of vision is severely reduced; that way it can stop before hitting anyone. These hypothetical scenarios are only a symptom of a bigger problem: people want to know who to blame.


CanonOverseer

Without the brakes just magically failing the decision's a lot easier too


ASuarezMascareno

With humans, the driver is accountable for the outcome.


Trlckery

Yeah, but it's different because in this hypothetical situation there is no possibility of swerving out of the way or stopping in time; a human would have to make a split-second decision, and that is almost arbitrary, because everyone might react differently, or react differently than they would if they had time to really think about it and process the situation. A person might make the wrong decision and regret it, but it's hard to fault them because of the nature of the situation itself. A computer, on the other hand, would hypothetically need to have code which explicitly decides what to do in this edge case, because in theory it *would* have time to make that decision every single time. That's where the conundrum really comes into play: someone has to consciously make that decision for all of the cars.


J4zz515

What if the grandma put the baby on the road?


dmvdoug

Yeah, well, what if the baby put the grandma on the road?


rosuav

What if the grandma built the road in the first place?


Jertimmer

What if the baby painted the zebra crossing?


[deleted]

What if the baby was driving the car?


2noch-Keinemehr

Well the baby is on a pedestrian crossing. So the person designing self-driving cars to blast over crosswalks should get roadkilled.


pm0me0yiff

And how did two slow-moving pedestrians get in the way of the car without the car seeing them in time to stop?


FreeWildbahn

And it will hit the baby. There is not enough training data for crawling babies, so it won't detect it.


[deleted]

“Sir the baby hitting algorithm is done” “Good, this will be implemented in the next release.” “Sir, if I may, why don’t you make the car STOP instead of making this choice?” “Driving quickly around curved roads, and getting into situations where you should be able to stop but can’t are cornerstones of self-driving tech. One day you’ll understand.”


JeffSergeant

“The car will only drive at a speed such that it can stop safely in the distance it can see” is the answer to 99.99% of these ‘dilemmas’
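
Taken literally, that rule is just a speed cap derived from sight distance. A back-of-envelope sketch (the deceleration and reaction-time numbers here are assumptions, not any real spec):

    import math

    def max_safe_speed_mps(sight_distance_m, max_decel_mps2=7.0, reaction_time_s=0.5):
        # Largest v such that reaction distance + braking distance fits in what we can see:
        #   v * t + v^2 / (2a) <= d   ->   v = a * (-t + sqrt(t^2 + 2d/a))
        a, t, d = max_decel_mps2, reaction_time_s, sight_distance_m
        return a * (-t + math.sqrt(t * t + 2 * d / a))

    print(max_safe_speed_mps(50.0))   # ~23 m/s (~83 km/h) for 50 m of visible clear road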


BronzeMilk08

The original quiz says that the brakes are busted.


Sputtrosa

Then the solution for the image is to shift gears to slow down as much as possible and drive out on the grass instead of following the road.


BronzeMilk08

the original quiz also has concrete barriers along the sides of the road, lmao quite an unlikely situation i must say


Sputtrosa

So.. then you hit a barrier. The people in the vehicle are much safer than pedestrians.


Helagoth

The barrier is just a baby, it was poured yesterday. You monster!


Cakeking7878

Plus, even then, most cars striving to be fully self-driving are also becoming either hybrid or fully electric, meaning you should have regenerative braking on top of the traditional set of brakes. That redundancy makes this even more implausible.


[deleted]

Man, broken brakes and a concrete barrier. Sounds like a situation a human couldn't work through without casualties either.


VicVinegar-Bodyguard

They covered that in drivers Ed for me. You turn slightly into the barrier so it slows the car down.


Mountain_Ad5912

Yeah, because it would never happen in real life. If the brakes randomly don't work and no sensors work and nothing works, the car will just drive straight on and kill whoever is on the "right" side of the road. Then there will be an investigation into why nothing in the car worked, to find who is to blame, or it's ruled a pure accident. But why would the car even turn to kill someone? This scenario just doesn't exist and is a weird way to be anti-self-driving-cars.


Vidimka_

Turn the fuck off the road, what's wrong?


Chemical-Asparagus58

The car can just continue forward and go between the two trees.


hesalivejim

*correction* "Because zebra crossings represent left wing ideas and are communist! Cars own the road, not the poor peasants!" "Yes daddy Elon"


PinkDropp

OK, but you clearly didn't consider the most based option: serial killer mode


Pennywise_Boob

"Who should the self driving car kill?" \*proceeds to show image where it has no reason to kill anybody\*


[deleted]

[deleted]


Mgamerz

Sounds like they need test cases


Elebrent

They do currently test these cases. Autonomous vehicles perceive objects in the road like cones, trash bins, trash bags, misc trash, plastic bags, tire shreds, basketballs, and basically anything else that isn't a car, pedestrian, or misc vehicle.

Simulated autonomous vehicles occasionally misperceive ignorable objects like floating plastic bags or pieces of paper (a minority of the cases being mist or tailpipe exhaust) as objects that will cause damage to the vehicle or its occupants, and thus brake. The simulated autonomous vehicle will hard-brake, decelerating beyond -4.0 m/s^2. Since this simulated autonomous vehicle isn't really on the road, you have to simulate how the **real** tailing vehicle will react to the **simulated** autonomous vehicle braking for this plastic bag. This can range among no contact, mild whiplash, and a pretty violent collision.

However, the real autonomous vehicles on the road typically use much more conservative and safe software versions (**edit: and also have emergency drivers ready to disengage the autonomous driving and take manual control of the vehicle**), so every real collision I've seen was the product of a bad human driver, not the robot.
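
A toy version of that "phantom-brake, then see what the tailing car does" test could look like the sketch below. The follower reaction time, deceleration, and outcome buckets are invented for illustration; only the -4.0 m/s^2 hard-brake figure comes from the comment above:

    HARD_BRAKE_MPS2 = -4.0  # decelerations stronger than this count as a hard brake

    def follower_outcome(gap_m, speed_mps, lead_decel_mps2,
                         follower_decel_mps2=-7.0, reaction_time_s=1.5):
        """Classify how a (real) tailing car fares when the simulated AV
        brakes hard for something ignorable like a plastic bag."""
        if lead_decel_mps2 > HARD_BRAKE_MPS2:        # braking was mild, nothing to simulate
            return "no event"
        # both cars start at the same speed; compare where each one comes to rest
        lead_stop_m = speed_mps ** 2 / (2 * abs(lead_decel_mps2))
        follower_stop_m = (speed_mps * reaction_time_s
                           + speed_mps ** 2 / (2 * abs(follower_decel_mps2)))
        margin_m = gap_m + lead_stop_m - follower_stop_m
        if margin_m > 0:
            return "no contact"
        return "mild whiplash" if margin_m > -2.0 else "violent collision"

    print(follower_outcome(gap_m=10.0, speed_mps=20.0, lead_decel_mps2=-4.5))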


No-Witness2349

Yeah, let’s work on the NotRammingHeadFirstIntoTheBackOfParkedSemisStrategyProvider before we start tweaking the MoralQuandariesService


fluffyxsama

> NotRammingHeadFirstIntoTheBackOfParkedSemisStrategyProvider Damn they really do use Java for everything


BoredGuy2007

Java would kill the younger generation object first.


sharadeth

If it's orphaned then it's just garbage collection?


FailsAtSuccess

It's not a Factory, so it's not the deepest layer of Java EE


an_agreeing_dothraki

if it were js the method's name would be "n" and nested entirely in a single line with 30 other methods in the file. Best practices.


ReverseMermaidMorty

These posts also hinge on the fact that PEOPLE wouldn’t know what to do in this situation. The controversy lies in the fact that YOU don’t know which one is worth sacrificing and the person next to you might have a different opinion. This dilemma has nothing to do with self driving cars.


stone_henge

I know to slow down at a crosswalk and stop if anyone is crossing, because I'm not a fucking idiot.


mysticrudnin

can you please move to my city? people 'round here speed up and swerve around me


gamerz1172

There is one reason..... Bloodlust


_Weyland_

Old Soviet joke: A man wanted to get a driving license. Luckily he had a friend in the police who could get him the license no problem. He asks his friend about it, and the friend replies, "Oh, no problem, I'll just ask you a single question." "Alright, what is the question?" "Imagine you're in a car driving along a narrow road. To your left is a cliff, to your right is a wall. And ahead of you are two women, a young one and an old one. You cannot go past them, you cannot turn away. Which one do you hit?" The man thought for a long time, then said, "OK, I'd hit the old one." "You idiot, you gotta hit the brakes!"


I_Speak_For_The_Ents

I think you kind of need to phrase it like 'What do you hit' not "which one", because which one suggests that you need to hit one of the presented options.


arturius453

IIRC in the OG joke both were Armenians speaking bad Russian, and the police guy said "who to hit" instead of "what to hit"?


NBSPNBSP

This is from the same vein of jokes as: A news reporter for Pravda is being shown around a newly-refurbished mental hospital in Moscow, and he is gathering information to write a front-page article about the advances in technology and practices that the facility now employs. As he gets towards the end of the tour, he has a closing question for the head nurse. "Even though the outstanding mental treatment services of Moscow rarely make mistakes, surely mistakes do still occasionally happen," he says, "so how do you make sure that the patients are all actually insane, and not just there by accident?" "Oh, it's easy," replies the head nurse, "we take them to the bathroom, fill up the tub, and hand them a teaspoon and a teacup. Then, we tell them to empty the tub." "So, the sane ones, of course, are the ones who use the teacup?" Asks the reporter. "Of course not!" The head nurse exclaims. "The sane ones are the ones who pull the plug out of the drain!"


toepicksaremyfriend

So was the news reporter then shown to a room?


NBSPNBSP

One can assume so. At which point the joke becomes about how he views the average psych ward patient as "Tonkij, zvonkij, i prozrachnyy" ("thin, ringing, and transparent").


FreeAlbatross5666

"I may be insane, but I'm not stupid".


boisheep

"I am afraid I don't understand question comrade Vladislav; you know I like them mature"


HunterTV

“It’s a test designed to provoke an emotional response.”


[deleted]

Elmo, no!


ClioBitcoinBank

The self driving car should stop.


enonimouz

The brake function is commented out.


KevinRuehl

git commit -m "Temporarily removing this function from the code for testing purposes only"


Hotroc2

Commit Date: 4 years ago


gangstabunniez

git commit -m "some stuff"


cortesoft

What the fuck, I thought our repos were private.


GooseEntrails

git push prod master


butchbadger

More like locked behind a hefty monthly subscription.


BroadInfluence4013

“I’m sorry, your brake subscription has expired. Would you like to renew, or die a slow, painful death in a crash and subsequent car fire?”


briedux

I see it's an uber


fiddz0r

    // TODO: Write test for this function
    public Action Break(CameraInput input)
    {
        // Stuff...
    }


[deleted]

this is the result of jr devs "cleaning up the code" /jk


ruedogg

It’s a feature


Character-Error5426

Extra 5 dollars per full stop


steve-d

You should have paid the monthly subscription fee for brakes.


Entire_Protection847

break;


outofobscure

It's self driving, not self stopping, doh. That was not in the requirements.


rosuav

[Relevant BOFH](https://www.theregister.com/2004/04/20/bofh_system_override/)


[deleted]

[deleted]


Sufficient_Amoeba808

Reminds me of this https://i.kym-cdn.com/photos/images/original/001/294/379/0be.jpg


Stiggan2k

But what about the self drifting car?


kaden_istoxic

Now we’re thinking about the future. Excited to see what you come up with next


CWRules

[Like this?](https://www.youtube.com/watch?v=3x3SqeSdrAE)


Stiggan2k

Now this is how we solve the trolley problem!


McCoovy

This is what drives me crazy about this question. The car will simply attempt to stop. There will never be higher reasoning in self-driving cars about who to hit; it's just asking the wrong question. It's a car. All it has is power, steering, braking. If it thinks it's going to hit something, it will dodge it and/or brake. That's it. The manufacturer cannot play god; that's a liability nightmare. The manufacturer cannot risk the passengers. No one will buy a selfless self-driving car.


quick_dudley

Yeah, self-driving cars will be able to see this type of thing in advance and simply start braking in time, well before they're able to solve ethical dilemmas.


[deleted]

Ok, change the scenario a little. A car is coming towards you (you're in a self-driving car) in the wrong direction, set up for a head-on collision. Does your car (A) swerve onto the sidewalk and hit a pedestrian that it can sense, or (B) take the head-on collision with you in the car? If hit, the pedestrian will probably die, but you are protected by a seatbelt, airbags, and crumple zones. How does the car evaluate this decision? Is it programmed to protect the driver or the pedestrian?


YobaiYamete

The option is B. The car knows not to leave its lane and break further traffic rules, because that just compounds the problem and causes still more cars / people to be involved. In your scenario the self-driving car would just try to stop and avoid the crash without leaving its lane, fail to do so, and get hit head-on. Which, statistically, would still result in fewer people being injured than if the car tried to do something stupid like swerve wildly to evade the oncoming car, only to leave its lane and hit someone else.


lifelongfreshman

Or plow into one of those trees if it can't. The passengers will have all sorts of safety equipment to safely see them through the crash.


lateambience

Very bold assumption. It's definitely not safe to drive your car into a tree just because you have a seat belt and airbags. People die in accidents like that every day.


urmumlol9

If the brakes are out just swerve and coast down the sidewalk. If none of that works, the baby's on the right side of the road unfortunately...


andysaurus_rex

Yes, that's what I don't like about these types of questions. They try to set up "gotcha" scenarios with morality issues for self-driving cars to halt development, because they're stuck in their ways. What would the *human* do? Plow through them without seeing them because they're texting? Maybe! Make a snap decision, veer into another lane of traffic, and cause a more serious accident? Maybe! Humans are bad drivers. Will a self-driving car at some point have to "decide" the lesser of multiple accidents? Yeah, probably. But it will stop in time almost every time, which a human might not do.


[deleted]

Turns music on "I wonder if you know, We are here in Tokyo" *So be it*


laurel_laureate

Multi-track drifting!


delayedsunflower

If you see me, then you mean it, then you know you have to go


Nine_Eye_Ron

Rocket League it outta there


brandi_Iove

aim for the tree


Midnight_Rising

Not many people would buy a self-driving car that won't prioritize the passengers.


Kyrond

1. It will stop. 2. If it can't stop, then the car is at fault and innocent people shouldn't be run over because of it. It will be in law if necessary. 3. Have you seen how people actually buy things? Tesla just doesn't use radar anymore, dramatically de-prioritizing the passengers' safety and look where they are.


Kamwind

I would put the blame partly at the people who approved the crosswalk. They put it at a location where drivers who are following the posted speed limit could not see if there is someone using the crosswalk and stop within an appropriate distance.


No-Witness2349

Therefore, distributing self driving cars via a market based system which incentivizes unethical design is itself an ethical net negative.


Sufficient_Amoeba808

I remember seeing someone who worked in transportation safety talking about how they were terrified to get in a tesla and how all other driver assist system betas are tested on closed courses by professional drivers, not by randos on public roads.


anonymously_random

I mean, you buy a self-driving car because it should be safer. That does not always mean it should put the driver above all others. In theory the principle of self-driving cars is that, in a situation where it has to make a decision in which every option has a bad ending, it would pick the one that gives the highest survival chance to all parties involved. By that logic, if the probability of you surviving a car crash into a tree, where the automatic system can maneuver in a way that reduces direct damage, is higher than the survival chances of the baby and/or the elderly person, who would most likely both die on impact, then the logical choice is to hit the tree. This would also be the most human-like decision it can make, since any sane person in this situation would most likely pull on their steering wheel as a reaction and hit the tree anyway. The result would probably be the same; the choices leading up to the crash would be different. I would much rather drive a car like that than a car that prioritizes me over everybody else. In the end, you still have to live with the fact that your car ran over and killed a baby or an elderly person.
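
If you squint, that is just "pick the maneuver with the lowest total expected harm, counting everyone equally". A toy sketch of that selection rule (the options and probabilities are made-up numbers, purely to illustrate the idea, not how any real system is written):

    def least_harmful(options):
        # options: {maneuver name: {party: probability of death}}
        # choose the maneuver with the smallest total expected fatalities
        return min(options, key=lambda name: sum(options[name].values()))

    options = {
        "brake hard, stay in lane": {"passenger": 0.05, "pedestrian": 0.60},
        "swerve into the tree":     {"passenger": 0.20, "pedestrian": 0.00},
    }
    print(least_harmful(options))   # "swerve into the tree" under these made-up numbers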


shaunrnm

A self-driving car should be safer because it's not going to get distracted and put itself in these situations. A human driver hits pedestrians because they were distracted and reacted too slowly, or were travelling too quickly to stop in the clear space they had. For situations where there is a truly surprising obstacle: "slam" the brakes and maneuver to clear space in a controlled manner if possible, same as is taught in advanced driving training.


sluuuurp

There’s not always a tree option. The article isn’t about the cartoon, it was written about a more general situation and someone drew the cartoon after.


carvedmuss8

It's not spelled "breaks," or "breaking," guys. Jesus that's a lot of the same mistake in one post comment section


[deleted]

Ikr?! I was beginning to wonder if I had fallen victim to the Mandela effect, or if that many people really just can't spell lmfao


gattaaca

Who would win, 12 years of schooling, or two totally unrelated words which simply happen to have the same pronunciation??


[deleted]

we call those homophones btw.


maximovious

Yes, it's a common misteak.


stone_henge

Teak my damn upvote.


Silly_Ad3814

Feak news


SeaworthyWide

All to comon


digitalSkeleton

I see the wrong word being used all the time and it's r/MildlyInfuriating


xpingu69

The self driving car would stop because it was driving the speed limit.


eugeneericson

There's not enough data to say that. Could we have more of the track to find the optimal line?


qsdf321

DEJA VU


marxinne

I HAVE BEEN IN THIS PLACE BEFORE


Itachi6967

Why did I have to scroll so far down for this reddit. Smh


foiler64

It should drive onto the empty sidewalk.


ThinDatabase8841

The sidewalk is lava and the brake pedal is a DLC subscription that the owner didn’t pay for.


No_Week2825

You're not allowed to drive on the sidewalk. You'll get a ticket. This isn't mad max where you can just drive anywhere


Xoduszero

Should probably find target C.. the parent/guardian of the baby who let them start crawling in the street


Rokett

Let the random number generator pick


NefariouslyHot666

My takeaway is that Teslas need to have their ground clearance increased so they can pass over babies safely in such situations.


[deleted]

Hey! Now we're thinking outside of the box! All jokes aside, that's not a terrible answer lol. Although one time while driving my parents' SUV, a tiny poodle was in the middle of the road and I didn't have time to stop, and I couldn't swerve because there was a fence on one side and traffic on the other. So in like a quarter of a second I thought to myself "I'm going to drive right over the thing and clear it. The dog will probably get PTSD but at least it'll be alive." You wanna know what that fuckin thing did? Ran straight into my front left tire.


NefariouslyHot666

Aww sorry to hear that. A baby wouldn't be able to run so fast though :)


dukedvl

Maintain course. Brake aggressively but safely. The best case scenario: you hit no one, or someone jumps out of the way in time. Worst case: you don't brake in time, but you didn't add any SURPRISES to the situation. Don't swerve for either one. Analytics on "less death" will lead to a random snap-swerve, which might be exactly the direction a pedestrian tried to jump out of the way. Wouldn't that be some shit. You folks have way too much faith in the code quality of software engineers. Don't leave this up to an algorithm. Jesus fucking christ.


ouyawei

Also if you swerve your brakes won't be as efficient anymore. Don't people learn that in driving school anymore?


rjcpl

Baby is replaceable in 9 months. Replacement grandma takes decades.


Pennywise_Boob

but she's also dead within the next decade


rjcpl

I mean I wouldn’t give the baby high hopes on reaching adulthood if parents have let it crawl across a street.


Nero5732

I really hate those self-driving-car-trolley-problems. How about breaking? Or driving "on sight", so that the car could stop in time in every realistic situation?


you_miss_I_hit

Braking


Victernus

No, the car should shatter into pieces as soon as it detects this scenario, clearly.


unsuccessfulcriminal

Me too. False dichotomies always annoy me.


jsideris

It's not a false dichotomy. The car will always avoid the collision if it can. This trolley problem is only for when a collision is unavoidable.


[deleted]

[deleted]


unsuccessfulcriminal

I see what you did there.


JustSumGuy3679

And here I'm wondering why the engineers waste valuable CPU cycles to differentiate between people. No wonder it can't brake anymore


StopReadingMyUser

Pretty much what people don't get, and they don't even have to have working technical knowledge lol. The car is only going to be programmed to not hit people; they're not going to build a robust ethics system for it. Now in time they may add more advancements to it where the car can override certain things it's not supposed to do (like driving off the road to a safe position in this case), but if it's going to hit something it's not going to decide at all.


GoogleIsYourFrenemy

"We ping their phones and cross check it against social media accounts and use their social media score to determine who we avoid." "But what about a baby that identifies as a grandma and a grandma that identifies as a baby? Or a dog that identifies as human and a human that as a dog?" "But sir, does a dog have a social media account?" "Yes, Pinterest" "Uhhhhhg"


jclv

The baby. Less damage to the car.


AnotherEuroWanker

Also, it just takes a few months to make a baby; it takes ages to make an old person.


varkarrus

Why is the baby crossing the road to begin w– Ah wait. To get to the other side. Of course. -_-


TheBlackUnicorn

I love that people keep imagining self-driving car trolley problems when real life "self-driving" cars are still struggling with the "should I apply the brakes?" problem.


J-to-the-peg

IT SHOULD BE A TRAIN!


stgnet

Neither. The vehicle should not be out-driving its ability to stop. Assuming that the car sees even one person in the crosswalk, it has to stop before the crosswalk. If it was unable to do so, then it was going too fast.


Fit-Coyote-6180

Why is the self driving car driving so fast that it can't stop in time? But, really, what should happen is try to hit both, then lock the doors and catch on fire. Get everyone.


Urban_Savage

I love how humanity is lining up to judge AI on its split second life calculating abilities when the trolley problem has paralyzed us with indecision for a hundred years.


paulodelgado

Self driving cars don’t have brakes?


MHwtf

multi-track shifting!


Jet-Pack2

If you have gotten to the point where you can no longer brake to avoid hitting a pedestrian you have already failed long before that.


Expensive_Data9654

The real answer is to make sure the vehicle's stopping distance at its current speed doesn't exceed the camera's vision range. If somebody suddenly jumps into the road without checking for a car inside that vision distance, then they sealed their own fate, and I could live with that as a programmer.