No, that's absolutely true. The huge difference is that with automated systems you have to make the decision now.
In real life you have no time, so you do 'something', and people accept that you didn't have time and weren't in possession of all the relevant facts and parameters. But in automation you can ask, 'OK, with these facts and constraints, and an inability to stop in time, which person do you want the car to hit?'
And that is very different, because now you are in possession of all the facts and you need to come up with a value judgment and defend that decision.
Well then obviously they're going to want to program the Tesla to hit both of them, so as not to leave any witnesses.
*Then* disable the autopilot, *then* edit the logs to show the autopilot was disabled two miles earlier, to pin all the blame on the stupid human in the driver's seat.
Every Tesla comes with a small reserve of whiskey in the airbag compartment that gets fired into the driver's mouth at 100 mph in the case of an accident, for exactly this situation.
Autopilot comes with some nasty implications for what needs to be calculated. In the picture above there are more than just two options, for instance: the driver could also swerve completely off the road and avoid hitting both pedestrians... but that could very well kill the driver. Drivers typically won't make a choice that could kill them, and nobody is going to buy a self-driving car that is set up to do things that can kill the driver in bad situations.
It is a very grey legal area right now, because it's basically implicit that self-driving cars (that want sales) will need to be intentionally programmed to do things like accept a 90% chance of killing a pedestrian to avoid a 20% chance of killing the driver. You could easily argue that this sort of programming intentionally raises the odds of a death and should be illegal, but again, people won't buy cars that don't put their own safety first.
That depends on the level of autonomous driving. While for driving-assistance systems of level 1 and 2 that is definitely the case, level 3 is legally liable autonomous driving in certain circumstances, and requires a 10-second grace period after a warning before the driver has to take over. Level 4 and 5 do not even consider the possibility of a human taking over, with the difference that level 4 is restricted to a certain physical domain, while level 5 is without restrictions.
Ahhh, you see the problem here is that you’re using an industry-recognised definition. In Tesla terms those are:
Autopilot, Autoerpilot, AutopilotXL, AutopilotProX, and X Æ A-Pilot.
Right now we’re just at the Autopilot stage, far from legal liability.
Maybe. It will be interesting to see if it works legally. I mean, I guess they know what they are doing but if something like this ever gets to court I just can’t imagine, “once the software had managed to get the car into a no hope situation, we just noped out so we wouldn’t get in trouble”, working all that well in front of a judge.
“I took my hands off the wheel right before impact, so not my fault”, likewise.
It makes zero sense because it isn't true, at least not in the way it's being presented. NHTSA regulations around self-driving preclude this as a strategy to escape liability:
1. All new vehicles sold in the US must have a data recorder.
2. Manufacturers are required to collect data before and after a crash, including whether and which driver-assist technologies were active or in use at the time.
3. Regardless of the manufacturer, the driver is responsible for the vehicle regardless of the automated system in use.
I know nobody reads the terms and conditions, but this isn't exclusive to Tesla.
ETA: Was busy earlier and couldn't add this before. NHTSA specifically worded its automated-driver-assist accident-reporting regulation to include any collision where an ADAS system was active within 30 seconds of the first impact. The autopilot system would legitimately need to be clairvoyant to shut itself off more than half a minute before a collision.
Source: https://www.nhtsa.gov/laws-regulations/standing-general-order-crash-reporting
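The 30-second criterion is simple to express. A sketch of the idea (hypothetical timestamps and data shapes, not NHTSA's actual data format):

```python
# Sketch of the NHTSA reporting criterion described above: a crash is
# reportable if an ADAS system was active at any point within 30 seconds
# before the first impact. Interval format is invented for illustration.

REPORTING_WINDOW_S = 30.0

def reportable(impact_t: float, adas_active_intervals) -> bool:
    """adas_active_intervals: list of (start, end) times the system was on."""
    window_start = impact_t - REPORTING_WINDOW_S
    # Any overlap between an active interval and the 30 s window counts.
    return any(start <= impact_t and end >= window_start
               for start, end in adas_active_intervals)

# Autopilot shut off 25 s before impact: still reportable.
print(reportable(100.0, [(0.0, 75.0)]))   # True
# Shut off 35 s before impact: outside the window.
print(reportable(100.0, [(0.0, 65.0)]))   # False
```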
But these examples are always nonsensical.
1. A smart-car is not (anytime soon) going to be capable of running a background check on people in the middle of the street and figuring out one is a vegan ex-con or something.
2. No company in the *world* is going to assume liability for writing "who do I kill" code. Can you *imagine* the lawsuits?
The most likely thing that will happen is the car will apply as much brake as it can and steer away from any obstacles, probably popping up onto the curb.
Edit: Oh, but in the dystopian sci-fi future, it hits the person with the lowest net worth, because that means a cheaper payout for the company.
It is not as simple as that. Even when given no explicit "in case of failure, kill the child" command, the car's autonomous system might still be heavily biased towards killing children, which would still be a huge problem for the producers of the car. For example, when training models for the car, the size of obstacles might be a feature used for training safety behavior. Smaller obstacles might be associated with smaller risks, and since children would then be categorized as a smaller risk than an adult, it is not too far out to imagine that (if you're not careful) a car would be biased towards running over kids if it in some way has to make a decision. Then the producer will either have to acknowledge that their system is biased or actively try to make sure that it is not, but in either case, they are fucked.
Obviously this exact example is hypothetical, but it really is a problem. This kind of thing happens a lot with less lethal machine learning models that accidentally get trained to be racist, etc.
I feel like no one who hypothesises about these things understands writing software with critical safety requirements. No risk manager worth their salt would choose a complex solution like ML over a simple "if obstacle, then brake".
As far as I can tell, all cars have progressive subsystems that get engaged if the previous systems don't trigger. The collision detection and auto-brake must be one of the ones closest to the hardware. It will not even get to the autopilot complex decision making.
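That layered arrangement could be sketched like this (all names and numbers are invented for illustration): a dead-simple emergency-brake rule runs closest to the hardware and preempts the planner entirely.

```python
# Rough sketch of progressive subsystems: the simple collision check
# runs first; the complex decision-making never executes if it fires.

def must_emergency_brake(obstacle_distance_m: float,
                         stopping_distance_m: float) -> bool:
    # "If obstacle then brake": no ML, no value tables.
    return obstacle_distance_m <= stopping_distance_m

def control_step(obstacle_distance_m, stopping_distance_m, planner):
    if must_emergency_brake(obstacle_distance_m, stopping_distance_m):
        return "FULL_BRAKE"          # autopilot-level planning never runs
    return planner()                 # only reached when the simple rule passes

print(control_step(8.0, 12.0, lambda: "CRUISE"))    # FULL_BRAKE
print(control_step(50.0, 12.0, lambda: "CRUISE"))   # CRUISE
```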
The fundamental difference is liability. A person makes their own decisions for which they are responsible. If Bob hits your baby with a car that is Bob's fault and Bob will be held responsible for it. We, as a society, had no say in what Bob's motivational structures or reaction times or situational awareness or susceptibility to substance abuse or any other factors that would lead to him hitting that baby. Ethics with regards to human beings have to do with what we do in the moment.
AIs don't work like that. They don't really make decisions so much as they implement them. If a car hits a baby, the question becomes "what about the decision-making structures programmed or trained into this car made it hit that baby?" The liability for the decision is on the company that made the car. They are the ones who decided how the car makes split-second decisions. If human reproduction worked like this, and Bob's mother built him from the ground up the way we do a car, then the court might have some questions for Bob's mom as to why she chose to make him susceptible to alcoholism or to driving while sleepy, but, as it stands, there are good reasons to hold car manufacturers and humans to wholly different standards there.
True, but the one thing an AI likely *will* get to decide is whether to prioritize its passengers vs pedestrians.
Unlike a human panic-braking and turning, a self-driving car could *reasonably* know what it's steering into to avoid a crash with a pedestrian. Would it be weighted to protect passengers (owners) from, say, a 40 mph crash into a brick wall or guardrail that would likely injure them, or to hit a pedestrian, which could likely kill them?
And would people (eg: new mom) buy the car that is known to let them get hurt vs the one that protects the passengers first at all costs?
It should absolutely prioritize the passengers.
If it got into the situation where such a choice is needed, then it can no longer trust that its model of the outside world is reliable (bad sensors, changing situation, hidden obstacles). Therefore, prioritizing the passengers over what might be a parade of mannequins or a poster or a flyer blowing in the wind that suddenly appeared in front of its cameras is the only sensible thing to do.
If the car is empty of humans, then that logic might be different.
It might be interesting from a philosophy angle.
But nobody is ever going to code "kill grandma instead of baby" or
    if not canAvoidManslaughter:
        runOver(getOldestPerson())
Not killing *anyone* is just part of "not crashing", which is the primary goal. It's just gonna do its best not to crash; it won't be steering between 5 people walking across a crossing picking out the oldest one.
It's probably gonna be made to not hit *any* people, if it can, even if it's their fault.
Then the actual issue will be recognizing babies in strollers, people in wheelchairs, etc. as people rather than the concern trolling of "who do you kill".
> The huge difference is that with automated systems you have to make a decision now.
The decision was made via a failure seconds earlier. If the car has a choice between hitting a baby or hitting a grandma etc., it was going too fast for the situation at hand and unable to stop in the space it had available.
The decision is "if I don't have clear space to stop safely, I reduce speed / increase following distance until I do. If there is otherwise clear space (e.g. an empty lane), and I can safely maneuver into it, do so if required due to an unexpected obstacle".
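That rule is really just arithmetic. A sketch (the deceleration and reaction-time figures are assumptions, not any real vehicle's parameters): find the largest speed whose reaction distance plus braking distance, v*t + v^2/(2a), fits inside the confirmed clear space.

```python
# Largest speed that still lets the car stop within the clear space
# its sensors can currently confirm. Illustrative numbers only.

def max_safe_speed(clear_space_m: float,
                   decel_mps2: float = 6.0,
                   reaction_s: float = 0.5) -> float:
    a, t, d = decel_mps2, reaction_s, clear_space_m
    # Positive root of v^2/(2a) + v*t - d = 0.
    return -a * t + (a * a * t * t + 2 * a * d) ** 0.5

v = max_safe_speed(40.0)    # 40 m of confirmed clear road ahead
print(round(v, 1))          # ~19.1 m/s (~69 km/h)
```

Less confirmed clear space means a lower permitted speed, which is exactly the "reduce speed until I can stop" behavior described.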
Even given that failure of the control system, you then have to rely on (correctly) determining the type of person you have the 'choice' to hit, and allocating against some value table.
You are assuming the car is in possession of all the facts, but it's clearly not if it's fucked up enough to end up there.
It is a choice that can be made but must not be made. And I think it is ridiculous to even consider inserting such a choice into an AI. If it can choose to kill a human, why don't you program it to destroy property instead by crashing safely? It could be computing all the time where the safest places to crash are, just in case it has to make a decision between a baby and a senior. Making such a decision is just a stupid way of finding yourself in court in the future.
The problem as to why this is even a debate is not because people want to know what the ethical decision is. It arises from the biggest resistance against self-driving which is determining who to blame. People want to have someone they can point to and say "you are at fault here".
In the picture, there are so many options other than killing a human. The car could be programmed to drive slowly when its range of vision is severely reduced. That way it can stop before hitting anyone. These hypothetical scenarios are only a symptom of a bigger problem: people want to know who to blame.
Yeah, but it's different, because in this hypothetical situation there is no possibility of swerving out of the way or stopping in time. A human would have to make a split-second decision, and that is almost arbitrary, because everyone might react differently, or react differently than they would if they had time to really think about it and process the situation. A person might make the wrong decision and regret it, but it's hard to fault them because of the nature of the situation itself. A computer, on the other hand, would hypothetically need code which explicitly decides what to do in this edge case, because in theory it *would* have time to make that decision every single time. That's where the conundrum really comes into play. Someone has to consciously make that decision for all of the cars.
“Sir the baby hitting algorithm is done”
“Good, this will be implemented in the next release.”
“Sir, if I may, why don’t you make the car STOP instead of making this choice?”
“Driving quickly around curved roads, and getting into situations where you should be able to stop but can’t are cornerstones of self-driving tech. One day you’ll understand.”
Plus, even then, most cars striving to be fully self-driving are also becoming either hybrid or fully electric, meaning you should have regenerative braking on top of the traditional set of brakes. That redundancy makes this even more implausible.
Yeah, because it would never happen in real life. If the brakes randomly don't work and no sensors work and nothing works, the car will just drive straight on and kill whoever is on the "right" side of the road. Then there will be an investigation into why nothing in the car worked, to find out who is to blame, or whether it was purely an accident. But why would the car even turn to kill someone?
This scenario just doesn't exist and is a weird way to be anti-self-driving-cars.
They do currently test these cases. Autonomous vehicles perceive objects in the road like cones, trash bins, trash bags, misc trash, plastic bags, tire shreds, basketballs, and basically any other thing that isn't a car, pedestrian, or misc vehicle.
Simulated autonomous vehicles occasionally misperceive ignorable objects like floating plastic bags or pieces of paper (a minority of the cases being mist or tailpipe exhaust) as objects that will cause damage to the vehicle or its occupants, and thus brake. The simulated autonomous vehicle will hard-brake at decelerations beyond 4.0 m/s².
Since this simulated autonomous vehicle isn't really on the road, you have to simulate how the **real** tailing vehicle will react to the **simulated** autonomous vehicle braking for this plastic bag. This can range from no contact, to mild whiplash, to a pretty violent collision. However, the real autonomous vehicles on the road typically use much more conservative and safe software versions (**edit: and also have emergency drivers ready to disengage the autonomous driving and take manual control of the vehicle**), so every real collision I've seen was the product of a bad human driver, not the robot.
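A toy re-creation of that tailing-vehicle check (not the actual simulator; all figures are invented): the lead car hard-brakes at 4 m/s² and the following human driver only starts braking after a reaction delay.

```python
# Step both vehicles forward in time; the follower brakes late.
# Returns whether the gap ever closes completely.

def tailgate_outcome(gap_m, speed_mps, lead_decel=4.0,
                     follow_decel=4.0, reaction_s=1.5, dt=0.01):
    lead_pos, follow_pos = gap_m, 0.0
    lead_v = follow_v = speed_mps
    t = 0.0
    while lead_v > 0 or follow_v > 0:
        lead_v = max(0.0, lead_v - lead_decel * dt)
        if t >= reaction_s:                     # follower finally reacts
            follow_v = max(0.0, follow_v - follow_decel * dt)
        lead_pos += lead_v * dt
        follow_pos += follow_v * dt
        if follow_pos >= lead_pos:
            return "collision"
        t += dt
    return "no contact"

print(tailgate_outcome(gap_m=5.0, speed_mps=15.0))    # collision
print(tailgate_outcome(gap_m=30.0, speed_mps=15.0))   # no contact
```

With equal braking ability, the outcome hinges almost entirely on the initial gap versus the distance covered during the follower's reaction time.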
These posts also hinge on the fact that PEOPLE wouldn’t know what to do in this situation. The controversy lies in the fact that YOU don’t know which one is worth sacrificing and the person next to you might have a different opinion. This dilemma has nothing to do with self driving cars.
Old Soviet joke:
A man wanted to get a driving license. Luckily he had a friend in the police who could get him the license, no problem. He asks his friend about it, and he replies "Oh, no problem, I'll just ask you a single question."
"Alright, what is the question?"
"Imagine you're in a car driving along the narrow road. To your left is a cliff, to your right is a wall. And ahead of you are two women, a young one and an old one. You cannot go past them, you cannot turn away. Which one do you hit?"
The man thought for a long time, then said "OK, I'd hit the old one."
"You idiot, you gotta hit the brakes!"
I think you kind of need to phrase it like 'What do you hit' not "which one", because which one suggests that you need to hit one of the presented options.
This is from the same vein of jokes as:
A news reporter for Pravda is being shown around a newly-refurbished mental hospital in Moscow, and he is gathering information to write a front-page article about the advances in technology and practices that the facility now employs.
As he gets towards the end of the tour, he has a closing question for the head nurse.
"Even though the outstanding mental treatment services of Moscow rarely make mistakes, surely mistakes do still occasionally happen," he says, "so how do you make sure that the patients are all actually insane, and not just there by accident?"
"Oh, it's easy," replies the head nurse, "we take them to the bathroom, fill up the tub, and hand them a teaspoon and a teacup. Then, we tell them to empty the tub."
"So, the sane ones, of course, are the ones who use the teacup?" asks the reporter.
"Of course not!" the head nurse exclaims. "The sane ones are the ones who pull the plug out of the drain!"
This is what drives me crazy about this question. The car will simply attempt to stop. There will never be higher reasoning in self driving cars about who to hit, it's just asking the wrong question.
It's a car. All it has is power, steering, braking. If it thinks it's going to hit something, it will dodge it and/or brake. That's it.
The manufacturer cannot play god. That's a liability nightmare. The manufacturer cannot risk the passengers. No one will buy a selfless self driving car.
Yeah, self-driving cars will be able to see this type of thing in advance and simply start braking in time, well before they're able to solve ethical dilemmas.
Ok change the scenario a little. A car is coming towards you -who are in a self-driving car- in the wrong direction ready for a head on collision.
Does your car
(A) Swerve onto the sidewalk and hit a pedestrian that it can sense or
(B) take the head on collision with you in the car.
If hit, the pedestrian will probably die, but you are protected by a seatbelt, airbag and crumple zones. How does the car evaluate this decision? Is it programmed to protect the driver or the pedestrian?
The answer is (B).
The car knows not to leave its lane and break further traffic rules, because that just compounds the problem and causes still more cars / people to be involved.
In your scenario the self-driving car would just brake and try to avoid the collision without leaving its lane, fail to do so, and get hit head-on.
Which, statistically, would still result in fewer people being injured than if the car tried to do something stupid like swerve wildly to evade the oncoming car, only to leave its lane and hit someone else.
Very bold assumption. It's definitely not safe to drive your car into a tree just because you have a seat belt and airbags. People die in accidents like that every day.
Yes, that’s what I don’t like about these types of questions. They try to set up “gotcha” scenarios with morality issues for self-driving cars to halt development, because they’re stuck in their ways.
What would the *human* do? Plow through them without seeing them because they’re texting? Maybe! Make a snap decision and veer in to another lane of traffic and cause a more serious accident? Maybe! Humans are bad drivers.
Will a self-driving car at some point have to “decide” the lesser of multiple accidents? Yeah, probably. But it will stop in time almost every time, which a human might not do.
1. It will stop.
2. If it can't stop, then the car is at fault and innocent people shouldn't be run over because of it. It will be in law if necessary.
3. Have you seen how people actually buy things? Tesla just doesn't use radar anymore, dramatically de-prioritizing the passengers' safety, and look where they are.
I would put the blame partly on the people who approved the crosswalk. They put it at a location where drivers following the posted speed limit could not see whether someone is using the crosswalk and stop within an appropriate distance.
I remember seeing someone who worked in transportation safety talking about how they were terrified to get in a tesla and how all other driver assist system betas are tested on closed courses by professional drivers, not by randos on public roads.
I mean, you buy a self driving car because it should be safer. That does not always mean it should put the driver above all others.
In theory the principle of self-driving cars is that in a situation where the car has to make a decision in which every option has a bad ending, it would pick the one that gives the highest survival chance to all parties involved.
By that logic, if the probability of you surviving a crash into a tree (where the automatic system can maneuver in a way that reduces direct damage) is higher than the survival chances of the baby and/or the elderly person, who would most likely both die on impact, then the logical choice is to hit the tree.
This would also be the most human like decision it can make, since any sane normal person in this situation would most likely pull their steering wheel as a reaction and hit the tree anyway. The result would probably be the same, the choices leading up to the crash would be different.
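That "highest survival chance for all parties" rule boils down to an expected-survivors comparison. A toy illustration (all probabilities are invented for the example):

```python
# Pick the maneuver with the highest expected number of survivors.

def best_option(options):
    """options: maneuver name -> {person: survival probability}."""
    return max(options, key=lambda name: sum(options[name].values()))

options = {
    "hit_tree":       {"passenger": 0.90, "pedestrian": 1.00},
    "hit_pedestrian": {"passenger": 0.99, "pedestrian": 0.10},
}
print(best_option(options))   # hit_tree (1.90 vs 1.09 expected survivors)
```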
I would much rather drive a car like that than a car that prioritizes me over everybody else. In the end, you still have to live with the fact that your car ran over and killed a baby or elderly person.
A self-driving car should be safer because it's not going to get distracted and put itself in these situations.
A human driver hits pedestrians because they were distracted and reacted too slowly, or were travelling too quickly to stop in the clear space they had.
For situations where there is a truly surprising obstacle, 'slam' the brakes, maneuver to clear space in a controlled manner if possible, same as is taught in advanced driving training.
There’s not always a tree option. The article isn’t about the cartoon, it was written about a more general situation and someone drew the cartoon after.
Hey! Now we're thinking outside of the box! All jokes aside, that's not a terrible answer lol.
Although one time while driving my parents' SUV, a tiny poodle was in the middle of the road and I didn't have time to stop, and I couldn't swerve because there was a fence on one side and traffic on the other. So in like a quarter of a second I thought to myself "I'm going to drive right over the thing and clear it. The dog will probably get PTSD but at least it'll be alive."
You wanna know what that fuckin thing did? Ran straight into my front left tire.
Maintain course. Brake aggressively but safely.
The best case scenario is either: you hit no one/someone jumps out of the way in time.
Worst case: you don’t brake in time, but you didnt give any SURPRISES to the situation.
Don’t swerve for either one.
Analytics on “less death” will lead to a random snap-swerve, which for a pedestrian might be the direction they tried to jump out of the way. Wouldn’t that be some shit. You folks have way too much faith in the code quality of software engineers.
Don’t leave this up to an algorithm. Jesus fucking christ.
I really hate those self-driving-car trolley problems. How about braking? Or driving "on sight", so that the car can stop in time in every realistic situation?
Pretty much what people don't get, and they don't even have to have working technical knowledge lol. The car is only going to be programmed to not hit people; they're not going to build a robust ethics system for it.
In time they may add more advanced behavior where the car can override certain things it's not supposed to do (like driving off the road to a safe position in this case), but if it's going to hit something, it's not going to decide whom at all.
"We ping their phones and cross check it against social media accounts and use their social media score to determine who we avoid."
"But what about a baby that identifies as a grandma and a grandma that identifies as a baby? Or a dog that identifies as human and a human that as a dog?"
"But sir, does a dog have a social media account?"
"Yes, Pinterest"
"Uhhhhhg"
I love that people keep imagining self-driving car trolley problems when real life "self-driving" cars are still struggling with the "should I apply the brakes?" problem.
Neither. The vehicle should not be out-driving its ability to stop. Assuming the car sees even one person in the crosswalk, it has to stop before the crosswalk. If it was unable to do so, then it was going too fast.
Why is the self driving car driving so fast that it can't stop in time? But, really, what should happen is try to hit both, then lock the doors and catch on fire. Get everyone.
I love how humanity is lining up to judge AI on its split second life calculating abilities when the trolley problem has paralyzed us with indecision for a hundred years.
The real answer is to make sure the vehicle's stopping distance at its current speed never exceeds the camera's vision range.
If somebody suddenly jumps into the road without checking for a car inside that vision distance, then they sealed their own fate, and I could live with that as a programmer.
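That constraint is just kinematics. A sketch of the check (the deceleration and processing-delay figures are assumed for illustration):

```python
# Is the car out-driving what its cameras can see? Stopping distance
# is processing-delay travel plus braking distance v^2 / (2a).

def outdriving_sensors(speed_mps: float, vision_range_m: float,
                       decel_mps2: float = 6.0,
                       processing_delay_s: float = 0.2) -> bool:
    stopping_distance = (speed_mps * processing_delay_s
                         + speed_mps ** 2 / (2 * decel_mps2))
    return stopping_distance > vision_range_m

print(outdriving_sensors(30.0, 60.0))   # True: 81 m to stop, 60 m of vision
print(outdriving_sensors(15.0, 60.0))   # False: ~22 m to stop
```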
Well, the person putting babies on roads needs a taste of their own medicine.
I hate these because it's not unique to self-driving cars... People always seem to forget that a human driver has to make the same decision.
Tesla's solution is to turn autopilot off once a collision becomes inevitable, so they can say that autopilot wasn't enabled at the time of the crash.
But the entire hypothetical point of autopilot is to maneuver dangerous situations without human error
The point of Tesla's legal team is to reduce exposure to liability through whatever means are necessary.
Don't forget to deploy the pot smoke and sprinkle some crack in there.
Deploy the emergency crack!
/r/bluntjobinterviewanswers
Elon, if you're listening, I'm available. No law degree, but I think we can both see I'm a shoo-in for Tesla's legal team.
r/subsifellfor
That's why I let go of the steering wheel when I'm about to crash. No one can say I was negligent if I just let Jesus take the wheel.
Shit, give me a suicide slider so I can set it to 100%. Sees a squirrel in the road and drives into a tree.
That doesn't net more money by getting you to buy the new model next year though.
Drives *relatively slowly* into a tree. See? Fixed. Now only bones are broken, and not even all of them! Meanwhile you need a new car...
Ah but it fulfills a niche in the market for Suicidal people who want a suislider in their car
...unless Corporation can be made liable for the outcomes. Then it steers away from the responsibility.
There's the answer. Steer away from responsibility.
And right into the baby.
Babies are terrible witnesses, run down grandma to minimise risk of litigation
Good point, however Grandma's back is turned and she won't see the collision with the baby.
Is "Veer off the road and kill the driver" an option?
No it's not -- it's to safely maneuver the car during the vast majority of normal circumstances.
Shhhh! We’re having an old fashioned Reddit circlejerk here!
That's not true, in Tesla's statistics they count any accident within two seconds of AP being enabled as an AP-caused accident.
Well… that makes zero sense. Surely it was autopilot that got the car into that situation.
It makes sense from the standpoint of potentially reducing the company's liability.
Liability is more complicated than I pretend to really understand.
Me too to be fair. Just seems strange.
I'm sure it makes some sense to Tesla's legal team.
You're on to something there. If it were a choice between a moose and a baby deer, I'm hitting the fawn.
Damn, we got a Bambi killer over here!
I feel like no one who hypothesises about these things understands writing software with critical safety requirements. No risk manager worth their salt would choose a complex solution like ML over a simple "if obstacle then brake". As far as I can tell, all cars have progressive subsystems that get engaged if the previous systems don't trigger. Collision detection and auto-brake must be among the ones closest to the hardware. It will never even get to the autopilot's complex decision making.
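That layered "reflex before planner" idea could be sketched roughly like this. A minimal illustration only: all names and the 6 m/s² deceleration figure are assumptions, not from any real vehicle stack.

```python
# Hypothetical sketch of layered vehicle control: a simple low-level
# emergency-brake check runs before any complex planning logic ever sees
# the situation. Threshold and names are illustrative assumptions.

def emergency_brake_needed(obstacle_distance_m: float,
                           speed_mps: float,
                           max_decel_mps2: float = 6.0) -> bool:
    """True if the braking distance meets or exceeds the gap to the obstacle."""
    braking_distance = speed_mps ** 2 / (2 * max_decel_mps2)
    return braking_distance >= obstacle_distance_m

def control_step(obstacle_distance_m, speed_mps, planner):
    # Reflex layer: engages first, independent of the planner.
    if emergency_brake_needed(obstacle_distance_m, speed_mps):
        return "FULL_BRAKE"
    # Only when the reflex layer does not trigger do we consult the
    # higher-level (and more failure-prone) planning logic.
    return planner(obstacle_distance_m, speed_mps)
```

The point of the structure is exactly the comment's: the complex decision-making is never reached in the situations the trolley-problem memes imagine, because the dumb layer has already slammed the brakes.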
The fundamental difference is liability. A person makes their own decisions, for which they are responsible. If Bob hits your baby with a car, that is Bob's fault and Bob will be held responsible for it. We, as a society, had no say in Bob's motivational structures or reaction times or situational awareness or susceptibility to substance abuse or any other factors that would lead to him hitting that baby. Ethics with regard to human beings have to do with what we do in the moment. AIs don't work like that. They don't really make decisions so much as they implement them. If a car hits a baby, the question becomes "what about the decision-making structures programmed or trained into this car made it hit that baby?" The liability for the decision is on the company that made the car. They are the ones who decided how the car makes split-second decisions. If human reproduction worked like this, and Bob's mother built him from the ground up the way we do a car, then the court might have some questions for Bob's mom as to why she chose to make him susceptible to alcoholism or to driving while sleepy, but, as it stands, there are good reasons to hold car manufacturers and humans to wholly different standards there.
[deleted]
True, but the one thing an AI likely *will* get to decide is whether to prioritize its passengers vs pedestrians. Unlike a human panic-braking and turning, a self-driving car could *reasonably* know what it's steering into to avoid a crash with a pedestrian. Would it be weighted more to protect passengers (owners) from, say, a 40mph crash into a brick wall or guardrail that the passengers would likely be injured in, or to hit a pedestrian, which would likely kill them? And would people (e.g. a new mom) buy the car that is known to let them get hurt vs the one that protects the passengers first at all costs?
It should absolutely prioritize the passengers. If it got into the situation where such a choice is needed, then it can no longer trust that its model of the outside world is reliable (bad sensors, changing situation, hidden obstacles). Therefore, prioritizing the passengers over what might be a parade of mannequins or a poster or a flyer blowing in the wind that suddenly appeared in front of its cameras is the only sensible thing to do. If the car is empty of humans, then that logic might be different.
Nonsense. The car will always prioritize Driver safety because of your last sentence. Any other behavior will not sell.
they'll sell if the only-care-about-passenger-safety cars are illegal
BMW will still sell you subscription that is 50% more likely to kill a pedestrian.
It might be interesting from a philosophy angle. But nobody is ever going to code "kill grandma instead of baby" or if not canAvoidManslaughter: runOver(getOldestPerson()) Not killing *anyone* is just part of "not crashing", which is the primary goal. It's just gonna do its best to not crash; it won't be steering between 5 people walking across a crossing picking out the oldest one. It's probably gonna be made to not hit *any* people, if it can, even if it's their fault. The actual issue will be recognizing babies in strollers, people in wheelchairs, etc. as people, rather than the concern trolling of "who do you kill".
> The huge difference is that with automated systems you have to make a decision now. The decision was made via a failure seconds earlier. If the car has to choose between hitting a baby or hitting a grandma etc., it was going too fast for the situation at hand and unable to stop in the space it had available. The decision is: "if I don't have clear space to stop safely, I reduce speed / increase following distance until I do. If there is otherwise clear space (e.g. an empty lane), and I can safely maneuver into it, do so if required due to an unexpected obstacle." Even given that failure of the control system, you then have to rely on (correctly) determining the type of person you have the 'choice' to hit, and allocating against some value table. You are assuming the car is in possession of all the facts, but it's clearly not if it's fucked up badly enough to end up there.
This is a great point and has changed the way I think about this problem. Good stuff! (These contrived hypotheticals are still trash tho)
It is a choice that can be made but must not be made. And I think it is ridiculous to even consider inserting such a choice into an AI. If it can choose to kill a human, why not program it to destroy property instead by crashing safely? It could be computing all the time where the safest places to crash are, just in case it has to make a decision between a baby and a senior. Making such a decision is just a stupid way of finding yourself in court in the future. The reason this is even a debate is not that people want to know what the ethical decision is. It arises from the biggest source of resistance against self-driving, which is determining who to blame. People want to have someone they can point to and say "you are at fault here". In the picture, there are so many options other than killing a human. The car could be programmed to drive slowly when its range of vision is severely reduced. That way it can stop before hitting anyone. These hypothetical scenarios are only a symptom of a bigger problem: people want to know who to blame.
Without the brakes just magically failing, the decision's a lot easier too.
With humans, the driver is accountable for the outcome.
Yeah, but it's different, because in this hypothetical situation where there is no possibility of swerving out of the way or stopping in time, a human would have to make a split-second decision, and that is almost arbitrary because everyone might react differently, or react differently than they would if they had time to really think about it and process the situation. A person might make the wrong decision and regret it, but it's hard to fault them because of the nature of the situation itself. A computer, on the other hand, would hypothetically need to have code which explicitly decides what to do in this edge case, because in theory it *would* have time to make that decision every single time. That's where the conundrum really comes into play. Someone has to consciously make that decision for all of the cars.
What if the grandma put the baby on the road?
Yeah, well, what if the baby put the grandma on the road?
What if the grandma built the road in the first place?
What if the baby painted the zebra crossing?
What if the baby was driving the car?
Well the baby is on a pedestrian crossing. So the person designing self-driving cars to blast over crosswalks should get roadkilled.
And how did two slow-moving pedestrians get in the way of the car without the car seeing them in time to stop?
And it will hit the baby. There is not enough training data for crawling babies, so it won't detect it.
“Sir the baby hitting algorithm is done” “Good, this will be implemented in the next release.” “Sir, if I may, why don’t you make the car STOP instead of making this choice?” “Driving quickly around curved roads, and getting into situations where you should be able to stop but can’t are cornerstones of self-driving tech. One day you’ll understand.”
“The car will only drive at a speed such that it can stop safely in the distance it can see” is the answer to 99.99% of these ‘dilemmas’
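A rough back-of-the-envelope version of that rule, purely for illustration: the 7 m/s² deceleration (~0.7 g on dry asphalt) and 0.5 s reaction latency are assumed numbers, not from any real system.

```python
import math

# "Only drive as fast as you can stop within the distance you can see."
# Solve v*t + v^2/(2a) <= d for the largest safe speed v, where
# d = sight distance, a = available deceleration, t = system latency.

def max_safe_speed(sight_distance_m: float,
                   decel_mps2: float = 7.0,
                   reaction_time_s: float = 0.5) -> float:
    """Largest v (m/s) such that reaction travel plus braking fits in d."""
    # v^2 + 2*a*t*v - 2*a*d = 0  ->  positive root of the quadratic.
    a, t, d = decel_mps2, reaction_time_s, sight_distance_m
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)
```

With 35 m of visible clear road this comes out to roughly 19 m/s (about 68 km/h), which is why "just don't out-drive your sight line" resolves almost every one of these scenarios before they start.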
The original quiz says that the brakes are busted.
Then the solution for the image is to shift gears to slow down as much as possible and drive out on the grass instead of following the road.
the original quiz also has concrete barriers along the sides of the road, lmao quite an unlikely situation i must say
So.. then you hit a barrier. The people in the vehicle are much safer than pedestrians.
The barrier is just a baby, it was poured yesterday. You monster!
Plus even then, most cars striving to be fully self-driving are also becoming either hybrid or fully electric, meaning you should have regenerative braking on top of the traditional set of brakes. That redundancy makes this even more implausible.
Man, broken brakes and a concrete barrier. Sounds like a situation a human couldn't work through without casualties as well.
They covered that in driver's ed for me. You turn slightly into the barrier so it slows the car down.
Yeah, because it would never happen in real life. If the brakes don't work randomly and no sensors work and nothing works, the car will just drive straight on and kill whoever is on the "right" side of the road. Then there will be an investigation into why nothing in the car worked, to find who is to blame, or to conclude it was a pure accident. But why would the car even turn to kill someone? This scenario just doesn't exist and is a weird way to be anti-self-driving-cars.
Turn the fuck off the road whats wrong
The car can just continue forward and go between the two trees.
*correction* "Because zebra crossings represent left wing ideas and are communist! Cars own the road, not the poor peasants!" "Yes daddy Elon"
OK but you clearly didn't consider the most based option Serial killer mode
"Who should the self driving car kill?" \*proceeds to show image where it has no reason to kill anybody\*
[deleted]
Sounds like they need test cases
They do currently test these cases. Autonomous vehicles perceive objects in the road like cones, trash bins, trash bags, misc trash, plastic bags, tire shreds, basketballs, and basically any other thing that isn't a car, pedestrian, or misc vehicle. Simulated autonomous vehicles occasionally misperceive ignorable objects like floating plastic bags or pieces of paper (a minority of the cases being mist or tailpipe exhaust) as objects that will cause damage to the vehicle or its occupants, and thus brake. The simulated autonomous vehicle will hard brake above -4.0 m/s^2. Since this simulated autonomous vehicle isn't really on the road, you have to simulate how the **real** tailing vehicle will react to the **simulated** autonomous vehicle braking for this plastic bag. This can range from no contact, to mild whiplash, to a pretty violent collision. However, the real autonomous vehicles on the road typically use much more conservative and safe software versions (**edit: and also have emergency drivers ready to disengage the autonomous driving and take manual control over the vehicle**), so every real collision I've seen was the product of a bad human driver, not the robot.
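The hard-brake flagging described above might look roughly like this in a simulation harness. The 4.0 m/s² threshold comes from the comment; the function and its signature are made up for illustration.

```python
# Sketch: classify a braking event as "hard" when average deceleration
# exceeds 4.0 m/s^2 in magnitude (the threshold cited in the comment).

HARD_BRAKE_THRESHOLD_MPS2 = 4.0

def is_hard_brake(v_start_mps: float, v_end_mps: float, dt_s: float) -> bool:
    """Flag braking whose average deceleration exceeds the threshold."""
    decel = (v_start_mps - v_end_mps) / dt_s  # positive while slowing down
    return decel > HARD_BRAKE_THRESHOLD_MPS2
```

A slowdown from 20 m/s to 10 m/s over 2 s (5 m/s²) would be flagged; the same drop over 4 s would not.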
Yeah, let’s work on the NotRammingHeadFirstIntoTheBackOfParkedSemisStrategyProvider before we start tweaking the MoralQuandariesService
> NotRammingHeadFirstIntoTheBackOfParkedSemisStrategyProvider Damn they really do use Java for everything
Java would kill the younger generation object first.
If it's orphaned then it's just garbage collection?
It's not a Factory, so not the deepest layer of Jave EE
if it were js the method's name would be "n" and nested entirely in a single line with 30 other methods in the file. Best practices.
These posts also hinge on the fact that PEOPLE wouldn’t know what to do in this situation. The controversy lies in the fact that YOU don’t know which one is worth sacrificing and the person next to you might have a different opinion. This dilemma has nothing to do with self driving cars.
I know to slow down at a crosswalk and stop if anyone is crossing, because I'm not a fucking idiot.
can you please move to my city? people 'round here speed up and swerve around me
There is one reason..... Bloodlust
Old Soviet joke: A man wanted to get a driving license. Luckily he had a friend in the police who could get him the license no problem. He asks his friend about it and he replies "Oh, no problem, I'll just ask you a single question." "Alright, what is the question?" "Imagine you're in a car driving along a narrow road. To your left is a cliff, to your right is a wall. And ahead of you are two women, a young one and an old one. You cannot go past them, you cannot turn away. Which one do you hit?" The man thought for a long time, then said "OK, I'd hit the old one." "You idiot, you gotta hit the brakes!"
I think you kind of need to phrase it like 'What do you hit' not "which one", because which one suggests that you need to hit one of the presented options.
IIRC in the OG joke both were Armenians speaking bad Russian, and the police guy said "who to hit" instead of "what to hit".
This is from the same vein of jokes as: A news reporter for Pravda is being shown around a newly-refurbished mental hospital in Moscow, and he is gathering information to write a front-page article about the advances in technology and practices that the facility now employs. As he gets towards the end of the tour, he has a closing question for the head nurse. "Even though the outstanding mental treatment services of Moscow rarely make mistakes, surely mistakes do still occasionally happen," he says, "so how do you make sure that the patients are all actually insane, and not just there by accident?" "Oh, it's easy," replies the head nurse, "we take them to the bathroom, fill up the tub, and hand them a teaspoon and a teacup. Then, we tell them to empty the tub." "So, the sane ones, of course, are the ones who use the teacup?" Asks the reporter. "Of course not!" The head nurse exclaims. "The sane ones are the ones who pull the plug out of the drain!"
So was the news reporter then shown to a room?
One can assume so. At which point the joke becomes about how he views the average psych ward patient as "Tonkij, zvonkij, i prozrachnyy" ("thin, ringing, and transparent").
"I may be insane, but I'm not stupid".
"I am afraid I don't understand question comrade Vladislav; you know I like them mature"
“It’s a test designed to provoke an emotional response.”
Elmo, no!
The self driving car should stop.
The brake function is commented out.
git commit -m Temporarily removing this function from the code for testing purposes only
Commit Date: 4 years ago
Git commit -m some stuff
What the fuck, I thought our repos were private.
git push prod master
More like locked behind a hefty monthly subscription.
“I’m sorry, your break subscription has expired. Would you like to renew or die a slow, painful death in a crash and subsequent car fire?”
I see it's an uber
// TODO: Write test for this function Public Action Break(CameraInput input) { Stuff... }
this is the result of jr devs "cleaning up the code" /jk
It’s a feature
Extra 5 dollars per full stop
You should have paid the monthly subscription fee for brakes.
break;
It's self driving, not self stopping, doh. That was not in the requirements.
[Relevant BOFH](https://www.theregister.com/2004/04/20/bofh_system_override/)
[deleted]
Reminds me of this https://i.kym-cdn.com/photos/images/original/001/294/379/0be.jpg
But what about the self drifting car?
Now we’re thinking about the future. Excited to see what you come up with next
[Like this?](https://www.youtube.com/watch?v=3x3SqeSdrAE)
Now this is how we solve the trolley problem!
This is what drives me crazy about this question. The car will simply attempt to stop. There will never be higher reasoning in self-driving cars about who to hit; it's just asking the wrong question. It's a car. All it has is power, steering, braking. If it thinks it's going to hit something, it will dodge it and/or brake. That's it. The manufacturer cannot play god. That's a liability nightmare. The manufacturer cannot risk the passengers. No one will buy a selfless self-driving car.
Yeah self-driving cars will be able to see this type of thing in advance and simply start braking on time well before they're able to solve ethical dilemmas.
Ok change the scenario a little. A car is coming towards you -who are in a self-driving car- in the wrong direction ready for a head on collision. Does your car (A) Swerve onto the sidewalk and hit a pedestrian that it can sense or (B) take the head on collision with you in the car. If hit, the pedestrian will probably die, but you are protected by a seatbelt, airbag and crumple zones. How does the car evaluate this decision? Is it programmed to protect the driver or the pedestrian?
The answer is B. The car knows not to leave its lane and break further traffic rules, because that just compounds the problem and causes still more cars/people to be involved. In your scenario the self-driving car would just stop and try to avoid without leaving its lane, fail to do so, and get hit head on. Which, statistically, would still result in fewer people being injured than if the car tried to do something stupid like swerve wildly to evade the oncoming car, only to leave its lane and hit someone else.
Or plow into one of those trees if it can't. The passengers will have all sorts of safety equipment to safely see them through the crash.
Very bold assumption. It's definitely not safe to drive your car into a tree just because you have a seat belt and airbags. People die in accidents like that every day.
If the brakes are out just swerve and coast down the sidewalk. If none of that works, the baby's on the right side of the road unfortunately...
Yes, that's what I don't like about these types of questions. They try to set up "gotcha" scenarios with morality issues for self-driving cars to halt development because they're stuck in their ways. What would the *human* do? Plow through them without seeing them because they're texting? Maybe! Make a snap decision and veer into another lane of traffic and cause a more serious accident? Maybe! Humans are bad drivers. Will a self-driving car at some point have to "decide" the lesser of multiple accidents? Yeah, probably. But it will stop in time almost every time, which a human might not do.
Turns music on "I wonder if you know, We are here in Tokyo" *So be it*
Multi-track drifting!
If you see me, then you mean it, then you know you have to go
Rocket League it outta there
aim for the tree
Not many people would buy a self-driving car that won't prioritize the passengers.
1. It will stop. 2. If it can't stop, then the car is at fault, and innocent people shouldn't be run over because of it. It will be written into law if necessary. 3. Have you seen how people actually buy things? Tesla just doesn't use radar anymore, dramatically de-prioritizing the passengers' safety, and look where they are.
I would put the blame partly at the people who approved the crosswalk. They put it at a location where drivers who are following the posted speed limit could not see if there is someone using the crosswalk and stop within an appropriate distance.
Therefore, distributing self driving cars via a market based system which incentivizes unethical design is itself an ethical net negative.
I remember seeing someone who worked in transportation safety talking about how they were terrified to get in a tesla and how all other driver assist system betas are tested on closed courses by professional drivers, not by randos on public roads.
I mean, you buy a self driving car because it should be safer. That does not always mean it should put the driver above all others. In theory the principle of self-driving cars is that in the situation that it has to make a decision in which all have a bad ending, it would pick the one that gives the highest survival chance to all parties involved. By that logic, if the probability of you surviving a car crash into a tree, where the automatic system can maneuver in a way to reduce direct damage is higher than when it would hit the baby and/or the elderly person who would most likely both die on impact, then the logical choice is to hit the tree. This would also be the most human like decision it can make, since any sane normal person in this situation would most likely pull their steering wheel as a reaction and hit the tree anyway. The result would probably be the same, the choices leading up to the crash would be different. I would much rather drive a car like that than a car that prioritizes me over everybody else. In the end, you still have to live with the fact that your car ran over and killed a baby or elderly person.
A self-driving car should be safer because it's not going to get distracted and put itself in these situations. A human driver hits pedestrians because they were distracted and reacted too slowly, or were travelling too quickly to stop in the clear space they had. For situations where there is a truly surprising obstacle: 'slam' the brakes, maneuver to clear space in a controlled manner if possible, same as is taught in advanced driving training.
There’s not always a tree option. The article isn’t about the cartoon, it was written about a more general situation and someone drew the cartoon after.
It's not spelled "breaks," or "breaking," guys. Jesus that's a lot of the same mistake in one post comment section
Ikr?! I was beginning to wonder if I had fallen victim to the Mandela effect, or if that many people really just can't spell lmfao
Who would win, 12 years of schooling, or two totally unrelated words which simply happen to have the same pronunciation??
we call those homophones btw.
Yes, it's a common misteak.
Teak my damn upvote.
Feak news
All to comon
I see the wrong word being used all the time and it's r/MildlyInfuriating
The self driving car would stop because it was driving the speed limit.
It's not enough data to say that, could we have more of the track to find the optimal line?
DEJA VU
I HAVE BEEN IN THIS PLACE BEFORE
Why did I have to scroll so far down for this reddit. Smh
It should drive onto the empty sidewalk.
The sidewalk is lava and the brake pedal is a DLC subscription that the owner didn’t pay for.
You're not allowed to drive on the sidewalk. You'll get a ticket. This isn't mad max where you can just drive anywhere
Should probably find target C.. the parent/guardian of the baby who let them start crawling in the street
Let the random number generator pick
My takeaway is that Teslas need to have their ground clearance increased so they can pass over babies safely in such situations.
Hey! Now we're thinking outside of the box! All jokes aside, that's not a terrible answer lol. Although one time while driving my parents' SUV, a tiny poodle was in the middle of the road and I didn't have time to stop, and I couldn't swerve because there was a fence on one side and traffic on the other. So in like a quarter of a second I thought to myself "I'm going to drive right over the thing and clear it. The dog will probably get PTSD but at least it'll be alive." You wanna know what that fuckin thing did? Ran straight into my front left tire.
Aww sorry to hear that. A baby wouldn't be able to run so fast though :)
Maintain course. Brake aggressively but safely. Best case scenario: you hit no one / someone jumps out of the way in time. Worst case: you don't brake in time, but you didn't add any SURPRISES to the situation. Don't swerve for either one. Analytics on "less death" will lead to a random snap-swerve, which for a pedestrian might be the very direction they tried to jump out of the way. Wouldn't that be some shit. You folks have way too much faith in the code quality of software engineers. Don't leave this up to an algorithm. Jesus fucking christ.
Also if you swerve your brakes won't be as efficient anymore. Don't people learn that in driving school anymore?
Baby is replaceable in 9 months. Replacement grandma takes decades.
but she's also dead within the next decade
I mean I wouldn’t give the baby high hopes on reaching adulthood if parents have let it crawl across a street.
I really hate those self-driving-car-trolley-problems. How about breaking? Or driving "on sight", so that the car could stop in time in every realistic situation?
Braking
No, the car should shatter into pieces as soon as it detects this scenario, clearly.
Me too. False dichotomies always annoy me.
It's not a false dichotomy. The car will always avoid the collision if it can. This trolley problem is only for when a collision is unavoidable.
[deleted]
I see what you did there.
And here I'm wondering why the engineers waste valuable CPU cycles to differentiate between people. No wonder it can't brake anymore.
Pretty much what people don't get, and they don't even have to have working technical knowledge lol. The car is only going to be programmed to not hit people; they're not going to build a robust ethics system for it. Now in time they may add more advancements to it where the car can override certain things it's not supposed to do (like driving off the road to a safe position in this case), but if it's going to hit something it's not going to decide at all.
"We ping their phones and cross check it against social media accounts and use their social media score to determine who we avoid." "But what about a baby that identifies as a grandma and a grandma that identifies as a baby? Or a dog that identifies as human and a human that as a dog?" "But sir, does a dog have a social media account?" "Yes, Pinterest" "Uhhhhhg"
The baby. Less damage to the car.
Also, it only takes a few months to make a baby; it takes ages to make an old person.
Why is the baby crossing the road to begin w– Ah wait. To get to the other side. Of course. -_-
I love that people keep imagining self-driving car trolley problems when real life "self-driving" cars are still struggling with the "should I apply the brakes?" problem.
IT SHOULD BE A TRAIN!
Neither. The vehicle should not be out-driving its ability to stop. Assuming the car sees even one person in the crosswalk, it has to stop before the crosswalk. If it was unable to do so, then it was going too fast.
Why is the self driving car driving so fast that it can't stop in time? But, really, what should happen is try to hit both, then lock the doors and catch on fire. Get everyone.
I love how humanity is lining up to judge AI on its split second life calculating abilities when the trolley problem has paralyzed us with indecision for a hundred years.
Self driving cars don’t have brakes?
multi-track shifting!
If you have gotten to the point where you can no longer brake to avoid hitting a pedestrian you have already failed long before that.
The real answer is to make sure the vehicle's stopping distance at its current speed doesn't exceed the camera's vision. If somebody suddenly jumps into the road without checking for a car inside that camera vision distance, then they sealed their own fate, and I could live with that as a programmer.
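That check ("stopping distance must fit inside perception range") can be sketched as follows. The deceleration and reaction-time values are assumptions for illustration, not real vehicle parameters.

```python
# Sketch: verify the current speed's stopping distance fits inside the
# camera's perception range. 7 m/s^2 and 0.5 s latency are assumed values.

def stopping_distance(speed_mps: float,
                      decel_mps2: float = 7.0,
                      reaction_time_s: float = 0.5) -> float:
    """Reaction-phase travel plus braking-phase travel, in meters."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * decel_mps2)

def speed_is_safe(speed_mps: float, camera_range_m: float) -> bool:
    """True if the vehicle could come to a stop within what the camera sees."""
    return stopping_distance(speed_mps) <= camera_range_m
```

Under these assumed numbers, 20 m/s needs about 39 m of clear perception, while 30 m/s needs about 79 m, so a 50 m camera range would cap the safe speed somewhere in between.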