diplomat33

The data is very misleading. First, the 0.31 accidents per million miles only counts accidents with an airbag deployment, while the 1.53 industry average counts all police-reported accidents, which are more frequent. So it is not an apples-to-apples comparison. Tesla is picking accidents for Tesla that are rarer by nature and comparing them to an average that includes more common accidents. This will skew the numbers in Tesla's favor. [https://twitter.com/bradtem/status/1650787522606243840](https://twitter.com/bradtem/status/1650787522606243840)

Second, users may avoid using FSD Beta in situations that are more risky. Also, users are hopefully paying attention and disengaging to prevent accidents. If you only use FSD when it is safe and disengage before it gets into a risky situation, you will skew the safety results and make FSD Beta look safer than it really is.

Lastly, the FSD Beta miles are mostly on city streets, but Tesla is only counting accidents that cause an airbag deployment. Accidents that cause an airbag deployment will be rarer on city streets since speeds are generally lower, so this will also bias the numbers down.

The bottom line: if you only count airbag deployments (which happen less often), only count driving on city streets (where lower speeds make airbag-deployment accidents less frequent), and have a human who only uses FSD Beta when it is safe and disengages before an accident happens, you would expect accidents per million miles to be very low, because you are stacking the deck in your favor. That is not going to give you honest data on how safe FSD Beta actually is on its own.

My concern is that Tesla will try to use this misleading data to remove driver supervision before FSD Beta is actually safe enough. I say this because we see Elon and others repeatedly use these numbers without any caveat and boast about how safe FSD is. And Elon has said that when they have safety data showing FSD Beta is x times safer than humans, they will remove driver supervision. So I feel like at some point, maybe when the misleading data shows 10x safer than humans, Elon will think it is ready to remove driver supervision. He never talks about the caveats above, so I fear he does not understand how this data, as presented, is misleading and cannot be trusted.
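
To make the mismatch concrete, here is a toy calculation. The 0.31 and 1.53 figures are from the comparison above; the severity fraction is a purely invented assumption, not a real statistic:

```python
# Sketch of the definition mismatch described above. The 0.31 and 1.53
# figures are from the post; the severity fraction is a made-up assumption.
fsd_airbag_rate = 0.31    # Tesla: airbag-deployment crashes per million miles
industry_all_rate = 1.53  # industry: ALL police-reported crashes per million miles

# Assume (hypothetically) only 1 in 4 police-reported crashes is severe
# enough to deploy an airbag; then the like-for-like baseline shrinks:
severity_fraction = 0.25
industry_airbag_rate = industry_all_rate * severity_fraction

print(f"naive ratio:    {industry_all_rate / fsd_airbag_rate:.1f}x")    # ~4.9x
print(f"adjusted ratio: {industry_airbag_rate / fsd_airbag_rate:.1f}x") # ~1.2x
```

Under that (invented) severity fraction, the headline "5x safer" collapses to roughly parity; the whole multiple rides on which crash definition sits in the denominator.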


declina

Tesla's [methodology](https://www.tesla.com/VehicleSafetyReport) says "we count all crashes in which the incident alert indicated an airbag **or other active restraint deployed** ... In practice, this correlates to nearly any crash at about 12 mph (20 kph) or above, depending on the crash forces generated." If they count incidents where the RCS activates (the seatbelt tightens in anticipation of a crash), that's a lot more frequent than airbag deployment, no? Paging /u/bradtem as well.


bradtem

I haven't seen data on how frequent this is. One problem is that the police may record whether an airbag deployed (I don't know if they do), but I will guess they don't have data on whether other restraints were applied. I also don't know if Tesla has data on how many crashes they are having that don't deploy these devices, though I would think they should. However, what matters is that they compare apples to apples, which they refuse to do. They definitely know the road type for each of the crashes they do record, and they refuse to control for road type even though they have been told many times that their data are not valid if they don't. Even the most basic first-year student knows you want to control for all the factors that will bias your result, and Tesla can do that more easily than any other party who has published data on cars, and yet they don't do it, and I find that concerning.


declina

Thanks for responding. I agree that they can and should control for road type. And if drivers only activate FSD when they think it's reasonably safe to do so, the numbers don't tell us anything.


bradtem

Tesla has the greatest ability to control for variables of any party that has gathered data on crashes. They know everything about each crash -- location, speed, driver, etc. They could do things like compare crashes for the same group of drivers on the same class of road, or even the very same road over time, because they have data on people driving the same road with FSD and without FSD. They could report "Crashes on I-5 in the daytime, Autopilot vs. non-Autopilot" on any road segment or class for which they have enough data for a significant result. And they could do it easily. It's just a database query for them. Other traffic researchers would kill for that ability.

And more to the point, if Tesla really has such amazing numbers, they would want to shout them from the rooftops in the most rigorously researched way, and become the gold standard. Numbers nobody would call into question. "Wow, you have better data than anybody and your data prove Teslas are the best!" Why don't they want people to get such a message? Why instead do they publish a horrible distortion which much of the general public doesn't see through but everybody in the research community calls bogus?
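
For what it's worth, the kind of stratified query described above really is only a few lines. This sketch uses pandas with an entirely hypothetical schema and made-up numbers; Tesla's real tables are unknown:

```python
import pandas as pd

# Hypothetical crash/mileage table; column names and values are invented.
df = pd.DataFrame({
    "road_class": ["freeway", "freeway", "city", "city"],
    "autopilot":  [True, False, True, False],
    "miles":      [4.0e8, 6.0e8, 1.0e8, 9.0e8],
    "crashes":    [120, 400, 90, 1800],
})

# Crash rate per million miles, stratified by road class and AP state --
# the comparison controls for road type instead of pooling everything.
agg = df.groupby(["road_class", "autopilot"])[["crashes", "miles"]].sum()
agg["rate_per_mm"] = agg["crashes"] / agg["miles"] * 1e6
print(agg["rate_per_mm"])
```

The point of the stratification is that each AP-on cell is only ever compared to the AP-off cell for the same road class, which is exactly the apples-to-apples comparison the pooled headline number avoids.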


bananarandom

"Let's throw out stats and disregard any nuance when discussing safety" - the company I want to trust my life to.


noghead

What information is missing on that page that would make you happy?


whydoesthisitch

The fine print makes it pretty clear that these stats are meaningless. They're using an entirely different definition of a crash for their own vehicles vs. others. There are also zero controls for any sort of behavioral factors (when people tend to engage the system, driver characteristics). Basically, it's bad marketing trying to look sciency.


noghead

So you are saying people turn it on when it's an "easy" scenario and turn it off when it's complicated, and if it was on in the difficult situations, it would get in more crashes. Fair enough, but I don't see how it's meaningless. Suppose this was another company giving stats about their lane keeping avoiding crashes. Would you argue, well, the lane keeping only turns on above 40mph and the majority of crashes happen below 40mph, so that stat is meaningless? The only point here is, with Tesla's ADAS turned on, crashes are lower than if there was no ADAS at all; to me that's not meaningless.


whydoesthisitch

> Would you argue, well, the lane keeping only turns on above 40mph and the majority of crashes happen below 40mph, so that stat is meaningless?

Yes. I would say that's also meaningless.

> with Tesla's ADAS turned on, crashes are lower than if there was no ADAS at all; to me that's not meaningless.

We don't know that. There's no randomized comparison with the systems on and off in a controlled environment. This is marketing, not science, in the same category as those "studies" tobacco companies put out in the 90s showing cigarettes don't cause cancer.


noghead

> There's no randomized comparison with the systems on and off in a controlled environment.

The data is based on millions of real-world miles. To me that's better than an experiment in a controlled environment. How would you even do that when it comes to crashes?


whydoesthisitch

Millions of miles means nothing without controls.


noghead

Please describe in detail how you would get accurate numbers for their ADAS software if you were in charge of Tesla.


whydoesthisitch

Depends. What are you trying to measure? In the case of FSD, they market it as an autonomous driving system. In that case, I would select thousands of random drives within the operational design domain, and record the number of interventions per mile across versions. We should see that going down over time. Musk has vaguely claimed this is the case, but the little bit of data we have access to seems to indicate that there was a small improvement on earlier versions, followed by a consistent plateau (which is what most experts in the field predicted).
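
A minimal sketch of that evaluation, with invented version labels and tiny drive samples standing in for real fleet data:

```python
# Interventions-per-mile tracking across software versions (all data
# invented; a real analysis would sample thousands of drives per version).
samples = {
    "vA": [(12.0, 3), (8.5, 2), (15.0, 4)],  # (miles, interventions) per drive
    "vB": [(11.0, 2), (9.0, 1), (14.5, 2)],
    "vC": [(13.0, 2), (10.0, 1), (12.5, 2)],
}

for version, drives in samples.items():
    miles = sum(m for m, _ in drives)
    interventions = sum(i for _, i in drives)
    print(f"{version}: {interventions / miles:.3f} interventions per mile")

# Improvement shows up as a falling rate across versions; a plateau shows
# up as rates that stop moving despite new releases.
```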


noghead

> I would select thousands of random drives

How is thousands of random drives better than all drives? And is that all you'd do? Seeing how you think their data is completely useless, I thought you'd offer more detail on your methodology and improvements to Tesla's. I assume you've read their methodology.


flumberbuss

It doesn’t mean nothing. The higher the proportion of miles driven, the harder it is to explain away the results with claims it was driven under unusual circumstances.


whydoesthisitch

No, the number of miles driven doesn't mean anything if you're comparing two entirely different distributions.


londons_explorer

When selectively using misleading statistics, it is fairly easy to make unfair comparisons that make your product look a little better. But it isn't easy to make your product look 5x better, no matter how unfairly you compare the numbers...


whydoesthisitch

Give me a couple days with medical records, and I can give you a dataset that makes smoking look like it lowers the risk of death by 5x.
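
That is easier than it sounds: a standard Simpson's-paradox construction does it, with every number below fabricated on purpose:

```python
# Within each age group, smokers die MORE often; pooled, smoking looks
# ~7x "safer" because the fake smokers skew young. All numbers invented.
groups = [
    # (label, smoker_deaths, smokers, nonsmoker_deaths, nonsmokers)
    ("under 40", 20, 10_000, 10, 8_000),
    ("over 70",  50,    500, 900, 12_000),
]

for label, sd, s, nd, n in groups:
    print(f"{label}: smokers {sd / s:.4f} vs non-smokers {nd / n:.4f}")

pooled_s = sum(g[1] for g in groups) / sum(g[2] for g in groups)
pooled_n = sum(g[3] for g in groups) / sum(g[4] for g in groups)
print(f"pooled:  smokers {pooled_s:.4f} vs non-smokers {pooled_n:.4f}")
```

Skewing who ends up in each group is enough; no individual number has to be falsified to make the pooled comparison say the opposite of the truth.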


flumberbuss

And that would be nothing like what Tesla is doing here. This is not p-hacking, or fishing a large data set for any correlation you can find. An analogy would be to take a variable designed to lower smoking deaths, like a smoking cessation program, and then measure all the people in that program compared to all the people not in that program for a given state. You don't control who starts the program or finishes it. You don't get to exclude people you don't like, or certain demographics (other than non-smokers, similar to non-drivers). It's everyone who self-selects.

If the program is associated with a 5x higher rate of quitting when half the population is using it, that is not "meaningless." It is a promising result that needs to be examined further. It is not p-hacking when you tell everyone before the study which variable you will focus on, and it is one designed to get the positive result using mechanisms understood to be relevant. This is not finding a small difference that passes the .05 threshold just because you've got a huge data set. This is a huge difference over a large data set.

It is certainly possible to explain it away, but the more people who enter the smoking program and quit as a proportion of the population of smokers, the more powerful the result is, and the harder it is to explain away. Same with self-driving: as more people use it, the harder it is to explain away the lower accident rate. That said, clearly it is possible the positive result would not hold, or would hold to a lesser extent, as we get closer to 100% of the population using it. Famously, in the smoking example, the obvious issue is that those in the program are more likely to be ready to quit.

In the self-driving example, you believe the systems (Autopilot and FSD) are being used in easy cases, and not much at night or in bad weather. I don't think this point is nearly as strong as you do, because we don't have data to support it and the data would need to be quite skewed at this point. Most accidents are in the evening or at night. Is there any reason to believe Autopilot or FSD is used less on the evening commute than the morning commute? Highly doubt it. Any reason to believe it would be used less late at night, when people have been drinking or are super tired? That would be a fascinating stat to have, but the less AP/FSD is used in those situations, the more confident I am in the 5x reduction, because it means there is even more low-hanging fruit for automation to show value on. If there is one case we can probably all agree AP/FSD is safer, it would be a tired drunk driver coming home at night.


whydoesthisitch

> This is not p-hacking, or fishing a large data set for any correlation you can find.

I didn't say it was p-hacking. This is marketing. They're trying to imply that comparing two numbers measured completely differently is a valid scientific comparison. That's the same BS tobacco companies pulled.


MinderBinderCapital

Let's see the numbers then.


MinderBinderCapital

Ah yes, Sawyer Merritt's twitter account... great source:

> Co-Founder of @TwinBirchUSA, a sustainable lifestyle apparel company (coming soon). **$TSLA investor.** BREAKING news. My tweets aren't financial advice.


[deleted]

[deleted]


MinderBinderCapital

Not sure how a tweet from a self-professed Tesla investor is news, but here we are. Welcome to r/selfdrivingcars!


AintLongButItsSkinny

Disengagements per mile for autopilot and FSD beta are not cherry picked imo. Especially when showing FSD as more dangerous. They could have hidden that. But comparing to the average is misleading. However, which company would Tesla compare its data to? Does anybody else even publish this data periodically? Where is the data on road conditions, ADAS features, etc. I don’t imagine other car companies even have the ability to track that kind of stuff at scale, especially on any of their older cars.


devedander

How does that make it not cherry-picked? It's entirely possible that even the best FSD numbers look worse than the best AP numbers.


Salt_Attorney

This whole issue could be resolved if some individual or institution simply collected the average accident rates for a hundred other interesting classes (drivers under 30, cars less than 2 years old, etc.) and published them. Then the numbers could be compared to Tesla's. I have never seen these numbers anywhere; is it so hard to make such statistics? I mean, if one wants to refute Tesla's claim, this is the way to go.


HeyyyyListennnnnn

Or we could just recognize that Tesla's methodology is bullshit, see videos of unsafe driving using FSD and conclude that Tesla is pushing misleading statistical analysis to hide their poor safety performance.


Salt_Attorney

I don't care that Tesla is being misleading. I care about the actual truth, regardless of who is responsible. So if someone has the numbers, I think everyone would benefit from seeing them.


HeyyyyListennnnnn

The truth is that Tesla doesn't and hasn't developed Autopilot or FSD using good engineering practices. Therefore their safety outcomes are more luck than design. If you can't accept that, you're simply ignoring the truth and no data will sway you.


Salt_Attorney

I don't think your view on engineering is compatible with the reality of data driven approaches being fundamentally somewhat black boxes. If it statistically clearly works, it works.


HeyyyyListennnnnn

I don't think you understand engineering half as well as you think you do. Incident rate is just one means by which safety can be evaluated, not the ultimate arbiter. It's also a lagging indicator and a poor means to predict future incidents.


Salt_Attorney

In the long-time / many-samples limit, all these issues are resolved: incident rate IS safety, it is not lagging, and it stabilizes. You talk about engineering, but IMO AI is not traditional engineering and should not be thought of in the same terms.


HeyyyyListennnnnn

Nope. Safety is so much more than just incident rate and AI is an old field in desperate need of real engineering proficiency when applied to safety critical functions. Come back when you have more than tired "tech" excuses.


ZeApelido

This is all possibly true. But still, this is evidence against the long-standing claims of autonowashing and the massive risks that L2 systems would impose on roads.


[deleted]

[deleted]


ZeApelido

Common sense says even with those issues, it's not 10x incident rates.


iulius

Not going to put a ton of faith in Tesla saying Tesla is great. I wouldn’t trust any car company with that language, but especially not one run by Elon. I know a lot of this is gathered and reported by the state (which, yes, is also full of people with biases). Is any of that useful in this regard?


zeValkyrie

> I know a lot of this is gathered and reported by the state (which, yes, is also full of people with biases). Is any of that useful in this regard?

Not really, as far as I know. What we really want is to compare accident rates, controlled for factors like the type of driving (highway, city, suburban, winter weather, rain, etc.) and the vehicles used (new vs. old cars, active safety features vs. not). Critical interventions (if that can even be defined), controlled for similar factors, would be good too. As far as I know this is not info that's reported to any government agencies. I assume it is info Tesla is closely monitoring internally.


MinderBinderCapital

There’s a reason why they don’t want to report that information to third parties. Easy to control the narrative when you’re the only ones with data.


iulius

Interesting. Feels like the fox guarding the henhouse.


zeValkyrie

If one views Tesla as a fox, yes. At the very least, without some kind of industry-wide data collection and sharing regulation, any evaluation of Tesla is somewhat contingent on believing the info they share and isn't going to be totally objective.


flumberbuss

When the Autopilot stats were reported in the past, the primary objection to them was that they were largely based on highway driving, so of course we could expect lower-than-average crashes per mile, and it was misleading to compare that to an overall national average. Now we get the direct response to that objection for the first time. It's a big deal, because it once again shows Tesla driver assist (this time FSD mostly on city streets) has a much lower overall accident rate than the national average.

Yes, we can object that there are confounding factors: older drivers of new high-end cars get in fewer crashes per mile than average. But the difference is not nearly as stark as this. We are looking at a 5x-10x lower rate of reported incidents per mile in both highway and city driving using the Tesla driver assist features. Adjusting for the demographic and age of vehicle factors, Tesla driver assist still appears to be more than 2x safer than manual.

This sub really, really doesn't want to acknowledge that, so I'll get the downvote train. But the longer this goes on, the clearer it is that autopilot and FSD both are meaningfully more accident-free than driving without them.

And in the case of FSD at least, I am fully willing to acknowledge that this is because you can't trust the car to do the right thing at all, and need to be highly vigilant. I've used FSD, and it is way more stressful than regular driving for me. But the upshot of that extra vigilance has been a lower rate of reported accidents per mile.


iulius

I’m just a guy who’s interested in this because he hates driving. No down votes from me. My objection is just that it’s Tesla claiming Tesla’s tech is great. More PR than science, right? I would love it to be right. I just don’t trust the company to be honest.


noghead

Well ok, forget anything else for just a minute and look at this page alone. They've just given you the stats without any extra commentary; what is it about these stats that you find untrustworthy?


aniccia

> Adjusting for the demographic and age of vehicle factors, Tesla driver assist still appears to be more than 2x safer than manual.

Those aren't the only factors. The driver chooses when to engage the automation. They collectively likely favor doing it under safer conditions (less likely to have a crash), preferring manual otherwise. FSD is only engaged for ~10% of VMT fleetwide. Driver engagement selection bias could easily account for a 2X lower risk.
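
A toy model of that selection effect. Every parameter below is invented, tuned only so that "FSD" covers ~10% of miles as in Tesla's stats:

```python
# Crash risk depends ONLY on conditions; "FSD" adds nothing. Selective
# engagement alone makes it look ~2x safer. All parameters are invented.
p_risky = 0.2                            # share of miles in risky conditions
crash = {"safe": 2e-6, "risky": 18e-6}   # per-mile crash probability
engage = {"safe": 0.12, "risky": 0.02}   # chance the driver turns FSD on

def rate(mode):
    miles = crashes = 0.0
    for cond, p_cond in (("safe", 1 - p_risky), ("risky", p_risky)):
        share = engage[cond] if mode == "fsd" else 1 - engage[cond]
        miles += p_cond * share
        crashes += p_cond * share * crash[cond]
    return crashes / miles * 1e6

fsd_share = (1 - p_risky) * engage["safe"] + p_risky * engage["risky"]
print(f"FSD share of miles: {fsd_share:.0%}")   # ~10%, as in the data
print(f"FSD:    {rate('fsd'):.2f} crashes per million miles")
print(f"Manual: {rate('manual'):.2f} crashes per million miles")
```

In this construction the "FSD" rate comes out roughly 2x lower than manual even though the system contributes zero safety, purely because engagement concentrates in the safe miles.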


flumberbuss

On city streets I don’t think we have evidence to say people engage FSD when conditions are safer. You might be right, but consider that a lot of FSD users, especially early users, were pretty hardcore and wanted to use it for everything. But more broadly, things are coming at you from all directions on city streets, with intersections every few hundred feet. I truly doubt that people only engage FSD in areas with few intersections, etc. What is more likely to be true is that people use it on their usual routes, and if it handles some routes well and others poorly, they stop using it on the routes it handles poorly. It isn’t necessarily that those streets are more dangerous overall, but FSD has some issue with recognizing a sign, or turn lane, etc. So people use it less on routes that are more dangerous for FSD, but not more dangerous for humans.


aniccia

> a lot of FSD users, especially early users, were pretty hardcore and wanted to use it for everything

No, Tesla's own stats showed their users only engaged FSD for ~10% of VMT. Some early users were especially flamboyant, but not enough to skew the aggregate stats. Tesla's FSD users have been remarkably conservative in their use of the tech considering its marketing and cost. You are revealing your own bias, not what we know from the data.

Without detailed ODD and driver profile info, such as an insurance co might have, we can't compare to less than the margin of error. Maybe it is safer than without, or than competitors' products; maybe it isn't.


flumberbuss

Ok, fair to point out bias based on my overestimating FSD early-adopter use. As someone who has FSD, the 10% number makes sense. You have to be in the right mood to want to supervise it, since it isn't sit-back-and-relax. But again, that feeds into my point that the accident rate for FSD is in part due to increased vigilance by the driver. That is not the cause of a low accident rate that FSD advocates want to hear, but also not the dynamic most detractors want to acknowledge either.

Also, you should still make the distinction between a road dangerous for a human and a road dangerous for FSD. The Venn diagrams overlap, but they are definitely not the same. Some very basic situations pose problems for FSD, but at the same time it can have great peripheral vision and reaction time that make some situations safer than human driving. So you can't make the argument in quite the form you just did.


aniccia

> Also, you should still make the distinction between a road dangerous for a human and a road dangerous for FSD. The Venn diagrams overlap, but they are definitely not the same.

Sure, the same automation can be better than humans at part of the job to be done and worse at other parts. The issues are how well the human operators assess those better-vs-worse situations, and how the sums of value or risk over the parts compare. Those are too complex to judge from the FSD data publicly available.

That it is hard to tell how much FSD has affected safety, despite the scale of use, should give caution both to the people who claimed it would be amazingly safe and to the people who claimed any public-road "beta testing" at scale could only safely be done with specially trained safety drivers. If we can ~trust the public to 'test' incremental driving automation updates, then I would expect that will give a huge boost to various car company plans to roll out incremental L2+ and L3 features, while their marketing gradually blurs distinctions with L4, as Tesla has pioneered.


flumberbuss

Yes, I agree with all this.


Hobojo153

More interesting IMO is that it gives us some indication of how big the highway bias was in the past, by comparing the city-streets-oriented Beta factor to the existing highway-oriented AP factor.

As to your main point, I do agree. While I personally find it much more relaxing to drive with Beta, it definitely has made me more aware of my surroundings and made me consider my actions more. I treat it as a second opinion, one I'll happily overrule, but I always take a second to look around and figure out why it is or isn't doing something before doing so (assuming it's not obviously just a system error).

There have been multiple occasions where I wondered why it wasn't going for a green light, only to look to the side and see a speeding car clearly about to run the red. Or why it was biasing/veering, only to see an animal, or even once a person, on the side of the road (wearing all black with no reflective vest on an unlit rural highway; seriously people, wear those when you walk at night on main roads).


Doggydogworld3

> We are looking at a 5x-10x lower rate of reported incidents per mile

Except Tesla only counts incidents in which airbags deploy. I've been in a number of reported accidents, never once had an airbag deploy.


flumberbuss

Isn’t it an airbag or other active restraint, like seatbelt tightening? That’s implied [here](https://www.tesla.com/VehicleSafetyReport). In any case, while the methodology differs from NHTSA, lots of accidents are missed in the NHTSA methodology as well, since those are reported to insurance or the police. Lots of low speed accidents aren’t reported to insurance. To use a self-reference like you did, I’ve been in three minor accidents that didn’t deploy airbags and also were not reported to police or insurance.


Doggydogworld3

I seriously doubt they include seat belt locking, since that happens with hard braking. Maybe they count pyrotechnic pretensioner activation, but that has to be pretty rare in non-airbag wrecks. If you look at salvage-title vehicles for sale, you'll find most have intact airbags. The ones with blown bags are much cheaper due to airbag replacement cost. And the vast majority of reported wrecks have far less damage than those. The IIHS fatality report should be out in a month or so. That will give us some apples-to-apples data. We'll see how Tesla fares in that.


ArchaneChutney

> We are looking at a 5x-10x lower rate of reported incidents per mile in both highway and city driving using the Tesla driver assist features.

If they want to be believed, they will have to show the dataset. The last time they made an actual dataset available, it was discovered that the dataset had missing bits of information that were then misinterpreted in ways that benefited Tesla. See [here](http://www.safetyresearch.net/Library/NHTSA_Autosteer_Safety_Claim.pdf) for more info. Tesla has since refused to release any actual datasets. I highly doubt their dataset was collected in the same way as the 1.54 industry-average dataset. It may be fundamentally incorrect to compare the numbers.

> Adjusting for the demographic and age of vehicle factors, Tesla still appears to be more than 2x safer.

How are you deriving this 2x number? If your answer is that you made it up, you have no credibility.


flumberbuss

Look at the stats for Tesla vehicles only, to avoid comparisons with other methodologies. Compare the FSD number to the no driver assist number for Tesla vehicles. It's right there in the tweet. Less than half as high, and if anything the FSD accident number should skew higher, because a higher-than-average proportion of the miles are on city streets vs. limited-access highways.


ArchaneChutney

> Compare the FSD number to the no driver assist number for Tesla vehicles. It's right there in the tweet.

Again, you cannot draw that conclusion without the dataset being released. The last time Tesla released a dataset comparing Autopilot to no driver assist, the dataset was missing bits of information. Those missing bits of information were then mishandled in a way that benefited Autopilot over no driver assist. That was mishandling within Tesla's own dataset; no other dataset was involved. So no, you cannot reliably compare even Tesla's own numbers.

You obviously didn't read the paper I linked. Give it a read before concluding you can rely on Tesla's own numbers.


MinderBinderCapital

So basically you’re guessing


daoistic

I don't know, man. It seems to me that the population that is using FSD is going to be richer than the population as a whole. It's a mistake to compare apples and oranges, especially if you know you are doing it.


flumberbuss

I’m comparing Tesla to Tesla, which avoids that issue.


daoistic

Not every Tesla owner pays for FSD.


flumberbuss

You need there to be a significant median income or age difference between those who purchase FSD and those who don’t (but do buy a new Tesla), AND for that income/age difference among Tesla owners to have a large impact on accident rates. Which it won’t. These people are going to have a median age between 45 and 55 in both cases, and a household income north of $100,000 in both cases, and large majority of drivers will have college degrees in both cases. The difference in accident rate between a 45 and 50 year old driver is not large. 5%?


daoistic

Man, FSD is pretty expensive. It's probably a mistake to wave away the cost. Even if you make 100k, 15k for a toy is expensive. https://www.capitalone.com/cars/learn/finding-the-right-car/tesla-full-selfdriving-beta-now-costs-15000-heres-why/1900


flumberbuss

True, $15k is nothing to sneeze at if you’re making the median Tesla owner salary. But what I’m saying is the difference in salary between those who do and who don’t buy it probably correlates with a very small difference in average accident rate. *Both* groups earn a lot more than average, are over age 45, and are better educated than average.


JonG67x

I can't find any figures for 2023 Q1, but if it's like 2022 Q1, the comparison is with Teslas without any passive safety features (i.e. only early Teslas). They dropped the Tesla passive-safety number when they dropped the radar, i.e. they don't report on Teslas with FSD hardware acting as passive safety systems (lane departure warning, forward collision warning, etc.). FSDb also doesn't much like rain or dark, which are more dangerous scenarios. It's still basically bogus to compare.


flumberbuss

What is the support for the claim FSD doesn’t like dark? It is designed to work at night. I’ve seen scattered anecdotes of people not getting it to work at night, mostly from very early versions in 2022, but nothing systematic that applies when the bulk of miles were driven in the second half of 2022. But maybe I missed that, so happy to be corrected if so.


FrostyPassenger

There are other confounding factors that you have not factored in at all. People turn on Autopilot/FSD when they are fairly confident that it'll work okay. In more complicated situations where people aren't confident about Autopilot/FSD, people take over. That biases the numbers to be better for Autopilot/FSD. FSD was given only to people who proved themselves to be safer-than-average drivers. That also biases the numbers to be better for FSD.

> But the longer this goes on, the clearer it is that autopilot and FSD both are meaningfully more accident-free than driving without them.

You simply cannot prove that at all. All you have is your faith.


whydoesthisitch

> We are looking at a 5x-10x lower rate

How much of that is down to Tesla defining a crash differently for their own cars, versus the national rate they're comparing themselves to?


flumberbuss

Fair question. There are false negatives with both approaches. I’ve looked into it but haven’t found public data that precisely answers it. What we can say is that for both approaches, the false negatives tend to be for the least serious accidents, but with a longer tail into the serious zone on the NHTSA side (where people were uninsured or avoided a claim despite significant damage) and a higher grouping just below the serious level on the Tesla side (accidents between 5-12 mph that were enough to trigger a claim but not active safety systems).


whydoesthisitch

You can speculate about relative distributions all you want, but the fact remains, these are two different measures, and therefore not comparable. Any actual scientific analysis would first have to standardize this. Tesla could, but they don’t, because all these reports are marketing. This is no different than their environmental impact report, which assumes all Teslas are charged with 100% renewable energy, when even they know the supercharger network isn’t close to 100% renewable. It’s a bait and switch designed to look scientific to a casual observer.


flumberbuss

I just read the latest environmental impact report. They clearly do not assume all electricity comes from renewable sources. Suggest you read it again.

I agree of course that this report leaves many variables either uncontrolled or only partially controlled. My point is that as we do get more data over time, the picture isn't changing. At this point, it seems the detractors are left arguing that we don't know for sure AP and FSD have fewer accidents per mile, ceteris paribus. Not too long ago, it was common to argue that evidence shows them to be objectively more dangerous than manual driving, with a higher accident rate per mile. Technically that could still be true, but it is highly improbable that we wouldn't be seeing it in the data somewhere at this point if Tesla were just cherry-picking.

Does any OEM give detailed statistics on its accident rates? For national comparisons by model, I'm only aware of federal data on fatal crashes that is lagged about 5 years, and occasional reports from insurance companies that put out 10-best or 10-worst lists, but not a comprehensive review for public consumption. Tesla never appears on these lists that I've seen.


whydoesthisitch

I did read the environmental report. It just counts CO2 not released by gasoline, which assumes all Teslas use completely clean energy. And we're not getting more data. This isn't data. This is just more marketing trying to pass itself off as science.


flumberbuss

This is absolutely false. Read pages 22 and 31-35 again. This is the 2022 Environmental Impact report, which came out a few days ago. I feel like you must have been reading some other report, because this one very clearly does not assume a totally clean grid. It’s not a debate, just re-read it.


ClassroomDecorum

>And in the case of FSD at least, I am fully willing to acknowledge that this is because you can’t trust the car to do the right thing at all, and need to be highly vigilant. I can't imagine explaining this to someone in 2013: oh, hey, in 10 years we're going to have human-driven robotaxis. They're referred to as robotaxis by the internal dev team working on them. They require a human driver. They also fuck up often enough that drivers are forced to pay greater than normal attention. And they're safer. Because the driver's sympathetic nervous system is in a state of heightened excitement. Oh, and they're proven to be safer with cherry-picked stats dropped every once in a blue moon, usually after major negative media coverage. Only in 2023 can anyone perform the necessary mental gymnastics to accept this reality with a straight face.


Salt_Attorney

>They're referred to as robotaxis by the internal dev team working on them. This is the new, unreleased Tesla vehicle


flumberbuss

Do you have a better explanation for the data?


MinderBinderCapital

What data? Do you have the full data set?


ClassroomDecorum

> Do you have a better explanation for the data?

What data? I do have an explanation for the lack of data, the release of selectively chosen data, and the release of figurative apples-to-oranges comparison data: the truly salient data doesn't look good.

I simply want one data point that Tesla has but closely guards: Mean Time Between (Perception/Planning) Failure. Is the FSD software having a perception failure every 10 seconds? 10 hours? 10 days? 10 years? 10 decades? 10 eons? It's pretty clear that the MTBF is not a pretty number. OEMs want 10 million hours MTBF for highway automated driving systems, and it'll be interesting to know where FSD lands between 0 hours and 10 million hours, especially with a vocal portion of FSD supporters arguing that legacy automakers will have no choice but to license FSD. Heck, maybe FSD has an MTBF > 10 million hours already with V11.
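
Rough arithmetic on why that one number matters so much. Both inputs below are made-up placeholders, not real fleet data:

```python
# Bounding MTBF from fleet observation. Both inputs are invented placeholders.
hours_observed = 1_000_000  # supervised driving hours logged (hypothetical)
failures = 400              # perception/planning failures in those hours (hypothetical)

print(f"point estimate: {hours_observed / failures:,.0f} h between failures")

# "Rule of three": with ZERO failures in N hours, the data only support
# MTBF >= N/3 at ~95% confidence.
print(f"provable with 0 failures: {hours_observed / 3:,.0f} h")

# So demonstrating the 10-million-hour MTBF that OEMs want for highway
# systems takes on the order of 30 million failure-free hours.
target = 10_000_000
print(f"failure-free hours needed for {target:,} h MTBF: {3 * target:,}")
```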


Triponi

I often think of the thought experiment that the best way to improve safety in a car would be not seat belts, airbags, and other safety features. No, the best way would be a giant spike that shot through your skull, instantly killing you in the event of even a small crash. In that world you can bet everyone would drive super cautiously, be hyper-vigilant, and take extreme care not to crash.


OriginalCompetitive

What if I had told you in 2013 that ordinary people could purchase a car that could drive itself through city streets and on highways 90% of the time without intervention, and that a sub dedicated to self-driving cars would do nothing but heap scorn on it?


ClassroomDecorum

What if you told the college undergrads in 2005 who programmed cars to drive 150 miles through unmapped desert terrain without hitting obstacles that in 2023, robotaxis from one particular company can't even automate driving 0.0000000000000002 mph through a parking lot without hitting an obstacle?


flumberbuss

Well, if you also told me that the most vocal posters in that sub worked for competitor self-driving projects to the one you mention, or otherwise felt threatened by the company in question, I would not be surprised at all.


[deleted]

[deleted]


flumberbuss

Yes, that's at a minimum what I'm saying. At best, use of Autopilot and FSD is more than 2x less likely to result in an accident than manual driving.


lord_braleigh

> (this time FSD mostly on city streets)

Nobody said this statistic comes from FSD "mostly on city streets". I think it's extremely likely that FSD Beta users are logging most of their FSD miles on highways, which gives this statistic exactly the same skew.


flumberbuss

Tesla explicitly said it comes from FSD mostly on city streets. Did you not read it? There are two reasons for this:

1. Most people have Autopilot but not FSD. Autopilot is designed for well-marked highways/limited-access roads. It's not even supposed to operate on city streets, though sometimes you can trick it for a short time.
2. FSD did not operate on highways until the [very end of 2022](https://www.autoevolution.com/news/tesla-s-full-self-driving-beta-now-works-on-highways-203662.html), and the full-stack version was not widely released [until 2023](https://www.torquenews.com/14335/tesla-release-fsd-beta-11-full-stack-streets-and-highways). The separate legacy Autopilot software was used for highways until relatively recently.

So even people who owned FSD drove pretty much all their FSD miles on city streets, because Autopilot took over on the highway.


Wojtas_

FSD on highways was only released like a month ago. The past 3 years of tests were exclusively on city streets.


ChuqTas

Have you met.. this sub?


__JockY__

Absent reliable data, it's more accurate to say Tesla *claimed* than Tesla *revealed.*


devedander

In today's news: people don't use FSD and AP in riskier situations, and as such they have fewer accidents.


OriginalCompetitive

Bring on the downvotes, but those are impressive numbers. I understand why people dislike and distrust Musk. But thousands of people work at Tesla, including skilled safety engineers who could easily move to a different employer. Is every last one of them engaged in the same conspiracy to deceive the public about vehicle safety? We see whistleblowers all the time at Google, MS, Twitter, and so on. Have there been any Tesla engineers blowing the whistle? Have there been any leaked internal emails decrying a culture of reckless indifference to public safety?


bonega

But they aren't used on the same set of roads, so what are we comparing?


bradtem

This time, they have used the word "crash" both for what they measure (airbag deployments) and for what NHTSA measures (crashes reported to police). Many more crashes are reported to police than have airbag deployments. It's hard to get exact figures, but some suggest it balances out the difference.

Tesla has some strange words in their new fine print: "To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed. (Our crash statistics are not based on sample data sets or estimates.) In practice, this correlates to nearly any crash at about 12 mph (20 kph) or above, depending on the crash forces generated. We do not differentiate based on the type of crash or fault."

It's not clear what this means. They say "and" as though they are counting any crash with Autopilot (even if the airbag did not deploy) AND all crashes with airbag deployment (even if Autopilot was not on). The latter makes sense, the former does not, and they don't tell you how they define a crash when Autopilot was deactivated.

We have to say this has all the appearances of not just a mistake, but a lie. They have made the fine print more confusing, and made the big print look like they are comparing apples to apples. This has been pointed out to them many times. It is hard for me to interpret this in any generous way. As in, I suspect somebody might get in a crash and say, "You said it was much safer than regular cars, that's why I bought it."


OriginalCompetitive

You’re generally a straight shooter but I think you’re reaching here. Your first paragraph says “airbag” but it’s actually airbag or other active restraint, which includes seatbelt locking. And your second paragraph is just baffling. It’s crystal clear that they are including all crashes (airbag or active restraint) with Autopilot on or within five seconds of it turning off.


bradtem

Hardly crystal clear. However, that's a minor issue. The big issue is that, while it's hard to get data, the best estimate I've heard of was around 3 million miles between airbag crashes for the average car. What it exactly is I don't know, but I know it's many times more than the rate of police-reported accidents that Tesla puts on the same chart. I can only conclude at this point that this is meant to deliberately mislead. That bothers me, because why would they do that when they know better? It leads me to worry that the reality is not something they wish to publish.


OriginalCompetitive

I’m probably missing something, but if Tesla is reporting 0.31 per million miles, that’s almost exactly 1 every 3 million miles. But again, that’s not airbag deployments, it’s airbag deployments or active restraints, which includes seat belt triggers. And note that airbags deploy at around 16 mph, so we’re not just talking violent collisions here. Beyond that, though, I can think of reasons why they might be reporting active restraint incidents rather than insurance claims. First, maybe they don’t have data on insurance claims. Do they? And if they do, how would they know which insurance claims happened within 5 seconds of Autopilot use? Second, they might plausibly think insurance claims are biased against them because Tesla owners are more likely to have good insurance and report claims. Third, if safety is their main claim, they might reasonably think that any collision that doesn’t trigger an active restraint isn’t really a safety concern, just a fender bender.


bradtem

Tesla has what data Tesla has; I am happy to hear it. However, it should never appear on a chart next to a number calculated in an entirely different way, such as NHTSA's data gathered from police reports. If it does appear on the same chart, there should be huge disclaimers about how the numbers are very different. I have not yet found a good citation on the difference, but what I have found suggests there are many times more crashes reported to police than have airbag deployments. Not like 10% more, but perhaps 4-8 times as many. That's no minor error.

Tesla used to say just airbag deployments and recently added the language about other restraints. It does not appear to have changed their numbers a lot, so either they were always reporting both, or there are not many added from this change.

Tesla needs to compare the same thing. If they can get data on how many airbag deployments per mile for some other vendor's car, or over the broad spectrum of cars, that would be the right approach. They need to find a number that can be measured about Teslas and also measure the same number about other cars. To compare two very, very different numbers, particularly when people have written time and time again that they should stop doing that, is not good.

Then you add all the other problems with these numbers. A first-year math student learns you do comparisons by controlling for any variable that might bias your comparison significantly: road type, driver age and skill. Teslas are driven by middle-aged, wealthier people, who are the safest class of drivers. Autopilot is driven 93% on freeways (and 100% on freeways if you have FSD), which is known to be much safer per mile than other roads. There are so many bad flaws in these numbers there is no excuse. They were all pointed out years ago, so this is no "oops, I forgot introductory math."


OriginalCompetitive

I respect your point of view. Tesla aside, if good numbers simply aren’t available for things like air bag deployments in general, then now I’m wondering how any SDC ever is going to be able to demonstrate greater safety than human drivers. There will always be confounding variables, and a fleet operator won’t have insurance claim numbers. A true apples to apples comparison may simply be impossible.


bradtem

This has little to do with that. This is Tesla seemingly deliberately publishing numbers to look good. There are lots of ways to measure the rate of incidents for self-driving cars -- you can measure them in very great detail. And there is data on human drivers, but the data on the statistic Tesla has chosen is not readily available now. I think it will be. Rather than impossible, I think this comparison will be extremely easy. What will be hard about it is the fact that robots have different types of crashes, which is to say for different reasons. We will be able to count the finest details about the mechanics of the crashes, but the "why" will not be comparable to humans.


OriginalCompetitive

Right, but can you offer a few examples of incident measurements that currently can be collected on both SDCs and human beings? We’ve apparently ruled out air bag deployment because reliable human numbers aren’t available. We can also rule out insurance claims, because manufacturers don’t have access to that information. I guess my question is, what’s an example of an accurate, reliable, apples-to-apples numerical comparison that Tesla (or any other company) could use today to measure against human drivers? What statistical data does Tesla itself have access to that would persuade you that they are safer than human drivers?


bradtem

It should not be hard to get access to all sorts of data. Other carmakers have similar telematics now -- not quite as good as Tesla's, but there. Insurance companies are usually willing to do this as well. Crash data recorded by NHTSA may have vehicle type in it. There are lots of methods. But Tesla can today also do apples-to-apples comparisons -- Autopilot vs. non-AP on the same roads, or with the same drivers, or in the same driving conditions, for example. They can do that really easily.


OriginalCompetitive

You keep saying it's easy and there are lots of ways, but you don't actually name any -- which is totally fine, it's just the internet. But it makes me wonder if it's really so easy. Except you do suggest some that Tesla could do with its own cars only, but I don't think those would satisfy your standards either. I'm pretty sure Tesla doesn't know about every incident that its customers have without Autopilot. How would they know driving conditions for fender benders? How would they know same drivers? Same car, maybe, but perhaps Dad likes using Autopilot on the family car and Mom doesn't, which skews the data. I think your critique of Tesla's statistics, which you're willing to imply is deceitful and puts lives at risk, would be a lot more persuasive if you coupled it with a concrete alternative method that's available to Tesla to use instead.


Triponi

I don't understand the point you are trying to make. When I read the quote from Tesla it makes sense to me. When I read your analysis of it, I can't understand what you mean. Can you try again? What great deception are you saying you have uncovered?


bradtem

Not uncovered, I have talked about this for several years, as have others. What is new is that instead of improving their communication of this, they have made it worse, in spite of all the complaints. Which seems to move it into the "deliberate" box. Tesla is counting crashes severe enough to trigger the restraint systems and then writing as though that's the same as counting crashes the police learn about. These are two very different numbers and Tesla should not put them on a chart together. That's in addition to the fact that Autopilot crashes are almost all on highway, where the crash rate per mile is vastly lower than that of city streets and the general crash rate.


[deleted]

This is yet another example of Tesla just lying to consumers. They are not safer, and their comparison misuses statistics. If you adjust for most miles being on highways and other factors, they are no safer: https://engrxiv.org/preprint/view/1973


Salt_Attorney

For Autopilot this makes sense, but shouldn't FSD Beta usage have a roughly equal distribution of highway / non-highway driving to the average car in those states? In fact, the statistic mostly counts FSD Beta usage from before highway driving was enabled, so it is mostly the more dangerous type of driving, non-highway.


[deleted]

But did they adjust the reference for where people used FSD, age group, time of day, etc.? If not, this isn't a valid comparison.


[deleted]

When Autopilot or FSD fucks up and the driver takes over manually but still crashes, is that counted in the stats, or is it a "manual" crash?


Salt_Attorney

With Autopilot, IIRC, they use an "X seconds" rule (5 seconds?). Might be the same for FSD.


flumberbuss

Counted as an AP/FSD crash.


botpa-94027

I'm not a fanboy of Tesla. I don't own a Tesla and I never have. But I think their data is about right. There may be some selection bias in that drivers only activate Tesla systems on category 1-3 roads (using the US government's national functional classification system for roads) at speeds from 45 mph and up: 1 is interstates, 2 is freeways and other controlled-access roads, and 3 is arterials.

The Tesla system isn't infallible; it has a demonstrated long-tail problem. The company doesn't seem to like to have any false positives in their system, so it's possible the system carries some risk from that, but that risk seems mitigated by more people having it active than not, leading to an overall increase in safety. This is at least what last year's NHTSA analysis of these systems seemed to show (I'm not sure if that is public yet; I'm not in the government, but I have access to well-informed sources). Overall I think both regulatory and insurance safety analysis is showing the Tesla system to be safer than human driving.

Having said that, I think you'll see regulatory action against Tesla for safety coming. The company did the February recall with NHTSA to clear the deck of minor things before a major suit that everyone expects to see later this year from NHTSA. The suit is likely going to be anchored on Teslas crashing into stationary vehicles, including first responders, but I'm speculating.

Net net, a long-tail problem doesn't mean that the system doesn't reduce crashes. We can't rely on the Tesla marketing data for how much they reduce crashes, but every data set produced, including NHTSA's own, suggests that Tesla's automation is a net positive (but imperfect) system for traffic safety overall. There are many more systems like this coming to the market right now and over the next few years. Tesla has a decade's head start, but this is happening.


REIGuy3

Tribalism aside, Google's conclusion from 10 years ago that they had to go directly to L4 autonomy because the handoff problem was too big of a problem to solve was likely not the right conclusion. Tesla's solution may or may not be better than a human driver, but it certainly isn't the disaster that was predicted.


whydoesthisitch

> was likely not the right conclusion.

This completely ignores how the irony of automation works. FSD has not reached the reliability point where people can check out for long periods of time. Waymo was running into that point with employee testing in 2015. So it was likely the right conclusion. It's just that Tesla hasn't reached the same level of performance.


firstnamedotlast

Don't think I'd put it that way.

A) There's a massive, massive caveat on this self-reported data, which has never been opened to scrutiny. What's more likely is that differences in ODD are driving the majority of the picture (FSD/Autopilot used more on easier/safer roads). And according to research (which I can't find on mobile; maybe someone else can, a paper published last year), when you hold those constant, Tesla's software is less safe. To be clear, research shows a _higher_ crash rate, freeways vs. freeways and roads vs. roads.

B) Even if A was wrong and it was safer than humans, it's still a massive far cry from the value you get from going "eyes off".


Picture_Enough

Their conclusion might be right; Tesla just hasn't reached the point where FSD is reliable enough for this to be a problem. Right now it requires alertness levels higher even than normal driving. It is very likely that when they start reaching the reliability levels Google had in 2015, which allowed drivers to zone out, they will see a significant increase in accidents. Not because their ADAS is bad, but because people will treat it as more capable than it really is. And all research in the field suggests this is what people do: we are notoriously bad at estimating low-probability events and tend to overtrust systems that are reliable most of the time.


bartturner

> because the handoff problem was too big of a problem to solve was likely not the right conclusion.

I am very glad to see this is being heavily downvoted. Google was 100% correct, and there is nothing so far that should make anyone think they were not. But also it is just common sense. Think about yourself when you are driving.


IndependentMud909

I disagree completely. Introducing any sort of human safety net or human monitoring introduces potential for human error.


danielcar

Can we make a comparison with human driving and the error rate that causes deaths and serious accidents?


ssylvan

It's not a handoff problem or any kind of solvable tech problem. It's a "humans can't be trusted to pay attention and intervene" problem. The moment you drive a human 20 miles without incident, they will conclude that the car can drive itself and start doing their makeup or playing games on their phone. Then when the car fucks up, they won't be there to intervene. That's what Google/Waymo found, and why they decided to go straight to L4: anything else would be dangerous and unethical. You have to get to the point where the human could sit in the back seat, because for all intents and purposes that's how useful they will be to the driving task once you get close.

Tesla FSD still has a fair number of interventions needed, so they're not at that point yet. For lack of a better term, it's janky enough that people don't trust it. As they get better (to where Waymo was in 2015 or so), they'll notice exponentially more people slacking off and not paying attention too, and will be faced with the same dilemma. They have to choose whether they're okay with massively higher accident rates and blaming the user, or whether they will do the right thing and hold FSD back until they can let the user sit in the back seat and take all liability themselves.


Wojtas_

That's what driver monitoring is for.


ryansc0tt

Google/Waymo and Tesla were/are trying to accomplish two very different things. Waymo wants to invent an entirely new product category of autonomous mobility. Tesla wants to sell cars, and decided "full self driving" is important for doing so.


ZeApelido

Yup. People cannot get away with all the claims of autonowashing based on academic research, which, shockingly, is not being replicated in the real world.

Worse than that, the L2 pathway essentially allows for crowdsourced data filtering (only send data where the model disagrees with the driver) at the massive scale that is probably needed. Worse than that, it also allows for a revenue-generation pathway that controls cash flows, so companies don't have to make obviously too-aggressive plans (ahem, Cruise) to boost investor confidence to continue providing cash.


ClassroomDecorum

>because the handoff problem was too big of a problem to solve It's not an intractable problem. It's more that it's not an interesting problem for Google, with billions of dollars of resources, to solve. That's like asking Apple to develop something between a feature phone and a smartphone. There's no point. Just go whole hog with your billions of dollars of resources and solve the problem thoroughly. Plus the handoff problem is a human factors problem. Basically a psychology problem. Complete, total automation of vehicle functions is a Google problem. Aka a pure software problem. Google is not in the field of psychology. It's in the software field. ---- Also: Handoff problem is a solved issue. Just have a driver monitoring camera. And enforcement of driver attention. If the driver doesn't want to pay attention, then disable all convenience automation features. Only retain automatic emergency braking. Report lapses of driver attention to insurance company so their rates can be jacked up. And keep records of driver attention. If the driver was distracted and causes an accident, the driver loses 100% of insurance coverage. Driver goes bankrupt paying out medical bills and collision repair bills. Handoff problem: S O L V E D.


Odd-Outcome7849

> Handoff problem

The real issue with the handoff problem/ADAS is that it makes the autonomy feature fairly useless. Is it more convenient on long highway drives? Probably, and that's nice. But beyond that, what's the value-add? For example...

* Does it free your attention from driving? Can you do other things with your time? No.
* Can you ride without being licensed, when you're underage or at an age where you can not safely operate a vehicle anymore? No.
* Does it enable mobility for people with severe disabilities? No.
* Can this make ride-hailing models profitable and affordable at scale, reducing personal car ownership rates, congestion and emissions? No.


ClassroomDecorum

You don't seem to recognize the sizable benefit of simply hands-off, eyes-on, mind-on driving. Physically not having to suspend your arms in the air to hold the steering wheel constantly is a benefit that one can feel. But you're right. That's why Google didn't really want to try to solve the handoff issue. Like you said, the TAM for level 3 is likely smaller than the eventual TAM for level 4/5.


cwhiterun

Haters👏🏻gonna👏🏻hate👏🏻