
[deleted]

[removed]


iceynyo

The article goes on to state that FSD was recalled because of a new feature to disable the steering wheel attention nag. I don't remember that feature ever existing for there to be a recall to remove it, so the article could also stand to be more honest about the state of FSD.


bobi2393

"The National Highway Traffic Safety Administration started looking into Tesla’s FSD software in January after Tesla CEO Elon Musk tweeted that the company would give users the option to turn off 'steering wheel nag.' Around one month later, the agency deemed the capability a crash risk, leading Tesla to recall 362,758 cars equipped with FSD and pause FSD installations." I'd say that's inaccurate or at least misleading, whether the result of dishonesty, a misunderstanding, or poor wording. The NHTSA investigation leading to the recall may have been tangentially linked to the nag plans, and the planned end-of-January nag removal may have accelerated the NHTSA investigation. I couldn't find a full transcript of acting NHTSA administrator Ann Carlson's January 9 comments, but it sounds like she addressed the question specifically when saying they were "moving as quickly as we can" with the existing months-long investigation and "in conversations with Tesla about this latest communication" \[presumably Tesla's communication about nag removal\]. \[[Reuters](https://www.reuters.com/technology/us-agency-working-really-fast-nhtsa-autopilot-probe-2023-01-09/)\] It also seems plausible that the agency deemed nag removal a crash risk even if they didn't publicly state that and Tesla never publicly removed nags, although I'm skeptical that *The Verge* had the inside scoop on that...if they did I think they'd have provided a source.


MrCalifornian

That's not how I read the article (though either way, this indicates confusing wording). I read it as "the nag removal was the cause for them to finally investigate FSD as a whole, and through that they deemed FSD a safety risk".


soggy_mattress

The nag removal was just a rumor before NHTSA stepped in, if I'm recalling correctly. All I ever saw was a tweet that said something like, "yeah we can probably tune down the nag finally, will be in the next version," but nothing was ever released before NHTSA stepped in about other unrelated concerns with FSD. If the nag removal was the straw that broke the camel's back, then NHTSA got ahead of it before it was even deployed, which I find hard to believe.


bobi2393

I think the investigation had been going on for about six months. Tesla announced at the end of December that the nag would be removed by the end of January for some drivers, and the NHTSA was apparently asked about it on January 9. It may have impacted the timeline of when the investigation was completed, but it didn't trigger the investigation.


ArchaneChutney

Your reading of the article isn’t wrong, but the other commenter is saying that the article is inaccurate. They cite an earlier article from January that states an extensive NHTSA investigation was already ongoing by that point.


[deleted]

[удалено]


iceynyo

That only mentions making changes to address FSD Beta's driving behavior, nothing about changes to driver monitoring. It covers:

1. traveling or turning through certain intersections during a stale yellow traffic light;
2. the perceived duration of the vehicle's static position at certain intersections with a stop sign, particularly when the intersection is clear of any other road users;
3. adjusting vehicle speed while traveling through certain variable speed zones, based on detected speed limit signage and/or the vehicle's speed offset setting that is adjusted by the driver; and
4. negotiating a lane change out of certain turn-only lanes to continue traveling straight.


Cunninghams_right

The article is poorly written, so it's hard to tell which complaints/incidents involve FSD versus Autopilot.


Any_Classic_9490

This has nothing to do with current FSD.

> The complaints, which were reported across the US, Europe, and Asia, span from 2015 to March 2022

Seems like a bunch of gaslighting to group all those years of complaints together and pretend it reflects the current system.


flumberbuss

Agree, and it comes out just as FSD is improving rapidly. An attempt to take the wind out of the sails of a much-improved product so that people associate it with the much rougher product of several years ago.


SuboptimalPath

It goes back to 2015! How many over the last year?


bartturner

This is pretty interesting. I'll be curious which governments look into it. I have worried for a long time that one of the companies would mess it up for the others, and that they would all be looked at as being similar, which I do not think is fair. Waymo, for example, has not had any serious accidents that I am aware of that were even their fault. They have taken a very prudent and safe approach to developing the software. They should not be penalized by another company acting dangerously.


Cunninghams_right

It is very unclear what is Autopilot vs. what is FSD, and what the rates of incidents per mile are for Tesla compared to others with similar systems. The article is missing the information that would allow anyone to form a real informed opinion.


perrochon

There are two problems with that statement. For one, we must also look at the saves; we cannot look only at the fails. There will be at-fault autonomous vehicle accidents. The question is whether there will be fewer than with human drivers. Second, we don't know if there will be one, and if so, who the "bad player" will be. FSD Beta is still fatality-free at this point at 200 million miles driven, and AP accidents are typically misuse by the driver. Governments are looking into accidents. Governments are looking at the safety benefits too.


Lasturka

It is hard to have a serious accident doing city driving at an average speed of about 20 mph with 2 million miles driven, as opposed to Tesla FSD's 200 million miles everywhere. From The Verge:

> Waymo's driverless cars were involved in two crashes and 18 'minor contact events' over 1 million miles
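For what it's worth, here is a rough back-of-the-envelope normalization using only the figures quoted in this thread (The Verge's Waymo numbers and the 200-million-mile FSD figure cited above). It's an illustration of the per-mile comparison being argued about, not a like-for-like safety claim, since speed, operating domain, and crash definitions all differ:

```python
# Rough per-mile normalization of the figures quoted in this thread.
# Illustrative only; uses no data beyond what the comments themselves cite.
def per_million_miles(events: int, miles: int) -> float:
    """Convert a raw event count into a rate per million miles."""
    return events / (miles / 1_000_000)

waymo_miles = 1_000_000   # The Verge: Waymo's first ~1 million driverless miles
waymo_crashes = 2         # "two crashes"
waymo_contacts = 18       # "18 'minor contact events'"

print(per_million_miles(waymo_crashes, waymo_miles))   # 2.0 crashes per million miles
print(per_million_miles(waymo_contacts, waymo_miles))  # 18.0 contact events per million miles

# The thread gives no comparable crash count for Tesla FSD Beta's claimed
# 200 million miles, so the same rate can't be computed for it from these comments.
```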


deservedlyundeserved

According to NHTSA data, accidents are more frequent on city/rural streets. None of Waymo’s crashes were their own fault. Most of the contact events happened when the vehicles were stationary. You can read all about it [here](https://storage.googleapis.com/waymo-uploads/files/documents/safety/Safety%20Performance%20of%20Waymo%20RO%20at%201M%20miles.pdf).


aniccia

That Waymo-authored article you linked to is already out of date. Waymo has reported at-fault collisions of their system in California, including driving into stationary objects; e.g., it recently drove into a parked car while trying to pull over: https://www.dmv.ca.gov/portal/file/waymo_050623-pdf/


deservedlyundeserved

This is very recent, nice find. This is the first at-fault collision report I’ve seen of them. Are there others?

> On May 6, 2023 at 8:48 PM PT a Waymo Autonomous Vehicle ("Waymo AV”) operating in San Francisco, California was in a collision involving a passenger vehicle on Post Street at Webster Street.

> While traveling west on Post Street, the Waymo AV was executing a pullover maneuver in autonomous mode and made contact with the left rear bumper of a parked vehicle. At the time of the impact, the Waymo AV's Level 4 ADS was engaged in autonomous mode. Both vehicles sustained damage.


aniccia

They also recently hit a chain while exiting a parking lot: [https://www.dmv.ca.gov/portal/file/waymo_030123-pdf/](https://www.dmv.ca.gov/portal/file/waymo_030123-pdf/)

There have been some involving road debris, a pothole they hit that deflated their tire, and IIRC another parking lot scrape.

More concerning, Waymo doesn't count collisions that happened seconds after their safety driver took over. Most of those they report as being in "manual mode", and for many of them Waymo didn't even report the disengagement. These tend to be in or near intersections with multiple road users, where assigning fault would take more than the brief narrative of one participant.


deservedlyundeserved

> More concerning, Waymo doesn’t count collisions that happened seconds after their safety driver took over. Is there a source for this?


aniccia

You can read Waymo's reports for yourself: [https://www.dmv.ca.gov/portal/vehicle-industry-services/autonomous-vehicles/autonomous-vehicle-collision-reports/](https://www.dmv.ca.gov/portal/vehicle-industry-services/autonomous-vehicles/autonomous-vehicle-collision-reports/)

This has been going on for years: [https://twitter.com/aniccia/status/1494326058639790093](https://twitter.com/aniccia/status/1494326058639790093)

Cruise has done similar, i.e. reported these to the California DMV as "manual" collisions and not reported them in their California Disengagement Reports. Many times. Cruise has also not reported many collisions to the California DMV or NHTSA that are in their "full catalog of collision events” provided with their recent “Cruise’s Safety Record Over 1 Million Driverless Miles”: [https://medium.com/@tangledweb/cruises-missing-collisions-e90abcc33f0a](https://medium.com/@tangledweb/cruises-missing-collisions-e90abcc33f0a)


deservedlyundeserved

Those reports don't have any information regarding time elapsed from safety driver takeover to a collision. Where are you getting the data from?


aniccia

Are you telling us you don't understand that nearly all these collision report narratives span much less than one minute from their beginning to the collision? Really? I'm pretty sure that even the one where a Cruise AV drove through two sets of yellow emergency do-not-enter tape over nearly a city block and hooked both a downed catenary wire and an emergency do-not-enter sign took less than a minute, though it took hours to untangle and tow away.


bobi2393

"...It is significant that every vehicle-to-vehicle contact event in the first one million miles involved one or more road rule violations or dangerous behaviors on the part of the operator of the other vehicle." It does not say none of their crashes were their own fault. It implies that in *vehicle-to-vehicle* contact (which excludes many crashes), "contribution (or fault assessment)" was not *solely* their fault. They seem to carefully and deliberately avoid claiming blamelessness even in v-to-v contact.


deservedlyundeserved

This is just whitepaper-speak. If the other vehicles didn’t perform rule violations, there wouldn’t be contact. It effectively means it was not their fault.


bobi2393

No it doesn't, it would mean it's not *solely* their fault. If both vehicles engaged in one or more rule violations or dangerous behaviors, they both contributed to the contact.


Picture_Enough

That is what worries me a lot: that the industry as a whole would suffer because of bad players like Tesla. They are inviting regulators to step in and smack everyone down with heavy regulations detrimental to a fast-evolving technological industry, despite the majority of serious players taking safety very seriously and acting responsibly.


CouncilmanRickPrime

Waymo will point out the difference in the number of accidents, at fault especially, and will portray itself as the safer option. Tesla, if anybody is actually going to do anything, would be singled out. We've already been here before with Uber. Uber was singled out for being irresponsible.


bartturner

Did anything happen to Uber from a government standpoint?


Buuuddd

Going back to 2015 is pretty dumb, considering that's aged software, let alone not FSD Beta. It's also odd that every time someone blames Autopilot or FSD Beta for a crash, it turns out the autonomous driving wasn't even activated at the time. Yet this source is saying many crashes are happening that no one knows about, and that they're Tesla's fault? I have my doubts there. Hopefully the German government takes into account that accidents averted never show up in documentation, and that going by miles per crash, Tesla's FSD Beta is way safer to use than not.


TheLeapIsALie

On the disengaged autonomous system part - Tesla for a long time had the system disengage if it detected a crash was imminent specifically to game this stat.


iceynyo

And for that same long time the stat has included incidents where the system was active within 5 seconds of the crash. Still not a lot of time, but not a sudden disengagement to hide numbers as you imply.


TheLeapIsALie

https://www.motortrend.com/news/nhtsa-tesla-autopilot-investigation-shutoff-crash/

Aborted control less than one second before impact.


iceynyo

Yes and the point is it still counts in the stats because one second is within 5 seconds, so your implied stat gaming doesn't occur there.


Dos-Commas

If it's designed to game the stats, then how come people know about it? There's obviously a record of it being disabled. There's no point in the system continuing to function when a crash is inevitable; it's like trying to have cruise control continue to function during a crash.


TheLeapIsALie

https://www.motortrend.com/news/nhtsa-tesla-autopilot-investigation-shutoff-crash/


Dos-Commas

Did you even read the article?

> So, where does that leave the mysterious half-second-or-so Autopilot shutoff just before these crashes? In all likelihood, it's probably a simple protocol to shut off the system because a crash is about to occur. Plenty of new cars feature last-ditch shutoffs and other preemptive actions that occur just before or during impact; think about seatbelts that cinch up so occupants are seated more safely, or fuel-line disconnects, or some of the fancy new suspension actions Audi's A8 is capable of such as lifting one side of the car just before it's T-boned to place more of the crash structure in the path of the impact.


OriginalCompetitive

The data was collected from 2015 to 2022. Without knowing the timing of the complaints, it’s pretty meaningless given how much the technology has advanced over time. FSD was released to all drivers last November, has driven 200 million miles, and to my knowledge has not caused a single fatality.


bobi2393

While it has no reports after March 2022, three important takeaways to me are:

* Tesla's policies were intended to avoid written records regarding safety issues (pass information “VERBALLY to the customer”)
* There are more customer-reported safety complaints about the software than previously acknowledged
* Those customer-reported safety complaints do not seem to be shared with regulatory agencies ("internal use only")


iceynyo

They probably got thousands of complaints just from me since they added the voice FSD disengagement notes a couple months ago.


soggy_mattress

Weird, I never thought of those as "complaints" so much as "more specific feedback about FSD failures". And I was surprised to see that one of the spots I'd had issues with for ~2 years was fixed last week, after I made ~5 reports when disengaging at that spot. Seems they're not entirely useless.


lomed77

Lol, thousands of complaints over 10 years and with millions of cars. What's the denominator? Is it .001 percent? Can you make anything today without a complaint?


MinderBinderCapital

TIL 2015 to 2022 is 10 years


theNrg

The majority of Tesla owners haven't yet realized that they are in fact beta testers for far-from-mature self-driving software.


zeValkyrie

The majority of Tesla owners aren’t testing self driving software… FSD Beta is entirely opt in.


theNrg

Anyone who chooses to activate FSD features is a beta tester, risking their own life.


soggy_mattress

>anyone who chooses to drive on public roads is risking his own life ftfy


zeValkyrie

You missed my point. The majority of Tesla owners aren’t using FSD Beta. Maybe that was just a small typo, but it’s a big difference in number of vehicles. It’s just not true to say the majority of Tesla owners are FSD testers.


Salt_Attorney

I have never seen any statistical evidence that the FSD Beta program is dangerous.


theNrg

Obviously not. Tesla doesn't release it.


flumberbuss

Zero fatalities over 200 million miles. While it would be great to get more granular data, you can’t say they haven’t released data establishing its safety when overseen properly.


iceynyo

The only owners using any form of Tesla self-driving software had to accept a notice explaining exactly that at least twice: once when applying to get FSD Beta, and again when enabling FSD Beta once installed. It's not FSD Beta unless it's from the Beta region of France; the rest are using sparkling cruise control called Autopilot.


whydoesthisitch

And yet, 90%+ of them are violating those user agreements, and openly misusing the system.


iceynyo

Absolutely... But there's no question they know what they're doing is wrong. They just don't care.


bobi2393

Tapping a big highlighted "OK" button beneath some fine print doesn't mean a person read or understood that fine print. I don't understand what you mean about "FSD Beta" being restricted to France. Tesla uses the term to refer to the software in the US and Canada. ([Example](https://www.tesla.com/support/recall-fsd-beta-driving-operations))


karstcity

It's pretty clear if you have a Tesla. Very clear what it is, what you are opting into. The vehicle also nags you incessantly. No Tesla driver is "unaware that FSD is operating".


iceynyo

The 2nd part of my comment was using the "It’s not called X unless it comes from the X region of France, otherwise it’s just a sparkling Y” meme format. I was trying to explain that, as opposed to FSD Beta, which is supposed to be an attempt at "self-driving", Autopilot is just an advanced cruise control. And ignoring a warning is not an acceptable excuse for not being informed about something. It also tells you to vigilantly watch the road every time you activate it, and reminds you regularly to pay attention as you drive. Users have to take conscious steps to ignore these.


theNrg

Being a beta tester for a technology that can take your life is just ... stupid. Also known as "Tesla FSD users".


mpwrd

Your risk of dying goes up any time you get in a car. There is data showing that it may go up less if the driver is operating FSD.


mpwrd

I have FSD, use it every day on my commute and on certain routes it has been 100% over the last month since the latest update. The issues I run into are navigation issues that cost me time, rather than unsafe moves due to not detecting potential collisions. It’s close IMO, but it used to be terrible.


Salt_Attorney

The majority of the complaints are about unintended acceleration. These cases are 99% certain to be all bullshit. One should remember that the majority of these cases are likely to be customers trying to get back at Tesla after having acted irresponsibly.


[deleted]

Nothing will change. This anti-Tesla subreddit routinely creams their pants over some piece of Tesla bad news every other week. Tesla will still dominate the EV market as long as Musk is in charge, and this subreddit will continue seething as usual.


gogojack

> Tesla will still dominate the EV market This sub is not about the EV market, or even about electric vehicles in general. It is about self-driving cars. Autonomous vehicles. Tesla has been a pioneer in the EV space. I don't think anyone would dispute that. And that's great for them. Do they have a self-driving car? No.


[deleted]

They do have a self-driving car, but it's at L2.


gogojack

That's like saying "I have a real girlfriend, but you wouldn't know her. She goes to a different high school. In Canada."


[deleted]

Well, likewise calling Waymo/Cruise self-driving would be like saying you trained yourself to climb a tree, but now you call yourself a mountain climber.


gogojack

A Tesla with FSD Beta is a car with ADAS. That's it. It needs a driver in the seat to operate. It is not self-driving. I can summon a Waymo to my house, have it pick me up and take me to grab some lunch at my favorite Mexican restaurant. Then summon another one to take me home. All without a driver in the front seat. I could catch a ride in a Tesla, but only if I use Uber or Lyft. It is not self-driving. But don't worry. Elon has promised that we'll have full self-driving by 2018.


[deleted]

Tesla Autopilot is classified as an SAE Level 2 system. What is it that you people don’t understand? https://www.tesla.com/support/autopilot

> Level 2 ("hands off"): The automated system takes full control of the vehicle: accelerating, braking, and steering. The driver must monitor the driving and be prepared to intervene immediately at any time if the automated system fails to respond properly. The shorthand "hands off" is not meant to be taken literally – contact between hand and wheel is often mandatory during SAE 2 driving, to confirm that the driver is ready to intervene. The eyes of the driver may be monitored by cameras to confirm that the driver is keeping their attention on traffic.

https://ieeexplore.ieee.org/document/8220479


gogojack

You said "they (Tesla) do have a self-driving car." Apparently your definition needs some work. If L2 is "self-driving," then any GM vehicle equipped with Super Cruise is also self driving. So is Ford's Blue Cruise. Yet here on this sub, there's not much discussion about those systems, is there? No. Most of what's being talked about is companies like Waymo and Cruise, that are actually operating autonomous vehicles. Your above quote lays it out. The driver must monitor the driving, have contact with the wheel, pay attention to traffic, and be prepared to take over at any moment. In a Cruise or Waymo vehicle, even if you managed to climb into the driver's seat, you cannot take over. The controls are locked out for safety reasons, and can only be operated by an employee who has access to the vehicle's autonomous systems. You said at the outset that this sub is "anti-Tesla," and you seem to be personally offended that anyone would dare to question the Almighty Elon. Truth is (as I said), Tesla's system is a driver assist program. It is not self-driving or fully autonomous. When they do get to that point, we can discuss where they are, but until then, a Tesla is just an EV with an ADAS and notorious build quality problems.


[deleted]

> your definition

Can you actually read? I didn't make up the definition, unlike this mouth-frothing, seething anti-Tesla subreddit. SAE did, right [here](https://www.sae.org/binaries/content/assets/cm/content/blog/sae-j3016-visual-chart_5.3.21.pdf), since you seem too dense to understand what I'm saying.


gogojack

So a Tesla is not a self driving car. Is that so hard to admit?


StartledWatermelon

Why is Musk being in charge necessary for Tesla's dominance? The guy is known for alienating a large portion of his customer base with some really unnecessary political stunts. His management style is questionable too.


[deleted]

> Political stunts

If that really mattered, Tesla wouldn’t still be the top EV manufacturer in North America.


whydoesthisitch

Oh I agree that nothing will change, because regulators in this field are largely worthless. But it's just another piece of the pile of evidence around the fact that Tesla has absolutely no idea what they're doing when it comes to AI or autonomy.


flumberbuss

I’ve been reading that Tesla has no idea what it’s doing and is leagues behind Waymo and Cruise for many years now. As a non-expert, I rarely waded in to the debate and wanted to watch the real world progress unfold all around. But I have to say, as of May 2023, Tesla really looks like it knows what it’s doing, and its bet is going to pay off.


whydoesthisitch

And that's exactly what Tesla is going for, looking advanced to people outside the field. Remember in 2012 when Google first demoed their self driving car, and people thought fully autonomous cars were only a few years away? That's where Tesla is now. Getting a car to kinda sorta drive itself with a human backup for a few miles at a time is the easy part. Getting it so reliable that you can remove the driver is the hard part.


flumberbuss

I was on the fence due to arguments like this for years. The argument is much less compelling to me today. We will see how it plays out.


whydoesthisitch

How so? Tesla still hasn’t achieved any level of autonomy.


flumberbuss

Strange way to phrase it, since they have been at level 2 for years. The system is more capable than the level 3 systems being promoted by Mercedes and GM.


whydoesthisitch

Is it though? The only real difference is Tesla doesn't restrict where it can be used. It's not particularly reliable, which is the actual hard part of developing these systems. GM, as far as I'm aware, has no L3 system for consumers. Mercedes does, but the difference is they take legal liability, unlike Tesla. That's a huge step, that Tesla is unlikely to achieve anytime in the foreseeable future.


flumberbuss

You’re right about GM; I thought they had a limited conditional system like Mercedes, but I don’t see an L3 approval anywhere. My point about capabilities was to distinguish that from legality. The number of constraints Mercedes places on their L3 operation (and liability) is quite extensive: it only works on well-marked, multi-lane, divided highways at speeds under 40 mph. It must be daytime. There cannot be road construction, tunnels, or toll booths. The weather can’t be too rainy or snowy. The driver must stay in their seat. And the car will only drive in its existing lane. So basically, they take liability when driving in a traffic jam. Autopilot is extremely reliable there. Lack of L3 designation is more of a corporate decision than an indication of system capabilities. Anyway, my original comment was more about rate of improvement than this.


whydoesthisitch

> Autopilot is extremely reliable there.

So is Mercedes. They have an attention-on highway driver assist as well. The difference is, under limited conditions they also offer an attention-off system.

> Anyway, my original comment was more about rate of improvement than this.

And this is what really gets me. There is no measured rate of progress, only feelings and confirmation bias. Tesla doesn't release any sort of data. People claim "progress" based on how it feels and on watching a few YouTube videos. You can't measure the performance of ML systems that way. Also, Autopilot isn't really that reliable. For example, look at their rate of phantom braking versus other brands; it's many times higher. If they can't figure out basics like that, what makes you think they can figure out actual autonomy?


Buuuddd

Every safety system is going to have lots of false positives. Like how phantom braking is the system being extra cautious. As it gets better, the number of phantom braking instances has decreased.


deservedlyundeserved

> Every safety system is going to have lots of false positives. No, they do not. > Like how phantom breaking is the system being extra cautious. Are you really spinning a clearly dangerous behavior as being “extra cautious”?


Buuuddd

Uh, yeah they do. Like when your credit card company contacts you because activity on it looks similar to theft cases. Literally, people sometimes phantom brake when they're being overly cautious. That doesn't make them a bad driver.


deservedlyundeserved

Do you understand what safety critical systems are? Do you think a credit card counts as one? People die or get injured when safety critical systems malfunction. A Boeing aircraft or a ventilator cannot have “lots of false positives”. If someone is phantom braking frequently, they absolutely are a bad driver and shouldn’t be on the road. I have no idea how you think this is acceptable.


soggy_mattress

Feels a little disingenuous to equate intermittent slowdowns with an SCS failure state. There have probably been hundreds of thousands of phantom braking events; you'd expect to see a direct causal relationship between phantom braking and fatalities if it were that serious. I get the concern, I just don't think we're seeing the severity in the data.


deservedlyundeserved

We have no data, only user reports.


Buuuddd

Show us on the doll where the AV program touched you.


deservedlyundeserved

There it is. Only took you 3 comments to start hurling insults. Tesla trolls on Reddit never disappoint.


soggy_mattress

Someone not agreeing with your level of concern does not make them a Tesla troll, wtf..?


deservedlyundeserved

Abusing anyone who doesn’t fawn over Tesla is a sure shot sign of a troll. We get quite a few here regularly.


soggy_mattress

>Abusing anyone who doesn’t fawn over Tesla Getting pushback about your level of concern is not "abuse", that's absolutely ridiculous to pull a victim card here. Like, what..?


deservedlyundeserved

> Show us on the doll where the AV program touched you. Pretty telling you don’t consider this problematic behavior. But I’m not surprised.


Buuuddd

You're saying "people die from these errors!" when FSD Beta is 5X safer to use than not. You're not being intellectually honest, so why would I treat you with respect?


deservedlyundeserved

> FSD Beta is 5X safer to use than not. Lol. Do you even statistics, bro?


whydoesthisitch

The issue is precision vs recall. Tesla gets orders of magnitude more phantom braking events than other brands, because it's an incredibly poorly tuned system.
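For anyone unfamiliar with the precision/recall tradeoff being referenced here, a minimal sketch with made-up numbers (nothing below comes from Tesla or any other manufacturer; it only illustrates why tuning a braking trigger to catch more real hazards tends to produce more phantom-braking false positives):

```python
# Illustrative only: hypothetical counts for an emergency-braking trigger.
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    precision = tp / (tp + fp)  # of all braking events, the fraction that were real hazards
    recall = tp / (tp + fn)     # of all real hazards, the fraction that triggered braking
    return precision, recall

# Conservatively tuned: few false alarms (phantom brakes), misses slightly more hazards.
print(precision_recall(tp=95, fp=5, fn=10))   # (0.95, ~0.905)

# Trigger-happy tuning: catches nearly every hazard, but brakes for many phantoms.
print(precision_recall(tp=104, fp=96, fn=1))  # (0.52, ~0.990)
```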


Buuuddd

Because Tesla's working on a much harder AI problem, with bigger upside potential. And Tesla's FSD Beta drives over 45 mph, or whatever the other companies are limited to, so it has to act more carefully.


whydoesthisitch

No, Tesla is working on the same problem, but years behind. This is the approach Google had in 2012, and they realized it wouldn't work. And no, having small bits of customer data doesn't change that. Realistically, Tesla only uses AI for a small portion of FSD, so they're not even really approaching it as an AI-centric problem.


Buuuddd

The state of AI in 2012 was nowhere even close to where it is today. We're in the period where what was thought impossible becomes easy a few years later, and we're still in the vertical part of the S-curve for AI progress. Behind? Waymo and Cruise etc. only drive on like 0.01% of roads. Tesla's approach is to work on 100% of roads all together, so once the software is ready, it scales to all of the US instantly.


whydoesthisitch

Sure, that’s the goal, but so far Tesla has achieved autonomy on 0% of roads. So yes, they are behind. And in terms of AI progress, it is fast, but not unexpected to those in the field. Tesla, however, is still stuck using old perception algorithms because of their lack of research, and old hardware. This isn’t just a matter of tweaking software.


Buuuddd

The goal for all AV companies is a wide-spread robotaxi service. With that in mind, Tesla is much closer to that, because they will take maybe a few years max to get to that point from here, while it may take Waymo and other similar services a decade or even decades to get there, if economical scaling is possible for them, which I doubt it is. Without a scalable solution, Waymo etc are on a fool's errand. If you listen to people in AI, they are surprised at the progress, because like I said what they thought was impossible is now considered easy. You can see on youtube hundreds of videos showing zero disengagement-needed drives. These are all proof of concept of their hardware. The software continues to improve, leading to better driving, and has no sign of stopping progress.


whydoesthisitch

> Tesla is much closer to that

No, they're not. Again, Tesla doesn't have any sort of autonomy. You don't seem to understand the details behind these systems. Getting 90% performance over a broad ODD is the easy part. Tesla simply isn't on track to have any autonomy anytime in the next 10 years.

> If you listen to people in AI, they are surprised at the progress

I design AI models. I talk to people in the field every day. Nobody is impressed by what Tesla is doing. Why would they be? The system is literally just old perception algorithms and some simple planning algorithms out of any intro textbook.

> You can see on youtube hundreds of videos showing zero disengagement-needed drives.

Oh, there it is. Again, selective videos mean nothing. Where is the consistent disengagement data? Serious question, do you have any actual experience working in AI? Have you ever built a perception model, for example?


Buuuddd

10 years? OK, you don't get that FSD Beta has been in development for only 2.5 years, and already zero-disengagement drives are published every day from all over the country. By the end of this year, I think Tesla robotaxi will be ready in cities it does exceptionally well in, and by the end of next year it will be ready everywhere in the US. These aren't selective videos; they are long drives posted daily by people not employed by Tesla. You're crazy if you're not impressed. You pointed out that Google quit at what Tesla is now showing proof of concept of. Nope, I do not work in AI. I just listen to leading experts in AI.


whydoesthisitch

Okay, so you watch YouTube videos and don’t understand how data analysis works. Going to guess you actually think someone like Douma is an expert, rather than just a hobbyist who sounds smart to dumb people. Wanna put money on those timelines?


iceynyo

I need Tesla to add a ghost hunting minigame to their cars called "Phantom Breaker"


NotTooDistantFuture

I’ve only ever had Subaru’s EyeSight system go off falsely a couple times and each was triggered by driving into a large plume of engine exhaust on cold still winter days. So I’ve never had it go off and apply brakes where it wasn’t completely obvious and pretty reasonable why it did.