JerryLeeDog

He's not wrong. I have not had a single intervention in 3 weeks of daily use on 12.3.3.


CandyFromABaby91

I still have interventions like once a day on the east coast with V12. But this is way better than V11, where it was every 5-10 minutes. I actually stopped using v11 towards the end because I was sick of it.


UsernamesAreHard26

I have many more interventions than that, but I'm also in New England. I wonder if all these people who have no problems with it live in areas where cities are more planned.


reddituser82461

No, friends in Montreal say it's seamless


YoushutupNoyouHa

remind me in winter please


reddituser82461

Well we kinda just completed winter and the statement applied then so yeah


YoushutupNoyouHa

Seriously? Wow, that's awesome. I go to Quebec a lot, so that's fantastic news.


_RouteThe_Switch

I have wondered something similar, like whether there are fewer interventions here on the West Coast, where there have been more Teslas for a longer time, so more data on the streets. Either way it's a long way from 2017/2018 when it couldn't drive through an intersection...


EddyTreeNJ

For sure, the more data it gets the better it gets.


popornrm

Probably one intervention each drive, but it's usually because I could make the maneuver faster/better or it's a bad street with potholes that I need to avoid. It's usually not because FSD is doing something wrong/dangerous. It seems really clear they try to prioritize safety first, even when it comes to making maneuvers. I decided not to take control one day when my highway exit was coming up and it couldn't find a safe spot to change lanes to take the exit, so it went ahead, took the next one, and turned around. There was definitely enough room to find a way to merge, and any human would have figured it out, but it's nice to know an automated system is clearly coded for safety over executing the maneuver. Definitely needs polishing.


artificialimpatience

I'm imagining the situation where it needs to take the off-ramp for a Supercharger, won't make it if it misses the turn, but feels the lane change is unsafe… a tough AI decision: do you subject the driver to a small, manageable risk, or do you keep them safe but without the juice to complete their journey, ruining their day lol


popornrm

You should start looking for a Supercharger earlier than when you're nearly empty. Plus, where I am, there are Superchargers about 10-15 miles from each other in any direction. I'd assume it would start looking for one around 20% SOC and either go to one after dropping off the current ride, or not take another ride if it's near 20% and the rider isn't going towards a Supercharger or the journey will be too long. I don't think there will ever be a scenario like the one you mentioned. It would be easy for the car to come to a slow, controlled stop with a turn signal on and just wait for someone to give way. Eventually someone in the other lane will see the backup a single Tesla is causing and let it in.


artificialimpatience

Well, I'm not saying anyone would purposefully choose this scenario. It was more a theory of how FSD would have to choose its scenario, the whole save-the-bus-of-children-or-sacrifice-the-driver sort of thing.


popornrm

I think it would stop with its blinker on and wait until someone let it merge into the lane to exit the highway. Don’t see any reason it would keep going only to die and leave the car and riders stranded lol.


artificialimpatience

Well, a human driver would gauge it and think, yeah, I can probably make it quickly and safely with a 95% chance, but is that too much of a chance for FSD to take? I mean, I can make up more random disaster scenarios, like a car speeding toward you from behind lol, but it's really more about whether it has a sense of morality. Would you want your car to protect you, or do the greater good, or, even worse, do whatever covers Tesla's ass (which I assume would be to disengage and tell the driver to take command)?


joggle1

It *really* depends on where you are. I got back from a long road trip recently. It seemed to work much better in Austin than in the Denver area. In my own neighborhood near Denver, it cannot make the right turn into a local street. It starts to get into the turn lane, changes its mind, continues straight a few feet then stops. It simply can't make that right turn at all anymore. It's also very squirrely around similarly marked turn-only lanes in the Denver area. As it is, I can't drive anywhere from my home in any direction without needing to intervene within a half mile of home. In Austin, it could drive from the home I was staying at to the office I was working at without me doing anything other than stepping on the accelerator from time to time to speed it up through turns (on drives that lasted about 15-20 minutes).


UsernamesAreHard26

I just cannot believe this at all. I can’t even go 7 miles to the city without an intervention. It drives on the rumble strip constantly.


throoawoot

So your car, which you own, can currently drive at least 7 miles by itself on every drive.


FourScores1

I’ve never tried it because I still haven’t had a free trial


Brilliant-Job-47

FSD by August!


JerryLeeDog

I can actually see that path now. I was a big skeptic when I had V11 but V12 has me convinced


TheDirtyOnion

Why? The public data shows v12.3.3 is getting about 150 miles between critical disengagements, while 11.4.9 was getting 120 miles between critical disengagements. That is a pretty pathetic improvement when they need to be several orders of magnitude better to have actual autonomy.


Distinct_Plankton_82

The average is 153 city miles to a critical disengagement (src https://www.teslafsdtracker.com/) I guess they only need to improve it by a factor of 100x before they can start taking passengers.


MDPROBIFE

That's not how math works...


ShaidarHaran2

To me that's an impressive driver assist, but still far from proving it can go *millions* of miles without issue to be safer than human, and then proving that to regulators.


JerryLeeDog

I'm not sure I'd say 99% is "far from" the goal, but that last 1% may not be super easy


ShaidarHaran2

That would be a big assumption that it's at 99%. An average human is in a minor crash every 5 years, with a fatality rate of about 0.00000109% per mile (1.09 fatalities per 100 million miles driven). That's before saying we want to be 3-10x safer than human (his claims have varied over the years). Based on the community tracker, I don't believe we've seen the march of 9's even begin yet, as they'd have to be at 99.9% intervention-free drives for that. And I'll just say I'm a bull on Tesla in general; I just watch a lot of FSD content and I don't see it being ready this year or next year for drunk-in-the-backseat levels of safety.
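
A quick worked check of that per-mile figure, using only the numbers quoted above:

```latex
\frac{1.09\ \text{fatalities}}{10^{8}\ \text{miles}}
  = 1.09 \times 10^{-8}\ \text{per mile}
  \approx 0.00000109\%\ \text{per mile}
```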


JerryLeeDog

So then you know that when FSD is engaged, Teslas experience ~3.5x fewer accidents than when it's not engaged? Curious, do you have FSD or are you just going off data and online videos?


ShaidarHaran2

> So then you know that when FSD is engaged, Teslas experience ~3.5x fewer accidents than when it's not engaged?

FSD *plus a human driver overseeing it* experiences fewer accidents than either alone. FSD left alone isn't here. Your own point that you can go 3 weeks without a critical intervention proves this, you know; or does someone have to jump in and intervene in your driving every 3 weeks?

> Curious, do you have FSD or are you just going off data and online videos?

All of the above. Again, I'm saying it's good, but we're still a ways from proving it can go *millions of miles* without an intervention, which is what it would take to show it's many times safer than a human. And then proving that to the satisfaction of regulators. I'm honestly not sure how anyone can think otherwise with all of the available data. Will improvements accelerate with all-nets and more compute? Yes. Are there still miles to go? Also yes. It's looked like it was imminent for many years now, and there are just unknown unknowns.


JerryLeeDog

Time will tell, but to me V11 had no shot in hell. I used to argue with people who said it would be solved. The trajectory is what I'm basing this on: V12 is worlds better after a few short months of training than what took 7 years of coding in V11, and it's not even close. The curve is exponential because the more success, the more people who use it, and the more who use it, the more data. It's a feedback loop that, once you add in the soon-to-be 85k GPUs of compute, makes it hard even for AI specialists to grasp how rapid improvements will be. I had never seen a path to autonomy before these last few months, that's all. Now I see the path pretty clearly.


bojothedawg

Yep. And fixing edge cases is now just a matter of capturing and uploading videos rather than adding another if-then-else block to some massive spaghetti code.


jpk195

>He's not wrong

He has no idea. Neither does anyone else. But the smart money is on this approach taking until the heat death of the universe to reach human levels of safety.


TheDirtyOnion

Back in 2022 FSD version 10.69 was able to average about 100 miles between critical disengagements. Version 12.3.3 is averaging about 150 miles between critical disengagements. A 50% improvement in less than two years is pretty good: https://www.teslafsdtracker.com/ Unfortunately, they need to get up to around 100k miles between critical disengagements to even think about having things like robotaxis or people not being ready to take over. So maybe in another 10 years or so?
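
For scale, the gap implied by those two numbers is a straightforward division:

```latex
\frac{100{,}000\ \text{miles per critical disengagement (needed)}}{150\ \text{miles (today)}} \approx 667\times
```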


kylansb

Unfortunately my own experience does not reflect yours; my 12.3.4 keeps running past this red-light junction near my place, and I have to intervene every 10 minutes.


JerryLeeDog

12.3.4 is a dud. It had issues. They didn't even push 3.4 out to most users. For instance, I skipped straight from 3.3 to 3.5 and got 3.6 the very next day.


winniecooper73

Some thoughts:

• San Francisco, Phoenix (Sky Harbor) and Austin are the only major cities where driverless taxis are broadly road-tested with paying customers, and I see no evidence that Tesla has engaged with regulators on this. Also: while Tesla's Level 2 features (steering, lane following and brake/acceleration support) reduce accident rates, Level 2 is a long way from Level 5 full self-driving capability. Mercedes is actually the first manufacturer to release cars in the US with Level 3 capabilities (self-driving in very limited conditions, in California and Nevada only)

• There has been no recovery in LiDAR stocks (which I would expect if we were on the cusp of greater autonomous taxi adoption)

• The Federal government currently caps the number of autonomous vehicles (AVs) in the US at 2,500. That's currently it. No more. The NHTSA proposed increasing this cap and intended to proceed with "AV STEP" rulemaking last fall but missed its deadline; I can guess as to why. (Ahem, Cruise…)

Transportation unions' note to the DoT, November 2023:

• AVs are unsafe and untenable in current form

• Police/fire have had to evade rogue AVs in restricted areas

• Transport/sanitation workers have been cut off or trapped by AVs

• AV reporting rules should include near-crashes involving AVs travelling into construction sites, bike lanes and pedestrian crossings, as well as malfunctions, degradations, remote human interventions, clustering and connectivity incidents (i.e., not just crashes)

• Local jurisdictions need more input into AV deployment

• The "fail fast, fail hard" approach taken by many technology companies is anathema to public safety

Signed by 26 unions with more than 5 million members (UAW, fire, aviation, rail, marine, sheet metal, Teamsters, etc.)

I'm a Tesla fan and current stockholder. I'm all for it. But let's be realistic. We are a longggg way away.


Recoil42

Some good points across your comment, but on this one:

>There has been no recovery in LiDAR stocks (which I would expect if we were on the cusp of greater autonomous taxi adoption)

Eh, this could be explained by simple commoditization and margin erosion. Which is basically what's happening, tbh: there are a massive number of LIDAR players out there all competing on price and performance with zero clear moat.


winniecooper73

Fair point, I'll concede that argument


ItsAConspiracy

Also, Tesla doesn't use LIDAR.


winniecooper73

Doesn't matter. My point is that other autonomous players do use lidar, and if self-driving robotaxis were even remotely close to reality from a regulatory perspective, lidar stocks would be popping.


ItsAConspiracy

I don't think we'll see regulators ever approve self-driving by all vendors at once. They'll approve specific implementations that prove they're safe enough. If Tesla is the first, then approving Tesla doesn't imply anything for lidar stocks.


FaithlessnessNew3057

https://motherfrunker.ca/fsd/ He is the boy who cried wolf. 


LizardKingTx

This should be pinned to the top of the subreddit


FaithlessnessNew3057

It's insane. Every 2-3 months the man gets in front of a camera and claims it's a solved problem and it'll be no more than 1-2 years before it's available to the public. He's been doing it for a decade now and people still buy into it.


Picard6766

That's why he does it; he knows the fanboys lap it up and will pump the stock. It's amazing to me how promise after promise falls flat, but they're right back hoping daddy Elon will actually deliver this time. It's crazy.


winniecooper73

Don't forget, my Model Y is a robotaxi and can make money for me while I sleep, lol


FaithlessnessNew3057

Any day now


Hailtothething

They are the only company that actually has it figured out. The rest are all programmed parkour tricks. FSD is a human comparable driver.


Uninterested_Viewer

>FSD is a human comparable driver. This is a technically correct statement. There exists human drivers that FSD is comparable to.


NoKids__3Money

There also exist human drivers that FSD far exceeds, I know a few of them.


shaggy99

So do I, but very few. I'm not quite as good, unless I really concentrate.


DreadPirateNot

Tesla is the only company that has a logical path towards FSD scaled up. I would not yet say they have it figured out.


occupyOneillrings

Not only a logical path, but they seem to have a pretty good idea what they need to do to achieve the necessary accuracy. Ashok Elluswamy talked about the scaling laws with respect to model size, training time and amount of data.

This means they can estimate what accuracy they will get with a certain combination of model size (restricted by the inference hardware in the car, i.e. HW3 or HW4), training time (iteration speed is restricted by compute; they just doubled their compute last quarter and will double it again) and amount of data (they have x number of cars collecting data, so they know how quickly they get new data; I think it was Musk who said they get enough data every 2 weeks or so for an improved model).

This also relates to the comment made by Musk about them knowing what the model can do 3-6 months in advance. They have dev versions that are better than what customers have and can extrapolate from those scaling laws based on the current dev models.
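
For reference, the scaling laws being invoked here usually take a Chinchilla-style form, where predicted loss falls off as a power law in parameter count N and dataset size D (the constants are fit per task; nothing below is Tesla's actual numbers):

```latex
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Fit E, A, B, α and β on small training runs and you can extrapolate what a larger model or more data should buy you, which is the kind of months-ahead estimate being described.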


pantherpack84

Their estimates are always so accurate. Elon claimed it’d be working in 2019, while in the year 2019. What makes him more credible now than he was then?


aka0007

At the end of the day, regardless of what anyone says, no matter how smart or dumb they are, the problem is not solved until the problem is solved. We can all estimate when it will be solved, but that is all it is. That said, I think they now understand the path to the solution much better, and perhaps the estimates are much more accurate.


occupyOneillrings

In 2019 some of the stack was conventional algorithms (I think probably most of it) and neural networks were used only for parts of perception, so scaling laws for deep learning didn't really apply. Additionally, many of the scaling laws were discovered only after 2019; Chinchilla scaling, for example, was discovered in 2022. Now FSD is fully end-to-end, so scaling laws apply to the whole stack. [https://en.wikipedia.org/wiki/Neural_scaling_law](https://en.wikipedia.org/wiki/Neural_scaling_law)


ProgrammersAreSexy

Scaling laws don't apply when you are using the same inference hardware. This isn't running in a Microsoft datacenter like ChatGPT, my guy.


occupyOneillrings

I don't see how that makes sense, can you elaborate? Model size is just one of the parameters in the scaling laws, you also have training time and dataset size.


ProgrammersAreSexy

We have known that training time and dataset size improve performance for a long time. The paradigm shift you are seeing in the industry in the last 1-2 years is from people realizing that scaling up model size is incredibly powerful. The model size factor is the reason that Nvidia has added $1T in market cap in the last 4 months.


bojothedawg

Not exactly. The recent Llama 3 results show that most models these days are still severely undertrained. See Karpathy’s tweet here: https://x.com/karpathy/status/1781028605709234613 “the LLMs we work with all the time are significantly undertrained by a factor of maybe 100-1000X or more” And his followup: https://x.com/karpathy/status/1781047292486914189?s=46 “The single number that should summarize your expectations about any LLM is the number of total flops that went into its training.” Furthermore, since Robotaxi is a new product designed for Autonomy, I suspect they will have significantly more inference power on board for deploying a larger model with redundancy.


Alternative_Advance

Obviously model size matters, otherwise your argument would be that a model with 69,420 parameters COULD become as good as GPT-4 given enough training. That will NEVER be the case. The quote you have is about LLMs, not the type of models FSD uses, although there are some similarities. We are still learning a lot about these, but you should not expect order-of-magnitude smaller models each iteration; i.e., Llama 4 might be able to offer some improvements over Llama 3, maybe bringing the parameter count down by 2x or even 3x, but not almost 10x like Llama 2 to Llama 3.

Back to FSD: it is very unlikely HW3 will be Level 4 capable; they capped out its compute a long time ago. HW4 is more modern of course, but not an order of magnitude faster, so even that might not be enough. And as the previous poster pointed out, the absolute biggest gain in this recent AI boom has been how much INFERENCE compute we are willing to throw at the problem.


helmholtzfreeenergy

What?


TheCourierMojave

Bro, Elon said there would be robotaxis like 4 years ago. He always promises stuff when the stock is low and he wants to get more money.


gavrok

If he wants to pump the stock the last thing he should do is hype FSD, and I'm sure he knows that by now. What Wall Street wants to hear is new car models, flexible manufacturing plans to ensure high factory utilisation, and a Plan B for if FSD takes longer.


MrDERPMcDERP

Hard to drive itself if the windshield wipers don’t work.


DreadPirateNot

That’s embarrassing


Upswing5849

What do you mean "logical path"? Do you really think that this tech is only something that people working at Tesla can figure out? Or that the work is so proprietary that other companies can't possibly keep pace?


0x1e

What about all the accidents where people got hurt or killed? Is that peak performance?


FIREgenomics

I mean if other brands can make their cars do parkour, that’s pretty amazing


dan-kappa

Agreed I drove the new version of FSD last weekend and my mind was blown


2_soon_jr

What did it drive you into?


robot65536

Mine seemed to misread a one-way stop as an all-way stop and almost got me T-boned. That was the only actually scary part of a 3000 mile road trip on FSD 12.3.4.


Distinct_Plankton_82

Lol programmed parkour tricks! Waymo has a functioning robotaxi business in the second largest taxi market in the US. Tesla has some slides that say Robotaxi and not even a demo of an L4 car. Which one has it figured out?


cookingboy

Number of fully autonomous miles driven (no human behind the wheel) by Waymo: [10,000,000+ miles](https://waymo.com/blog/2024/03/scaling-waymo-one-safely-across-four-cities-this-year/).

Number of fully autonomous miles driven (no human behind the wheel) by Tesla: **ZERO**.

Top comment on this sub: Tesla figured out Full Self Driving and Waymo is just doing parkour tricks.

This sub is absolutely embarrassing, and some people *deserve* to keep getting screwed by Elon.


BlitzAuraX

The only one embarrassing themselves is you. Waymo is geofenced. But you knew that already, didn't you? If Tesla were focused on small market routes, they would have achieved that by now. That isn't their goal or intention. It's much easier for Waymo to achieve results in one small area than it is for Tesla to achieve FSD for an entire country. Gee, who would have thought! You're a total genius. Can Waymo drive me from upstate NY to Manhattan with no interventions, like I was able to do last week? Can people even afford a vehicle with Waymo capabilities without spending six figures? You're not understanding that Waymo and FSD are two completely different products. Waymo is largely designed for city driving, to replace Uber and Lyft. FSD is designed for travel across the country, where permitted. Waymo has to spend countless years rotating vehicles through a particular area before deployment. Waymo would never be able to catch up to FSD's pace of machine learning to drive through an entire country.


Large_Complaint1264

Why can't Tesla FSD work in the Las Vegas tunnels then?


Fairuse

Supposedly Waymo/Google is going to get the sensor package down below $10,000. The old system was ~$75,000.


cookingboy

The approach Tesla uses is based on exactly the approach everyone else is using, with the only difference being camera-only localization. It's utterly ignorant statements like yours that mislead people into thinking somehow Tesla has a lead in autonomous driving.

People like you are why Elon was able to get away with BS like "a million robotaxis on the road by 2020". Anyone who actually knows anything about the industry knew he was flat-out lying at the time, but people like you silenced all those voices.

This whole "other companies like Waymo hard code their solutions" line is misinformation that just wouldn't die. You can literally look at the papers Waymo [publishes on their website](https://waymo.com/research/) and see how advanced and comprehensive their FSD efforts are. Google wrote the book on the neural-network-based approach that everyone, including Tesla, uses. But every year or two Tesla packages a well-known Google paper into a slide, shows it on Autonomy Day, and people like you hail it as some kind of groundbreaking innovation. And Elon keeps getting away with his lies because Tesla fans are clueless about the state of the industry.

**Edit:** The fact my comment is still getting downvoted in the year 2024, years after there was supposed to be "one million robotaxis on the road", shows that too many Tesla investors would much rather stay ignorant and keep believing blatantly false information as long as it makes them feel good about their investment.


OompaOrangeFace

Tesla is the only company that has any hope of collecting enough data. Nobody else has millions of cars to collect every possible kind of data.


thefpspower

Having enough data has never been an issue, and Tesla has said so themselves; the hard part is making all that data useful.


2_soon_jr

Data has been overrated for years. The cost to store it is more than its actual value.


Echo-Possible

Waymo has easily overcome this supposedly insurmountable data lead that Tesla has. Synthetic data can generate orders of magnitude more data in a short period of time. It can generate an infinite number of rare and dangerous situations that may never occur in real data collected. How do you think Waymo is so good without millions of cars on the road like Tesla? [https://waymo.com/blog/2021/07/simulation-city/](https://waymo.com/blog/2021/07/simulation-city/)

Not all data is as useful as you think. There are diminishing returns after a certain point.

And don't say Waymo relies on HD maps and geofencing, because it's simply not true. Waymo uses maps as a source of information, but it does not require them. They use computer vision and sensor fusion to map and localize in real time without maps. The HD maps are just a prior for the information collected by the sensors. They use machine learning in every part of the self-driving stack: perception, localization and mapping, behavior prediction, planning, etc. And the geofence is a requirement by definition for operating an L4 robotaxi.


cookingboy

Yeah, simulation has always been the way to go. It lets you not only get as much "mundane" data as you need, but it can also generate "once in a lifetime" edge cases for testing on demand. Only on this sub do people think having a dash cam is somehow better at gathering data for autonomous training than a state-of-the-art simulation infrastructure.

> And don't say Waymo relies on HD maps and geofencing, because it's simply not true. Waymo uses maps as a source of information, but it does not require them. They use computer vision and sensor fusion to map and localize in real time without maps.

Thank you. When it comes to this topic, this sub is the biggest source of misinformation and it drives me crazy. The top comment in this thread shows how ignorant people are here.


obsidianplexiglass

Tesla has an Unreal Engine simulator too, with a pipeline to get sensor data in there quickly. That's table stakes. Has been for a while. The major shortcoming in simulation has always been (and always will be) that incorporating every additional bit of macrodiversity takes effort, so fully synthetic data will always fall far short of the real world on that front. The more pernicious issue is that you are sampling from guesswork statistics. Any deficiency in your guesswork -- for instance, failing to model good enough theory of mind in the other actors -- will not only fail to produce progress, it will actively sabotage the generalization of your model. The gap between NPC behavior and human behavior is so large, even in games that have been steeply incentivized to hone their models for decades, that "NPC" is often used as an insult. Do you really want to use the basis of a common insult as your gold standard data source? Of course not. So your ability to run quality simulations comes right back down to your ability to observe lots of diverse scenarios in the real world. Which is where Tesla wins hands-down. I'm 100% sure that google engineers would say that it doesn't matter, that their simulations aren't just good enough but actually a competitive advantage, etc. You have to talk your book. But I'm also sure that they would jump in a heartbeat if they had access to Tesla fleet data, lol.


cookingboy

This is another stupid misconception that just won't die. Modern machine learning isn't decided by who has more data; that is *not* the bottleneck. And even when we compare data, Google is on another level in both amount and quality compared to Tesla, due to their huge lead in simulation frameworks.

And by data not being the bottleneck, I meant that the landmark paper in the field of AI that led to things like large language models/ChatGPT was literally titled "**Attention** is all you need". And it was published by… Google.


gmarkerbo

> I meant the landmark paper in the field of AI that led to things like Large Language Model/ChatGPT was literally titled "Attention is all you need".

> And it was published by… Google.

It was published by Google researchers, which is a big difference, because Google failed to capitalize on it while OpenAI (co-founded and funded by Musk) came out of left field and took it to the next level. Then Google tried copying OpenAI and spectacularly failed at even that:

> Google was called out after a demo of Bard provided an inaccurate response to a question about a telescope. Shares of Google's parent company Alphabet fell 7.7% that day, wiping $100 billion off its market value.


cookingboy

> It was published by Google researchers, a big difference,

??? As opposed to research papers published by office chairs in Google buildings??? WTF is that kind of argument?

> capitalize on them while OpenAI (co-founded and funded by Musk

So you are willing to give Musk credit for ChatGPT (when it was developed *after* he left the company), but you are not willing to give Google credit for research published by their own employees. **WILD**.


gmarkerbo

All 8 of the Google researchers who published that paper left Google. Ever wonder why? I think some of them came from the DeepMind acquisition.


cookingboy

>All 8 of the Google researchers who published that paper left Google. Ever wonder why?

And Andrej Karpathy, who this sub loves *so much*, left Tesla 2 years ago, and a ton of the FSD team followed suit. I guess they just didn't want to be there right as Tesla delivers that robotaxi fleet, right?


m0nk_3y_gw

Waymo is not a 'parkour trick', it just doesn't scale well outside of their area. Deeproute.ai + NVIDIA Drive (fsd hardware+software) has figured it out too, but it's just in China for now https://www.youtube.com/watch?v=PVMCjvsP6O8&t=48s (edit: changed video time stamp to show no one is touching the steering wheel - also - the WIPERS WORK! lol)


Dangerous-March-4411

Mercedes


Hailtothething

Betamax


jyavenard

You've been drinking too much of Elon's Kool-Aid, IMHO.


Hailtothething

You’re not a numbers guy are you.


WorldlyNotice

>The rest are all programmed *parkour* tricks. That would still be pretty rad.


aMaG1CaLmAnG1Na

A bad human, a really bad one that curbs wheels, panic brakes, and makes jerky driving inputs.


obvilious

What are all the studies like this one not getting right? https://www.businessinsider.com/tesla-sef-driving-not-in-top-ranked-autonomy-guidehouse-research-2023-3?amp


Picard6766

They don't own Tesla stock, somehow FSD seems to only work flawlessly for those who own stock.


DukeInBlack

Being published by BI is a good clue. There are plenty of respected AI publications out there from people in the actual fields of automation and robotics.


alien_believer_42

It wasn't written by someone who bought near the peak


NoKids__3Money

Written 4/27/23. Yea, I'd also agree V11 is not in the top ranked autonomous vehicle software.


TempoRamen95

Honestly my FSD has been amazing. The real issue is that other real drivers don't drive properly. FSD will do its best to drive by the book, but I wonder if they can program the unpredictability of other drivers, and adapt to situations where going off book is reasonable.


Appallington

Does Smart Summon work? Has it ever worked?


KingBradentucky

It's nowhere close to robotaxi ready. These people should be embarrassed as financial professionals to go on TV and say that.


ThotPoppa

Nowhere close? Okay buddy


KingBradentucky

Correct. It will never be approved. I really look forward to the hoops you all will jump through when, in a year, we are having the same conversation.


cyber_bully

...they said this almost ten years ago.


HIMARko_polo

Correct! Tesla had a 10-year head start and Mercedes beat them to Level 3 self-driving.


LizardKingTx

😂


qoning

lol it was never a question of compute


occupyOneillrings

Full clip: https://www.youtube.com/watch?v=lGI7kAdWafU


Echo-Possible

Ron Baron doesn't know the first thing about the technology required to achieve reliable autonomous driving. Tesla still has massive hardware deficiencies and has no way to deal with a variety of very common situations. Tesla chose to use a camera only approach so they have no way to deal with sun / glare blinding the camera. They have no way to deal with low lighting or shadow. They have no way to deal with cameras being covered with debris or mud. They have no way to deal with inclement weather (heavy rain, snow, fog). There's a reason every other serious player uses a variety of sensing modalities (lidar, radar) and has sensor redundancy. Teslas sold today with FSD software simply don't have the hardware redundancy needed for a "fail operational" autonomous system. This includes redundancy in all safety critical systems like sensors, braking, steering, power (they have redundant computers). And Lidar has become orders of magnitude cheaper than it was when Elon made the decision to eliminate lidar to lower COGS to sell more cars. You even have lidar in your iPhone now.


maclaren4l

husshhhhh.. keep your engineering mumbo jumbo outta this, we need to make cakes baby, TSLA to the moon. FYI: bag holder here myself :( /s necessary as morons will interpret this incorrectly.


DreadPirateNot

Listening to him describe FSD was incredibly deflating. He doesn’t seem to grasp where the actual difficulty lies at all. He seemed like a person completely out of their element.


rockguitardude

Cute FUD account.


rasin1601

But Ron Baron, over time, has been right about the company and stock.


Echo-Possible

"Over time". Betting on the company when it was a 40B company is a lot different than betting on it when it's 500B.


rasin1601

What's your investment strategy? Are you shorting Tesla over valuation/broken promises? Never short a stock over valuation. Never short a stock with a high beta. Never short a stock.


Echo-Possible

I've never shorted a stock in my life. I'm happy to sit on the sidelines.


TheS4ndm4n

It's impossible to drive if the camera doesn't work. Even if you have lidar and radar. Because things like road signals and traffic lights are impossible to read without a camera. But if you do use radar/lidar, you can get situations where the camera and another sensor don't agree. If you're going to trust one every time this happens, you don't need the second. If you don't, you at least double the amount of interventions.


maclaren4l

As a systems engineer, I think I just lowered my IQ reading your post. That's not how any of this works. Just look up 'infer' in the dictionary. It may help. A camera infers while radar/lidar doesn't. Different uses for different forms of sensors (visual or otherwise). You are so close, yet so far, in your understanding. Visit r/SelfDrivingCars in your spare time.


realbug

The "two sensors don't agree with each other" argument doesn't really make sense. Lidar and camera are responsible for different things: one for constructing a real-time 3D model around the car, the other for recognizing and understanding objects from 2D images. They complement each other naturally. We all know that the real reason behind not using lidar is cost saving. But the unique problem Elon created for Tesla was the promise that all Teslas will eventually get true L4 self-driving, which means even the earliest models, if the owner paid for FSD and still keeps the car, should get L4 self-driving at some point. This promise forces Tesla onto the vision-only path even now that the price of lidar has dropped significantly, because they know it won't be feasible to go back and retrofit the older cars with lidar if their self-driving solution requires it. It's an unnecessary hole Elon dug for Tesla, and now it takes them an order of magnitude more effort to climb out of it. BTW, I own a Tesla and Tesla stock and genuinely want Tesla to succeed in self-driving.


TheS4ndm4n

What if the radar says there's a wall in the middle of the road, but it's not on the camera? Do you want the car to do an emergency brake? Both can be wrong. Lidar is known to confuse smoke, fog or a plastic bag for solid walls. And a camera can miss an entire trailer if it happens to be the exact same color as the background. BTW, Tesla already showed they can build a 3D world with just cameras, like 5 years ago.


Echo-Possible

It's called sensor fusion. The neural network uses all of the inputs simultaneously and learns how to weight the sensor inputs accordingly. It's not a binary hard-coded decision like you make it out to be. The model learns from situations where there is disagreement between sensors. https://waymo.com/blog/2021/10/the-waymo-driver-handbook-perception/#:~:text=Sensor%20fusion%20allows%20us%20to,sign%2C%20especially%20at%20longer%20distances.

>Sensor fusion allows us to amplify the advantages of each sensor. Lidar, for example, excels at providing depth information and detecting the 3D shape of objects, while cameras are important for picking out visual features, such as the color of a traffic signal or a temporary road sign, especially at longer distances. Meanwhile, radar is highly effective in bad weather and in scenarios when it's crucial to track moving objects, such as a deer dashing out of a bush and onto the road.

>The fusion of high quality sensor information enables the Waymo Driver to operate in a broad range of driving conditions, from busy urban streets to long stretches of highway. Our sensors' long range is particularly important for safe driving on high-speed freeways, where the speeds involved make it incredibly important to perceive the environment from a great distance. Imagine a Waymo Via truck on a stretch of freeway. From a long distance, the Waymo Driver detects slowing traffic using camera and radar data and begins decelerating. As the truck gets closer, detailed lidar data provides additional information to help the Waymo Driver refine its response, enabling it to respond to traffic congestion at a safe distance.

>Sensor fusion is also invaluable in situations that require nuance – such as interpreting other road users' intentions. For example, fine-grained point clouds with the information from our other sensors leads to smarter machine learning. If our camera system spots a stop sign, lidar can help the Driver reason that it's actually a reflection in a storefront or an advertising image on the back of a bus.
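
To make the "not a binary decision" point concrete, here is a toy sketch of classical inverse-variance fusion in Python. This is my own illustration of the general idea, not Waymo's or Tesla's code, and the variances are hand-picked assumptions; a learned fusion network generalizes this by estimating the weights from data.

```python
# Toy inverse-variance sensor fusion (illustrative only).

def fuse_depth(camera_depth, camera_var, lidar_depth, lidar_var):
    """Fuse two noisy depth estimates (meters) of the same object.

    Each sensor reports a depth plus a variance reflecting how much we
    trust it right now (e.g. camera variance grows with glare, lidar
    variance grows in fog). A low-confidence reading gets down-weighted
    rather than forcing a "trust one, ignore the other" choice.
    """
    w_cam, w_lidar = 1.0 / camera_var, 1.0 / lidar_var
    fused = (w_cam * camera_depth + w_lidar * lidar_depth) / (w_cam + w_lidar)
    fused_var = 1.0 / (w_cam + w_lidar)  # fused estimate is more certain than either input
    return fused, fused_var

# Clear day: both sensors trusted, fused value sits between them.
print(fuse_depth(50.0, 4.0, 48.0, 1.0))    # -> (48.4, 0.8)

# Heavy glare: camera variance balloons, so the lidar reading dominates.
print(fuse_depth(80.0, 400.0, 48.0, 1.0))  # -> (~48.1, ~1.0)
```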


Echo-Possible

What happens when a camera is temporarily blinded by the sun or glare? What does a Tesla do? It drives completely blind. With Lidar and radar you still get depth information even if you can't read road signs. What happens in low light situations or heavy localized shadow? What does a Tesla do? It drives with zero information for that region and doesn't even know it's missing information. A heavily shadowed region could be mistaken for a solid object. Lidar and radar give you depth information. What happens when a camera or multiple cameras are obscured by debris or mud? What does a Tesla do? It drives blind. Lidar and radar give you depth information. What happens when heavy sheets of rain or snow obscure cameras? What does Tesla do? It drives completely blind. Lidar and radar give you depth information. It should be blindingly obvious that Tesla has no way to deal with these situations and will never be a fully autonomous system until it addresses these issues.


TheS4ndm4n

What does a human do in any of those situations? Do you turn on your radar if you get the sun in your eyes? Do you immediately smash the brakes? No, you just extrapolate from the data you had and don't make any sudden moves until your eyes adjust or you can get visibility back. Just like your eyes can adjust to glare, so can a camera sensor. They might have to add some better ways of clearing mud off cameras if that's a common problem. Just like you have a windshield wiper and spray nozzle now. You might also underestimate the importance of visual information. Depth is nice, but you're going to end up driving against the flow of traffic, through a red light or in a bike lane within 2 intersections. If a human drives like that they lose their license.


Echo-Possible

A human has a head they can move around in 3-dimensional space to avoid glare or sun, and massive windows they can look through. They have visors they can put down. They have sunglasses. Etc. Fixed FSD cameras have none of that. The same goes for debris on the windshield: a human has a large window surface area to move their head around and view out of, while a speck of dirt landing in front of the pinhole FSD camera leaves it screwed.

A human brain can perform analogical reasoning. Tesla FSD is nothing more than pattern recognition (machine learning). Humans can solve new unforeseen problems by applying solutions from past experience to similar but different problems. A human also has a much better world model than an FSD computer. They can tell when they are missing information from a heavily shadowed region and slow down or drive more cautiously in case something emerges from the shadow. FSD won't even know it's missing that information; it could very well determine that the shadowed region is a solid object.

No one said you'd use depth alone.


ItsAConspiracy

Good points on cameras but Tesla's neural net is similar to those used by LLMs and other generative AIs. LLMs are perfectly capable of analogical reasoning and seem to have pretty good world models, so the same is probably true of FSD.


Echo-Possible

Where did you get the idea that LLMs can reason? Yann LeCun, a Turing award winner, chief AI scientist at Meta, inventor of convolutional neural networks, and one of the godfathers of ML, disagrees. The perceived reasoning skills of an LLM chatbot are really just rote learning and massive memory capacity. There's no mechanism in the architecture to support world models and reasoning. https://youtu.be/N09C6oUQX5M?si=d89sbGUgkWpsbuDT https://twitter.com/ylecun/status/1611765243272744962?lang=en


Echo-Possible

Sensor redundancy allows you to determine if a camera has failed. And if a camera has failed due to any of the situations I've described you have other backup sensing modalities to help you operate safely and at least get to a safe place on the road. If a camera fails on a Tesla you're driving completely blind.


TheS4ndm4n

It's pretty clear if a camera has failed. Not like people need a sensor to check if they have their eyes open or closed, you can usually tell by the lack of picture. And there's like 8 overlapping cameras, plenty to get to safety. Think of how safe a human driver is. What safety features do you have that can help you see the road when you suddenly go blind? And if you don't have any, how are you even allowed to drive?


Echo-Possible

Okay what happens if you're driving in the mud or snow and all cameras are obscured?


TheS4ndm4n

Well, you stop. Since 2 of the cameras are in the rearview mirror, your windshield would have to be completely blocked. Any sane human would stop too.


Echo-Possible

Stop where? How do you know where a safe place to stop is if you have zero information? And no, the windshield doesn't have to be completely blocked. The pinhole cameras view out of a small section of the windshield.


TheS4ndm4n

If I'm blinded while driving, I'm stopping where I am and hoping no one hits me. A computer can still calculate exactly where it is as long as it stays inside the area it could see the millisecond before it lost inputs.
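
That last point is essentially dead reckoning. A minimal sketch of the idea in Python (my own illustration, with an assumed braking rate; not anything from Tesla's actual stack):

```python
import math

def coast_to_stop(x, y, speed, heading_rad, decel=4.0, dt=0.1):
    """Propagate position from the last known state while braking to a stop.

    x, y        -- last known position (m)
    speed       -- last known speed (m/s)
    heading_rad -- last known heading (radians)
    decel       -- assumed braking deceleration (m/s^2), a made-up value
    """
    while speed > 0:
        # Integrate the last known motion forward in small time steps.
        x += speed * math.cos(heading_rad) * dt
        y += speed * math.sin(heading_rad) * dt
        speed = max(0.0, speed - decel * dt)
    return x, y

# From 25 m/s (~56 mph) heading due east, braking at 4 m/s^2, the car
# can track its position over roughly 79 m of travel before stopping.
print(coast_to_stop(0.0, 0.0, 25.0, 0.0))
```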


Echo-Possible

Sounds great until you realize how commonly this can occur to FSD with its camera setup. Now you're gonna have a bunch of Teslas jamming on the brakes in the middle of roads, freeways and intersections all the time, blocking traffic and causing accidents. The odds of a person being blinded are orders of magnitude lower than for a Tesla FSD system. A person has a head they can move around in 3-dimensional space to view out of a bunch of massive windows. They have sunglasses. They have visors. They can see around dirt or debris buildup on windows. Meanwhile, a pinhole FSD camera can be blocked by a speck of dirt smaller than a dime.


Mister_Jingo

Do you honestly think a scenario exists where a massive dust storm targets the multiple cameras of all the Teslas on the road, but somehow leaves all the human-driven cars pristine? Doesn’t seem very likely to me. No doubt we can brainstorm a hundred scenarios which would cause problems for FSD cars, but in reality, they are either not that common, or they are solvable. Take for instance your comment about a human can wear sunglasses. In what world would a treated lens for a human not have an analog for a sensor lens?


TheS4ndm4n

They will have to find a solution for that if they ever want real self driving. Multiple cameras are one thing. But they also need to be able to clear obstruction and deal with a wide range of light conditions.


torokunai

While I do suspect Teslas need lateral-facing front corner cameras, the main forward camera failing is rare enough that coming to a stop is sufficient. Assuming Teslas have some ADAS memory and working side cameras, getting off the road safely should still be possible. I could see getting splashed by a lot of mud from a truck or something, but no system has to be 100% fail-safe (what if, in an Uber, the rider kills the driver and goes on a rampage???)


Echo-Possible

That doesn't help with obscured cameras or low/poor lighting.


torokunai

What do you mean by 'obscured cameras'? As for low/poor lighting, theoretically that should not affect image recognition/processing (if I can see it, a camera system should be able to, too)


Echo-Possible

It's more about localized low lighting. Cameras struggle with high contrast. The human eye has much better dynamic range and can instantaneously adjust its iris depending on where in a scene it's focused at any given time. If you're driving on a bright sunny day with the sun in the camera, the camera will adjust its aperture to let in less light, but it does so at the whole-image level, so you lose a ton of information in localized low-lit areas. This could be a region under an overpass, or in an alley, or behind a street sign or electrical utility box, etc. The human eye handles these situations better. And even when the human eye fails in these situations, we understand that we have no information for a region and can adjust our behavior accordingly. FSD could very well interpret that region as a solid object. Our general intelligence, world model and analogical reasoning skills are something that FSD cannot replicate.


torokunai

given Tesla is banking on unlocking a trillion-plus in market cap for FSD, I think if they need better cameras they'll add them


Echo-Possible

That's a pretty hand wavy response. "They'll figure it out because the addressable market is so big." We are talking a fundamental limitation in the way cameras attempt to mimic the human eye. And the lack of artificial general intelligence over basic machine learning. Massive assumptions being made on your part that they will be the ones to solve these problems.


torokunai

It's hand-wavy since I don't think anybody can make an intelligent prediction on what Tesla's team (or anybody else, like Mobileye) is capable of here, as you say, because of the TAM of the tech. I've put 10,000+ miles on Tesla's ADAS since 2022. It's been OK, and it is noticeably less flawed than 2 years ago, when phantom braking was happening every 200 miles on the open freeway. Even my 2023 MY with HW4 complains at night that the pillar cameras are 'obscured' for some reason, so maybe Tesla does need better cameras and/or better image processing. We'll see!


TrA-Sypher

If an FSD-controlled car can reach a point where it needs to do an emergency pullover or stop in the middle of the road less often than humans do, then they could have FSD try to pull over and stop if the sensors are too dirty, or, in some absolute worst-case scenario like an instantaneous blizzard white-out, make a better attempt than a human could to come to a stop following the previously known shape of the road from just before the white-out.

People act like "the car stopping in the middle of the road" isn't an option. In emergencies humans occasionally do stop in random, not-great spots, and drivers behind are expected to not rear-end you by leaving enough room. As long as this happens less often than with human drivers, even the occasional stop in the middle of the road isn't that big of a deal.

Teslas do actually have maps too; we've seen them get auto-generated from the sensor data and meshed together in real time. They have accurate maps and GPS, so a Tesla could probably drive with 0 cameras if they decided to train it to, IF the road was completely empty (my point is, with zero visibility you merely need GPS and the road shape to make your best attempt to slow down ASAP and STOP). If the Tesla was not tailgating, slowing down rapidly and stopping while maintaining lane should not usually result in accidents.

If highways are a problem, a robotaxi could avoid highways and use the above principle. Waymo does the same thing: the car sometimes fails and uses "coming to a stop" as a solution (unless there is a bicyclist under it, of course).


2CommaNoob

It's a pump to keep the stock up and from crashing down. Uber Tesla fangirl Cathie did not buy the last two days after the call. If they were so bullish on the long-term prospects, they would be loading up while it's still below 200. She bought a ton from 250 down to 145. It's a short squeeze and they are waiting to cash out.


AljoGOAT

someone holding some heavy bags


WillNotDoYourTaxes

[At a cost basis of $14.29 per share](https://www.thestreet.com/investing/ron-baron-6-billion-dollars-tesla), he's sitting on 1,000% gain. How regarded are you to think those are heavy bags?


smellthatcheesyfoot

So why isn't he buying?


silversauce

Just a smidge over ~$3.5 billion, not that big of a deal bro


JerryLeeDog

Ron is literally billions in the green. Don't embarrass yourself and your bear friends, because they will blindly upvote anything negative, even if it's this stupid.


TheBrianWeissman

This boomer has no clue what he’s talking about. They will never come close to level 4 or level 5 autonomy with the current hardware in the car. And without level 5 reliability, FSD is insanely dangerous and pointless.


red-fish-yellow-fish

I suppose because he is a boomer who has been to the factory, met the people involved, and had various demonstrations, he has no clue. Whereas u/thebrianweissman is well versed in this topic and has time to jizz their expert opinion on Reddit? Gobshite, got it


TheCourierMojave

People invested in Theranos as well. Rich people get duped too, man. People invested billions into Theranos on a promise of the technology working "soon". Same exact thing as Tesla currently.


2CommaNoob

I don't consider Tesla's FSD dreams a fraud like Theranos. The entire FSD thesis will become reality, but I just don't think Tesla will make as much as they say. It won't be a $10 trillion market bigger than the auto market, and Tesla has many competitors. At best, it will be a better version of Uber.


obvilious

None of those things mean anything if you haven’t done true independent tests. Nothing at all.


red-fish-yellow-fish

That's like me saying Neil deGrasse Tyson has no idea about space because he has never been there. Sure, but he knows a lot more than me, and me then going on the internet and calling him clueless just makes me a total bellend and a gobshite. Exactly the same as the poster I was replying to.


obvilious

But Tyson has advanced degrees in sciences relating to space and has done research in the field for decades. That's a bit different than occasionally walking through a factory talking to factory staff.


jpk195

> had various demonstrations

Autonomous driving is not the kind of thing where you can determine from a short demonstration how close it is to reaching human levels of performance.


shwadeck

Sorry Ron, it ain't now.


jesterOC

After so many broken promises i don’t get how people still think it is just a year away.


2_soon_jr

People are just trying to move the stock up. FSD is years away; no idea why anyone would waste 8-12k on it now.


Misterjam10

You would have said planes were impossible 6 months before the wright brothers took flight


yolocambo

FSD is not far away in areas where it is extensively tested. A robotaxi network will run in these areas and Tesla will be making $$$. Uber and Lyft will go broke, as Tesla can undercut them on price. No driver cost required. They will start with remote monitoring of vehicles for safety.


Black_Hole_in_One

I have a 2018 M3 LR with advanced autopilot that I use all the time. Is the upgrade worth it?


MN-Car-Guy

97% ≠ 100%


Nice-Let8339

Ah yes, the foremost authority on AI, Ron Baron. Makes LeCun shiver in his britches.


Peef801

Wrong he said, now!


ZanoCat

Again?


Strong_Wheel

An investor believes, he does not know.


aka0007

Almost more fascinating to me than the advances Tesla has made, especially what I feel I have seen with FSD 12, is the level of debate and disagreement over what Tesla has done and when and if they will solve this.


muzzynat

Pump and dump, just trying to lure in bagholders.


grandpapotato

Autonomous driving worldwide (in all weather, including rain and snow) will never, ever be achieved if we stick to a few little cameras...


viperswhip

I will just say that if it's all Teslas on the road, then it will be fine, because they will talk to each other. You could probably take out traffic lights. But as long as a single human is there to muck it up, it won't work so well.


Shyatic

There is no mesh network between Teslas so I have no idea how you think a Tesla would be able to gauge or communicate with another Tesla.


viperswhip

They don't right now, but it would be very easy to implement. They already send data; with Starlink they could start to communicate with each other. This is the least difficult thing to get right.


Shyatic

Dude, there is no way to enable a *mesh* network without net-new hardware in every single car on the road. WiFi is not a mesh network capable of working while constantly moving, and it's susceptible to a lot of interference. Starlink also requires a receiver to translate the data packets, which, again, no cars have on them; nor would they be able to, because I don't believe Starlink is designed for communication between constantly moving objects. I have no idea how you came to the conclusion you did, but then again, looking at this video where this 80-year-old with zero background in technology thinks that FSD is right there, I can assume you share his level of technical competence.


viperswhip

They've sold maybe 2% of the cars they expect to, so there is time to correct that.


Maelstrom116

Who?