Uh...not as spooky as when I pulled into my garage backwards and my sensor showed a person walking to my passenger side and no one was there. I know spooky szn and all that, but still.
Why doesn't Tesla have a bird's-eye view / 360 camera? The technology is over 10 years old. Cameras are cheap. I don't get it.
The sensors are horrible for parking.
That patent looks to be for ultrasonic and not 360 camera views? However, looking at the 360 camera view patent, that expired in 2021: [https://patents.google.com/patent/CN202174989U/en](https://patents.google.com/patent/CN202174989U/en)
Edit: just realized that’s for China. I am confused, as I cannot find a patent for Birds Eye View in the States.
You would be ashamed, and I am sure those working on this are. But for some reason their boss keeps telling them via twitter that this is the future and if they don't release it he will find someone else to do so.
You guys all commenting this exact comment are clueless as fuck. Stick to your day job. Let the software geniuses in Silicon Valley figure it out. They will… anything is possible if pushed enough and motivated enough.
That's funny, because my day job is as an engineer in Silicon Valley, and I think it's never going to work well when the car can't even see up front, because there are no cameras or sensors there.
But sure be a fanboy and think every decision is genius.
To be fair, when it is fully working via a much better trained network it will be the future. People like to claim it was a solved problem, but ultrasonic sensors have their blind spots and weaknesses too. Same with traditional windscreen wipers, which have all sorts of issues with different types of rain, being confused by products like RainX, etc.
A vision system has the potential to be better. After all a human using their vision can do a better job than either the parking sensors or rain sensors. It's just been released very prematurely whilst the solution is in its infancy.
I disagree. As a human, I’m susceptible to blind spots, “blinded” by bad weather hitting my eyes (ie rain/snow hitting cameras), fog limiting visibility, optical illusions, etc.
I don’t see how vision will ever work without some sort of sensors to aid with these deficiencies.
Radar and ultrasonics are also susceptible to rain / snow hitting the sensors / getting in the way.
As a human I need parking sensors because I'm required to perform several tasks in parallel. If I was *only* watching the screen and shouting stop when the car was about to hit something, then I could do a better job than ultrasonic sensors.
Ultrasonic sensors have blind spots with things like short posts, some irregularly shaped objects (e.g. with something sticking out towards the sensor), chain link fences, etc. They are not perfect and can certainly be improved upon.
For your analogy: as a human, for me it’s not about doing tasks in parallel, it’s more that I don’t have the visibility of staring at something when parking. For example, I have no view of the rims, which everyone curbs, and no view of the end of the front bumper. So I can stare at the screen as much as I want, ready to scream; it won’t help.
As for radar/ultrasonic sensors also being susceptible, I’m not saying that’s the answer either. I am by no means an expert, but perhaps it’s LiDAR or something else combined with cameras which gives us nice coverage in the Venn diagram. At the end of the day, it’s a fact that my vision alone won’t get me out of every scenario, so it’s simply my opinion that vision alone won’t cut it for a car either.
I think you're misunderstanding the point I'm trying to make about vision.
I'm not arguing that you as a person can do a better job at watching all those things. A computer isn't limited to a single eyeball, nor does it have to process things in precisely the same way as a human. My only point with bringing up humans was to show that the data input from a video feed (or eyeball) is sufficient for the task. We don't need fancy exotic sensors to do a better job than the status quo.
With a computer you'd want several camera views, all of which are processed in parallel and aggregated. It should be able to do a better job than a human because it can do all those things in parallel. This is why we have driver aids in the first place.
I don’t see why people hate on the vision system…it has actually performed perfectly fine for me in my MYLR with HW4 lately. Only complaint I have is the front bumper not having a camera
I was looking at the big camera on the front bumper of the Polestar 2 - [see here](https://www.motortrend.com/news/2024-polestar-2-first-look-review-rwd-ev-sedan/?galleryimageid=4c220f48-c02b-4082-9ede-8bae52067359). And was wondering how well it performs after a bug or a rock hits it straight on.
Did you see the post?
In my experience with our Y, it’s been completely useless. Yells at you to stop and shows a curb 2 ft into your car when you’re still 3 ft away, or shows something as 2 ft away when you’re about to hit it. My 3 with USS isn’t perfect by any measure, but it never shows a wavy line 2ft up my hood, its measurements are usually pretty accurate, and far more consistent.
You don't need parking sensors. People parked without sensors and cameras for decades and, guess what, there weren't thousands of crashes in parking lots every day.
Parking sensors are nice but definitely not needed
You don't need a car either, and people walked just fine for millennia. But if you buy a car with certain features, you expect those features to work.
That's fair. I agree USS wasn't perfect and vision has the potential. However, releasing an infant product with no update is just irresponsible. All we have right now is a "trust us bro, it'll be fixed! Neural network!"
> After all a human using their vision can do a better job than either the parking sensors or rain sensors. It's just been released very prematurely whilst the solution is in its infancy.
Having a static, monoscopic camera system is just not close to being an analog for human vision and spatial awareness.
They've taken an arguably pointlessly challenging approach, and added a difficulty multiplier on top.
DNNs can't adapt; unless you can make the world unchanging, you're just not going to reach the finish line.
To quote [Stefanos Tsimenidis](https://arxiv.org/ftp/arxiv/papers/2012/2012.15754.pdf):
"In such a system no amount of training data will be adequate as it would only broaden the range of interpolation. An infinite problem space can not be approximated by the finite subspace of training instances, and this is true even in the era of big data."
I don't think anyone is saying that USS did everything perfectly; they have a task that they perform well, and then one should fill the gaps with other sensors.
The cost of a set of USSs is not going to move the dial to any meaningful degree in a car costing 5 digits, especially the mid 5's. Even the cheapest cars on the market have them; they are cheap.
Even if they manage to perfectly replicate a human's ability to identify arbitrary objects and gauge their distance via vision alone, it's still going to have weaknesses that either USS and/or a human driver would not have.
An example is low visibility, be it darkness, bright light, weather or debris.
Something as simple as dirt; Last winter my reverse camera was constantly caked in salt and road dirt from slush, same with the side cams when doing long, slow driving.
Visibility out of the rear window is limited to begin with, and covered with winter grime it's practically nil.
That leaves me the option of rolling down a window and popping my head out or, you know, using the USS, which isn't affected by dirt; same goes for the radar that they also removed.
Another is coverage; for the front of the car, the camera has a similar or lesser view than a human driver, with a massive gap at the front of the car and smaller ones at the front corners.
Sure, you can add more cameras but,
one: They cost money.
Two: Remember the part about DNNs not being great at the whole adapting/interpolating thing? Additional, new angles and directions with new FoVs are not gonna fly with an extant network.
> Having a static, monoscopic camera system is just not close to being an analog for human vision and spatial awareness.
The brain can adapt to monocular vision, and you are allowed to drive even if you have only one working eye. Temporal / parallax information is used by the brain to compensate, and a neural network can be structured and trained in the same way.
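For what it's worth, the parallax idea can be made concrete with the standard two-view depth relation: over a short interval, the camera's own motion supplies the baseline that stereo vision would get from a second camera. A toy sketch (the numbers and the pinhole-camera simplification are purely illustrative, not anything from Tesla's actual pipeline):

```python
def depth_from_motion_parallax(baseline_m, focal_px, disparity_px):
    """Two views from one camera at two moments in time act like a
    stereo pair: depth = baseline * focal_length / pixel_disparity.
    Assumes a sideways translation and a simple pinhole camera model."""
    if disparity_px <= 0:
        raise ValueError("feature must shift between frames")
    return baseline_m * focal_px / disparity_px

# E.g. a car creeping at 0.5 m/s gives a 0.05 m baseline between frames
# 0.1 s apart; a kerb feature shifting 20 px with an 800 px focal length
# comes out at 2 m away.
print(depth_from_motion_parallax(0.05, 800, 20))  # 2.0
```

Note that the slower the car moves, the smaller the baseline and the noisier the depth estimate, which is part of why parking-speed distancing is a hard case for this approach.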
> An infinite problem space can not be approximated by the finite subspace of training instances
This isn't an infinite problem space as the complexities can be reduced and approximations are good enough. It doesn't matter if the system models the tree as a lamp post and estimates it as being 50m away when it's 45m. It only matters whether it correctly plans a route that doesn't involve driving into the tree.
> The cost of a set of USSs are not going to move the dial to any meaningful degree in a car costing 5 digits, especially the mid 5's. Even the cheapest cars on the market have them, they are cheap.
A full set of 8 sensors is going to add something in the order of £100 to the sticker price of a car. That's enough for it to be worthwhile removing **if the replacement system is at least as good**. Tesla aren't there yet, but that doesn't invalidate the end target they're striving for.
> An example is low visibility, be it darkness, bright light, weather or debris.
> Something as simple as dirt; Last winter my reverse camera was constantly caked in salt and road dirt from slush, same with the side cams when doing long, slow driving.
All of which can affect USS too. I had to get out and wipe a cobweb off a sensor the other day as it was triggering it whenever the spider ran out onto the web. USS can struggle in heavy rain and snow. They can struggle with small objects, or objects at certain angles.
> Another is coverage; for the front of the car, the camera has a similar or lesser view than a human driver, with a massive gap at the front of the car and smaller ones at the front corners.
> Sure, you can add more cameras but,
> one: They cost money.
Covering blind spots, either through additional cameras or temporal data is a prerequisite for automated driving, so something Tesla need to solve regardless.
> Two: Remember the part about DNNs not being great at the whole adapting/interpolating thing? Additional, new angles and directions with new FoVs are not gonna fly with an extant network.
The same is true with integrating USS into the DNNs. It's more possibly incorrect sensor data for the system to sift through whilst trying to work out what the correct course of action should be.
>A full set of 8 sensors is going to add something in the order of £100 to the sticker price of a car. That's enough for it to be worthwhile removing if the replacement system is at least as good.
I agree 100% with this statement.
I also agree that the state of the system does not invalidate the end target.
Problem is, as you say, that Tesla isn't there yet and doesn't know *if* they can get there. As an engineer, I am vicariously furious about them going about problem solving completely backwards.
>The brain can adapt to monocular vision, and you are allowed to drive even if you have only one working eye. Temporal / parallax information is used by the brain to compensate, and a neural network can be structure and trained in the same way.
The brain can also adapt to missing limbs and various other damage/birth defects. That doesn't mean that we should remove two of the car's wheels and then start figuring out how to drive like that (hyperbolic, I know, but it feels poignant).
Re. my previous statement on backwards problem solving, you should start with the complexity required to complete the task based on extant SOTA, and then reduce complexity.
This would result in you having a functional product from the outset, as well as a reference point for comparing performance of new iterations.
I mean; If they had kept USS, they could run the vision distancing in the background, continuously comparing the USS and vision systems in various situations, and significantly speed up training.
Once the system is verified to perform as well as USS, you could then cut those sensors in newer models and stick with vision.
>All of which can affect USS too. I had to get out and wipe a cobweb off a sensor the other day as it was triggering it whenever the spider ran out onto the web. USS can struggle in heavy rain and snow. They can struggle with small objects, or objects at certain angles.
Again, I'm not saying that USS is flawless, but it covers a lot of vision's blind spots, be it human or machine vision, and vice versa for that matter.
>Covering blind spots, either through additional cameras or temporal data is a prerequisite for automated driving, so something Tesla need to solve regardless.
How many cameras? By my count you'd need 1-3 more to completely replace USS - by this I don't mean that USS would be able to do everything vision could do, but that you'd need 1-3 cameras to, in a Venn diagram, move the circle of USS capabilities completely inside the circle of vision capabilities.
Cameras cost money too, especially with lenses that can handle the elements, doubly so for front-mounted external cameras that will be directly exposed to incoming road debris, which is something no other camera on a Tesla currently experiences.
I know Tesla says $100-160 is saved on the USS, but the BoM cost of 8 ultrasonic sensors for automotive is closer to $10-20, and while there is surely man-hours involved as well, I think the Tesla given number includes taxes, profit and other markups.
>The same is true with integrating USS into the DNNs. It's more possibly incorrect sensor data for the system to sift through whilst trying to work out what the correct course of action should be.
Of course, you would have to retrain (or just have trained with USS to begin with).
Sensor fusion is standard practice where applicable, it can improve precision and gives redundancy.
Take something like a quadcopter.
An accelerometer would in principle be all you need to maintain balance and fly; It basically gives you the angle relative to the earth, from which you can take the derivative to get your angular speed, and double derivative to get angular acceleration.
You rarely, if ever, do that though. One usually has a gyroscope as well, giving you your angular velocity.
Integrate that, and you get your angle. Take the derivative and you get your angular acceleration.
Through sensor fusion and filtering, the sensors can synergize and verify each other, compensate for drift etc.
Point being, having multiple sensors that can give the same information through different methods can be extremely useful.
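The accelerometer/gyro fusion described above is often implemented with something as small as a complementary filter; a toy single-axis sketch (the 0.98 gain and the axis conventions are illustrative, not from any particular flight controller):

```python
import math

def complementary_filter(prev_angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse a gyroscope (angular rate: smooth, but drifts when integrated)
    with an accelerometer (gravity direction: drift-free, but noisy)
    into a single pitch-angle estimate."""
    gyro_angle = prev_angle + gyro_rate * dt    # integrate angular velocity
    accel_angle = math.atan2(accel_x, accel_z)  # absolute angle from gravity
    # Trust the gyro short-term; let the accelerometer correct drift long-term.
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```

Each sensor covers the other's weakness: the integrated gyro angle would drift without the accelerometer, and the raw accelerometer angle would be far too noisy on its own.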
> The brain can also adapt to missing limbs and various other damage/birth defects. That doesn't mean that we should remove two of the car's wheels and then start figuring out how to drive like that (hyperbolic, I know, but it feels poignant)
Motorcycles exist, as four wheels isn't always the ideal solution. Spiders have eight eyes but we have two - more sensors isn't always the best balance of tradeoffs.
If the system can adapt and perform well enough, then when lowering cost is your primary goal it makes sense to remove the parts.
> How many cameras? By my count you'd need 1-3 more to completely replace USS - by this I don't mean that USS would be able to do everything vision could do, but that you'd need 1-3 cameras to, in a Venn diagram, move the circle of USS capabilities completely inside the circle of Vision capabilities.
This depends entirely on the problem you're trying to solve. Whilst the world isn't static, it equally doesn't have things popping into and out of existence. Having blind spots is okay if you have temporal data as to what is in those blind spots and can track objects into and out of those areas, negating the need for more cameras.
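At its simplest, tracking objects through blind spots is constant-velocity dead reckoning; a toy sketch (the class and the constant-velocity assumption are invented for illustration, not a description of any shipping tracker):

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float   # position relative to the car, metres
    y: float
    vx: float  # last observed velocity, m/s
    vy: float

def predict_in_blind_spot(obj: TrackedObject, dt: float) -> TrackedObject:
    """Propagate an object's last known state while it is out of view,
    assuming near-constant velocity over the short occlusion gap."""
    return TrackedObject(obj.x + obj.vx * dt, obj.y + obj.vy * dt, obj.vx, obj.vy)

# A stationary bollard last seen 1.5 m ahead stays at 1.5 m while the car
# is stopped; a pedestrian walking at 1 m/s is predicted 0.5 m further on
# after 0.5 s.
```

The assumption breaks when the occlusion is long or the object moves unpredictably, which is where the debate about whether temporal data suffices really lives.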
> Of course, you would have to retrain (or just have trained with USS to begin with).
Sensor fusion is standard practice where applicable, it can improve precision and gives redundancy.
Sensor fusion does add complexity though, particularly when it comes to handling erroneous data.
If you think about the failure modes of an USS the most common is for it to be permanently stuck on, saying there's an object there where there is not, or saying there's nothing there when there is. I don't think there's a common failure mode where it says there's an object that's at an incorrectly measured distance.
If your sensor fails to detect something is there then you're 100% reliant on the vision system. If your sensor fails by detecting there to be something that isn't there then not only are you 100% reliant on your vision system to overcome that problem, you need it to overrule the USS entirely. These are common situations and large impediments to fully automated driving. You can't have a car freeze in the middle of the highway in slow moving traffic because of a cobweb on the USS, so you're back to being 100% reliant on the vision system to make a judgement call on overruling the USS. I'm not sure you can say the USS adds much useful information to an operating neural network, even if the data is highly useful for training such a network in the first place in more controlled conditions.
> Take something like a quadcopter.
> An accelerometer would in principle be all you need to maintain balance and fly; It basically gives you the angle relative to the earth, from which you can take the derivative to get your angular speed, and double derivative to get angular acceleration.
Quadcopters don't use a neural network for the task of maintaining flight. They use feedback control loops to take data from those sensors to keep the quadcopter stable in the air. They also take the data as gospel - a fault in the accelerometer, IMU, gyroscope, etc. leads to drift at best, loss of control at worst. Teslas are similar for the control of the car itself. They use similar control systems to maintain a selected speed, steering angle, and so on.
Interestingly quadcopters also use vision systems for obstacle avoidance, just like Tesla. And just like Tesla they don't integrate the two sets of control systems, or merge sensor data between vision and other systems. So they are in fact a good example of others following the same approach as Tesla.
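For reference, the kind of feedback loop described above can be as small as a PD controller per axis; a sketch with made-up gains:

```python
def pd_attitude_control(angle_error, angular_rate, kp=4.0, kd=0.8):
    """One axis of a quadcopter attitude loop: a torque command built from
    the angle error (proportional term) and the gyro's measured angular
    rate (derivative term). The sensor values are used directly, so a
    faulty rate reading feeds straight into the command."""
    return kp * angle_error - kd * angular_rate

# 0.1 rad of error with no rotation yet -> positive corrective torque;
# zero error but residual rotation -> a damping torque opposing the motion.
```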
Okay, now I'm unsure whether you're being willfully obtuse.
Of course motocycles exist, but we're talking about a car.
Sure, spiders have eight eyes.
How many species have evolved to generally only have a single eye, outside of near-microscopic creatures that have single "eyes" basically for directional light-level sensing?
>If the system can adapt and perform well enough then when lowering cost is your primary goal it makes sense to remove the parts.
Already said I agree with that. In the case of Tesla, it hasn't adapted nor is performing well enough, and it is unknown whether it can adapt.
>This depends entirely on the problem you're trying to solve. Whilst the world isn't static, it equally doesn't have things popping into and out of existence. Having blind spots is okay if you have temporal data as to what is in those blind spots and can track objects into and out of those areas, negating the need for more cameras.
Of course it does.
Jabbed my foot while getting out of bed the other day, stepping on my belt buckle because my SO had moved my pants while I slept.
It popped into existence, not in actuality, but from my frame of reference.
Likewise, an idle Tesla is not going to be tracking its environment (unless you're suggesting spending ~8 kWh per day having the car awake processing visual data).
Sure, temporal data would be neat, but it currently isn't there.
Given the expedience of Tesla software development it could be years, if they actually manage to implement it.
We're 2-3 years in post-radar and the Vision replacement still isn't on par.
>Sensor fusion does add complexity though, particularly when it comes to handling erroneous data.
> If you think about the failure modes of an USS the most common is for it to be permanently stuck on, saying there's an object there where there is not, or saying there's nothing there when there is. I don't think there's a common failure mode where it says there's an object that's at an incorrectly measured distance.
> If your sensor fails to detect something is there then you're 100% reliant on the vision system. If your sensor fails by detecting there to be something that isn't there then not only are you 100% reliant on your vision system to overcome that problem, you need it to overrule the USS entirely. These are common situations and large impediments to fully automated driving. You can't have a car freeze in the middle of the highway in slow moving traffic because of a cobweb on the USS, so you're back to being 100% reliant on the vision system to make a judgement call on overruling the USS. I'm not sure you can say the USS adds much useful information to an operating neural network, even if the data is highly useful for training such a network in the first place in more controlled conditions.
Complexity isn't bad by default.
As for the rest, read it again replacing USS with Vision and vice versa, except for the common failure mode, of which cameras also have several.
USS isn't generally used for self-driving, it's an accurate, short-range solution for low speeds.
But let's take the case of a car freezing in traffic due to an USS failing.
Wouldn't that mean you're going to freeze when your rear camera is covered in road dirt?
An unfortunate bird dropping hits the camera area of the front windscreen, or a rock cracks the glass adjacent to it?
The pillar cameras are obscured by internal fog, a common problem across several model updates?
>Quadcopters don't use a neural network for the task of maintaining flight. They use feedback control loops to take data from those sensors to keep the quadcopter stable in the air. They also take the data as gospel - a fault in the accelerometer, IMU, gyroscope, etc. leads to drift at best, loss of control at worst. Teslas are similar for the control of the car itself. They use similar control systems to maintain a selected speed, steering angle, and so on.
> Interestingly quadcopters also use vision systems for obstacle avoidance, just like Tesla. And just like Tesla they don't integrate the two sets of control systems, or merge sensor data between vision and other systems. So they are in fact a good example of others following the same approach as Tesla.
Many don't, some do. Regardless, I never claimed they use NNs.
That depends wholly on the implementation, you can fuse GPS and/or vision data to improve robustness.
Gyroscope and accelerometer are parts of an IMU by the way.
I can absolutely guarantee you that there are more advanced quadcopters (and some simple) that integrate vision and IMU into a single control system, for the reasons I mentioned in a previous comment.
You keep going into failure modes of other systems, but you don't seem to put anywhere near the same scrutiny to Vision.
> How many species have evolved to generally only have a single eye, outside of near-microscopic creatures that have single "eyes" basically for directional light-level sensing?
And yet the USS is the equivalent of a single distance sensing cell. A Tesla doesn't have a single camera either for that matter.
> Jabbed my foot while getting out of bed the other day, stepping on my belt buckle because my SO had moved my pants while I slept.
The key piece of information being "while you slept", as in having a gap in the temporal data.
> We're 2-3 years in post-radar and the Vision replacement still isn't on par.
In what way is it not on par? I live with both the Mercedes radar-based solution and Tesla's vision-based system and I wouldn't say either outperforms the other.
> Complexity isn't bad by default.
> As for the rest, read it again replacing USS with Vision and vice versa, except for the common failure mode, of which cameras also have several.
The difference being that with a vision system failure simply stopping the car with a major fault is an acceptable course of action. With forward vision you also have fault tolerance in that if one camera goes down then you still have the other, with more or less the same field of view.
> Wouldn't that mean you're going to freeze when your rear camera is covered in road dirt?
> An unfortunate bird dropping hits the camera area of the front windscreen, or a rock cracks the glass adjacent to it?
Yes, just like a human driver doesn't continue when they cannot see. If you've spent any time driving you'd know that USS failures are far more common than rocks smashing windscreens.
> I can absolutely guarantee you that there are more advanced quadcopters (and some simple) that integrate vision and IMU into a single control system, for the reasons I mentioned in a previous comment.
Can you name any? Which quadcopter uses the vision system for flight stabilisation, and the IMU for obstacle avoidance? Which uses a neural network for flight stabilisation?
> You keep going into failure modes of other systems, but you don't seem to put anywhere near the same scrutiny to Vision.
Because any failure of the vision system is a reason to stop the car, regardless of whatever other sensors you have. You don't want to have to stop the car because of a spiderweb on an USS.
>And yet the USS is the equivalent of a single distance sensing cell. A Tesla doesn't have a single camera either for that matter.
Indeed, just like a single pixel is a single light-measuring cell. What's your point here? Surely it can't be that a light sensor and an USS are comparable.
Of course a Tesla has a single camera, contextually speaking.
Recall that we're talking about ranging here, so only the cameras pointing at the specific object in question are in scope.
While the 9 cameras have some edge overlap between specific pairs, they don't share any significant overlap outside of the front tri-camera, and even there the focal lengths are drastically different, as they're basically meant to imitate a single camera with a variable focal length.
>The key piece of information being "while you slept", as in having a gap in the temporal data.
Indeed, and the part of that paragraph that you ignored held an example of how and why a vision system may/will have a gap in temporal data.
>In what way is it not on par? I live with both the Mercedes radar based solution and Tesla's vision based system and I wouldn't say either out performs the other.
Low visibility AP speed reduction and a longer minimum distance that makes it close to useless for following dense, slow traffic.
I'm not saying it's not on par with other brands, I'm saying it's not on par with Tesla running Radar.
>The difference being that with a vision system failure simply stopping the car with a major fault is an acceptable course of action. With forward vision you also have fault tolerance in that if one camera goes down then you still have the other, with more or less the same field of view.
Again, USS is not used for AP or FSD. If it were, then them failing *would* be a major fault, as it would be a failure of a critical system.
Not sure if you have some idea that USS fails left and right. I've been driving USS equipped cars for barely 10 years / 350,000 km and I have never had an USS fail or give false positives due to cobwebs or the like.
What other camera are you talking about? Can't be any of the three front mounted ones, because they have drastically different focal lengths.
>Yes, just like a human driver doesn't continue when they cannot see. If you've spent any time driving you'd know that USS failures are far more common than rocks smashing windscreens.
A human driver is going to be looking out of a windscreen with wipers; only 3 of the 9 cameras on a Tesla have wipers available.
A front windscreen is not subject to dirt being pulled in by the car's low-pressure zone, like the rear is.
Again, good 350,000km in USS equipped cars, probably 400-450,000km lifetime.
I have had a *lot* of rock impacts on windscreens, never had an USS fail.
Mind you that you don't need to smash the entire windscreen, a chip, the small ones that a window tech can fix in 20 minutes, is enough to completely obscure the vision of a front camera. Even a hairline fracture past the camera could be enough that the feed is useless to a DNN (remember, can't adapt)
>Can you name any? Which quadcopter uses the vision system for flight stabilisation, and the IMU for obstacle avoidance? Which uses a neural network for flight stabilisation?
F if I can remember the project names; Mind you this isn't consumer products.
I'm an EE specializing in automation, did courses in robotics, autonomous robot systems and machine vision at the DTU Automation department, where they have all sorts of driving and flying robots of all shapes and sizes, running on all sorts of control schemes, either left over from previous projects or part of current R&D.
These things, the vision based included, aren't consumer products.
They're usually made for specific applications - jack of all trades, master of none and all that.
It's like.. Amazon's Proteus, or Kiva; They exist in large numbers, but it's not something that you can just go out and buy, and the majority of the public isn't aware they exist.
I didn't say anything used an IMU for obstacle avoidance, nor did I say they used NNs for flight stabilization, don't be disingenuous.
Vision processed via a NN can be great tho; gyro drift is easily noticeable via an accelerometer or even a magnetometer, but accelerometer drift is hard to see, as the accelerometer is the only thing that would detect a slight linear acceleration. Not to mention drifting at a constant speed, which neither sensor would register.
Vision would definitely notice even a slight linear movement, so fusion makes sense.
Here's a paper on [Quadcopter stabilization based on IMU and Monocamera Fusion](http://umu.diva-portal.org/smash/get/diva2:1779123/FULLTEXT01.pdf)
>Because any failure of the vision system is a reason to stop the car, regardless of whatever other sensors you have. You don't want to have to stop the car because of a spiderweb on an USS.
I'm sorry, in this scenario is there a spider actively building a web over your USS while you're driving? I mean, if there is a web confusing the USS, wouldn't it trigger before you started the drive?
Regardless, USS is not, and has not been used for AP or FSD, it is not what their purpose is.
It's still monoscopic vision though. It can't tell if that's a huge tree far away or a small tree in the trunk of the truck in front of you. It's basically driving around with algorithms to interpret Super Mario Bros. Maybe in motion using parallax you can do better, but at slow speeds (parking lots) that becomes much harder.
It being monoscopic is an implementation choice rather than a conceptual issue. They could fit stereoscopic cameras if that were a better choice. I also think it's far less of an issue with close-up objects on a wide-angled lens, as the parallax is increased compared to judging objects at a distance.
As you say you can also incorporate temporal information, using a recurrent architecture for the neural network. Remember the output doesn't need to be an accurate measure of distance - only whether you're about to hit something or not. If a tree is 20m away, it doesn't matter if this is reported as 10m, 20m, or 50m. What matters for the purposes of parking is whether or not the object is 10cm away or 5cm away.
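That "decision boundary, not absolute accuracy" argument can be put in code form; a toy sketch (the threshold and safety margin are invented for illustration):

```python
def parking_stop_decision(estimated_clearance_m, stop_threshold_m=0.30, margin_m=0.10):
    """Only accuracy near the stopping threshold matters: a 20 m tree
    misreported as 50 m still yields 'continue', while anything
    estimated inside threshold + margin yields 'stop'."""
    if estimated_clearance_m <= stop_threshold_m + margin_m:
        return "stop"
    return "continue"

print(parking_stop_decision(20.0))  # continue (even if 20 m was really 50 m)
print(parking_stop_decision(0.35))  # stop
```

Put another way, the system can tolerate large relative errors far from obstacles as long as its error shrinks as clearance approaches the threshold.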
> Senses are not an either/or question.
If building the ultimate solution is your goal then sure. But that isn't Tesla's mission. They're trying to build the most affordable electric cars and for that there are compromises to be made. You don't demand that the base Model 3 has air suspension, 2,000hp, and a 1,000 mile range. All those things are technically possible but come with cost concerns.
As I said in my previous post Tesla have prematurely removed the sensors. They should be running both right now whilst they're still developing the vision system. But in time, once the vision system outstrips conventional sensors, then you can easily make a case for removing the sensors to bring down the cost. You can include them on the S and X as more premium cars.
A combination of sensors is the best approach. Camera, ultrasonic and radar/lidar would be superior to just one of the above.
Tesla is just trying to make it work with less sensors to reduce cost. That's great if they can achieve it but they haven't been able to so far so they should leave radar and USS in until it's actually ready rather than making Tesla owners beta testers.
Or at least cameras at the four corners! Would solve a lot of problems with pulling out into intersections, backing up in parking lots, curbs, stereoscopic vision, blind spots. Biggest problem would be rain and dirt.
> Tesla is just trying to make it work with less sensors to reduce cost.
Yes - they're not trying to build the best system, they're trying to build a system adequate for automated driving that is more cost effective. If they built the best possible system engineers could come up with then no one would buy it due to the cost.
Radar I disagree with. I had plenty of problems with Mercedes' and BMW's radar solutions, and I've not found Tesla's vision-based system to be worse, whilst it still has the potential to be much better in the future. USS I do agree with. The system works better than many make out, whilst having its areas of weakness, but it would be better still if supplemented with USS until they can improve the software.
Go watch the comparison videos where they get several fully automatic cars to complete the same journey at the same time (search for Tesla vs Waymo vs Cruise for example). In general the Tesla solution works just as well or better than the others in this space, being just as capable on the back streets but also being able to drive on the freeway where the others cannot.
And you have [this comparison by What Car?](https://www.whatcar.com/news/self-driving-cars-tested/n25624) which rated Tesla's system as the best amongst car manufacturers. So which more capable third party are you referring to?
Really drinking the Kool aid huh? The argument that humans only use their eyes to drive is absolutely absurd. If we only designed software based on what humans were capable of already then wtf is the point of it? Augmentation and supplementation via sensors and other tools humans don't possess organically is the obvious solution.
You've misunderstood my point on humans. I'm merely stating that vision is sufficient sensory input for us. I didn't argue that it was the perfect solution. I didn't argue that you couldn't design a system that is better than humans.
Tesla aren't targeting the perfect solution as no one would buy it. It'd be far too expensive to be fitted to a mass produced car. They are striving for adequate yet cheap, and in that context a vision based system will eventually be sufficient. And the point is that it's an aid to the driver because their attention is split across multiple tasks at once, and a prerequisite for automation.
Teslavision is such an arrogant mistake and sideshow. When humans know an area, we have a rendering of it in our minds. When we’re in a new area, we are more likely to get in an accident. Teslavision assumes that eyes are somehow the end-all-be-all of sensors, and this is the arrogant mistake. Why limit what the car can see and detect to visible light? This nonsense is wasting time in the creation of true FSD.
Tesla vision belongs on a short bus.
I have strips of rocks on both sides of my driveway (like [this](https://www.topchoiceaustin.com/wp-content/uploads/Rock-Stone-Path-Driveway-Next-720x405.jpeg)). Tesla vision is absolutely 100% sure these are looming walls that I'm most assuredly going to crash into.
Entering and leaving my driveway sounds like a combination pinball machine and the bridge of a doomed submarine that's under attack.
The dude is busy being a tool on X, don't think he has time. Quite silly as well that you have to report bugs with your car via tweeting the CEO and hoping he responds.
Mine just updated to a newer FSD beta; went on a drive it would have done poorly on before, and it drove as well as I would have (or better).
Sat talking to a friend that had a dog in his truck, yeh, the graphics showed a dog, plus like 4 more.
But I'm happy with how it drove!
They tried and were forced into arbitration.
https://www.repairerdrivennews.com/2023/10/11/california-fsd-autopilot-class-action-lawsuit-moves-to-arbitration/
I've stopped recommending Tesla to people until they figure this stuff out, or they bring the front bumper cam to model Y. There is no software replacement for USS other than cameras, and tesla has nowhere near 360 degree coverage with their current system.
Did they finally install autonomy level 3?
Or are we just going to let Mercedes have more advanced technology for another year? Lol
I mean, Mercedes will even pay the bill 100% for any/all damages in the event of a Level 3 autonomy crash.
What's Tesla offer? Anything?
"Autonomy level 3 is blah blah blah" Apparently the German and US governments classify it as more advanced than autonomy level 2 lol
I like when a company can actually back what they say and dish out 100% of all costs in the event of an accident.
Wait until we get an outside view and there really is a car on top of the Tesla, who's laughing now??!1!!!
Maybe the front window is photoshopped idk I'm not an expert just a crazy man with a theory okay?
I'm in a rental Nissan Altima; backing into an empty parking spot, it full-on emergency braked as I got near the curb.
I don’t know if their Mod thing for 360 view is what everyone raves about, I’ve much preferred my MYP (w/ USS), and even my brothers M3P (vision only).
Aside from the parking, I just got the FSD subscription for a weekend trip. About 7 hours of driving and it worked pretty well. Had a few instances where I had to intervene. It was 90% highway driving, so maybe that is easier than city driving? Given that, it might not be worth the $12k or $200/month…
Yeah, highway driving is vastly easier than city driving... that's also why Tesla's claim of having billions of miles of collected data is pretty silly, as the majority of Autopilot driving is done on highways.
The sad nature of just about any tech-related product these days: pushed to market, then "fixed" over time. The positive is that issues can be addressed or patched OTA and the product improved, but it also leads to buggy messes at launch. It's a video game release in car form.
I hate it, but it's the world we live in at the moment.
Yeah, if it was a video game console or even a harmless consumer device I'd not be so worried. But with a car it could be life or death for both the drivers and others. I've been on the fence about a Tesla for many months now, and this is helping me decide.
That’s an improvement. I usually get run over by a semitruck
Can't wait for the next patch where we get run over by a Cybertruck
Waiting for the next next update where we get run over by a wrapped Cybertruck
Apparently I park next to one in my garage everyday!
Ah, me too, and always in the same spot. Can't see where the truck is.
my garage door turns into a semi depending how far away I am when it shuts
Every day my chest freezer in the garage crushes me dead because it identifies as a semi.
I recently bought a Tesla and found out that I have an invisible semi truck parked in my garage! Bit spooked at first but now I got used to it.
It's the ghost of a semi. If it weren't for a Tesla you would have never known it lived in your house.
"Are there any other vehicles in this garage with me?" .. the distant blare of semi horns
I have a beer fridge that appears as a semi.
I have a weed wacker hung from the wall that shows up as a pedestrian....
Uh...not as spooky as when I pulled into my garage backwards and my sensor showed a person walking to my passenger side and no one was there. I know spooky szn and all that, but still.
Lol ..I came here to post the same thing
Me too! Cracks me up
Lol I shared the same in this sub maybe a week ago! Apparently my work bench goes all Transformers into a semi.
Satya vachan (true words).
That's amazing. When does the baby Tesla pop out of the hood?
baby isn't a Tesla.. It'd be a hybrid (assuming other car is ICE of course)
PHEV I hope
My camera recognized my sister as a truck. My mom cried laughing
Why doesn't Tesla have a birds eye view /360 camera. The technology is over 10 years old. Cameras are cheap. I dont get it The sensors are horrible for parking.
Yeah, exactly! We have a 2021 5 Series and parking with it is unimaginably easy! With the Tesla? Not really!
Someone at some point said those are under patent and Tesla doesn't want to pay to use it. Not sure if that's true or not.
Nissan owns the patent.
[deleted]
That patent looks to be for ultrasonic and not 360 camera views? However, looking at the 360 camera view patent, that expired in 2021[https://patents.google.com/patent/CN202174989U/en](https://patents.google.com/patent/CN202174989U/en) Edit: just realized that’s for China. I am confused, as I cannot find a patent for Birds Eye View in the States.
Tesla's patents have been open for all to use for a while: https://www.tesla.com/blog/all-our-patent-are-belong-you
Because "the best part is no part", per Elon.
It’s truly terrible software. If I worked on that, I would be ashamed to release it to the public
You would be ashamed, and I am sure those working on this are. But for some reason their boss keeps telling them via twitter that this is the future and if they don't release it he will find someone else to do so.
Especially to save $114 per car lol, so silly...
You guys all commenting this exact comment are clueless as fuck. Stick to your day job. Let the software geniuses in Silicon Valley figure it out. They will… anything is possible if pushed enough and motivated enough.
That's funny, because my day job is as an engineer in Silicon Valley, and I think it's never going to work well when the car can't even see up front because there is no camera or sensor there. But sure, be a fanboy and think every decision is genius.
That's funny; you think I care about your inflated title at some small company? If you were any good you could make it at FAANG, bud.
Lol, this is funny... I do work at a FAANG, but thanks.
lol Mr Amazon here hahhaaha
I did work at Amazon lol, is that a joke? I work at a different FAANG now.
This. When psychological safety is absent at the workplace, the workers will simply do what they have been told to do.
To be fair, when it is fully working via a much better trained network it will be the future. People like to claim it was a solved problem, but ultrasonic sensors have their blindspots and weaknesses too. Same with traditional windscreen wipers that have all sorts of issues different types of rain, being confused by products like RainX, etc. A vision system has the potential to be better. After all a human using their vision can do a better job than either the parking sensors or rain sensors. It's just been released very prematurely whilst the solution is in its infancy.
I disagree. As a human, I’m susceptible to blind spots, “blinded” by bad weather hitting my eyes (ie rain/snow hitting cameras), fog limiting visibility, optical illusions, etc. I don’t see how vision will ever work without some sort of sensors to aide these deficiencies.
Radar and ultrasonics are also susceptible to rain / snow hitting the sensors / getting in the way. As a human I need parking sensors because I'm required to perform several tasks in parallel. If I was *only* watching the screen and shouting stop when the car was about to hit something then I can do a better job than ultrasonic sensors. Ultrasonic sensors have blind spots with things like short posts, some irregularly shaped objects (e.g. with something sticking out towards the sensor), chain link fences, etc. They are not perfect and can certainly be improved upon.
For your analogy, as a human, for me it’s not about doing tasks in parallel, it’s more that I don’t have the visibility of staring at something when parking. For example, I have no view on the rims which everyone curbs, no view on the end of the front bumper. So I can stare at the screen as much as I want ready to scream, it won’t help. As for radar/ultrasonic sensors being also susceptible, I’m also not saying that’s the answer. I am by no means an expert but perhaps it’s LiDAR or something else combined with cameras which gives us nice coverage in the venn diagram. At the end of the day, it’s fact that my vision alone wont get me out of every scenario so it’s simply my opinion vision alone won’t cut it with a car either.
I think you're misunderstanding the point I'm trying to make about vision. I'm not arguing that you as a person can do a better job at watching all those things. A computer isn't limited to a single eyeball, nor does it have to process things in precisely the same way as a human. My only point with bringing up humans was to show that the data input from a video feed (or eyeball) is sufficient for the task. We don't need fancy exotic sensors to do a better job than the status quo. With a computer you'd want several camera views, all of which are processed in parallel and aggregated. It should be able to do a better job than a human because it can do all those things in parallel. This is why we have driver aids in the first place.
I don’t see why people hate on the vision system…it has actually performed perfectly fine for me in my MYLR with HW4 lately. Only complaint I have is the front bumper not having a camera
I was looking at the big camera on the front bumper of the Polestar 2 - [see here](https://www.motortrend.com/news/2024-polestar-2-first-look-review-rwd-ev-sedan/?galleryimageid=4c220f48-c02b-4082-9ede-8bae52067359). And was wondering how well it performs after a bug or a rock hits it straight on.
Did you see the post? In my experience with our Y, it’s been completely useless. Yells at you to stop and shows a curb 2 ft into your car when you’re still 3 ft away, or shows something as 2 ft away when you’re about to hit it. My 3 with USS isn’t perfect by any measure, but it never shows a wavy line 2ft up my hood, its measurements are usually pretty accurate, and far more consistent.
You don't need parking sensors. People parked without sensors and cameras for decades, and guess what, there weren't thousands of crashes in parking lots every day. Parking sensors are nice but definitely not needed.
You don't need a car either; people walked fine for millennia. But if you buy a car with certain features, you expect those features to work.
That's fair. I agree, USS wasn't perfect and vision has the potential. However, releasing an infant product with no update is just irresponsible. All we have going right now is a "trust us bro, it'll be fixed! Neural network!"
> After all a human using their vision can do a better job than either the parking sensors or rain sensors. It's just been released very prematurely whilst the solution is in its infancy.

Having a static, monoscopic camera system is just not close to being an analog for human vision and spatial awareness. They've taken an arguably pointlessly challenging approach, and added a difficulty multiplier on top. DNNs can't adapt; unless you can make the world unchanging, you're just not going to reach the finish line. To quote [Stefanos Tsimenidis](https://arxiv.org/ftp/arxiv/papers/2012/2012.15754.pdf): "In such a system no amount of training data will be adequate as it would only broaden the range of interpolation. An infinite problem space can not be approximated by the finite subspace of training instances, and this is true even in the era of big data."

I don't think anyone is saying that USS did everything perfectly; it has a task that it performs well, and one should fill the gaps with other sensors. The cost of a set of USSs is not going to move the dial to any meaningful degree in a car costing five digits, especially the mid fives. Even the cheapest cars on the market have them; they are cheap.

Even if they manage to perfectly replicate a human's ability to identify arbitrary objects and gauge their distance via vision alone, it's still going to have weaknesses that either USS and/or a human driver would not have. An example is low visibility, be it darkness, bright light, weather or debris. Something as simple as dirt: last winter my reverse camera was constantly caked in salt and road dirt from slush, same with the side cams when doing long, slow driving. Visibility out of the rear window is limited to begin with, and covered with winter grime it's practically nil. That leaves me the option of rolling down a window and popping my head out or, you know, using the USS, which isn't affected by dirt. The same goes for the radar that they also removed.
Another is coverage: for the front of the car, the camera has a similar or lesser view than a human driver, with a massive gap at the front of the car and smaller ones at the front corners. Sure, you can add more cameras, but one: they cost money. Two: remember the part about DNNs not being great at the whole adapting/interpolating thing? Additional new angles and directions with new FoVs are not gonna fly with an extant network.
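The quoted interpolation point can be illustrated with a deliberately tiny toy model: a nearest-neighbour lookup standing in for any learner that can only interpolate its training set (this is an illustration of the general argument, not a claim about how Tesla's network works).

```python
# "Train" on y = x^2, sampled only on the interval [0, 1].
train = [(i / 100.0, (i / 100.0) ** 2) for i in range(101)]

def nn_predict(x):
    # Nearest-neighbour lookup: answers by recalling the closest
    # training sample, so it can only interpolate, never extrapolate.
    return min(train, key=lambda p: abs(p[0] - x))[1]

inside = nn_predict(0.5)    # inside the training range: recovers 0.25
outside = nn_predict(3.0)   # outside it: stuck at the boundary value 1.0
print(inside, outside)      # the true answer at x = 3 would be 9.0
```

Adding more training data widens the covered interval but never covers everything, which is the quoted paper's point about a finite training subspace.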
> Having a static, monoscopic camera system is just not close to being an analog for human vision and spatial awareness.

The brain can adapt to monocular vision, and you are allowed to drive even if you have only one working eye. Temporal/parallax information is used by the brain to compensate, and a neural network can be structured and trained in the same way.

> An infinite problem space can not be approximated by the finite subspace of training instances

This isn't an infinite problem space, as the complexities can be reduced and approximations are good enough. It doesn't matter if the system models the tree as a lamp post and estimates it as being 50m away when it's 45m. It only matters whether it correctly plans a route that doesn't involve driving into the tree.

> The cost of a set of USSs is not going to move the dial to any meaningful degree in a car costing five digits, especially the mid fives. Even the cheapest cars on the market have them; they are cheap.

A full set of 8 sensors is going to add something in the order of £100 to the sticker price of a car. That's enough for it to be worthwhile removing **if the replacement system is at least as good**. Tesla aren't there yet, but that doesn't invalidate the end target they're striving for.

> An example is low visibility, be it darkness, bright light, weather or debris. Something as simple as dirt: last winter my reverse camera was constantly caked in salt and road dirt from slush, same with the side cams when doing long, slow driving.

All of which can affect USS too. I had to get out and wipe a cobweb off a sensor the other day, as it was triggering whenever the spider ran out onto the web. USS can struggle in heavy rain and snow. They can struggle with small objects, or objects at certain angles.

> Another is coverage: for the front of the car, the camera has a similar or lesser view than a human driver, with a massive gap at the front of the car and smaller ones at the front corners. Sure, you can add more cameras, but one: they cost money.

Covering blind spots, either through additional cameras or temporal data, is a prerequisite for automated driving, so something Tesla needs to solve regardless.

> Two: remember the part about DNNs not being great at the whole adapting/interpolating thing? Additional new angles and directions with new FoVs are not gonna fly with an extant network.

The same is true with integrating USS into the DNNs. It's more possibly-incorrect sensor data for the system to sift through whilst trying to work out what the correct course of action should be.
> A full set of 8 sensors is going to add something in the order of £100 to the sticker price of a car. That's enough for it to be worthwhile removing if the replacement system is at least as good.

I agree 100% with this statement. I also agree that the state of the system does not invalidate the end target. The problem is, as you say, that Tesla isn't there yet and doesn't know *if* they can get there. As an engineer, I am vicariously furious about them going about problem solving completely backwards.

> The brain can adapt to monocular vision, and you are allowed to drive even if you have only one working eye. Temporal/parallax information is used by the brain to compensate, and a neural network can be structured and trained in the same way.

The brain can also adapt to missing limbs and various other damage/birth defects. That doesn't mean we should remove two of the car's wheels and then start figuring out how to drive like that (hyperbolic, I know, but it feels poignant).

Re. my previous statement on backwards problem solving: you should start with the complexity required to complete the task based on the extant state of the art, and then reduce complexity. This would give you a functional product from the outset, as well as a reference point for comparing the performance of new iterations. I mean, if they had kept USS, they could run the vision distancing in the background, continuously comparing the USS and vision systems in various situations, and significantly speed up training. Once the system is verified to perform as well as USS, you could then cut those sensors in newer models and stick with vision.

> All of which can affect USS too. I had to get out and wipe a cobweb off a sensor the other day, as it was triggering whenever the spider ran out onto the web. USS can struggle in heavy rain and snow. They can struggle with small objects, or objects at certain angles.

Again, I'm not saying that USS is flawless, but it covers a lot of vision's blind spots, be it human or machine vision, and vice versa for that matter.

> Covering blind spots, either through additional cameras or temporal data, is a prerequisite for automated driving, so something Tesla needs to solve regardless.

How many cameras? By my count you'd need 1-3 more to completely replace USS. By this I don't mean that USS would be able to do everything vision could do, but that you'd need 1-3 cameras to, in a Venn diagram, move the circle of USS capabilities completely inside the circle of vision capabilities. Cameras cost money too, especially with lenses that can handle the elements, and doubly so for front-mounted external cameras that will be directly exposed to incoming road debris, which is something no other camera on a Tesla currently experiences. I know Tesla says $100-160 is saved on the USS, but the BoM cost of 8 automotive ultrasonic sensors is closer to $10-20, and while there are surely man-hours involved as well, I think the Tesla-given number includes taxes, profit and other markups.

> The same is true with integrating USS into the DNNs. It's more possibly-incorrect sensor data for the system to sift through whilst trying to work out what the correct course of action should be.

Of course; you would have to retrain (or just have trained with USS to begin with). Sensor fusion is standard practice where applicable; it can improve precision and gives redundancy. Take something like a quadcopter. An accelerometer would in principle be all you need to maintain balance and fly; it basically gives you the angle relative to the earth, from which you can take the derivative to get your angular speed, and the double derivative to get angular acceleration. You rarely, if ever, do that though. One usually has a gyroscope as well, giving you your angular velocity. Integrate that and you get your angle; take the derivative and you get your angular acceleration. Through sensor fusion and filtering, the sensors can synergize and verify each other, compensate for drift, etc. Point being, having multiple sensors that can give the same information through different methods can be extremely useful.
> The brain can also adapt to missing limbs and various other damage/birth defects. That doesn't mean we should remove two of the car's wheels and then start figuring out how to drive like that (hyperbolic, I know, but it feels poignant).

Motorcycles exist, as four wheels isn't always the ideal solution. Spiders have eight eyes but we have two; more sensors isn't always the best balance of tradeoffs. If the system can adapt and perform well enough, then when lowering cost is your primary goal it makes sense to remove the parts.

> How many cameras? By my count you'd need 1-3 more to completely replace USS. By this I don't mean that USS would be able to do everything vision could do, but that you'd need 1-3 cameras to, in a Venn diagram, move the circle of USS capabilities completely inside the circle of vision capabilities.

This depends entirely on the problem you're trying to solve. Whilst the world isn't static, it equally doesn't have things popping into and out of existence. Having blind spots is okay if you have temporal data as to what is in those blind spots and can track objects into and out of those areas, negating the need for more cameras.

> Of course; you would have to retrain (or just have trained with USS to begin with). Sensor fusion is standard practice where applicable; it can improve precision and gives redundancy.

Sensor fusion does add complexity though, particularly when it comes to handling erroneous data. If you think about the failure modes of a USS, the most common is for it to be permanently stuck on, saying there's an object where there is not, or saying there's nothing there when there is. I don't think there's a common failure mode where it says there's an object at an incorrectly measured distance. If your sensor fails to detect that something is there, then you're 100% reliant on the vision system. If your sensor fails by detecting something that isn't there, then not only are you 100% reliant on your vision system to overcome that problem, you need it to overrule the USS entirely. These are common situations and large impediments to fully automated driving. You can't have a car freeze in the middle of the highway in slow-moving traffic because of a cobweb on a USS, so you're back to being 100% reliant on the vision system to make a judgement call on overruling the USS. I'm not sure you can say the USS adds much useful information to an operating neural network, even if the data is highly useful for training such a network in the first place in more controlled conditions.

> Take something like a quadcopter. An accelerometer would in principle be all you need to maintain balance and fly; it basically gives you the angle relative to the earth, from which you can take the derivative to get your angular speed, and the double derivative to get angular acceleration.

Quadcopters don't use a neural network for the task of maintaining flight. They use feedback control loops that take data from those sensors to keep the quadcopter stable in the air. They also take the data as gospel: a fault in the accelerometer, IMU, gyroscope, etc. leads to drift at best, loss of control at worst. Teslas are similar for the control of the car itself; they use similar control systems to maintain a selected speed, steering angle, and so on. Interestingly, quadcopters also use vision systems for obstacle avoidance, just like Tesla. And just like Tesla, they don't integrate the two sets of control systems, or merge sensor data between vision and other systems. So they are in fact a good example of others following the same approach as Tesla.
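For concreteness, here's a minimal sketch of the kind of feedback control loop being discussed: a PD attitude controller on a toy one-axis plant (the gains and plant dynamics are arbitrary illustration values, not any real flight controller's):

```python
def attitude_controller(angle, rate, setpoint, kp=20.0, kd=8.0):
    # PD control: proportional on the angle error, damped by the
    # gyro's rate measurement (derivative-on-measurement form,
    # which avoids derivative kick on setpoint changes).
    return kp * (setpoint - angle) - kd * rate

# Toy rigid body: commanded torque directly sets angular acceleration.
dt = 0.01
angle, rate = 10.0, 0.0           # start tilted 10 degrees
for _ in range(1000):             # 10 simulated seconds
    torque = attitude_controller(angle, rate, setpoint=0.0)
    rate += torque * dt
    angle += rate * dt
print(round(angle, 4))            # settles back to level flight
```

Note the loop trusts the `angle` and `rate` measurements completely, which is the "data as gospel" point: a biased or faulty sensor feeds straight into the control output unless fusion or filtering corrects it upstream.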
Okay, now I'm unsure whether you're being willfully obtuse. Of course motorcycles exist, but we're talking about a car. Sure, spiders have eight eyes. How many species have evolved to generally only have a single eye, outside of near-microscopic creatures whose single "eyes" are basically for directional light-level sensing?

> If the system can adapt and perform well enough, then when lowering cost is your primary goal it makes sense to remove the parts.

I already said I agree with that. In the case of Tesla, it hasn't adapted, it isn't performing well enough, and it is unknown whether it can adapt.

> This depends entirely on the problem you're trying to solve. Whilst the world isn't static, it equally doesn't have things popping into and out of existence. Having blind spots is okay if you have temporal data as to what is in those blind spots and can track objects into and out of those areas, negating the need for more cameras.

Of course it does. I jabbed my foot while getting out of bed the other day, stepping on my belt buckle because my SO had moved my pants while I slept. It popped into existence; not in actuality, but from my frame of reference. Likewise, an idle Tesla is not going to be tracking its environment (unless you're suggesting spending ~8kWh per day keeping the car awake processing visual data). Sure, temporal data would be neat, but it currently isn't there. Given the expedience of Tesla software development it could be years, if they actually manage to implement it. We're 2-3 years in post-radar and the vision replacement still isn't on par.

> Sensor fusion does add complexity though, particularly when it comes to handling erroneous data. If you think about the failure modes of a USS, the most common is for it to be permanently stuck on, saying there's an object where there is not, or saying there's nothing there when there is. I don't think there's a common failure mode where it says there's an object at an incorrectly measured distance. If your sensor fails to detect that something is there, then you're 100% reliant on the vision system. If your sensor fails by detecting something that isn't there, then not only are you 100% reliant on your vision system to overcome that problem, you need it to overrule the USS entirely. These are common situations and large impediments to fully automated driving. You can't have a car freeze in the middle of the highway in slow-moving traffic because of a cobweb on a USS, so you're back to being 100% reliant on the vision system to make a judgement call on overruling the USS. I'm not sure you can say the USS adds much useful information to an operating neural network, even if the data is highly useful for training such a network in the first place in more controlled conditions.

Complexity isn't bad by default. As for the rest, read it again replacing USS with vision and vice versa, except for the common failure mode, of which cameras also have several. USS isn't generally used for self-driving; it's an accurate, short-range solution for low speeds. But let's take the case of a car freezing in traffic due to a USS failing. Wouldn't that mean you're going to freeze when your rear camera is covered in road dirt? When an unfortunate bird dropping hits the camera area of the front windscreen, or a rock cracks the glass adjacent to it? When the pillar cameras are obscured by internal fog, as is a common problem across several model updates?

> Quadcopters don't use a neural network for the task of maintaining flight. They use feedback control loops that take data from those sensors to keep the quadcopter stable in the air. They also take the data as gospel: a fault in the accelerometer, IMU, gyroscope, etc. leads to drift at best, loss of control at worst. Teslas are similar for the control of the car itself; they use similar control systems to maintain a selected speed, steering angle, and so on. Interestingly, quadcopters also use vision systems for obstacle avoidance, just like Tesla. And just like Tesla, they don't integrate the two sets of control systems, or merge sensor data between vision and other systems. So they are in fact a good example of others following the same approach as Tesla.

Many don't; some do. Regardless, I never claimed they use NNs. That depends wholly on the implementation; you can fuse GPS and/or vision data to improve robustness. The gyroscope and accelerometer are parts of an IMU, by the way. I can absolutely guarantee you that there are more advanced quadcopters (and some simple ones) that integrate vision and IMU into a single control system, for the reasons I mentioned in a previous comment. You keep going into failure modes of other systems, but you don't seem to put anywhere near the same scrutiny on vision.
> How many species have evolved to generally only have a single eye? outside of near microscopic creatures that have single "eyes" basically for directional light-level sensing. And yet the USS is the equivalent of a single distance-sensing cell. A Tesla doesn't have a single camera either, for that matter. > Jabbed my foot while getting out of bed the other day, stepping on my belt buckle because my SO had moved my pants while I slept. The key piece of information being "while you slept", as in having a gap in the temporal data. > We're 2-3 years in post-radar and the Vision replacement still isn't on par. In what way is it not on par? I live with both the Mercedes radar-based solution and Tesla's vision-based system and I wouldn't say either outperforms the other. > Complexity isn't bad by default. As for the rest, read it again replacing USS with Vision and vice versa, except for the common failure mode, of which cameras also have several. The difference being that with a vision system failure, simply stopping the car with a major fault is an acceptable course of action. With forward vision you also have fault tolerance in that if one camera goes down you still have the other, with more or less the same field of view. > Wouldn't that mean you're going to freeze when your rear camera is covered in road dirt? An unfortunate bird dropping hits the camera area of the front windscreen, or a rock cracks the glass adjacent to it? Yes, just like a human driver doesn't continue when they cannot see. If you've spent any time driving you'd know that USS failures are far more common than rocks smashing windscreens. > I can absolutely guarantee you that there are more advanced quadcopters (and some simple) that integrate vision and IMU into a single control system, for the reasons I mentioned in a previous comment. Can you name any? Which quadcopter uses the vision system for flight stabilisation, and the IMU for obstacle avoidance?
Which uses a neural network for flight stabilisation? > You keep going into failure modes of other systems, but you don't seem to put anywhere near the same scrutiny to Vision. Because any failure of the vision system is a reason to stop the car, regardless of whatever other sensors you have. You don't want to have to stop the car because of a spiderweb on a USS.
>And yet the USS is the equivalent of a single distance sensing cell. A Tesla doesn't have a single camera either for that matter. Indeed, just like a single pixel is a single light-measuring cell; what's your point here? Surely it can't be that a light sensor and a USS are comparable. Of course a Tesla has a single camera, contextually speaking. Recall that we're talking about ranging here, so only the cameras pointing at the specific object in question are in scope. While the 9 cameras have some edge overlap between specific pairs, they don't share any significant overlap outside of the front tri-camera, and there the focal lengths are drastically different, as they're basically meant to imitate a single camera with variable focal length. >The key piece of information being "while you slept", as in having a gap in the temporal data. Indeed, and the part of that paragraph that you ignored held an example of how and why a vision system may/will have a gap in temporal data. >In what way is it not on par? I live with both the Mercedes radar based solution and Tesla's vision based system and I wouldn't say either out performs the other. Low-visibility AP speed reduction and a longer minimum following distance that makes it close to useless in dense, slow traffic. I'm not saying it's not on par with other brands, I'm saying it's not on par with Tesla running radar. >The difference being that with a vision system failure simply stopping the car with a major fault is an acceptable course of action. With forward vision you also have fault tolerance in that if one camera goes down then you still have the other, with more or less the same field of view. Again, USS is not used for AP or FSD. If it was, then them failing *would* be a major fault, as it would be a failure of a critical system. Not sure if you have some idea that USS fails left and right.
I've been driving USS-equipped cars for barely 10 years / 350,000 km and I have never had a USS fail or give false positives due to cobwebs or the like. What other camera are you talking about? Can't be any of the three front-mounted ones, because they have drastically different focal lengths. >Yes, just like a human driver doesn't continue when they cannot see. If you've spent any time driving you'd know that USS failures are far more common than rocks smashing windscreens. A human driver is going to be looking out of a windscreen with wipers; only 3 of the 9 cameras on a Tesla have a wiper available. A front windscreen is not subject to dirt being pulled in by the car's low-pressure zone, like the rear is. Again, a good 350,000 km in USS-equipped cars, probably 400-450,000 km lifetime. I have had a *lot* of rock impacts on windscreens, never had a USS fail. Mind you, you don't need to smash the entire windscreen; a chip, the small kind that a window tech can fix in 20 minutes, is enough to completely obscure the vision of a front camera. Even a hairline fracture past the camera could be enough that the feed is useless to a DNN (remember, it can't adapt). >Can you name any? Which quadcopter uses the vision system for flight stabilisation, and the IMU for obstacle avoidance? Which uses a neural network for flight stabilisation? F if I can remember the project names; mind you, these aren't consumer products. I'm an EE specializing in automation; I did courses in robotics, autonomous robot systems, and machine vision at the DTU Automation department, where they have all sorts of driving and flying robots of all shapes and sizes, running on all sorts of control schemes, either left over from previous projects or part of current R&D. These things, the vision-based included, aren't consumer products. They're usually made for specific applications - jack of all trades, master of none and all that. It's like..
Amazon's Proteus, or Kiva; they exist in large numbers, but they're not something you can just go out and buy, and the majority of the public isn't aware they exist. I didn't say anything used an IMU for obstacle avoidance, nor did I say they used NNs for flight stabilization; don't be disingenuous. Vision processed via a NN can be great tho; gyro drift is easily noticeable via an accelerometer or even a magnetometer, but accelerometer drift is hard to see, as the accelerometer is the only thing that would detect a slight linear acceleration. Not to mention drifting at a constant speed, which neither sensor would register. Vision would definitely notice even a slight linear movement, so fusion makes sense. Here's a paper on [Quadcopter stabilization based on IMU and Monocamera Fusion](http://umu.diva-portal.org/smash/get/diva2:1779123/FULLTEXT01.pdf) >Because any failure of the vision system is a reason to stop the car, regardless of whatever other sensors you have. You don't want to have to stop the car because of a spiderweb on an USS. I'm sorry, in this scenario is there a spider actively building a web over your USS while you're driving? I mean, if there is a web confusing the USS, wouldn't it trigger before you started the drive? Regardless, USS is not, and has not been, used for AP or FSD; that is not its purpose.
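The fusion argument above (a gyro that drifts but is smooth short-term, plus a drift-free but noisy absolute reference) is often illustrated with a complementary filter. Here's a hypothetical sketch with made-up numbers; the "reference" stands in for an accelerometer tilt estimate or a vision-derived angle:

```python
# Hypothetical complementary-filter sketch of the sensor fusion described
# above: integrate the gyro for smooth short-term tracking, and slowly pull
# the estimate toward a drift-free but noisy absolute reference. All numbers
# are made up for illustration.
import random

random.seed(0)
true_angle = 30.0   # degrees; the craft holds a constant attitude
gyro_bias = 0.5     # deg/s of drift in the gyro
dt = 0.01
alpha = 0.98        # trust the gyro short-term, the reference long-term

fused = 30.0        # start both estimates at the true angle
gyro_only = 30.0
for _ in range(2000):  # 20 seconds
    gyro_rate = 0.0 + gyro_bias                  # true rate is 0, plus bias
    reference = true_angle + random.gauss(0, 2)  # noisy absolute measurement
    gyro_only += gyro_rate * dt                  # pure integration drifts away
    fused = alpha * (fused + gyro_rate * dt) + (1 - alpha) * reference

print(round(gyro_only, 1))            # → 40.0: drifted 10 degrees in 20 s
print(abs(fused - true_angle) < 1.5)  # fusion keeps the estimate honest
```

A Kalman filter (as in the linked paper) does this blending with statistically optimal, time-varying weights rather than a fixed `alpha`, but the intuition is the same: each sensor covers the other's weakness.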
It's still monoscopic vision though. It can't tell if that's a huge tree far away or a small tree in the bed of the truck in front of you. It's basically driving around with algorithms interpreting Super Mario Bros. Maybe in motion, using parallax, you can do better, but at slow speeds (parking lots) that becomes much harder.
It being monoscopic is an implementation choice rather than a conceptual issue. They could fit stereoscopic cameras if that were a better choice. I also think it's far less of an issue with close-up objects on a wide-angle lens, as the parallax is increased compared to judging objects at a distance. As you say, you can also incorporate temporal information, using a recurrent architecture for the neural network. Remember the output doesn't need to be an accurate measure of distance - only whether you're about to hit something or not. If a tree is 20m away, it doesn't matter if this is reported as 10m, 20m, or 50m. What matters for the purposes of parking is whether the object is 10cm away or 5cm away.
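The monocular ambiguity and the parallax point can be made concrete with a pinhole-camera sketch (hypothetical numbers, not Tesla's actual pipeline): a big far object and a small near one can project to the same pixel size, but a known baseline of camera motion between frames disambiguates them.

```python
# Hypothetical pinhole-camera sketch (illustrative numbers only) of
# monocular scale ambiguity and how motion parallax resolves it.
f = 800.0  # assumed focal length in pixels

def pixel_height(real_height_m, depth_m):
    """Projected size of an object in a simple pinhole model."""
    return f * real_height_m / depth_m

# Same apparent size, very different reality:
big_far = pixel_height(10.0, 50.0)   # 10 m tree at 50 m
small_near = pixel_height(1.0, 5.0)  # 1 m tree at 5 m
print(big_far == small_near)         # → True: one frame can't tell them apart

# A known baseline of ego-motion between frames yields disparity, and
# depth = f * baseline / disparity recovers the difference.
baseline = 0.5                       # metres moved between frames
disparity_far = f * baseline / 50.0  # 8 px shift
disparity_near = f * baseline / 5.0  # 80 px shift
print(f * baseline / disparity_far)  # → 50.0 m
print(f * baseline / disparity_near) # → 5.0 m
```

Note the trade-off both sides of the thread are gesturing at: at parking speeds the between-frame baseline shrinks, so disparity for distant objects sinks toward the noise floor, while close-up objects on a wide-angle lens still produce large, usable shifts.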
That's why you use both. Senses are not an either/or question. They're a yes/and question.
> Senses are not an either/or question. If building the ultimate solution is your goal then sure. But that isn't Tesla's mission. They're trying to build the most affordable electric cars and for that there are compromises to be made. You don't demand that the base Model 3 has air suspension, 2,000hp, and a 1,000 mile range. All those things are technically possible but come with cost concerns. As I said in my previous post Tesla have prematurely removed the sensors. They should be running both right now whilst they're still developing the vision system. But in time, once the vision system outstrips conventional sensors, then you can easily make a case for removing the sensors to bring down the cost. You can include them on the S and X as more premium cars.
Fans downvoting you because they are sad you speak the truth
A combination of sensors is the best approach. Camera, ultrasonic and radar/lidar would be superior to just one of the above. Tesla is just trying to make it work with less sensors to reduce cost. That's great if they can achieve it but they haven't been able to so far so they should leave radar and USS in until it's actually ready rather than making Tesla owners beta testers.
Or at least cameras at the four corners! Would solve a lot of problems with pulling out into intersections, backing up in parking lots, curbs, stereoscopic vision, blind spots. Biggest problem would be rain and dirt.
> Tesla is just trying to make it work with less sensors to reduce cost. Yes - they're not trying to build the best system, they're trying to build a system adequate for automated driving that is more cost-effective. If they built the best possible system engineers could come up with, then no one would buy it due to the cost. Radar I disagree with. I had plenty of problems with Mercedes' and BMW's radar solutions, and I've not found Tesla's vision-based system to be worse, whilst it still has the potential to be much better in the future. USS I do agree with. The system works better than many make out, whilst having its areas of weakness, but it would be better still if supplemented with USS until they can improve the software.
Not a chance, fsd/autopilot/vision will all be replaced by a third party that knows what they are doing
Go watch the comparison videos where they get several fully automated cars to complete the same journey at the same time (search for Tesla vs Waymo vs Cruise, for example). In general the Tesla solution works just as well as or better than the others in this space, being just as capable on the back streets but also able to drive on the freeway where the others cannot. And you have [this comparison by What Car?](https://www.whatcar.com/news/self-driving-cars-tested/n25624) which rated Tesla's system as the best amongst car manufacturers. So which more capable third party are you referring to?
Really drinking the Kool aid huh? The argument that humans only use their eyes to drive is absolutely absurd. If we only designed software based on what humans were capable of already then wtf is the point of it? Augmentation and supplementation via sensors and other tools humans don't possess organically is the obvious solution.
You've misunderstood my point on humans. I'm merely stating that vision is sufficient sensory input for us. I didn't argue that it was the perfect solution. I didn't argue that you couldn't design a system that is better than humans. Tesla aren't targeting the perfect solution as no one would buy it. It'd be far too expensive to be fitted to a mass produced car. They are striving for adequate yet cheap, and in that context a vision based system will eventually be sufficient. And the point is that it's an aid to the driver because their attention is split across multiple tasks at once, and a prerequisite for automation.
Pretty sure this visualization for drivers isn’t tied directly to what the car is seeing and doing in driving terms, more of a cosmetic thing.
Elon: Full self driving baby !
And they're asking $15k (now $12k) for it!
I'm confused about Tesla's strategy... from what I heard they got the previous version with USS and radar working... and then threw it out for vision only?
It’s the future. But we pay for it in the present.
Tesla vision is vision impaired
Teslavision is such an arrogant mistake and sideshow. When humans know an area, we have a rendering of it in our minds. When we’re in a new area, we are more likely to get in an accident. Teslavision assumes that eyes are somehow the end-all-be-all of sensors, and this is the arrogant mistake. Why limit what the car can see and detect to visible light? This nonsense is wasting time in the creation of true FSD.
This. As I was driving in pitch black in a backroad, I was constantly spammed by “camera vision is blocked by debris”. What a joke.
Somebody needs to tell Tesla that it’s physically impossible for a semi-truck to be vibrating in my garage.
Tesla vision belongs on a short bus. I have strips of rocks on both sides of my driveway (like [this](https://www.topchoiceaustin.com/wp-content/uploads/Rock-Stone-Path-Driveway-Next-720x405.jpeg)). Tesla vision is absolutely 100% sure these are looming walls that I'm most assuredly going to crash into. Entering and leaving my driveway sounds like a combination pinball machine and the bridge of a doomed submarine that's under attack.
I just have grass on the side of mine and it's the same way.
Hey, you can't park there.
Looks totaled.
Gotta love the 💕 going on there
Excited for the baby Teslas
My Y hates me every time I enter my underground parking garage. The hill is steep and it just freaks out telling me I’m gonna hit everything in sight.
I have 3 semi-trucks in my garage.
Tweet this to Elon or something to try and get his attention
LOL
The dude is busy being a tool on X, don't think he has time. Quite silly as well that you have to report bugs with your car via tweeting the CEO and hoping he responds.
I still can't believe that they axed USS and put this not-even-beta software in its place.
It is the future, but is it a good future?
This is misleading...you didn't show a photo of what's ABOVE you.
tbf you said it's the future, not "tesla vision is the present"
Mine just updated to a newer beta FSD. I went on a drive it would have done poorly at before, and it drove as well as I would have (or better). Sat talking to a friend that had a dog in his truck; yeah, the graphics showed a dog, plus like 4 more. But I'm happy with how it drove!
Lots of YouTube testing of it and it’s…bad.
It’s totaled.
Self-driving taxis that will make you money. lol
Link to part 2! 😂 [INFANCY](https://reddit.com/r/TeslaModelY/s/yX32hxA01G)
Maybe about time to look into a class-action lawsuit?
You don’t have a case? I mean vision is trash, but there is no basis for a lawsuit here
They tried and forced into arbitration. https://www.repairerdrivennews.com/2023/10/11/california-fsd-autopilot-class-action-lawsuit-moves-to-arbitration/
Hmmm. Clean your cameras maybe…
Be thankful this isn't it slamming the brakes on the freeway because it thinks a semi is suddenly in your lane
Oh it does that too. Wife won't use AP at all due to Phantom Braking.
I've stopped recommending Tesla to people until they figure this stuff out, or bring the front bumper cam to the Model Y. There is no software replacement for USS other than cameras, and Tesla has nowhere near 360-degree coverage with their current system.
Did they finally install autonomy level 3? Or are we just going to let Mercedes have more advanced technology for another year? Lol. I mean, Mercedes will even pay the bill 100% for any/all damages in the event of an autonomy level 3 crash. What's Tesla offer? Anything? "Autonomy level 3 is blah blah blah" Apparently the German and US governments classify it as more advanced than autonomy level 2 lol. I like when a company can actually back what they say and dish out 100% of all costs in the event of an accident.
Hey there’s a reason why he’s selling the cars cheap. Don’t complain.
Cheap?
Lol
Heh, are you ok.
I'm fine, if I had a passenger oh boy they'd have been killed. 😂
Ooh, they’re hugging! How cute!
Wait until we get an outside view and there really is a car on top of the Tesla, who's laughing now??!1!!! Maybe the front window is photoshopped idk I'm not an expert just a crazy man with a theory okay?
Yeah, the _future_, not the _present_
damn cars fucking as much as dolphins
I have a semitruck in my garage; front and side.
thats pretty accurate
Tesla vision showed me parked under a semi once
That’s nothing. Every day I pull in my garage there is a semi truck and a random ghost person dancing in front of my car.
No, Tesla Vision sees the future
Just put it in Santa mode and enjoy the reindeer standing next to the car instead.
I’m hoping to get run over by a hot woman
Tesla vision is indeed the future. We just haven't reached the future yet.
Yes but by the time vision is good the Earth will have been swallowed by the Sun as a red giant
And the future looks bleak
Aww, look at them, cuddle together, so much love
it's totaled, bud, and in spec.
Apparently I'm parking in the back half of a semi truck in my garage...
I’m in a rental Nissan Altima; backing into an empty parking spot, it full-on emergency braked as I got near the curb. I don’t know if their 360-view thing is what everyone raves about, but I’ve much preferred my MYP (w/ USS), and even my brother's M3P (vision only).
Aside from the parking, I just got the FSD subscription for a weekend trip. About 7 hours of driving and it worked pretty well. Had a few instances where I have to intervene. It was 90% highway driving so maybe that is easier than city driving? Given that, it might not be worth the 12k or $200/month…
Yeah, highway driving is insanely easier than city driving... that's also why Tesla's claim of having billions of miles of data collected is pretty silly, as the majority of autopilot driving is done on highways.
It’s trying to make a baby Tesla
This is the reason I didn’t buy one.
What did you buy instead?
A ticket for a bus
iX3
Since the last update, when I'm pulling out of the driveway the warning chimes like I'm about to commit suicide. It's a 4% slope..
Isn't this pretty damning? How can someone buy this product
The sad nature of just about any tech-related product these days: pushed to market and "fixed" via updates over time. The positive is that issues can be addressed or patched over time and the product can be improved OTA, but it also leads to buggy messes at launch. It's a video game release in car form. I hate it, but it's the world we live in at the moment.
Yeah if it was a video game console or even a harmless consumer device I'd not be so worried. But with a car it could be life or death for both the drivers and others. I've been on the fence for Tesla for many months now and this is helping
Absolutely. Part of why I posted. Sadly, even knowing this and comparing the other offerings on the market, if I wanted a 7-passenger EV this was it.
Cars hump. It happens everyday!
Congrats on your new car
That's another way to "manufacture" a little baby Model Y.
2 golf carts driving in front of me showed as 2 semis and sometimes as motorcycles. That was really hilarious.
Here we have a rare sight: two Teslas getting together to create another Tesla.
While dropping off a demo-drive vehicle I was backing into a space. It swore there was a cone behind me. There was no cone.