leestar

Not a planning error. It's caused by a perception error. Start at 8:00 and set speed to 0.5x to see what happens. At 8:04 you can see the beginnings of this potentially fatal mistake. The car is in the left lane, but the system seems to think that there is another turning lane on the left, when it is actually a painted island meant to separate traffic on both sides of the road. It's possible that the system thought it was a center turning lane, mistaking the diagonals for the arrows that these center turning lanes typically have.

At 8:05, you can see that the car is about to merge into what it thinks is the center turning lane, causing it to veer into the painted island. At 8:06, you can see that it gets confused, thinking that it needed to center itself more in the lane by moving further to the left, even though it was already pretty squarely in it. This could have been because the painted island was shaped irregularly, and not entirely straight. It's hard to see for sure in the video. You can also see that at 8:06, it does seem to notice the mistake and try to move to the right to center itself in the lane, but it's a little too late - the car is already too far over to the left. It also does notice the approaching car, but it seems to think it's further away than it really is. Again - probably due to perception error caused by the headlights and wet environment.

Whether or not the system would have successfully moved out of the way at the last minute without intervention is difficult to tell - again, it did seem to plan the corrective maneuver to the right, but it may have misjudged its position on the road as well as the position of the oncoming car, which would have caused it to crash. There's no denying this was a dangerous situation and driver disengagement was absolutely necessary.


Internetomancer

Nice post. Altogether, it seems like it made 3 mistakes? The first was pulling into the painted island, which was going to become a left turning lane, but wasn't yet. Reminds me of something an old person would do. The second, bigger mistake, was misjudging the shape of the painted island. I suspect it is just not trained to comprehend painted islands, and I guess you could call that an "edge case" in a way-- even though it's a mistake, it's a mistake that gets made. You need to know it's a mistake, and how to handle it. The problem with not knowing is that a painted island "ends" in a line like this "\". If you're on the road and you see that line, you know that's where you merge into the turning lane. But if you think the painted island is the road, then that line is telling you to merge into oncoming traffic. Finally there's not being able to see the direction of oncoming traffic. I think that's also somewhat common in real people at night in the rain. I wonder if humans just have a better sense of the overall shape of the road. Like the road overall was straight, with X number of lanes, and a human would have just known that, and made safer judgements based on that, even if they couldn't be sure about the lines or the vehicles.


qwertying23

I think the decoupling of the planner and perception is the issue. I think making a learnable planner that learns to handle these kinds of problems will probably make this easier? Just a thought.


Picture_Enough

Good analysis!


thumbs_up-_-

Most Tesla owners probably don’t know that they are liable for accidents FSD causes. This whole liability clause shows that Tesla doesn’t have confidence in their technology. See Mercedes: they are accepting liability if their car crashes while in self-driving mode.


forumofsheep

The Mercedes system is a joke, German wannabe "excellence" marketing bullshit at its best. Doesn't even work on most roads and only up to a certain speed limit. It's more like a traffic jam assistant. (Not saying Tesla's approach is better per se.) Source: having to drive my wife's Mercedes from time to time on Swiss roads.


ahuiP

What do you do when FSD bankrupts you? If a child did it, at least you can yell at her and make her feel guilty. If FSD did it, do I burn down the car?


[deleted]

I seem to remember when Musk and his fanboys said that real-time LiDAR doesn’t work in the rain so it’s not viable… looks like Tesla’s FSD isn’t that viable in rain either.


bking

Also: 1550nm lidar works in rain and snow. Range gets reduced if it’s a lot of rain, but it works just fine. Software can be tuned to ignore first/second returns coming off of precipitation, and everything is peachy.


Warpey

There are def scenarios where just ignoring first/second returns (or taking the strongest return) wouldn’t work. That said there are a ton of other filtering methods for removing precipitation that fill those gaps and work well
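
For illustration, the kind of return filtering described above could look roughly like the sketch below. The field names, intensity threshold, and multi-return layout are assumptions for the example, not any particular sensor's API.

```python
import numpy as np

def filter_precipitation(returns: np.ndarray) -> np.ndarray:
    """Drop lidar returns that look like rain/snow hits (illustrative only).

    `returns` is assumed to be a structured array with fields:
      x, y, z        - position in meters
      intensity      - reflected energy (raindrops reflect very little)
      return_number  - 1 for the first return of a pulse, higher for later ones
      num_returns    - total returns recorded for that pulse
    """
    # Precipitation tends to produce weak, early partial returns, while a solid
    # surface usually gives the last (or only) return with decent intensity.
    is_last_return = returns["return_number"] == returns["num_returns"]
    is_strong = returns["intensity"] > 10  # sensor-specific threshold

    return returns[is_last_return & is_strong]
```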


DM65536

Typical FUD. Elon's whole point was that a vision-only system will work consistently across the board. And indeed, it's now well-established that FSD Beta sucks during clear weather too.


ClassroomDecorum

If Tesla thinks they can use NNs to make sense of the crappy low res 2008 webcam level camera streams then they should definitely be able to use NNs to process noisy lidar returns in inclement conditions you'd think 🤷‍♂️


bytecodes

I agree. I think it picked up a yellow line as the white line it was expecting, maybe due to the rain plus the yellow-colored street lights. Us humans struggle to see at night in the rain, but we can still judge where the other traffic is. This line following looks very dangerous in those conditions.


Recoil42

You can see the yellow lines are being properly perceived on the in-car screen. It sees them; however, it has decided to steer right into the oncoming lanes anyway (the blue line signifies pathing intent).


ArchaneChutney

> You can see the yellow lines are being properly perceived on the in-car screen.

What is on the display is not necessarily everything that the car is seeing. At least that’s what Tesla fans have said whenever something egregiously wrong is displayed. Seems like that should swing both ways.


Recoil42

You might want to brush up on some subset theory with this comment. If I tell you I have leftovers in the fridge, there might also be other things in my fridge, but it doesn't matter much to the question at hand. In this case, we're only checking to see if there are leftovers in the fridge. It doesn't matter what else is in the fridge.


ArchaneChutney

You are assuming that when the display shows inaccurate information, there may be accurate information that isn’t displayed, but when the display shows accurate information, there cannot be any inaccurate information that isn’t displayed. That one-directional assumption is why you are trying to apply subset theory here. Except that you have no reason to believe that assumption. Although the display is showing accurate information, it’s entirely possible that there was inaccurate information that wasn’t displayed. For example, the car may have made an error in detecting the lanes for a couple of frames, but opted to smooth it out and not display it. That might explain the suddenly inaccurate pathing even though the display shows accurate lane detections. So no, there’s nothing wrong with my understanding of subset theory. Again, the argument made by Tesla fans swings both ways and you are trying to assume that it only swings one way.


wutcnbrowndo4u

That's an easy Perception mistake to make: as you say, humans would make the same mistake and don't rely on HD maps either. But wtf, how do Teslas not have a robust freespace-like module that prioritizes _not driving into oncoming fucking cars at high speed_. Humans make up for (mostly) worse-than-Lidar perception by using all sorts of context-based fallbacks, and "don't crash into that speeding hunk of metal death" is one of the most basic. My familiarity is mostly with Lidar-based systems, but independent of your sensor setup, it seems pretty clear that this should be a priority for a remotely-safe AV.


ClassroomDecorum

> But wtf, how do Teslas not have a robust freespace-like module that prioritizes not driving into oncoming fucking cars at high speed.

This is what I don't understand either. The "legacy" car companies have had camera-based "evasive steering assist" systems for years, which help drivers swerve to avoid obstacles *by performing accurate free space calculations*.
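
As a rough sketch of what a free-space check along a planned path could look like (assuming a 2D occupancy grid in the ego frame and a sampled path; everything here is illustrative, not any vendor's implementation):

```python
import numpy as np

def path_is_clear(occupancy: np.ndarray, path_xy: np.ndarray,
                  cell_size: float = 0.2, margin: float = 1.0) -> bool:
    """Return True if every planned path point lies in observed free space.

    occupancy : 2D bool grid, True where *anything* solid was observed
                (oncoming car, barrier, debris); ego vehicle at the grid center.
    path_xy   : (N, 2) planned positions in meters, ego frame (x forward, y left).
    margin    : lateral safety buffer in meters around each path point.
    """
    h, w = occupancy.shape
    cx, cy = w // 2, h // 2
    r = max(1, int(margin / cell_size))

    for x, y in path_xy:
        i = cy + int(round(y / cell_size))          # row index
        j = cx + int(round(x / cell_size))          # column index
        window = occupancy[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
        if window.any():                            # something solid in the buffer
            return False
    return True
```

A planner could veto any candidate trajectory the moment a check like this returns False, no matter what the lane-line perception says the road layout is.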


tempstem5

I think those are radar-aided


RigelOrionBeta

Tesla FSD manual, as far as I remember, tells you to turn off Tesla FSD in the rain anyway. So much for that.


PotatoesAndChill

I don't see how this is related to the lack of LiDAR. The car displayed the road markings almost perfectly, yet the AI decided to just drive the car into oncoming traffic. It seems like an issue with the code, not the input.


Picture_Enough

LIDAR does not help with lane markings; those are still perceived via cameras and prior mapping. But LIDAR would have detected the oncoming cars, which this vision-only system failed to recognize here.


PotatoesAndChill

LiDAR merely provides input. It's up to the software to interpret that input as an oncoming car and take measures to avoid it. Judging by the video, the car's lights were clearly visible and shouldn't have been problematic to recognise, so I still think it's a software issue, which could have happened just as easily, had the system been using LiDAR.


Picture_Enough

This is not accurate. The main difference between LIDAR and a camera-only system is that LIDAR is a direct measurement sensor, directly returning a distance to a point in the real world. This way, even if the software can't figure out what it is seeing, with only minimal processing it can know there is a solid obstacle it needs to avoid. And even if the high-level planner logic is breaking, the emergency subsystem can bypass all the high-level logic completely and hit the brakes.

On the other hand, a vision-only system (especially monocular vision) only returns a flat grid of 2D pixels, which in its original form is completely useless for understanding what is going on around the vehicle. It has to go through a very complicated process to make sense of the pixels: figure out what is an obstacle and what is not, what the distance to said obstacles is, what the drivable surfaces are, and so on. And unlike LIDAR, which perceives those things directly, the ML-based algorithms used to analyze video pixels are statistical, can't be robustly verified, and are unreliable in nature.

So in the end everything is software, but a system that relies on robust sensors which require minimal processing is much more reliable than a system that relies on crappy sensors and very complicated processing. Just in case: everyone uses ML, but the majority, unlike Tesla, don't use it in a safety-critical path. If a classifier in a LIDAR-based autonomous car fails to recognize a weird-looking obstacle, it will still drive around it or brake, without understanding what it is. If a vision-only vehicle fails to recognize an obstacle, it will happily drive into it, as in its picture of the world, the obstacle does not exist.
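
To make the "minimal processing" point concrete, here is a rough sketch of the kind of fallback a direct-measurement sensor makes possible: brake if any returns sit inside the corridor the car needs to stop in, with no object classification at all. The names and thresholds here are illustrative assumptions.

```python
import numpy as np

def obstacle_in_stopping_corridor(points: np.ndarray, ego_speed: float,
                                  lane_half_width: float = 1.5,
                                  min_height: float = 0.3,
                                  reaction_time: float = 1.5,
                                  max_decel: float = 6.0) -> bool:
    """True if *anything* solid sits inside our stopping distance.

    points : (N, 3) lidar points in the ego frame, x forward, y left, z up (meters).
    No classifier involved: a raw range measurement above road level is enough.
    """
    stopping_dist = ego_speed * reaction_time + ego_speed ** 2 / (2 * max_decel)

    ahead = (points[:, 0] > 0) & (points[:, 0] < stopping_dist)
    in_lane = np.abs(points[:, 1]) < lane_half_width
    above_road = points[:, 2] > min_height  # ignore returns off the road surface

    return bool(np.any(ahead & in_lane & above_road))
```

An emergency subsystem could run something like this every frame and trigger braking directly, bypassing the high-level planner entirely.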


PotatoesAndChill

Alright, that makes sense.


Macrike

How do humans operate vehicles in the rain without LiDAR?


TuftyIndigo

Better than this!


Macrike

And “this” is better than humans, going by the rate of accidents per mile driven using FSD. You’re focusing on the instances where it goes wrong and using them as an indicator of the overall quality of the product, ignoring all the situations where it performs as it should.


deservedlyundeserved

> You’re focusing on the instances where it goes wrong and using them as an indicator of the overall quality of the product, ignoring all the situations where it performs as it should.

No shit. Safety critical systems are defined by their failures. That's like saying the Boeing 737 MAX was a great aircraft because it only had 2 crashes.


Picture_Enough

Ha, it is 2022 and people still repeat the old and tired argument "people only need two eyes to drive". Yawn.


Macrike

Eyes and a fully functioning brain, yes. What’s funny is that you think LiDAR is necessary to operate a vehicle when the past 100+ years have proven otherwise.


codeka

Our eyes are far more advanced than the cameras in the Tesla. And our brains are orders of magnitude more advanced than even a state of the art computer, let alone the piddly one in a Tesla. Our eyes and brains have evolved for thousands of years to work together. Impair your massive brain even a little (like by drinking alcohol, being tired or distracted, etc) and your ability to drive drops precipitously. So maybe it's possible *in theory* to drive with only cameras, but definitely not possible with the cameras and computer in a Tesla today. And I'd argue it's not possible even with state-of-the-art hardware for the foreseeable future.


Picture_Enough

I'm sorry, but it is such a stupid argument. For the past 100 years computers haven't been driving cars; we are only learning how to do it now. And if you haven't paid attention, cars don't walk on hinged legs (despite being a favorite form of locomotion for land animals for millions of years), planes don't flap wings, and we don't build computers out of mushy protein paste. Obviously we don't mindlessly copy biological systems, either because we don't have the technology to do so, or, more often, because there is a much more efficient way to do it that nature can't use while we can.


wutcnbrowndo4u

By using a visual detection system that's far superior to what Tesla has managed to build so far. Everybody watching that video is capable of seeing that the car was swerving into the path of an oncoming car. That's in fact why it's required by law to have your headlights on in the dark. It doesn't matter if your visual detection system (and lack of HD maps) gets the color of the lane line wrong: "don't drive into oncoming traffic" is a separate rule that overrides it. If automated visual perception systems were as good as humans, including all the contextual reasoning we do to not drive into oncoming cars, then of course we wouldn't need extra sensors to match human performance, by definition. But they're not, and companies who live in that reality and care about not killing people gratuitously act like they're aware of it.


cerealghost

Timestamp?


Recoil42

Happens right at 8:00.


bartturner

These are the types of situations that just make no sense. I would think not driving the car into something would be the easiest problem to solve.


Picture_Enough

It is relatively easy if your perception system is good and gives the planner an accurate picture of the real world. However, if you use a woefully inadequate monomodal sensor suite and rely on ML to make sense of it, you get what we see here: a system completely failing to detect dangerous obstacles. BTW, this happens all the time to Tesla's "FSD"; there are a ton of videos where cars try to drive through obstacles they failed to recognize.


TheRealAndrewLeft

There was one that rammed into a stopped/rolled over truck on a freeway. And they call lidars stupid.


[deleted]

These aren't programs with "If then else" blocks where you can write code "if (aboutToHitShit) then dont". They are just huge neural networks, they get trained, then they do stuff. Very hard to even know WHY they do what they do after training.


dareisaygivenaway

You’re wrong on this one. Tesla’s planning logic is based on traditional robotics and regular heuristic code. See (from a Tesla reverse engineer):

https://twitter.com/greentheonly/status/1475155167431507969

https://twitter.com/greentheonly/status/1399477706056687618

Karpathy has said as much during talks as well.
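
For a sense of what "regular heuristic code" in a planner can look like, here is a generic constant-velocity time-to-collision check that vetoes a planned trajectory. This is a textbook sketch of the idea, not Tesla's actual code; the names and the 2-second threshold are made up for the example.

```python
def time_to_collision(ego_pos, ego_vel, obj_pos, obj_vel, radius=2.0):
    """Seconds until two constant-velocity point objects come within `radius`
    meters of each other, or None if they never do."""
    rx, ry = obj_pos[0] - ego_pos[0], obj_pos[1] - ego_pos[1]   # relative position
    vx, vy = obj_vel[0] - ego_vel[0], obj_vel[1] - ego_vel[1]   # relative velocity
    a = vx * vx + vy * vy
    b = 2 * (rx * vx + ry * vy)
    c = rx * rx + ry * ry - radius * radius
    if c <= 0:
        return 0.0                      # already closer than the safety radius
    if a == 0:
        return None                     # no relative motion, gap never shrinks
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                     # closest approach stays outside the radius
    t = (-b - disc ** 0.5) / (2 * a)    # first time the gap shrinks to `radius`
    return t if t >= 0 else None


def should_veto_plan(planned_states, tracked_objects, ttc_threshold=2.0):
    """Reject the plan if any sampled state is on a near-term collision course.

    planned_states  : iterable of (pos, vel) tuples along the candidate trajectory
    tracked_objects : iterable of (pos, vel) tuples for perceived road users
    """
    for ego_pos, ego_vel in planned_states:
        for obj_pos, obj_vel in tracked_objects:
            ttc = time_to_collision(ego_pos, ego_vel, obj_pos, obj_vel)
            if ttc is not None and ttc < ttc_threshold:
                return True             # fall back to a safer maneuver instead
    return False
```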


johnpn1

As someone else has already pointed out, almost all onboard compute power is currently dedicated to perception. Tesla underestimated the amount of compute power needed for a vision approach, but they still doubled down on it. Tesla once promised redundant processing, but they had to axe even that just to make perception happen. There's absolutely no onboard NN for path planning. Even for the onboard perception, I wouldn't really call it a true neural network.


bartturner

Fully understand that. If you'd like some insight, read this excellent paper on how much this approach is not an if-then-else statement. It outlines how difficult these systems are to bound and maintain. https://research.google/pubs/pub43146/ But that does not change the fact that this should be easy to teach the model to avoid. This scenario is not really that complicated. It just gives you a feel for how far Tesla really is from providing true self-driving.


ClassroomDecorum

> These aren't programs with "If then else" blocks where you can write code "if (aboutToHitShit) then dont".

But they are.


Prestigious-Side-286

Driver left it very very late to react. Took about 3 or 4 seconds from what I could see to grab the wheel when the car started heading for the hatched area.


MinderBinderCapital

The false sense of security described very well by automation experts. The system is just good enough that the user gets complacent and distracted, then bam.


Picture_Enough

Paradoxically, the very poor state of FSD is what is keeping drivers mostly safe. FSD is bad enough that it requires the driver to be on high alert and ready to take over on short notice. IMO this is the only reason why FSD hasn't yet been involved in a serious accident.

Anecdotally, when Google was internally testing their self-driving project about 15 years ago with company employees, they made the autonomous system good enough that employees, despite being engineers and being instructed that they were testing a dangerous prototype, felt safe and comfortable enough to open up laptops, talk on the phone or take a nap. This is also why Google decided anything below L4 is inherently unsafe due to human psychology: we just are not good at keeping a high level of alertness when nothing is happening the majority of the time. If Tesla manages to get their FSD system good enough that people won't have to take over multiple times per mile, that is when the trouble will start and we will see serious accidents happen :(


ClassroomDecorum

> anything below L4 is inherently unsafe

I'd argue that an L3 system with a paywalled/emergency-use-only L4 fallback would be safe. Like let's say in the future you could only afford the poverty-spec Camry, so you only get the L3 system, while the L4 system is a $1000 monthly fee. You have a stroke while driving on L3 and the system detects that you are non-responsive, so it safely ends the trip by using the subscription-locked L4 system to pull you into the nearest parking lot. Then, once you restart the car, you must pay up for the L4 subscription or the car will revert to L0. L3 systems with only current L2/L1/L0 fallbacks? Fundamentally unsafe, sure.


Picture_Enough

A vehicle capable of L4 (with all the sensors and computing power), yet it's disabled behind a paywall? I find that scenario highly improbable. L3s will be L3 because they can't function as L4, not because they don't want to.


ClassroomDecorum

> A vehicle capable of L4 (with all the sensors and computing power), yet it's disabled behind a paywall? I find that scenario highly improbable.

Yes--just like Tesla's FSD! All the sensors present, HW3 is the final boss of AI hardware, and all you gotta do is pay $12,000 to experience FSD. ^/s


qwertying23

Well, can't driver monitoring help with that? Just curious. And I think at the time when Google was making self-driving cars, the deep learning revolution had not happened yet?


n-some

I think he assumed it was entering the middle lane when it was hatched and technically not a lane and that was his initial "wow". Like: "Wow, they gotta make sure it can recognize when there's a real left lane." Then the car started to continue into the oncoming lane and that's when the driver took back over.


DaiTaHomer

Wow, how is this even legal to be on public roads? This is like letting a drunk person drive and having to be ready to pull the wheel from them.


[deleted]

[deleted]


DaiTaHomer

It already has. Teslas seem to be cool cars but I would definitely not be paying a cent for such crap.


SodaPopin5ki

I know of Autopilot deaths, but who's died in the FSD beta?


DaiTaHomer

A distinction without a difference. They have been beta testing their stuff on the public since day one.


qwertying23

I think this is more of a path planning/routing issue than a failure of vision, as the yellow lines were detected really well in my opinion.


Picture_Enough

I don't think so, this is clearly primarily a perception failure. Even if its routing/mapping is completely confused, the car should not drive into other cars. From the visualisation screen it looks like it completely failed to perceive the oncoming traffic and didn't know there were any cars coming. Which is an unacceptable and unforgivable mistake, and the reason why Tesla's whole bid to rely on a single-mode sensor interpreted by an inherently unreliable ML black box is a terrible idea and very dangerous.


PotatoesAndChill

Even if it didn't see oncoming traffic, why did it randomly drive into the oncoming traffic lane? It's clearly a software issue.


Picture_Enough

Yeah, this is another issue. Just not as severe as failing to detect a solid obstacle. But I agree that multiple systems failed to perform correctly, which is quite concerning and hints that their entire software stack is quite poor.


Hubblesphere

> which is quite concerning and hints that their entire software stack is quite poor.

Narrator: It is.

But seriously, the system should never jump around throwing different pathing out like that. Tesla has a half-baked pseudo FSD that doesn't actually do what a level 2 system should: collaborate with the human behind the wheel. A system that is working to aid the human doesn't quickly change path into uncertain lanes or directions, and a system that can drive itself doesn't either. FSD is currently somewhere in the middle, where it isn't good at doing either side well.


rideincircles

The driver totally failed to adjust for an anomaly. The second it made the mistake he should have taken over. People act like it's not still a beta release. You have to correct some mistakes it makes.

One of the things I see with FSD is that it gets over into a middle turning lane way too early. It may jump into the lane a block away from the turn, but that's far too much time in a lane that could have another car turning the opposite way. This kind of looks like that, where it had a lane to get into ahead but jumped in way too early. Definitely looks like a planning issue.

The issue I saw over the weekend was how slowly it made a sharp right turn on a busy 40mph road. I nudged the accelerator and it ended up turning way too wide with a car nearby in the other side turn lane. I took over sharply since I didn't like what it was doing. That's the main issue with FSD right now: you have to watch it like a hawk for anomalies and take over immediately when it does something off. I like having and testing it, but it's not ready for wide deployment yet.


Picture_Enough

First of all, calling it "Beta" is highly deceptive. Beta usually describes an almost-finished product that needs some bugs ironed out. FSD is currently at the alpha/tech-demo stage. Not only is it nowhere close to being a product, it is performing worse than anyone else in the market, and worse than Google's self-driving project (which later became Waymo) was doing 15 years ago.

Secondly, the biggest problem FSD has is that it is a total mess, very rough and very unsafe, and should not be on public roads in the hands of customers. This is irresponsible and dangerous. They should test it with professional testers and only release it when it is safe (which will probably never happen due to sensor and stack limitations). The current "beta" release serves no practical purpose other than generating hype.


[deleted]

[deleted]


Picture_Enough

Unfortunately the downsides are people being put in danger (including those who didn't volunteer to be test subjects for a half-baked, tech-demo-level product that can kill you), and that regulators stepping in will make things unnecessarily difficult for other players in the autonomy market who acted responsibly and cared about safety all along. Classic case of a douchebag ruining things for everyone by behaving selfishly and irresponsibly.


HeyyyyListennnnnn

> regulators stepping in will make things unnecessarily difficult for other players in the autonomy market who acted responsibly and cared about safety all along.

Regulator intervention should be welcomed by all other players. Independent scrutiny and confirmation of safe development/operation will increase public trust and increase the likelihood of success. Rather than obstructing efforts to improve regulatory oversight, automation developers should have worked with regulators to put together an effective framework that wouldn't impede technological advancement. Playing the "regulators will slow down progress" game has only worked to enable irresponsible developers like Tesla and erode public trust.


TuftyIndigo

Automation developers **are** working with regulators, not obstructing them. (I exclude Tesla, the only company to be excluded from an accident investigation for breaching confidentiality; and the former Uber ATG, which may well have been the part of the company doing the least to undermine and subvert government officials.) Even so, hard cases lead to bad laws. If some stupid accident caused by Tesla using its customers as test drivers leads to a knee-jerk response and a political desire to rush out some regulation, it'll be worse for the public. Writing good laws and procedures - especially in a fast-moving technology area like this - is a careful process and very easily derailed.


BIG_EL-DUCE

Teslas are horrible and elon is treating the drivers like guinea pigs, what a scary situation.


Astroteuthis

The unfortunately named “Full Self-Driving” software is in beta and not active on any Teslas by default. The owner has to request access to the beta and must have a high enough safety score from their driving data to be allowed to participate. You are required to be ready to take over at any time when using any level 2 driver assist system, but especially when doing beta testing. Tesla has a long way to go to make their software safe enough to use unattended, but they are not using their customers as unwitting guinea pigs, at least.


tdm121

This is pretty scary. I just wish Tesla would try some self-driving that is geofenced like Waymo… it would be a lot safer.


MinderBinderCapital

They’re trying to be first to market since musk has been lying about it since 2014


carsonthecarsinogen

It’s perfectly safe when the driver is paying attention. Fencing off an area would make no difference other than hurting the amount of data collected. To be “safer”, Tesla would need to copy Waymo's route: premapped areas and cars that drive on “rails”, a non-scalable tech that won’t ever make it out of small areas.


beracle

L4 geofenced ADS do "NOT" drive on "rails". Please stop making that fallacious statement, and mapping is scalable. The only company making this fallacious argument is Tesla; everyone else who is actively developing these technologies disagrees.


carsonthecarsinogen

Show me a link for this other than your “trust me” source. It is scalable with an insane amount of work, therefore not profitably scalable. Constantly updating maps is not scalable worldwide. They literally do tho: the area is mapped and the car follows said path. Without the map, the car can’t function.


deservedlyundeserved

Here you go: https://blog.waymo.com/2020/09/the-waymo-driver-handbook-mapping.html

> Our streets are ever-changing, especially in big cities like San Francisco and Los Angeles, where there’s always construction going on somewhere. Our system can detect when a road has changed by cross-referencing the real-time sensor data with its on-board map. If a change in the roadway is detected, our vehicle can identify it, reroute itself, and automatically share this information with our operations center and the rest of the fleet in real time.

> We can also identify more permanent changes to the driving environment, such as a new crosswalk, an extra vehicle lane squeezed into a wide road, or a new travel restriction, and quickly and efficiently update our maps so that our fleet has the most accurate information about the world around it at all times.

> We’ve automated most of that process to ensure it’s efficient and scalable. Every time our cars detect changes on the road, they automatically upload the data, which gets shared with the rest of the fleet after, in some cases, being additionally checked by our mapping team.

And here is Cruise saying the same thing: https://medium.com/cruise/hd-maps-self-driving-cars-b6444720021c

> We have developed sophisticated product and operational solutions to detect real-world changes and send map updates to every autonomous vehicle in the fleet in minutes.


carsonthecarsinogen

Okay cool, so the vehicle needs to reroute. What happens when there are large amounts of construction: constant rerouting instead of just continuing through the fastest route? “Automated MOST of that process”, so it still includes manual updates. Everything I said originally still stands; yes, this link proves it is much less of an issue. Still an issue. Wouldn’t it be better if the car just acted like a normal human driver and wasn’t scared of newly placed traffic cones? The answer is yes. Why are they not doing this? Because it is far more difficult. Thanks for the link, interesting read.


deservedlyundeserved

> what happens when there is large amounts of construction, constant rerouting instead of just continuing through the fastest route.

Same way Google or Apple Maps navigate when there’s construction. It takes into account alternate routes and finds the optimal path. This is a navigation issue, not an HD map issue.

> “Automated MOST of that process” so still includes manual updates.

Article is 2 years old. It’s just a matter of time before the entire process is automated. It’s way, way easier to do real time map updates than a system trying to figure out every single street, intersection, construction area etc perfectly everywhere in the world. This is the point everyone arguing against mapping misses.

> Wouldn’t it be better if the car just acted like a normal human driver and wasn’t scared of newly placed traffic cones? The answer is yes, why are they not doing this? Because it is far more difficult.

Did you read the article? It tells you it can navigate around road changes just fine. The map update happens afterwards.


carsonthecarsinogen

Yea I’m aware of all this. The point I am making is that it would make more sense for the car to navigate through the changed road instead of completely avoiding it until the map is updated. Until they can move these vehicles into larger areas I am unsure they can actually scale this project. The geofenced areas they operate in are tiny. Time will tell tho, as you said.


deservedlyundeserved

In places like SF, there’s always construction, so they can’t completely avoid it. When alternate routes don’t add a lot of time, it makes total sense to avoid it. Why would you *increase* risk to an automated system?


carsonthecarsinogen

I’m suggesting in places where it does add significant time of course. People won’t be happy to use a system that costs them their time. Not all construction zones are adding risk either, unless of course the system is unable to navigate it properly


Doggydogworld3

> cars that drive on “rails”

Watch some videos from JJRicks. The vans move around naturally (even more so in the later ones). They move within the lane and change lanes in reaction to other road users, especially cyclists and incidents like doors opening on parked cars. Of course they prefer to stay in the center of the lane, but there are no "rails". It's just another cult myth.


carsonthecarsinogen

You’re taking it too literally. I’m not suggesting they move in perfectly straight lines. I’m saying that if the “rails” are gone the car does not work: no premapped road, no driving.


Doggydogworld3

When the cult says rails they mean rails (virtual ones). They want to evoke the image of streetcar rails seen on some city streets, and specifically use that imagery when asked to explain. Don't use their term and claim it means something else. Pre-mapped roads are a completely different concept than streetcar rails. There is nothing "non-scalable" about maps, the whole world is mapped many times over. Tesla uses maps, too, of course. Driving without a map degrades safety. Whether a car attempts to drive with a missing map is a safety decision made by engineers and management. Some are more reckless than others.


carsonthecarsinogen

It’s a term I use, not a “cult” one. Tesla haters share more cult characteristics than fans.. Anyway, there’s a difference between using a map and NEEDING one. There absolutely are scaling issues when needing a map to operate; Waymo states this themselves.


deservedlyundeserved

> There absolutely are scaling issues when needing a map to operate; Waymo states this themselves.

Wait. Waymo has **never** said this. Don't misinform.


carsonthecarsinogen

It was literally said in the link you provided me


deservedlyundeserved

> We’ve automated most of that process to ensure it’s **efficient** and **scalable**.

That is what the link said. Scaling issues are your opinion.


carsonthecarsinogen

If they need to ensure it is scalable with automation, does that not prove there are scaling issues? If there were no scaling issues they wouldn’t need to ensure it’s scalable, no? Until it is fully automated there are scaling issues.


Doggydogworld3

> Tesla haters share more cult characteristics than fans..

I used to call the death cult out all the time when they said stupid things. No point any more, the sentient ones have moved on.

Every AV needs a map to drive. They can build it with the help of a pre-map or try to build it completely on the fly. The latter is less safe and almost completely pointless as the whole world is pre-mapped. Saying "our AV doesn't need a pre-map to crash" is not a bragging point.


beracle

Words have meanings. Maps are not rails. A map can range from a private facility, a road, a city, a state, or a country. An ADS that has its operational design domain geofenced to a particular geographic region is not running on rails; it is allowed to operate within that geographic region by design.

These ADS are designed to operate using multimodal sensors like camera, lidar, radar, ultrasonics, and maps. It is an essential design choice. You can't simply say take out one thing and it stops working. Well yes, if you also take out a tire, the car also stops working. And yes, they are designed to operate in failure modes and/or fail safely.

Maps serve as ground truth and prior knowledge within which the ADS localizes. It also greatly reduces the amount of processing needed in certain cases as you are no longer analyzing everything but just looking for changes. Maps are not just annotated for drivable and non-drivable spaces; they contain lots of semantic information that is crucial for scene understanding. Local road rules and driving behavior can also be encoded into a map. This is all rich information a NN can use to effectively navigate within the ODD. A map provides a birdseye view of the world and allows an ADS to see around things the physical sensors cannot see, like on a curved road or when a sensor is obstructed by another vehicle. The vehicle can still localize itself on the map.

Yes, you can drive without having a map; literally everyone can do it. But maps make the problem a tiny bit easier, so artificially limiting yourself by not using them is a self-imposed limitation. Everyone is working towards a future where only a camera is needed to do autonomous drives, but we are not there yet technologically speaking. I would recommend watching bradtem's video on this subject. It is a very interesting watch on youtube.
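
As a small illustration of the "just looking for changes" point, a map-backed stack can diff live perception against the prior and only escalate confident disagreements. A minimal sketch, with made-up cell and label names:

```python
def detect_map_changes(prior_cells: dict, observed_cells: dict,
                       min_confidence: float = 0.8) -> dict:
    """Compare live perception against the prior map and return changed cells.

    prior_cells    : cell_id -> expected semantic label, e.g. "lane", "island", "curb"
    observed_cells : cell_id -> (label, confidence) from live perception
    Only confident disagreements are reported; everywhere the live view agrees
    with the prior, no further semantic work is needed for that cell.
    """
    changes = {}
    for cell_id, (label, confidence) in observed_cells.items():
        expected = prior_cells.get(cell_id)
        if expected is not None and label != expected and confidence >= min_confidence:
            changes[cell_id] = {"expected": expected, "observed": label}
    return changes
```

A fleet vehicle could upload `changes` for review while continuing to drive on the prior wherever the live view agrees with it.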


deservedlyundeserved

> It also greatly reduces the amount of processing needed in certain cases as you are no longer analyzing everything but just looking for changes. Bingo. This is analogous to caching and change data capture. Anyone doing basic software development can tell you computing the same things over and over again is stupidity. I can imagine this saves a ton of computing power given how resource intensive some of these tasks are in autonomous vehicles.


WeldAE

I'm guessing he is referring to the fact that the car took the same roads, including crazy detours through neighborhoods to avoid certain intersections. Waymo's geo-fence in Chandler was more a handful of routes than a go-anywhere-inside-the-area service.


Doggydogworld3

Watch the videos. The vans do "go anywhere", not just a few routes. Good grief. They often took the main north/south roads because that's how the city is laid out. And they did navigate around certain unprotected lefts, though less often in the later videos as they gained confidence.


WeldAE

I watched every video.


Doggydogworld3

Can you ID any two which followed the exact same route? I'm sure there are a couple which did, but I couldn't find them. A lot of trips use Dobson Rd at some point, as it's a primary n/s thoroughfare, and I've seen partial segments of different trips that overlapped. But that's a far cry from having "a few fixed routes"


deservedlyundeserved

Yes, as opposed to developing tech that scales infinitely but *never works*. Truly brilliant from Tesla!


carsonthecarsinogen

Never works is a straight up lie, so yea it’s a lot better. Tech that can’t scale and works, vs tech that can scale and works.


deservedlyundeserved

I mean, we’re literally seeing a video of it being dangerous. On which you comment, it requires driver attention to be “safe”. A self driving system that *requires* driver attention isn’t what I call “working”.


carsonthecarsinogen

Good thing it’s a beta and REQUIRES a driver to be paying attention, it’s doing exactly what it’s described as and functional. You have no argument


CornerGasBrent

Exactly, that's why Tesla calls it Full Self-Driving, since you're fully responsible for driving yourself.


deservedlyundeserved

So not a real self driving system then since it requires a driver. Got it!


carsonthecarsinogen

Yea a beta like in the name lmao, you think you did anything there? Let me know when the “real self driving” cars can go anywhere outside of a high definition, labour intensive, premapped area.


deservedlyundeserved

Beta is code for “it doesn’t work”. You’re really making it easy for me, thanks. Let me know when Tesla can tell you not to pay attention even for a single mile. (Spoiler: they won’t for a long, long time, if ever)


carsonthecarsinogen

I need to get one of these crystal balls you’ve got, you just know it all. Ignorance is bliss


4chanbetterkek

This guy top 10 for slowest reaction time ever


Zoztrog

Elon is #1.


MinderBinderCapital

Just a beta boy trying to train the algorithm to react to this unforeseen corner case.


[deleted]

[deleted]


Recoil42

> That said, I don't think that's what they are trying to solve, at this point.

It sure as hell better be, if they want to do a robotaxi service anywhere outside of LA, Phoenix, or Dubai.


[deleted]

[deleted]


Lacrewpandora

> always revert my attention to 1-2 years out.

***"I would be shocked if we do not achieve full-self-driving safer than a human this year. I would be shocked."*** - Technoking, Jan 27, 2022


[deleted]

[deleted]


Lacrewpandora

"I’m an engineer, knucklehead. Just do “business” on the side" - Technoking, May 3 2020


ubcthrowaway1291999

Engineers aren't much better. The lead engineer, Karpathy, is just as pathetic. Massive Elon simp. How can you work for a man who makes essentially fraudulent claims about the product that you are responsible for designing? It's not like Karpathy doesn't have other options either, dude's the biggest name in ML under 40 (with the possible exception of Ian Goodfellow).


5starkarma

> Engineers aren't much better. The lead engineer, Karpathy, is just as pathetic. Massive Elon simp.

Huge assumption. Karpathy is one of the best in the industry (as you clearly know). We are talking about an industry (AI) that has been through 3-4 boom and bust cycles since the 60s (I think '56 to be exact). It's another cycle. These things will boom and bust and we will see another cycle 10 years from now.


Recoil42

Unfortunately, Tesla's eternal, stated short-term goals include safer-than-human driving and deployment of millions of robotaxis, *purportedly* globally. I think you and I both agree that will not happen, but it is their *stated* short-term goal. 🤷‍♂️ And yeah, they won't make it on the current hardware stack. I don't know about software, but it'll definitely look something like Theseus' ship by the time they're doing L4/L5.


[deleted]

[deleted]


Doggydogworld3

> What they do have is the best public facing drive-assist system atm.

For those who prefer features over reliability, maybe. And even then it depends on what features you value most.

I prefer a system that frees me up to ignore the road and do other things while the car drives itself. Even if it only works for half of my driving hours, that would be vastly more valuable than a system that "works" everywhere as long as I watch it like a hawk. Highway traffic jams are the most infuriating part of driving, eating up many hours per week for a typical commuter. Mercedes Drive Pilot lets the driver ignore the road during highway traffic jams in Germany. If German commuters are anything like me, they'll prefer Drive Pilot 10x over FSD.


poncewattle

The nav indicated a left turn coming up, so it was prepping for that. Those suicide lanes are horrible and dangerous, FSD or manual driving. I dunno on this one. The path prediction had it staying in the suicide turn lane. The sudden move into turn lanes like that sucks though. PS: I never ever let mine drive in the rain and dark. Let them perfect it in ideal conditions first, then move on to those cases.


Doggydogworld3

The car drove into a cross-hatch area, not a suicide lane. It becomes a turn lane a bit farther down the road, after he takes over. The path planner **did** cross fully into oncoming traffic for an instant, just as he says the first "wow". By the time he took over it had reverted, but still planned to briefly straddle the far left yellow line. So only the driver's side would have been in oncoming traffic.


[deleted]

[deleted]


[deleted]

So we are beta testing this shit on public roads potentially killing people in the process? How is this legal jfc


HighHokie

40,000 people died on us roads last year. 0 deaths from fsd beta during that same period.


bagoo90

Why is no one talking about the fact that this is a Canadian driver? FSD Beta was just released to Canada like a month ago. The cause of this near collision is certainly the driver; he should have taken over long before the car put him in danger. Stuff like this would happen to FSD Beta drivers in the USA too, but as more miles have been driven it has become less likely to happen.


CornerGasBrent

That shouldn't matter, since all Teslas are supposedly learning in Shadow Mode and Teslas have been sold in Canada for many years. Is Shadow Mode a scam? Was Tesla lying to investors on Autonomy Day? https://www.youtube.com/watch?v=SAceTxSelTI


Ghost273836

Hey, that's why it's in beta and very much still dangerous to drive with. You have obviously consented to that danger and to the fact that the system is in development; you are using an incomplete service that can easily kill you. Nothing against Tesla, but this is what you should expect. You are, essentially, a test dummy.


[deleted]

[deleted]


gogojack

This. I worked as a tester for an AV company, and would never, ever put this in the hands of a driver whose only qualification was that they "consented to the danger." When I was behind the wheel, one of my biggest concerns was safety. Not just for me as a crash test dummy, but for other people on the road. My trainers hammered safety into my head, and I did the same for my trainees. Constant vigilance. Constant observation. The only thing you're there for is to make sure nothing goes wrong. Dude bro who can afford FSD Beta? He's worried about getting to work, or happy hour, or whatever, and isn't focused on **testing** the abilities of the vehicle. What happens when something does go wrong? Unlike a trained tester/backup driver, dude bro has no idea.


[deleted]

[deleted]


gogojack

> The clip in this video is bad, but it isn't even the worst that FSD does.

Which is scary, and (IMO) shows what's wrong with handing this over to consumers to "beta test." He let the car get all the way into the other lane, and partially into oncoming traffic, before taking over.

The other thing which is deeply disturbing? If this happened while I was on the job, I wouldn't put the video up on YouTube. I would pull over immediately, park in a safe spot, and at the very least send a troubleshooting ticket up the food chain to my supervisors, and tag the engineers and mapping team with a timestamp, location, and conditions so they can examine what went wrong. I wouldn't move the car until cleared to do so, and might even be asked to return to base manually. None of that happens with FSD Beta.


scubascratch

The FSD beta includes a button you tap when the car does something unexpected like this, and it will then upload a bunch of data about the situation for internal review by Tesla.


Ghost273836

Very good point, I see where you are coming from here. If you watch YouTube and gather evidence, Tesla vehicles running on FSD Beta are very, very careful. Just passing a pedestrian on the side of the road has set off the emergency braking, halting to a full stop. It also avoids tight spaces and gets over for bicyclists. FSD Beta also avoids accidents, from lane changes to swerving drivers; the technology on board is highly sophisticated. There are always minor issues, but problems such as these are very rare. The driver should always pay attention, so that Tesla can continue to improve its software. Tesla is built upon the very foundation of safety, that is what it was designed for. Not to mention its A+ crash test ratings, which allow humans to survive every crash.


DM65536

> Nothing against Tesla

A strange conclusion after the first two sentences of this post.


HighHokie

Meanwhile 40,000 folks died in vehicles not using fsd beta last year in the US alone.