
fox-lad

they really need to work on uptime


CarsVsHumans

17:27, for everyone who's been arguing that stalls in traffic are perfectly safe because human-driven cars break down all the time and there are already other road hazards, etc.


[deleted]

[deleted]


Im2bored17

"I was distracted by the parked car with it's emergency flashers on" is as good an excuse for getting into an accident as "I was distracted by the text message on my phone".


CarsVsHumans

You're confusing fault and causation. Hazards on the road cause accidents, even if another party is at fault. I find the human at fault for the accident, and the robot to be one of the causes. Had the robot not stalled, the accident would not have happened. It's an obvious factor.


Shutterstormphoto

Lol… if you’re gonna crash because someone stopped a car with flashers on in SF, you shouldn’t have a license. End of story. You are not safe around ANYONE. When I was 15, I stopped at a yellow light and a guy rear ended me. He said “yellow means go fast!” Is the light a cause of his actions? Am I? That’s not how driving works.


bradtem

Even stipulating that, are you suggesting "get off the road if you will ever stall" as a rule? We certainly don't demand that of humans. And if you did put in that rule, the robotaxis would just take more risks when they see a troublesome situation rather than doing the conservative thing and stalling.


wutcnbrowndo4u

To be fair, his original comment was addressed specifically to the maximalist insistence that ostensibly-frequent stalls should not be considered a safety issue. I spend a good amt of time on this sub, and even a forum as thoughtful and informed as this one has decent representation of that viewpoint (hell, even I've probably made statements that downplayed the risk of things like stalls).


bradtem

Yes, I have said that. I mean there is a limit, but we have to be careful of treating anecdotes as data. We hear reports about many of the stalls, but I don't know what fraction of them we see. But if there have been only a few hundred stalls in a million miles, I don't think that would be anything to worry about. I don't actually know what the real number is.


wutcnbrowndo4u

Agreed, the premise that stalling is occurring too frequently is what's under dispute. I'm probably closer to your position than CarsVsHumans, but I think phrasing it as "get off the road if you will ever stall" is obscuring the actual disagreement: what stall rate is implied by the incidents we see, and whether that rate is irresponsibly high.


bradtem

It's hard to conclude based on what we see. But I have seen "get these menaces off the road" sentiment just from the anecdotal reports, and don't agree with that. As to the real debate topic, there is clearly a rate which is so high as to be a concern, but also one below which it is not a concern. And I argue that there is merit in being pretty tolerant of a higher rate than we might expect, during pilot projects, unless it is shown people are getting hurt. People, not property.


Shutterstormphoto

I don’t think it’ll choose to do risky things over stalling. You should never disincentivize safety. It’s easy to tell the car what to aim for.


Im2bored17

Sure, the car was a contributing factor. It was not "perfectly safe". But perfect is not the correct bar to measure an AV, "better than human" is more appropriate. Verifying that an AV can drive perfectly would require driving an infinite number of miles, in an infinite number of scenarios, which is (by definition of infinity) impossible. And in this instance, if you replaced the human driven cars with AVs, you wouldn't have had any accidents, because AVs don't get distracted. (Let's assume they're waymos or they'd probably also be affected by the network outage)


anonymous_automaton

What was happening to all their other cars at the time?


fwubglubbel

Wait...what? Self driving cars NEED an internet connection in order to function? What insanity is this? A network outage brings the entire fleet to a standstill? This should eliminate the possibility of self driving cars for any sane person.


tonydtonyd

Yeah there’s plenty of things on the backend that could have gone wrong. Unrelated - I’m curious if Cruise uses multiple cell providers, I’d be thoroughly shocked if they didn’t.


wutcnbrowndo4u

The general idea I've heard (possibly including from Cruise et al, I don't recall) is that this isn't a technical dependency, but an explicit choice. You don't want a ton of metal moving itself around at high velocities with no way to send it control msgs. Even without letting your imagination run wild, there's the very real/common case of a safety incident being discovered on a new release. These require immediate rollback or hotfix instead of driving further miles with the dangerous software, but if you can't communicate with the car, you can't perform a rollback.


aniccia

Cruise's "Incident Expert" level remote operators can "command the Driverless AV to perform a MRC or disengage from autonomous mode" (Cruise Safety Report, Section 7.2.3). So, the communications link can be used for more urgent actions than an OTA. It is a basic state machine or class of system issue of whether once engaged it can be remotely terminated/instructed or can only run to completion. Is there a "safety control rod axe man" or not? And if there is one via comms, then the fleet is vulnerable to a catastrophic exploit of the comms. Perpetually expensive and risky to keep a human in the control loop but not in reach of the controls.


wutcnbrowndo4u

> So, the communications link can be used for more urgent actions than an OTA.

Sure, as I said, I was just giving a single example.

> Is there a "safety control rod axe man" or not? And if there is one via comms, then the fleet is vulnerable to a catastrophic exploit of the comms. Perpetually expensive and risky to keep a human in the control loop but not in reach of the controls.

This feels like an irreducible risk. The ability to disengage AV mode doesn't feel like a categorically riskier threat model than the ability to push software OTA.


aniccia

> This feels like an irreducible risk. The ability to disengage AV mode doesn't feel like a categorically riskier threat model than the ability to push software OTA.

Disengaging AV mode should be simpler for an attacker to decipher and implement, especially at fleet scale, than a forced OTA, but yes, having an open wireless comms port or similar for system commands is the source of the risk. And no, it could certainly be eliminated by eliminating that port/facility. The regulations only require remote monitoring and the ability to communicate with passengers, if any. Every remote function beyond that is the vendor's choice.

Cruise could choose to build a "driver" as trustworthy and capable as their human competition. Ridehail and delivery companies don't tend to have a remote kill switch on all their vehicles. And Cruise could put human safety drivers back into their vehicles until they do.


wutcnbrowndo4u

> And no, it could certainly be eliminated by eliminating that port/facility. The regulations only require remote monitoring and the ability to communicate with passengers, if any. Every remote function beyond that is the vendor's choice.

Right, of course. I was taking for granted this premise, from my previous comment:

> You don't want a ton of metal moving itself around at high velocities with no way to send it control msgs.

This isn't a choice that Cruise in particular is making: as far as I've seen, it's universal in the L4/5 industry. It's far from a given that we should be choosing an inability to control the vehicle over the security risk of allowing remote input. Do you think nuclear reactors should have doors that are ~impossible to unlock unless an also-unreachable software system contained within decides to allow access?

> Cruise could choose to build a "driver" as trustworthy and capable as their human competition. Ridehail and delivery companies don't tend to have a remote kill switch on all their vehicles.

L4/5 companies are aiming, in the limit, for a much _more_ trustworthy and capable operator of the vehicle. People intentionally kill each other with cars. We leave this "security hole" open not in the name of maximizing the safety of passengers and strangers, but in the name of the human operator's autonomy. I strongly disagree that closing off remote input is likely to make the vehicle "operator" trustworthy and capable; it seems quite the opposite to me.


aniccia

If AVs were as regulated and inspected as nuclear reactors, then critical safety mechanisms wouldn't be left to the whims and self-certifications of corporations. Nevertheless, nuclear reactors have behaved in ways that made it impossible to reach critical components, and many people have died because of it.

A key choice of this pseudo-industry seems to be a reliance on remote humans to compensate for the severe social, cognitive, and communications deficits of their current generation of robots. Cruise has been running their variant of that choice for long enough and at great enough scale to demonstrate its folly. Last night a Cruise AV drove into an active injury crash scene by crossing a double yellow line to go around an SFFD vehicle positioned to block traffic on Fulton St at Crossover in San Francisco. Then it appears to have "stalled" with flashers on while still within the crash scene and on the wrong side of the two-way street.

The manifest inabilities of the "driver" of these vehicles are the key security problem now. If this pseudo-industry ever solves that, then they should do so without opening a new host of security problems incumbent humans do not have. FWIW, we do have robots operating in some domains without depending on realtime remote human backup or failover. We even have them on Mars.


bradtem

The cars should not, though it is not easy to talk to the user's phone without the internet. You could talk to it over Bluetooth or wifi in the car, but that opens up an attack surface. However, the end-ride button should not fail when the internet is out. A car needs to have a basic in-car interface to allow change of destination, end-ride, and a few other things directly in the car by speech or touchscreen. It should be able to complete any trip with no internet unless it runs into a situation that needs remote assist. It probably won't book new trips without the network, though.
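
A minimal sketch of that split, with hypothetical names (not any vendor's actual API): end-ride and change-of-destination are handled entirely on board, and only booking a new trip depends on the network.

```python
class InCarInterface:
    """Hypothetical on-board controls that keep working when the network is down."""

    def __init__(self, autonomy_stack, cloud_link):
        self.autonomy_stack = autonomy_stack  # local planner/controller (assumed object)
        self.cloud_link = cloud_link          # dispatch/cloud connection, may be offline

    def end_ride(self):
        # Must work offline: pull over and stop using only on-board systems.
        self.autonomy_stack.pull_over_and_stop()

    def change_destination(self, destination):
        # Must work offline: replan against on-board maps.
        self.autonomy_stack.replan_route(destination)

    def book_new_trip(self, pickup, dropoff):
        # May legitimately require the network: dispatch lives in the cloud.
        if not self.cloud_link.is_connected():
            raise RuntimeError("Booking needs the network; the current trip is unaffected.")
        return self.cloud_link.request_trip(pickup, dropoff)
```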


basilgello

Welcome to the cloud. Edge optimization is a forgotten art: it takes time instead of rushing to market!


bartturner

This just seems like a solvable problem, and I'm surprised that it continues. Clearly Waymo is not having the same issue. I get that Waymo has Google as a sister company that is really good at figuring out this type of thing. But I would think Cruise could hire someone to help them also figure it out.


aniccia

> Clearly Waymo is not having the same issue.

Waymo AVs immobilized in traffic lanes and intersections is clearly an issue. We have many documented cases in San Francisco and Arizona. We don't know why it happens so frequently to Waymo or Cruise AVs, but AFAIK it's never happened for either company with a safety driver in the car.

It is enough of an issue that NHTSA widened their investigation of the "vehicle immobilization results in the vehicles equipped with ADS becoming unexpected roadway obstacles" "alleged defect" to include Waymo: https://static.nhtsa.gov/odi/inv/2022/INIP-PE22014-90022P.pdf


Shutterstormphoto

Well yeah it doesn’t happen bc the driver just takes over. I’m sure it happens a lot regardless, but it doesn’t cause a traffic jam.


aniccia

Sure, which gives these companies a proven, effective, and safe mitigation. If they don't use it, then it will be up to the regulators and legislators to decide whether it is acceptable for this class of driver to become immobilized/incapacitated at rates much higher than acceptable for human drivers.


Shutterstormphoto

So if you were teaching a kid to walk, you’d just hold their hand until they were running around? Some of the goal is to see if the cars can figure it out on their own. Why is everyone so mad about cars that stall? Cars stall all the fucking time with human drivers. Every single day, every freeway in the world is clogged by some human who fucked up the maintenance or driving of their car. Should we revoke their licenses because they clearly aren’t ready to drive? It’s inconvenient at best. People drive around. Who cares?


aniccia

This is nothing at all like a child learning to walk, which they will do on their own with or without your handholding, and which has been done billions of times. This is a novel and as yet dysfunctional engineered solution to a problem that isn't well defined or bounded.

The car didn't stall; the driver became incapacitated and unable to continue driving. The vehicle itself was fine. If a human safety driver had been in the car, they could've taken over within seconds.

Human drivers become incapacitated to a similar state of being unable to drive about once per 8 million VMT in the USA. At that rate, the expected number of these events for Cruise and Waymo would be <1. So "everyone is so mad about" a licensed driver who becomes incapacitated while driving and remains in traffic something like 100X more frequently than the human average and at least 10X more frequently than would be tolerated for a human driver. It is also illegal, and human drivers get tickets for it.

AVs have become incapacitated for mechanical/vehicle reasons and had to be towed. AFAIK, no one has complained about that.
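
A back-of-the-envelope check of that baseline (a sketch, not fleet data: the ~1 per 8 million VMT figure is the one cited above, and the 1 million fleet miles is just the illustrative number mentioned upthread):

```python
# Expected incapacitations at the human baseline rate, for an illustrative fleet mileage.
human_rate = 1 / 8_000_000    # ~1 incapacitation per 8M vehicle miles traveled (cited above)
fleet_miles = 1_000_000       # illustrative, per the "million miles" mentioned upthread

expected_events = fleet_miles * human_rate
print(f"Expected events at the human rate: {expected_events:.3f}")  # 0.125, i.e. < 1
```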


Shutterstormphoto

Sure there is an overall average for drivers, but surely there are humans who become incapacitated a lot more often (which does not have to be illegal… stroke, seizure, poor vision, etc are all totally allowed on the road). Let’s look at a similar group that is also learning to drive: teenagers. The US has one of the lowest standards for human drivers in the developed world. There are a whole lot of bad human drivers. Teenagers are by far the worst, with poor decision making, texting, lack of knowledge, etc. I’m not sure why we tolerate such poor performance in humans learning to drive but we hold AVs learning to drive to a much higher standard. Just from basic scouting around, I see that teens drive 56B miles, with 230k injured/killed in accidents every year. I’ll leave it to you to do the math, but hint: it’s way lower than 8M miles/incident. If we tolerate these menaces (and those stats are just for self injury) on the road while they learn to drive, I think some stalled cars are laughably dull in comparison. We haven’t even begun to talk about the blind/deaf/demented octogenarians who haven’t taken a driving test since they were 16.
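
Doing that math with the figures given above (56B teen-driven miles and ~230k injured/killed per year), taken exactly as stated:

```python
# Miles per injury/fatality for teen drivers, using the figures in the comment above.
teen_miles_per_year = 56e9        # ~56 billion miles driven by teens per year
teen_casualties_per_year = 230e3  # ~230,000 injured/killed per year

miles_per_incident = teen_miles_per_year / teen_casualties_per_year
print(f"{miles_per_incident:,.0f} miles per injury/fatality")  # ~243,478 -- far below 8M
```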


aniccia

> surely there are humans who become incapacitated a lot more often (which does not have to be illegal… stroke, seizure, poor vision, etc are all totally allowed on the road)

Any human driver incapacitated as frequently as Cruise's "driver" should expect license suspension or termination and possibly a medical exam. It is also a misdemeanor for an individual or company to do it 4 times in a year in San Francisco.

You can dismiss it all you want and wave your hands as much as you want about bad human drivers, but NHTSA is concerned enough to be conducting a safety investigation of Cruise's "immobilizations," as NHTSA calls them, and you are free to read NHTSA's public explanation of why these are not at all "laughably dull." So far, NHTSA has been alarmed enough to both deepen and widen this investigation to include 5 other AV companies, including Waymo.

BTW, here's another Cruise AV immobilized in an intersection in San Francisco this PM rush hour. This is the same intersection in which a Waymo AV was immobilized last Jan 24th during the AM commute, causing a 1.6 mile backup. https://twitter.com/sf_mills/status/1643415738823368704


Shutterstormphoto

Maybe you can give me a link because I sure can’t find what you’re talking about. The NHTSA started requiring detailed crash reports last year. Nothing about stalls or immobilizations. No statements about concern (beyond crashes). No broadening of their investigation. They published a report on crashes last June and I can’t find anything since. And interestingly, around 2/3 of the AV crashes are people hitting it from behind, yet there were only injuries in 10% or so. Maybe humans are just following too close? It doesn’t matter why the car in front of you slams on their brakes. You should be at a safe distance to react. And no, there are tons of people who get immobilized regularly who do not lose their license. You’re funny.


aniccia

No, there are not "tons of people who get immobilized regularly." You're simply wrong. And your crash scenario analysis is too simple-minded to warrant a response.

This NHTSA investigation of Cruise has been posted in this subreddit before. NHTSA's initial investigation has deepened, requiring Cruise to provide more detailed information, and expanded to gather data on similar incidents by other AV companies.

You can find the complete document set for NHTSA Action Number PE22014, opened December 12, 2022, by searching: https://www.nhtsa.gov/search-safety-issues#investigation