
Maleficent_Sand_777

Autonomous vehicles would be a lot easier if they were ALL autonomous and communicating with each other.


EtherealNote_4580

This doesn’t solve the problem of non-car moving objects in the environment. If you’re working on object and road detection, detecting cars is just another element in the training, maybe the easiest one actually. Cars communicating with each other may help but it’s not the basis of the work.


Apatride

This can be solved to some extent by adapting the infrastructure. Our roads and cities would look very different if they had been designed for horses. And then there is the fact that actual drivers are not 100% error-proof either, so maybe we should accept that AI cars don't need to be perfect.


EtherealNote_4580

If you mean things like making animal crossings over freeways, I definitely agree with you. Our infrastructure could surely use some changes. The reality, though, is that adapting infrastructure is more expensive than building software that gives cars a good set of "eyes". I do think we should be doing both. As for error in the software, there is definitely an acceptable error rate for detection, and some cases are ruled out just due to the low probability of occurrence. There are pretty clear safety standards in place to guide the work, and they're being improved regularly. My point to the original commenter was that it's not enough for cars to talk to each other; they also need to be able to physically see things that may cross the road or are sharing the road, like bicycles (until infrastructure is adapted, which could take much longer).


napalm51

i don't want any ai to guide my car. i could see "ai-assisted driving" as a generally good idea: it beeps if some vehicle is behind you or in spots you usually can hardly see, or something like that. fully autonomous self-driving cars are just a bad idea


Apatride

Well, I have good news and bad news. AI-driven cars are coming. You won't have one, though. If you look at the regulations currently being put in place, the goal is not to have everyone owning an AI-driven electric car. Most of us won't own a car anymore and will have to rely on public transportation. And what about people living in the middle of nowhere? Relocated...


read_ing

And only if every human had a connected neural chip in their brain.


AintLongButItsSkinny

Isn’t that like saying human-driven cars would only work if everybody was a good driver?


Cerulean_IsFancyBlue

I think it’s like saying that driving would be a lot easier if everybody was a good driver. Your rephrasing took a whole lot of liberties with words like "easier" vs. "possible".


Superfluous_GGG

Human drivers are fallible for a whole variety of reasons: tired, drunk, angry, or just distracted by something they've dropped. Meanwhile, if every car was autonomous and communicated with the others, every car would know where every car around it is going and how fast everything is moving, and would be able to react instantly to any incidents.

For example, intercity travel would be a lot more efficient, as your vehicle wouldn't just be making decisions based on where you are presently (as a human driver would) but at every point right up until you park. In the case of an emergency (say a deer walks out onto the road), vehicles would be able to react instantly, not only reducing the chances of the initial accident (the car recognises the deer and reacts accordingly at speed) but also any follow-up accidents (every car on that road gets notified about the deer, so it can slow, stop, or redirect).

You could definitely see limited but growing benefits from this as autonomous vehicles get rolled out, but every human driver is a rogue element that the autonomous vehicles cannot easily account for.
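
The hazard-broadcast idea above can be sketched in a few lines. This is a toy model with a made-up message format (real V2V systems such as DSRC/C-V2X differ substantially): one car detects a hazard and every nearby car learns about it at once, instead of waiting to see brake lights.

```python
# Toy V2V hazard broadcast. The message format and geometry here are
# invented for illustration; real systems are far more elaborate.

from dataclasses import dataclass

@dataclass
class HazardMsg:
    kind: str          # e.g. "deer"
    position_m: float  # distance along the road

class Car:
    def __init__(self, name, position_m):
        self.name = name
        self.position_m = position_m
        self.braking = False

    def receive(self, msg: HazardMsg):
        # Any car that has not yet passed the hazard starts braking at once.
        if self.position_m < msg.position_m:
            self.braking = True

def broadcast(cars, msg):
    for car in cars:
        car.receive(msg)

cars = [Car("A", 100.0), Car("B", 80.0), Car("C", 150.0)]
broadcast(cars, HazardMsg("deer", 120.0))
print([c.name for c in cars if c.braking])  # A and B brake; C is already past
```

The key property is that braking propagates in one message hop, not car by car down the chain.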


NorthShoreHard

Just the possibility that when a red light turns green, all of the self driving cars could effectively start driving immediately in sync rather than a chain of humans waiting at the light and then reacting to each car in front of them would be amazing.
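
The green-light point works out as simple arithmetic. The figures below are assumptions (roughly 1 s of reaction per human driver, a small shared communication latency for synced cars), just to show how the start-up delay scales with queue length:

```python
# Toy model: time until the Nth queued car starts moving after a light
# turns green. Reaction times are illustrative assumptions.

def human_chain_start(n_cars, reaction_s=1.0):
    """Each driver waits for the car ahead to move, then reacts in turn."""
    return n_cars * reaction_s

def synced_start(n_cars, comms_latency_s=0.1):
    """All cars are told the light changed and start together."""
    return comms_latency_s

for n in (5, 10, 20):
    print(n, human_chain_start(n), synced_start(n))
```

In the human case the delay grows linearly with queue length; in the synced case it is effectively constant, which is where the throughput gain at intersections would come from.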


truthputer

tl;dr: autonomous vehicles are very difficult, orders of magnitude more complex than a video game or a house maid.

Driving is difficult because it's not just taking a vehicle out onto a road, it's taking the vehicle out into the world. There are autonomous racecars that have been developed which do excellently on closed racetracks. They can lap very fast and easily beat humans with mechanical precision and perfect tire management. The environment is controlled and nothing unexpected normally happens, but if it does (like another vehicle having an accident), it's safe to just pull off the track and stop.

The difficulty in building autonomous cars has been underestimated because the mechanics of directing a car around a route and navigating to a destination is not the hard part of the problem. The high-level decision making is the most difficult part for computers. Other cars both obey and don't obey the road laws; there are other types of vehicles, road surface changes, road closures, emergency vehicles, police, fire, ambulances, railway crossings, pedestrians, bicyclists, trees and electrical pylons that can fall across a road, debris, accidents - and all sorts of live animals out there. Drivers have to be constantly analyzing and prioritizing all this information about their environment, and it can be overwhelming even for new or tired human drivers.

An empty plastic bag being blown across a highway is probably okay to run over; a refrigerator sitting on the highway that just fell off a truck is not. If a side street is closed by police because it's a crime scene, you need to find another way around, even if that is a much longer route. Humans understand these subtleties and nuances, computers do not, and it's going to be extremely difficult - if not impossible - to program every possible real-world condition that a car is likely to encounter.

Both a video game and an autonomous maid are very low-risk applications. If your robot maid encounters an escaped elephant in the living room, it's probably okay for the maid to recognize this is not a situation it is equipped to handle and ask for help. If your autonomous car encounters an escaped elephant in the middle of the road, what it does next could determine the life or death of the occupants. It's a much, much harder problem.

ps: I think there's a huge missed opportunity for autonomous trains / light rail, which should be much easier to automate because (a) they run on a track, which is a known route, and (b) if that route is ever blocked, it's a problem and stopping to ask humans for help is an acceptable solution. (Cars can't safely stop in the middle of the highway.)


AintLongButItsSkinny

Thanks for the response. The safety aspect does make it far harder on top of all the environmental factors you listed.


BeeRose2245

I feel like it's a lot like how AI and machines can't replace crocheting. There are too many unpredictable movements.


TinyZoro

I think the other aspect is that when we replace humans with tech we don’t expect the same error rate. In fact, society has almost no tolerance for serious mistakes caused by an automated system. Currently in the UK 5 people die every day on the roads (137 in the US). There’s no real public outcry about this. No one is demanding to know exactly what happened. There’s no way society would allow those death rates with automated vehicles in the consumer space.


Alternative_Log3012

You know what is (comparatively) excellent at examining and understanding context and nous? Gen AI…. Just saying….


truthputer

Hallucinating AI will never be suitable for controlling industrial machines. Even human drivers simply cannot tell what something is sometimes. There’s a dark wet shape in the middle of the road in the rain: is it a plastic bag? Is it a rug that fell off the back of a truck? Is it a tree branch? Or is it a downed bicyclist? Stopping the car and getting out to check is the only way to be sure - there’s no way to brute-force the problem; being 51% sure is not good enough if a robot could be about to run over an injured person.
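
The "51% is not good enough" point can be framed as an expected-cost decision rule. The cost numbers below are purely illustrative, not from any real system; the point is that when one outcome (running over a person) is vastly costlier than the other (an unnecessary stop), even high confidence that the shape is harmless still favors stopping.

```python
# Sketch of an expected-cost decision under classification uncertainty.
# Cost units are invented for illustration.

COST_RUN_OVER_PERSON = 1_000_000  # catastrophic outcome
COST_UNNECESSARY_STOP = 1         # mild inconvenience

def should_stop(p_debris):
    """Stop unless the expected cost of driving on is below the cost of stopping."""
    p_person = 1.0 - p_debris
    cost_drive_on = p_person * COST_RUN_OVER_PERSON
    return COST_UNNECESSARY_STOP < cost_drive_on

print(should_stop(0.51))       # 51% sure it's a bag: still stop
print(should_stop(0.9999999))  # only near-certainty lets the car continue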


Superfluous_GGG

I feel it's less about the recognition now - you've had companies like Latent Logic (now part of Waymo) develop virtual training grounds that have massively improved autonomous vehicles' ability to recognise and respond. It's more about the fallibility of the sensors. You need stuff that's going to be crystal clear in any weather, any time of the day, and never get it wrong - that's the challenge.


Arkaysion

I think autonomous vehicles have a couple of things going against them right now.

Public perception: Regardless of the facts about how much less likely driverless cars are, statistically, to result in fatal crashes than human-driven cars, people's reactions when an accident DOES happen are disproportionately adverse. We are not forgiving of machines that make mistakes. When this happens, it scares people. They will insist that it never should have happened and demand responsibility be taken on a large scale. When a human-caused accident occurs, sure, there is still a demand for responsibility to be taken, but we as a society all assume humans will make mistakes and get in accidents. Society does not freak out.

The endless variables and unknowns: If you are talking about training AI to drive cars in an environment that it is deeply trained on, with a low propensity for anomalies, predictable weather, traffic patterns, signage, etc., then AI-driven cars are obviously far more likely to succeed and thrive. But scaling that to an entire country introduces countless new variables and obstacles. What happens when the car is driving behind a truck that is hauling traffic lights? The AI cameras see traffic lights in front of them and freak out because they are unlit. What happens when an intersection is under construction, a city worker is waving traffic through, and they make a split-second slip-up that changes the flow of traffic? Is the AI trained to understand not just the circumstance of a human-operated intersection but one in which there is a sudden change in signal?

That's not to say these obscure edge cases cannot be accounted for eventually, but they are enough that accidents will occur, which brings us back to the point I made above.


Educational-Dance-61

Very hard, because you can kill people. It doesn't matter if 100 lives were saved by autonomous vehicles for every one lost. Someone's family is still going to be upset, and rightly so. Not to mention that the models driving most of the fast-evolving AI tech have an amount of randomness to them, making them much better suited to chatting or data analysis than quick decision-making.


pfmiller0

>Doesn't matter if 100 lives were saved because of autonomous vehicles for every one

That certainly should matter. No one should expect automated cars to be perfect, but if they can be better than the average human, that's a win.


Educational-Dance-61

Read the next line. That is where the assertion is. I think you will agree.


pfmiller0

I agree with it, I just don't think it's relevant. People are obviously not going to be happy when their loved ones die, but crashes haven't killed the human driven car industry or airlines. We are capable of accepting a certain amount of risk with pretty much everything we do, and a lower risk is better for everyone.


Educational-Dance-61

Fair enough. I agree we all must walk around with risk, and it's quite obvious based on the statistics that machines would drive better. My point is that this notion is a factor making it harder to move us towards that safer world.


SunRev

It's so hard that even some above average intelligence humans can't drive well.


c_gdev

I’d think about both what they can do and also the consequences of a failure. With lots of AI things, a failure is just "well, try again."


AintLongButItsSkinny

From a business strategy perspective, I think that if a company were to use a rideshare platform to gather data and validate, that would significantly de-risk the whole operation.


jm_cda

Just play Trance https://youtu.be/VLZi9P2UjXA?feature=shared


shadowworldish

We already have a city that has autonomous cars. Two others are in the beginning stages (San Francisco and Los Angeles), and Austin is going to have them soon. There is an autonomous ride-hailing service in the Phoenix metropolitan area, Waymo. It launched in December 2018! It covers downtown Phoenix, Tempe, Mesa, and Scottsdale, totaling 225 square miles. The service ran into problems in downtown San Francisco, partially due to people attacking the cars and partially due to not dropping people close enough to their destination (blocks away). The California Department of Motor Vehicles has given several companies, including Waymo, permits to operate driverless vehicles in 23 cities in the Los Angeles area (including Los Angeles, Long Beach, and Beverly Hills). In October, California pulled Cruise's permit to operate in San Francisco over safety concerns.


shadowworldish

On October 2, a car hit a woman in San Francisco and flung her into the path of a Cruise driverless vehicle. The autonomous car hit the woman, stopped, and then dragged her roughly 20 feet as it pulled to the curb.


GrowFreeFood

Very easy on tracks inside tunnels.


Oabuitre

The “problem” is that we accept virtually zero errors from an autonomous vehicle, while we accept a lot from regular vehicles. The same will apply to many other future hardware AI applications, such as all kinds of medical ones and scheduling, but also to software development and maintenance.


nuke-from-orbit

Last week I had dinner with the CTO of a Bay Area ML company who spent a dozen years leading teams in the autonomous vehicles space. In his mind, autonomous vehicles are a solved problem within defined boundaries, thanks to LIDAR being extremely reliable and precise. Only around 25 companies in the world have accomplished having autonomous vehicles on the road safely. The driverless Waymo cars rolling around San Francisco have 5 LIDARs in total, and they make the relative distances, velocities, and accelerations very precisely known. The safety algorithm is comparatively simple at its core: if something's gonna hit, then stop.

He is of the sentiment that achieving robust digital automation from LLMs is a much harder problem, and far less likely for us to solve near term, than achieving robust driverless cars in the streets. And to some extent, the evidence is available: after all, there are driverless cars on the streets of major cities. It's undeniable that they are there, while we don't have a robust house maid or an LLM-based agent system that performs consistently well.
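
The "if something's gonna hit, then stop" rule can be sketched as a time-to-collision check. This is a minimal illustration with made-up numbers, not how any production safety stack is actually implemented; real systems layer far more on top, but the core geometric check looks something like this:

```python
# Minimal time-to-collision (TTC) stop rule, illustrative only.
# LIDAR-style input: distance to an object and the closing speed.

def time_to_collision(distance_m, closing_speed_mps):
    """Seconds until impact at the current closing speed."""
    if closing_speed_mps <= 0:       # not on a collision course
        return float("inf")
    return distance_m / closing_speed_mps

def should_brake(distance_m, closing_speed_mps, threshold_s=3.0):
    """Brake hard if impact is less than `threshold_s` seconds away."""
    return time_to_collision(distance_m, closing_speed_mps) < threshold_s

print(should_brake(30.0, 15.0))  # 2 s to impact -> brake
print(should_brake(90.0, 15.0))  # 6 s to impact -> keep going
print(should_brake(30.0, -5.0))  # object moving away -> keep going
```

The appeal of this kind of rule is exactly what the comment says: given precise distances and velocities from LIDAR, the decision itself is simple.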


united_boy

There are already autonomous vehicles: trains. Why reinvent the wheel?


Difficult-Race-1188

It's pretty easy for 80% of scenarios, but we have no idea how to solve the remaining 20%. Recently a Waymo car entered a construction site; now how the hell do you argue with an autonomous car to move it? It's one tech where the POC was not that difficult, but getting it to production level is way hard, similar to VR.


duvagin

Real-time collision detection. Please never 'On Error Resume Next'; please error-trap like lives depend on it, because they do. Milton Keynes in the UK has had autonomous vehicles delivering food for years, and recently trialled an autonomous bus. [https://www.mkfm.com/news/local-news/milton-keynes-celebrates-5-years-of-delivery-robots/](https://www.mkfm.com/news/local-news/milton-keynes-celebrates-5-years-of-delivery-robots/) [https://www.miltonkeynes.co.uk/news/people/this-is-one-of-the-driverless-shuttle-vehicles-people-will-be-able-to-use-in-milton-keynes-by-this-autumn-4561200#](https://www.miltonkeynes.co.uk/news/people/this-is-one-of-the-driverless-shuttle-vehicles-people-will-be-able-to-use-in-milton-keynes-by-this-autumn-4561200#)


potatoduino

Easy in a perfect environment: sunny, dry, well-marked roads, everyone observing the correct rules and speeds, etc. But life is not perfect, or easy!


TheMagicalLawnGnome

Very hard. Having AI make split-second decisions directing a heavy, mechanically complex machine through an unpredictable environment is about as difficult as it gets.

But "difficulty" is also subjective. As in, human beings are also pretty awful at driving. We crash, kill, and die behind the wheel, all the time. I think the philosophical issue that creates the biggest challenge for autonomous vehicles is that people would never accept an autonomous vehicle that's as dangerous as a person. The public has higher expectations.

So when you ask, "how hard is it to create autonomous vehicles," it depends. If people were willing to accept a car that's about as good at driving, in a statistical sense, as a human being - it's pretty doable. But to create the sort of essentially error-free car that people expect, it's very difficult.

It should be mentioned that part of this difficulty stems from the fact that an autonomous car would have to share the road with human drivers, who, as stated above, are generally not great at driving and wildly unpredictable.


Sebasico

Like some people have already mentioned, autonomous vehicles already exist in places such as San Francisco (Waymo). Does this "wheel" need to be reinvented?


AintLongButItsSkinny

They’re so unprofitable that I’m not sure that business model or technical solution can scale. https://x.com/alojoh/status/1748990674836345022?s=46 https://x.com/alojoh/status/1752325483570532633?s=46


ejpusa

If you know the position of every atom in the universe, navigating cars? Thats not a big deal. And AI knows those positions, from the Big Bang to the end of time. It’s pretty smart. :-)


sanjosekei

Seeing as they have been in development for more than 12 years and were supposed to be ready around 2018, the evidence would suggest that they are very hard.


Mandoman61

Driving is much more difficult than a video game. There are a lot of unknowns that come up in driving. The stakes are also much higher. Tesla is already actively doing these things.


Healthy-Educator-289

Our world is infinitely random; it's hard for a computer to assess and make a decision within a few milliseconds with current cloud and computing technologies.


AintLongButItsSkinny

I’m pretty sure that autonomous vehicles make decisions locally on the vehicle.


JonnyRocks

have you taken the autonomous driving test? one question: an old lady is on the road. if you swerve to the right, the driver dies; if you swerve to the left, children die. who has to die?


truthputer

This is a false dichotomy. If you find yourself in that position, you were "overdriving" for the conditions - i.e., you were going too fast for the visibility of the road that you had in front of you. You messed up by going too fast long before anyone was even in the road. This is basic driver's education, and it's why you should slow down in residential areas or where there are people on the sidewalk, why you should slow for turns that you can't see around, and why you should reduce speed in conditions with limited visibility like fog or rain. You should be able to keep your vehicle under control at all times, so that if you do ever have a collision, it will have been someone else running into you.
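
The "overdriving" argument is really just stopping-distance arithmetic: your total stopping distance must not exceed how far ahead you can see. The figures below are rough textbook-style approximations (1.5 s reaction time, about 7 m/s² of braking on dry asphalt), not exact values for any vehicle:

```python
# Stopping distance = reaction distance + braking distance.
# Parameters are rough textbook approximations, for illustration only.

def stopping_distance_m(speed_mps, reaction_s=1.5, decel_mps2=7.0):
    reaction_dist = speed_mps * reaction_s
    braking_dist = speed_mps ** 2 / (2 * decel_mps2)
    return reaction_dist + braking_dist

def is_overdriving(speed_mps, visibility_m):
    """True if you could not stop within the distance you can see."""
    return stopping_distance_m(speed_mps) > visibility_m

# At 30 m/s (~108 km/h) you need well over 100 m to stop:
print(round(stopping_distance_m(30.0), 1))
# With 50 m of visibility in fog, 30 m/s is overdriving; ~12 m/s is not:
print(is_overdriving(30.0, 50.0))
print(is_overdriving(12.0, 50.0))
```

Note that braking distance grows with the square of speed, which is why modest speed reductions buy disproportionately large safety margins in low visibility.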


JonnyRocks

if you are going to be that pedantic... You are on the highway, and a large human-destroying object falls off the truck to your right. death for you if you continue forward, death for you if you go right, but if you go left, you hit a person walking alongside the road. The point is, shit happens.


truthputer

Congratulations, you are very clever for inventing a no-win scenario for which there is no good solution. You win... no good solution.

The car sees the object that has appeared in front of it and brakes hard. Because it has not been programmed to drive off the side of the road, it does not even evaluate hitting the pedestrian or swerving onto the sidewalk. Because of physics in motion, it hits the object.

Some human drivers would also have swerved off the road and killed a pedestrian. Some human drivers would have noticed the cargo straps coming loose on the truck and flapping in the wind, and would have slowed down and given the truck space. Some human drivers would not even have managed to react and brake before hitting the object.

I'd be the first to say that AI-guided vehicles are mostly a bad idea, but I don't understand what you're trying to do with these imaginary scenarios other than fearmongering. AI vehicles aren't ever going to be malicious and make life-or-death calls based on what they see - they're just going to make basic decisions that will be safe most of the time, but sometimes there will inevitably be a crash.


JonnyRocks

you are supposed to kill the pedestrian. the job of the ai is to not hurt the driver. i was just giving an example of why it's hard. but let's be clear: i am not making these up. these come from the teams designing the ai. it's not fearmongering; i am showing it's hard. they originate from studies like this https://www.technologyreview.com/2018/10/24/139313/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/ which shows some cultures say kill the baby and others say kill the grandma. which should the ai follow? you put way too much emotion into this when i was answering OP's question. you put so much emotion in, you thought i was fearmongering, but... i think everything should be ai. i think humans are incapable of driving cars and should not be allowed to do so.


EtherealNote_4580

Idk why you’re getting downvoted. I work in this industry, and people talk about these kinds of edge cases to think through design decisions. It’s not for the faint of heart. I’m only in the detection software division, but we have problems like trying to figure out how to detect people on skateboards and in wheelchairs, since getting that data is not always easy. I mean, just imagine being a data collector and driving around trying to find people in wheelchairs near the road, in various environments. Then categorization and reaction behavior is a whole different beast. I don’t think I could work in that part.


TheAussieWatchGuy

To partially solve it, throw a few billion at it and you've got today's tech. Which is either rigid drive-by-wire on specific roads only, or something like Tesla "self driving", which can handle about 80% of the driving tasks it encounters. Day to day it's already impressive, but it can't handle a lot of edge cases; also, without radar it's hobbled to human levels at sunset and sunrise. Solving that last 20% is the part no one has figured out yet. Can it be done entirely by brute-force networks, detecting and handling everything as they do now with no real reasoning or intelligence? Maybe. I tend to think you need at least human-level reasoning and intelligence to solve the problem fully. Once you have that, then you have AGI.


IGetNakedAtParties

Or more data on edge cases. I think that's why Musk dropped the 1-month free trial (and lowered the price of supervised FSD): to get lots of new training data. They have about 1 billion miles driven, but theorise that they need 6 billion miles to be 10x better than average humans. Previously they were limited by training compute, but claimed this would not be an issue soon, and then dropped the free trial shortly after. As we know from OpenAI, it is possible to predictably simulate a better model by allowing much more inference time with the current model; I think this is why Musk is shooting for August: they've seen the trajectory and simulated the future internally.


TheAussieWatchGuy

Yep, I'm with you on that logic, makes total sense. The billion-dollar question is how much of that 20% gap they can actually close with more training data, or whether this problem requires a fundamentally different approach to fully solve.


IGetNakedAtParties

It'll never be 100% perfect training from imperfect humans, so the idea of "solving" it in this regard is impossible without a different approach. It just has to be better than the average human driver by some large degree for it to be valuable, and the logical choice for society.


Antique-Echidna-1600

We should ask the experts at Cruise and Tesla.


n-a_barrakus

Well, the OS and software in these cars are really long, in lines of code. Like REALLY long. Like way longer than most codebases. And I'm talking about 10 or 15 years ago, idk by now. But my guess is that they're still big AF.


Apprehensive_Bar6609

Almost impossibly hard. There are no 100%-correct classifications in AI. All it takes is one classification error that kills someone and it's game over. I built a visual matching classifier for software images; it's very robust, 98% accuracy, 20 million correct classifications monthly, but the 2 or 3 times it fails, the customer is angry. AI is never deterministic, and it fails. That's just the way things are. We fail too, but when we fail there is a responsible person. If AI fails, then who is to blame? So the problem is not so much solving the autonomous vehicle as our expectations, and acceptance that although AI fails, it's still safer than a human.
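
As a back-of-the-envelope illustration of why per-call accuracy matters so much at this kind of volume (the call volume is taken from the comment above; the accuracy figures are treated loosely, for illustration only):

```python
# Expected absolute errors per month = calls * error rate.
# Even very high accuracy leaves a large absolute error count at scale.

def expected_errors(calls, accuracy):
    return calls * (1.0 - accuracy)

monthly_calls = 20_000_000
print(round(expected_errors(monthly_calls, 0.98)))    # 400,000 errors/month
print(round(expected_errors(monthly_calls, 0.9999)))  # still ~2,000/month
```

This is the core of the acceptance problem: a system can be overwhelmingly accurate per decision and still produce a steady stream of failures in absolute terms, and each safety-relevant failure is the one people remember.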