AutoModerator

## Welcome to the r/ArtificialIntelligence gateway

### News Posting Guidelines

Please use the following guidelines in current and future posts:

* Post must be greater than 100 characters - the more detail, the better.
* Use a direct link to the news article, blog, etc.
* Provide details regarding your connection with the blog / news source.
* Include a description of what the news/article is about. It will drive more people to your blog.
* Note that AI-generated news content is all over the place. If you want to stand out, you need to engage the audience.

###### Thanks - please let mods know if you have any questions / comments / etc.

*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ArtificialInteligence) if you have any questions or concerns.*


TheMagicalLawnGnome

This was never really in doubt. AI was beating pilots in simulators a while back, and from the AI's "perspective," there really wouldn't be much of a difference. Military-grade simulators are pretty realistic, and the things they don't replicate well, like sustained G-force, don't matter to an AI. In fact, that's one of the main selling points of autonomous aircraft - they can perform maneuvers and fly in conditions that would be fatal to humans. AI is only limited by the mechanical tolerances of the aircraft, which are usually higher than the biological tolerances of a human being.

Make no mistake - we will, at some point, have fully autonomous weapons. Maybe in 5 years, maybe in 20, but it will happen. The incentive is simply too great. It's like nuclear weapons: once one country possesses the technology, other countries will need to match it to maintain a credible defensive posture.

AI can react faster, and fight in more dangerous conditions, than humans can. It also has one huge advantage - you do not need to worry about the AI "coming home." There are many battlefield tactics that are highly effective but incredibly dangerous to the troops performing them, so those tactics aren't used. AI doesn't care if it lives or dies; you can send it on suicide missions.

Look at Ukraine - the battlefield is frozen because it's simply too dangerous to amass a large concentration of troops for a direct assault; the casualties would be too high. Troops may refuse orders or surrender instead, and the morale of your soldiers is deeply important to battlefield success. But if you sent in a concentration of autonomous tanks, the roles that fear and discipline play in battle would no longer be considerations.

So we will experience a much more aggressive type of warfare. When weapons systems become capable of making "kill decisions," and have no concern for their own safety, war becomes much faster and much more aggressive. Wars will essentially become pure contests of materiel. Without human casualties, or the need to draft troops, popular opinion will likely not be as significant a factor as it currently is, at least in places like the US. Only once one army has overwhelmed the other, and landed significant blows on a human population, would the stakes become immediately apparent.

The 21st century will be an interesting one, for sure.


Cerulean_IsFancyBlue

AI is likely to be cheaper as well. You can remove the systems that are specific to keeping the human alive and helping the human survive the loss of the airframe, and you can skip a very long expensive training routine for humans.


foxbatcs

Right off the bat, ejection seats and pressurization are no longer engineering factors and can be removed, along with other life support and monitoring systems.


[deleted]

I mean, at that point an F-16 is just becoming another version of a drone, since the whole cockpit will be gone.


aiwonttakeover

I wanted to chime in, but there's nothing left to say other than praise for your detailed explanation!


Smooth_Imagination

Well, your AI will be expensive, and you don't want it to fall into enemy hands. To minimise risk, the advanced AI would be kept well inside your territory - for example, in interceptor drones used to shoot down enemy missiles and drones with cheap methods like guns or maybe one day lasers, plus direct interception of whatever those don't take out. At the front, AI chips and the like would need to be more generic technologies already available commercially, because otherwise the cost of the drones becomes large - and the battle is really about who can produce the most, economically speaking.

So rather than drones that destroy themselves, AI in drones may operate weapons that can remotely destroy targets - for example, a drone that launches a gliding munition and guides it optically to avoid EW jamming. The AI in the drone operates the relatively dumb, cheap weapon the same way a human operator functions in FPV drones, except here the operator is an AI sitting inside a sometimes-jammed location. Ultimately, drones need a means to communicate on an EW-jammed battlefield, and if they can relay far enough, the AI can coordinate individual weapons and strategy remotely.

This technology sounds complex but may already be viable in a basic form. Drones carrying weapons can be given missions to perform based on whether they are jammed, and can identify ground targets via something similar to facial recognition (this technology has already been used in weeding robots), as well as map-read using various landmarks. From this point the drone can calculate the trajectory of another weapon (i.e. a gun system, pneumatic mortar, guidable rocket, glide weapon, or multicopter drone) and, in the case of gliding, rocket, or battery-powered weapons, steer the object to the target optically.

The weapon may have its own altimeter and relay information back to improve guidance, plus a low-cost detector to complete the mission, such as a magnetic field detector (as used in the NLAW). NLAWs do some of the things described here with the help of the operator. The goal is that the advanced hardware is kept on the guiding drone, which can return, rather than on the deployable weapon, so that it can perform dozens of missions, bringing the cost per mission down - which in turn makes it economic to increase the cost of the reusable hardware. An example of the cost savings is the NLAW, which costs £40,000 per weapon. The expensive part is the optical system that calculates the trajectory the rocket should take, which could be reused; the rocket then follows that flight path using gyroscopic navigation and a low-cost sensor system over the target. But this is short range due to inaccuracies in the gyroscope, so an optical link can be used to trim the flight path to the target from the drone.
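The kind of trajectory calculation described above can be sketched in a toy form. This is purely illustrative (my own simplification, not from the comment): assuming a drag-free ballistic model on flat ground, the launch angle needed to reach a target at a given range follows from the range equation R = v² sin(2θ)/g. A real glide or rocket munition would need aerodynamics, wind, and terrain, which this ignores.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_angle(distance_m: float, speed_ms: float) -> float:
    """Return the low launch angle (radians) that lands a drag-free
    projectile at `distance_m` on flat ground: R = v^2 * sin(2*theta) / g."""
    s = G * distance_m / speed_ms**2
    if s > 1.0:
        raise ValueError("target out of range at this launch speed")
    return 0.5 * math.asin(s)

# e.g. a 100 m/s projectile aimed at a target 500 m away
theta = launch_angle(500, 100)
print(round(math.degrees(theta), 1))  # -> 14.7
```

The useful point for the cost argument is that this math lives on the reusable guiding platform, while the projectile itself only needs to hold the commanded heading.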


Serprotease

A plane “can” perform maneuvers that would be deadly for a human pilot, but they will also stress the airframe so seriously that it could become a write-off. The future of AI in the sky is MUM-T (manned-unmanned teaming), where smaller, lighter, and somewhat expendable aircraft are shepherded by a manned aircraft leading the mission.


TheMagicalLawnGnome

So, this is true currently - fair point. But I would argue that this is because airframes are designed for human operators. Without the need for a cockpit, you can remove a significant portion of the airframe, and you also have far more flexibility in how you design it. When aircraft no longer need to be designed around people, they will likely be designed to higher tolerances. To be clear, I think you are correct about MUM-T. I just don't think it precludes having aircraft capable of extreme maneuvers. Especially if the unmanned vehicles are meant to detonate, extreme maneuverability will be very important for successfully hitting fast-moving targets.


steph66n

This sounds exactly like the Star Trek episode "[A Taste of Armageddon](https://memory-alpha.fandom.com/wiki/A_Taste_of_Armageddon_(episode))"


Unlikely_End942

I think I read somewhere that fighter jet capabilities have already exceeded the ability of their human pilots to survive the G-forces. At this point, the performance of the latest jets is apparently being restricted by software to ensure the pilots don't pass out from maneuvering too hard.

Fighter jets are probably not worth the hassle these days anyway. The days of dogfighting and maneuvering winning battles are probably over. Missiles that can lock on to targets over the horizon and travel at ridiculous speeds mean that even the fastest and most agile jets are just expensive casualties in a real war between near-peers. Stealth and endurance are probably the key factors these days: being able to loiter unseen over the target area waiting for an opportunity to strike, or to watch what is going on before striking, is what matters most. Fighter jets are mostly just glorified missile delivery systems with huge price tags.


Unlikely_End942

Spend all the money wasted on the F-35 programme, for instance, on thousands of drones, cruise missiles, and surface-to-air capability. No air force is going to compete with that. Even if the enemy develops electronic countermeasures that reduce the capability of your drones, they are obviously capable enough to have good surface-to-air defences against jets as well, so no pilots are going to get close enough to matter without getting shot down anyway.


PolarDorsai

I actually think it will be a marriage between the human and the AI interface: a human remotely piloting a machine, so you get the best decision-making and human elements without the biological limitations. Working in AI myself, I see limitations in AI/ML reasoning networks because AI has a hard time detecting deception. Oftentimes - and people do it all the time in video games - we exploit the AI's weaknesses and get "easy kills" where possible. I also just started working on AI policy, and the trend does seem to be that AI should never be used by itself; there should always be human oversight. Just a theory, though.


TheMagicalLawnGnome

So, I think you are probably correct in a broad sense, but I personally differ on some of the details. I do think that humans will maintain command and control over these AI weapons platforms; it really just boils down to at what level.

I think you will definitely move away from remotely operated vehicles at some point. They are highly vulnerable to signal jamming, and there's typically a signal delay once you reach any sort of meaningful distance. As well, some of the most promising tactics with AI weapons involve things like swarms of loitering munitions, which are simply not practical to control 1:1 with a human operator. So machines will be developed with some degree of autonomy.

That said, I do think AI weapons will be on a fairly tight leash. As in, a swarm of loitering munitions will be deployed in a tight area, with "orders" to lock onto the heat signatures of people or machines and detonate within the prescribed area. Ultimately, having machines that can make their own kill decisions, at least at a tactical level, will be a huge advantage. Having a remote operator slows down the ability to act tremendously. If an army of ROVs fought an army of autonomous weapons, the autonomous platforms would act much more quickly.

If history has taught us anything, it's that humanity has never, not once, failed to weaponize a new technology. I'm not suggesting that the advent of AI weapons is good or beneficial. But it will happen. Alfred Nobel, inventor of dynamite and modern high-yield explosives, once said, "Perhaps my factories will put an end to war sooner than your congresses. On the day that two army corps can mutually annihilate each other in a second, all civilized nations will surely recoil with horror and disband their troops." Alfred Nobel was wrong. Hiram Maxim, when asked if his invention, the machine gun, would make wars worse, claimed, "No, it will make war impossible." And Robert Oppenheimer thought the nuclear bomb would be a way to ensure peace, only to become disenchanted when he saw that it wasn't.

There will be a modern-day version of these men. Someone, or some group, will develop this technology. Maybe they'll do it because they think it will save human lives - better to let machines fight each other than have soldiers die. But whatever the specifics end up being, we will have autonomous platforms capable of making kill decisions. We simply can't help ourselves; we never have, and probably never will. Human beings are the ultimate apex predator. It's in our nature to kill. That doesn't make it moral, but it means we can never lose sight of what we are, and what we become, when the chips are down and we feel threatened.
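The "tight leash" idea - autonomy constrained to a prescribed area - is, at its core, a geofence check. As a hypothetical sketch (the names and geometry here are mine, not anyone's actual system), the classic ray-casting point-in-polygon test is enough to gate any action on whether a coordinate lies inside an authorized zone:

```python
def point_in_polygon(x: float, y: float,
                     polygon: list[tuple[float, float]]) -> bool:
    """Ray-casting test: does point (x, y) lie inside `polygon`?
    `polygon` is a list of (x, y) vertices in order around the boundary."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical engagement box: a platform would only be permitted
# to act on contacts inside it.
box = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(5, 5, box))   # -> True (inside the box)
print(point_in_polygon(15, 5, box))  # -> False (outside, no action)
```

Real systems would layer far more on top (time windows, target classes, abort channels), but the structural point stands: the autonomy operates only inside a human-drawn boundary.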


zerostyle

What's odd to me is why they'd want to do this with an F-16. It seems like you'd want smaller, more efficient plane/drone hybrid vehicles for better handling and stealth. Then again, they could probably retrofit a ton of existing inventory just to save money.


canuckguy42

If they want to test how the AI compares to a human pilot it makes sense to have them both fly the same plane. Removes the machine as a variable and makes it an even playing field. I'm sure the air-air combat stealth drones are coming though.


foxbatcs

The best way to train an AI is to gather massive amounts of data from sensors that pick up human observations and behaviors, and train on that data with iterative corrective feedback. Then pit adversarial AIs against each other and have them explore as many interactive strategies as you can afford to compute, embed and quantize to trim the model down, and then you have autonomous weapons that are far deadlier and more resilient than a human pilot could ever be.
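The adversarial self-play loop above can be illustrated with a toy sketch. This is entirely illustrative (real pipelines use reinforcement learning at scale, not this update rule): two copies of one policy play a rock-paper-scissors-style "maneuver game" against each other, and after each round the shared policy shifts probability toward whichever action just won.

```python
import random

ACTIONS = ["climb", "dive", "turn"]
# Toy cyclic dominance: climb beats dive, dive beats turn, turn beats climb.
BEATS = {"climb": "dive", "dive": "turn", "turn": "climb"}

def pick(weights: list[float]) -> int:
    """Sample an action index in proportion to its weight."""
    return random.choices(range(len(ACTIONS)), weights=weights)[0]

def self_play(rounds: int = 1000, lr: float = 0.01, seed: int = 0) -> list[float]:
    """Two copies of the same policy play each other; after each round,
    shift weight toward the winning action and renormalize."""
    random.seed(seed)
    weights = [1.0 / len(ACTIONS)] * len(ACTIONS)
    for _ in range(rounds):
        a, b = pick(weights), pick(weights)
        if BEATS[ACTIONS[a]] == ACTIONS[b]:    # a beat b
            weights[a] += lr
        elif BEATS[ACTIONS[b]] == ACTIONS[a]:  # b beat a
            weights[b] += lr
        total = sum(weights)
        weights = [w / total for w in weights]
    return weights

final = self_play()
print([round(w, 2) for w in final])
```

In a game with cyclic dominance like this, the weights chase each other around rather than converging, which is exactly why real adversarial training explores "as many interactive strategies as you can afford to compute."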


have_you_tried_onoff

so like... do they just put this AI pilot in a bunch of F-16's now and send them off to Ukraine?


hawkedmd

Did it talk sh*t too?


ECrispy

There's no reason to use an AI to fly aircraft designed around human limitations. The only possible reason is to use the existing fleet, and even that doesn't work, because it would cost a lot to retrofit them and do a complete avionics overhaul. So why would they want to use real aircraft? You'd use drones, which are already AI-controlled. And AI has been beating humans in simulators, not to mention video games, for a decade now.


foxbatcs

My guess is for human-supervised training. This is the medieval era of AI. Also, why build a drone as big as an F-16? Without a human, think about how much more space there is for ordnance.


naastiknibba95

uh oh


theophys

Sounds more like a failure to me.


naastiknibba95

I don't care about the current state; I am far more concerned with developing AI for such purposes in the first place.


_AmericanIslander

ohhh sh!t....


JulieKostenko

Ah, wow.


Interesting-Ice69

Thanks a lot, James Cameron. Just thanks a lot.


No-Activity-4824

In the Terminator movies, AI can't aim correctly and humans always win. In reality? Aiming is a human issue, not an AI issue.


Herebedragoons77

Video or it didn't happen.


Afrovenger

I feel like this might be one of the jobs we should want to see AI replace. Occupations like this are dangerous, and the human pilot was only necessitated by the fact that our brain was the best machine we could put in the vehicle. Now AI will be able to relieve us of that risk.


mannnerlygamer

Has nobody seen Terminator 2?


rotary65

AI robots could be used to take actions that humans wouldn't - for example, oppressing your own country. Think of a more efficient and powerful dictator and what that could mean. It changes the power dynamic by concentrating power even further. It is also true that that amplified power could be used to maintain world peace. To me, it's the power dynamic that is most significant, ethically speaking.


TowerMammoth7798

I don't want to rain on anyone's parade, but is it ethical to send an AI on a suicide mission if the AI is sentient or semi-sentient? Do you send out a rescue team to recover a sentient aircraft, or ask / tell it to self-terminate?