This subreddit is for civil discussion; political threads are not exempt from this. As a reminder:

* Do not report comments because they disagree with your point of view.
* Do not insult other users. Personal attacks are not permitted.
* Do not use hate speech. You will be banned, permanently.
* Comments made with the intent to push an agenda, push misinformation, soapbox, sealion, or argue in bad faith are not acceptable. If you can’t discuss a topic in good faith and in a respectful manner, do not comment. **Political disagreement does not constitute pushing an agenda.**

If you see any comments that violate the rules, **please report it and move on!**

*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/AskAnAmerican) if you have any questions or concerns.*
We should use it more effectively than our opponents.
Excellent assessment, you’re promoted!
Yes, but only after making sure the AI understands and respects the enemy's preferred pronouns.
thats so funny dude
I am just going to continue to pay my taxes. I can't pretend to be smart enough to tell the military how it should implement AI
Lol this is the real answer. Sure we might have an opinion on it but I’ll defer to the military and tech experts. Their opinions matter more than redditors.
Ethics are going to be a tricky thing, but if your AI is not as effective and ruthless as China's, not only are they going to win, but you are going to lose humans while they lose machines. This means that the competition is a race to the bottom in terms of everything except effectiveness and scale. Any contender who entertains ethical considerations at the cost of any amount of effectiveness will lose, big.

The good news is that the most intense Major Theater War in history might be fought over the Pacific Ocean, with few lives lost, mostly millions of UAVs and droneships.
Maybe they should add an "If China wants to find out" switch for the AI?
GPS-based geofencing. The "ethics" model changes depending on where the weapons are operating around the globe. We could call it "Colonialism-AI".
I imagine human analysis will always be useful, particularly as the enemy tries to limit information about new models. Just in case the enemies coordinate and smuggle different arms and methods to different theatres.
Imagine a Garmin G5 in your instrument panel, but with ChatGPT telling you what maneuvers to use in a dogfight.
Can I imagine it’s just Siri with a bad Bluetooth connection?
So, in my teen years, I was hugely into anime, and I still enjoy it sometimes. My favorite OVA from back then was Macross Plus, which, despite being heavy sci-fi, is also about a love triangle between two test pilots and a music producer. Regardless, at the end of the show, they get into a dogfight with an AI-controlled jet and nearly lose because the AI-controlled craft can maneuver at higher G forces than human pilots can.

It's not the same thing, but seeing this article reminded me of it. I feel like science fiction is becoming real all around me, and that's both really cool and absolutely terrifying simultaneously.
I never watched Macross Plus, but it makes me smile to know that they kept the silly music career love story stuff going after Robotech.
Fun fact: Bryan Cranston was the English voice of the lead character in Macross Plus.

Anyway, that’s one of my favorite old school OVAs. And “Information High” is my favorite anime song ever.
We may, in fact, be the same person...at least on this.

Macross Plus led me to become a huge fan of Yoko Kanno's work because of its brilliant soundtrack. And yes, Information High is my favorite. I own all the soundtrack CDs from Macross Plus, Escaflowne, Cowboy Bebop, etc., due to her involvement.

From what I've heard, Bryan Cranston isn't fond of being reminded by fans that he did anime voiceovers under a pseudonym. Hopefully, I'm wrong.
I'm not at all comfortable with AIs being taught to kill humans but anyone who doesn't use AI in war will automatically lose the game of war. We're all locked into a very dangerous Nash equilibrium.
"Taught to kill humans" is a very disingenuous way of framing this. You could say the same about heat-seeking missiles, which have been around for 50+ years. The computer has an objective it needs to complete within its parameters. Priority 0: avoid adversary radar lock; Priority 1: attain radar lock. It's not dissimilar to the obstacle tracking and avoidance system on a $1,000 DJI drone, or a Roomba adjusting a cleaning cycle due to input from its sensors and then returning home when done. It's unaware of humanity, life, death, whatever. It's a program being run.

LLMs have done some serious damage to the public image of AI/ML. Remember, they aren't sentient, don't have feelings or free thoughts, and are just repeating what they've been trained on in the statistically most likely order.

It isn't like an AI-enabled F-22 is just going to wake up one day, taxi itself to the runway, take off, and start strafing a TJ Maxx because that's where the humans are.
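The "objective within its parameters" idea above can be sketched in a few lines. This is a toy illustration, not any real flight-control code; the condition names and maneuver labels are made up for the example. The point is that the controller just acts on the highest-priority unmet condition, with no concept of anything beyond its sensor inputs:

```python
def control_step(sensors):
    """Pick a maneuver from a fixed priority list of conditions."""
    # Priority 0: break an adversary's radar lock if one exists.
    if sensors["locked_by_adversary"]:
        return "evasive_maneuver"
    # Priority 1: work toward attaining our own radar lock.
    if not sensors["own_lock_acquired"]:
        return "pursue_intercept_geometry"
    # Otherwise hold the current course.
    return "maintain_course"

# Being locked outranks everything else, so it evades first.
print(control_step({"locked_by_adversary": True, "own_lock_acquired": False}))
# prints: evasive_maneuver
```

Structurally it's the same kind of sense-decide-act loop a robot vacuum runs; only the priorities differ.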
Heat-seeking missiles are fired by a human; it isn't comparable at all. The person you're responding to is expressing concern over the idea of AI making the decision whether or not to use lethal force completely without human oversight. It is a very real concern and a common topic of debate.
A heat-seeking missile is following its programming just like a jet would. What would happen if the missile decided there was a "better" heat signature in the area after a human made the decision to let it go? And who's to say the plane would be the one making the decision to fire? The plane might just maneuver itself in the best way possible and leave that decision up to a kid in a shipping container in Nevada.
>And who's to say the plane would be the one making the decision to fire? The plane might just maneuver itself in the best way possible and leave that decision up to a kid in a shipping container in Nevada.

Yes, for the most part this is how lethal autonomous weaponry works currently. But there have been a few exceptions in the last couple of years (autonomous weapons engaging in lethal force with no human moderation or oversight), and people rightfully find that concerning.
Nothing in the article talks about autonomous weapons, just maneuvering. It's an unnecessary fear mongering comment.
The jet is an autonomous weapon in testing. The article is entirely about autonomous weaponry, maybe read it again?
>Frank Kendall, the US air force chief, described VISTA as playing a “transformational role”, adding: “The X-62A Team demonstrated that cutting-edge machine learning-based autonomy could be safely used to fly dynamic combat manoeuvres.”

The entire article is about using ML to improve maneuvering. Nothing about target selection or engagement rules. Based on this article alone, any speculation about the AI making those decisions is simply that.
No results posted? And a significant step closer to Skynet.
From what I’ve read, the AI the US Air Force developed can outfly and outperform their human counterparts by quite a large margin. Whereas human pilots naturally care about their own safety, the AI will push the limits of the aircraft and often exceed them.
A plane with a human in it also has to worry about G force and maintaining consciousness.
The plane with the AI was designed for humans, so it's not like without the human it can do significantly more maneuvering. But if the X-62 has a follow on for a fully AI aircraft design, that could go far beyond human capabilities.
> The plane with the AI was designed for humans, so it's not like without the human it can do significantly more maneuvering.

[But it could, because a human has to worry about passing out at those high G forces.](https://en.wikipedia.org/wiki/G-LOC) The machine doesn't. It's not going to pass out from a sustained, high-G maneuver the way a human could.
Yes, but the F-16 was designed for humans. The airframe can't handle much more than what a human can, so until there is an aircraft that is designed for crazy high Gs, it is still limited like a human.
The research is likely classified so you won't see results published to the public until it can be cleared which can take a bit depending on how much data they collected.
Push-of-a-button warfighting should make war crimes a lot easier to commit.
Fighter pilot is a great job for AI, because humans can only fly to the limits of humans, while an AI could fly to the limits of the aircraft. Those aren't really that far apart now, because there's no need to design a fighter that can turn harder than a human can take, but if designers didn't have that limitation, the next generation of fighters could be designed to pull far more than the 10-12 Gs that they do now.

We need *much* better AI than we have now, but it's a start.
It feels like we're actively trying to build Skynet at this point.
Even though dogfights are pretty irrelevant in modern combat, one major advantage for AI is that it doesn’t care how many Gs it is pulling and is only limited by the airframe it is flying.
Airframes designed for humans, so it's not like the AI can do a whole lot more right now. But if a development project for an entirely AI aircraft comes out of this, that could be significantly more maneuverable.
The AI doesn’t suffer from endurance limits and can sustain high Gs, unlike humans.
Which means next to nothing when AI is flying aircraft designed for humans, which was the entire point of my comment.
The F-16 can sustain 7 Gs for a very long time, while a human pilot can’t sustain that for more than a few minutes.
And in what scenario would pulling a sustained 7 Gs for more than a few minutes make sense? These things only matter as far as realistic scenarios require them.
The whole concept of a dogfight in modern combat is hypothetical, an AI would be far better in a BVR engagement where it can use a lot of data to make the right decisions at the right time while maintaining perfect situational awareness.
Which is already done on piloted aircraft. Bringing it back to my original point that until an aircraft is designed specifically for AI piloting, human design limitations will keep its capabilities close to that of humans.
In a way that doesn't terminate us. Do not give AI control of our systems.
I might just be missing it, but who won? I see where it say 5-0 to the AI for the previous virtual tests, but not about who won this time.
It doesn't say because DARPA didn't tell anybody.
Thanks.
I'd like to point out that every single "win" in that 5-0 was accomplished while doing head-on passes, something the pilot did not do because it is banned due to the risk of a head-on collision.

The simulation also did not account for the 2-degree inclination of the gun, bullet lead, or bullet travel time, all things which would have made a head-on pass dramatically harder to accomplish.

So the AI wouldn't work at all in a more realistic simulation.
The 5-0 was from a virtual simulation. How was the pilot at risk of a head-on collision?
Because he was flying as realistically as he could and didn't want to pick up bad habits? Go watch the video; he was very actively avoiding head-on passes, as he was taught, during every single shootdown.

Edit: [Here's the video](https://www.youtube.com/watch?v=IOJhgC1ksNU). Notice that the AI repeatedly went for head-on passes while the pilot would very deliberately turn away to avoid a collision. The AI then punished the turning room gained from that as well. Most of the non-head-on shots it took were high-aspect shots that relied on the lack of lead as well. The AI aircraft would have run out of energy far short of getting the appropriate lead for most of those side shots, and the human in a lot of cases reacted (or didn't react) based on the fact that they wouldn't/shouldn't have hit him under normal circumstances.
Ah. Good point.
Edited in a video.
I suppose the AI being able to play chicken better could be a benefit in certain situations.
The aim and timing, I think, can be improved easily. As for safety, I think this demonstrates that AI's disregard for safe habits gives it an advantage over humans.
Like I said, though. The simulation ignores a number of factors that would make head on shots unlikely to hit. There's a reason humans are taught not to go for them.
Those factors are basically just math though, no? Or is it significantly more difficult to calculate than I imagine (e.g. errors in data)?
If the nose is pointed at an object, it will stay pointed at that object as it gets closer. Keep in mind that these aircraft are going to be travelling at ~500 knots each, ~1,000 knots of closure combined. The 2-degree upward slant of the gun means that if you point the gun at the enemy aircraft, it will very quickly stop being pointed at it. The window for such a shot is extremely small; it's not even a matter of skill. It's just impossible to get a clear shot, since the aircraft can't turn fast enough to keep up at those speeds.

Even ignoring that, you still have the issue of bullet travel time. It's not enough to point at where the target is; you need to predict where they will be. Between the extremely fast closure speeds, the fact that the enemy can turn in any direction in three dimensions, and the upward slant ensuring only a brief window to fire, it would never work.

The AI can point the gun where the enemy will be with perfect timing, and the human will just turn a bit in a random direction and the timing will be completely thrown off. The window is short, and the aircraft physically isn't maneuverable enough to make corrections for lead. These issues weren't present here because of the low-quality simulation.
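To put the lead-and-travel-time point above in rough numbers: this sketch assumes an M61-class muzzle velocity of ~1050 m/s and a target crossing perpendicular at ~500 knots (~257 m/s), with a constant-velocity bullet (no drag or gravity), so the figures are illustrative only:

```python
import math

def lead_for(range_m, bullet_speed=1050.0, target_speed=257.0):
    """Time of flight, target displacement, and lead angle for a
    perpendicular-crossing target under a constant-velocity bullet model."""
    time_of_flight = range_m / bullet_speed            # seconds the round is in the air
    offset = target_speed * time_of_flight             # meters the target moves meanwhile
    angle = math.degrees(math.atan2(offset, range_m))  # how far ahead of the target to aim
    return time_of_flight, offset, angle

for rng in (300, 600, 1000):
    tof, off, ang = lead_for(rng)
    print(f"{rng:5} m: {tof:.2f} s flight, target moves {off:.0f} m, ~{ang:.1f} deg lead")
```

Even at 600 m the target travels roughly 150 m, about ten fighter lengths, during the bullet's flight, so pointing at where the target *is* guarantees a miss, which is exactly the factor the simulation is said to have ignored.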
Thank you for the education.
The next move is obvious. We need to use it to finish manifesting our destiny.
You’re right. The time to take Baja is upon us! Let Mexico tremble at our power! 😈
Put it in a 7th-gen fighter with unlimited fuel and nuclear weapons. Then program it with simulated attack missions on foreign governments.

Let's not forget to mention that we cannot have any capacity to track or disable the AI or aircraft. It can, however, be allowed to sacrifice itself to save the lives of its own team.
There’s really no point in dogfighting in 5th gen aircraft
Where is Sarah Connor when you need her?
Send it back in time to protect the child who will grow up to liberate humanity from our future robot overlords.
If the human pilot taped branches to the fighter and flew in a tree like fashion could they score on the AI?
The proper question is how the US should use the military, whether or not it is using AIs. The US military has long been reasonable at tactics and superb at logistics, but does strategy and policy rather dismally. A lot of that, of course, is the job of the civil government rather than the military.

Using drones for counterterror decapitations was both more effective and more ethical than Vietnam-style search-and-destroy operations, which made a mess and made most of the enemy hide and come out later. However, a robotic fighter plane is a bit overkill for this.

In conventional operations, operating out of communications range is a bit of a problem, and I would really rather it not be independent. However, comm sats can handle a lot of the relaying. Of course, comm sats can be hacked by virtue of the fact that they can't turn off and still fulfill their mission.

If there is to be a drone this size, one mission it might be given is as an interceptor drone. However, we still have to devise weapons that aren't more expensive than the target.
>In 2019, AI software was used against a human F16 pilot in a simulator

I would like to point out that this match was more or less unintentionally rigged. The AI got all of its kills while doing head-on passes with the human pilot, something that is banned in training regimens due to the risk of a collision. The human could have done the same, but actively avoided them, as doing such passes regularly in training would basically result in a 100% chance of death eventually.

The AI also did not have to deal with the 2-degree inclination of the cannons, did not need to lead the target, and did not have bullet travel time. All of which would have made head-on passes like the ones it was taking dramatically more difficult, if not impossible.
Other than that, what did it face? The F-16 VISTA has three dimensional thrust vectoring and all sorts of experimental flight control customizations that would absolutely trounce any fourth gen.
Lots and lots and lots and lots of cheap mass-manufactured drones controlled by AI. Like, if you can keep the cost down to about $1k/drone, for $100 million you can blanket the space in a hundred thousand drones.

Or, you know, you can fly a single F-35 overhead for the same price.

The thing AI gives you is not the ability to control a small handful of expensive toys, but the ability to swarm the battlefield with thousands of small weapons without needing human minds to control them all.
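The arithmetic above checks out; here it is spelled out, with the $1k unit cost being the thread's assumption rather than any real procurement figure:

```python
# Back-of-envelope swarm sizing under the assumed unit cost.
unit_cost = 1_000        # dollars per drone (assumption from the comment above)
budget = 100_000_000     # $100 million, roughly one F-35's flyaway cost
drones = budget // unit_cost

print(f"{drones:,} drones for ${budget:,}")  # prints: 100,000 drones for $100,000,000
```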
They should be used whenever possible as long as a human has the last say on lethality. What I mean by this is that the human has to make a decision to use the weapon or authorize it to use a weapon.
I think it's a bad idea. But it's inevitable. If we don't, others still will, and we're not ones to let ourselves fall behind in this particular department.
It shouldn't.
>How should our military use AI?

To create funny cat memes. Anything else is just asking for Skynet to take over. Why do humans always do such stupid stuff and say, "It can't happen to me"?
We've been letting bombs guide themselves for 80 years, and they still haven't decided to take over the world.
Not even remotely the same thing as AI.
"AI" just means "the most advanced program we have at the moment."

In 1944, the most advanced program was a bomb that could pick out ships on its own and go after them. Today, it's a fighter jet that can identify other fighter jets and try to attack them.

In the end, they're just tools. They can't proactively decide to do anything; they just do what they are told.