What puzzles me is that, as yet, there doesn't seem to be any talk of using "AI" (i.e. neural-net stuff) for planning and strategy, which seems like a MUCH more powerful application than this stuff. Sure, it can learn how to fly planes and drones, or drive tanks or terrifying robot dogs, but isn't the real potential here in duct-taping GPT-3 to a mountain of past battlefield data and specs and having it run war games for a while?
Yes, I realize I'm almost literally describing Skynet, but isn't that where the power would be, for better and worse? The strength of computers has always been the ability to analyze literally incomprehensible (to humans) amounts of data and derive unique insights from it, and current machine learning has magnified this ability severalfold. AIs have now repeatedly been dropped into all kinds of video games and gone from morons, to competent, to beating all human players and exploiting bugs no one even knew existed. In a grim sense, war is just another game. It has constraints, resources, definable win and loss states, etc. Consider that conflicts have often tilted on a single novel strategy discovered by limited human commanders, and then ponder what an AI that improves its Mario speedrun via nearly imperceptible collision glitches could do.
Potentially, weapons don't matter as much as how well you can use them.
Simply put: because AI is way too stupid for that kind of stuff. When we talk about AI today, most of what we talk about is actually just processing sensory input at a low level.
Wargames and strategy require cognition, a level of contextual awareness, and an understanding of human motivation and intent that falls well into the realm of sci-fi AGI. No military confrontation is generic enough to run on some ML pattern-matching or travelling-salesman type algorithm.
I mean, just to show how low-level and limited AI is: take any modern learning system and teach it Atari Breakout. If you move the paddle by a bunch of pixels or rotate the screen, you need to relearn or transfer-learn the game, because the algorithm doesn't even understand the ontology of the game and really just manipulates pixels.
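To make that brittleness concrete, here's a toy sketch (random stand-in weights and made-up dimensions, not a real Breakout agent) of why a policy that only sees raw pixels has no notion of "the same state, shifted":

```python
import numpy as np

# Toy stand-in for a pixel-input policy: a linear score over flattened frames.
# Nothing in this representation ties "paddle at column 40" to "paddle at 45".
rng = np.random.default_rng(0)
weights = rng.random(84 * 84)  # pretend these were "learned" on one layout

def action_score(frame_flat):
    # The policy only knows pixel positions, not objects like paddle or ball.
    return frame_flat @ weights

frame = rng.random((84, 84))               # a fake game frame
shifted = np.roll(frame, 5, axis=1)        # the same frame, moved 5 px right

# The same game state, merely translated, produces a different score, so the
# policy's behavior changes even though nothing about the game has.
print(action_score(frame.ravel()) != action_score(shifted.ravel()))
```

Convolutional nets soften this a little (translation equivariance), but rotations and layout changes still generally require retraining, which is the point being made above.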
Even DeepMind's StarCraft bot was stupid in this sense. It didn't understand StarCraft strategy at all (which was visible in the nonsensical unit combinations it was building) but simply out-microed its opponents; it just had better mechanics.
> Even DeepMind's StarCraft bot was stupid in this sense. It didn't understand StarCraft strategy at all (which was visible in the nonsensical unit combinations it was building) but simply out-microed its opponents; it just had better mechanics.
Protoss AlphaStar was fantastic. It was definitely not just out-microing opponents (after later refinements, AlphaStar's micro limitations were very human-like); it really had an understanding of build orders, positioning, choke points and the like. Here's a great game it played against Serral (one of, if not the, top Zerg players in the world) where it holds fantastically against Serral's aggression and is able to use the layout of its own base to pick apart what looked like a winning position for Serral: https://www.youtube.com/watch?v=C6qmPNyKRGw
As an avid StarCraft player, I still think AlphaStar is one of the most impressive AI programs ever made. While it didn't get everything quite right, it exhibited planning and creativity in its builds, and didn't just do the exact same thing every time; it scouted and reacted to the opponent's moves with professional-level proficiency.
AlphaStar's training process consisted of two phases: an imitation-of-human-players learning phase that created a base set of "templates" and then self-play among agents seeded from those templates.
When creating those templates, they basically ended up with "baked-in" build orders. But self-play among agents ended up creating a variety of reactive strategies on top of those build orders, so that agents would modify them on the fly as the game situation changed. At a high level, this is basically how humans use build orders as well.
Later refinements were still quite superhuman, just in terms of consistency rather than abusing Stalker micro. For example, it doesn't need control groups; that's a clear sign it's not sticking close to human limitations.
Yeah, the AlphaStar team didn't really emphasise it. Its APM was human-like in count, but it had basically zero wasted actions; they were all meaningful. Lots of APM comes from repeat keys or mass clicking.
> Lots of APM comes from repeat keys or mass clicking.
Both of those (and more) are filtered out by EAPM measures (effective APM), and even on those metrics AlphaStar still had a lower EAPM count than most pros.
EAPM still includes a lot of pointless clicking by pros. Just look at periods of the game when there is nothing productive to do. By comparison, a throttled AlphaStar has no need for any pointless actions and can get its timings inhumanly precise, which games EAPM in the other direction.
Kind of. The only reason it didn't use control groups is because it had "soft" control groups of its own (IIRC there were limits on how granular of actions it could take as well as how many distinct move commands it could issue).
I mean, looking at that Serral game, I don't see any superhuman micro on the part of AlphaStar. Just fantastic exploitation of base layout and steadily pressing every strategic weakness in Serral's attack.
If anything on the consistency front, AlphaStar's main advantage was superhuman macro (effectively never forgetting to keep workers busy), but the edge there vs human pros is pretty slight.
Effective macro requires good micro. That's one of the reasons Archon mode is such an advantage.
> steadily pressing every strategic weakness Serral made in his attack
Again, a very micro-limited activity. AlphaStar wasn't superhuman in that sense: a person could get those inputs down about that quickly; they just can't keep the pace up over time. Endurance is one of those human limits that is often ignored when talking about esports, but it's real and has a huge influence on gameplay.
> Simply put: because AI is way too stupid for that kind of stuff. When we talk about AI today, most of what we talk about is actually just processing sensory input at a low level.
This is a very narrow view of AI. In fact, the limitations you are talking about apply only to deep learning (which is frequently and mistakenly confused for AI nowadays). Other well-studied branches of artificial intelligence include planning [1] and reasoning [2].
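For a flavor of the planning branch: classical planners search over symbolic states, where each action has preconditions and effects. A minimal sketch (all the action and fact names below are invented for illustration, not from any real planner):

```python
from collections import deque

# Actions as (preconditions, effects) over sets of symbolic facts.
# Names are illustrative placeholders.
actions = {
    "load":   ({"at_base"},             {"loaded"}),
    "move":   ({"at_base"},             {"at_target"}),
    "strike": ({"at_target", "loaded"}, {"done"}),
}

def plan(start, goal):
    # Breadth-first search over reachable states returns a shortest plan.
    queue = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:          # all goal facts hold
            return steps
        for name, (pre, eff) in actions.items():
            if pre <= state:       # action is applicable
                nxt = frozenset(state | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

print(plan({"at_base"}, {"done"}))  # -> ['load', 'move', 'strike']
```

Real planners (STRIPS/PDDL solvers) add delete effects, heuristics, and far better search, but the symbolic, goal-directed character is the same, and quite different from pixel-level pattern matching.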
By far the biggest problem is to formalize the real world and build a realistic simulation which the AI system can play with. After that, a decent system would blow most humans out of the water.
> Even DeepMind's StarCraft bot was stupid in this sense. It didn't understand StarCraft strategy at all (which was visible in the nonsensical unit combinations it was building) but simply out-microed its opponents; it just had better mechanics.
Maybe, but what about Go and chess? The feeling there was that humans did not understand those games very well.
> limitations you are talking about apply only to deep learning (which is frequently and mistakenly confused for AI nowadays)
Deep learning is also where all the money and attention are going, and where the revolutionary gains have been seen in the last few years. Deep nets are pretty good at compressing huge datasets into a queryable structure and generating features for hierarchical data, but they are just one component of a generalized system that could handle something like war.
> By far the biggest problem is to formalize the real world and build a realistic simulation which the AI system can play with
Strong agree! The fundamental challenge is to create a digital model of the world with enough detail & accuracy that you can run optimizations on it and trust the results. This is as true for robotic aircraft as it is for autonomous cars or an e-commerce data warehouse. People need to decide what subset of the world to observe, deploy & maintain sensors to collect those observations, build information systems to organize & serve the observations, and maintain the whole thing. Then maybe you can start running AI/ML on it.
I don't think it's correct to characterise these bots as one-trick ponies winning on micro. Take the Dota 2 AI built by OpenAI. Dota 2 isn't micro-intense; it's mostly about strategy. These bots are capable of beating the best human teams. Their strategy is interesting because it's different from how humans have played the game for as long as it has existed. For example, in a 5v5, humans will skew resources so that 2-3 players ("cores") have more to work with than the 2 "supports". This has always been taken for granted. But OpenAI distributed resources evenly, allowing for some interesting strategies.
It’s still not perfect. It still can’t play the full game with no restrictions. But it is discovering new strategies that humans never did. It’s not just micro.
Have humans adapted their strategy with similar success? I would guess those strategies rely on the AI's superior micro in movement, HP-vs-damage analysis, and communication.
Two things stand out to me, and they have nothing to do with micro:
The AI consistently did a 'deathball push' strategy, where small early advantages snowball, and it pushed hard to exploit that early advantage. One of the things that enabled that was that resources were spread more evenly across the team, allowing the heroes that were strong earlier to do something with that advantage. Competitive and casual ranked play hasn't switched to always doing deathball, but there is a marked increase in the amount of resources that early-timing 'support' heroes take; they are a lot greedier than before OpenAI (as pointed out by a sibling post). It's unclear if that is because of lessons from OpenAI Five or because of balance changes (there are more resources to be had around the map these days).
The second thing that stands out is that both the 1v1 OpenAI and the 5v5 OpenAI bought a lot more 'consumable' items, mana and HP potions basically. These were considered too inefficient before; people tended to prefer items that gave more over-time regeneration and long-term efficiency. What the bots did was /constantly/ ship consumables over, which was kind of surprising at the time. They used that consumable resource either to force an opponent away from XP/gold resources (lane, jungle) or to stay put after aggression. Once pros started buying a lot more consumables, casual players quickly followed suit.
Sort of, but I can't be sure this can be attributed entirely to OpenAI rather than other changes. It used to be more common 5-6 years ago to devote all available resources to one player ("carry") while the other 4 make space for the carry. In the last few years it's become more common to have 3 cores who get sufficient resources. It also sometimes happens that the 4th player has resources equal to the top 3. But always, the 5th player is dirt poor.
FWIW, the developer of the game has also tried to fix this over the years, devising ways to get more resources to the 4th and 5th players. For example, there is a system similar to UBI where all members of the team benefit equally from "bounty runes". But to date, the bots have achieved a level of egalitarianism yet to be matched by human players.
> FWIW, the developer of the game has also tried to fix this over the years, devising ways to get more resources to the 4th and 5th players. For example, there is a system similar to UBI where all members of the team benefit equally from "bounty runes". But to date, the bots have achieved a level of egalitarianism yet to be matched by human players.
This kinda makes sense. I'm more of a CS player, so hopefully my observations aren't totally off base. Generally, a group of five humans will have some sense of who their best player is, who their worst player is, and who's good at specific roles. For instance, you might deliberately send your worst player into a bombsite first to gain info about the opponent's defense (and save your best player to react to that info). Or you might send in the person who's best at "entry fragging".
An egalitarian playstyle doesn't make sense when you have human players with different capabilities; you want to play to each person's strengths. If you have an algorithm controlling five heroes (or five instances of the same algorithm, idk how it works), there's no "best player" to send more resources to or dedicate to a specific role. I don't know enough about MOBAs to say whether this is even an advantage.
None of this stuff is particularly interesting in Counter-Strike. A circa-1999 aimbot with a good ping will beat any human player.
Replace "player" with "character" and you're basically there. Typical of MOBAs (and now also of e.g. Valorant these days) is that each character you choose comes with a certain 'kit' of skills and stats that gives it different strengths and weaknesses. A character might have powerful spells early on; others are very weak early but dominate towards the end because they scale really well with extra resources. The former will typically act in a more sacrificial role to enable the latter.
The OpenAI bots favored a strategy where you snowball an early advantage into an early-to-midgame win, before the 'carry' characters start to dominate. That in itself is not new, though; they just executed it in a weird but consistent way. One of the weird ways was indeed that 'egalitarian' approach of having the 'support' characters be greedier, enabling them to have a higher impact on the early/mid-game timing, when the deathball starts rolling.
The Shadow Fiend AI was consistently beating pros in 1v1 because it had flawless technical execution of the standard gameplay on that hero, eventually forcing errors out of the fallible and emotional human players. It wasn't anything new, it was just perfect.
They've moved beyond that now. Shadow Fiend won mostly due to superior micro but their newer 5v5 bot has human-level "planning" and executive function.
Actually, it is famously said that what made Napoleon a fabulous field marshal was his extensive reading and study of historical battles.
This gave him an extraordinary ability to compare live battle situations against a huge library of learned historical situations, helping him make decisions on battle plans.
You're exactly correct about war games. Warfare is more than just moving units around a battlefield. It involves psyops, cyber, logistics, and other factors that are way too complex for contemporary AI to consider.
Probably because you’d end up with an AI that would do something that fulfilled its training criteria, but didn’t really reflect the real desired outcome, like that Tetris AI that just paused the game when it realised it was going to lose. Or the AI would suggest attacking unacceptable non-military targets, or inflict collateral damage, or suggest nuking the enemy as a first strike. AI might be an interesting tool for wargaming, but it’s never going to be a complete strategy guide.
I’d say that AI is used extensively throughout the defence industry in specific areas though, and I’m fairly confident wargaming is one of them. But there are people who have context the AI will lack at all points in the chain. You wouldn’t just have some AI ordering troops about.
> Or the AI would suggest attacking unacceptable non-military targets, or inflict collateral damage
I would expect the people who wrote the AI would impose on it constraints to avoid this – to be useful in the real world, an AI has to comply with the same constraints that real world military commanders have to follow, and the laws of war are among those (certainly if we are talking about the US military.)
(A realistic risk is that it might interpret those rules differently than a human would – maybe in an overly pedantic manner – and its overly pedantic readings of the rules might give human commanders cover or encouragement to engage in the same harmful pedantry.)
> or suggest nuking the enemy as a first strike
I doubt they'd even give an AI that option to consider. Realistically if you are a US military commander – in Afghanistan or Iraq or Syria or wherever the US might intervene next – that's not an option on the menu. The decision to use nuclear weapons only comes from the very top, and the very top will only consider it in utterly dire circumstances.
To be actually a useful tool to military commanders, you want an AI which suggests to them strategies they might not have thought of, or which gives them a reasoned opinion on which of the various options they are considering is most advisable. A useful AI tells them how to use the resources they have, and nuclear weapons are not–in normal circumstances–an available resource to US military operations.
> I would expect the people who wrote the AI would impose on it constraints to avoid this
The devil is in the details.
What if the AI decides that the course of action leading to the desired outcome of winning the conflict is to kill civilians, but it is also aware that outright killing civilians is against the law or will be rejected by the human decision-maker? It might instead recommend a series of strikes that indirectly leads to a large famine, which results in the deaths of those civilians and wins the conflict.
"To be actually a useful tool to military commanders, you want an AI which [...] gives them a reasoned opinion on which of the various options they are considering is most advisable."
With that (far more sophisticated) AI system, leadership could just de-prioritize options with excessive risk of civilian casualty.
The other day I was looking at this – http://www.datalex.org/ [0] – presented with a situation, it asks you questions about it, and then generates a legal opinion on your situation. It shows all its reasoning (in natural language) to arrive at that legal opinion. That's the same basic thing a human advisor does – if you get a formal legal opinion from a lawyer, it doesn't just give you the conclusion, it also explains the arguments and reasoning and considerations it used to arrive at that conclusion.
So I don't see why a strategic military advice platform couldn't do the same basic thing.
It sounds more like a Prolog program. It takes a set of rules and facts and can infer other facts from there, showing you all the steps of the process. Although that is a branch of AI, it is not what one thinks of nowadays hearing the term "AI". One thinks of some fancy, powerful, self-learning black box which can do miracles but can't explain how it does them.
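A minimal sketch of that rule-based style: a toy forward-chaining engine in Python (the rules below are invented placeholders, not anything from datalex). Derived facts come with the chain of rule firings that produced them, and that trace is the "explanation".

```python
# Each rule: (set of premise facts, conclusion fact). Placeholder content.
rules = [
    ({"signed_contract"},                      "obligation_exists"),
    ({"obligation_exists", "missed_deadline"}, "breach"),
    ({"breach"},                               "damages_possible"),
]

def infer(initial_facts):
    # Forward chaining: fire rules until no new facts can be derived.
    facts = set(initial_facts)
    trace = []  # human-readable record of each inference step
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{sorted(premises)} => {conclusion}")
                changed = True
    return facts, trace

facts, trace = infer({"signed_contract", "missed_deadline"})
print("damages_possible" in facts)  # True: derived in three traced steps
```

Unlike a learned black box, every conclusion here is reproducible and auditable, which is exactly the property a legal-opinion generator needs.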
>What if the AI decides that the course of action that leads to the desired outcome of winning the conflict is to kill civilians
I always figured that was what happened in the film Colossus: The Forbin Project: the computers found a way to prevent nuclear war by taking control of the nuclear arsenals on their own and enforcing peace via their own threat of nuclear weapons.
1) A military commander programs an AI squadron with the objective of destroying a heavily defended military target and the constraint of minimizing casualties.
2) The AI veers the bombers 180 degrees off course and attacks a nearby town, purposely and irreparably destroying the only hospitals and drinking water production facilities in the region. Thousands of civilians are killed, and the loss of fresh water will cause many more casualties in the following weeks as people are no longer able to get drinking water. No military targets are hit.
3) The AI is examined. The reason it attacked the town is unclear but it is determined that it was not a malfunction or sabotage.
Given that.... was a war crime actually committed?
The AI cannot commit a crime, it's not a person. The people that ordered the attack specified that it should be attacking a valid military target. The people that created the AI certainly didn't intentionally put any mechanism in there to have the AI purposely commit war crimes.
Given that, is it an actual crime? Or would it be closer to something like a natural disaster?
...to be clear I'm not a bright guy so I've probably missed something obvious...
You haven't actually spent any time with optimization algorithms, have you? The grandparent is on point: on problems as totally ill-specified as this one, you have a choice between nonsensical yet valid suggestions, or death by 1001 constraints (which just ends up being you spelling out the solution and the AI repeating it).
Curiously, using an AI for Red Air/wargaming seems like an exceptionally good use case for an AI that is pretty good most of the time and pretty random at other times.
As a preparation activity, you want to be prepped both for perfect-play opponents and for imperfect opponents that do crazy things like nuke first. Arguably such a system would also help eliminate bias in (pretend) adversarial commanders.
I have some limited experience with naval warship combat software. There are many systems on a ship that can be fully automatic, and that was true even before AI was a thing. The Goalkeeper (https://nl.wikipedia.org/wiki/Goalkeeper), from the '80s, is for example a fully automatic gun that can take out incoming attacks but also make strategic attacks. This system has to be turned on manually, and a chief has to approve its maximum capabilities, such as the maximum number of bullets the system is allowed to use. Other software can already give the operations team the advice to enable it, without using fancy AI. But the command still has to travel through 3 people before it can be turned on. This is mostly for safety. So I expect the same will happen in other fields: a human probably has to get the advice from one machine and decide if another machine is allowed to do a thing.
A battlefield superior "AI" can already be easily fielded.
Just as Goalkeeper will beat any human-operated gun, a cheap disposable drone swarm pointed at an enemy country with a program to blow up anything resembling an airfield will beat any military general at leading an air campaign, and will probably beat even a nuclear weapon in effectiveness. It's already possible without any real AI. It's just that nobody has bothered to make one (or somebody probably did, and is keeping quiet about it).
It's just that we did not have a big war in the age of powerful, disposable computers, and thus never weaponized it. There are many weapons today which claim to "beat nuclear weapons" in their respective contexts, with the main determinant of their being "super" weapons being the lack of humans in the loop.
All of them can be developed and weaponized; it's just that a man with a gun, artillery, and massed air power or armour already works superbly well at the task, and most importantly does so reasonably economically.
Interesting. Now that you mention it, I do seem to vaguely remember an article in the past year or two talking about just how automated some naval ships were getting. This is also a good reminder of the military's relationship to tech in general, which is that it's usually much, much farther along than the civilian world.
Probably in reference to the Zumwalt, where the relationship is not always positive: automation saves cost via a smaller crew, but a smaller crew gets overworked (the USN is already grossly overtasked), has less expertise redundancy if shit hits the fan, and has a worse crew-to-ship-size ratio for damage control needs.
Let's be real: a full-blown conflict (i.e. a fight to the opponent's destruction, assuming a Nash equilibrium of no early surrenders) between team A and team C would probably mean complete destruction of both teams: look at what was achievable in terms of destruction in WW2, and imagine the result with 70 years of better technology, better bombs, etc.
The Earth is too small and our weapons too powerful for playing child games anymore.
So it doesn't really make sense to train an AI for full-scale conflict; nuclear arsenals are already sufficient for MAD. Instead, you'd have to model complex geopolitical, even cultural and psychological details to model surrenders and which territories one could lose, for example. Essentially AGI-complete tasks.
I know our realities are such that we can't let go of defense spending and defense systems yet, but it's looking increasingly ridiculous and dangerous to rely on military strength[1].
[1]: Instead of seeking ethical-economic-cultural multinational equilibrium, that is, a mutually acceptable plan toward individuals and development of our entire society/civilization.
In the sense of each "nation" (non-trivial political subdivisions in general) tolerating the collective vision for development of our society. I don't think bombs are a very effective form of communication or cultural-philosophical exchange.
Full blown conflicts aren’t the only types of conflicts. Small ones, especially those fought by proxies matter too. The Armenia-Azerbaijan conflict of 2020 wasn’t a world war, but it was still consequential. Notably it was won by the effective use of drones.
Conflict will increasingly be drone based. The controllers of those drones will increasingly be wielded by AI. This link is about tactics. This thread got derailed into strategy. But it’s reasonable to think that AI would do a good job of controlling attack air/sea drones with clear success criteria.
It's worth noting that while we have 70 years of better technology, we have much less of it than we used to, and the tech difference isn't that large: machine guns are almost the same, artillery is almost the same, and a ton of conventional bombs is as destructive as in WW2, just more accurate.
The standing armies are generally smaller than the ones before and after WW2, despite a three-times-larger worldwide population. The effective mobilization potential of major countries is much smaller than before or after WW2, as many countries have moved on from total conscription, where you'd expect most adult men to have military training and a huge proportion of the population to actually fight in a war. The quantity of planes, bombs, guns and shells that we can muster is an order of magnitude smaller than 70 years ago, so while each bomb or gun is more effective, the total destructive power is less than it used to be. The world has a fifth as many nuclear weapons as it used to.
So the actual trend is towards a smaller scale of destruction. We have major wars that result in as many casualties as an average WW1 Tuesday, despite being full-blown conflicts for the countries involved, which mobilize as much as they reasonably can in modern circumstances.
Pretty sure the DoD is trying to use AI technology absolutely everywhere they can.
All of these cases assume you have simulators that represent reality realistically enough that you can use AI approaches to find ideal solutions in the parameter space. Dogfights make sense, since there's a lot of computational work done in the aviation industry and the environments are pretty stable; I imagine the constraints on aircraft and pilots are probably pretty well known. Other cases may be a lot messier to model accurately.
Protein-based war planning is based on rough models of how the battle of say defending motor rifle company vs a tank regiment goes, so nothing new here.
A more fundamental challenge is about hunches the generals have about what they don't know due to always incomplete intelligence data.
How do you validate a strategic AI? You can’t trust it to be superior over humans because it beats humans in your simulator because, as you say, it may exploit bugs no one even knew existed, and some of those may not get recognized as bugs. https://en.wikipedia.org/wiki/Military_simulation#Problems_o...:
One notorious example was the US Air Force's Advanced Penetration Model, which due to a programming error made US bombers invulnerable to enemy air defences by inadvertently altering their latitude or longitude when checking their location for a missile impact. This had the effect of 'teleporting' the bomber, at the instant of impact, hundreds or even thousands of miles away, causing the missile to miss. Furthermore, this error went unnoticed for a number of years. Other unrealistic models have had battleships consistently steaming at seventy knots (twice their top speed), an entire tank army halted by a border police detachment, and attrition levels 50% higher than the numbers each force began with.
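For illustration, here's a hypothetical Python reconstruction of that *class* of bug (the actual Advanced Penetration Model code isn't public): a hit check that mutates the unit's coordinates instead of working on a copy, "teleporting" it at the moment of impact.

```python
# Hypothetical sketch only: names and the bogus adjustment are invented.
class Bomber:
    def __init__(self, lat, lon):
        self.lat, self.lon = lat, lon

def near_impact(bomber, impact, radius=0.5):
    # BUG: the "coordinate normalization" writes back to the bomber itself,
    # so by the time the distance test runs, the aircraft has moved.
    bomber.lon = (bomber.lon + 180) % 360 - 180 + 90  # bogus in-place shift
    return (abs(bomber.lat - impact[0]) < radius
            and abs(bomber.lon - impact[1]) < radius)

b = Bomber(50.0, 10.0)
hit = near_impact(b, (50.0, 10.0))  # missile lands exactly on the bomber...
print(hit, b.lon)  # ...yet it "misses", and the bomber is now at lon 100
```

Every subsequent computation sees the relocated bomber as legitimate state, which is why the simulation reports a plausible-looking miss rather than crashing, and why such bugs can go unnoticed for years.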
If you can’t validate it, do you dare trust it above humans?
With tactics, validation still is hard, but you can do (expensive) real-life experiments.
That's very likely being worked on, but strategy is also in major parts shaped by often rather vaguely defined political agendas and goals.
Ideally it also has to work in a grander geopolitical context, which is a lot of complexity to account for and simulate, involving a lot of very abstract and subjective calls.
So it's probably a much more complicated problem than it might appear at first glance.
Modern AI wins at games where the rules are known (like go, or even Dota), but war irl is a game where you need to invent and adapt to a changing set of rules.
As an example - if we had modern ai in ww2, it would help with the position of tanks and soldiers, but wouldn’t invent the idea of breaking Enigma, or the idea of making decoy tanks for the d-day.
Not just a lack of out-of-the-box thinking; there's also the inherent outdatedness of the corpus of documented war in general. There's even a saying describing the problem: "fighting the last war". Which basically means that human strategists are already more like GPT than they should be. The problem is further compounded (beyond inherent outdatedness) by the general scarcity of the corpus; fortunately, there just hasn't been that much war in recent history to write about.
> Sure, it can learn how to fly planes and drones, or drive tanks or terrifying robot dogs
NB: self-driving cars, drones, and Boston Dynamics robots are all examples of systems that make significant use of hand-designed control algorithms. AFAIK none of those systems are doing (especially end-to-end) deep RL for control.
I think the answer to your question might be something like "neural nets aren't as good as you think and have gotten a lot of credit for things they aren't used for and aren't even very good at."
> there doesn't seem to be any talk of using "AI" (i.e. neural-net stuff) for planning and strategy
AlphaStar: excuse me??
People write that these games have predefined rules, as opposed to real-life scenarios.
Such people have probably never actually played StarCraft or any similar game. Sure, the rules of how much damage each unit does, etc., are fixed. (Actually, even those aren't fixed over the months, and a good player needs to be able to adapt to changes.) But what the actual games end up being rarely depends on these rules directly.
What happens instead is a meta-game that is created between you and your opponent where you have to read their strategy and come up with creative ways to deal with it.
In fact, in real life the situation is similar: the laws of physics are always the same, and the stats of an airplane are the same for most of a given year — how much damage it can do, which angles of attack it can support, its speed, etc. But it's the combination of these rules in a real-life situation that makes for the real meta-game.
AlphaStar is absolutely an example of an AI that deals with this exact problem.
PS. Sometimes I see how, in the future when we create human-capable AI, these people will still say "but it's just statistics, it's not real intelligence, it's just advanced SVMs". Who cares what the underlying primitives are? 2+2 does not equal 4 in complex systems like this; it's the emergent, endogenous effects that produce all the bang. I don't care if it's all 1s and 0s, or some other small statistical primitive that you could reduce everything to: what's important is the high-level computation and what it can achieve.
Um, your soldiers don't die (as much). That's probably the immediate overriding concern, since the military-industrial complex in America is fundamentally restricted by a resistance to risking and losing American soldiers.
No soldiers to lose, no resistance to wars and use of weapons. Because we don't care about the rest of the world that we drop bombs on, that's for sure.
Robots don't say no, they do not need indoctrination, they keep secrets, they are expendable, they don't receive benefits, it doesn't take 20 years to make 1 robotic soldier at a time, they're repairable.
If a country decides not to go robotic, other countries will, and those countries will be in control.
> What puzzles me is that as of yet, there doesn't seem to be any talk of using "AI" (ie Neural Net stuff) for planning and strategy, which seems like a MUCH more powerful application than this stuff.
It'll probably be so effective, it'll still be fighting long after all of humanity has destroyed itself and left an irradiated environment.
Here are two science fiction short videos of an artist depicting such a future:
> The strength of computers has always been the ability to analyze literally incomprehensible (to humans) amounts of data and derive unique insights from them
AI actually runs into trouble over the amount of data it needs before humans do. Humans are quicker learners and only need a few iterations of analysis, while current AI needs a plethora of iterations over possible combinations.
For example, the OpenAI Dota 2 AI was doing pretty well, but it had to play a restricted version of the game where only about a dozen heroes were allowed out of ~120 -- roughly 10% of the hero pool. OpenAI never even attempted to compete at the full game before they quit and shut down the project.
It was crushed by the worst pro team at the top tournament. I have never seen a deep learning bot (Alphastar included) beat humans in a game like this when reaction time/micro advantages are removed. Their novel 'strategy' is always optimized around perfect mechanics.
Well OpenAI holds the opinion that they could easily win. However I call bullshit.
When a self-driving car company can have its AI drive around in an Arizona suburb, nobody sane believes that it means they can also drive around in Kolkata, India. However that is basically what happened with the Dota 2 AI effort by OpenAI. They managed to succeed in an extremely limited scenario and then proceeded to put out press releases claiming they had beat Dota 2.
The people paying for these sorts of developments are the people who do that planning and strategy, and people are terrible at automating their own jobs. People presume tasks which they find simple will be easier to automate than tasks they find complex, and imagine a small number of skilled operators having their capabilities multiplied by systems that take care of the tedious labor. They see the ditch diggers being replaced by robots which the foreman can command. But the human brain is fantastic at estimating the size of a rock half buried in the ground, and terrible at keeping track of hundreds of workers' schedules.
It's difficult to admit that what tends to make you a smart and valuable human is your ability to behave more like a computer. It's even more difficult to admit that the parts of your job you consider the most fulfilling and impressive can often be rendered unnecessary with minor process changes or ought to be done by someone else whose skills are more appropriate for those tasks.
You can build simulations and train on them, but then the AI will find quirks in your simulation and exploit them -- or your simulation will be too basic and omit variables that are relevant and that a human can exploit.
And if you do have the perfect simulation, with all the rules and all the variables, you should probably use classical computer science to find the best strategy, not some statistics/guessing/AI approach that might only find a good-enough solution.
I don't think you have enough real-world data to train such an AI. You could probably put the same effort into building an AI to solve economic and social problems instead -- find a better way to solve the issue where some guy spends money on Fortnite skins that could feed an entire family. I would try to create a simulation of the economy; improving the lives of the poor around the world should reduce political tensions and reduce the number of conflicts, IMO.
We are trying to do this, but GPT-N is not a productive avenue, because you can't achieve what you're talking about by training on text. Single-vehicle control is a much simpler problem, so you hear about it because there are successes to talk about. But an attempt at something like SkyNet is very much in the works -- possibly for coordinating attacks, though that isn't something I've ever worked on personally, but very much for automating the entirety of what the intelligence community calls the TCPED cycle. This is something my wife is working on, and the code name for the contract is actually "Cyberdyne."
You're never going to hear anything about the details of how this works or whether or not it works at all because it only works if it's kept a secret.
This makes sense, and even as I was writing the initial comment I realized suggesting GPT was a bit lazy/imprecise, but it gets the general idea across quickly, and GPT HAS been remarkably good at things other than just completing stories or news articles. Realistically, though, yes, I'd expect it would be something geared from the ground up for navigating a game-like space -- like, again, a very, very souped-up version of MarI/O.
Something like speeding up TCPED seems like a much more modest and immediately doable goal; reminds me of efforts to automate financial analysis to some degree. Love the codename if it's actually genuine.
And yes, obviously secrecy makes sense, but I'm not sure to what extent. Obviously the whole point of MAD was that it was loudly trumpeted as a deterrent. It doesn't seem impossible that the same might happen here. On the other hand, the military certainly has some ability to keep things quiet when they need to.
I think that strategy is much more complex than tactics, while improving tactics has real, tangible benefits that can be realized sooner, amplifying power. Even stupid generals can win if their army is 10x as smart and 10x as powerful as the enemy's.
> The strength of computers has always been the ability to analyze literally incomprehensible (to humans) amounts of data and derive unique insights from them, and current machine learning has magnified this ability several-fold.
That's what these F-16s are trying to do: take in visual and sensor information by the truckload, find patterns in it, draw meaningful conclusions, and provide direction.
Your definition of AI is too narrowly scoped. AI can be applied in planning, strategy, logistics, and things like flight mechanics, as well. This is precisely because AI today itself is narrowly focused. We don't have AGI obviously.
Yes, and no. If we break down by time horizon, this is the feasible limit: solving constraints and at-risk problems for operations planners; solving relaxed instances of future scenarios to provide guidance to mission planners given partially observed opponent behavior; and engagement level models. Only the last level is really something you can generate / extract training data for, and at vast effort. We've been modelling air combat for 40 years or more and it's still difficult for larger scenarios. We really have only a nascent concept of how to employ unmanned jets.
Planning and strategy are important, but logistics (which is a subset of planning) is generally more important. The US didn't defeat Iraq in GW1 by using superior technology; there was a point where Iraqi forces could have continued into Saudi Arabia and made things a fait accompli. What defeated Iraq was US logistics. We first inserted the 82nd Airborne as a speed bump, and within roughly 8 days we had F-15s from the 1st TFW, F-16s, AWACS, A-10s, F-117s, etc. deployed to Saudi Arabia.
Amateurs study tactics, professionals study logistics.
AI is good enough to do some planning for you. Looks like we're just getting to the point where they can engage in dog fights. But there's a fine line.
You don't want AIs replacing human generals, because generals need to make tough calls. So tough, that their decisions are still debated by us 100+ years after. War is the most sophisticated application of human intelligence. Dog-fighting F-16s are more like a self driving car in this sense, than a general intelligence capable of winning a war.
You normally have to have a lot of data to get a machine to deal with a situation. There’s a lot more data for flying than making leadership-level decisions. There are likely a lot more unique and fuzzy situations in leadership as well. I’d start off with something simpler, like predicting the markets and weather.
That's only currently possible to the extent that we can build a realistic simulator for the problem we are trying to solve, over which an RL model can learn a good state-action function. Which means that most real-world problems involving planning etc aren't currently amenable to AI/ML methods.
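As a minimal sketch of what "learning a good state-action function over a simulator" means, here is tabular Q-learning on a toy 1-D corridor. Everything here (the environment, the constants) is invented purely for illustration; the point is that the learned policy is only as good as the `step()` function that stands in for reality:

```python
import random

# Toy "simulator": a 1-D corridor of N cells; the agent starts at cell 0 and
# is rewarded only on reaching the goal at cell N-1.
N = 6
ACTIONS = (-1, +1)  # step left, step right

def step(state, action):
    """One simulator tick: returns (next_state, reward, done)."""
    nxt = max(0, min(N - 1, state + action))
    done = nxt == N - 1
    return nxt, (1.0 if done else 0.0), done

# The Q-table is the state-action value function the RL loop estimates.
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

random.seed(0)
for _ in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy exploration over the two actions.
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Standard Q-learning update toward the bootstrapped target.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The greedy policy extracted from Q should step right in every state.
policy = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N - 1)}
```

Swap `step()` for anything the simulator gets wrong about the real world and the agent will cheerfully optimize for the wrong world -- which is exactly the fidelity problem the comment describes.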
War-game simulators have been around since long before neural nets gained commercial prominence.
There are plenty of records of how the military has funded multiple supercomputers and algorithms, including popular video-game engines, for this very purpose.
This AI would be the ancestor of Asimov's psychohistory (https://en.wikipedia.org/wiki/Psychohistory_(fictional)). I am convinced that Google is working on it. In trading, predictive models stop working once they are disclosed. If such AIs are used, governments will keep them secret. Like mass surveillance, it will only be confirmed many years from now.
A Tesla cannot make a left turn (https://news.ycombinator.com/item?id=26542462), which most 16-year-old and 85-year-old drivers manage while inebriated or otherwise not fully in control of their faculties. Yet we fully expect an F-16, which takes years of training for highly fit specimens of the human race to fly, to be controlled by an AI.
I somehow don't see it happening in the next two decades, or at least until a decade after we have L5 driving.
Curiously, flight is much better suited to automation than driving. It's an easier control problem. Which is why planes and space vehicles have been taking off, flying and landing on their own for the past four decades, while the equivalent isn't true for cars even today.
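To illustrate why flight control is the comparatively tractable problem, here is a toy altitude-hold loop: a PD controller driving a crude point-mass "aircraft" toward a target altitude. All constants and the dynamics are invented for illustration; real autopilots add integral action, filtering, and vastly richer aerodynamic models:

```python
# Toy altitude-hold: a PD controller on a point-mass "aircraft".
dt = 0.1           # control-loop timestep, seconds
target = 100.0     # desired altitude, meters
kp, kd = 2.0, 3.0  # proportional and derivative gains (made-up values)

alt, vel = 0.0, 0.0        # state: altitude and vertical speed
prev_err = target - alt

for _ in range(2000):      # simulate 200 seconds
    err = target - alt
    deriv = (err - prev_err) / dt
    prev_err = err
    # Commanded vertical acceleration, clamped like a real actuator would be.
    accel = max(-5.0, min(5.0, kp * err + kd * deriv))
    vel += accel * dt
    alt += vel * dt

print(round(alt, 1))  # settles near the 100 m target
```

A few dozen lines suffice because the plant is smooth and the objective is unambiguous -- there is no equivalent closed-form "hold the lane while a pedestrian ambles out from behind a parked van" for driving, which is the gap the comment is pointing at.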
Well, the article is about somebody doing this (in a simulator), not just talking about doing it. Of course, we are talking defense contractors here, so it's hard to tell wishful thinking from reality. But logically, they would have been working on this for quite some time, as it has been an obvious thing to throw lots of dollars at for a decade or more. So it wouldn't surprise me if this was about as real as it looks.
Anyway, you can compensate for human smartness by being inhumanly fast and exploiting human weaknesses. E.g., a 16G turn would kill or knock out the pilot but is probably OK from a structural point of view. Likewise, making decisions faster than a human can matters when missiles are traveling toward you at several times the speed of sound. And maintaining situational awareness is something humans aren't good at when they are pulling many Gs and dealing with a very complex battlefield: tunnel vision, blacked-out vision, etc. are all things fighter pilots have to fight against.
For the same reason, most dogfights are over before they begin these days. Air combat mostly involves missiles fired from many miles away and a game of trying to dodge them -- this space is already dominated by computer-operated missiles. Shooting down planes with onboard guns has mostly stopped being a thing, except of course for countries still flying antiquated airplanes.
Moving in air space is a well-defined problem, the rules of physics and traffic are clear and you can expect others to follow them.
Traffic rules are often not well defined. And even if they are, people don't follow them to the letter, so there is this kind of fuzzy, "this is how we drive here" rule set which isn't well-defined.
I don't disagree, but the "fit humans" requirement exists because of two things AI pilots don't suffer from: blackout under G-forces and the difficulty of maintaining situational awareness.
That said, they can put these planes into mock dogfights with human pilots on the redflag range in Nevada and see where they have advantages and where they fall down. Personally I would expect this sort of thing to be a pilot assist long before we got to Level 3 war fighting.
Agreed. Computers have been massively important to flying for decades. It's already basically impossible to fly a modern warbird without computer assistance. Here's a Hacker News-level amount of nuance for that statement: https://aviation.stackexchange.com/questions/8049/are-fighte...
A Tesla could make a left turn, but they don't, because the company doesn't want the liability. Having to cross 3 lanes adds a lot of moving parts, and you can't see a car moving 35 mph in the 3rd lane when the other 2 lanes are stopped.
A plane basically has to avoid the ground and other planes.
The planes don't need human passengers. Humans are the plane's Achilles' heel.
Those humans have a goal of destroying the opponent without dying. Tesla’s autopilot would be much more efficient if killing people sometimes would be OK. This is also why drones are so dangerous to the future of humanity.
Where you could ramp up a plane’s production, I’d wager there’s a scarcity problem when it comes to pilots (even though you could ramp up pilot training).
The physical fitness is required because of the high stress environment. Computers are obviously superior here, machines are able to outperform any human in any physical situation.
There are not nearly as many hazards in the sky as on your drive to work; it seems flying may be the perfect use case for AI.
Then why have a real fight at all? The US launches its virtual fighters; China, Russia, and the UK develop their own. Let future wars consist of virtual fights! The side that loses a virtual plane pays the cost of the real plane to the other side, and the country that pulls out of the game (once it starts) loses the war. I see the future of war as a totally virtual affair. No need for Boston Dynamics-style robots or expensive military hardware.
This idea is nothing new. In one of the Star Trek episodes this was the theme; the only difference there was that a real human was sacrificed for each virtual human lost!
What happens when the losing party in the sim-war disagrees with the outcome and accuses the other side of cheating?
A mugger accosts you on the street, points a knife at you and demands your money. You've managed to convince him to play a round of Quake 3 Arena instead - the winner gets your money. You win. Do you think the mugger will just say "gg mg", and leave you with your money?
War is the same thing, just scaled to groups of people.
The haves could fight drone wars between themselves, but that won't happen, because of the nuclear deterrent.
So the haves will fight with drones and the have-nots will spill their blood.
This was presaged in Afghanistan: Army drone pilots did their shifts killing Afghan soldiers by drone from an American base, then got in their cars and drove home to their families in the suburbs.
Next up: all those "shooter" gamers can get paid to operate robot infantry! None of that pesky resistance to actually killing people like they dealt with back in WWI and WWII.
> This idea is nothing new. In one of the Star Trek episodes this was the theme; the only difference there was that a real human was sacrificed for each virtual human lost!
Surface Detail by Iain M Banks also explores this idea. That novel also goes through the obvious problems -- you have to trust that the virtual outcome will actually be accepted, and that the conflict won't bleed out into the real. And at the point at which you have that trust in your opponent would you really be having a war in the first place?
It is a nice idea though, shame it's unlikely to ever become a reality.
Because you can’t actually force someone to pay up unless you have the actual physical ability to force them to.
Such a system would be easy to game. Simply build a physical army and go beat up everyone else while they’re playing dance dance revolution, war edition.
Yes, this idea is nothing new. It's called sports. But at the end, when you keep losing at the game, you can still grab a knife and kill the f*ckers and just take away the trophy. Or just feel better.
War is, and has always been, about killing the other guys. (Now, I'm not saying we shouldn't do away with as much of it as we can, but it's not as easy as making it into a game and calling it a day.)
Well also if the US ends up in a scuffle with a super power, air superiority will more or less decide which conventional force achieves its objectives.
I think that is pretty short-sighted. A new major world war could be won with viruses, bacteria, drones, small robots, hacking of financial systems, space lasers, genetically enhancing your population for economic domination, etc. Air superiority is possibly a factor in some versions of war, but in reality it mostly gives us an advantage that lets us press our will upon lesser militaries. That doesn't mean it couldn't be the dominant factor, but I imagine a major war between superpowers would be so costly in human life (since we are so good at killing now) that alternatives to traditional means of takeover would probably be preferred.
You're failing to define what the objectives are. Wars have objectives, that's why people fight them.
The US and China have no ability to occupy each other's lands, nor interest in doing so. Neither do they benefit from out right destruction of infrastructure because that escalates to a nuclear conflict which wins no one anything.
The most plausible engagement is US intervention in the strait of Taiwan to thwart a Chinese invasion and occupation. And that war is essentially conventional: it results from US fleets being out of position and internal politics making it look like a good opportunity.
Counterpoint: you don't need a war for the arms industry to make a profit. What you really need is the potential for a war. Whether the armaments are fired or lie dormant in storage for decades makes no difference to the contractor, because they are sold either way. As for votes: the last US president to start a war became very unpopular because of it.
We’re talking governments, not consumers. The government can do all kinds of crazy stuff to contractors that economic theory doesn’t account for, such as dictating the amount of profit the contractor gets to make and mandating that the weapon system be manufactured by another company because of any number of reasons. A war, or any kind of emergency, can increase the odds of the government doing crazy stuff. It’s not good for a contractor when the government decides they should make 5% profit when the stock market that year is going up by 10%.
Having a large, old inventory of weapons the government maintains can be profitable for the contractor because they can sell support for those systems that no one else can. This support alone can employ hundreds of engineers and bring millions a year to a contractor. You can’t do that if the weapons are all fired.
This is all in service of keeping the government happy. The government, state department specifically, can decide which countries the contractor can export the weapon systems too. The set of countries the government will allow this for will be bigger if there is no conflict the government is involved in.
Defense contracting is a very counter intuitive business.
Or may be the next world war will be won by a country that doesn't fight it.
China is building international super highways for trade, continent wide trade belts and inter-continent markets.
I don't think the US is lucky enough to get a war now. Countries will just maintain some minimum deterrent in the form of missile forces, so that they don't get outright invaded or sanctioned. Beyond that, it will just be trade war, and a question of which country has citizens lined up to work 20-hour shifts. What makes all this possible is that we are not making leapfrog advances in science (physics) anymore; eventually engineering everywhere catches up with the science. In short, everyone can and will have weapons to deter invasions and sanctions.
The problem for west is not that a war will happen, but that it won't.
Every once in a while the world arrives at a paradigm change where everything changes. And no matter how good you are at the old game, it doesn't matter because the game has changed. We are likely at that place now.
One cannot easily discuss tanks in Iraq without mentioning the Battle of 73 Easting at least in passing. Superior armament, training, and tactics make a huge difference.
TL;DR: As part of that engagement, a troop of 9 US tanks and 12 APCs encountered one of Iraq's finest tank brigades. They destroyed ~50 Iraqi tanks, 25 APCs, 40 trucks and more in 23 minutes without taking any casualties.
> Superior armament, training, and tactics make a huge difference.
The Republican Guard was absurdly outmatched.
Re: training... arguably, the encounter occurred in the way it did due to (then) Captain McMaster's aggressive advancement leading to the necessity of ignoring regimental orders not to become decisively engaged.
What made the decisive difference was technology, specifically, infrared sighting and laser range-finders available to and used only by one side.
They're "sipping coffee in Paris" as in they've beaten NATO and occupied Continental Europe. They're asking "who won the air war?", with the implication that they don't really care because they won the part that mattered.
Sounds like the fundamental dilemma of existence: that an individual is a free agent in a prescriptive system. Can businesses be free agents? Certainly someone is excited about laser missile defense, and certainly the economy of ideas is in play.
While we’re at it we need to remove the stigma of working on this type of software. China and Russia are plowing full speed into this technology with their best engineers. We must encourage our bright engineers to work on this tech.
What will Russia and China do if they come out ahead with AI weapons? Whatever they want.
If AI can competently dogfight in an F16, doesn't that make the F16 completely obsolete for dogfighting? Because the same sort of AI in a craft that doesn't have to cater for a human or even bother making it back to base (say, a missile) is going to kick the butt of an F16 no matter who the pilot is.
It may not be obsolete as such. Rather, fuel constraints make modern-day dogfighting (Top Gun-style) too limited in efficacy for either side -- especially when most recent conflicts displayed a decisive technological disparity in the air, and to a lesser extent on the ground.
It seems that bombing still remains the most in-demand task for the airforce (fighters included). Should that be done by AI too?
The new air threat appears to stem from the ability to fit nuclear power sources onboard cruise missiles, which basically turns each such missile into a very determined UCAV -- a revisiting of Project Pluto [1].
My thoughts are if you don't need a human in the pilot seat, you don't need a pilot seat. Fewer constraints and your craft can become a lot more capable. Drones with AI pilots or AI assist seems to fit right in here. And it seems to be the way things already moved a war or two ago. Teaching AI to dogfight in F16s seems completely pointless. Even to train F16 pilots crazy enough to enter a battlefield against a capable enemy, where you will be shot down by an AI guided hypersonic missile or drone swarm rather than jousting against another mounted knight in honorable 1 on 1 combat.
I think the implications are much more paradigm-changing -- an impact on the order of the dreadnought invalidating pre-dreadnought fleets. Competent AI pilot -> autonomous vehicles with more extreme flight performance -> dramatically reduced anti-air capabilities. A UAV without meatbag limits and with more fuel is an extremely performant missile that will be significantly more challenging to intercept, to the point where anti-air might no longer be viable.
This tech gap means lesser adversaries with current-generation meatbag hardware get stomped disproportionately harder. But it's a capability reset among peer adversaries pursuing similar technology, i.e. next-gen UAVs could make manned aircraft obsolete. Imagine a scenario where two sides (US/CN) have mutual air superiority because future UAVs can no longer be effectively challenged. If the only viable tactic is offense, then engagements shift to a numbers game, which favors whoever can field more in theater -- which in turn reduces the viability of carrier-based naval power projection. 50 years of US manned naval and aviation buildup and expertise could become obsolete the same way the UK's massive pre-dreadnought navy did: a technical reset that enabled the German navy to catch up closer to parity via an arms race.
Only if you consider single games; in repeated games a lower risk engagement (choose to draw or retreat rather than take a high chance of loss) will make sense.
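That repeated-game point can be made concrete with toy arithmetic (all probabilities invented): a surviving aircraft fights again, so kills-before-loss follow a geometric distribution, and a doctrine that declines unfavorable engagements to raise its per-engagement win rate can dominate a more aggressive one over a campaign:

```python
# Toy repeated-engagement model (all numbers invented). Each engagement is
# win-or-die; a surviving aircraft fights again, so the number of kills an
# aircraft scores before being lost is geometrically distributed.

def expected_kills_per_aircraft(p_win):
    """Expected wins before the first loss, when each engagement is won
    independently with probability p_win: E = p / (1 - p)."""
    return p_win / (1.0 - p_win)

N = 100  # hypothetical fleet size
aggressive = N * expected_kills_per_aircraft(0.60)  # take every fight
cautious = N * expected_kills_per_aircraft(0.75)    # decline bad matchups

print(aggressive, cautious)  # the cautious doctrine doubles expected output
```

This of course ignores tempo and objectives (sometimes you must accept a bad engagement), so it's only a sketch of why single-game and repeated-game optima differ.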
The flight-dynamics model won't have data points for large parts of the flight envelope, so if your robo-pilot exceeds tested limits, we don't know what will happen, or even whether control is recoverable. Experienced pilots arguably understand dynamic performance better than state-space models do -- pilots were the original learning platform! People learn jets in ~100 hours; it remains to be seen how much training a machine takes.
Pulling 12G all the time also imposes stresses on the platforms that they were not designed for, so you'd probably want to ration that.
What I'm getting at is that sustainment of operations is as big a problem as individual engagements. You can lose a whole campaign by responding too heavily (or too weakly!) to feints and probing.
Silver lining: When autonomous fighter planes do away with the human pilot, maybe sci-fi authors will finally stop putting fragile meat bags in their space war machines.
Somewhat off topic, but I would like to see this in commercial aircraft first. Dogfights? No -- automated AI running through checklists and automated coordination with ATC ground systems. Today, if the aircraft you are flying in has a problem, the captain and first officer work as a team through a long set of conditional checklists. This is precious time they don't have, and it's very prone to human error. If something is not correctable, they may call pan-pan or mayday, if they have time; the first priority is to fly the plane. With "AI", the aircraft could figure out the problem as it develops, attempt several thousand combinations of corrections or compensating controls, and ATC would already know the exact condition before a human uttered a word. Automatic escalation could bring in support staff with knowledge of the aircraft if it can't self-correct. Instead, today we depend on humans for all of this and then recover the flight data recorder to see, after the fact, what went wrong. Once we perfect this, then teach the aircraft dogfighting for aircraft/drone avoidance, allowing aircraft to communicate their intentions. This could also help compensate for overzealous pilots pulling back too hard on takeoff, inducing drag/stall at a high angle of attack and low speed.
It depends on what goes wrong. If the failure is within the design spec (eg Engine 2 just flamed out) then it could speed through the checklists faster than humans. That kind of system may have prevented the loss of Air France 447. I think an AI system would have resulted in a higher loss of life on United 232, when the four humans in the flight deck managed to figure out how to crash the plane on a runway (where a MASSIVE fire/rescue response was waiting) instead of just flopping over into the nearest cornfield.
Related, here's my favorite radio call of all time, recorded when Captain Haynes was flinging the out-of-control United 232 in the direction of the airport, knowing that even if he made it to a runway he didn't have any way to stop.
"Sioux City Approach: United Two Thirty-Two Heavy, the wind's currently three six zero at one one; three sixty at eleven. You're cleared to land on any runway."
"Haynes: [laughter] Roger. [laughter] You want to be particular and make it a runway, huh?"
I was thinking through something recently. Here's where I think things are going:
- I suspect that air platforms will be littered with sensors and massive amounts of data will be collected.
- This data will be used to train AI to complete missions.
- The USAF will start by augmenting human-led teams with AI flown aircraft.
- Polyglot teams consisting of platforms with different roles in the same mission.
All of this has led me to believe that the countries that engage in the most air missions (most combat active air force) could ultimately have AI models orders of magnitude better than other countries.
It would be great to hear someone who knows more than me (not hard) interrogate this idea. I've been captured by it and I'm meandering down the road of possibilities that start there.
Didn't see anything in the article - has anyone come across sources for more details on the actual network architectures/algorithms they used? That they prefaced one of the algo names with Alpha is probably a good clue...
They mention sub-scale aircraft tests at the end of 2021. Does anybody know by any chance what that means? How big is that sub-scale? Would they possibly have live ammunition?
This is partly true. But you might still want to avoid high G-forces -- not because of who or what controls your airborne unit, but because maneuvering sheds kinetic energy, which is undesirable in combat, so you might want to perform drastic maneuvers sparingly for purely kinematic reasons.
No, the biggest limitation on a fighter plane is the fuel fraction. Even new UCAVs won't be designed to handle G forces higher than current manned fighters because it isn't very useful and requires bad design compromises in other areas.
Neither the F-22 nor the F-35 is designed to win via close-quarters dogfights, but rather at medium-to-long-range engagements, where the opponent can't even isolate what direction you're coming from, let alone achieve missile lock.
Having said that, the F-22 has M1.8 supercruise, and F-35 can boost to M1.3 and then cruise, so they have a considerable combat speed advantage over the F-16.
This is obsolete in the grand scheme of a war. The future is swarms of drones. Imagine a future Tomahawk-like missile traveling at 20+ times the speed of sound and exploding over Beijing, releasing ten thousand drones, each capable of maybe 10 minutes of flight but enough to search for high-value targets and -- kaboom! -- China's entire communist leadership dead. Win the war before it starts.
Technically just nonsense (Tomahawk is subsonic, could not carry many drones, very unlikely to make it that far, the Chinese govt is vast and could not be 'taken out' even with 10000 cruise missiles), but the most objectionable part is "Win the war before it's starting". These are human lives we're talking about. No one wins by starting a war, and there are no "home by Christmas" wars now, if there ever were.
People were quite certain wars were a thing of the past before World War 1: "the economic cost of war was so great that no one could possibly hope to gain by starting a war the consequences of which would be so disastrous."
"I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones." -Albert Einstein
The post-1945 peace we've enjoyed exists because a lot changed after WWII, not least the development of nuclear weapons and the frameworks of nuclear deterrence used around the world.
The problem now is that so many players have nuclear weapons they might leverage in dire situations that these sorts of conflicts have largely been avoided. Most "wars" now occur through financial/economic, informational, and psychological means of gaining the upper hand. The remaining conflicts are largely against smaller players who don't have nuclear weapons or even comparable fighter jets.
I wouldn't rule out another major war, though, because people seem to crave conflict -- but I think it's going to take a lot more to get to the scale of conflicts we've seen before.