The End of Manned Aviation?

I usually don’t comment on current affairs because commenting on them is time-critical, and I neither have the time nor feel inclined to fire off an article under that pressure; I like to take my time and really work an article. However, recent events do force me to comment on a very particular issue: an AI just beat a fighter pilot, and some people think this is the end of military aviation as we know it. Never mind that such sweeping generalizations about the military are usually wrong, it is an interesting development. It is especially interesting for me, since I have a background in aerospace engineering (though I have not “engineered” in over five years, so take what I say with a grain of salt) and specialized in nonlinear adaptive control (parts of which fall within the “AI fold”) during my grad school days.

The Event
The latest trial was part of an ongoing DARPA investigation into the use of AI in military aviation. Several teams of developers (including industry giants like Lockheed Martin) created different learning algorithms and trained them. Initial trials focused on training the algorithms, and according to Popular Science, this was a bumpy undertaking:

The first trial in the series, held last fall, was very much rookie algorithms trying to figure out aviation fundamentals, explains Col. Dan Javorsek, the manager of the event at DARPA and a former F-16 aviator and test pilot. “What you were basically watching was the AI agents learning to fly the plane,” Javorsek says. (His call sign is “Animal,” a reference to the Muppets.) “A lot of them killed themselves on accident—they would fly into the ground, or they would just forget about the bad guy altogether, and just drive off in some direction.”

This is quite normal. The algorithm has no frame of reference whatsoever for what is happening and has to “learn” how to act in a specific environment. This goes to the core of AI design: how to create an algorithm that can efficiently and effectively learn the rules of the environment it is in (there is a very interesting video on YouTube visualizing the learning process of a hide-and-seek AI; much of the theory is described in Nonlinear Systems by Khalil and in this paper on AlphaGo).
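
To make this concrete, here is a minimal sketch of the kind of trial-and-error loop such agents are trained with, using tabular Q-learning, one of the simplest reinforcement-learning methods. The environment interface (env, its reset/step methods, its action list) is a hypothetical placeholder, not the actual DARPA setup:

    import random
    from collections import defaultdict

    def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
        """Hypothetical env exposes reset(), step(action), and a list env.actions."""
        q = defaultdict(float)  # value estimates for (state, action) pairs
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                # Try a random action with probability epsilon, else exploit
                # the current best estimate.
                if random.random() < epsilon:
                    action = random.choice(env.actions)
                else:
                    action = max(env.actions, key=lambda a: q[(state, a)])
                next_state, reward, done = env.step(action)
                # Nudge the estimate toward the observed reward plus the
                # discounted value of the best follow-up action.
                best_next = max(q[(next_state, a)] for a in env.actions)
                q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
                state = next_state
        return q

Early in training the random exploratory actions dominate, which is exactly the “flying into the ground” phase Javorsek describes; only after many episodes do the value estimates start to encode sensible behavior.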

Heron Systems, a small company out of Maryland and Virginia, won the last trial, in which its AI design was pitted against a real human pilot. However, most of the conditions of this trial are unknown, so it is hard to come to any kind of conclusion. A lot of questions need to be answered before one can judge the viability of this specific AI in real-life air-to-air combat. What is its exact structure? Which data, and how much of it, was used to train it? I think it would be very interesting to compare the “flight hours” of the AI to those of the pilot (I am assuming that the AI had orders of magnitude more training hours…somewhere in here there is an argument to be made for more flight hours for pilots).

Another thing to keep in mind (this will be important later) is that this was all a “software-based trial”: the AI was not flying an actual airplane; instead, a flight simulation was run and the AI directly interfaced with the simulation software. I am assuming that the AI had perfect situational awareness here, i.e. it knew where the enemy was at all times, it knew the environmental conditions exactly, etc.
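
To illustrate what such a direct interface implies, here is a hypothetical sketch (the sim and agent objects and their methods are assumptions, not the actual trial setup). The agent is handed the complete, noise-free world state every step, a luxury no real sensor suite provides:

    # Hypothetical sketch: in a software-based trial the agent reads the
    # exact simulation state instead of estimating it from noisy sensors.
    def run_episode(sim, agent):
        state = sim.reset()            # exact enemy position, winds, own energy state, ...
        while not sim.done:
            action = agent.act(state)  # e.g. stick and throttle commands
            state = sim.step(action)   # next exact state: no noise, no dropouts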

The Challenge of Situational Awareness

When moving from a computer-simulated environment into the real world, situational awareness for autonomous systems becomes a challenge. The AI needs some way to know where the enemy is, and this is harder than it sounds.

The first hurdle will be machine vision. The AI will need a way to visually grasp the environment and derive implications from it. Even though we are making progress every day, machine vision has not yet been solved to a satisfactory degree. Take cleaning robots, for example. Humans can build incredibly sophisticated hardware and software; yet cleaning robots, which often rely on visual input to guide themselves around a room, are still rudimentary and failure-prone.

Vision is not the only challenge in this area. The next hurdle is sensor integration. Pilots rely on a myriad of aircraft sensors for their situational awareness. The fusion of all this information, i.e. the act of painting a coherent picture and thereby gaining situational awareness, is done by the human brain. In the absence of a human brain, this has to be done by a computer, and that is harder than it sounds. Different sensors have different error tolerances and accuracy margins, as well as different reliabilities. Then there is the problem of contradictory sensor information: what is the AI to do if one sensor says one thing and another says something else, especially when a mistake could mean catastrophic failure?
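
As a toy illustration of the fusion problem, here is a minimal sketch of inverse-variance weighting, one of the simplest ways to combine readings of differing accuracy (the sensor names and numbers are made up):

    def fuse(readings):
        """Combine (value, variance) estimates of the same quantity,
        weighting each sensor by how much we trust it."""
        weights = [1.0 / var for _, var in readings]
        fused = sum(w * v for w, (v, _) in zip(weights, readings)) / sum(weights)
        fused_var = 1.0 / sum(weights)  # the combined estimate is the most certain
        return fused, fused_var

    # Three bearing estimates (degrees) that slightly disagree:
    radar = (42.0, 1.0)      # low variance, trusted most
    irst = (44.5, 4.0)
    datalink = (41.0, 9.0)
    print(fuse([radar, irst, datalink]))  # -> roughly (42.4, 0.73)

Note the built-in assumption: every sensor is merely noisy, not malicious. A weighting scheme like this has no concept of a sensor that is deliberately lying.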

This leads us to the next problem: jamming and spoofing. Say what you want, but I think humans are the most un-jammable and un-spoofable systems on the battlefield. A human will intuitively know that flares are flares and will not mistake them for an enemy aircraft. A well-trained pilot will realize that his GPS is playing games with him and will not land on enemy territory (unlike an unmanned aerial system). Developments in offensive and defensive technology behave like a pendulum: the introduction of an offensive technology leads to the development of a defensive technology designed to counter it, which in turn leads to new offensive technology designed to beat that, and so on. It will not take long until the pendulum swings the other way and new technology designed to beat unmanned aircraft is fielded.

Then there is the issue of cost. One common argument for unmanned systems is that they will be much more cost-efficient (since there will be no life-support systems on board) and more capable than humans. We have talked about capabilities above; let’s talk a little bit about cost. Software and hardware are not exactly cheap, especially in the realm of military aviation. The costs are usually driven by relatively small batch numbers (compared to demand on the civilian side and in non-aviation applications) and by the high-reliability demands of military and aviation regulations. This is exemplified by the exploding costs of the US F-35 and F-22 programs (or any other military high-tech program, really). It seems like software will drive the price up even further, since software development is much more expensive than hardware. Considering that a fully autonomous AI would need to be developed according to military and aviation reliability standards, then maintained and upgraded throughout its lifetime, it is not at all clear that fully autonomous flight will be much cheaper than manned flight.

The Core of the Debate
The important question that I have not seen anybody ask so far is this: what comes after the machines? There is a push for autonomous and optionally manned land systems, but what happens when all these systems are destroyed? In a near-peer conflict, what is going to happen when one side manages to destroy the other side’s autonomous fleet? Will the losing side stop fighting, or will it move on to manned flight and manned systems? Will autonomous systems even be viable in a near-peer conflict, where jamming, spoofing, electromagnetic attacks, and the targeting of crucial networking infrastructure will be common occurrences? These are the hard and important questions, and I think they imply that manned flight will not disappear.

The Results of DARPA’s Experiment
What does this experiment prove? I think it proves that humans are capable of building very sophisticated software and that our machine-learning algorithms are top-notch. I think it also shows that AI will be able to support humans, but that it will not replace them in the near future. The people at DARPA are aware of that, as they stated themselves in Popular Science:

Javorsek says that looking into the future, he sees a split between the types of tasks that AI and humans might handle, with algorithms focusing on physically flying an aircraft and people free to keep their minds on the bigger picture. “The vision here is that we get human brains at the right spot,” he reflects. Artificial components can focus on the “low-level, maneuvering, tactical tasks,” he says, while their flesh-and-blood counterparts can be “battle managers” who are able to read “context, intent, and sentiment of the adversary.”

What do you think of this new development?
Share your thoughts in the comments!
