
The Evolution of Artificial Intelligence and the Future of National Security

Artificial intelligence is all the rage these days. In the popular media, regular cyber systems seem almost passé, as writers focus on AI and conjure up images of everything from real-life Terminator robots to more benign companions. In intelligence circles, China’s use of closed-circuit television, facial recognition technology, and other monitoring systems suggests the arrival of Big Brother—if not quite in 1984, then only about forty years later. At the Pentagon, legions of officers and analysts talk about the AI race with China, often with foreboding admonitions that the United States cannot afford to be second in class in this emerging realm of technology. In policy circles, people wonder about the ethics of AI—such as whether we can really delegate to robots the ability to use lethal force against America’s enemies, however bad they may be. A new report by the Defense Innovation Board lays out broad principles for the future ethics of AI, but only in general terms that leave much further work still to be done.

What does it all really mean, and is AI likely to be all it’s cracked up to be? We think the answer is complex, and that a modest dose of cold water should be thrown on the subject. In fact, many of the AI systems being envisioned today will take decades to develop. Moreover, AI is often confused with things it is not. Precision about the concept will be essential if we are to have intelligent discussions about how to research, develop, and regulate AI in the years ahead.

AI systems are basically computers that can “learn” how to do things through a process of trial and error, with some mechanism for telling them when they are right and when they are wrong—such as picking out missiles in photographs, or people in crowds, as with the Pentagon’s “Project Maven”—and then applying what they have learned to interpret future data. In other words, with AI, the software is, in effect, built by the machine itself. The broad computational approach for a given problem is determined in advance by real old-fashioned humans, but the actual algorithm is created through a process of trial and error by the computer as it ingests and processes huge amounts of data. The thought process of the machine is really not that sophisticated. It is developing artificial instincts more than intelligence: examining huge amounts of raw data and figuring out how to recognize a cat in a photo or a missile launcher on a crowded highway, rather than engaging in deep thought (at least for the foreseeable future).
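To make that trial-and-error loop concrete, here is a minimal, hypothetical sketch of supervised learning in Python. The two-dimensional points, labels, and perceptron update rule are stand-ins chosen purely for illustration; this is not how Project Maven or any fielded system works, only a toy showing that humans fix the broad approach while the machine’s decision rule is shaped by the data.

```python
# A toy illustration of the "trial and error with feedback" loop described above.
# Humans fix the broad approach (a linear classifier and an update rule); the
# machine finds its own decision boundary by repeatedly guessing, being told
# whether it was right, and nudging its parameters. The data is synthetic and
# the task (separating two clusters of 2-D points) is only a stand-in for
# something like "missile launcher vs. not."

import random

random.seed(0)

# Synthetic labeled examples: points near (1, 1) are class +1, points near
# (-1, -1) are class -1. In a real system these would be labeled image features.
examples = (
    [((random.gauss(1, 0.5), random.gauss(1, 0.5)), +1) for _ in range(100)]
    + [((random.gauss(-1, 0.5), random.gauss(-1, 0.5)), -1) for _ in range(100)]
)

# Parameters the machine will learn: weights and a bias for a linear rule.
w = [0.0, 0.0]
b = 0.0
learning_rate = 0.1

def predict(point):
    """The machine's current guess: +1 or -1, depending on which side of the
    learned boundary the point falls."""
    score = w[0] * point[0] + w[1] * point[1] + b
    return +1 if score >= 0 else -1

# Trial and error: guess, get told right or wrong, adjust.
for epoch in range(10):
    mistakes = 0
    for point, label in examples:
        if predict(point) != label:          # "you were wrong"
            w[0] += learning_rate * label * point[0]
            w[1] += learning_rate * label * point[1]
            b += learning_rate * label
            mistakes += 1
    print(f"pass {epoch + 1}: {mistakes} mistakes")

# After training, the decision rule (the "algorithm") was shaped by the data,
# not written line by line by a programmer.
print("learned weights:", w, "bias:", b)
```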

This definition allows us quickly to identify some types of computer systems that are not, in fact, AI. They may be important, impressive, and crucial to the warfighter, but they are not artificial intelligence, because they do not create their own algorithms out of data and multiple iterations. There is no machine learning involved, to put it differently. As our colleague Tom Stefanick points out, there is a fundamental difference between advanced algorithms, which have been around for decades (though they are constantly improving as computers get faster), and artificial intelligence. There is also a difference between an autonomous weapons system and AI-directed robotics.

For example, the computers that guide a cruise missile or a drone are not displaying AI. They follow an elaborate but predetermined script, using sensors to take in data and feeding it to computers, which then use software (developed in advance by humans) to determine the next move and the right place to detonate any weapons. This is autonomy. It is not AI.
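By contrast with the learning sketch above, a purely autonomous system of this kind runs a script fixed entirely in advance. The sketch below is hypothetical (the waypoints, thresholds, and trigger condition are invented placeholders, not any real weapon’s logic), but it illustrates the difference: every rule is written by a human beforehand, and nothing in the code changes with experience.

```python
# A contrasting sketch: "autonomy without AI." Every rule below was written in
# advance by a human; nothing is learned from data. The waypoints, tolerance,
# and trigger condition are invented placeholders for illustration only.

from dataclasses import dataclass

@dataclass
class SensorReading:
    position: tuple       # (x, y) estimated by navigation sensors
    target_in_view: bool  # did the seeker match a pre-loaded target signature?

WAYPOINTS = [(0, 0), (10, 5), (25, 5), (40, 12)]   # pre-planned route
ARRIVAL_TOLERANCE = 1.0                            # fixed threshold, chosen by engineers

def next_action(reading: SensorReading, waypoint_index: int) -> str:
    """Predetermined decision logic: follow the route, act only when the
    scripted conditions are met. No parameter here changes with experience."""
    if reading.target_in_view and waypoint_index == len(WAYPOINTS) - 1:
        return "detonate"          # condition and location fixed in advance
    target = WAYPOINTS[waypoint_index]
    dx = target[0] - reading.position[0]
    dy = target[1] - reading.position[1]
    if (dx * dx + dy * dy) ** 0.5 < ARRIVAL_TOLERANCE:
        return "advance_to_next_waypoint"
    return "steer_toward_waypoint"

# Example: mid-route, no target match yet, so keep steering. The answer is the
# same on every run; the software never rewrites its own rules.
print(next_action(SensorReading(position=(9.5, 4.8), target_in_view=False), waypoint_index=1))
```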

Or, to use an example closer to home for most people, when your smartphone uses an app like Google Maps or Waze to recommend the fastest route between two points, this is not necessarily AI either. There are only so many possible routes between two places. Yes, there may be dozens or hundreds—but the number is finite. As such, the computer in your phone can essentially examine each reasonable possibility separately, taking in data that many other people’s phones contribute to the broader network in order to factor traffic conditions into the computation. But the way the math is actually done is straightforward and predetermined.
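For readers who want to see what that kind of predetermined computation looks like, the sketch below runs the classic Dijkstra shortest-path algorithm over a small, made-up road graph whose edge weights stand in for current travel times. Real services such as Google Maps and Waze use far more sophisticated, proprietary routing engines; the point here is only that the procedure is fixed in advance and involves no learning.

```python
# Shortest path over a finite road graph: a classic, predetermined algorithm.
# The graph and travel times are invented; real routing engines differ greatly.

import heapq

# Hypothetical road graph: travel_times[a][b] = minutes from a to b right now.
travel_times = {
    "home": {"main_st": 4, "highway_on": 6},
    "main_st": {"downtown": 7, "highway_on": 3},
    "highway_on": {"highway_off": 9},
    "highway_off": {"downtown": 2},
    "downtown": {},
}

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm: systematically examines the finite set of options
    and returns the minimum-time route and its duration."""
    queue = [(0, start, [start])]
    settled = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node in settled:
            continue
        settled.add(node)
        if node == goal:
            return cost, path
        for neighbor, minutes in graph[node].items():
            if neighbor not in settled:
                heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return None

print(fastest_route(travel_times, "home", "downtown"))
# e.g. (11, ['home', 'main_st', 'downtown']) for the made-up graph above
```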

Why is this important? For one thing, it should make us less breathless about AI, helping us see it as one element in a broader computer revolution that began in the second half of the twentieth century and picked up steam in this century. It should also help us see what may or may not be realistic and desirable to regulate in the realm of future warfare.

The former vice chairman of the Joint Chiefs of Staff, Gen. Paul Selva, has recently argued that the United States could be about a decade away from having the capacity to build an autonomous robot that could decide when to shoot and whom to kill—though he also asserted that the United States had no plans actually to build such a creature. But if you think about it differently, in some ways we have already had autonomous killing machines for a generation. The cruise missile we discussed above has been deployed since the 1970s. It has instructions to fly a given route and then detonate its warhead without any human in the loop. And by the 1990s, we knew how to build things like “skeet” submunitions that could loiter over a battlefield and look for warm objects like tanks—using software to decide when to destroy them. So the killer machine was, in effect, already deciding for itself.

Even if General Selva’s Terminator is not built, robotics will in some cases likely be given greater authority to decide when to use force, since we have in effect already crossed that threshold. This highly fraught subject requires careful ethical and legal oversight, to be sure, and the associated risks are serious. Yet the speed at which military operations must occur will create incentives not to have a person in the decisionmaking loop in many tactical settings. Whatever the United States may prefer, restrictions on automated uses of violent force would also appear relatively difficult to negotiate (even if desirable), given likely opposition from Russia and perhaps from other nations, as well as huge problems with verification.

For example, small robots that can operate as swarms on land, in the air, or in the water may be given certain leeway to decide when to operate their lethal capabilities. By communicating with each other and processing information about the enemy in real time, they could concentrate attacks where defenses are weakest, in a form of combat that John Allen and Amir Husain call “hyperwar” because of its speed and intensity. Other types of swarms could attack parked aircraft; even small explosives, precisely detonated, could disable wings or engines or produce secondary and much larger explosions. Many countries will have the capacity to do such things in the coming twenty years. Even if the United States tries to avoid using such swarms for lethal and offensive purposes, it may elect to employ them as defensive shields (perhaps against North Korean artillery attack against Seoul) or as jamming aids to accompany penetrating aircraft. With UAVs that can fly ten hours and one hundred kilometers now costing only in the hundreds of thousands of dollars, and quadcopters with ranges of a kilometer, more or less, costing in the hundreds of dollars, the trendlines are clear—and the affordability of using many drones in an organized way is evident.

Where regulation may be possible, and ethically compelling, is in limiting the geographic and temporal space where weapons driven by AI or other complex algorithms can use lethal force. For example, the swarms noted above might only be enabled near a ship, or in the skies near the DMZ in Korea, or within a small distance of a military airfield. It may also be smart to ban letting machines decide when to kill people. It might be tempting to use facial recognition technology on future robots to have them hunt the next bin Laden, Baghdadi, or Soleimani in a huge Mideastern city. But the potential for mistakes, for hacking, and for many other malfunctions may be too great to allow this kind of thing. It probably also makes sense to ban the use of AI to attack the nuclear command and control infrastructure of a major nuclear power. Such attempts could give rise to “use them or lose them” fears in a future crisis and thereby increase the risks of nuclear war.
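One way to picture such geographic and temporal limits is as a hard software check that gates any lethal action, along the lines of the hypothetical sketch below. The coordinates, radius, and time window are invented for illustration and do not describe any fielded or proposed system.

```python
# Minimal sketch of a geographic and temporal "engagement zone" check of the
# kind discussed above. All values are illustrative placeholders.

from dataclasses import dataclass
from datetime import datetime, timezone
from math import radians, sin, cos, asin, sqrt

@dataclass
class EngagementZone:
    center_lat: float
    center_lon: float
    radius_km: float
    start: datetime
    end: datetime

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def engagement_permitted(zone: EngagementZone, lat: float, lon: float, now: datetime) -> bool:
    """Lethal action is allowed only inside the authorized area and time window."""
    inside_area = distance_km(zone.center_lat, zone.center_lon, lat, lon) <= zone.radius_km
    inside_window = zone.start <= now <= zone.end
    return inside_area and inside_window

# Hypothetical zone authorized for a 12-hour window.
zone = EngagementZone(37.9, 126.7, 10.0,
                      datetime(2030, 1, 1, 0, 0, tzinfo=timezone.utc),
                      datetime(2030, 1, 1, 12, 0, tzinfo=timezone.utc))
print(engagement_permitted(zone, 37.95, 126.72, datetime(2030, 1, 1, 6, 0, tzinfo=timezone.utc)))  # True: inside zone and window
print(engagement_permitted(zone, 38.50, 127.50, datetime(2030, 1, 1, 6, 0, tzinfo=timezone.utc)))  # False: outside zone
```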

We are in the early days of AI. We can’t yet begin to foresee where it’s going and what it may make possible in ten or twenty or thirty years. But we can work harder to understand what it actually is—and also think hard about how to put ethical boundaries on its future development and use. The future of warfare, for better or for worse, is literally at stake.
