Fear of a “cyber Pearl Harbor” first appeared in the 1990s, and for the past two decades, policymakers have worried that hackers could blow up oil pipelines, contaminate the water supply, open floodgates and send airplanes on collision courses by hacking air traffic control systems. In 2012, then-U.S. Secretary of Defense Leon Panetta warned that hackers could “shut down the power grid across large parts of the country.”
None of these catastrophic scenarios has occurred, but they certainly cannot be ruled out. At a more modest level, hackers were able to destroy a blast furnace at a German steel mill last year. So the security question is straightforward: Can such destructive actions be deterred?
It is sometimes said that deterrence is not an effective strategy in cyberspace, because of the difficulties in attributing the source of an attack and because of the large and diverse array of state and nonstate actors involved. We are often not sure whose assets we can hold at risk and for how long.
Attribution is, indeed, a serious problem. How can you retaliate when there is no return address? Nuclear attribution is not perfect, but there are only nine states with nuclear weapons; the isotopic identifiers of their nuclear materials are relatively well known; and nonstate actors face high entry barriers.
None of this is true in cyberspace where a weapon can consist of a few lines of code that can be invented (or purchased on the so-called dark web) by any number of state or nonstate actors. A sophisticated attacker can hide the point of origin behind the false flags of several remote servers.
While forensics can handle many “hops” among servers, it often takes time. For example, an attack in 2014 in which 76 million client addresses were stolen from JPMorgan Chase was widely attributed to Russia. By 2015, however, the U.S. Department of Justice identified the perpetrators as a sophisticated criminal gang led by two Israelis and an American citizen who lives in Moscow and Tel Aviv.
Attribution, however, is a matter of degree. Despite the dangers of false flags and the difficulty of obtaining prompt, high-quality attribution that would stand up in a court of law, there is often enough attribution to enable deterrence.
For example, in the 2014 attack on Sony Pictures, the United States initially tried to avoid full disclosure of the means by which it attributed the attack to North Korea, and encountered widespread skepticism as a result. Within weeks, a press leak revealed that the U.S. had access to North Korean networks. Skepticism diminished, but at the cost of revealing a sensitive source of intelligence.
Prompt, high-quality attribution is often difficult and costly, but not impossible. Not only are governments improving their capabilities, but many private-sector companies are entering the game, and their participation reduces the costs to governments of having to disclose sensitive sources. Many situations are matters of degree, and as technology improves the forensics of attribution, the strength of deterrence may increase.
Moreover, analysts should not limit themselves to the classic instruments of punishment and denial as they assess cyberdeterrence. Attention should also be paid to deterrence by economic entanglement and by norms.
Economic entanglement can alter the cost-benefit calculation of a major state like China, where the blowback effects of an attack on, say, the U.S. power grid could hurt the Chinese economy. Entanglement probably has little effect on a state like North Korea, which is weakly linked to the global economy. It is not clear how much entanglement affects nonstate actors. Some may be like parasites that suffer if they kill their host, but others may be indifferent to such effects.
As for norms, major states have agreed that cyberwar will be limited by the law of armed conflict, which requires discrimination between military and civilian targets and proportionality in terms of consequences. Last July, the United Nations Group of Governmental Experts recommended excluding civilian targets from cyberattacks, and that norm was endorsed at last month’s G-20 summit.
It has been suggested that one reason why cyberweapons have not been used more in war thus far stems precisely from uncertainty about the effects on civilian targets and unpredictable consequences. Such norms may have deterred the use of cyberweapons in U.S. actions against Iraqi and Libyan air defenses. And the use of cyberinstruments in Russia’s “hybrid” wars in Georgia and Ukraine has been relatively limited.
The relationship among the variables in cyberdeterrence is a dynamic one that will be affected by technology and learning, with innovation occurring at a faster pace than was true of nuclear weapons. For example, better attribution forensics may enhance the role of punishment; and better defenses through encryption may increase deterrence by denial. As a result, the current advantage of offense over defense may change over time.
Cyberlearning is also important. As states and organizations come to understand better the importance of the Internet to their economic well-being, cost-benefit calculations of the utility of cyber warfare may change, just as learning over time altered the understanding of the costs of nuclear warfare.
Unlike in the nuclear age, one size does not fit all when it comes to deterrence in the cyber era. Or are we prisoners of an overly simple image of the past? After all, when nuclear punishment seemed too draconian to be credible, the U.S. adopted a conventional flexible response to add an element of denial in its effort to deter a Soviet invasion of Western Europe. And while the U.S. never agreed to a formal norm of “no first use of nuclear weapons,” eventually such a taboo evolved, at least among the major states. Deterrence in the cyber era may not be what it used to be, but maybe it never was.
Joseph S. Nye is a professor at Harvard University and the author of “Is the American Century Over?” THE DAILY STAR publishes this commentary in collaboration with Project Syndicate © (www.project-syndicate.org).