Hacking Back, Signaling, and State-Society Relations – by Adam Segal

Over the last year, in the wake of continuous revelations of cyber attacks on companies, the media, think tanks, and civil society groups, there has been an increasingly vocal debate over whether private actors should be allowed to engage in “active defense,” or more offense-oriented forms of defense sometimes referred to as “hacking back.” In one survey, more than half of the respondents thought their companies should have the ability to hack back against their attackers. In another poll, over one-third admitted that they had already done so. After Google discovered in 2009 that it had been hacked, it reportedly gained access to a computer in Taiwan that it believed was one of the sources of the attack.

These ideas are not new—they date back at least a decade—but they are motivated both by a recognition of the capabilities of private actors and by a renewed sense of the limitations of national authorities in responding expeditiously and effectively to increasingly persistent, capable, and often state-backed adversaries. Firewalls, patching vulnerabilities, cyber hygiene, and other passive defenses are said to be “no longer sufficient” to address Advanced Persistent Threats (APTs). Active defenses can raise the cost to attackers as well as gather intelligence on them to prevent future attacks. In addition, the necessary talent and skills are likely to exist in far larger numbers in the private sector than in the federal government.

These calls for private actors to play a greater role also reflect the recognition that security, like other areas of Internet governance, requires a mix of public and private authorities. Stewart Baker, for example, argues that “busting the government monopoly” and allowing the private sector to conduct offensive operations would probably “increase the diversity, imagination, and effectiveness of the counter-hacking community.”

As many have noted, what actually constitutes “active defense” is vague and covers a range of actions. David Dittrich describes the spectrum as running from “less intrusive to more intrusive, less risky to more risky, less aggressive to more aggressive.” This spectrum includes local intelligence gathering, remote intelligence gathering, actively tracing the attacker, and actively attacking the attacker. Some have suggested embedding tracking beacons inside files that are at risk of being stolen, using disinformation, and planting fake data.
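To make the beacon idea concrete, here is a minimal sketch (in Python) of what a honeytoken-style tracking beacon might look like: a decoy file that, when opened, silently requests a uniquely tagged URL, telling the defender that this particular copy was stolen and viewed. Everything in the sketch is hypothetical; the collection server, file name, and decoy content are illustrative assumptions, not a description of any vendor’s actual tooling.

```python
# A minimal, hypothetical sketch of a honeytoken-style "tracking beacon."
# The decoy is an HTML file containing an invisible 1x1 image whose URL
# carries a unique token; if the file is exfiltrated and opened, the
# viewer's client fetches the image, and the hit shows up in the logs of
# the (hypothetical) collection server.

import uuid

# Hypothetical server the defender controls and monitors.
BEACON_HOST = "https://beacons.example-defender.com"

def make_decoy_document(path: str) -> str:
    """Write a decoy HTML file containing a unique beacon URL; return the token."""
    token = uuid.uuid4().hex  # unique per copy, so a hit identifies this exact file
    beacon_url = f"{BEACON_HOST}/hit/{token}"
    html = f"""<html>
  <head><title>Q3 Acquisition Targets (CONFIDENTIAL)</title></head>
  <body>
    <h1>Q3 Acquisition Targets</h1>
    <p>Placeholder decoy content; nothing here is real.</p>
    <!-- Invisible beacon: loading this image reports the viewer's IP and the time -->
    <img src="{beacon_url}" width="1" height="1" alt="">
  </body>
</html>"""
    with open(path, "w", encoding="utf-8") as f:
        f.write(html)
    return token

if __name__ == "__main__":
    token = make_decoy_document("acquisition_targets.html")
    print(f"Decoy written; watch the beacon server logs for /hit/{token}")
```

Note that this sits at the most passive end of Dittrich’s spectrum: the defender learns something only if the attacker opens the file, and nothing on the attacker’s machine is touched.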

Much of what some of the security vendors say they are doing sounds like defense. As Shawn Henry, president of CrowdStrike and former head of the FBI’s Cyber Division, put it: “We want to help companies do what they can, within their own firewall and within the confines of the law, to make them more resilient and secure.”

In addition, some have characterized the “naming and shaming” that Mandiant and the Citizen Lab have engaged in as a form of offensive action. The APT1 and Tracking GhostNet reports do not just expose techniques and means, but also, when possible, draw a connection between hackers and state supporters as a form of diplomatic pressure.

If hacking back is to become a legitimate activity, it will have to become a regulated space. In order to prevent vigilantism, the government would have to accredit and review actors and hold them accountable for their actions. Dittrich suggests that many of the rules and norms governing state behavior during war could also apply to the private sector. These would include necessity (engaging only in actions necessary to achieve a legitimate military objective), distinction (identifying lawful military objectives and avoiding civilians), and proportionality (prohibiting any use of force that exceeds what is needed to achieve the objective).

Drawing on the historical experience of privateers, Michael Tanji has suggested four principles. A principle of self-help would insist that private actors had already taken reasonable defensive measures before responding to an attack. A principle of proportionality would limit actors to responses equal to or lesser than the actions taken by the attacker. Nations would have to demonstrate actual control over private actors under a principle of sovereign control. And all private actors would have to be competent, and certified as such, under a principle of qualification.

The arguments against active defense fall into several categories. First, there are doubts that a regulatory regime could ever adequately define and limit hacking back. If, for example, active defense can be legally justified by the claim that company A may retrieve its digital property now sitting on computer B, would the same right extend to copyright holders who want to hack into the computers of anyone who has pirated their material?

Second, Dittrich argues against the efficacy of these attacks. As he notes, truly determined attackers are unlikely to cease current attacks or be deterred from future ones. Hacking back does not eliminate the problem but instead escalates the conflict. Third, there is the widespread concern that mistakes are bound to happen, and that private actors will either damage third parties or cause inadvertent escalation. The end point is a very Hobbesian world, in which private actors and nation-states pursue a range of selfish interests and cyberspace is a never-ending state of war.

China’s public response to the Mandiant APT1 report illustrates another difficulty with private actors and active defense, even though the report is not part of a traditional active defense strategy. States are bound to interpret these actions both through the filter of public-private relations in their own societies and through their assumptions about how the private sector and the government interact in other nations.

Besides denying the attacks, claiming that hacking is illegal in China, and proclaiming China’s own status as a victim, Chinese press reports describe the report, along with stories of the hacking of the New York Times, Wall Street Journal, and Washington Post, as part of a larger U.S. strategy serving at least three objectives. First, claims about China-based hackers are meant to distract attention from the United States’ own development of offensive cyber weapons and militarization of cyberspace, as well as to justify increased DoD budgets for cyber operations. Second, they are part of a larger economic competition in the information and communication technology sectors. U.S. companies are losing market share to Huawei and ZTE and so, the argument goes, are smearing their competitors with charges of hacking and espionage. In this telling, the House Permanent Select Committee on Intelligence report on these two companies is a direct outcome of that competition. Third, the “Chinese hacking threat” is one of a series of “China threat” arguments, including military, economic, and energy threats, designed to destabilize Sino-U.S. relations and increase suspicion of China among its neighbors and trading partners.

The Global Times scoffed at the idea that the release of the Obama Administration’s Strategy on Mitigating the Theft of U.S. Trade Secrets the day after the Mandiant report was some kind of “coincidence.” In this the Global Times is probably correct: it seems highly unlikely that there was no interaction between Mandiant and the United States government before the release of the strategy. But many in the West are likely to characterize that interaction as “coordination,” not direction. The coordination would look more like Mandiant alerting the White House that it was writing the report, and the White House seeing it as a politically useful springboard for a larger diplomatic push, rather than the White House telling Mandiant to write and release the report. Of course, the same process operates in the opposite direction: U.S. analysts make assumptions about the relationship between the Chinese government and technology companies, as well as between the state and hacking groups, based on China’s tight control of the Internet.

This adds another degree of complexity, another opportunity for miscommunication and escalation. Even if a country signaled what it thought was a clear distinction between public and private authorities, there is no guarantee that the other side would interpret it as intended. In fact, the higher the degree of distrust between the two sides, the less likely they may be to accept that there is any difference between private and state attacks.

About Adam Segal

Adam Segal is the Maurice R. Greenberg Senior Fellow for China Studies at the Council on Foreign Relations (CFR). An expert on security issues, technology development, and Chinese domestic and foreign policy, Dr. Segal currently leads the Cyberconflict and Cybersecurity Initiative. His recent book Advantage: How American Innovation Can Overcome the Asian Challenge (W.W. Norton, 2011) looks at the technological rise of Asia. His work has appeared in the Financial Times, The Economist, Foreign Policy, The Wall Street Journal, and Foreign Affairs, among others. He currently writes for the blog “Asia Unbound.” Dr. Segal has a BA and PhD in government from Cornell University, and an MA in international relations from the Fletcher School of Law and Diplomacy, Tufts University.
