One can certainly understand the frustration of private companies that are repeatedly hit by cyberattacks yet seem to have little ability to keep the intruders out or to get overstretched law enforcement agencies interested in investigating. But the idea of changing the law to authorize “hacking back” is a dangerous one, and it is unlikely to fix the problem. In fact, it would likely make the problem worse.

Is anyone actually hacking back?

Some preliminaries: I’ve heard a lot of third-hand assertions that companies are beginning to hack back against their attackers. But these assertions never come from a company that actually did it, and they are rarely made by anyone in a position to really know. Nor are there ever any specifics about exactly what a retaliating company did. Menn’s article is cut from the same cloth: it opens with a general assertion that “an increasing number of U.S. companies are taking retaliatory action” in response to intrusions, but then it doesn’t offer a single example or anecdote. So as far as we actually know, the discussion remains largely hypothetical.

Active response vs. hacking back

It’s important to draw some clear lines. Some of the types of “active response” that Menn and Stewart discuss should be uncontroversial, because they don’t actually involve hacking back. Using honeypots and deception within your own network appears perfectly legal and is unlikely to hurt any innocent bystanders. A beacon or digital dye-pack is a bit more controversial but, depending on the specifics of how it works, could probably be deployed legally. Things get dicey, though, when one talks about damaging a bad guy’s computer. Everyone seems to agree that this would violate the Computer Fraud and Abuse Act (CFAA) as presently written, so legality under current law is not the issue for present purposes. The question is whether the CFAA should be amended to make such counterattacks legal. Every indication I can see points to “no.”
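To make the in-network end of that spectrum concrete, here is a minimal honeypot sketch in Python. It is purely illustrative, not something from Menn’s or Stewart’s reporting, and the service port, decoy banner, and log file name are all assumptions. The key point is that everything it does happens on the defender’s own machine: it listens, logs whoever knocks, and hands back a fake banner.

```python
# Minimal honeypot sketch: a decoy service inside your own network.
# It never reaches out to anyone else's systems, which is why this
# kind of "active response" is generally uncontroversial.
import logging
import socketserver

logging.basicConfig(
    filename="honeypot.log",          # illustrative log destination
    level=logging.INFO,
    format="%(asctime)s %(message)s",
)

class HoneypotHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Record who knocked, present a fake FTP banner, then hang up.
        src_ip, src_port = self.client_address
        logging.info("connection attempt from %s:%d", src_ip, src_port)
        self.request.sendall(b"220 ftp.internal.example ready\r\n")

if __name__ == "__main__":
    # Port 2121 stands in for any tempting-but-unused service port.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 2121), HoneypotHandler) as server:
        server.serve_forever()
```

A beacon or dye-pack is where things begin to edge outward: rather than logging locally like this, marked files would report back from wherever they end up, which is why it sits in the more-controversial middle ground.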

Oops! Sorry I destroyed your new computer, Mr. Smith

First, there’s the difficulty of “attribution.” If the government, with all its technical and legal resources, has difficulty identifying sophisticated intruders, how is the private sector going to identify anyone but low-grade attackers (whom it should be able to keep out through basic defensive measures)? And even when the attackers can be identified, there’s still a very high chance of collateral damage to innocent bystanders, because attackers can hide behind, and launch their attacks from, innocent servers. Yes, the former DoJ’ers’ invocation of “felony murder” is a bit histrionic, but the point is legitimate: do we really want private companies “shooting up” the network of a university, a corporation, or Ozzie and Harriet because a bad guy launched his attack from there? Talk about Stand Your Ground run amok. And think about this: if the US and Israel, widely seen as responsible for Stuxnet, couldn’t limit the malware they unleashed to Iranian networks, even though those networks were air-gapped, how can a private company be sure that its counterattack won’t spill over onto innocent networks?

War Games, but without Matthew Broderick

It’s not just that innocent bystanders get hurt; there are serious national security implications, too. Say, for example, an American company is losing terabytes of highly valuable technical data to a hacker who is exfiltrating the data remotely. The company traces the intrusion back to a research institute in China. It’s pretty confident it has the goods on ‘em, so it launches a counterattack on the bad guys. Well, it turns out that research institute is a wing of the People’s Liberation Army. And when the PLA traces the counterattack back, all it can see is that it’s coming from the United States. What would the PLA do then? At the very least, you have a serious international incident. At worst, you could have the start of a tit-for-tat campaign of cyberattacks that escalates into an actual cyberwar. And if that’s not bad enough, suppose that Chinese research institute actually had nothing to do with the attacks on the American company. Instead, it was the ______ [take your pick: Russians, Iranians, Syrians…]. Now you’ve really got a problem.

In another scenario, the bad guys launch their attack from a computer in the UK, and the victim company in the US fires back at the UK computer. Would our “special relationship” with the UK mean that the British government would just say “no problem, old chap”? Unlikely. If you’re going to change the CFAA, you’re going to have to change a whole lot of other countries’ laws, too, or you’re still going to have a heap of legal problems.

All risk, no reward

Even if everything goes exactly according to plan (the source of the attacks is accurately identified, no innocent computers are harmed by the counterattack, and no foreign countries are provoked by the intrusion on their sovereignty into launching a counter-counterattack or issuing arrest warrants for the company’s CEO and board members), what’s to stop the attackers from simply launching the same attack from another computer? It’s a maxim of cyber-conflict that the cost of entry into this game is extremely low. Cyber weapons are a dime a dozen (if not free). Knock out one computer, and there are millions more for the adversary to choose from. So ultimately, what good has been achieved after all that risk was incurred? Painfully little. Unless you can take the people out of the equation, knocking out their computers usually won’t do much other than buy you a temporary respite.

Government oversight?

Stewart suggests that “proper supervision” (from the government?) could take care of such problems. But how would that work in practice? If the CFAA already splits the courts on issues like what constitutes “authorized access” or “damage” or “loss” within the meaning of the Act, how would Congress write meaningful statutory guidelines for when and how victims of computer attacks could launch counterattacks? And how would a government agency supervise this? When privateers were authorized to act, they were, for all intents and purposes, deputized to go out and attack enemy ships without supervision. Supervising private-sector cyberattacks would be quite different: presumably one would want the government to approve not only target lists but also the means of attack, in order to minimize collateral damage and avoid setting off a cyberwar. But the government hasn’t yet figured out how to do this in any effective or timely way for its own cyberattacks (or counterattacks). Why should we expect it to do any better when reviewing counterattacks by private companies?

So hack-backs have all the popular appeal of a Dirty Harry movie. But ultimately, they’re not going to do much to stem the cybercrime wave. And they’re likely to make things a whole lot worse.