The vulnerability of computer networks to hacking grows more troubling every year. No network is safe, and hacking has evolved from an obscure hobby to a major national security concern. Cybercrime has cost consumers and banks billions of dollars. Yet few cyberspies or cybercriminals have been caught and punished. Law enforcement is overwhelmed both by the number of attacks and by the technical unfamiliarity of the crimes.

Can the victims of hacking take more action to protect themselves? Can they hack back and mete out their own justice? The Computer Fraud and Abuse Act (CFAA) has traditionally been seen as making most forms of counterhacking unlawful. But some lawyers have recently questioned this view. Some of the most interesting exchanges on the legality of hacking back have occurred as dueling posts on the Volokh Conspiracy. In the interest of making the exchanges conveniently available, they are collected here in a single document.

The debaters are:

  • Stewart Baker, a former official at the National Security Agency and the Department of Homeland Security, and a partner at Steptoe & Johnson with a large cybersecurity practice. Stewart Baker makes the policy case for counterhacking and challenges the traditional view of what remedies are authorized by the language of the CFAA.
  • Orin Kerr, Fred C. Stevenson Research Professor of Law at the George Washington University Law School, a former computer crimes prosecutor, and one of the most respected computer crime scholars. Orin Kerr defends the traditional view of the Act against both Stewart Baker and Eugene Volokh.
  • Eugene Volokh, Gary T. Schwartz Professor of Law at UCLA School of Law, founder of the Volokh Conspiracy, and a sophisticated technology lawyer, presents a challenge grounded in common law understandings of trespass and tort.

Baker-Kerr

RATs and Poison: The Policy Side of Counterhacking

Stewart Baker

Good news for network security: the tools attackers use to control compromised computers are full of security holes. Undergrad students interning for Matasano Security have reverse-engineered the Remote Access Tools (RATs) that attackers use to gain control of compromised machines.

RATs, which can conduct keylogging, screen and camera capture, file management, code execution, and password-sniffing, essentially give the attacker a hook into both the infected machine and the targeted organization.

This is great news for cybersecurity. It opens new opportunities for attribution of computer attacks, along lines I’ve suggested before: “The same human flaws that expose our networks to attack will compromise our attackers’ anonymity.”

In this case, the possibility of a true counterhack is opened up. The flaws identified by the Matasano interns, Hertz and Denbow, could allow defenders to decrypt stolen documents and even to break into the attacker’s command and control link – while the attacker is still online.

It’s only a matter of time before counterhacks become possible. The real question is whether they’ll ever become legal. Both the reporter and the security researcher agree that “legally, organizations obviously can’t hack back at the attacker.”

I believe they are wrong on the law, but first let’s explore the policy question.

Should victims be able to poison attackers’ RATs and then use the compromised RAT against their attacker?

It’s obvious to me that somebody should be able to do this. And, indeed, it seems nearly certain that somebody in the US government — using some combination of law enforcement, intelligence, counterintelligence, and covert action authorities — can do this. (I note in passing, though, that there may be no one below the President who has all these authorities, so that as a practical matter RAT poisoning may not happen without years of delay and a convulsive turf fight. That’s embarrassing, but beside the point, at least today.)

There are drawbacks to having the government do the job. It is likely that counterhacking will work best if the attacker is actually online, when the defenders can stake out the victim’s system, give the attacker bad files, monitor the command and control machine, and copy, corrupt, or modify exfiltrated material. Defenders may have to swing into action with little warning.

Who will do this? Put aside the turf fight; does NSA, the FBI, or the CIA have enough technically savvy counterhackers to stake out the networks of the Fortune 500, waiting for the bad guys to show up?

Even if they do, who wants them there? Privacy campaigners will not approve of the idea of giving the government that kind of access to private networks, even networks that are under attack. For that matter, businesses with sensitive data won’t much like the stark choice of either letting foreign governments steal it all or giving the US government wide access to their networks.

From a policy perspective, surely everyone would be happier if businesses could hire their own network defenders to do battle with attackers. This would greatly reinforce the thin ranks of government investigators. It would make wide-ranging government access to private networks less necessary. And busting the government monopoly on active defense would probably increase the diversity, imagination, and effectiveness of the counterhacking community.

But there is always the pesky question of vigilantism…

First, as I’ve mentioned previously, allowing private counterhacking does not mean reverting to a Hobbesian war of all against all. Government sets rules and disciplines violators, just as it does with other privatized forms of law enforcement, from the securities industry’s FINRA to private investigators.

Second, the “vigilantism” claim depends heavily on sleight of hand. Those against the idea call it “hacking back,” with the heavy implication that the defenders will blindly fire malware at whoever touches their network, laying indiscriminate waste to large swaths of the Internet. For the record, I’m against that kind of hacking back too. But RAT poison makes possible a kind of counterhacking that is far more tailored and prudent. Indeed, with such a tool, trashing the attacker’s system is dumb; it is far more valuable as an intelligence tool than for any other purpose.

Of course, the defenders will be collecting information, even if they aren’t trashing machines. And gathering information from someone else’s computer certainly raises moral and legal questions. So let’s look at the computers that RAT poisoning might allow investigators to access.

First, and most exciting, this research could allow us to short-circuit some of the cutouts that attackers use to protect themselves. Admittedly, this is beyond my technical capabilities, but it seems highly unlikely to me that an attacker can use a RAT effectively without a real-time connection from his machine to the compromised network. Sure, the attacker can run his commands through onion routers and cutout controllers. But at the end of all the hops, the attacker is still typing here and causing changes there. If the software he’s using can be compromised, then it may also be possible to inject arbitrary code into his machine and thus compromise both ends of the attacker’s communications. That’s the Holy Grail of attribution, of course.

Is there a policy problem with allowing private investigators to compromise the attacker’s machine for the purpose of gathering attribution information? Give me a break. Surely not even today’s ACLU could muster more than a flicker of concern for a thief’s right to keep his victim from recovering stolen data.

The harder question comes when the attacker is using a cutout — an intermediate command and control computer that actually belongs to someone else. In theory, gathering information on the intermediate computer intrudes on the privacy of the true owner. But, assuming that he’s not a party to the crime, he has already lost control of his computer and his privacy, since the attacker is already using it freely. What additional harm does the owner suffer if the victim gathers information on his already-compromised machine about the person who attacked them both? Indeed, an intermediate command and control machine is likely to hold evidence about hundreds of other compromised networks. Most of those victims don’t know they’ve been compromised, but their records are easy to recover from the intermediate machine once it has been accessed. Surely the social value of identifying and alerting all those victims outweighs the already attenuated privacy interest of the true owner.

In short, there’s a strong policy case for letting victims of cybercrime use tools like this to counterhack their attackers. If the law forbids it, then to paraphrase Mr. Bumble, “the law is a ass, a idiot,” and Congress should change it.

But I don’t think the law really does prohibit counterhacking of this kind, for reasons I’ll offer in a later post.

RATs and Poison Part II: The Legal Case for Counterhacking

Stewart Baker

In an earlier post, I made the policy case for counterhacking, and specifically for exploiting security weaknesses in the Remote Access Tools, or RATs, that hackers use to exploit computer networks.

There are three good reasons to poison an attacker’s RAT:

  • We can make sure the RAT doesn’t work or that it actually tells us what the attackers are doing on our networks;
  • We gain access to the command and control machines that serve as waystations that let attackers download stolen data or upload new malware; and
  • If we’re very lucky and very good, we can use the poisoned RAT to compromise the attacker’s home machine, directly identifying him and his organization.

More problematic is the legal case for counterhacking, due to long-standing opposition from the Justice Department’s Computer Crime and Intellectual Property Section, or CCIPS. Here’s what CCIPS says in its Justice Department manual on computer crime:

Although it may be tempting to do so (especially if the attack is ongoing), the company should not take any offensive measures on its own, such as “hacking back” into the attacker’s computer—even if such measures could in theory be characterized as “defensive.” Doing so may be illegal, regardless of the motive. Further, as most attacks are launched from compromised systems of unwitting third parties, “hacking back” can damage the system of another innocent party.

This is a mix of law and policy. I’ve already explained why I find the Justice Department’s policy objections unpersuasive.

That leaves the law. Does the CFAA prohibit counterhacking? The words “may be illegal” and “should not” are a clue that the law is at best ambiguous.

To oversimplify a bit, violations of the CFAA depend on “authorization.” If you have authorization, it’s nearly impossible to violate the CFAA, no matter what you do to a computer. If you don’t, it’s nearly impossible to avoid violating the CFAA.

But the CFAA doesn’t define “authorization.” It’s clear enough that things I do on my own computer or network are authorized. That means that the first step in poisoning a RAT is lawful. You are “authorized” under the CFAA to modify any code on your network, even if it was installed by a hacker. (For purposes of this discussion we’ll put aside copyright issues; it’s unlikely in any event that a hacker could enforce intellectual property rights against his victim.)

The more difficult question is whether you’re “authorized” to hack into the attacker’s machine to extract information about him and to trace your files. As far as I know, that question has never been litigated, and Congress’s silence on the meaning of “authorization” allows both sides to make very different arguments. The attacker might say, “I have title to this computer; no one else has a right to look at its contents. Therefore you accessed it without authorization.” And the victim could say, “Are you kidding? It may be your computer but it’s my data, and I have a right to follow and retrieve stolen data wherever the thief takes it. Your computer is both a criminal tool and evidence of your crime, so any authorization conveyed by your title must take a back seat to mine.”

In a civil suit, the lack of definition would make both of those arguments plausible. Maybe “authorization” under the CFAA is determined solely by title; and maybe it incorporates all the constraints that law and policy put on property rights in other contexts. Personally, I dislike statutory interpretations that fly in the face of good policy, so I think the counterhacker wins that argument.

No matter; computer hackers won’t be bringing many lawsuits against their victims. The real question is whether victims can be criminally prosecuted for breaking into their attacker’s machine.

And here the answer is surely not.

The ambiguity of the statute makes a successful prosecution nearly impossible; deeply ambiguous criminal laws like this are construed in favor of the defendant. See, e.g., McBoyle v. United States, 283 U.S. 25, 27 (1931) (“[I]t is reasonable that a fair warning should be given to the world, in language that the common world will understand, of what the law intends to do if a certain line is passed. To make the warning fair, so far as possible, the line should be clear.”) (Holmes, J.).

The same analysis applies even to the hardest case, where victims use a compromised RAT to access command and control machines that turn out to be owned by an innocent third party. An innocent third party is a more appealing witness, but his machine was already compromised by hackers before the counterhacking victim came along, and it was being used as an instrumentality of crime, sufficient in some states to justify its forfeiture. It remains true that the counterhacker is pursuing his own property.

Finally, when he begins his counterhack, the victim does not know whether the intermediate machine is controlled by an attacker or by an innocent third party. Why should the law presume that it is owned by an innocent party — or force the victim to make that presumption, on pain of criminal liability? (There’s room for empirical research here; while a few years ago hackers seemed to favor compromising third-party machines for command and control, the Luckycat study suggests that some attackers now prefer to use machines and domains that they control. As the latter approach grows more common, a presumption that intermediate machines are owned by innocent third parties will grow even more artificial.)

All told, it seems reasonable to let victims counterhack a command and control machine that is exfiltrating information from the victim’s network, at least enough to determine who is in control of the machine, to identify other victims being harmed by the machine, and to follow the attacker back to his origin (or at least his next hop) if the intermediate machine is simply another victim. Requiring the victim not to counterhack if there’s uncertainty about the innocence of the machine’s owner simply gives an immunity to attackers.

The balance of equities thus seems to me to favor recognizing the victim’s authorization to conduct at least limited surveillance of a machine that is, after all, directly involved in a violation of the victim’s rights. If “authorization” under the CFAA really boils down to a balancing of moral and social rights, and nothing in the law refutes that view, then the counterhacker has at least enough moral and social right on his side to make a criminal prosecution problematic — unless he damages the third party’s machine, in which case all bets are off.

The Legal Case Against Hack-Back: A Response to Stewart Baker

Orin Kerr

Stewart says his analysis is “surely” right. I think it’s obviously wrong. Here’s why.

The CFAA is a computer trespass statute. It prohibits accessing another person’s computer “without authorization,” just as trespass laws prohibit walking onto someone else’s land without their consent. As with a traditional trespass statute, it is the owner/operator of the property who controls authorization. There is lots of disagreement about how computer owner/operators can create rights on their machines that the law will enforce, but everyone agrees that hacking into someone else’s machine is the quintessential example of the kind of conduct prohibited by the statute.

Contrary to Stewart’s claim, there is no genuine ambiguity over whether the statute protects the rights of computer owners or data owners. The statutory language expressly prohibits “intentionally access[ing] a computer without authorization” (emphasis added). It protects access to computers, not access to stolen data. The rule here is the same rule that is used in real property law: The owner/operator of the property controls who has access to it. The fact that your neighbor borrowed your baseball glove and you want it back doesn’t give you a right to break into everything your neighbor owns on the theory that you can authorize yourself to go anywhere to get your glove back. The same goes for computers.

Even if Stewart’s I-like-it-and-therefore-it-is-the-law argument were valid, I think the results it would produce would be terrible. For every one hypothetical you can devise in which such hacking back might seem like a good thing, you can come up with hundreds of examples in which it wouldn’t be. For example, wouldn’t Stewart’s theory allow copyright holders to hack into the computers of anyone suspected of having any infringing materials on their computers? That would be bad. More broadly, Stewart’s theory appears to have few limits. His test seems to boil down to good faith: As long as someone believes that they were a victim of a computer intrusion and has a good-faith belief that they can help figure out who did this or minimize the loss of the intrusion by hacking back, the hacking back is authorized. Given the well-known difficulty of locating the source of intrusions, that’s not a power that we want to give to every person in the US who happens to own or control a computer.

Another problem with Stewart’s theory is that it would have the bizarre effect of allowing hacking victims to declare that the people who hacked into their machines can’t access their own computers. That is, if A hacks into B’s machine, B just has to announce that A now can’t use A’s own machine. If A uses his own computer, that is “without authorization” from B and therefore a crime. It’s a bizarre result, and even more bizarre given that Stewart uses the rule of lenity to justify it.

Baker Replies to Kerr

Stewart Baker

Orin Kerr and I agree that “authorization” is the central, and undefined, key to criminal liability under the CFAA. In Orin’s view, “authorization” can be determined by asking two questions: First, does the CFAA protect computers or data? And, second, who controls a computer, the data owner or the computer owner?

I believe the right answer to each question is “both.” The CFAA can and should protect both computers and data stored on computers. Similarly, more than one person can have rights to data on a computer. Orin believes that the CFAA forces a choice. If it protects computers it can’t protect data. You either have full authorization or you have none.

If anything, the statute refutes that argument. The only textual clue to what the statute means by “authorized” is found in a section that imposes liability on users who exceed their authorized access to a computer; that term is defined as follows: “[T]he term ‘exceeds authorized access’ means to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter.” Put another way, you exceed authorized access if you obtain or alter information you’re not entitled to obtain or alter.

This definition undercuts both of Orin’s assumptions about authority. And it treats “authorized” and “entitled” as more or less synonymous, which isn’t exactly consistent with the idea that authorization is all-or-nothing. If I’m attending a demonstration on the Mall, and the Park Police tell me to move on, I’m likely to say, “Sorry, but I’m entitled to be here.” As I am. But that doesn’t mean that I can then tell them to move on. They’re entitled to be there too. And what if I try to enforce my edict by taking a swing at one of them? He might say, “You’re entitled to be here, but you’re not entitled to do that here.” Quite right, too; it’s jail for me. It turns out my entitlement was real, but it was neither exclusive nor unlimited.

So too with computers under the CFAA. I may be entitled to retrieve my stolen data from a machine without being entitled to take the machine to a pawn shop and sell it, or to tell the innocent owner what he can and cannot do with it.

And what about policy? Which reading of the statute produces better results?

To understand the policy consequences of the choice, let’s begin with a reminder of our strategic situation. Right now, every computer and network in the country is vulnerable to intrusion by authoritarian foreign governments if not criminals.

The intruders have one clear vulnerability: they collect the data they steal on command-and-control machines, which may in some cases belong to other innocent victims. Victims could gain access to these machines, could render the stolen information worthless, could gather clues about the attackers, and could even identify hundreds of other victims who probably don’t yet know they’ve been compromised. That would be a very good thing.

In Orin’s world, though, it’s illegal. Under his reading of the law, the hundreds of victims go unnotified, the evidence goes ungathered, the stolen data goes, well, to China, until law enforcement gets around to the cyber equivalent of stolen-bicycle paperwork.

And what of all those bad policy outcomes that Orin conjures – the crazed vigilantes and the RIAA rummaging in everyone’s computers? The answer is that we, or at least the courts, don’t have to recognize their authority to do that. The courts don’t have to treat “good faith” as creating a counterhacking entitlement; they could as easily insist on a higher standard, such as probable cause. They could recognize the counterhacker’s authority to gather evidence but not to harm innocent third parties, just as they distinguish today between demonstrators who are entitled to throw insults but not punches on park property. They could reject the notion that the copying of 99-cent music files justifies the same response as a campaign to compromise every network in the country. They could distinguish, in short, between baby and bathwater.

It’s true that my definition of authorization is more complicated than Orin’s, that it requires more line-drawing. But so does life. Orin’s alternative is as simple — and as unjust — as applying the murder laws equally to serial killers and to homeowners who shoot home invaders. Nothing in law or policy requires that we adopt such a reading.

More on Hacking Back: Kerr Replies to Baker

Orin Kerr

Stewart makes a textual argument and a policy argument. I find both extremely weak.

Stewart’s first argument is that it’s possible to read the statute as giving authorization rights to people who have rights in data rather than rights in computers because the statute doesn’t textually distinguish between computers and underlying data.

If you read the whole statute, though, that’s plainly wrong. The statute repeatedly and consistently distinguishes between computers and data. The elements of the statute dealing with rights in computers are covered by the basic unauthorized access concept common to most of the different crimes listed in 18 U.S.C. 1030(a). In contrast, the elements dealing with data appear in the additional requirements Congress imposed for the particular offenses listed in 1030(a). It’s one of the most basic divisions in the statute.

As I read the statute, Congress was pretty careful to distinguish rights to computers (the trespass into the machine, covered by the unauthorized access prohibition) from rights to data (the extra elements of 1030(a) for the different crimes that Congress created). Given that, I don’t think it makes textual sense to read the unauthorized access prohibition as governed by rights in data. The statute is just not as mystifying and unclear as Stewart claims. (Also, what does it mean to “own” data? If someone copies this blog post and saves it on their computer without my consent, can I hack into the computer because I “own” that data? Concepts such as “owning” data and when data becomes “stolen” are notoriously difficult to work with — indeed, 18 U.S.C. 1030 was passed so that such questions didn’t need to be asked. It seems puzzling to reintroduce them sub silentio here.)

Second is Stewart’s policy argument: Justice demands this reading of the statute because the Chinese are invading our computers and we need to stop them. In his post, Stewart suggests that a proper jurisprudential sophistication frees judges to do whatever they want with the statute to deal with the Chinese. With their newfound sense of sophistication, judges should go forth and devise a set of principles for interpreting “authorization” by which it is not a crime for big US companies to go after their stolen data when the Chinese take it, while it remains illegal for people to hack back when they’re not very good at it, when the RIAA wants to do it, or when there isn’t really a good reason for it. Stewart doesn’t actually offer any legal basis for that distinction. He doesn’t have an argument for where the line should be or even what principles should be used to interpret authorization. He just wants judges to go figure this stuff out somehow.

If someone needs to figure this stuff out, though, it’s a job for Congress instead of the courts. Stewart isn’t just reading the statute. He’s asking judges to write a new statute that he thinks would be better than the one we now have. Maybe Congress should consider the kind of exception Stewart wants. It’s hard to tell, as Stewart hasn’t told us what the new statute should look like. (Instead, he has only told us the result the statute should reach on one case.) But as long as we’re only talking about what the statute presently means — that is, what Congress passed already, and what courts have to interpret — I don’t see a plausible way to read “authorization” to get to the result Stewart wants.

If I were Stewart, I would try to rely on the necessity defense instead of creative readings of authorization to get where he wants to go. Stewart’s argument is best made not as the claim that this isn’t unauthorized, but that it is unauthorized conduct justified by the specific circumstances he has in mind. That’s an argument for the affirmative defense of necessity. Necessity is a nice and vague exception, which is helpful for Stewart’s purposes. It’s a controversial exception as a matter of federal law, but at least there’s some support for it. And it seems to be really what Stewart has in mind. In my view, it would be better to try to make that argument directly rather than by appeals to justice or interpretations of “authorization.”

Baker’s Last Response

Stewart Baker

Now the debate with Orin is actually getting somewhere. Sort of. Here’s a scorecard:

1. Does authorization depend exclusively on ownership?

Orin’s latest post does a good job of showing that the CFAA often draws a coherent distinction between rights in data and rights in a computer, and that rights in the computer are the statute’s principal focus. I don’t disagree.

Where we differ is how much that matters. Orin seems convinced that this distinction makes rights in data irrelevant to the question of what constitutes authorized access to a computer. He doesn’t really offer a reason for treating it as irrelevant. He just assumes it must be, probably because he also assumes that authorization is an all-or-nothing concept, so that if the owner has authorization no one else has any, and vice versa.

But Orin’s assumption has no basis in the statute that I can see. As my last response says, that’s like assuming that because a trespass statute protects the owners of land, everyone else must be punished as a trespasser, no matter what other rights they have to enter the property. That would make felons of rescuers, people in hot pursuit of thieves, easement holders, and government officials. You could come to that conclusion if that’s what the law unequivocally said, but in this case the law only makes felons of people who are not authorized (or not entitled) to access the computer.

So why would we ignore other claims of entitlement – especially when ignoring those claims makes a felon of someone performing an act with undeniable social value?

Orin’s reluctance to defend his assumption is striking. Maybe he’s got a good response; but he hasn’t offered it yet.

2. Should policy influence the interpretation of “authorization”?

Orin continues to look down his nose at the introduction of policy into the interpretation of this central but undefined term. He thinks I’m requesting a new statute. In fact I’m asking the courts to recognize a perfectly plausible reading of “authorization,” in a criminal context where ambiguity would ordinarily be resolved in favor of the defendant.

I agree with Orin that this interpretation requires the courts to decide which entitlements should be recognized and which should not. He thinks that’s a role for Congress, not the courts, an argument that might be more persuasive if we were discussing a civil statute, or a criminal statute that was not deterring companies from responding aggressively to a dangerous intelligence attack on our economy and our society.

That said, I welcome Orin’s acknowledgement that maybe Congress should permit counterhacking in some circumstances. Though I fear the CCIPS Old Guard lives on in his heart, and that somehow no actual amendment will ever quite pass muster there.

3. Is necessity a defense for counterhacking?

Orin suggests that a federal criminal necessity defense might be more apt in this case. Maybe so, but he acknowledges that it is at best controversial. At worst, in fact, it doesn’t exist. So, while I won’t spurn even a modest agreement with Orin, the chance to prove an affirmative defense that may not apply isn’t likely to offer much comfort for companies that want to gather information about their attackers.

A Final Response on Hacking Back

Orin Kerr

Thanks to Stewart for the interesting exchange on the (un)lawfulness of hacking back. Here are my concluding thoughts.

First, Stewart repeatedly draws analogies to the law of physical trespass that are faulty because they misunderstand the law of physical trespass. Stewart seems to think that it is legal to break into someone else’s house to retrieve your property stored inside. He also assumes that it is always okay for “rescuers, people in hot pursuit of thieves, easement holders, and government officials” to enter private property. From these assumptions, Stewart guesses that trespass law doesn’t apply to such cases because the conduct is authorized and thus can’t be a trespass. He builds his proposal on that assumption. Just treat electronic trespass like physical trespass, he says: Hacking back is authorized just as analogous physical entries are authorized.

But trespass law doesn’t work that way. First, you don’t have a right to break into someone else’s house to retrieve your stuff. That’s a trespass. The issue comes up most often in criminal cases when a party who entered someone else’s home and took property is charged with trespass and burglary. It’s common for the defense to claim that they entered to retrieve their own property: They thus concede liability for a criminal trespass but deny liability for the more serious crime of burglary. Cf. Auman v. People, 109 P.3d 647 (Colo. 2005). Similarly, those who are rescuers or police officers or those in hot pursuit don’t have a general exemption from trespass liability. Instead, they have to invoke an affirmative defense. Rescuers must invoke the necessity defense. See, e.g., City of Wichita v. Tilson, 253 Kan. 285 (1993). Police officers must invoke the affirmative defense of the Fourth Amendment. Either they have to produce a valid warrant or they have to identify an applicable exception to the warrant clause (one of which is hot pursuit). See, e.g., Entick v. Carrington, 95 Eng. Rep. 807 (K.B. 1765); Warden v. Hayden, 387 U.S. 294 (1967). Easement holders can’t trespass, but that’s because easements limit the property owner’s usual right to exclude.

What’s the lesson from physical trespass laws? It’s that trespass liability is actually pretty broad, and the kinds of exceptions that Stewart is using for purposes of analogy are a lot more limited than Stewart thinks. They’re affirmative defenses, not elements of the crime itself. So I agree that we should treat physical trespass and cybertrespass the same way; but that means recognizing that hacking back violates 18 U.S.C. 1030 and that the only way to get out of liability is to fit the case into an affirmative defense.

What about the affirmative defense of necessity? It seems to respond to Stewart’s concerns. If any existing criminal law doctrine fits Stewart’s argument, that’s it. Stewart says it isn’t very helpful, though, because it “isn’t likely to offer much comfort for companies that want to gather information about their attackers.” It’s too doctrinally uncertain and vague for companies to rely on safely. I’ll concede that’s true. But how is it relevant? We’re just debating what the law is. What companies feel about that law is irrelevant to the question.

A final comment

Stewart Baker

I still don’t think we’ve quite engaged. My point in discussing the various trespass exceptions is not to import them into the CFAA. My point is that trespass does not turn entirely on title, because the law recognizes that there are times when others have a right to enter the property. That’s significant not for the precise content of the right but because the CFAA uses language (“authorization,” “entitlement”) that directly invites an examination of the rights of the intruder.

You might say that “authorization” doesn’t exactly invite a claim of moral right by the person accused of a CFAA violation. But the statute does equate authorization with entitlement, which does invite such a claim. And the Budapest Convention, which is a more or less direct translation of the CFAA into treaty-speak, goes even further, criminalizing access “without right.” Surely this invites defendants to say, “I didn’t access that computer without right. I have a right to pursue my data.”

Put another way, by using such an open-ended word as “authorization,” you could say that the CFAA incorporated the defense of necessity into the crime, along with other claims of right or entitlement. The Justice Department might say that incorporating such a vague and ambiguous defense into the statute is unfair because it makes prosecutions harder. But it was the Justice Department that chose the term in the first place, precisely because it is so ambiguous and capacious that it allowed prosecution of wrongdoers without much worry about changes in technology. To which I would reply, “That’s fine, CCIPS, but you have to take the good with the bad. If ‘authority’ stretches with the times for you, then it stretches with the times for the defendant.”

In fact, let’s carry that point just a bit further for illustrative purposes. CCIPS could have written a (slightly) more capacious and ambiguous statute making it a felony to “do wrong with a computer.” Under that even more future-proofed law it would surely be open to a defendant to argue that counterhacking is not wrong. It seems to me that “authorization” is a slightly more precise and certainly fancier-sounding variant of “doing wrong.”

I still don’t know why Orin thinks that this reading of “authorization” is plainly wrong.

***

Volokh-Kerr

The Rhetoric of Opposition to Self-Help:

Eugene Volokh

I was just talking to some people recently about the question of “digital self-defense” — whether organizations that are under cyberattack should be free to (and are free to) fight back against attacking sites by trying to bring those sites down, by hacking into the sites, and so on.

I don’t claim to know the definitive answer to this question; but I did want to say a few words about some common anti-self-help rhetorical tropes, which are sometimes heard both in this context and in others.

1. Vigilantism: Allowing digital self-defense (or, to be precise, digital defense of property), the argument goes, would mean sanctioning vigilantism; the right, nonvigilante solution is to leave matters to law enforcement.

Yet the law has never treated defense of property as improper “vigilantism.” American law bars you from punishing those who attack you or your property, but it has always allowed you to use force to stop the attack, or prevent an imminent attack. There are limits on the use of force, such as the principle that generally (though not always) property may be defended only with nonlethal force. But generally speaking the use of force is allowed, and shouldn’t be tainted with the pejorative term “vigilantism,” which connotes illegality. (Black’s Law Dictionary echoes this, defining vigilantism as “The act of a citizen who takes the law into his or her own hands by apprehending and punishing suspected criminals.”)

2. Taking the Law Into Your Own Hands: Critics of self-defense and defense of property also sometimes characterize it as “taking the law into your own hands.” This too implies, it seems to me, extralegal action: someone unlawfully taking into his own hands power that the law leaves only in law enforcement’s hands.

Yet the law has always placed in your own hands — or, if you prefer, has never taken away from your own hands — the right to defend yourself and your property (subject to certain limits). By using this right, you aren’t taking the law into your own hands. You’re using the law that has always been in your hands.

There are many reasons the law has allowed such self-defense and defense of property: It’s generally more immediate than what law enforcement can do; even after the fact, law enforcement is often stretched too thin even to investigate all crimes; sometimes law enforcement may be biased against certain people, and may not take their requests for help seriously, so self-help is the only game in town. There are also reasons to limit self-defense and defense of property (I’ll note a few below). But let’s not assume that self-defense and defense of property somehow involve unlawful arrogation of legal authority on the defenders’ part. Rather, they generally involve legally authorized exercise of legal authority.

3. But the Statute Has No Self-Defense Exceptions: Ah, some may say, perhaps in the physical world you have the right to defend yourself and your property — but the CFAA secures no such right, so whatever one’s views on self-help, the fact is that self-help is illegal.

Yet, surprising as it may seem to many, self-defense and defense of property may be allowed even without express statutory authorization. These defenses were generally recognized by judges, back when the criminal law was generally judge-made; and many jurisdictions don’t expressly codify them even now. Federal law, for instance, has no express “self-defense” or “defense of property” statute. The federal statute governing assaults within federal maritime and territorial jurisdiction simply says, in part,

Whoever, within the special maritime and territorial jurisdiction of the United States, is guilty of an assault shall be punished as follows ….

(4) Assault by striking, beating, or wounding, by a fine under this title or imprisonment for not more than six months, or both.

(5) Simple assault, by a fine under this title or imprisonment for not more than six months, or both, or if the victim of the assault is an individual who has not attained the age of 16 years, by fine under this title or imprisonment for not more than 1 year, or both.

(6) Assault resulting in serious bodily injury, by a fine under this title or imprisonment for not more than ten years, or both.

(7) Assault resulting in substantial bodily injury to an individual who has not attained the age of 16 years, by fine under this title or imprisonment for not more than 5 years, or both.

Assault is generally defined (more or less) as “any intentional attempt or threat to inflict injury upon someone else, when coupled with an apparent present ability to do so, and includes any intentional display of force that would give a reasonable person cause to expect immediate bodily harm, whether or not the threat or attempt is actually carried out or the victim is injured.” The federal criminal code thus on its face prohibits all assaults, including ones done to defend one’s life. Yet self-defense is a perfectly sound defense under federal law — because federal courts recognize self-defense as a general criminal defense, available even when the statute doesn’t specifically mention it.

Likewise, federal law generally bans possession of firearms by felons, with no mention of self-defense as a defense. Yet federal courts have recognized an exception for felons’ picking up a gun in self-defense against an imminent deadly threat, again because self-defense is a common-law defense available in federal prosecutions generally.

Given this, a federal statute’s general prohibition on breaking into another’s computer doesn’t dispose of break-ins done in defense of property against an imminent threat — just as federal statutes’ general prohibitions on assault or on possession of a firearm by a felon don’t dispose of assault or possession done in defense of life (or sometimes property) against an imminent threat. Federal criminal law already includes judicially recognized and generally available self-defense and defense of property defenses, even when the defendant is prosecuted under a statute that doesn’t expressly mention such defenses.

There still remains a good deal of uncertainty about how the defense of property defense would play out in any particular digital strikeback situation, and I suppose it’s possible that courts might even decide that it’s categorically unavailable as a matter of law in computer break-in cases (though that would be unusual, given the general availability of self-defense and defense of property defenses). But it is a mistake to assert that such a defense is unavailable simply because the statute doesn’t mention it.

* * *

All this having been said, I want to stress that there are plausible arguments in favor of prohibiting digital self-defense (either criminalizing it or making it tortious), and reasons to be skeptical about easy analogies between digital self-defense (or, more precisely, defense of property) and physical self-defense. It may be, for instance, that there’s more of a risk of error in digital self-defense cases, in that you might disable, directly or indirectly, a computer that’s not actually attacking you. (Say, for instance, you’re defending against a worm by launching a counterworm; there’s more risk of massive damage to many third parties from an error in the counterworm than there is in a typical situation where you’re confronting someone who’s trying to run off with your bicycle.) It’s also not obvious what should be allowed when you’re going after a computer that is attacking you but only because it’s been hijacked. Should that turn, for instance, on whether the computer’s owner was negligent in allowing the computer to be hijacked?

It’s also not clear how the general principle that defense of property must generally be nonlethal should play out — what if you’re under attack using a hijacked computer that belongs to a hospital, an airport, a 911 center, or some other life-critical application? Is disabling that computer potentially lethal force, because it may have lethal consequences? How can you tell whether the computer is indeed running some application on which lives turn?

It’s therefore not obvious whether the law should criminalize most or all forms of digital self-defense, criminalize some and make others tortious, leave it entirely to the tort system so long as the actor sincerely believed (or perhaps reasonably believed) the actions were necessary to defend his property, or whatever else. Some limits on digital defense of property may well be proper, especially if we think that on balance allowing such defense would lead to too much harm to the property of third parties. But we need to analyze things carefully, by asking some of the questions I noted in the last few paragraphs — not just by condemning digital self-defense as vigilantism, as taking the law into one’s own hands, or as clearly illegal under current computer crime law.

Thanks to Warren Stramiello, a student whose paper first alerted me to the defense of property analogy; and note the Journal of Law, Economics & Policy symposium on the subject, which is available in volume 1, issue 1 of the Journal, but unfortunately not on the Web. (Participants included our very own Orin Kerr, as well as my incoming colleague Doug Lichtman.)

A Response to Eugene Volokh

Orin Kerr

Does a “Cyber Self-Help” Defense Exist, and Would It Be A Good Idea?: I enjoyed Eugene’s post about “digital self-help,” although I have a very different take on the question.

First, I highly doubt that a defendant can assert a “digital self-help” claim in a prosecution brought under the CFAA, 18 U.S.C. 1030. Eugene is right that federal criminal statutes generally do not mention self-defense and other defenses, and yet courts sometimes have recognized those defenses for some crimes. But I don’t think it’s accurate to say, as Eugene does, that “federal criminal law already includes judicially recognized and generally available self-defense and defense of property defenses.” Some commentators have said this, but I believe it clashes with the Supreme Court’s most recent take on such questions in Dixon v. United States, 126 S.Ct. 2437 (2006).

As I read Dixon, it seems that whether a federal defense exists is a question of Congressional intent. Specifically, the question is whether and how Congress meant to incorporate the common law defenses when it enacted that particular crime. Where Congress was silent, courts are supposed to reconstruct what Congress probably wanted or would have wanted “in an offense-specific context.” Id. at 2447. (It’s true that Dixon was a duress case, not a self-defense case, but it cited the Oakland Cannabis opinion, which was a necessity case; to me that suggests that the Court sees all the common law defenses as standing together.)

This is pretty straightforward when considering a federal criminal law that closely tracks a traditional criminal prohibition, such as homicide. As Justice Kennedy put it in his concurrence in Dixon, “When issues of congressional intent with respect to the nature, extent, and definition of federal crimes arise, we assume Congress acted against certain background understandings set forth in judicial decisions in the Anglo-American legal tradition.” It’s hard to imagine Congress enacting a homicide statute without meaning to incorporate a self-defense provision. So in that context, courts have readily applied self-defense even though it’s not technically written into the statute.

I think the CFAA is quite different. I don’t know of any evidence that anyone in Congress had ever even heard about “hacking back” when Congress passed the CFAA in 1986. Congress did consider whether there were some kinds of computer intrusions that would be okay based on the context; specifically, it created an exception in 1030(f) exempting “any lawfully authorized investigative, protective, or intelligence activity of a law enforcement agency.” But it didn’t create an exception for self-defense, and I don’t know of any reason to think that there was a background sense that those defenses would apply, as seems to be required under Dixon. Given that, I would tend to doubt that a federal “cyber self-defense” doctrine exists.

Although it’s not directly contrary to Eugene’s post, I’ll also add my 2 cents that I think such a defense would be a really, really, really bad idea. Here’s an excerpt of what I wrote on the topic in a 2005 article, Virtual Crime, Virtual Deterrence: A Skeptical View of Self-Help, Architecture, and Civil Liability:

It is very easy to disguise the source of an Internet attack. Internet packets do not indicate their original source. Rather, they indicate the source of their most immediate hop. Imagine I have an account from computer A, and that I want to attack computer D. I will direct my attack from computer A to computer B, from B to computer C, and from C to computer D. The victim at computer D will have no idea that the attack is originating at A. He will see an attack coming from computer C. Further, the use of a proxy server or anonymizer can easily disguise the actual source of attack. These services route traffic for other computers, and make it appear to a downstream victim as if the attack were coming from a different source.

As a result, the chance that a victim of a cyber attack can quickly and accurately identify where the attack originates is quite small. By corollary, the chance that an initial attacker would be identified by his victim and could be attacked back successfully is also quite small. Further, if the law actually encouraged victims of computer crime to attack back at their attackers, it would create an obvious incentive for attackers to be extra careful to disguise their location or use someone else’s computer to launch the attack. In this environment, rules encouraging offensive self-help will not deter online attacks. A reasonably knowledgeable cracker can be confident that he can attack all day with little chance of being hit back. The assumption that an attacker can be identified and targeted may have been true in the Wild West, but tends not to be true for an Internet attack.

Legalizing self-help would also encourage foul play designed to harness the new privileges. One possibility is the bankshot attack: If I want a computer to be attacked, I can route attacks through that one computer towards a series of victims, and then wait for the victims to attack back at that computer because they believe the computer is the source of the attack. By harnessing the ability to disguise the origin of attack, a wrongdoer can get one innocent party to attack another. Indeed, any wrongdoer can act as a catalyst to a chain reaction of hacking back and forth among innocent parties. Imagine that I don’t like two businesses, A and B. I can launch a denial-of-service attack at the computers of A disguised to look like it originates from the computers at B. The incentives of self-help will do the rest. A will defend itself by launching a counterattack at B’s computers. B, thinking it is under attack from A, will then launch an attack back at A. A will respond back at B; B back at A; and so on. As these examples suggest, basing a self-help strategy on the virtual model of the Wild West does not reflect a realistic picture of the Internet. Self-help in cyberspace would almost certainly lead to more computer misuse, not less.
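To make the hop-by-hop point in the excerpt concrete, here is a short, purely illustrative Python sketch (an editorial addition, not drawn from Kerr’s article; the Packet and relay names are invented for the example). It models each intermediary as rewriting the apparent source of the traffic it forwards, so that the final victim observes only the last hop.

# Toy illustration (editorial sketch, not from Kerr's article): each relay
# rewrites the apparent source of the traffic it forwards, so the final
# victim sees only the last hop, never the true origin.

from dataclasses import dataclass

@dataclass
class Packet:
    apparent_source: str  # the only source address the receiver can see
    payload: str

def relay(packet: Packet, relay_host: str) -> Packet:
    # A proxy, anonymizer, or compromised intermediary forwards the payload,
    # but the outgoing traffic now carries the relay's own address.
    return Packet(apparent_source=relay_host, payload=packet.payload)

# An attacker at A routes an attack through B and C toward victim D.
attack = Packet(apparent_source="A", payload="attack traffic")
attack = relay(attack, "B")
attack = relay(attack, "C")

# D inspects the traffic it actually receives: it appears to come from C.
print("Victim D sees an attack coming from:", attack.apparent_source)

Running the sketch prints C as the apparent attacker even though the attack originated at A, which is why a counterattack aimed at the apparent source is likely to hit the wrong machine.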

More in the article itself (unfortunately, the version on SSRN is only an early draft, but the final is on Westlaw and Lexis).

Response to Orin Kerr

Eugene Volokh

Common-Law Federal Criminal Defenses:

I just wanted to very briefly comment on Orin’s post on the subject. Dixon v. United States involved the question of who is to bear the burden of proof as to a duress defense. The “long-established common-law rule” had been that the defendant must prove duress by a preponderance of the evidence, and the Court held that Congress did not intend to displace this rule. This is where the “offense-specific context” language comes up (citation omitted):

Congress can, if it chooses, enact a duress defense that places the burden on the Government to disprove duress beyond a reasonable doubt. In light of Congress’ silence on the issue, however, it is up to the federal courts to effectuate the affirmative defense of duress as Congress “may have contemplated” it in an offense-specific context. In the context of the firearms offenses at issue — as will usually be the case, given the long-established common-law rule — we presume that Congress intended the petitioner to bear the burden of proving the defense of duress by a preponderance of the evidence.

It seems to me that this common-law tradition is the most important factor here, and the longstanding common-law acceptance of the defense-of-property defense should lead federal courts to assume that Congress didn’t mean to preempt it, at least absent a statement from Congress to the contrary.

It’s true that Congress likely didn’t think much about the defense when enacting computer crime laws; but the point of the common-law criminal defenses is precisely that the legislature often doesn’t think much about defenses, which often (as with duress, for instance) involve relatively rare circumstances. The defenses are out there to be used when the triggering circumstances arise, and Congress doesn’t need to think much about them when enacting specific statutes.

So it seems to me that Dixon is quite consistent with my position: Congress legislates against the background of various common-law rules related to criminal law defenses, and the general presumption is that Congress doesn’t mean to displace these background rules.

Response to Eugene Volokh

Orin Kerr

More on the “Hacking Back” Defense: I wanted to add one more round to the exchange Eugene and I were having about whether a defendant charged with a federal computer intrusion crime can assert a “hacking back” defense. I’m still of the opinion that defendants cannot assert such a defense, and I wanted to respond to Eugene’s most recent post about it. Specifically, I want to make two points. First, I’m not entirely sure a general defense-of-property defense exists as a default in federal criminal law, and second, if the doctrine exists, I don’t think it covers computer intrusions.

The reason I’m unsure that the “defense of property” defense exists as a Congressional default is that the defense seems to be quite rare in federal court, and the cases appear almost entirely in a very specific context. Based on a quick Westlaw check, at least, I could only find about 30 federal criminal cases that seem to apply it or discuss it at all. Further, those cases arise almost entirely in one setting: a defense raised in a prosecution for physical assault. There’s also a bit of homicide and one or two other crimes thrown in, but not much. Perhaps a lot more cases exist beyond what I could find, but I couldn’t find much — and what I found was quite narrow and applied only in a very small subset of criminal cases. Clearly this doesn’t rule out that Congress legislates all criminal offenses against a general background norm of a “defense of property” defense being available, but I think it does cast some doubt on it.

Second, when stated as a defense in federal criminal cases, “defense of property” seems to mean only defense of physical property from physical access or removal. For example, in the context of the Model Penal Code’s defense of property section, which has been influential in federal court applications of defenses, the provisions are available only “to prevent or terminate an unlawful entry or other trespass upon land or a trespass against or the unlawful carrying away of tangible, movable property . . . , [or] to effect an entry or re-entry upon land or to retake tangible movable property.” MPC 3.06. (The MPC seems to treat the kind of interference with property that includes computer intrusions under a separate section, § 3.10, Justification in Property Crimes, which seems to follow a different set of principles. Also, while you might think “entry” includes virtual entry, entry in the context of criminal trespass statutes is generally understood to mean physical entry.) Given that, whatever “defense of property” doctrine is established as a background norm when Congress creates a new criminal law, it doesn’t seem to me to apply to computer attacks.

Anyway, I should stress that we don’t yet have any cases on this, so both Eugene and I are guessing as to what courts would or should do based on the legal materials out there. It’s a very interesting question. Finally, I’ll add any further thoughts in the comment thread, as I’m not sure a lot of readers are interested in this issue.

Response to Orin Kerr

Eugene Volokh

The “Defense of Property” Defense:

I much appreciate Orin’s posts on the subject, and I should note again what I noted at the outset — there are quite plausible policy arguments for barring “hacking back” even when it’s done to defend property against an ongoing attack, and Orin has expressed some of them in the past. That an action falls generally within the ambit of an existing defense, or is closely analogous to an existing defense, doesn’t preclude the conclusion that we should nonetheless bar the action because of special problems associated with it.

Nonetheless, I do disagree with two parts of Orin’s analysis. First, it seems to me that the defense-of-property defense has indeed been recognized as part of a general class of common-law defenses — including justifications such as self-defense and defense of others, and excuses such as duress or insanity — that are by default accepted in all jurisdictions, or at least all jurisdictions that have not expressly codified their defenses. (I say “by default” because they may be expressly precluded by statute, as a few states have done as to insanity.) Robinson’s treatise on Criminal Law Defenses describes it well, I think:

Every American jurisdiction recognizes a justification for the defense of property. The principle of the defense of property is analogous to that of all defensive force justifications and may be stated as follows: … Conduct constituting an offense is justified if:

(1) an aggressor unjustifiably threatens the property of another; and

(2) the actor engages in conduct harmful to the aggressor

(a) when and to the extent necessary to protect the property,

(b) that is reasonable in relation to the harm threatened.

More generally, defense of property, self-defense, and defense of others are generally treated by the law more or less similarly, though subject to the general principle that defense of property will generally not justify the use of lethal force. I have never seen in any case, treatise, or other reference any indication that federal law differs from this, and rejects the notion that defense-of-property is a general default.

I agree with Orin that the defense has been rare. But I suspect that it is rare because defense of property generally doesn’t authorize the use of deadly force, and because use of supposedly defensive nondeadly force is less likely to draw a federal prosecutor’s attention than the use of supposedly defensive deadly force. The typical nonlethal defense of property scenario — someone says I punched him, and I claim I did this in order to keep him from stealing my briefcase — just isn’t likely to end up prosecuted by the local U.S. Attorney’s office, even if there’s some reason to doubt my side of the story.

Second, Orin points to the Model Penal Code as evidence that “when stated as a defense in federal criminal cases, ‘defense of property’ seems to mean only defense of physical property from physical access or removal”; and the MPC does define defense of property as limited to “use of force upon or toward the person of another … to prevent or terminate an unlawful entry or other trespass upon land or a trespass against or the unlawful carrying away of tangible, movable property …, [or] to effect an entry or re-entry upon land or to retake tangible movable property” (plus provides for a related but different defense in § 3.10).

But the MPC seems to define defenses in a way that’s focused on those crimes that the MPC covers. For instance, the MPC’s self-defense provision literally covers only “the use of force upon or toward another person”; it would not cover self-defense against an imminent threat as a defense to a charge of being a felon in possession of a firearm (though no such crime is defined by the MPC in the first place). Yet federal law does recognize this defense. Likewise, state cases recognize self-defense as a defense to charges arising from the use of force against an animal, when that use would otherwise be illegal (I could find no federal prosecutions involving the question).

Now perhaps the answer is that federal law would reject even self-defense as a defense to non-physical-force crimes, and that the defense in felon-in-possession cases is actually a species of the necessity defense. But if that’s true (which isn’t clear, since it’s not even clear that federal law recognizes a general necessity defense), then one could equally argue for digital self-defense under the rubric of necessity.

Likewise, while Orin brackets § 3.10, that might very well be the defense-of-property provision (though labeled by the MPC under the more general rubric of “justification in property crimes”) that an MPC-following federal court might adopt, if it chooses to take a narrow view of the common-law defense-of-property defense. Section 3.10 generally allows “intrusion on or interference with property [when tort law would recognize] a defense of privilege in a civil action based [on the conduct],” unless the relevant criminal statute “deals with the specific situation involved” or a “legislative purpose to exclude the justification claimed otherwise plainly appears.” And the common law has generally recognized defense of property as a privilege in civil actions. (See, e.g., Restatement (Second) of Torts § 79, which allows even nonlethal physical force against a person when necessary to terminate the person’s intrusion on your possession of chattels. That doesn’t literally cover use of nonlethal electronic actions against a computer, but the point of common-law defenses is that they are applicable by analogy; the Restatement is thus a guide, not a detailed code to be followed only according to its literal terms even in novel situations.)

So we have to remember, it seems to me, that the federal law of criminal defenses is common law, borrowing from both the substance of the traditionally recognized common-law defenses, and from the common-law method, which involves reasoning by analogy. The common-law method also allows analogies to be resisted, if the new situation is vastly different from the old; and of course Congress can trump common-law defenses by statute. But the background remains that there’s a common-law defense of defense of property (buttressed, where necessary, by the necessity defense, and to the extent one is influenced by the Model Penal Code, by § 3.10’s borrowing from the common-law tort defenses), and that there’s no reason to think that federal law takes a narrow view of this defense.