
Cyborgs! Law and Policy Implications

Benjamin Wittes, Jane Chong
Friday, September 5, 2014, 10:27 AM
Published by The Lawfare Institute

And now for something completely different: Cyborgs. No, this is not a joke. For years, certain technology enthusiasts have floated variations on the question of whether we are becoming cyborgs---or already are cyborgs. In our newly released paper, titled "Our Cyborg Future: Law and Policy Implications," we take a different, more legal angle.
The law remains embryonic on virtually all points of interest to the adolescent cyborg: everything from your right to access your own data, to your right to restrict access to your data, to your ability to secure something more than property restitution when an airline destroys your custom mobility assistance device and leaves you bedridden for a year. That's right: whether you rely on a pacemaker to stay alive or on a cellphone to stay connected, when we say "adolescent cyborg," we are talking about you.
Here's the introduction:
In June 2014, the Supreme Court handed down its decision in Riley v. California, in which the justices unanimously ruled that police officers may not, without a warrant, search the data on a cell phone seized during an arrest. Writing for eight justices, Chief Justice John Roberts declared that “modern cell phones . . . are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.”

This may be the first time the Supreme Court has explicitly contemplated the cyborg in case law—admittedly as a kind of metaphor. But the idea that the law will have to accommodate the integration of technology into the human being has actually been kicking around for a while.

Speaking at the Brookings Institution in 2011 at an event on the future of the Constitution in the face of technological change, Columbia Law Professor Tim Wu mused that “we’re talking about something different than we realize.” Because our cell phones are not attached to us, not embedded in us, Wu argued, we are missing the magnitude of the questions we contemplate as we make law and policy regulating human interactions with these ubiquitous machines that mediate so much of our lives. We are, in fact, he argued, reaching “the very beginnings of [a] sort of understanding [of] cyborg law, that is to say the law of augmented humans.” As Wu explained,

[I]n all these science fiction stories, there’s always this thing that bolts into somebody’s head or you become half robot or you have a really strong arm that can throw boulders or something. But what is the difference between that and having a phone with you—sorry, a computer with you—all the time that is tracking where you are, which you’re using for storing all of your personal information, your memories, your friends, your communications, that knows where you are and does all kinds of powerful things and speaks different languages? I mean, with our phones we are actually technologically enhanced creatures, and those technological enhancements, which we have basically attached to our bodies, also make us vulnerable to more government supervision, privacy invasions, and so on and so forth.

And so what we’re doing now is taking the very first, very confusing steps in what is actually a law of cyborgs as opposed to human law, which is what we’ve been used to. And what we’re confused about is that this cyborg thing, you know, the part of us that’s not human, non-organic, has no rights. But we as humans have rights, but the divide is becoming very small. I mean, it’s on your body at all times.

Humans have rights, under which they retain some measure of dominion over their bodies. Machines, meanwhile, remain slaves with uncertain masters. Our laws may, directly and indirectly, protect people’s right to use certain machines—freedom of the press, the right to keep and bear arms. But our laws do not recognize the rights of machines themselves. Nor do the laws recognize cyborgs—hybrids that add machine functionalities and capabilities to human bodies and consciousness.
As the Riley case illustrates, our political vocabulary and public debates about data, privacy, and surveillance sometimes approach an understanding that we are—if not yet Terminators—at least a little more integrated with our machines than are the farmer wielding a plow, the soldier bearing a rifle, or the driver in a car. We recognize that our legal doctrines make a wide swath of transactional data available to government on thin legal showings—telephone call records, credit card transactions, banking records, and geolocation data, for example—and we worry that these doctrines make surveillance the price of existence in a data-driven society. We fret that the channels of communication between our machines might not be free, and that this might encumber human communications using those machines.
That said, as the Supreme Court did in Riley, we nearly always stop short of Wu’s arresting point. We don’t, after all, think of ourselves as cyborgs. The cyborg instead remains metaphor.
But should it? The question is a surprisingly important one, for reasons that are partly descriptive and partly normative. As a descriptive matter, sharp legal divisions between man and machine are turning into something of a contrivance. Look at people on the street, at the degree to which human-machine integrations have fundamentally altered the shape of our daily lives. Even beyond the pacemakers and the occasional robotic prosthetics, we increasingly wear our computers—whether Google Glass or Samsung Galaxy Gear. We strap on devices that record our steps and our heart rates. We take pictures by winking. Even relatively old-school humans are glued to their cell phones, using them not just as communications portals, but also for directions, to spend money, for informational feeds of varying sorts, and as recorders of data seen and heard and formerly—but no longer—memorized. Writes one commentator:

[E]ven as we rebel against the idea of robotic enhancement, we’re becoming cyborgs of a subtler sort: the advent of smartphones and wearable electronics has augmented our abilities in ways that would seem superhuman to humans of even a couple decades ago, all without us having to swap out a limb or neuron-bundle for their synthetic equivalents. Instead, we slip on our wristbands and smart-watches and augmented-reality headsets, tuck our increasingly powerful smartphones into our pockets, and off we go—the world’s knowledge a voice-command away, our body-metrics and daily activity displayable with a few button-taps.

No, the phones are not encased in our tissue, but our reliance on them could hardly be more narcotic if they were. Watch your fellow passengers the next time you’re on a plane that lands. No sooner does it touch down than nearly everyone engages their phones, as though a part of themselves has been shut down during the flight. Look at people on a bus or on a subway car. What percentage of them is in some way using phones, either sending information or receiving some stimulus from an electronic device? Does it really matter that the chip is not implanted in our heads—yet? How much of your day do you spend engaged with some communications device? Is there an intelligible difference between tracking it and tracking you?

This brings us to the normative half of the inquiry. Should we recognize our increasing cyborgization as more than just a metaphor, given the legal and policy implications both of doing so and of failing to do so? Our law sees you and your cell phone as two separate entities, a person who is using a machine. But robust protections for one may be vitiated in the absence of concurrent protections for the other. Our law also sees the woman with a pacemaker and the veteran with a robotic prosthesis or orthosis as people using machines. Where certain machines are physically incorporated into or onto the body, or restore the body to its “normal” functionality rather than enhance it, we might assume they are more a part of the person than a cell phone. Yet current laws offer no such guarantees. The woman is afforded no rights with respect to the data produced by her pacemaker, and the quadriplegic veteran has few rights beyond restitution for property damage when an airline destroys his mobility assistance device and leaves him for months without a replacement.
As we will explain, the general observation that humans are becoming cyborgs is not new. But commentators have largely used the term “cyborg” to capture, as a descriptive matter, what they see as an unprecedented merger between humans and machines, and to express concerns about the ways in which the body and brain are increasingly becoming sites of control and commodification. In contrast, we push the concept’s usefulness with normative vigor, suggesting ways in which conceptualizing our changing relationship to technology in terms of our cyborgization may facilitate the development of law and policy that sensitively accommodates that change.
The shift that comes of understanding ourselves as cyborgs is nowhere more apparent than in the surveillance realm, where discussion of the legal implications of our technology dependence is often couched in and restricted to privacy terms. Under this conventional construction, it is privacy that is key to our identities, and technology is the poisoned chalice that enables, on the one hand, our most basic functioning in a highly networked world, and on the other, the constant monitoring of our activities. For example, in a 2013 Christmas Day broadcast, Edward Snowden borrowed a familiar trope to portend a dark fate for a society threatened by the popularization of technologies that George Orwell had never contemplated. “We have sensors in our pockets that track us everywhere we go. Think about what this means for the privacy of the average person,” he urged. With some poignancy, Snowden went on to pay special homage to all that privacy makes possible. Privacy matters, according to Snowden, because “privacy is what allows us to determine who we are and who we want to be.”
There is, however, another way to think about all of this: what if we were to understand technology itself as increasingly part of our very being?
Indeed, perhaps we care so much about whether and how the government accesses our data because the line between ourselves and the machines that generate the data is getting fuzzier. Perhaps the NSA disclosures have struck such a chord with so many people because on a visceral level we know what our law has not yet begun to recognize: that we are already juvenile cyborgs, and fast becoming adolescent cyborgs; we fear that as adult cyborgs, we will get from the state nothing more than the rights of the machine with respect to those areas of our lives that are bound up with the capabilities of the machine.
In this paper, we try to take Wu’s challenge seriously and think about how the law will respond as the divide between human and machine becomes ever-more unstable. We survey a variety of areas in which the law will have to respond as we become more cyborg-like. In particular, we consider how the law of surveillance will shift as we develop from humans who use machines into humans who partially are machines or, at least, who depend on machines pervasively for our most human-like activities.
We proceed in a number of steps. First, we try to usefully define cyborgs and examine the extent to which modern humans represent an early phase of cyborg development. Next we turn to a number of controversies—some of them social, some of them legal—that have arisen as the process of cyborgization has gotten under way. Lastly, we take an initial stab at identifying key facets of life among cyborgs, looking in particular at the surveillance context and the stress that cyborgization is likely to put on modern Fourth Amendment law’s so-called third-party doctrine—the idea that transactional data voluntarily given to third parties is not protected by the guarantee against unreasonable search and seizure.

Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.
Jane Chong is former deputy managing editor of Lawfare. She served as a law clerk on the U.S. Court of Appeals for the Third Circuit and is a graduate of Yale Law School and Duke University.
