Published by The Lawfare Institute
in Cooperation With
I. Should Software Makers Pay?

The joke goes that only two industries refer to their customers as “users.” But here’s the real punch line: Drug users and software users are about equally likely to recover damages for whatever harms those wares cause them. Let’s face it. Dazzled by what software makes possible—the highs—we have embedded into our lives a technological medium capable of bringing society to its knees, but from which we demand virtually no quality assurance. The $150 billion U.S. software industry has built itself on a mantra that has become the natural order: user beware. Unfortunately, software vulnerabilities don’t just cost end users billions annually in antivirus products. The problem is bigger than that. In 2011, the U.S. government warned critical-infrastructure operators about an exploit targeting a stack overflow vulnerability in software deployed in utilities and manufacturing plants around the world. In 2012, a researcher found almost two dozen vulnerabilities in industrial control systems (ICS) software used in power plants, airports and manufacturing facilities. In its 2013 threat update, Symantec, the world’s largest security software corporation, surprised no one when it announced that criminals were finding and exploiting new vulnerabilities faster than software vendors were proving able to release patches. Cybersecurity is a very big set of problems, and bad software is a big part of the mess. How did we get here? The rapid evolution of software technology and the surge in the total number of computer users actually led early commentators to warn of software vendors’ increasing exposure to lawsuits—and the “catastrophic” consequences that would ensue. But history has gone the other way. Operating within a “legislative void,” the courts have consistently construed software licenses in a manner that allows software vendors to disclaim almost all liability for software defects.
Bruce Schneier, perhaps the most prominent decrier of the current no-liability regime for software vendors, puts it simply: “there are no real consequences for having bad security.” The result is a marketplace crammed with shoddy code. As users, we tolerate defective software because defective software works most of the time. And we get it much faster and with a great many features. Partly in response to consumer appetite, timely release and incremental patching have become key features of the industry’s “fix-it-later” culture. Software companies look for bugs late in the development process and knowingly package and ship buggy software with impunity. Meanwhile, end users are slow to acknowledge vulnerabilities, patch too infrequently, and fail to deploy published updates promptly. Some experts fear that nothing short of a digital Pearl Harbor—a large-scale attack that exploits critical security holes in our industrial control systems—will create the momentum needed to trigger government regulation of, and private investment in, quality code. If that ends up being the case, it won’t be for lack of theorizing. Suboptimal code has been recognized as a problem for decades. Certainly, there are defenders of the status quo who argue that holding software providers liable for their code would raise costs and stifle innovation. But legal academics have spent thirty years disagreeing with that proposition and dreaming up liability schemes designed to force software vendors to shoulder some of the costs long borne entirely by users. The software liability debate has retained its basic shape over the years, but the harms giving rise to the debate have clearly evolved in that time. The earliest software liability discussions focused on embedded software malfunctions that led to physical injury or death. Concern then expanded to software applications used to infringe copyright.
With the explosion in cybercrime and cyber-espionage, and rising fears of cyberterrorism, attention has converged on the vulnerabilities lurking in shoddy code. The shift in kind can also be understood as a shift in scale, with software harms expanding in reach from the end-users who seek to benefit from the deployment of particular software, to third parties affected by its unlawful use, and finally to all actors in an increasingly interconnected and increasingly insecure cyber ecosystem. These shifts are significant in that they pull the software liability discussion in two directions, compelling us to start holding vendors at least partially accountable for poor software development practices but also complicating any attempt to construct a coherent liability regime. For example, software insecurity can be likened to a public health crisis. The fact that a single vulnerability can give rise to untold numbers of compromised computers and harms that are difficult to cabin makes dumping costs entirely on end users unreasonable as a policy matter. To borrow the words of law professors Michael Rustad and Thomas Koenig, the current paradigm is one in which “[t]he software industry tends to blame cybercrime, computer intrusions, and viruses on the expertise and sophistication of third party criminals and on careless users who fail to implement adequate security, rather than acknowledging the obvious risks created by their own lack of adequate testing and flawed software design.” A more reasonable and balanced system should be possible. On the other hand, any attempt to systematically hold vendors accountable for vulnerabilities must build in realistic constraints, or risk exposing the industry to crushing liability. Commentators who advocate for software vendor liability have a common refrain: the software industry should not be categorically exempted from the safety standards imposed on other industries. 
And while that is certainly true, there is danger in over-relying on the analogies so often drawn between software and other, more conventional products and services. The most common analogy is the car. And there are legitimate parallels between the vehicle safety crisis of the 1960s and today’s software security conundrum. Then, state and federal courts were reluctant to apply tort law even where automobile-accident victims claimed their injuries resulted from the failure of manufacturers to exercise reasonable care in the design of their motor vehicles. Over the next thirty years, however, the courts did an about-face: they imposed on automobile manufacturers a duty to use reasonable care in designing products to avoid subjecting passengers to an unreasonable risk of injury in the event of a collision; applied a rule of strict liability to vehicles found to be defective and unreasonably dangerous; and held automobile manufacturers accountable for preventing and reducing the severity of accidents. Yet to insist that software defects and automobile defects should be governed by substantively similar legal regimes is to ignore the fact that “software” is a category comprising everything from video games to aircraft navigation systems, and that the type and severity of harms arising from software vulnerabilities in those products vary dramatically. By contrast, automobile defects almost invariably risk bodily injury and property damage. To dismiss these distinctions is to contribute to an increasingly contrived dichotomy between those who see the uniqueness of software as an argument for exempting software programs from traditional liability rules altogether, and those who insist that software is nothing special and that the road to software vendor liability therefore lies in traditional contract or tort remedies.
As it turns out, with respect to the paradigm shift that led to liability for automobile manufacturers, the courts were only one component of what Ralph Nader has called “an interactive process involving both the executive and legislative branches of government as well as the forces of the marketplace.” Specifically, in 1966, in response to mounting public pressure and political momentum directly attributable to Nader’s highly visible consumer advocacy efforts, Congress passed the National Traffic and Motor Vehicle Safety Act, which vested a federal agency with the power to proactively promulgate and enforce industry safety regulations. In 1987, law professor Jerry Mashaw and lawyer David Harfst described the statute as nothing short of “revolutionary”:
Abandoning the historic definition of the automobile safety problem as one of avoiding accidents by modifying driver behavior, the 1966 Act adopted an epidemiological perspective. Reconstituted, the safety issue became how to modify the vehicle (environment) so that the interaction of the passenger (host) and the deceleration forces of accidents (agent) produced less trauma.

Reconceiving software security is as necessary today as reconstituting the automobile safety issue was yesterday. And just as imposing vehicle manufacturer liability required a shift in our auto safety paradigm three decades ago, fairly allocating the costs of software deficiencies between software vendors and users will require examining some of our deep-seated beliefs about the very nature of software security, as well as questioning our addiction to functionality over quality. Recalibrating the legal system that has grown out of those beliefs and dependencies will, in turn, require concerted action from Congress and the courts. It is common enough to pay lip service to the idea of committing to a system-based view of cybersecurity. We litter our cybersecurity discussions with medical and ecological metaphors, and use these to great effect in arguing for comprehensive cyber surveillance measures and increased public-private data sharing. But most security breaches are made possible by software vulnerabilities. So the real question is this: when the “body” breaks down, when the “environment” fails, will we put our money where our collective mouth is? Will a nuanced view of the interdependent forces at play guide us in determining who must pay when things go wrong and who carries the risks associated with bad software? Over the next month, in a series of weekly installments, I’ll be exploring whether and how we might hold software vendors liable for the quality of their code.
Throughout, I’ll be examining what makes software and software harms distinctive, and assessing how a regime that holds software vendors liable for the flaws in their code or coding practices could be tailored to account for these features.
II. Software Security Is Difficult—and Doable

It’s true: perfectly secure software is a pipe dream. Experts agree that we cannot make software of “nontrivial size and complexity” free of vulnerabilities. Moreover, consumers want feature-rich, powerful software and they want it quickly; and this tends to produce huge, bulky, poorly written software, released early and without adequate care for security. Those who oppose holding software makers legally accountable for the security of their products often trot out this reality as a kind of trump card. But their conclusion—that it is unreasonable to hold software makers accountable for the quality of their code—doesn’t follow from it. Indeed, all of the evidence suggests that the industry knows how to make software more secure, that is, to minimize security risks in software design and deployment—and that it needs a legal kick in the pants if it is to consistently put that knowledge into practice. Generally speaking, the term “software security” is used to denote designing, building, testing and deploying software so as to reduce vulnerabilities and to ensure the software’s proper function when under malicious attack. Vulnerabilities are a special subset of software defects that an attacker can leverage against the software and its supporting systems. A coding error that does not offer a hacker an opportunity to attack a security boundary is not a vulnerability; it may affect software reliability or performance, but it does not compromise its security. Vulnerabilities can be introduced at any phase of the software development cycle—indeed, in the case of certain design flaws, before the coding even begins. Loosely speaking, vulnerabilities can be divided into two categories: implementation-level vulnerabilities (bugs) and design-level vulnerabilities (flaws).
Relative to bugs, which tend to be localized coding errors that may be detected by automated testing tools, design flaws exist at a deeper architectural level and can be more difficult to find and fix. The case that software insecurity is inherent, to some degree, is a strong one. Security is not an add-on or a plug-in that software makers can simply tack onto the end of the development cycle. According to the late Watts Humphrey, known as the “Father of Software Quality,” “Every developer must view quality as a personal responsibility and strive to make every product element defect free.” James Whittaker, former Google engineering director, offers similar thoughts in his book detailing the process by which the world’s search giant tests software: “Quality is not equal to test. Quality is achieved by putting development and testing into a blender and mixing them until one is indistinguishable from the other.” Responsible software, in other words, is software with security baked into every aspect of its development. As a general matter, we expect technology to improve in quality over time, as measured in functionality and safety. But the quality of software, computer engineers argue, has followed no such trajectory. This is not wholly attributable to software vendors’ lack of legal liability; the penetrate-and-patch cycle that is standard to the software industry today is a security nightmare born of characteristics particular to software. Gary McGraw, among the best-known authorities in the field, attributes software’s growing security problems to what he terms the “trinity of trouble”: connectivity, extensibility and complexity. To this list, let’s add a fourth commonly cited concern, that of software “monoculture.” Together, all of these factors help explain at a systemic level what makes software security difficult to measure, and difficult to achieve. Let’s consider each briefly.
First, ever-increasing connectivity between computers makes all systems connected to the Internet increasingly vulnerable to software-based attacks. McGraw notes that this basic principle is exacerbated by the fact that old platforms not originally designed for the Internet are being pushed online, despite not supporting security protocols or standard plug-ins for authentication and authorization. Second, an extensible system is one that supports updates and extensions and thereby allows functionality to evolve incrementally. Web browsers, for example, support plug-ins that enable users to install extensions for new document types. Extensibility is attractive for purposes of increasing functionality, but also makes it difficult to keep the constantly adapting system free of software vulnerabilities. Third, a 2009 report commissioned to identify and address the risks associated with the increasing complexity of NASA flight software defines complexity simply: “how hard something is to understand or verify” at all levels from software design to software development to software testing. Software systems are growing exponentially in size and complexity, which makes vulnerabilities unavoidable. Carnegie Mellon University’s CyLab Sustainable Computing Consortium estimates that commercial software contains 20 to 30 bugs for every 1,000 lines of code—and Windows XP contains at least 40 million lines of code. At that rate, a single operating system would ship with on the order of 800,000 to 1.2 million coding errors. The problems do not end with the mind-boggling math. The complexity of individual software systems also creates potential problems for interactions among many different applications. For example, earlier this year, Microsoft was forced to issue instructions for uninstalling its latest security update when unexpected interactions between the update and certain third-party software rendered some users’ computers unbootable. Finally, there are the dangers associated with monoculture—the notion that system uniformity predisposes users to catastrophic attacks.
As examples of the dangers of monoculture, security experts point to the rot that devastated genetically identical potato plants during the Irish Potato Famine and the boll weevil infestation that destroyed cotton crops in the American South in the early twentieth century. As noted by Dan Geer in a famous paper that, as lore has it, got him fired from a Microsoft affiliate: “The security situation is deteriorating, and that deterioration compounds when nearly all computers in the hands of end users rely on a single operating system subject to the same vulnerabilities the world over.” In other words, to a great extent, software is different from other goods and services. And the notion of perfectly secure software almost certainly is a white whale. But as far as the liability discussion is concerned, it’s also a bit of a red herring. Software does not need to be flawless to be much safer than the code churned out today. For starters, software vulnerabilities are not all created equal, and they do not all pose the same risk. Based on data from 40 million security scans, the cloud security company Qualys found that just 10 percent of vulnerabilities are responsible for 90 percent of all cybersecurity exposures. Not only are some vulnerabilities doing an outsize share of the work in creating cybersecurity problems, but many are, from a technical standpoint, inexcusable. Only systemic dysfunction can explain why the buffer overflow, a low-level and entirely preventable bug, remains among the most pervasive and critical software vulnerabilities out there. That software security is an unusually difficult pursuit, and that the current incentive structure prevents software makers from consistently putting what they know about security into practice, together weigh in favor of holding software makers accountable for at least some vulnerabilities. The law regularly imposes discipline on other industries that would otherwise be lured toward practices both easy and lucrative.
An intelligently designed liability regime should be understood as a way to protect software from itself. Ironically, software is most insecure where the stakes are highest. For all the progress that traditional software providers have made in creating more secure applications, experts say that embedded device manufacturers, responsible for producing everything from medical devices to industrial control systems, are years behind in secure system development. The problem is not that the industry has proven unable to develop a disciplined approach to designing secure software. The problem is that this discipline is too often optional. For example, commercial aircraft are legally required to meet rigorous software safety requirements established by the avionics industry and outlined in a certification document known as DO-178C. Yet not all life-critical systems—indeed, not even all aircraft—are required to comply with such baselines. Software for unmanned aerial vehicles (UAVs) need not meet the DO-178C standard. The discrepancy is hard to justify. As Robert Dewar, president and CEO of the commercial software company AdaCore, put it in an interview last year: “All engineers need to adopt the ‘failure-is-not-an-option’ attitude that is necessary for producing reliable, certified software. UAVs require at least as much care as commercial avionics applications.” History says he has a point. In 2010, a software glitch caused U.S. Navy operators to briefly lose control of a drone, which wandered into restricted Washington airspace before they were able to regain control. Similarly, some critical infrastructure sectors must meet mandatory cybersecurity standards as defined by federal law, or risk civil monetary penalties. But others not legally bound are instead left to drown in voluntary cybersecurity guidance. In 2011, the U.S. Government Accountability Office analyzed the extent to which seven critical infrastructure sectors issued and adhered to such guidance.
That report was appropriately titled: “Cybersecurity Guidance Is Available, but More Can Be Done to Promote Its Use.” In 1992, Ward Cunningham, the programmer who gave the world its first wiki, introduced what would become popularly known as “technical debt” to describe the long-term costs of cutting corners when developing code. He wrote: “Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite. Objects make the cost of this transaction tolerable. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt.” In 2010, Gartner, the Connecticut-based information technology research firm, projected that technical debt worldwide could hit $1 trillion by 2015. That number is all the more disturbing when you consider the fact that technical debt is unlike financial debt in one crucial sense: private companies are not incurring the costs and risks associated with their choices. You are. The goal of a new cyber liability regime should be to change that—to create an incentive to lower technical debt and thereby add discipline to a confusing, inconsistent and fundamentally irrational default regime, under which software makers call the shots and users pay for them.
III. The Typhoid Mary Model of Cyber User Hygiene

In the early twentieth century, typhoid struck mostly poor people. So it was odd when, in the summer of 1906, six of eleven people in the wealthy household of bank president Charles Henry Warren fell ill with the disease. Hired to investigate the source of the scourge, “sanitary engineer” George Soper followed the breadcrumbs to the Warren family’s new, and recently missing, cook. Piecing together her history, he learned that typhoid outbreaks had trailed Mary Mallon from house to house for a decade. Mallon was a good cook with a tragic flaw: she did not wash her hands. This particular combination of talent and defect proved disastrous for her many patrons, since Mallon, later dubbed Typhoid Mary, was a rare, seemingly healthy carrier of the fecal-oral bacterium Salmonella typhi. Eventually Mallon was apprehended, forcibly quarantined for three years and released on the condition that she would cease preparing food. But unconvinced that she was anything but hale, she took on a series of aliases and began to cook, and inadvertently kill, again. To the modern reader, Mallon’s denial of biology in the face of evidence is baffling—and criminal. But her conduct is less astonishing when translated to another context: the Internet. Our collective behavior as Internet and software users is remarkably like Mallon’s. End users are known to rely on easy-to-guess passwords, to unknowingly execute malware, and to neglect to install critical software patches in time. We do all this despite being told that these behaviors have costs, perhaps to ourselves, perhaps to others. But if code creates real hazards for people and businesses, shouldn’t that eventually generate a market for more secure code? This is the simple and seductive argument of those who oppose liability for makers of insecure software: just leave the quality of code to the market to determine.
The free-market argument appears most often, and in greater detail, as a counter to proposals for Internet service provider liability, but its logic operates similarly as opposition to holding software makers accountable for shipping vulnerability-ridden products. At its crux, this logic assumes that when it comes to society’s cybersecurity needs, users can and should be the ones pulling the levers. Here is how Jim Harper of the libertarian Cato Institute put the point back in 2005: “On the margin, pushing disproportionate liability onto ISPs would erode Internet users’ focus on self-awareness and self-help.” Moreover, Harper noted, such a move would “suppress” what is “a well-developing and diverse market for Internet hygiene services.” In the software security context, this argument boils down to two parts: first, that patching practices and antivirus products have some handle on cybersecurity problems; and second, that shifting liability onto entities other than the user would interfere with the market’s ability to generate its own remedies. It’s an argument not unlike contending that in 1906 the market was equipped to handle the risk posed by Mallon—that rather than being quarantined, she should have been allowed to cook because New York families could develop good screening techniques for identifying infected food workers. Security experts have written tomes on why monthly patch rollouts and steadily proliferating antivirus options do not collectively constitute a viable solution to the problem of insecure code. But more can be said about the nature of this inadequacy, which traces back to the inadequacy of users. Consumers of “Internet hygiene services” are ultimately as ill-equipped to bear the burden of shaping the market to minimize software security risks as Mallon’s employers were to control the spread of typhoid.
The analogy applies on two levels, for as users we play the role of the victims—the New Yorkers who hired Typhoid Mary—but in important respects we also play the role of Mary herself. Three features make Typhoid Mary a relevant analogy for the modern software user, and shed light on why relying on users to make responsible cyber hygiene decisions cannot make for a responsible national cybersecurity policy. First, there is user apathy. The companies that produce buggy code are not alone in escaping the ramifications of their choices. Like Mallon, who remained healthy even as her patrons fell sick around her, users are not forced to suffer the full consequences of their personal use of buggy software or their bad security practices. This is a classic problem of what economists call negative externalities. And negative externalities are exacerbated by the fact that malware creators have gotten smarter about taking advantage of them. Unlike the viruses and worms of yesteryear, which would typically disrupt the operation of the infected machine in a noticeable fashion, modern malware tends to secrete itself onto a machine and use the host to attack third parties. Experts estimate that 10 percent of U.S. computers have been infected and co-opted for remote exploitation by herders of sprawling, spam-spewing botnets. Botnets, increasingly the tool of choice for cybercriminals as a consequence of their inherent versatility, are made up of vast numbers of infected computers that, unbeknownst to their owners, operate in concert to distribute malicious code, disrupt Internet traffic or steal sensitive user data. In 2010, Microsoft reported that more than 2.2 million PCs in the U.S. had been hijacked by bot herders. In June of this year, Microsoft’s Digital Crimes Unit worked with the FBI and the U.S.
Marshals Service to disrupt more than 1,400 Citadel botnets, responsible for infecting an estimated 5 million computers worldwide and stealing $500 million from consumer and business bank accounts over 18 months. Yet despite the continuing rise of botnets, many people lack reason to truly care that their computers are infected, because being part of a botnet does not especially harm them. In fact, people are on average quite unaware of how “pervasive and pernicious” the botnet threat is and remain unaware when their systems have been co-opted. This brings us to a second connection between Typhoid Mary and the computer user: ignorance. Users, commonly described as the weakest link in the security chain, generally lack the technical background to understand what is going on under the hoods of the various high-tech gadgets that make their worlds go round. So just as Mallon’s ignorance as to the science of the spread of a pathogen made it all the easier for her to skip the soap and continue the cooking, our lack of understanding when it comes to the mechanics of cyber risks lends itself to poor cybersecurity hygiene, even as our reliance on the Internet—and our consequent risk—increases steadily. Even the abstract knowledge that the Internet is teeming with malicious activity does not seem to translate into an appropriate awareness of personal risk. In 2011, McAfee conducted a global study which showed that, on average, consumers valued their digital assets at $37,438—and that more than a third of those consumers failed to institute protections across all their devices. Research conducted within other industries suggests that consumers tend to practice a kind of personal exceptionalism, believing they are less vulnerable to risks and less likely to be harmed by products than are others.
As one security researcher points out, “[i]t stands to reason that any computer user has the preset belief that they are at less risk of a computer vulnerability than others.” And here’s the kicker: users do not necessarily exercise greater online discretion even when they have personally experienced an adverse event. Technological illiteracy no doubt contributes to a litany of bad security practices. For example, in 2012, a Skype-commissioned survey of some 350,000 individuals revealed that 40 percent of adults do not update their software when prompted, and about a quarter skipped the updates because they did not understand the benefits. Users do not promptly patch software even when companies make such patches available in a timely fashion; something like 90 percent of successful exploits are attacks on unpatched systems. You might expect that users would be willing to deploy at least the most urgent fixes without delay. But no: numerous studies have confirmed the widely held belief that users are extremely slow to deploy security fixes, even in the case of critical vulnerabilities. Indeed, the most infamous worms and viruses have exploited vulnerabilities for which patches were readily available. These include the Code Red worm in 2001, which caused an estimated $1.2 billion in network damage, and the SQL Slammer in 2003, an even faster-spreading worm that completely shut down the Internet in South Korea and led to outages and slowdowns throughout Asia. Nothing better showcases the problems with dumping the burden of improving cybersecurity on the party with the least technical know-how than the emergence of one distinct criminal enterprise: fake antivirus software. The basic premise of the so-called “rogue” antivirus application is simple: feed on users’ fear of malware to infect computers with malware.
The scam sends an alert message to the user, offering a free (fake) scan and demanding a credit card number in exchange for removing the supposed infections. Researchers at Google recently conducted an analysis of 240 million web pages over 13 months and discovered that fake antivirus software accounts for 15 percent of all malware on the web and for 50 percent of malware distributed through advertisements. The problem is growing, both in absolute terms and relative to other malware. Put simply: we users are fools, and fools are easy to exploit. There is a third factor keeping end-user liability from even approaching viability as a path toward better software security or better cybersecurity generally, and that’s limited market power. Again, Typhoid Mary offers a useful analog. Under pressure to stop handling food, Mallon briefly attempted other occupations—taking a job, for example, as a laundress. Unfortunately, nothing paid as well for a woman of her station as did cooking. So cook she did. Like Typhoid Mary, software users are hobbled by the limits of their market power. In an industry structured to reward fast shipping and eventual patching, software makers face no consequences for even knowingly shipping vulnerability-ridden products. Meanwhile, users lack the ability to determine the quality of proprietary software until it has become a standard. One commentator describes the nightmare simply: “The standardization process interacts with the unfortunate fact that latent software security defects tend to remain hidden until after software has become popular, and consequently, such defects play no role in the competition to set standards.” Think about it. If you are dissatisfied with the security of your software, what are your options? Can you really afford to stop cooking altogether? As a nation of modern-day Typhoid Marys, we pose a greater threat to the cyber ecosystem in which we operate than to ourselves.
But unlike Mary, we cannot all be quarantined on an island next to Rikers. Our fate is just the opposite—to be increasingly interconnected, and increasingly exposed. So a smart cybersecurity policy has to be one that encourages cyber hygiene among users without mistaking it for an alternative to creating real demand for better security from software makers. Software makers are distinctly unlike Typhoid Mary in that they have the knowledge and the capability to improve the security environment. They resemble Mallon in one respect, and one respect only: they lack adequate incentive to change their habits and have duly shunted the risks associated with bad code onto others. A nuanced software liability regime—one that holds software makers accountable for unacceptably flawed products as well as their negligent or reckless marketing—could correct this. It doesn’t take a sanitary engineer to understand that.
V. Software Liability Is a Complex Machine, Not a Big Red Button

Noted computer security expert Daniel Geer thinks you should bear the costs of the insecure code you use. It’s nothing personal. He acknowledges that holding the end user responsible for being the “unwitting accomplice to constant crime” is far from a perfect cybersecurity strategy. But he concludes that for the time being, it is the least “worst” option:
If you say that it is the responsibility of Internet Service Providers (ISPs)—that “clean pipes” argument—then you are flatly giving up on not having your traffic inspected at a fine level of detail. If you say that it is the software manufacturer’s responsibility, we will soon need the digital equivalent of the Food and Drug Administration to set standards for efficacy and safety. If you say that it is the government’s responsibility, then the mythical Information Superhighway Driver’s License must soon follow. To my mind, personal responsibility for Internet safety is the worst choice, except for all the others.

Geer’s concern is well-founded: imposing security responsibilities on entities other than the end user will no doubt curtail some of the freedom and functionality that end users currently enjoy as consumers. This is true both with respect to software and Internet access specifically and with respect to computer information systems more generally. But Geer’s conclusions are unnecessarily stark, in part because they assume that security is a pie that cannot be intelligently shared—a conclusion that, it should be noted, we would never be inclined to accept in our offline lives. Our physical security may ultimately be our burden to bear, but we expect fast-food chains not to poison us, local police to do their rounds and our neighbors to call 9-1-1 if they see suspicious activity. Some invisible web of law, professional obligations and communal norms colludes at all times to keep us alive and our property in our possession. Would vesting ISPs with circumscribed security responsibilities—such as responding to or recording highly unusual traffic patterns that suggest an ongoing DDoS attack—require end users to “flatly” relinquish data privacy? Many ISPs already implement limited security mechanisms, and carefully designed private-public data-sharing restrictions could go a long way toward addressing concerns about improper use of subscriber information.
Similarly, holding software providers accountable for their code need not entail exposing them to lawsuits for any and all vulnerabilities found in their products. Liability critics battle a straw man when they make arguments like this one, from computer security authority Roger Grimes: “If all software is imperfect and carries security bugs, that means that all software vendors—from one-person shops to global conglomerate corporations—would be liable for unintentional mistakes.” Liability is a weapon far more nuanced than its critics believe. Geer and Grimes see liability as a big red button—a kind of nuclear option, to be avoided at all costs. Meanwhile, proponents understand liability as a complex machine ideally outfitted with a number of smart levers. Consider: software’s functions range from trivial to critical; security standards can be imposed at the development or testing stage, in the form of responsible patching practices, or through obligations for timely disclosure of vulnerabilities or breaches; the code itself might be open source or proprietary, and either might be free. An effective liability regime is one that takes these many factors into account when it comes to designing rules, creating duties or imposing standards. For starters, it would make no sense to hold all software providers to the same duty of care and the same liability for breach of that duty, irrespective of the software’s intended use and associated harms. As Bruce Schneier observed back in 2005, “Not all software needs to be built under the overview of a licensed software engineer . . . 
[but] commercial aircraft flight control software clearly requires certification of the responsible engineer.” All software embedded in life-critical systems or critical infrastructure should be held to more rigorous standards than ordinary commercial software, and its makers should be held liable either for harms caused by products that deviate from those standards or for flaws that are not promptly remediated. Imposing this kind of liability will require restricting the disclaimers of warranty and limitations on remedies found in standard software license agreements. As far as recommendations go, this is a familiar rerun. In a 2007 report, the House of Lords Science and Technology Committee recommended that the European Union institute “a comprehensive framework of vendor liability and consumer protection,” one that imposes liability on software and hardware manufacturers “notwithstanding end user licensing agreements, in circumstances where negligence can be demonstrated.” The tricky part, of course, is putting in place a system through which negligence can be reliably demonstrated. As a general principle, insofar as security can be understood as a process rather than an end, negligence should be assessed based on failure to adhere to certain security standards rather than on absolutes like the number of vulnerabilities in the software itself. Indeed, raw counts might reveal more about the software’s popularity than its inherent insecurity. Any particular vulnerability might not prove a software program unacceptably defective—but an examination of the general processes and precautions through which the software was produced just might. Laws merely establishing modest duties on the part of software makers—and subjecting them either to private suit or government fine in the event of harms resulting from breach—could help push the industry to develop and implement best practices. 
These practices could in turn constitute an affirmative defense against negligence claims. Best practices might range from periodic independent security audits to participation in what David Rice, Apple’s global director of security, describes as a ratings system for software security, an analogue to the National Highway Traffic Safety Administration’s rating system for automobile safety. Existing legislation that penalizes companies for failing to safeguard sensitive user information offers a useful model for imposing narrowly circumscribed security duties on software providers. For example, in 2006, the Indiana legislature enacted a statute that requires the owner of any database containing personal electronic data to disclose a security breach to potentially affected consumers but does not require any other affirmative act. The terms of the statute are decidedly narrow—it gives the state attorney general enforcement powers but affords the affected customer no private right of action against the database owner, and it imposes no duty to compensate affected individuals for inconvenience or potential credit-related harms in the wake of the breach.

There are other factors to consider in calibrating a liability system. Liability exposure should be to some extent contingent, for example, on the availability of the source code. It is difficult to imagine a good argument for holding the contributors to an open-source software project liable for their code. Whether or not you believe that the process by which open-source software evolves actually constitutes its own security mechanism, à la Linus’s Law (“given enough eyeballs, all bugs are shallow”), the fact is that open-source software offers users both the cake and the recipe. Users are free to examine the recipe and alter it at will. By offering users access to the source code, open-source software makes users responsible for the source code—and unable to recover for harms. 
Imposing liability on open-source contributors is not only an incoherent proposition, but it also has problematic First Amendment implications. Code is, after all, more than a good or service. It is also a language and a medium. And clumsily imposed liability rules could place significant and unacceptable burdens on software speech and application-level innovation. In her book on the relationship between Internet architecture and innovation, Barbara van Schewick gives us some sense of why we should be wary of creating stifling liability laws with her description of the nexus between software applications and sheer human potential:
The importance of innovation in applications goes beyond its role in fostering economic growth. The Internet, as a general-purpose technology . . . creates value by enabling users to do the things they want or need to do. Applications are the tools that let users realize this value. For example, the Internet’s political, social or cultural potential—its potential to improve democratic discourse, to facilitate political organization and action, or to provide a decentralized environment for social and cultural interaction in which anyone can participate—is tightly linked to applications that help individuals, groups or organizations do more things or do them more efficiently, and not just in economic contexts but also in social, cultural or political contexts.

All of this applies, of course, to proprietary applications, but the liability calculus for closed-source software should come out a little differently. When it comes to proprietary applications, the security of the code does not lie with users but remains instead entirely within the control of a commercial entity. The fact that much proprietary software is “free” should not foreclose liability: a narrowly tailored liability rule might provide that where users are “paying” for a software product or service with their data, a data breach that causes damages could be grounds for government-imposed fines or, to the extent it causes individuals to sustain harm, private damages. This is by no means a comprehensive overview of all possible approaches to constructing a software liability regime. It is rather a glimpse of a few of the many levers we can push and pull to turn security from an afterthought into a priority for software makers. Such a change will come with costs, imposed on software makers and redistributed to us, the users. But we must keep in mind that the preventive costs we pay today are low compared with the remedial security costs we could pay tomorrow. 
As a matter of routine, we accept inconveniences, costs and risk redistribution in other areas of our lives. Drugs are required to undergo clinical testing, food is inspected and occasionally “administratively detained,” and vaccines are taxed to compensate the small fraction of recipients who will suffer an adverse reaction. Restrained measures to police software, too, can be understood as part of a commonsense tradeoff between what is cheap and functional and what is safe and secure.
* * *

This series was dedicated to moving past old questions: whether software can be made more secure, or whether providers have the capacity to improve the security of their products. The question is no longer even whether manufacturers—and, by means of price increases and less timely releases, end users—should be compelled to bear the inconvenience and cost of these security enhancements. The question, and the challenge, lies in designing a liability regime that incentivizes software providers to improve the security of their products and compensates those unduly harmed by avoidable security oversights—without crippling the software industry or unacceptably burdening economic development, technological innovation, and free speech. We don’t need a red button. That’s the beauty of the law—turns out we have plenty of rheostats at our disposal.