Bad Code: The Whole Series

Jane Chong
Monday, November 4, 2013, 12:46 PM

Over the last month, on our New Republic: Security States newsfeed, we rolled out a series designed to explain why fairly allocating the costs of software deficiencies between software makers and users is so critical to addressing the growing problem of vulnerability-ridden code---and how such a regime will require questioning some of our deep-seated beliefs about the very nature of software security. Below is a consolidation of the five-part series in full. Part 1 begins by situating the problem within a decades-old liability debate that began with a focus on life-critical systems malfunctions and has in recent years expanded to exploits. Part 2 argues that the technical difficulty of developing minimally bug-ridden software actually weighs in favor of, rather than against, a liability regime. Part 3 describes why, given the average user's abysmal cyber hygiene, depending on users to drive the demand for better code fails as a responsible national software security policy. Part 4 discusses how the rationale underlying court decisions that construe software license agreements against software users similarly precludes users from bringing successful actions in tort or under consumer protection statutes. Finally, Part 5 suggests that software liability critics and proponents are talking past each other in part because they understand liability in entirely different terms: critics see liability as a kind of nuclear option that could potentially destroy the industry as a whole, while proponents understand liability as a nuanced weapon--a many-levered machine--that could be calibrated so as to effectively promote the development of more secure software without imposing undue costs on the industry.

I. Should Software Makers Pay?

The joke goes that only two industries refer to their customers as “users.” But here’s the real punch line: Drug users and software users are about equally likely to recover damages for whatever harms those wares cause them. Let’s face it. Dazzled by what software makes possible—the highs—we have embedded into our lives a technological medium capable of bringing society to its knees, but from which we demand virtually no quality assurance. The $150 billion U.S. software industry has built itself on a mantra that has become the natural order: user beware.

Unfortunately, software vulnerabilities don’t just cost end-users billions annually in antivirus products. The problem is bigger than that. In 2011, the U.S. government warned critical-infrastructure operators about an exploit that was targeting a stack overflow vulnerability in software deployed in utilities and manufacturing plants around the world. In 2012, a researcher found almost two dozen vulnerabilities in industrial control systems (ICS) software used in power plants, airports and manufacturing facilities. In its 2013 threat update, Symantec, the world’s largest security software corporation, surprised no one when it announced that criminals were finding and exploiting new vulnerabilities faster than software vendors were proving able to release patches. Cybersecurity is a very big set of problems, and bad software is a big part of the mess.

How did we get here? The rapid evolution of software technology and the surge in the total number of computer users actually led early commentators to warn of software vendors’ increasing exposure to lawsuits—and the “catastrophic” consequences that would ensue. But history has gone the other way. Operating within a “legislative void,” the courts have consistently construed software licenses in a manner that allows software vendors to disclaim almost all liability for software defects. Bruce Schneier, perhaps the most prominent decrier of the current no-liability regime for software vendors, puts it simply: “there are no real consequences for having bad security.”

The result is a marketplace crammed with shoddy code. As users, we tolerate defective software because defective software works most of the time. And we get it much faster and with a great many features. Partly in response to consumer appetite, timely release and incremental patching have become key features of the industry’s “fix-it-later” culture. Software companies look for bugs late in the development process and knowingly package and ship buggy software with impunity. Meanwhile, end users are slow to acknowledge vulnerabilities, patch too infrequently, and fail to deploy published updates in a timely fashion.

Some experts fear that nothing short of a digital Pearl Harbor—a large-scale attack that exploits critical security holes in our industrial control systems—will create the momentum needed to trigger government regulation of and private investment in quality code. If that ends up being the case, it won’t be for lack of theorizing. Suboptimal code has been recognized as a problem for decades. Certainly, there are defenders of the status quo who argue that holding software providers liable for their code would raise costs and stifle innovation. But legal academics have spent thirty years disagreeing with that proposition and dreaming up liability schemes designed to force software vendors to shoulder some of the costs long borne entirely by users.
The software liability debate has retained its basic shape over the years, but the harms giving rise to the debate have clearly evolved in that time. The earliest software liability discussions focused on embedded software malfunctions that led to physical injury or death. Concern expanded to software applications used to infringe copyright. With the explosion in cybercrime and cyber-espionage, and rising fears of cyberterrorism, attention has converged on the vulnerabilities lurking in shoddy code. The shift in kind can also be understood as a shift in scale, with software harms expanding in reach from the end-users who seek to benefit from the deployment of particular software, to third parties affected by its unlawful use, and finally to all actors in an increasingly interconnected and increasingly insecure cyber ecosystem.

These shifts are significant in that they pull the software liability discussion in two directions, compelling us to start holding vendors at least partially accountable for poor software development practices but also complicating any attempt to construct a coherent liability regime. For example, software insecurity can be likened to a public health crisis. The fact that a single vulnerability can give rise to untold numbers of compromised computers and harms that are difficult to cabin makes dumping costs entirely on end users unreasonable as a policy matter. To borrow the words of law professors Michael Rustad and Thomas Koenig, the current paradigm is one in which “[t]he software industry tends to blame cybercrime, computer intrusions, and viruses on the expertise and sophistication of third party criminals and on careless users who fail to implement adequate security, rather than acknowledging the obvious risks created by their own lack of adequate testing and flawed software design.” A more reasonable and balanced system should be possible. On the other hand, any attempt to systematically hold vendors accountable for vulnerabilities must build in realistic constraints, or risk exposing the industry to crushing liability.

Commentators who advocate for software vendor liability have a common refrain: the software industry should not be categorically exempted from the safety standards imposed on other industries. And while that is certainly true, there is danger in over-relying on the analogies so often drawn between software and other, more conventional products and services. The most common analogy is the car. And there are legitimate parallels between the vehicle safety crisis of the 1960s and today’s software security conundrum. Then, state and federal courts were reluctant to apply tort law even where automobile-accident victims claimed their injuries resulted from the failure of manufacturers to exercise reasonable care in the design of their motor vehicles. Over the next thirty years, however, the courts did an about-face: they imposed on automobile manufacturers a duty to use reasonable care in designing products to avoid subjecting passengers to an unreasonable risk of injury in the event of a collision; applied a rule of strict liability to vehicles found to be defective and unreasonably dangerous; and held automobile manufacturers accountable both for preventing accidents and for reducing their severity.
Yet to insist that software defects and automobile defects should be governed by substantively similar legal regimes is to ignore the fact that “software” is a category comprising everything from video games to aircraft navigation systems, and that the type and severity of harms arising from software vulnerabilities in those products range dramatically. By contrast, automobile defects almost invariably risk bodily injury and property damage. To dismiss these distinctions is to contribute to an increasingly contrived dichotomy between those who see the uniqueness of software as an argument for exempting software programs from traditional liability rules altogether, and those who stress that software is nothing special in order to claim that the road to software vendor liability lies in traditional contract or tort remedies.

As it turns out, with respect to the paradigm shift that led to liability for automobile manufacturers, the courts were only one component of what Ralph Nader has called “an interactive process involving both the executive and legislative branches of government as well as the forces of the marketplace.” Specifically, in 1966, in response to mounting public pressure and political momentum directly attributable to Nader’s highly visible consumer advocacy efforts, Congress passed the National Traffic and Motor Vehicle Safety Act, which vested a federal agency with the power to proactively promulgate and enforce industry safety regulations. In 1987, law professor Jerry Mashaw and lawyer David Harfst described the statute as nothing short of “revolutionary”:
Abandoning the historic definition of the automobile safety problem as one of avoiding accidents by modifying driver behavior, the 1966 Act adopted an epidemiological perspective. Reconstituted, the safety issue became how to modify the vehicle (environment) so that the interaction of the passenger (host) and the deceleration forces of accidents (agent) produced less trauma.
Reconceiving software security is as necessary today as reconstituting the automobile safety issue was yesterday. And just as imposing vehicle manufacturer liability required a shift in our auto safety paradigm three decades ago, fairly allocating the costs of software deficiencies between software vendors and users will require examining some of our deep-seated beliefs about the very nature of software security, as well as questioning our addiction to functionality over quality. Recalibrating the legal system that has grown out of those beliefs and dependencies will, in turn, require concerted action from Congress and the courts.

It is common enough to pay lip service to the idea of committing to a system-based view of cybersecurity. We litter our cybersecurity discussions with medical and ecological metaphors, and use these to great effect in arguing for comprehensive cyber surveillance measures and increased public-private data sharing. But most security breaches are made possible by software vulnerabilities. So the real question is this: when the “body” breaks down, when the “environment” fails, will we put our money where our collective mouth is? Will a nuanced view of the interdependent forces at play guide us in determining who must pay when things go wrong and who carries the risks associated with bad software?

Over the next month, in a series of weekly installments, I’ll be exploring whether and how we might hold software vendors liable for the quality of their code. Throughout, I’ll be examining what makes software and software harms distinctive, and assessing how a regime that holds software vendors liable for the flaws in their code or coding practices could be tailored to account for these features.

II. Software Security Is Difficult---and Doable

It’s true: perfectly secure software is a pipe dream. Experts agree that we cannot make software of “nontrivial size and complexity” free of vulnerabilities. Moreover, consumers want feature-rich, powerful software and they want it quickly; and this tends to produce huge, bulky, poorly-written software, released early and without adequate care for security. Those who oppose holding software makers legally accountable for the security of their products often trot out this reality as a kind of trump card. But their conclusion—that it is unreasonable to hold software makers accountable for the quality of their code—doesn’t follow from it. Indeed, all of the evidence suggests that the industry knows how to make software more secure, that is, to minimize security risks in software design and deployment—and that it needs a legal kick in the pants if it is to consistently put that knowledge into practice.

Generally speaking, the term “software security” is used to denote designing, building, testing and deploying software so as to reduce vulnerabilities and to ensure the software’s proper function when under malicious attack. Vulnerabilities are a special subset of software defects that an attacker can leverage against the software and its supporting systems. A coding error that does not offer a hacker an opportunity to attack a security boundary is not a vulnerability; it may affect software reliability or performance, but it does not compromise the software’s security. Vulnerabilities can be introduced at any phase of the software development cycle—indeed, in the case of certain design flaws, before the coding even begins. Loosely speaking, vulnerabilities can be divided into two categories: implementation-level vulnerabilities (bugs) and design-level vulnerabilities (flaws). Relative to bugs, which tend to be localized coding errors that may be detected by automated testing tools, design flaws exist at a deeper architectural level and can be more difficult to find and fix.

The case that software insecurity is inherent, to some degree, is a strong one. Security is not an add-on or a plug-in that software makers can simply tack onto the end of the development cycle. According to the late Watts Humphrey, known as the “Father of Software Quality,” “Every developer must view quality as a personal responsibility and strive to make every product element defect free.” James Whittaker, former Google engineering director, offers similar thoughts in his book detailing the process by which the world’s search giant tests software: “Quality is not equal to test. Quality is achieved by putting development and testing into a blender and mixing them until one is indistinguishable from the other.” Responsible software, in other words, is software with security baked into every aspect of its development.

As a general matter, we expect technology to improve in quality over time, as measured in functionality and safety. But the quality of software, computer engineers argue, has followed no such trajectory. This is not wholly attributable to software vendors’ lack of legal liability; the penetrate-and-patch cycle that is standard to the software industry today is a security nightmare born of characteristics particular to software. Gary McGraw, among the best-known authorities in the field, attributes software’s growing security problems to what he terms the “trinity of trouble”: connectivity, extensibility and complexity.
To this list, let’s add a fourth commonly-cited concern, that of software “monoculture.” Together, all of these factors help explain at a systemic level what makes software security difficult to measure, and difficult to achieve. Let’s consider each briefly.

First, ever-increasing connectivity between computers makes all systems connected to the Internet increasingly vulnerable to software-based attacks. McGraw notes that this basic principle is exacerbated by the fact that old platforms not originally designed for the Internet are being pushed online, despite not supporting security protocols or standard plug-ins for authentication and authorization.

Second, an extensible system is one that supports updates and extensions and thereby allows functionality to evolve incrementally. Web browsers, for example, support plug-ins that enable users to install extensions for new document types. Extensibility is attractive for purposes of increasing functionality, but also makes it difficult to keep the constantly-adapting system free of software vulnerabilities.

Third, a 2009 report commissioned to identify and address risks associated with the increasing complexity of NASA flight software defines complexity simply: “how hard something is to understand or verify” at all levels from software design to software development to software testing. Software systems are growing exponentially in size and complexity, which makes vulnerabilities unavoidable. Carnegie Mellon University’s CyLab Sustainable Computing Consortium estimates that commercial software contains 20 to 30 bugs for every 1,000 lines of code—and Windows XP contains at least 40 million lines of code. At that defect rate, a single operating system would harbor on the order of a million coding errors. The problems do not end with the mind-boggling math. The complexity of individual software systems creates potential problems for interactions among many different applications. For example, earlier this year, Microsoft was forced to issue instructions for uninstalling its latest security update when unexpected interactions between the update and certain third-party software rendered some users’ computers unbootable.

Finally, there are the dangers associated with monoculture—the notion that system uniformity predisposes users to catastrophic attacks. As examples of the dangers of monoculture, security experts point to the rot that devastated genetically identical potato plants during the Irish Potato Famine and the boll weevil infestation that destroyed cotton crops in the American South in the early twentieth century. As noted by Dan Geer in a famous paper that, as lore has it, got him fired from a Microsoft affiliate: “The security situation is deteriorating, and that deterioration compounds when nearly all computers in the hands of end users rely on a single operating system subject to the same vulnerabilities the world over.”

In other words, to a great extent, software is different from other goods and services. And the notion of perfectly secure software almost certainly is a white whale. But as far as the liability discussion is concerned, it’s also a bit of a red herring. Software does not need to be flawless to be much safer than the code churned out today. For starters, software vulnerabilities are not all created equal, and they do not all pose the same risk. Based on data from 40 million security scans, the cloud security company Qualys found that just 10 percent of vulnerabilities are responsible for 90 percent of all cybersecurity exposures.
Not only are some vulnerabilities doing an outsize share of the work in creating cybersecurity problems, but many are, from a technical standpoint, inexcusable. Only systemic dysfunction can explain why the buffer overflow, a low-level and entirely preventable bug, remains among the most pervasive and critical software vulnerabilities out there. That software security is an unusually difficult pursuit, and that the current incentive structure prevents software makers from consistently putting what they know about security into practice, together weigh in favor of holding software makers accountable for at least some vulnerabilities. The law regularly imposes discipline on other industries that would otherwise be lured toward practices both easy and lucrative. An intelligently designed liability regime should be understood as a way to protect software from itself.

Ironically, software is most insecure where the stakes are highest. For all the progress that traditional software providers have made in creating more secure applications, experts say that embedded device manufacturers, responsible for producing everything from medical devices to industrial control systems, are years behind in secure system development. The problem is not that the industry has proven unable to develop a disciplined approach to designing secure software. The problem is that this discipline is too often optional. For example, commercial aircraft are legally required to meet rigorous software safety requisites established by the avionics industry and outlined in a certification document known as DO-178C. Yet not all life-critical systems—indeed, not even all aircraft—are required to comply with such baselines. Software for unmanned aerial vehicles (UAVs) need not meet the DO-178C standard. The discrepancy is hard to justify. As Robert Dewar, president and CEO of the commercial software company AdaCore, put it in an interview last year: “All engineers need to adopt the ‘failure-is-not-an-option’ attitude that is necessary for producing reliable, certified software. UAVs require at least as much care as commercial avionics applications.” History says he has a point. In 2010, a software glitch caused U.S. Navy operators to briefly lose control of a drone, which wandered into restricted Washington airspace before they were able to regain access.

Similarly, some critical infrastructure sectors must meet mandatory cybersecurity standards as defined by federal law, or risk civil monetary penalties. But others not legally bound are instead left to drown in voluntary cybersecurity guidance. In 2011, the U.S. Government Accountability Office analyzed the extent to which seven critical infrastructure sectors issued and adhered to such guidance. That report was appropriately titled: “Cybersecurity Guidance Is Available, but More Can Be Done to Promote Its Use.”

In 1992, Ward Cunningham, the programmer who gave the world its first wiki, introduced what would become popularly known as “technical debt” to describe the long-term costs of cutting corners when developing code. He wrote: “Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite. Objects make the cost of this transaction tolerable. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt.” In 2010, Gartner, the Connecticut-based information technology research firm, projected that technical debt worldwide could hit $1 trillion by 2015. That number is all the more disturbing when you consider the fact that technical debt is unlike financial debt in one crucial sense: private companies are not incurring the costs and risks associated with their choices. You are. The goal of a new cyber liability regime should be to change that—to create an incentive to lower technical debt and thereby add discipline to a confusing, inconsistent and fundamentally irrational default regime, under which software makers call the shots and users pay for them.
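The buffer overflow mentioned above is worth pausing on, because it illustrates both what an implementation-level vulnerability is and how cheaply many of them could be prevented. The sketch below is a minimal, hypothetical C example; the function names and the 16-byte buffer are illustrative inventions, not drawn from any real product. The first version copies input into a fixed-size buffer with no length check, the kind of defect an attacker can leverage to corrupt memory; the second performs the same task with a bounded copy.

    #include <stdio.h>
    #include <string.h>

    /* Vulnerable version: copies caller-supplied input into a fixed-size
     * stack buffer without checking its length. Any input longer than 15
     * characters overruns the buffer and corrupts adjacent memory, which
     * is the raw material of classic exploits. */
    static void greet_unsafe(const char *name) {
        char buf[16];
        strcpy(buf, name);                      /* no bounds check: the bug */
        printf("Hello, %s\n", buf);
    }

    /* Fixed version: the copy is bounded by the size of the destination
     * buffer and the result is always NUL-terminated. Same functionality,
     * no overflow. */
    static void greet_safe(const char *name) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);  /* bounded copy */
        printf("Hello, %s\n", buf);
    }

    int main(int argc, char **argv) {
        const char *input = (argc > 1) ? argv[1] : "world";
        greet_safe(input);    /* safe for input of any length (truncates) */
        greet_unsafe(input);  /* undefined behavior if input exceeds 15 chars */
        return 0;
    }

Compilers, static analysis tools and safer library functions have flagged exactly this pattern for decades, which is why its persistence reads less like a technical inevitability than like the incentive problem described above.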

III. The Typhoid Mary Model of Cyber User Hygiene

In the early twentieth century, typhoid struck mostly poor people. So it was odd when, in the summer of 1906, six of eleven people in the wealthy household of bank president Charles Henry Warren fell ill with the disease. Hired to investigate the source of the scourge, “sanitary engineer” George Soper followed the breadcrumbs to the Warren family’s new, and recently-missing, cook. Piecing together her history, he learned that typhoid outbreaks had trailed Mary Mallon from house to house for a decade. Mallon was a good cook with a tragic flaw: she did not wash her hands. This particular combination of talent and defect proved disastrous for her many patrons, since Mallon, later dubbed Typhoid Mary, was a rare, seemingly-healthy carrier of the fecal-oral bacterium Salmonella typhi. Eventually Mallon was apprehended, forcibly quarantined for three years and released on the condition that she would cease preparing food. But unconvinced that she was anything but hale, she took on a series of aliases and began to cook, and inadvertently kill, again.

To the modern reader, Mallon’s denial of biology in the face of evidence is baffling—and criminal. But her conduct is less astonishing when translated to another context: the Internet. Our collective behavior as Internet and software users is remarkably like Mallon’s. End users are known to rely on easy-to-guess passwords, to unknowingly execute malware, and to neglect to install critical software patches in a timely fashion. We do all this despite being told that these behaviors have costs, perhaps to ourselves, perhaps to others.

But if code creates real hazards for people and businesses, shouldn’t that eventually generate a market for more secure code? This is the simple and seductive argument of those who oppose liability for makers of insecure software: just leave the quality of code to the market to determine. The free market argument appears most often, and in greater detail, as a counter to proposals for internet service provider liability, but its logic operates similarly as opposition to holding software makers accountable for shipping vulnerability-ridden products. At its crux, this logic assumes that when it comes to society’s cybersecurity needs, users can and should be the ones pulling the levers. Here is how Jim Harper of the libertarian Cato Institute put the point back in 2005: “On the margin, pushing disproportionate liability onto ISPs would erode Internet users’ focus on self-awareness and self-help.” Moreover, Harper noted, such a move would “suppress” what is “a well-developing and diverse market for Internet hygiene services.”

In the software security context, this argument boils down to two parts: first, that patching practices and antivirus products have some handle on cybersecurity problems; and second, that shifting liability onto entities other than the user would interfere with the market’s ability to generate its own remedies. It’s an argument not unlike contending that in 1906 the market was equipped to handle the risk posed by Mallon—that rather than being quarantined, she should have been allowed to cook because New York families could develop good screening techniques for identifying infected food workers. Security experts have written tomes on why monthly patch rollouts and steadily proliferating antivirus options do not collectively constitute a viable security solution to the problem of insecure code. But more can be said about the nature of this inadequacy, which traces back to the inadequacy of users.
Consumers of “Internet hygiene services” are ultimately as ill-equipped to bear the burden of shaping the market to minimize software security risks as Mallon’s employers were in controlling the spread of typhoid. The analogy applies on two levels, for as users we play the role of the victims—the New Yorkers who hired Typhoid Mary—but in important respects we also play the role of Mary herself. Three features make Typhoid Mary a relevant analogy for the modern software user, and shed light on why relying on users to make responsible cyber hygiene decisions cannot make for a responsible national cybersecurity policy.

First, there is user apathy. The companies that produce buggy code are not alone in escaping the ramifications of their choices. Like Mallon, who remained healthy even as her patrons fell sick around her, users are not forced to suffer the full consequences of their personal use of buggy software or their bad security practices. This is a classic problem of what economists call negative externalities. And negative externalities are exacerbated by the fact that malware creators have gotten smarter about taking advantage of them. Unlike the viruses and worms of yesteryear, which would typically disrupt the operation of the infected machine in a noticeable fashion, modern malware tends to secrete itself onto a machine and use the host to attack third parties. Experts estimate that 10 percent of U.S. computers have been infected and co-opted for remote exploitation by herders of sprawling, spam-spewing botnets. Botnets, increasingly the tool of choice for cybercriminals as a consequence of their inherent versatility, are made up of vast numbers of infected computers that, unbeknownst to their owners, operate in concert to distribute malicious code, disrupt Internet traffic or steal sensitive user data. In 2010, Microsoft reported that more than 2.2 million PCs in the U.S. had been hijacked by bot herders. In June of this year, Microsoft’s Digital Crimes Unit worked with the FBI and the U.S. Marshals Service to take down more than 1,400 Citadel botnets, responsible for infecting an estimated 5 million computers worldwide and stealing $500 million from consumer and business bank accounts over 18 months.

Yet despite the continuing rise of botnets, many people lack reason to truly care that their computers are infected, because being part of a botnet does not especially harm them. In fact, people are on average quite unaware of how “pervasive and pernicious” the botnet threat is, and remain oblivious when their own systems have been co-opted.

This brings us to a second connection between Typhoid Mary and the computer user: ignorance. Users, commonly described as the weakest link in the security chain, generally lack the technical background to understand what is going on under the hoods of the various high-tech gadgets that make their worlds go round. So just as Mallon’s ignorance as to the science of the spread of a pathogen made it all the easier for her to skip the soap and continue the cooking, our lack of understanding when it comes to the mechanics of cyber risks lends itself to poor cybersecurity hygiene, even as our reliance on the Internet—and our consequent risk—increases steadily. Even the abstract knowledge that the internet is teeming with malicious activity does not seem to translate into an appropriate awareness of personal risk.
In 2011, McAfee conducted a global study that showed that, on average, consumers valued their digital assets at $37,438—and that more than a third of those consumers failed to institute protections across all of their devices. Research conducted within other industries suggests that consumers tend to practice a kind of personal exceptionalism, believing they are less vulnerable to risks and less likely to be harmed by products than are others. As one security researcher points out, “[i]t stands to reason that any computer user has the preset belief that they are at less risk of a computer vulnerability than others.” And here’s the kicker: users do not necessarily exercise greater online discretion even when they have personally experienced an adverse event.

Technological illiteracy no doubt contributes to a litany of bad security practices. For example, in 2012, a Skype-commissioned survey of some 350,000 individuals revealed that 40 percent of adults do not update their software when prompted, and about a quarter skipped the updates because they did not understand the benefits. Users do not promptly patch software even when companies make such patches available in a timely fashion; something like 90 percent of successful exploits are attacks on unpatched systems. You might expect that users would be willing to deploy at least the most urgent fixes promptly. But no, numerous studies have confirmed the widely held belief that users are extremely slow about deploying security fixes, even in the case of critical vulnerabilities. Indeed, the most infamous worms and viruses have exploited vulnerabilities for which patches were readily available. These include the Code Red worm in 2001, which caused an estimated $1.2 billion in network damage, and the SQL Slammer in 2003, an even faster-spreading worm that completely shut down the Internet in South Korea and led to outages and slowdowns throughout Asia.

Nothing better showcases the problems with dumping the burden of improving cybersecurity on the party with the least technical know-how to accomplish this than the emergence of one distinct criminal enterprise: fake antivirus software. The basic premise of the so-called “rogue” antivirus application is simple: feed on users’ fear of malware to infect computers with malware. The scam sends an alert message to the user, offering a free (fake) scan and demanding a credit card number in exchange for removing the supposed infections. Researchers at Google recently conducted an analysis of 240 million web pages over 13 months and discovered that fake antivirus software accounts for 15 percent of all malware on the web and for 50 percent of malware distributed through advertisements. The problem is growing, both in absolute terms and relative to other malware. Put simply: we users are fools, and fools are easy to exploit.

There is a third factor keeping end-user liability from even approaching viability as a path toward better software security or better cybersecurity generally, and that’s limited market power. Again, Typhoid Mary offers a useful analog. Under pressure to stop handling food, Mallon briefly attempted other occupations—taking a job, for example, as a laundress. Unfortunately, nothing paid as well for a woman of her station as did cooking. So cook she did. Like Typhoid Mary, software users are hobbled by the limits of their market power.
In an industry structured to reward fast shipping and eventual patching, software makers face no consequences for even knowingly shipping vulnerability-ridden products. Meanwhile, users lack the ability to determine the quality of proprietary software until it has become a standard. One commentator describes the nightmare simply: “The standardization process interacts with the unfortunate fact that latent software security defects tend to remain hidden until after software has become popular, and consequently, such defects play no role in the competition to set standards.” Think about it. If you are dissatisfied with the security of your software, what are your options? Can you really afford to stop cooking altogether?

As a nation of modern-day Typhoid Marys, we pose a greater threat to the cyber ecosystem in which we operate than to ourselves. But unlike Mary, we cannot all be quarantined on an island next to Rikers. Our fate is just the opposite—to be increasingly interconnected, and increasingly exposed. So a smart cybersecurity policy has to be one that encourages cyber hygiene among users without mistaking it for an alternative to creating real demand for better security from software makers. Software makers are distinctly unlike Typhoid Mary in that they have the knowledge and the capability to improve the security environment. They resemble Mallon in one respect, and one respect only: they lack adequate incentive to change their habits and have duly shunted the risks associated with bad code off on others. A nuanced software liability regime—one that holds software makers accountable for unacceptably flawed products as well as their negligent or reckless marketing—could correct this. It doesn’t take a sanitary engineer to understand that.

IV. The Sad State of Cyber Liability Law

Software license agreements are typically crammed with boilerplate language freeing the software provider from virtually all forms of liability while binding the commercial user to severe use restrictions. Unhappy with that? Too bad. Anyone who has ever installed software after “consenting” to the terms of the accompanying clickwrap, shrinkwrap or browsewrap understands that the disgruntled user has exactly two choices when it comes to mass market license agreements: take it or leave it. Software providers typically shunt all the risks associated with their products onto users through these license agreements, which the courts have generally treated as enforceable contracts. Think of contracts as a form of private law-making—the parties agree to impose on themselves obligations not otherwise dictated by the law. Frustrated theorists have looked outside of the contract realm for ways to hold software providers accountable for the harms that users sustain as a result of insecure code. Consumer protection laws would seem to offer one narrow avenue for redress. Alternatively, users have filed suit for compensation on tort grounds, alleging negligence on the part of the software provider or product defect. Recognizing the continuing failure of contract law to provide software users meaningful remedies for harms caused by insecure code, as well as the challenges associated with bringing a successful tort claim under the current law, Professors Michael Rustad and Thomas Koenig have gone so far as to propose enacting a statute to establish an entirely new category of tort—“the tort of negligent enablement of cybercrime.” Software license agreements have become such a bad joke for software users that it’s hard to believe that once upon a time, it looked like users might be able to leverage contract principles to their advantage. Specifically, commentators speculated that the contract-making principles embodied in the Uniform Commercial Code (UCC)—a set of model laws adopted at least partially by all the states—could be used to “pierc[e] the vendor's boilerplate” and create a legal framework that would equally benefit vendors and users, licensors and licensees. As one believer declared back in 1988, “[B]ecause fairness and reasonableness are fundamental in the Code, application of the UCC would benefit parties unfamiliar with its provisions.” Another commentator predicted as early as 1985: “The courts have adequate means to protect software vendees from unconscionable contract provisions and the UCC makes requirements for effective disclaimers of warranty clear, so that the UCC will adequately protect software vendees and will not serve as a vehicle for manufacturers to limit their liability." Unfortunately, the UCC has served as just that: a liability-limitation vehicle. As one critic put it almost two decades ago, treating software licenses as sales governed by the UCC “creates a legal fiction which—contrary to the general intent of the UCC—places the purchaser at a severe disadvantage vis-à-vis the vendor.” This is because the UCC is built on freedom-to-contract principles that assume roughly equal bargaining power between the buyer and the seller. Since roughly the mid-1990s, the courts have accepted that operating assumption and allowed software providers to contract away responsibility for the deficiencies of their products. 
Judicial adherence to upholding the terms of the standard software license agreement has prompted Douglas Phillips, general counsel of Promontory Interfinancial Network, to dub it the “legislative license.” In other words, thanks to a long line of court decisions, "the law of the software license has come largely to be defined by each software license itself." UCC freedom-to-contract principles serve as the pretext by which courts are able to uphold the liability disclaimers and limitations on remedies found in all commercial software licensing agreements. But this is not the end of the story. Other factors help explain why, in one high-profile case after another, software users alleging defects and security breaches get their cases thrown out of court. These factors are important insofar as they offer insight into how the courts understand code—and suggest that the grounds on which courts construe the rules of contract law in favor of software providers would similarly forestall user attempts to impose liability on providers through existing consumer protection laws or through claims sounding in tort. Indeed, software liability is unlikely to get off the ground without the help of legislation or regulation specifically designed to impose certain duties on software providers. At least three factors other than the disclaimers and limitations crammed into the standard license agreement prevent users from seeking compensation when they are harmed by defective software. To start, much software is free. This is a problem under contract law because courts will not hold software providers liable for harms brought about for products or services for which users did not offer some form of payment—or what lawyers call “consideration.” This is the basic rule underlying Bruce Schneier’s observation that “[f]ree software wouldn't fall under a liability regime because the writer and the user have no business relationship; they are not seller and buyer.” Schneier is correct—as long as we’re talking about a private ordering regime. A different legal framework, however, might make for a different rule. For example, providers of free software generate revenue not by extracting money from the users, but rather by extracting data that they are then able to monetize. A statute that creates a duty for software providers to institute safeguards to secure this data or restrict its use might allow users to bring suit in the event of a security breach under tort theories of negligence or misrepresentation. But in the absence of such a statute, the fact that much software and many Internet services are free will remain a sticking point for users seeking compensation for security-related injuries. Last year the social networking service LinkedIn was hit with a high-profile class action suit after hackers breached the company server and posted 6.5 million hashes corresponding to LinkedIn accounts on a forum. Sixty percent of these hashes were later cracked. The plaintiffs alleged that LinkedIn had failed to utilize industry standard protocols and technology to protect its customers' personally identifiable information, in violation of its own User Agreement and Privacy Policy. A federal court in California threw out the case this spring in part on the grounds that the policy was the same for users of the free and premium versions of the service.  
Specifically, the court found that the complaint “fails to sufficiently allege that Plaintiffs actually provided consideration for the security services which they claim were not provided.” The fact that popular Web applications are often free has also proven problematic for users attempting to state a claim for harms stemming from a security breach under existing consumer protection laws. In 2011, in lawsuits filed against Facebook and against Apple for their policies of sharing user data with third parties, two more federal court judges in California ruled that consumer protection laws did not extend to the users of free services. In his order dismissing the Facebook case, Chief Judge James Ware of the U.S. District Court for the Northern District of California wrote, “[A] plaintiff who is a consumer of certain services (i.e., who “paid fees” for those services) may state a claim under certain California consumer protection statutes when a company, in violation of its own policies, discloses personal information about its consumers to the public . . . . Here, by contrast, Plaintiffs do not allege that they paid fees for Defendant's services.” Here is a second reason software providers tend to prevail under a private-ordering regime, and remain immune even when users bring suit under various tort theories: the courts are resistant to finding an implied warranty of merchantability with respect to security for software products and services that they know cannot be made vulnerability-free. That is, courts tend to treat certain user security expectations as inherently unreasonable. For example, in 2011, several banks sued the payment transaction company that had been holding their customers’ data when it suffered a massive security breach. A Texas federal court rejected the suit, reasoning, “To the extent that the Financial Institution Plaintiffs argue that [the company’s] statements and conduct amounted to a guarantee of absolute data security, reliance on that statement would be unreasonable as a matter of law.” In rejecting the plaintiffs’ claim, the court relied on the logic of yet another court decision, which declared that “in today's known world of sophisticated hackers, data theft, software glitches, and computer viruses, a jury could not reasonably find an implied merchant commitment against every intrusion under any circumstances whatsoever.” Note that this line of reasoning once got traction in the automobile context. Evans v. General Motors Corp. was a Seventh Circuit case in which the plaintiff alleged that General Motors had been negligent in designing its 1961 Chevrolet station wagon without the perimeter frame rails that were being used in many other cars to protect occupants during a side-impact collision. The Evans court rejected the claim on the grounds that "[a] manufacturer is not under a duty to make his automobile accident-proof or foolproof.” As one commentator pointed out, the court exaggerated the plaintiff’s claim to immunize the manufacturer from liability. Two years later, the Eighth Circuit rejected this formulation of the claims in the landmark case Larsen v. General Motors Corp., in which the plaintiff alleged negligent design based on the displacement of the steering shaft in the Chevrolet Corvair. 
Specifically, the Larsen court rejected General Motors’ attempt to frame the issue as one contingent on determining whether it had the duty to produce a crash-proof car, relying instead on the idea that it was possible for General Motors to have designed a vehicle that would minimize the effect of accidents. Similar standards based on industry best practices could be used to impose liability in the software context, if courts conceived of software as a product that could be designed to minimize, though not eliminate, security vulnerabilities. But the judiciary’s lack of technical expertise and the inherent complexity of software have long prevented the courts from making this leap. In a case dating back to 1986, a federal bankruptcy court declined to enforce the implied warranty of merchantability where a DOS-based computer that represented itself as being Apple-compatible failed to run Apple software. Noting that Apple sells thousands of software programs, the court declared, “We simply cannot determine the extent of the incompatibility and on that failure of proof we conclude that there has been no breach of an implied warranty of merchantability.” The fact that software users have been unsuccessful in asserting breach of implied warranty bodes badly, in turn, for their ability to bring what amounts to the “conceptually indistinguishable” tort claim for negligence against the software maker. A third factor suggests that courts will continue construing software license agreements—and, as it turns out, tort actions—in favor of software providers: the idea that hackers, not providers, are singularly responsible for security breaches. Last year, a California federal court rejected the claim that Sony had misrepresented the quality of its network security where Sony's privacy policy had stated that its security was not perfect, and moreover also rejected plaintiffs' claims of unfair business practices, since Sony did not benefit financially from the third-party data breach. The court’s rejection of the unfair business practice claim is noteworthy in that it suggests a narrow view of what constitutes financial benefit. That is, the court reasons that software providers gain nothing when malicious actors bring about security breaches, thereby declining to take an expansive view of the gains that software vendors (unjustly) reap by engaging in easy, shoddy software development and shipping practices that in turn contribute to security vulnerabilities and security breaches. This cramped focus on the role of the hacker in executing the exploit and the refusal to consider the role of the software maker in creating an environment susceptible to exploit similarly present a challenge for any attempt to bring basic tort claims. Negligence is grounds for a civil lawsuit where the plaintiff is able to establish that the defendant owed a duty, breached that duty, caused harm as a result and should pay damages to make Humpty Dumpty whole again. Establishing the causation element in that chain is difficult, if not impossible, so long as courts choose to fixate on the hacker, not the environment-creator, when assessing who brought about the injury in question. In sum, it is significant that buttressing the courts’ interpretation of software license agreements are ideas that similarly pose problems for holding software providers liable under consumer protection statutes or under tort theories. 
But the idea that, in the absence of special legislation or regulation, tort could be a viable avenue for pursuing liability for software providers runs up against a much bigger threshold problem. That is the economic loss doctrine. Broadly speaking, the doctrine restricts tort liability to cases involving bodily injury or damage to other property. This is a special problem for tort claims related to software vulnerabilities, since most security breaches give rise to purely economic losses or data compromises. Thanks to the economic loss rule, courts have long been spared the uncomfortable task of actually declaring that software vendors have no duty to institute reasonable measures to develop and maintain secure software. For example, back in 2000, the gas and oil company Hou-Tex, Inc. alleged that a software program company had breached both its duty to inform its customer about a bug in the software and its duty to fix the problem. But the Texas state court held that the economic loss rule precluded Hou-Tex's negligence claims against the software company. In a 2010 case, a New York federal judge made no mention of a potential duty, and instead simply dismissed plaintiffs’ claims of negligence, strict liability and gross negligence for damages stemming from defects in the contracted-for software, as barred by New York's economic loss doctrine. The economic loss doctrine has public policy roots. As the Supreme Court explained in its landmark 1986 decision East River Steamship Corp. v. Transamerica Delaval, Inc., tort law is the appropriate vehicle for addressing unexpectedly dangerous and defective products, since in the case of unexpected personal injury or property damage, the manufacturer is best positioned to bear the cost of and to price the product to spread the loss. Pure financial loss, however, is properly the domain of contract law, particularly the law of warranty, because the rule prompts the parties to set the terms of the bargain. Where the consumer agrees to pay less, the manufacturer can restrict its liability by disclaiming warranties or limiting remedies. In short, the economic loss doctrine is premised on the idea that, as declared by the East River Steamship court, “a commercial situation generally does not involve large disparities in bargaining power . . . [thus] we see no reason to intrude into the parties' allocation of the risk.” In other words, the rule does not account for the asymmetric bargaining power between software vendors and end-users—which is pretty vast. And so after very briefly touring some of the problems with the current private-ordering regime, and having learned (in part) why tort law won’t work either, we return, full circle, to the inadequacies of contract law and the UCC in allocating liability between software vendors and users. The failure of software users to prevail under contract, tort, or consumer protection schemes when it comes to getting compensated for bad code suggests that in the absence of specific legislation or regulation—for example, restricting software vendors’ ability to rely on blanket disclaimers—software users will have little success in holding vendors accountable for vulnerabilities. To put it simply, the laws on the books must change—or the quality of our software will not.

V. Software Liability Is a Complex Machine, Not a Big Red Button

Noted computer security expert Daniel Geer thinks you should bear the costs of the insecure code you use. It’s nothing personal. He acknowledges that holding the end user responsible for being the “unwitting accomplice to constant crime” is far from a perfect cybersecurity strategy. But he concludes that for the time being, it is the least “worst” option:
If you say that it is the responsibility of Internet Service Providers (ISPs)—that “clean pipes” argument—then you are flatly giving up on not having your traffic inspected at a fine level of detail. If you say that it is the software manufacturer’s responsibility, we will soon need the digital equivalent of the Food and Drug Administration to set standards for efficacy and safety. If you say that it is the government’s responsibility, then the mythical Information Superhighway Driver’s License must soon follow. To my mind, personal responsibility for Internet safety is the worst choice, except for all the others.
Geer’s concern is well-founded: imposing security responsibilities on entities other than the end-user will no doubt abrogate some of the freedom and functionality that end users currently enjoy as consumers. This is true both with respect to software and Internet access specifically and with respect to computer information systems more generally. But Geer’s conclusions are unnecessarily stark, in part because they assume that security is a pie that cannot be intelligently shared—a conclusion that, it should be noted, we would never be inclined to accept in our offline lives. Our physical security may ultimately be our burden to bear, but we expect fast food chains not to poison us, local police to do their rounds and our neighbors to call 9-1-1 if they see suspicious activity. Some invisible web of law, professional obligations and communal norms colludes at all times to keep us alive and our property in our possession.

Would vesting ISPs with circumscribed security responsibilities—such as responding to or recording highly unusual traffic patterns that suggest an ongoing DDoS attack—require end-users to “flatly” relinquish data privacy? Many ISPs already implement limited security mechanisms, and carefully designed public-private data-sharing restrictions could go a long way toward addressing concerns about improper use of subscriber information. Similarly, holding software providers accountable for their code need not entail exposing software providers to lawsuits for any and all vulnerabilities found in their products. Liability critics battle a straw man when they make arguments like this one, from computer security authority Roger Grimes: “If all software is imperfect and carries security bugs, that means that all software vendors—from one-person shops to global conglomerate corporations—would be liable for unintentional mistakes.”

Liability is a weapon far more nuanced than its critics believe. Geer and Grimes see liability as a big red button—a kind of nuclear option, to be avoided at all costs. Meanwhile, proponents understand liability as a complex machine ideally outfitted with a number of smart levers. Consider: software’s functions range from trivial to critical; security standards can be imposed at the development or testing stage, in the form of responsible patching practices, or through obligations for timely disclosure of vulnerabilities or breaches; and the code itself might be open source or proprietary, and in either case might be free or paid for. An effective liability regime is one that takes these many factors into account when it comes to designing rules, creating duties or imposing standards.

For starters, it would make no sense to hold all software providers to the same duty of care and the same liability for breach of that duty, irrespective of the software’s intended use and associated harms. As Bruce Schneier observed back in 2005, “Not all software needs to be built under the overview of a licensed software engineer . . . [but] commercial aircraft flight control software clearly requires certification of the responsible engineer.” All software embedded in life-critical systems or critical infrastructure should be consistently made subject to more rigorous standards than standard commercial software, and the manufacturers should be held liable either for harms caused by products that deviate from those standards, or for flaws that are not remediated in a timely manner.
Imposing this kind of liability will require restricting the disclaimers of warranty and limitations on remedies found in standard software license agreements. As far as recommendations go, this is a familiar rerun. In a 2007 report, the House of Lords Science and Technology Committee recommended that the European Union institute “a comprehensive framework of vendor liability and consumer protection,” one that imposes liability on software and hardware manufacturers “notwithstanding end user licensing agreements, in circumstances where negligence can be demonstrated.” The tricky part, of course, is putting into place a system through which negligence can be reliably demonstrated.

As a general principle, insofar as security is a process rather than an end state, negligence should be assessed based on failure to adhere to established security standards rather than on absolutes like the number of vulnerabilities in the software itself. Indeed, raw vulnerability counts may reveal more about a program’s popularity than about its inherent insecurity. Any particular vulnerability might not prove a software program unacceptably defective—but an examination of the general processes and precautions through which the software was produced just might. Laws establishing even modest duties on the part of software makers—and subjecting them either to private suit or government fine in the event of harms resulting from breach—could help push the industry to develop and implement best practices. These practices could in turn constitute an affirmative defense against negligence claims. Best practices might range from periodic independent security audits to participation in what David Rice, Apple’s global director of security, describes as a ratings system for software security, an analogue to the National Highway Traffic Safety Administration’s rating system for automobile safety.

Legislation already on the books that penalizes companies for failing to safeguard sensitive user information offers a useful model for imposing narrowly circumscribed security duties on software providers. For example, in 2006 the Indiana legislature enacted a statute that requires the owner of any database containing personal electronic data to disclose a security breach to potentially affected consumers, but does not require any other affirmative act. The terms of the statute are decidedly narrow: it gives the state attorney general enforcement powers but affords the affected customer no private right of action against the database owner, and it imposes no duty to compensate affected individuals for inconvenience or potential credit-related harms in the wake of the breach.

There are other factors to consider in calibrating a liability system. Liability exposure should be to some extent contingent, for example, on the availability of the source code. It is difficult to imagine a good argument for holding the contributors to an open-source software project liable for their code. Whether or not you believe that the process by which open-source software evolves constitutes its own security mechanism, à la Linus’s Law (“given enough eyeballs, all bugs are shallow”), the fact is that open-source software offers users both the cake and the recipe. Users are free to examine the recipe and alter it at will. By offering users access to the source code, open-source software makes users responsible for that code—and unable to recover for harms.
Imposing liability on open-source software is not only an incoherent proposition; it also has problematic First Amendment implications. Code is, after all, more than a good or service. It is also a language and a medium. And clumsily imposed liability rules could place significant and unacceptable burdens on software speech and application-level innovation. In her book on the relationship between Internet architecture and innovation, Barbara van Schewick gives some sense of why we should be wary of stifling liability laws with her description of the nexus between software applications and sheer human potential:
The importance of innovation in applications goes beyond its role in fostering economic growth. The Internet, as a general-purpose technology . . . creates value by enabling users to do the things they want or need to do. Applications are the tools that let users realize this value. For example, the Internet’s political, social or cultural potential—its potential to improve democratic discourse, to facilitate political organization and action, or to provide a decentralized environment for social and cultural interaction in which anyone can participate—is tightly linked to applications that help individuals, groups or organizations do more things or do them more efficiently, and not just in economic contexts but also in social, cultural or political contexts.
All of this applies, of course, to proprietary applications, but the liability calculus for closed-source software should come out differently. When it comes to proprietary applications, responsibility for the security of the code does not lie with users; it remains entirely within the control of a commercial entity. The fact that much proprietary software is “free” should not foreclose liability: a narrowly tailored rule might provide that where users are “paying” for a software product or service with their data, a data breach could be grounds for government-imposed fines or, to the extent individuals sustain harm, private damages.

This is by no means a comprehensive overview of all possible approaches to constructing a software liability regime. It is rather a glimpse of a few of the many levers we can push and pull to turn security from an afterthought into a priority for software makers. Such a change will come with costs, imposed on software makers and redistributed to us, the users. But we should keep in mind that whatever we pay in preventive costs today is low compared with what we could pay in remedial security costs tomorrow. As a matter of routine, we accept inconveniences, costs and risk redistribution in other areas of our lives. Drugs are required to undergo clinical testing, food is inspected and occasionally “administratively detained,” and vaccines are taxed to compensate the small fraction of recipients who suffer an adverse reaction. Restrained measures to police software, too, can be understood as part of a commonsense tradeoff between what is cheap and functional and what is safe and secure.

*          *          *

This series was dedicated to moving past old questions: whether software can be made more secure, and whether providers have the capacity to improve the security of their products. The question is no longer even whether manufacturers—and, by means of price increases and less timely releases, end users—should be compelled to bear the inconvenience and cost of these security enhancements. The question, and the challenge, lies in designing a liability regime that incentivizes software providers to improve the security of their products and compensates those unduly harmed by avoidable security oversights—without crippling the software industry or unacceptably burdening economic development, technological innovation, and free speech. We don’t need a red button. That’s the beauty of the law: it turns out we have plenty of rheostats at our disposal.

Jane Chong is former deputy managing editor of Lawfare. She served as a law clerk on the U.S. Court of Appeals for the Third Circuit and is a graduate of Yale Law School and Duke University.
