The Securing Open Source Software Act Is Good, but Whatever Happened to Legal Liability?

Chinmayi Sharma, John Speed Meyers, James Howison
Thursday, November 10, 2022, 8:16 AM

The recent introduction of the Securing Open Source Software Act, and its subsequent momentum, has stoked a debate about the true reason for the open source security problem and the merits of different solutions.

(Photo by Blogtrepreneur, https://bit.ly/3WNgmNY; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)

Published by The Lawfare Institute
in Cooperation With
Brookings

Editor's note: A previous version of this article identified the OpenSSL vulnerability as critical. While it was initially announced as such, the vulnerability has been reclassified to high, and the editor has corrected the article to reflect that change.

In December 2021, Log4Shell, a vulnerability in Apache’s open source library Log4j, set the internet on fire. Open source libraries like Log4j are freely and publicly available for anyone to use, modify, and redistribute. One study found that open source comprises nearly three-fourths of the code in the codebases it reviewed. Given open source’s ubiquity, a vulnerability, or exploitable weakness, in the code threatens widespread destruction—Log4Shell proved just that. In September, Sen. Gary Peters (D-Mich.) and ranking member Sen. Rob Portman (R-Ohio), leaders of the Senate Homeland Security Committee, introduced the Securing Open Source Software Act (SOSSA) to address the federal government’s role in responding to the “open source software security” problem. The proposed legislation, which would establish federal responsibilities related to open source software security, was approved quickly by a voice vote with no markup in committee and sent to the Senate.

SOSSA’s momentum has stoked a debate about the true reason for the open source security problem and the merits of different solutions. After briefly explaining the history of this problem and the details of SOSSA, we describe two camps with divergent views on the appropriate role for the government, especially legal liability, in solving the open source security problem.

The Open Source Software Security Problem: What’s Old? What’s New?

After Log4Shell, one might have assumed that it was only a matter of time until Congress acted to fix the open source software security problem. But this outcome (which is still uncertain) was far from a foregone conclusion. Take the case of OpenSSL.

In 2014, the open source cryptographic library OpenSSL, which turns HTTP into HTTPS, was powering two-thirds of the internet’s websites. It then suffered Heartbleed, a devastating security vulnerability. At the time, OpenSSL operated on a shoestring budget of $2,000 and was maintained by only four developers, only one of whom worked on it full time. In the aftermath, experts broadly agreed that a lack of resources was to blame for Heartbleed. Today, however, OpenSSL still receives little more than $10,000 in donations. That makes the recent discovery of a high-severity vulnerability in OpenSSL less surprising. In other words, versions of the open source software security problem have existed for years (if not decades) and, so far, governments have done relatively little.

So, why is SOSSA being proposed now? Though Log4Shell probably precipitated the announcement of SOSSA, the act is not specifically a “counter-Log4Shell” initiative. In fact, SOSSA arguably has little to do with the vulnerability at all, as the volunteer Apache maintainers of the project patched the Log4Shell vulnerability rapidly. More funding or better tools would, arguably, have been unlikely to help. This incident has simply opened the political window for action related to open source software security. It highlights the extent to which society’s most important functions depend on open source. And advocates for reform, including proponents of SOSSA, seized the moment. The last year has seen a groundswell of support for open source and increased awareness of its security problems. In fact, this bill comes on the heels of significant efforts by governments around the world, nonprofits here in the United States, and open source’s corporate beneficiaries.

What’s in SOSSA?

On its face, the bill’s goals are sensible.

In short, SOSSA assigns the director of the Cybersecurity and Infrastructure Security Agency (CISA) three main tasks: (a) publishing a framework that assesses the risk of open source software components, (b) using that framework to assess the security posture of the open source software components on which the federal government relies, and (c) conducting a study that assesses the feasibility of applying that open source software risk framework to one or more critical infrastructure sectors with the help of voluntary industry participants. 

Similarly, the act proposes that the Office of Management and Budget issue guidance to executive agencies on managing and reducing the risk of open source software and also on federal staff contributing to and releasing open source software. In addition, the act proposes that one or more executive agencies establish a pilot open source program office. Open source program offices, a nascent but growing corporate function, govern the open source software activities of an entity. 

Advocates for a Community-Driven Solution 

Most observers can agree that some sort of open source software security problem exists, but there are different schools of thought on the exact problem statement and the solution. The arguably dominant school, expressed via SOSSA, emphasizes the need for a community-driven solution and a limited government role in which the government tends to its own open source software security affairs.

The drafters of SOSSA and its public supporters share a common view on the origin of the modern open source software security problem: insufficient societal investment in the health and security of open source software. SOSSA itself identifies “inconsistent historical investment in open source software security” as a key problem. One prominent open source software contributor has even opined that “open source [software] doesn’t have a security problem, it has a sustainability problem. Don’t blame the maintainers, pay the maintainers.” These advocates believe that, with sufficient and sustained funding, open source software maintainers and allied parties can adopt improved security practices and can build tools that will improve the security of open source software without further government action.

For advocates of a light-touch regulatory approach, SOSSA is a welcome addition to the existing regime of voluntary frameworks, industry incentives through federal contracting mandates, and encouraged, but not mandatory, public-private partnerships. These thinkers see the open source community, not the government, as the best bet for securing open source. While they cautiously welcome government involvement in open source security efforts, they also believe that “government should foster collaboration” and not create “another checklist or reporting mandate that adds more work” because to do so “confuses the desired outcomes of securing critical open-source software.”

In this perspective, the government’s proper role is to tend to its own open source software security problems. This is why the bill focuses on taking inventory of and assessing the government’s and critical infrastructure’s dependence on open source software. Open source program offices can then help manage and improve the relationship between government agencies and that software.

Behind the scenes, many in this camp believe that nothing is scarier than the statement “we’re the government, and we’re here to help.” According to our interviews with open source software advocates, they worry that the burden of government mandates could fall on the already-burdened open source community. To these advocates, this current act strikes the correct light-touch balance. 

That being said, few in the camp would turn down direct funding to the open source community, if made available. They are familiar with Heartbleed and know that without necessary resources, open source cannot secure itself. This is why open source software maintainers, in addition to prominent organizations like the Open Source Security Foundation, have asked of SOSSA: Where’s the money? In this one regard, they aren’t too picky about where the help comes from.

The Legal Liability Perspective

A more critical perspective on SOSSA stems from the view that voluntary interventions are insufficient. Unfortunately, software vendors have little incentive to adopt measures that lack a mandate. Further, software vendors profit from the existence of a vast open source software ecosystem but sometimes fail to contribute to its sustenance. This perspective welcomes the grassroots efforts of the community-driven camp but worries that without the threat of liability, little progress will be made. Simply put, the government’s role must be to change the incentive structure driving industry behavior. This school faults SOSSA for emphasizing a voluntary technical solution while omitting a legal liability perspective.

To members of this camp, recent data tells a sordid story: 30 percent of software applications that use Log4j still use a vulnerable version, nine months after the vulnerability became a global media sensation. If Log4Shell shook the world and vendors still ship insecure versions of the library, proponents of a liability regime argue, there is a serious incentive problem.

The solution, here, is to shift the legal responsibility onto “the least-cost avoider.” Regulation and laws should require commercial vendors, who are in the best position to identify the software components they use, to scan for vulnerabilities regularly and to rapidly update affected components, including open source components. In some instances, it would also be appropriate to require those vendors to proactively distribute updated software to downstream customers. In an analogy popular with this camp, software providers should operate in a legal environment akin to car companies: When a software vulnerability is discovered, software providers should automatically and rapidly patch it in the same way that car manufacturers assume responsibility for deficient parts and routinely perform recalls. A liability regime can be designed in many ways, but, at a minimum, it should take into account the nuances of the software supply chain, including an understanding of which parties are responsible for releasing updates, which parties are responsible for incorporating updates, and which parties are responsible for communicating updates to downstream users.
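In practice, the component scan this camp envisions amounts to comparing a vendor’s dependency inventory against published vulnerability advisories. A minimal sketch in Python of that matching logic, using a hand-rolled, illustrative advisory list (the package names and version ranges below are simplified stand-ins, not a real advisory feed):

```python
# Minimal sketch: flag dependencies whose pinned version falls inside a
# known-vulnerable range. Advisory data is illustrative and simplified.

def parse_version(v: str) -> tuple:
    """Convert a dotted version string like '2.14.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical advisories: (package, first vulnerable version, first fixed version)
ADVISORIES = [
    ("log4j-core", "2.0", "2.17.1"),   # Log4Shell-era range, simplified
    ("openssl", "3.0.0", "3.0.7"),     # 2022 high-severity fix, simplified
]

def vulnerable(package: str, version: str) -> bool:
    """Return True if the pinned version sits in any advisory's vulnerable range."""
    v = parse_version(version)
    for name, first_bad, first_fixed in ADVISORIES:
        if name == package and parse_version(first_bad) <= v < parse_version(first_fixed):
            return True
    return False

# A vendor's dependency inventory, as might be drawn from a software bill of materials
inventory = {"log4j-core": "2.14.1", "openssl": "3.0.7", "requests": "2.28.1"}

flagged = {pkg: ver for pkg, ver in inventory.items() if vulnerable(pkg, ver)}
print(flagged)  # {'log4j-core': '2.14.1'}
```

Real scanners query live databases such as the OSV.dev feed and handle pre-release version strings, but the core obligation the liability camp describes is this comparison, run regularly and followed by an update of anything flagged.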

To be sure, this view of the open source security problem places the blame not on open source software contributors, but on the vendors shipping insecure software, thereby increasing society’s exposure to security vulnerabilities. Contrary to the concerns of those who fear government involvement, members of this camp also want to protect the open source community and support its sustainability. They, too, want the government to support community-driven solutions so that the heavy burden of securing open source is not on the community’s shoulders alone. But they also believe that irresponsible vendors are exacerbating the problem: increasing the burden on the open source community by creating new risks for the community to mitigate. 

Why Two Cultures Could Be One

Fortunately, this divide does not need to be permanent. Open source software advocates should, in our opinion, consider the possibility that the lack of historical investment in open source software security and health stems, in part, from the lack of legal liability that software producers face for shipping software with known vulnerable components. Why invest money or time in improving the security of the open source software components underlying a product if consumers neither know what components are in the final application nor have a legal right to software free of known vulnerabilities?

Should companies face legal liability for shipping insecure software, it’s conceivable that there would be a proliferation of corporate investment in open source software. Software vendors would no longer ignore, or simply skate by on, the security activities of the open source projects that they consume, but would spend time proactively scanning for undiscovered vulnerabilities (to reduce their prevalence in the future) and fixing security bugs quickly (to be able to ship software). Incentives would also drive rational actors to contribute to the open source community, recognizing that solving the problem at the source can be cheaper than finding and rectifying it downstream. Not only would all members of the open source community, including enterprise-supporting organizations like the Linux Foundation, benefit from this arrangement, but the vendors themselves would as well. Spreading the cost of securing open source across its private-sector consumers would reduce the burden on any given company and ameliorate the threat to them all.

Of course, simply raising the threat of software security legal liability has its own logic. When government officials do this, they might be suggesting that the industry ought to invest in “self-regulation” efforts, lest they necessitate the more direct government involvement that many fear. Either way, it is time to seriously consider a liability regime. Legal liability for insecure open source software practices could further the goals of both open source software security cultures, without undercutting a call for community support. In short, it’s time for open source software security advocates and the lawyers to see each other as allies.


Chinmayi Sharma is an Associate Professor at Fordham Law School. Her research and teaching focus on internet governance, platform accountability, cybersecurity, and computer crime/criminal procedure. Before joining academia, Chinmayi worked at Harris, Wiltshire & Grannis LLP, a telecommunications law firm in Washington, D.C., clerked for Chief Judge Michael F. Urbanski of the Western District of Virginia, and co-founded a software development company.
John Speed Meyers is the head of Chainguard Labs within Chainguard. He leads a research team focused on open source software security. He is also a nonresident senior fellow at the Atlantic Council in the Cyber Statecraft Initiative within the Digital Forensic Research Lab. He previously worked at In-Q-Tel and the RAND Corporation.
James Howison is an Associate Professor in the Information School of the University of Texas at Austin. James studies open collaboration, particularly in software development, including open source software development and the development of software in science.