Published by The Lawfare Institute
On Nov. 15, 2017, the White House released a charter for the administration’s once-shadowy Vulnerabilities Equities Process (VEP). Advocates initially hailed this as a welcome first step toward reforming the much-criticized VEP. But three years later, as the Trump administration comes to a close, the government has failed to deliver on its promises of greater transparency.
Established by the Obama administration, the VEP outlines the procedure through which the government weighs various considerations in determining when to disclose software vulnerabilities and when to exploit them for law enforcement or foreign intelligence purposes. Disclosure enables the involved company or entity to patch for that vulnerability and protect users’ cybersecurity. Until the charter was released, all the public knew about the VEP came from a blog post written in 2014 by Michael Daniel, then-White House cybersecurity coordinator, and from documents obtained through a Freedom of Information Act request by the Electronic Frontier Foundation.
Why is the VEP so important? Much of cybersecurity can be reduced to a constant race between the security experts trying to discover and patch vulnerabilities, and the attackers seeking to uncover and exploit those same flaws. A vulnerability can manifest as something minor, like a product feature that doesn’t work, or, more concerningly, can enable a criminal entity to steal a user’s private information. On a broader scale, a vulnerability in commonly used software can be leveraged to wreak havoc on entire systems, as in the WannaCry ransomware attacks that used a Windows vulnerability to, among other things, shut down Britain’s National Health Service networks. This risk is compounded by the number of people who use common software programs, popular mobile hardware, and major websites every day.
However, criminals aren’t the only actors seeking to use these vulnerabilities. Governments have conflicting interests when it comes to vulnerability disclosure, which is what the VEP is designed to address. Governments can use software vulnerabilities as defensive, investigative and offensive tools in their cyber activities, but only so long as those vulnerabilities remain unaddressed. Once a company issues a patch or fix for a vulnerability, it is no longer useful to investigators. Although many people don’t install the corresponding updates as often as they should, a vulnerability becomes dramatically less valuable when it is no longer a “zero-day”—an as-yet unknown vulnerability that has not been patched. This creates tension between those in law enforcement and the intelligence community, and those who are more focused on improving general cybersecurity and protecting consumers. Each of these groups has different “equities” in these discussions, some of which directly conflict.
Publication of the charter was something advocates had asked for ever since the VEP was revealed in 2014, when concerns were first raised that the organizations making these decisions about vulnerabilities might be biased against disclosure. Advocates hoped that publication of the charter signaled a new, and welcome, commitment to transparency and openness. But the past three years have been a disappointment in that regard. The public has not seen any of the annual, unclassified report summaries that the VEP charter promised, nor any further disclosures regarding this process.
Advocates are seeking two categories of information: how the process is intended to work and how it is working in practice.
As for the former, advocates in favor of disclosure were concerned about some components of the structure as described in the charter. For example, the roster of government entities that participate in the decision-making process is heavily slanted toward participants that have a deep interest in not disclosing valuable vulnerabilities. Although the Department of Commerce may side with and advance the interests of U.S. companies and consumers, its “vote” is outweighed by the equities of the Office of the Director of National Intelligence, the National Security Agency, the Department of Defense, the Department of Homeland Security and others. Three years ago, my Open Technology Institute colleague Sharon Bradford Franklin and I recommended that the process include ways to account for the security and privacy interests of ordinary consumers—who have a lot to lose from massive software breaches. Civil society advocates were also concerned about the loophole for nondisclosure agreements and memoranda of understanding, which would allow the government to avoid disclosing valuable vulnerabilities by entering into agreements with other entities (as it reportedly has when sourcing hacking tools from gray-market firms).
Although these are serious structural concerns, I was heartened to see that some transparency requirements were written into the charter. I was cautiously optimistic about the requirement for annual reporting to Congress with, at a minimum, an executive summary written at an unclassified level. The charter includes specifics about the reporting period and what statistical data should be included, and it clearly states that if certain categories of changes (like to the list of members) are made to the VEP, then those will be included in the annual report. However, the public has yet to see a single annual report during the past three years and still has no understanding of how the process is working, or whether the charter has ever been more than an aspirational document. The position of White House cybersecurity coordinator, which the charter names as the official responsible for the process, has been abolished by the Trump administration. This is not exactly a public indication of a commitment to the VEP. As far as advocates and the public know, the charter’s transparency and public reporting requirements haven’t been fulfilled.
These failures are representative of the primary concern I raised when the charter was released: that these practices are not effective if they aren’t codified into law. By sidestepping Congress and relying exclusively on executive action, the process is on a perpetually unstable footing and subject to change or revision at any time. There is no mechanism in place to hold those responsible for managing the VEP accountable for their failure to adhere to the charter’s transparency requirements, or potentially to adhere to the terms of the process. I noted in 2017 that the entire process could be eliminated with the stroke of a pen. At this point, the public remains in the dark about whether the administration has followed the VEP process or altered the charter without disclosure. Currently, the only insight into the workings of this process comes from the opaque House and Senate Intelligence Committees. It is critical that the reporting process clarify the number of retained vulnerabilities, because size matters. Dozens? Hundreds? Thousands? The importance of a transparent process directly correlates with the size of the “stockpile,” because a larger stockpile poses a larger threat.
Any retained vulnerabilities, but especially larger numbers of retained vulnerabilities, pose three categories of threats. First, undisclosed vulnerabilities can be exploited by any actor at any time and so pose a threat to the users of that technology. Much as encryption backdoors can give all kinds of actors, not just the “good guys,” access to encrypted information, unpatched vulnerabilities create similar risks. Second, undisclosed vulnerabilities are unlikely to remain within the exclusive control of a government and create an attractive target for hackers. There have been multiple security breaches in which hacking tools held by governments, including the U.S. government, were stored insecurely and stolen by a third party. For example, in 2017 WikiLeaks released a cache of CIA hacking tools onto the internet, which showed the methods by which the agency conducts investigative hacking.
And third, the secretive nature of the VEP program sows distrust among companies about whether the government is disclosing vulnerabilities for repair. Mozilla, for example, has been active in discussions about VEP reform after finding out that a vulnerability in its Firefox browser was exploited by FBI investigators but never reported for patching. Theoretically, by the terms of the VEP, this vulnerability should have been reviewed by the collection of entities included in the charter—something that can’t be verified without transparency and accountability.
When the charter was released, I raised concerns that it might diminish or distract from efforts to codify VEP reform, without creating a stable and accountable process for reviewing vulnerabilities. Although reports of government hacking into consumer technology continue to emerge, and the public has not received any corresponding reports about the VEP’s review process, there is significantly less advocacy around reform coming out of civil society. Prior to the charter’s release, advocates worked closely with allies in companies and in Congress to push for codification. Advocates highlighted the need for VEP reform every time another story came out about major hacks that might have been prevented had the exploited zero-days been reported to the company for patching. Those of us who had been so keenly focused on VEP reform were somewhat placated by a charter that, three years later, may have done little, if anything, to address these concerns.
So what now? The most encouraging avenues toward reforming the way the government handles and discloses vulnerabilities have been two pieces of legislation: the Protecting Our Ability to Counter Hacking (PATCH) Act and the Cyber Vulnerability Disclosure Reporting Act. Both bills were introduced in 2017 and sought to codify VEP reform and reporting requirements. Neither has been passed into law, nor has Congress held any public hearings on either. Civil society and Congress need to reapply pressure to force more transparency about how the VEP has been operating and to introduce mandatory reporting requirements with some sort of public component.
Further, steps need to be taken to codify the VEP. It cannot be effective without trust in the process and accountability for the government’s activities. The risks that unpatched vulnerabilities pose demand no less.