
Rethinking Responsible Disclosure for Cryptocurrency Security

Stewart Baker
Thursday, September 8, 2022, 8:01 AM

Cryptocurrency security really is worse than other digital technologies, and there’s a good chance it always will be.

Several cryptocurrency coins. (Source: QuoteInspector.com)


The Biden administration has pointed, with alarm, to the national security implications of both cybersecurity and cryptocurrency. It’s just a matter of time before the government begins worrying about their intersection—cryptocurrency security. All of the United States’ international adversaries are in the business of exploiting bad cybersecurity, and many of them monetize their exploits using cryptocurrency. There’s nothing more natural for North Korean state hackers, Russian organized crime, or partially privatized cyberspies in China and Iran than to steal cryptocurrency to finance their national security operations. They’ll find an open door, because, as bad as overall cybersecurity is, the security of cryptocurrency is worse.

You only have to follow cryptocurrency news casually to be struck by the size and frequency of cryptocurrency security failures. That’s not your imagination or press bias. Cryptocurrency really does have worse security than other digital technologies, and there’s a good chance it always will.

Here’s why: In other parts of the digital economy, companies quickly patch security flaws, many of which have been found and responsibly disclosed by outside researchers. But as I’ll explain below, the “disclose-and-patch” cycle doesn’t work for cryptocurrency systems. There are ways to make disclose-and-patch work better for cryptocurrencies, but they will require compromises, institutional innovation, and maybe even new laws. That’s a tall order, but until it happens, cryptocurrency security will never match even the low security standard set by other digital technologies.

How Responsible Disclosure Works

Software security flaws are ubiquitous in digital products. Like writers who can’t see their own typos, most coders have trouble seeing how their software can be misused. The security flaws in their work are usually found by others, often years later. Indeed, security researchers are still finding serious holes in Windows today—30 years after it became the world’s dominant operating system.

Companies like Microsoft have improved their products’ security by making peace with those researchers. There was a time when software producers treated independent security research as immoral and maybe illegal. But those days are mostly gone, thanks to rough agreement between the producers and the researchers on the rules of “responsible disclosure.” Under those rules, researchers disclose the bugs they find “responsibly”—that is, only to the company, and in time for it to quietly develop a patch before black hat hackers find and exploit the flaw. Responsible disclosure and patching greatly improve the security of computer systems, which is why most software companies now offer large “bounties” to researchers who find and report security flaws in their products.
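For readers who want the mechanics spelled out, the embargo logic at the heart of responsible disclosure can be sketched in a few lines of Python. This is a hypothetical illustration; the 90-day window is a common industry convention (popularized by programs like Google’s Project Zero), not a universal rule:

```python
from datetime import date, timedelta

# Hypothetical sketch of a coordinated-disclosure embargo. The
# 90-day window is a common industry convention, not a formal rule.
EMBARGO = timedelta(days=90)

def may_publish(reported: date, patched: date | None, today: date) -> bool:
    # The researcher stays quiet until the vendor ships a patch or
    # the embargo clock runs out, whichever comes first.
    if patched is not None:
        return today >= patched
    return today >= reported + EMBARGO

# Patched promptly: publication is fair game after the patch ships.
assert may_publish(date(2022, 1, 1), date(2022, 2, 15), date(2022, 3, 1))
# Unpatched: the researcher must wait out the 90-day embargo.
assert not may_publish(date(2022, 1, 1), None, date(2022, 2, 1))
```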

That hasn’t exactly brought about a golden age of cybersecurity, but we’d be in much worse shape without the continuous improvements made possible by responsible disclosure.

And that’s the problem for cryptocurrency. Responsible disclosure just won’t work there, at least not as it’s traditionally been understood.

Cryptocurrency Recovery

I began thinking about this problem in a very different context: cryptocurrency recovery. That’s the business of unlocking wallets for people who’ve lost access to the passwords, private keys, hardware, or software they originally used to secure their funds. Restoring access to such wallets will be a growth business over time, for two reasons. 

First, the market for those services is growing. The press is already full of stories about people losing access to their cryptocurrency wallets. That’s because people make security a top priority when they store virtual assets. They realize that, in general, anyone who knows their password, seed, or private key has complete access to their funds. So they choose passwords that are hard to guess (and thus hard to remember!) and they are reluctant to write them down. Then, over time, fading memory, or maybe unexpected disability or death, puts the funds out of reach.
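To make concrete how completely the seed controls the funds, here is a minimal Python sketch of the standard BIP-39 seed derivation that most wallets build on (the example phrase is a published BIP-39 test vector; real wallets run the resulting seed through further key-derivation steps this sketch omits):

```python
import hashlib
import unicodedata

def _nfkd(s: str) -> bytes:
    return unicodedata.normalize("NFKD", s).encode("utf-8")

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    # BIP-39: the wallet seed is PBKDF2-HMAC-SHA512 over the mnemonic
    # words, salted with "mnemonic" + passphrase, 2048 iterations.
    return hashlib.pbkdf2_hmac(
        "sha512", _nfkd(mnemonic), _nfkd("mnemonic" + passphrase), 2048
    )

# A published BIP-39 test phrase. Anyone who learns the words can
# recompute the identical seed and, via standard derivation paths,
# every private key and address in the wallet. There is no reset.
words = "legal winner thank year wave sausage worth useful legal winner thank yellow"
print(bip39_seed(words).hex())
```

The point of the exercise: the words are the wallet. Lose them and the funds are unreachable; leak them and they are anyone’s.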

At the same time, gaining access to such funds gets easier every year. The hardware and software that was used to secure the assets is frozen in time, but security research is not. As Bruce Schneier reminds us, attacks on security “always get better, they never get worse.” So security researchers have a better chance every year of finding the hole that will unlock an address, a wallet, or a piece of hardware.

That’s good news for people who’ve locked themselves out and good news for those who can get them back in. But what about users who were counting on that security to protect their assets? Much of their security is also frozen in time. The software they used to create their wallet, and the hardware they locked it away in, have been aging, perhaps badly, ever since.

Why Responsible Disclosure Isn’t Working for Cryptocurrency

Why can’t the cryptocurrency industry solve the problem the way the software and hardware industries do, by patching and updating security as flaws are found? Two reasons: First, many customers don’t have an ongoing relationship with the hardware and software providers that protect their funds—nor do they have an incentive to update security on a regular basis. Turning to a new security provider or using updated software creates risks; leaving everything the way it was feels safer. So users won’t be rushing to pay for and install new security patches.

Second, cryptocurrency is famously and deliberately decentralized, anonymized, and low friction. That means that the company responsible for hardware or software security may have no way to identify who used its product, or to get the patch to those users. It also means that many wallets with security flaws will be publicly accessible, protected only by an elaborate password. Once word of the flaw leaks, the password can be reverse engineered by anyone, and the legitimate owners are likely to find themselves in a race to move their assets before the thieves do. Even in the software industry, hackers routinely reverse engineer Microsoft’s patches to find the security flaws they fix and then try to exploit them before the patches have been fully installed.

In the cryptocurrency market, that kind of exploitation is nearly guaranteed to happen, and in short order—as happened in two public incidents in August. In one, hackers took nearly $200 million from Nomad, a blockchain “bridge” for converting and transferring cryptocurrencies. One user began exploiting a flaw in Nomad’s smart contract code. That tipped others to the exploit. Soon, a feeding frenzy broke out, quickly draining the bridge of all available funds. In the other incident, Solana, a cryptocurrency platform, saw hackers drain several million dollars from nearly 8,000 wallets, probably by compromising the security of their seed phrases and thus gaining control of the wallets.
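For the technically curious, the reported root cause of the Nomad incident is easy to sketch. According to public post-mortems, an upgrade marked the all-zero Merkle root as trusted, and unproven messages look up to that same zero default, so any message at all appeared verified. Here is a hypothetical Python simplification of that bug class (Nomad’s real code is a Solidity contract; the names below are illustrative):

```python
# Hypothetical Python simplification of the reported Nomad bug class.
# In Solidity, reading an unset mapping entry returns zero, so marking
# the zero value "trusted" makes every unproven message verify.

ZERO_ROOT = b"\x00" * 32

# An upgrade step mistakenly marks the default/zero root as trusted.
trusted_roots = {ZERO_ROOT}

# Maps each message hash to the Merkle root that proved it; a message
# nobody proved falls back to zero, like an unset Solidity mapping.
proven_messages: dict[bytes, bytes] = {}

def process(message_hash: bytes) -> bool:
    root = proven_messages.get(message_hash, ZERO_ROOT)
    return root in trusted_roots  # BUG: unproven messages pass

# Any attacker-crafted message "verifies" without ever being proven.
assert process(b"\x12" * 32)
```

Once the first exploiter demonstrated the call, anyone could copy it with a new destination address, which is why the draining turned into a feeding frenzy.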

Together, these problems make responsible disclosure largely unworkable. It’s rarely possible to fill a security hole quietly. Rather, any patch is likely to be reverse engineered when it’s released and exploited in a frenzy of looting before it can be widely deployed. (This is not a new observation; the problem was pointed out in a 2020 ACM article that deserves more attention.)

If I’m right, this is a fundamental flaw in cryptocurrency security. It means that hacks and mass theft will be endemic, no matter how hard the industry works on security, because the responsible disclosure model for curing new security flaws simply won’t work. 

Is it possible to make responsible disclosure work in cryptocurrency? Maybe, but not easily. I’m still thinking through the issues, but here are two potential solutions, which I offer with only modest confidence.

Centralized Security

There are some companies that can play a role in cryptocurrency security similar to that played by Microsoft, Apple, and Google in consumer software. A big exchange that knows a lot about its customers and controls the software they use to access their wallets can quietly prepare and swiftly implement measures to protect those funds.

That’s worth doing, but any effort to centralize security for cryptocurrency will face roadblocks. First, even exchanges protecting current clients will need terms of service that allow them to modify their customers’ accounts—and perhaps take them offline—to provide that protection. They’ll need to find a way to compensate both themselves and the researchers who find the flaws for this added work. This will be harder to do in the cryptocurrency space, where they’ll be swimming against a deep-seated libertarian resistance to centralization. And even if they succeed, many security flaws will be beyond an exchange’s ability to mitigate.

Decentralized Rescue

There is a decentralized approach to responsible disclosure, but it will require new laws, or at least a new view of current law.

The Nomad hack illustrates what might be called the decentralized “rescue” of compromised wallets. The company noticed that some of the people exploiting the flaw said they were doing it to protect the assets. It issued a public appeal to “white hat hackers and ethical researcher friends” to send any funds they rescued to a wallet created for that purpose. It further sweetened the pot by offering a 10 percent bounty for returned funds and promising not to pursue legal action against those who returned funds. So far, the company reports that $32 million of the $190 million that was stolen has been returned. This is becoming a recognized industry practice. In 2017, the “White Hat Group” responded to the theft of $32 million in Ethereum by draining $60 million more from vulnerable wallets and then returning it. Last year, Poly Network was able to recover some $600 million in assets by offering a kind of retroactive bug bounty in the amount of $500,000—a bargain at 0.1 percent of the original loss—and promising not to press charges.

I wish that I thought this trend would continue on its own, but cryptocurrency rescuers are taking big legal risks. As one observer noted, “[E]ven if you give it back, stealing more than a half a billion dollars is a crime any way you look at it.” Hackers without a smudge on their hats will still have to wonder whether prosecutors will suspect that their rescue was just a theft gone wrong. Worse, it’s not always going to be clear who the rescued funds should be returned to. If there are multiple claimants, the rescuer ends up having to adjudicate those claims, with both parties threatening to sue if the decision goes against them. Finally, even if the rescuer correctly identifies the true owner, there is a nontrivial risk that the owner will turn out to be a criminal or even a party sanctioned by the Office of Foreign Assets Control, meaning that the rescuer runs a risk of prosecution not just for taking the funds but also for giving them back.

To reassure good-faith rescuers, legal and financial incentives need to be more systematic and much more certain. Maybe what’s needed is an independent party willing to set up a cryptocurrency wallet into which rescuers could drop rescued funds. If the wallet were created by the Justice Department, it could reassure rescuers that deposits would make prosecution unlikely. The department could also identify and vet people claiming ownership. Finally, to be candid, it really should find a way to institutionalize the bounty payment. We’d see a lot more cryptocurrency rescues if rescuers were rewarded like security researchers in the software ecosystem.

To raise cryptocurrency security even close to the (unimpressive) level achieved by the software industry, responsible disclosure needs to work in a brand new context. Maybe “responsible rescue” is the answer.

At least for now, I don’t have a better one.


Stewart A. Baker is a partner in the Washington office of Steptoe & Johnson LLP. He returned to the firm following 3½ years at the Department of Homeland Security as its first Assistant Secretary for Policy. He earlier served as general counsel of the National Security Agency.
