
Using Threat Modeling to Secure America

Susan Landau
Wednesday, December 20, 2017, 1:00 PM

Getting security right means not only knowing what your threats are but also determining their likelihood and relative risk. Such threat modeling requires systematically enumerating your threats, evaluating the risk of their occurrence, and then accordingly prioritizing your defenses. It's something governments do. When they get it wrong, you get Pearl Harbor—or the Russian cyberattacks on the 2016 U.S. presidential campaign.

Published by The Lawfare Institute


Cyber threat modeling isn't easy. In the case of software vendors, it means understanding how customers use your systems. Who uses your systems? For what purposes? What risks might these users have? What are their assets? Who might their attackers be? What might the attackers seek? What are the attackers' capabilities? A manufacturer with its eye on the ball will carefully track how these issues change as users and their activities evolve. Given that, it was interesting to see a post by a Russian security analyst about a change Apple made to a security setting on the iPhone. The modification concerned accessing backed-up data from iPhones.
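The enumerate-evaluate-prioritize loop behind these questions can be sketched as a toy risk matrix. Everything here is illustrative: the threat entries, the 1-to-5 scales, and the common likelihood-times-impact scoring rule are assumptions for the sketch, not anyone's actual model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Threat:
    name: str
    likelihood: int  # analyst's estimate, 1 (rare) .. 5 (expected)
    impact: int      # analyst's estimate, 1 (nuisance) .. 5 (catastrophic)

    @property
    def risk(self) -> int:
        # Simplest common scoring rule: risk = likelihood x impact.
        return self.likelihood * self.impact

def prioritize(threats: list[Threat]) -> list[Threat]:
    """Order defenses by descending risk."""
    return sorted(threats, key=lambda t: t.risk, reverse=True)

# Hypothetical entries for a phone vendor's model; real models are far richer.
model = [
    Threat("resale of stolen locked handsets", likelihood=2, impact=2),
    Threat("data scraped from lost or stolen phones", likelihood=3, impact=4),
    Threat("state actor targeting backups of officials", likelihood=2, impact=5),
]
for t in prioritize(model):
    print(f"risk={t.risk:2d}  {t.name}")
```

The point of even a toy model is the ordering it forces: defenses go first to the highest-scoring threats, and the scores must be revisited as users and attackers change.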

Apple makes it easy to back up data. Users have a choice: they can back up to Apple's iCloud or to their own computer via iTunes. The former means the data is accessible anywhere there's a network connection; the latter means the data is stored on the user's own machine (and thus, in the U.S., has greater privacy protections than data held at a third-party cloud site).

iCloud data is encrypted using the user's Apple ID and password (email is encrypted only in transit), while iTunes backup data is not encrypted unless the user explicitly chooses that option. It used to be that accessing encrypted iTunes backups was unforgiving: the data was accessible only under the same password employed the first time the user backed up data using iTunes. Forget that password, and the user was out of luck. If the user was temporarily unavailable—in the shower or on the road—no one else could access the backup unless they had the phone and knew the secret password.

Over time, Apple decided that didn't make sense. There are many ways we use our phones. We share them with our spouses to check directions or take a phone call while we're driving, or to access an account if we happen to leave the phone at home and need our two-factor authenticator to work. So Apple's change allowed anyone in possession of the phone who also knows the phone's PIN—the way to unlock the phone—to reset the password for encrypted iTunes backups and create a new backup of the sensitive data (contrary to the analyst's post, this modification occurred over a year ago). The reset password won't provide access to previous backups; only knowledge of the previous password will provide that. But it does enable a current backup of the data on the phone.

Yes, this does make it easier for a kidnapper or torturer to get your protected information: if they have you and the phone, they'll get the PIN from you. Of course, if they have you and can obtain your PIN, it's likely they can obtain your iTunes password as well. So in practice, this is not a substantive change. What's important is that under the new regimen, a trusted confidant can obtain information from the phone if the owner is unavailable or incapacitated.

That makes sense. If I am willing to provide my PIN to a colleague or family member, I'm saying I trust that person with my phone and the data on it. That's different from letting someone briefly use my phone in my presence to make a call or look up a fact on Google. When I give someone my PIN, I'm saying you can use my phone when I'm not available. Apple decided that under those circumstances, I'm also willing for them to have access to data on the phone. With the wealth of information we have on our phones, there are many reasons why someone might want to allow others access in case of emergency.

And yet, there are caveats to that PIN sharing. If, for example, I have company proprietary information on my phone, I shouldn't be sharing my PIN even with someone I fully trust. And if I'm working under a repressive regime, say as a journalist, I shouldn't keep a wealth of information on the phone—or accessible from it—in the first place.

Apple is letting the user determine what her threat model is. Is it more important for the user to keep her information private? Or more critical that someone be able to access information from her phone when she can't? Apple's recent update lets the user choose based on how she views her risks. And Apple's architecture gives the user flexibility to change her mind; all she has to do is change her PIN and the family member or friend who previously had access is now locked out.

Apple's modification of security options and the analyst's response got me thinking about threat models and the ongoing feud over Apple's resistance to weakening the security protections on locked phones. We keep hearing about the locked phones of the Sutherland Springs shooter, the San Bernardino terrorist and the thousands of phones that law enforcement can't open. One problem with this narrative is that the FBI is conflating two types of cases—the mass murderers, whose criminal trajectory is well mapped out, and the more common type of bad guys whose use of security technologies complicates investigations. These investigations unfold in very different ways, and the problems locked phones present have different implications in each. Conflating the two leads to poor policy choices.

But there's a more critical problem here. Law enforcement's focus on the phones it can't unlock means that the FBI is missing the bigger picture of understanding how people use phones and the role that plays in security. Law enforcement needs to do threat modeling on locked phones.

Threat modeling starts with understanding your assets, then determining the threats against them. After the iPhone was introduced, Apple created software protections against theft: first Find My iPhone, which locates a phone, and later Activation Lock, which prevents someone from shutting off Find My iPhone without knowing the iPhone's PIN and the user's Apple ID password. Together these make a stolen phone much less valuable; not surprisingly, the incidence of that crime has dropped substantially. But the physical device was not the only valuable aspect of a phone; the data on the phone is also valuable (this is especially true of passwords to accounts and, in recent years, the apps that enable two-factor authentication). By the late 2000s, criminals in China had figured out how to remove data from lost and stolen phones and were using it to commit fraud. Locking data on the phones, which Apple did first with iOS 5 in 2011 and much more completely with iOS 8, went a long way toward preventing such crimes.

Through all this, Apple was threat modeling, figuring out how the iPhones were being used and determining which assets customers were seeking to protect. What type of customers were using the phone? How had their use changed? What threats did these customers face? Who were their attackers? What were the attackers' capabilities? Understanding the security risks and responding to them is what enabled the iPhone to go from an expensive consumer toy to a device cleared for use in military and corporate settings.

Let's return to the FBI's locked phones. Despite taking on the public fight—in the courts, in Congress, and in the court of public opinion—FBI leadership doesn't appear to have understood the threat models against phones. I've spoken with senior law-enforcement officials who argued strongly for making phones easier to open but who were, at the same time, unaware of why Apple had sought to secure the phone data in the first place. In other words, law enforcement was arguing for weakening the protections of phone data without understanding the threats arising from the unsecured information. That is an extraordinary lapse, especially given the FBI's choice to make locked mobile devices a signature issue.

In its fight over locked phones, the FBI has largely been ignoring society's most serious threat: Russia's attacks on democracies. The U.S. is the primary target in what General Robert Neller has called Russia's efforts in "fighting a war without fighting a war." Using trolls, bots, disinformation and other tools honed over decades, the Russians weaponized the use of information. Spending less money than a new fighter plane would cost, Vladimir Putin's government sowed serious distrust in the electoral process and may have even changed the course of the U.S. presidential election. Russia's success in 2015 and 2016 made it bolder; former National Security Agency Deputy Director Richard Ledgett has observed that there is little to deter the Russians from repeating such efforts. In other words, we can expect more of the same—and worse. Yet somehow the FBI failed to inform most targets of Russian hacking, including people who were working for the government or holding security clearances. At the same time, the bureau was devoting resources to examining "Black Identity Extremists," a threat the FBI itself called "rare over the last twenty years." This apparent lack of threat modeling, which at its most basic is simply understanding threats (including the attackers' purposes and capabilities) and prioritizing them, is startling—and dangerous.

Since the election, there have been active Russian efforts to foment tension within the U.S. by increasing racial discord and anger at immigrants. This pattern of Russian disinformation to create social disruption is repeated elsewhere, including in Germany. Russia is also attacking other pillars of democratic societies; in a recent Washington Post op-ed, Suzanne Spaulding discussed how the Russians actively seek to undermine trust in the U.S. judicial system. And the Russians also appear to be going after civil society organizations—think tanks and research institutes—"viewed as likely to shape public policy."

The locked devices that frustrate the deputy attorney general and the FBI director don't exist only in investigations about narcotics and child pornography. They are part of a much larger picture of how people use mobile devices, what data is on them and the role they play in security (for example, as a simple, secure form of second-factor authentication). When considering the problems investigators face with locked devices, the FBI needs to consider the phones in the context of the nation's most serious threats. Had John Podesta been using two-factor authentication, his email could not have been compromised. Had the campaigns used Signal for communications, there would have been far fewer emails to steal. In other words, had the campaign been practicing good information security using readily available consumer products, many—though not all—of the hacking problems could have been averted.
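To make concrete why a phone is such a convenient second factor, here is a minimal sketch of the time-based one-time-password algorithm (TOTP, RFC 6238) that most authenticator apps implement. The secret below is the RFC's published test key; this is a sketch for illustration, not a production implementation.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, now=None, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Count of 30-second steps since the Unix epoch.
    step = int((time.time() if now is None else now) // interval)
    mac = hmac.new(key, struct.pack(">Q", step), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test key ("12345678901234567890" encoded in base32). A real
# authenticator app receives its key from the QR code scanned at enrollment.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET))  # the short-lived code the server computes in parallel
```

Because the code changes every 30 seconds and is derived from a secret stored only on the phone and the server, a stolen password alone is useless to an attacker; that is the protection Podesta's account lacked.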

Apple's security protections, Signal, WhatsApp, Tor, and other security tools all have important roles in protecting civil society and democracy. Such security concerns are why the Senate sergeant at arms approved the use of Signal by Senate staff earlier this year. The FBI and the Department of Justice (DOJ) need to understand the threats our society faces. These come not only from Russian attempts to undermine our society, tactics now spreading elsewhere around the world, but also from the serious levels of intellectual property theft that threaten U.S. industry.

Both the FBI and DOJ instead need to adjust to the modern era and embrace secure communications and devices. Otherwise we're repeating the mistakes made in the time of J. Edgar Hoover: counting the number of stolen cars retrieved and ignoring far more serious threats, such as those contemporaneously posed by organized crime. Only this time we're counting the number of locked phones while the Russians are substantially contributing to Americans' distrust of their government. While many aspects of combating the latter are well outside the FBI's role, ensuring the security of U.S. organizations—including the security of their assets, electronic and otherwise—is squarely part of the FBI's mission. The role that secured communications and devices play is critical, but unfortunately the FBI and DOJ have so far firmly planted themselves on the wrong side of this issue. It's time for them to do some serious threat modeling and understand where our real risks lie.

Susan Landau is Bridge Professor in The Fletcher School and the Tufts School of Engineering, Department of Computer Science, Tufts University, and is founding director of Tufts' MS program in Cybersecurity and Public Policy. Landau has testified before Congress and briefed U.S. and European policymakers on encryption, surveillance, and cybersecurity issues.
