
Counterintelligence Implications of Artificial Intelligence—Part III

Jim Baker
Wednesday, October 10, 2018, 10:05 AM

This is the third post in my series about the counterintelligence implications of artificial intelligence (AI). The first two are here and here. I’ll start this one with a story.


Published by The Lawfare Institute

From 2001 to 2007, I was the counsel for intelligence policy at the Department of Justice. In addition to having responsibility for representing the government before the Foreign Intelligence Surveillance Court, I also advised the Justice Department and other agencies on a range of national security law issues, including privacy. While in that position, I attended a meeting one day with Adm. John M. Poindexter. The retired admiral is the former national security adviser to President Ronald Reagan who played an important and controversial role in the Iran-Contra affair. He was convicted at trial of various charges that involved lying to Congress and obstruction, but those convictions were reversed on appeal and other charges were later dropped. Poindexter had requested the meeting with me and a few others from the Justice Department to discuss matters related to his role as head of the Defense Department’s Information Awareness Office, which ran the Total Information Awareness (TIA) program. Shane Harris discusses Poindexter and the TIA in much more detail in his book “The Watchers” and here.

As Poindexter explained to us at the time—the meeting happened around 2002—the goal of TIA was to prevent future terrorist attacks on the United States and its allies by gathering large amounts of data about the activities of people in the United States and analyzing that data for patterns of behavior and relationships between people. Such analysis, it was hoped, would lead to the identification of terrorists and their networks inside the United States and provide the government with some ability to predict what they would try to do. The FBI and other agencies would then endeavor to thwart those malign efforts using traditional investigative techniques. Since 9/11, U.S. officials responsible for protecting the country from additional attacks had been terrified that there might be terrorist cells operating in the United States that had not yet been identified. I was well-familiar with other efforts both before and after 9/11 to collect large datasets about the domestic activities of Americans and others in the U.S. for the purpose of identifying terrorists in our midst. For example, the Stellar Wind program authorized by President George W. Bush involved, among other things, the collection and analysis of large amounts of telephony and internet metadata to find al-Qaeda-connected operatives in the U.S.

All such efforts had profound implications for the constitutional rights of Americans. I found that most intelligence officials involved in these programs were deeply concerned about such issues, even if the legal and policy analysis they conducted or the protective lines they drew were not the same as those that I, the Department of Justice, or their outside critics would engage in or draw. John Poindexter was no different. Indeed, he had invited me and others from the Justice Department to brief us about what he was doing very early on. He told us that he understood the significant privacy implications of his proposed work and that he wanted to build strong privacy protections into his systems from the outset. As I recall, we agreed with his assessment about the critical need to protect privacy if such a program were to exist and offered to help his team include privacy-by-design features in their projects. Adm. Poindexter struck me at the time as a smart, thoughtful and genial man who was trying to do the right thing. He was, however, the wrong person to head this office because of his involvement in the Iran-Contra affair. The TIA’s seal, complete with the pyramid with the all-seeing eye (from the back of the dollar bill), didn’t help matters. Once all this became public, and the admiral’s role was known, TIA was doomed.

Well, sort of. I don’t know what happened to all of TIA’s work, personnel and contractors. But I do know that Poindexter was ahead of his time in several respects. First, total information awareness—or something approaching it—is what many governments, agencies and companies are seeking to varying degrees and for various purposes—to sell consumers stuff more effectively and efficiently; to prevent terrorist attacks and other harmful events; and, under authoritarian governments, to squelch dissent in violation of fundamental human rights. Second, Poindexter was right to worry about the privacy implications of such data collection and analysis and to try to build into his systems from the outset appropriate safeguards overseen by outside entities that could review his work and hold him accountable in a meaningful way.

I don’t recall Poindexter mentioning the use of artificial intelligence to assist his efforts, but had TIA continued, I’m sure it eventually would have. I can only imagine what the public reaction to that would have been.

AI and Big Data are a potent combination with many implications. This post focuses on how adversaries might apply AI to the vast amount of data that they collect about Americans to understand us, predict what we will do and manipulate our behavior in ways that advantage them. I’m not the first to raise this issue, but I want to stare at it a bit more through the counterintelligence lens.

When thinking about total information awareness (or whatever term you prefer), it is important to understand this equation: AI + Big Data + High-Speed Computing = Power. And because the power that those addends produce over time will be increasingly substantial, people—especially well-resourced nation-states—will seek them aggressively. That reality needs to be understood and dealt with. The desire for total information awareness is not going away. Indeed, as others have pointed out, countries such as China are aggressively pursuing it for a variety of economic, political and military reasons.

But what, exactly, is the nature of the power that AI, Big Data and high-speed computing produce? And how will that power be exercised? As I have mentioned before, others have written extensively about the broad societal impact that these combined elements may have over the coming decades. And I don’t pretend to have the knowledge, understanding or imagination to answer those questions fully. But since my focus here is on the counterintelligence implications of AI, three related concepts are important: understanding, prediction, and manipulation of individuals and groups.

First, understanding and prediction. Our adversaries—which include hostile nation-states, malicious cyber actors, international organized criminals and terrorists—will certainly use AI against us in a variety of ways. But how exactly? With enough data, computing power and reasonably effective algorithms, could U.S. adversaries accumulate enough knowledge and understanding about individuals and groups to predict reasonably accurately our behavior and responses to certain types of stimuli? Would they see patterns in our behavior that Americans do not see or even know that we are exhibiting? Would that behavior reliably and predictably reflect how we think? Would they understand the networks of associations that Americans have in depth and learn the nature and scope of our relationships with a broad range of people, organizations and places?

Of course they would. The advent of social networking services and other forms of digital expression has made many people much more familiar with concepts like this than they were even just a few years ago, such as when I learned about TIA. For example, most people have some understanding of the concept of targeted advertising—and many people like it—even if most of us don’t know much about how it is done or the collateral implications of collection and analysis of detailed information about many of our activities.

As Justice Sonia Sotomayor stated in U.S. v. Jones with respect to GPS monitoring, technology that involves the analysis of large amounts of data about human behavior can “generate[] a precise, comprehensive record of a person’s public movements that reflects a wealth of detail about her familial, political, professional, religious, and sexual associations. . . . The Government can store such records and efficiently mine them for information years into the future.” This is equally true for a person’s “private” activities to the extent that such activities are connected to and revealed by that person’s interaction with digital technology such as web browsing, smartphone apps or biometric devices that are connected to the internet.

Abilities to collect and analyze large datasets about human behavior will soon grow significantly. As I wrote in this post, an explosion of data will result from widespread adoption of 5G wireless technology coupled with the spread of the Internet of Things (IoT) over the next several years. With 5G and the IoT, there simply will be much more data about all of us that will be highly revealing about our activities. As I explained in that post, our cybersecurity posture as a country is generally poor. And IoT devices are likely to be insecure—probably highly insecure. This means adversaries are likely to be successful in stealing a lot of IoT-generated data about what Americans do. With so much digital dust around, it is inevitable that we will leave clear footprints reflecting what we are up to and how we think.

It is not news that the combination of all of that data with powerful data analytics—perhaps aided by AI—will enable adversaries to understand us and then predict what we will do. But we are not ready as a society to handle the counterintelligence implications of adversaries understanding and predicting our behavior so profoundly.

And if they can do that, then it is not too much of a leap to think that they will try to manipulate that behavior. In other words, adversaries will understand our thoughts and actions better than we do and will know to a reasonable degree of certainty how we will behave in the future or respond to various inputs and stimuli. That manipulation is likely to take various forms, including efforts to deceive and confuse people in various ways about what is happening and what the truth is—think about “deep fakes,” for example; overloading people’s senses with useless or irrelevant information so that we cannot accurately discern what adversaries are doing or what is important; and putting misinformation before us to erroneously confirm pre-existing biases and cause us to misperceive reality and to choose the wrong courses of action. They will also try to stoke long-standing animosities and fears so that Americans fight with each other and look foolish to the world we are supposed to be leading.

As we all know, to some degree that future is already here. It is demonstrated by the indictments brought by Special Counsel Robert Mueller against the Internet Research Agency and some Russian intelligence officers, as well as Facebook’s announcement that it deleted numerous accounts that were clearly part of a concerted effort to manipulate some segments of the U.S. population in sophisticated ways. Iran has also gotten into the act in this regard.

I don’t know whether those efforts were powered in part by artificial intelligence or were just the product of reasonably effective traditional intelligence work. Probably some combination of the two. But the problem of foreign adversaries trying to manipulate the United States and its allies is only going to grow over time. And as AI tools improve and become more widely available, so too will their ability to manipulate. Of course, public and private sector entities can use AI in an effort to counter this type of activity. As the New York Times said about one of the Facebook takedowns, “The company is using artificial intelligence and teams of human reviewers to detect automated accounts and suspicious election-related activity.” Expect these manipulation efforts and counter-efforts to continue, expand and grow in complexity—not only in the United States but elsewhere as well. And AI will enable all of that to happen more rapidly in ever more sophisticated and effective ways.

Adversaries’ ability to understand, predict and manipulate our behavior based on Big-Data-driven AI analysis will also have profound implications for U.S. intelligence and counterintelligence operatives. As Ashley Deeks observed about China’s domestic efforts toward total information awareness,

One challenge relates to U.S. intelligence collection. If the Chinese government can recognize every person on the street and easily track a person’s comings and goings, this will make it even harder for foreign intelligence agencies to operate inside the country. Not only will U.S. and other Western intelligence agents be even easier to follow (electronically), but the Chinese government will also be able to identify Chinese nationals who might be working with Western intelligence services—perhaps using machine learning and pattern detection to extract patterns of life. China’s facial recognition efforts thus facilitate its counterintelligence capacities.

This will be increasingly true over time in other countries as well, such as Russia and Iran. Total information awareness by adversaries will also have implications for the U.S. here at home—and for allies in their home territories. Because adversaries are stealing so much data about Americans’ domestic activities, it is no longer rational to think that they do not have a reasonable picture of normal and anomalous behavior in many geographic locations within the U.S. On that basis, it is increasingly probable that they can detect counterintelligence and law enforcement operations and activities in the areas where their intelligence operatives act. This means that they will be better at detecting surveillance activities and undercover operations and identifying human sources. You can bet that China, Russia and others will do everything they can to detect FBI counterintelligence activities in Washington, New York or Silicon Valley when they want to engage in an important operation. They will breach U.S. cyber defenses to collect Big Data to help them obtain as much digital dust as they can. Increasingly, they will use AI to figure out what all that dust means so that they can better understand, predict and manipulate U.S. intelligence and counterintelligence activities.

It is highly unlikely that there will be anyone in those governments like Adm. Poindexter worrying about the privacy implications of what they are doing and trying to build in appropriate oversight and transparency mechanisms to protect our civil liberties. No matter what you think of Poindexter and the U.S. government officials and staff who may be engaged now in total-information-awareness-type activities, it is more likely that they share a set of values with most Americans than their counterparts in China, Russia and Iran do, and are constrained by law in ways adversaries are not.

Jim Baker is a contributing editor to Lawfare. He is a former Deputy General Counsel of Twitter, the former General Counsel of the FBI, and the former Counsel for Intelligence Policy at the U.S. Department of Justice. In that latter role, from 2001-2007, he was responsible for all matters presented to the U.S. Foreign Intelligence Surveillance Court. The views expressed do not necessarily reflect those of any current or former employer.
