Cybersecurity & Tech

Lawfare Daily: National Security Regulation of Technology and Data Transactions

Jonathan G. Cedarbaum, Justin Sherman, Jen Patja
Wednesday, February 18, 2026, 7:00 AM
Assessing the regulatory programs governing the intersection of technology and national security.

Lawfare Book Review Editor Jonathan Cedarbaum sits down with Justin Sherman, the CEO of Global Cyber Strategies, to discuss his new book, "Navigating Technology and National Security: The Intersection of CFIUS, Team Telecom, AI Controls, and Other Regulations," in which Sherman describes and assesses the proliferation of U.S. regulatory programs designed to guard against national security risks arising from transactions involving technology and data.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Please note that the transcript below was auto-generated and may contain errors.


Transcript

[Intro]

Justin Sherman: When we talk about the complexity of how do you prevent the export of a physical thing that just is so different, even if it's a really small chip, that's still quite different than how do you prevent someone from uploading something onto the internet, which is really just a qualitatively and quantitatively different sort of question.

Jonathan Cedarbaum: It's the Lawfare Podcast. I'm Jonathan Cedarbaum, Lawfare’s book review editor, here with Justin Sherman, the founder and CEO of Global Cyber Strategies.

Justin Sherman: It's really important to understand how entangled and how interdependent these systems are, how that interdependence can be weaponized, because these are not decisions you can easily reverse, right?

It's not the case that you can kind of let these programs sit here for a year, then pick them back up and nothing's changed, right? This isn't, you know, pausing like a Netflix episode. I mean, this is a highly sophisticated set of regulations. Adversaries are not taking a holiday because the offices are not staffed or because they're distracted with other nonsense.

Jonathan Cedarbaum: Today, we're talking about Justin's brand new book, “Navigating Technology and National Security: The Intersection of CFIUS, Team Telecom, AI Controls, and Other Regulations,” just published by Wiley.

[Main Episode]

Justin, tell our listeners a little bit about your background and why you chose to write this book.

Justin Sherman: I'm a computer scientist and international relations person by background, so naturally, the sort of contours of the technology and national security intersection have always interested me. And you know, that sort of blossomed over time into a couple different reasons for writing this book.

One was looking at the fact that we often talk about, and for lots of good reasons, right—we often talk about technology regulation from a consumer protection perspective, or maybe from the perspective of civil liberties protection or something like that, which is very, very important. But there's also a whole conversation to be had about the extent to which national security laws, national security regulations, national security regulatory programs, are a way of governing technology.

And I don't mean, necessarily, things related to, let's say, how the government procures technology for national security, or how it might tap into it, let's say, through various intelligence authorities or something like that. But really regulating it. And so I was sort of interested in digging more into that history and how it came to be.

And the second reason is I've, as I mentioned in the book, I've also worked on issues as a consultant and advisor to the U.S. government related to several of the regulatory programs I talk about in this book, especially around data security. And I was also interested by the fact that, okay, for some period of time, we've really had a couple main regulatory programs focused on tech and national security: that's been export controls and, as we'll get into to some extent, investment review.

But these days we have so many connected vehicles, data, AI debates about chips, and, right—there's so many more of these programs. The number, in addition to the scope or the relevance, has really gone up. And so, all to say, all very nerdy reasons, but all reasons why I was interested in kind of digging into how this regime is evolving and where we see it going in this hyper-technology-driven time we live in.

Jonathan Cedarbaum: Very good. Your book actually describes seven U.S. regulatory programs addressing national security concerns arising from technology and data transactions of various kinds. What are some of the book's main or cross-cutting themes or arguments?

Justin Sherman: There are a few worth mentioning up top. So one is that for a number of these chapters, as you mentioned, the seven programs I look at are sort of how the early chapters are organized.

I actually went back and read, basically, all of the, well, not every regulatory comment, that would take me a thousand years (though maybe with AI we could summarize), but certainly every regulatory proceeding document published for some of these—for example, with investment review going back 50 years.

And so one theme that really stuck out to me: Over time, consistently and across multiple programs is that there really are debates about, okay, if the government's gonna come into a private sector business, to a private sector tech transaction, to a private sector tech R and D effort, and say, this particular thing is a national security risk and you need to stop, or you need to slow it down, or you need to comply with X, Y, Z, how are you actually determining that that's a, quote, unquote, risk?

So there’s really a lot of debate over time about how you do that and how to really do that in a precise way such that you can differentiate what's really problematic from what might be tolerable, or to help a company, let's say, ensure that they can continue to compete and innovate and build new technologies, which by the way, also will help national security in the long run but also do that in a way that they understand what the government is most concerned about.

So that's kinda—the risk question is one.

Related to that, I sort of argue at the end, perhaps counterintuitively in a national security book, that transparency is really, really critical. And this is something else that came up time and time again throughout looking back at the regulatory documents, which was essentially companies wanting the government to publish really specific lists.

Maybe say, tell us exactly what kinds of things we're allowed to build or not build, or tell us exactly what types of countries you're gonna investigate a transaction with and which ones you're not, and be really, really precise and specific. And a lot of the time the government has rejected those requests saying no, the threat space is really dynamic.

So, we can't pigeonhole ourselves into one specific list of here's what's a cyber threat to U.S. infrastructure, or, here's the way that, you know, I'm making this up, but you know, the Chinese intelligence services structure a front company, right? Here's the list to look for.

And the second point, which segues from that hypothetical, is that there's of course a lot of classified information going on here in terms of how the government might understand the threat space, and that also can't be shared. So there are important debates about transparency; maybe we'll come back to that.

And then the last is really just this ongoing set of questions or tensions and so forth between the speed of innovation, the types of innovation that companies might want to pursue, and the national security considerations.

And sometimes those are really aligned, right? Certainly, we're hearing arguments now that faster AI development is in line with U.S. national security interests, to be able to deploy quickly in various agencies. But at the same time, right, we've certainly seen many cases, as Lawfare has covered since its inception, for example, where the speed of innovation in software has really created an enormous cybersecurity vulnerability surface.

So, all to say, you know, all things that are ongoing, but I think on some of these, it was interesting to really dig back through the regulatory proceedings and see that, you know, some of the questions being asked a year or two ago in more recent programs and the public comments were quite literally the same kinds of things folks, companies, and others have been asking for decades and decades and decades.

Jonathan Cedarbaum: Very interesting. Let's, with those broader themes in mind, let's take a closer look at each of the programs you analyze, and let's start as you did with the most familiar, the most longstanding, and that is export controls.

What do you think the U.S. has been doing right and wrong as it has to adapt its export control system to these novel issues with emerging technologies and data?

Justin Sherman: So we'll start with what I think has been done well or continues to be done fairly well. Which includes, for instance, the fact that, I mean, of course, the Commerce Department, as many listeners know, right, is not the only one that's involved in export controls at large, but I'll mostly talk about Commerce here in terms of the Bureau of Industry and Security, BIS, because we're talking about technology and that's where a lot of that, you know, outside of the military weapons space, sits.

And so vis-a-vis export controls, one thing that I think continues to be done well is giving companies actually quite specific guidance on what to do. As someone with great interest in this area, I always find it fun to sort of read not just the designations of particular entities, but BIS sort of routinely publishes lists of, you know, here's the exact ways that Russian front entities for the Russian intelligence services are getting around U.S. export compliance checks to buy whatever technology, or here are the new techniques we're seeing in this part of the world to sort of illicitly diffuse technology and here are the red flags to look for. So there's some degree of specific guidance that continues to come out.

Another is we've seen BIS sort of sit within a broader set of, in my view, strategic moves to ensure that our adversaries are not diffusing or continuing to leverage technologies that they can use, particularly to commit espionage against the U.S.

So, of course, some of this is getting undercut now with certain chip sales and so on, which maybe we'll get to. But an example in recent years to me is the Huawei debate, right, around the Chinese telecom. And there were issues with this in the beginning in terms of how the U.S. government messaged it.

But in my view, export controls and other tools were used quite effectively in tandem. I mean, Huawei's global market share took a real hit. Lots of countries stopped using a lot of their, particularly 5G telecom and 4G telecom network components. So, you know, so there are ways that export controls, I think, have sat within a broader framework for how to think about something complicated like modern Chinese government cyber espionage.

Challenges, though: I question, and of course, this is not a criticism of specific staff, but more at sort of the macro policy level, although maybe a critique of some on the Hill, I suppose—it doesn't seem to me that we've all internalized the lessons from the last 20 to 30 years of trying to apply export controls to things other than physical, tangible goods, which is mostly what they've been applied to for a couple hundred years now.

I mean, there were points in time where export controls were applied to information or this or that, but more recently we've seen failed efforts, you know, around 2013 with the Wassenaar Arrangement to restrict the spread of cybersecurity software; that didn't work. And, I cite them—

There are wonderful scholarly papers that have been done on the war on encryption during the Cold War, right, and how that really, A, didn't work to try and control the diffusion of encryption technologies, and B, actually hurt the United States, because you had, you know, mathematicians who were afraid to go to, like, a conference in Canada because maybe they'd be violating export control law. So I'll stop there.

But I think, you know, when we talk about the complexity of how do you prevent the export of a physical thing that just is so different, even if it's a really small chip, that's still quite different than, how do you prevent someone from uploading something onto the internet, which is really just a qualitatively and quantitatively different sort of question.

Jonathan Cedarbaum: Your mention of Huawei highlights that many of the recent efforts by the U.S. with respect to regulating technology and data transactions are done really with an eye on U.S. rivalry or competition with China.

One of the concepts you discuss in that regard is this notion of a small yard, high walls approach to export controls. What does that concept mean?

Justin Sherman: Small yard, high wall, or small yard, high fence, you hear similar things, is a concept that has been attributed to Robert Gates, who was, among other titles, the former Secretary of Defense.

And this really got a lot of play, I think, if you just think back, right, and look at some of the discussion around, you know, 2018, 2020, kind of during the saga of Huawei we're talking about, as a way of thinking through how do we lock down a certain subset of technologies in the U.S. supply chain that are really critical to national security, but not over-index, not rope in too many technologies or be too heavy-handed in such a way that it undercuts university innovation or the startup ecosystem or something else.

So that was sort of the idea. And of course, on paper, this sounds very sensible. And I do actually think, in theory, that this is a good idea, right? Because the more that you can scope where you're looking in terms of national security risk, including on export controls, the more, A, the limited government resources ('cause they're always limited) that you have available can be applied just to what you need to focus on the most.

And then, B, you know, you don't wanna slow everything down. There are some trade-offs, and so it helps you limit how much you're impeding certain innovations in R and D. That said, I don't think it was actually executed well, and, you know, we won't name them, but certainly a number of folks I know who worked on this quite closely in the last couple of administrations, you know, feel similarly, right, for a variety of reasons.

This sort of became something that perhaps some of the individuals working on it did not intend. But all to say, I think it hasn't been carried out the most effectively, because the first Trump administration drew a pretty big yard, to continue with this metaphor. It got a lot of flak around 2017, 2018, 2019, when the Commerce Department was looking at potentially implementing a wide range of export controls on, quote, unquote, artificial intelligence.

And this got a lot of backlash because it was defined, I say in quotes, in sort of a broad way, and people were saying, well, hold on a minute. This is a really difficult dual-use situation, right? There are lots of civilian use cases for the exact same technology that's deployed in the military system. How are you gonna control it? You need to scope this down further. What about the stuff underneath it? Does AI, quote, unquote, include the cloud system that's running the software application?

So it kind of led to all of these issues, and then they pulled it back for that reason. The Biden administration, again, I think it was good to have this list, but the Bureau of Industry and Security, or I mean several agencies, but it led a lot of the work to help publish this critical and emerging technologies list. And this really was quite large. I do sort of detail the various categories in the book, but it really included a lot of different areas ranging from advanced engineering materials, to biotech, to hypersonics, to quantum, to semiconductors, and space.

And again, it's a challenge, right? 'cause you could zoom in on any one of those areas and certainly find or hypothesize ways that that technology could be used to hurt U.S. national security, right? But at the same time, if you look at the list on the whole, it sort of looked like you were just listing everything a venture capital firm might be interested in funding.

And so it gets back to this question of: it's good conceptually, but we really have to be perhaps a little more critical about what makes it on, and what bar do you need to clear for something to really be so sensitive that it's on that list. And we can get into how one might approach that in different ways, right?

Some people say if even one application reaches a certain very high level of criticality, let's say some foreign adversary military use, that's enough. Some might say it has to be the percentage, right? So if it's one tiny, you know, biotech thing that's really concerning and the rest of it’s fine, then that shouldn't be on the list.

So there's ways to debate it, but I think we have a long way to go. And of course, now very little of this is being thought about in terms of the current administration, so I'm not sure we're gonna fix that right now either.

Jonathan Cedarbaum: Justin, another concept you discuss in the context of the U.S.-China rivalry is the idea of decoupling between the United States and Chinese economies.

What does decoupling mean and is that something that you think is desirable, undesirable, or not really a possibility at all?

Justin Sherman: I think it depends on the sort of area of technology we're talking about. So decoupling—and I'll also plug, I had Sam Bresnick from Georgetown on the Lawfare Podcast a couple months ago to talk about this issue specifically, and that was very interesting and in part informed some of how I think about this.

But folks generally say decoupling to refer to a greater degree of supply chain disentanglement from China, technologically, financially, operationally. And the term, at least these days, including in the, you know, business community, that people tend to use to refer to something less than decoupling, but not business as usual, is de-risking. And so, okay, we're not going to be able to, let's say, completely move manufacturing out of China, but do we have a contingency plan in place? Can we limit manufacturing dependency in certain areas?

Or, no, we're not gonna be able to have zero cloud or telecom network touchpoints with that area of the Asia Pacific, but could we, you know, reduce it in some way, as an example. And same thing, right—there's a lot of software development in Eastern Europe and concerns around Russia and so forth, and same there. I don't wanna only be making this about China, but all to say, it's usually used in the China context.

As you note, is it possible? Who knows? We've all been debating this in D.C. for some time now, around—Is it possible to really pull yourself out of a Chinese tech supply chain in the current age, especially given the manufacturing dependencies that exist, and the degree of Chinese technology, also in third countries, which is really important, right?

If we're talking about decoupling your supply chain, quote, unquote, from China, one way to think about that might be the touch points to entities and organizations and individuals within China's geographic borders.

Another way to think about that, and these are not mutually exclusive, could be, well, where are our touch points with Chinese technology outside of China? Right? If there's a tremendous amount of Huawei deployment in Latin America, or if a number of the cloud systems in the Middle East, including in Saudi Arabia and the UAE and Qatar are running on Chinese infrastructure, how do I think about that?

So there are several ways to sort of approach this. But, and Sam and others I mentioned are much smarter on this than I am, I agree with the folks who say de-risking is the better way to talk about it, because can you really disentangle completely? Probably not, but you can try and limit your exposure in different ways.

Have folks been thinking about it as an end state? To answer the last part of your question, some have, right? Some have sort of said, once we do it, now we can move on and we're not as dependent. And that was kind of the conversation, I think, maybe at the first Trump administration's onset. But at this point in time, I think it's more about de-risking and mitigating and understanding that there are always gonna be risks of disruption or infiltration, but how do you limit those in a way that's, you know, aligned with any organization's risk tolerance.

Jonathan Cedarbaum: Got it. We've been talking about export controls. Let's turn to the second regulatory program you address. That's the Committee on Foreign Investment in the United States, known by its acronym, CFIUS. CFIUS has been around, of course, for quite a few decades now. How would you assess CFIUS’s performance in this shift toward transactions involving emerging technologies and data?

Justin Sherman: Yeah, CFIUS turned 50 last year. CFIUS has been quite involved. It's been looking at technology issues since its inception. I offhandedly mentioned a few examples, whether it's with Japanese semiconductor acquisitions during the Reagan administration or, you know, post-USSR collapse or around 9/11, and particularly concerns about foreign terrorist organizations and any ways that they, or their state sponsors, could acquire various technologies.

But the real bolstering of CFIUS's technology and data focus was in 2018, right, where you had FIRRMA pass, the Foreign Investment Risk Review Modernization Act, which really focused on a couple key areas for CFIUS going forward. So there were some non-tech components to that law, such as modifying a couple definitions and the scope of real estate activity that's covered, but it really focused on, okay, if CFIUS is gonna look at foreign investments coming into the U.S. in this day and age, we really need to look at, A, any investments focused on critical technology sectors.

So thinking back to that list we were talking about, of what's considered a critical technology; and thinking about touch points to critical infrastructure. So maybe it's not owning things, but again, if you're owning something that connects to something else, e.g., you know, you have foreign actors buying an investment in the power system that, you know, underpins a water treatment plant. Or, there was just a great Lawfare piece that Andy Grotto and Jim Dempsey did on all the operational technology in the private sector that military bases depend on. So you could think about ways that that is a concern.

And the third was data and looking at U.S. citizens’ data, which is interesting, right? Not just government data, but U.S. citizens’ data, as a potential area of foreign adversary concern.

So since then, we've had a couple examples of ways this has been used, notably that CFIUS had, at least for some moment in time, opened an investigation into what is now, and what was then, TikTok, but what right before that had been the U.S. company Musical.ly, bought by the Chinese company ByteDance.

So certainly a lot more people became acquainted with CFIUS then—CFIUS even got a mention in a late Succession episode. I won't spoil the specifics, but, you know, folks in our nerdy world were sort of tweeting, this is funny, we're entering the mainstream. But yeah, so TikTok's probably the most notable in the tech area, but there have been a few others and news reports of others.

Another one is Grindr, that I talk about, right? The gay dating app that Chinese investors bought and then the U.S. government required them to sell back to U.S. owners out of concern. Again, thinking of the 2018 FIRRMA law, the data was really the concern, the types of data you could get through that acquisition.

Jonathan Cedarbaum: In your chapter about CFIUS, and in some of your other chapters, you talk, I think with some concern, about what you call a whiteboard approach to gauging national security risks. What do you mean by that?

Justin Sherman: So this is borrowing both from my own firsthand experiences and then also, as I cite in there, the writings of various, you know, national security experts or former officials who have talked about these dynamics as well, in CFIUS and also beyond CFIUS. I sort of dubbed this the whiteboard security risk problem, which is that—

On the one hand, it's good to have that whiteboard when you're talking about these risks to game out, how could a foreign government, a foreign adversary do A, B, C, D, right? Because that's what the foreign adversary is doing. They are sitting there figuring out how can we be creative to get around protections or go under the radar, so to speak, and you know, take data or infiltrate systems and so on, in ways that the U.S. wouldn't expect.

So it's a good exercise, but you can also get carried away with it, right? And as I sort of quip, if you sit in an empty room with a whiteboard and a pen long enough, you can really come up with any scenario. And again, as I'm sure many listeners have, you know, you see this happen firsthand.

And again, I cite some specific examples that some folks who used to work on CFIUS have since written about. I mean, not revealing anything in terms of any particular matters, but just saying at a general level that, yeah, you know, sometimes most staff can sit down and say, this investment looks fine, and all it takes is one person in one agency to sort of speculate that there's a remote possibility that so and so could do blah, blah, blah, and then such and such would happen, and suddenly all of these transactions that most people think are low risk are getting blocked or impeded in some way.

So again, it's back to kind of the first thing I was saying, which is that the key is really to just have really good risk criteria.

And again, there are some programs, like export controls I mentioned, right? They kind of know what their criteria are. They have a set direction in many ways in terms of which technologies are of concern and so forth. And then there are others, like CFIUS, that have long been criticized for being a little all over the place.

And I think that's a place where, you know, having that decision framework could be potentially helpful for everyone involved.

Jonathan Cedarbaum: CFIUS is a program that looks at foreign direct investment in the United States. Another program you address that also looks at foreign investment, but is much less well known than CFIUS, is Team Telecom.

What is Team Telecom and what does it do?

Justin Sherman: So Team Telecom, which is no longer its formal name, it has this absolutely ridiculous acronym that I'm not gonna try and pronounce. I think it beats every, you know, congressional bill acronym I've ever seen. It's so, it's so absurd.

But Team Telecom's an interagency committee that has been around for a few decades now, and it is chaired by the Justice Department.

There are several other agencies involved, such as the Departments of Defense and Homeland Security, but its role is really to advise the FCC when the FCC is issuing a license or undertaking some other regulatory matter related to, I'll just say broadly, foreign telecommunications issues. That could be a company registered in Hong Kong that wants to partner on a submarine cable connected to Los Angeles. That could be a Dutch company that wants to take a small share in a U.S. mobile carrier, all the way to various space and, in certain areas, other issues.

So Team Telecom advises the FCC on what are the law enforcement or national security or even economic security issues that might be associated with that particular matter. So it's not akin to, let's say, export controls, where there are determinations made around particular types of technologies. It's a matter-driven program.

So when a company files, the FCC will refer to Team Telecom, they'll take a look, they'll give their input. Team Telecom doesn't, quote, unquote, make the decision; that rests with the FCC and therefore the president. But they're an important voice in terms of thinking about how do these risks manifest and how might telecommunications networks themselves be a potential source of national security concern.

Jonathan Cedarbaum: How would you compare Team Telecom’s performance to CFIUS?

Justin Sherman: So Team Telecom has had challenges akin to what CFIUS has had. CFIUS, and I think it scores the worst on this question, as I write, has long been criticized as a black-box program.

'Cause as I mentioned, some folks who work on it say, I'm not quite sure what's going on, and I work on it. And, you know, just on the company and sort of academic side, as I'm sure many have heard, there are a whole number of absurd stories around decisions that are made and then reversed, and just really a lot of—it's hard to understand what's going on, and again, recognizing that there's, you know, classification and other things. But there's a real transparency question.

So I say that to say Team Telecom had similar challenges for a bit of time. There was, in telecom policy world, an influential report that the Senate Communications Committee had done, I'm not gonna remember the year, maybe 10 years ago or something like that, that really looked deeply at Team Telecom. And they interviewed a bunch of staff. They interviewed former FCC commissioners, and even folks who were really involved with the process used that exact phrase. They said, it's a black box.

You had former commissioners saying, this is really hard to understand what's going on, and I run, you know, I have significant authority over this and I still don't quite follow. So there were a lot of efforts made after that. And I will credit the first Trump administration as the one that issued an executive order to do some of this.

But Team Telecom, probably from, you know, 2019, 2020 on, has really improved. Still some challenges, but it's really improved its transparency, has started issuing more public justifications, even if they're short, explaining why they made a particular decision, talking through, here are the bullet points of why, you know, in some real examples you can find online.

Why this project to potentially connect the U.S. to Hong Kong is a security risk, or here's why we're worried about a particular cable to Cuba, and what we perceive as the risk that that could get passed on to other countries. So they've really improved in some of those ways. But we're also seeing a lot of change to Team Telecom in the last year, and to the FCC, of course, and the cybersecurity issue set in the last year—

Rolling back regulations, a huge submarine cable rule. I'll keep plugging the podcast: you know, we had the lead FCC author on the podcast to talk about that after it happened. But really a lot of interesting changes, so it'll take some time to see what that'll do to these issues we're talking about.

Jonathan Cedarbaum: Excellent. The three programs we've talked about so far have been around for a while. The other four programs you addressed in the book are much more recent and initially created by executive order. Let's turn to those. One is the IT and Communications Supply Chain Regime that got off the ground with an executive order by President Trump during his first term, followed by implementing regulations during the Biden administration.

What was the core concern prompting the creation of that new program?

Justin Sherman: The core concern was essentially if we have a technology in the U.S., we're trying—someone wants to sell overseas, we have a way to say: Is that a security risk? Right? Export controls, or if we're building something within the U.S. and a foreign investor wants to reach in, or a foreign telecom wants to connect to it or something.

The programs we mentioned, CFIUS, Team Telecom, we have ways of dealing with that. The concern was, well, what happens with other kinds of technologies that are already in, or could be in the U.S. supply chain? What about a router? What about cybersecurity software? What about a mobile app? What about a connected vehicle component?

And again, not saying that all of these are national security threats, right? Or that just 'cause it says non-U.S. in front, it means it's a security threat. But yeah, you know what? We have enough information in the public domain around how, you know, China, Russia, Iran, others use those kinds of companies or actors in certain cases to spy, to infiltrate supply chains, et cetera.

So is there a way we have to deal with that? And the answer was not really. And so this led to, as you mentioned, in short, the ICTS (information and communications technology and services) supply chain program housed at the Commerce Department that was meant to give it that authority. This was a 2019 Trump executive order. It set up a Commerce Department-led program. It drew on IEEPA, the International Emergency Economic Powers Act. I'm amazed we haven't said IEEPA yet; it underpins a lot of these programs. But essentially, right, it was focused on the supply chain broadly.

And the last of those words is really important because it was quite broad. It was allowed to look at essentially entire tech categories of transactions. So not just saying this specific router version from this company is a risk, or this specific connected vehicle component with, you know, made in North Korea, I mean, probably not, that's not a real example, but you know, made in North Korea stamped on it, that's a risk.

But instead saying this entire category is a risk, or all Chinese-made routers of this type are a risk. So it had kind of a broad authority to look at the supply chain and say, ‘What do we wanna restrict? Maybe for some of these we have a mitigation policy, and then otherwise, you know, maybe we have the decision, and we do have the authority, that we want to actually expel this tech component from the U.S. supply chain.’

Jonathan Cedarbaum: Have we seen much activity under that program yet?

Justin Sherman: We've seen two main things. So the first was a decision in 2024 to ban the Russian cybersecurity company, Kaspersky, from operating in the United States. And Kaspersky had already, seven or eight years earlier, been banned by the Department of Homeland Security from use on federal agency systems.

The concern, as they said at the time, was: well, we've all seen the Putin regime take an extraordinarily repressive and controlling approach to the private sector. We're worried about the risks that Kaspersky could be used essentially to spy on systems, right? Because the Russians might say, oh, hey, look, everyone's got this great antivirus installed and you all click agree and give it all your files and permissions, and how convenient that we can coerce them, right? So that was kind of the thinking.

So this decision in 2024 broadened that to the U.S. at large. So Kaspersky stopped offering updates to its cybersecurity products. Companies switched over to vendors besides Kaspersky. I'll say ban, asterisk, because you can still go right now and look up, for instance, Kaspersky's threat feeds. Those were explicitly excluded from this ban. Which makes sense, right? I mean, there's very little reason, if not no reason, let alone the legal issues associated with banning, you know, reading the information that Kaspersky is putting out. But it really was kind of a wholesale ban on its products.

The second, just to hit quickly, was a connected vehicle rule. Much contested in industry, as many of these things are, but essentially looking at Chinese and Russian software and hardware components in connected vehicles and implementing restrictions on them for future versions of vehicles. It's not a rip-and-replace situation, but for future versions of vehicles, to prevent them from entering the U.S., again with supply chain concerns, with concerns about, you know, there was a hearing last fall before the Senate Armed Services Committee, there was talk of could you have vehicles driving by military bases or around bases and collecting intelligence, certainly espionage driven. But those were the two big moves.

I just wrote about this for Lawfare the other day, the Wall Street Journal has been reporting on essentially the gutting of this office in the last few weeks. And so not sure if they'll be doing much of anything going forward, but those are at least two pretty significant decisions that they had issued to date.

Jonathan Cedarbaum: Another of the recent programs you discuss is the cloud Know Your Customer or KYC program. President Biden mandated the establishment of that program through an executive order in 2021, and then the Biden administration issued draft regulations in January 2024. What would that system require?

Justin Sherman: This would require, as you said, KYC, so Know Your Customer rules, for IaaS, infrastructure as a service, providers specifically. So the shorthand, you know, folks sort of call it the cloud rule or the cloud KYC rule, but it's specifically within cloud for infrastructure as a service providers.

And it essentially, conceptually, I mean, I compare the two of them in some detail, but conceptually is taken from KYC in the financial sector. So, okay, how do we prevent money laundering and terrorist financing by requiring banks to do due diligence, to keep detective controls, documentation of who their customers are, what the transactions are? Can we apply that to the cloud?

And as the U.S. government has talked about, you know, and many others have written about, the driving concern was around, could you have U.S. adversaries exploiting those cloud ecosystems? Maybe there's a Chinese, you know, university that secretly works with the military that's training an AI system on a U.S. IaaS instance, right?

Or maybe they're launching cyber operations from it or something else. So that was kind of the driver: can we institute more documentation and more reporting requirements such that we're not curtailing most of these contracts, but if we see there's a contract in there that really concerns us, the U.S. government has the opportunity to see that and then tell the company to, you know, change or limit or terminate that particular contract.

Jonathan Cedarbaum: As I mentioned, and as you discussed in the book, that program got to the stage of proposed regulations but not final rules. Do you expect the current administration to finalize those rules?

Justin Sherman: I don't, I don't know. I mean, no one knows, of course, so I should say I don't, I don't expect that they would necessarily shred the proposed rule.

I could be wrong. I would, if I were to guess, I would say that probably the current administration will not move the rule forward and then whoever is in the next administration will have to make the decision about what to do with it.

And the reason I say that is for two reasons, right? One, as I mentioned, some of these programs are getting pulled back in general. But two, given the emphasis right now on deregulation, this rule really got, again, lots of companies complain about these rules. I even cite some stories of executives saying to me, I don't care about national security. So sometimes the criticisms are unwarranted, but this one, probably more than any other we talk about in the book, really got a lot of heat from industry.

Industry did not like this, said, you're gonna just blow back up the Snowden-era distrust of clouds as a backdoor for the government. Like, this is just gonna wreck our market share. This is gonna undermine trust, and this is not good. So given the current deregulatory emphasis, I would be, surprised isn't strong enough, I'd probably be shocked if this moved forward in any way, but I suppose we'll see.

Jonathan Cedarbaum: The last two programs we'll talk about are very much focused on China. One is the new set of restrictions on bulk data transfers to China. Those also began with an executive order, in this case in 2024, followed by proposed and then final rules that came into effect in the middle of 2025.

What do these restrictions say?

Justin Sherman: These restrictions are primarily aimed at how the acquisition of commercially held data and commercially brokered and sold data could be used by foreign actors. Again, the countries of concern in the rule include, you know, Cuba, Iran, North Korea.

But again, if we're talking about, just based on the news, which countries are carrying out tremendous volumes of cyber espionage and have large, sophisticated tech sectors, obviously China wins that ranking. So the thinking being, there's a lot of concern about China here, but concern that, as I quip in the book, why hack when you can buy?

And you know, as I've written about a lot, we have a huge unregulated industry of data brokers in the United States, which are companies that are in the business of selling people's data. So not just collecting it, but outright in some cases, even dumping it into Excel spreadsheets and selling it.

And I had done some studies funded at that time by the Defense Department looking at this problem and so on. And anyway, really concerned that you could get military personnel data, you can get bulk health data, genetic data, and then what could be done with that. If you combine it, let's say if you're China, with, oh, everything we stole from the Office of Personnel Management and everything we stole from Marriott and everything we stole from the Anthem Healthcare Company and so on, and really mash it all together.

So all to say, the regulations were basically in two buckets. One was if you're outright brokering, as the rule calls it, data. That might be data specifically on government personnel or facilities, which kind of has its own thresholds, or bulk data on the U.S. population at large, where those numbers are a little higher, a little different. But that's its own category, and you have some restrictions on it.

And then there's the second part of the regulation, which, you know, is a whole other conversation. But some folks criticize, and some of these I think are fair criticisms, that, you know, the rule was meant to be about data brokerage and now you're including what's called low-risk transfer. The second part of the rule refers to low-risk transfer.

So let's say you're on a federally funded health research contract and you're partnering with a non-U.S. entity. Well, what are you doing with the data? Whose data is that? Where's it going? So questions like that. And so, in doing so, it's not just data brokers and ad tech that are regulated, even though they're the intended focus of the program from my perspective. You also have banks and healthcare companies and others that are not, quote unquote, selling the data, but they're subject to the low-risk transfer rules.

So that's kind of what that program does. And we've, again, we've seen kind of a gutting of that. I mean, I worked a lot on that program in the last administration into the beginning of this administration, and that office as well has been gutted. So we'll see what happens going forward with enforcement.

Jonathan Cedarbaum: Yeah, indeed. I saw just this morning that Lenovo was sued by some private plaintiffs for violating these bulk data restrictions through some of their tracking of customer data and their handling of it.

Do you think that private litigation may step in to be the more frequent enforcement tool as compared to direct government enforcement?

Justin Sherman: It's quite possible. We've seen a few lawsuits so far. We also have a bill that was attached to the TikTok ban bill that became law, PADFAA, the Protecting Americans' Data from Foreign Adversaries Act.

And this was signed into law as this Justice Department bulk data program was being stood up. So you have the executive order driven program, and now you have Congress passing this law for the FTC to actually do some work in this area. I don't think it made any sense to do that, but that's a conversation for another time.

But all to say, you also have a statute that now says certain kinds of this activity are a threat to national security. And I definitely think that in the coming years, probably not too many, there will be plaintiffs' firms that look at that and say, okay, this is interesting. This is a clear hook to look at.

You know, how are our companies potentially sending U.S. citizens’ data overseas in ways that are problematic?

Jonathan Cedarbaum: The last program you address is one concerning outbound investment screening. That is screening of investments by U.S. individuals and organizations of certain kinds in China.

Draft regulations, you note, for that outbound investment screening regime were issued in July of 2024. Have those rules been finalized and gone into effect?

Justin Sherman: They have been. 2024 was the final rule, and January of 2025 was when it went into effect. So it has been, yeah. And the focus of this really is interesting, right?

'Cause most of these programs we've mentioned, or really, I should say, all these programs we've mentioned might be primarily focused on certain countries or certain subsets of countries, but on paper they apply to many of them, right? So even, as I just mentioned, let's say the primary concern of something like CFIUS is probably not North Korean investment in, you know, the U.S. tech sector, but North Korea is in there. And if for some reason something happens, that's an authority. The outbound program's different in that way because it focuses only on China, and then it focuses only, within China, on three particular sectors, such as microelectronics.

So it actually is quite interestingly scoped down. And I talked to, as with many of these chapters, you know, a number of the folks who really drove this work in the last administration, and again, that was kind of the thinking, right: Iran is so papered over with sanctions, that's not really a concern that U.S. businesses are pumping money there, right? Russia, same since 2022. Really not viable or something we're particularly concerned about. North Korea, ha, what tech sector, right?

So, you know, it's really China that we're concerned about. And I will say, to their credit, I will credit both the bipartisan China committee report that was done on U.S. venture capital investments in Chinese AI,

and then CSET at Georgetown, which also published a study, before the committee's, on this same topic. And a number of the folks who led this work had credited, obviously there's lots of other concerns going on, but had credited those two studies as really driving forward a lot of the public attention to this issue.

So rules are finalized, we'll see what happens or if there's any enforcement, but it's certainly interesting to have one of these amid all the growing, growing, growing scope really kind of zoom in on one particular area of national security threat.

Jonathan Cedarbaum: We've covered some of the specifics now of seven programs. Let's step back as we get toward the end of our episode and circle back to some of your larger themes.

If you had to write an elevator pitch for each of three different audiences—policymakers in the executive branch, members of Congress, and industry leaders—about some of the takeaways from your book, what would be the key points that you would want to drive home to each of those audiences?

Justin Sherman: So for executive branch, I probably would talk about two concepts that I mentioned. One, just in general, the issue of supply chain entanglement, right? More and more of our technology supply chains are really interconnected and interdependent with other countries around the world.

That second “I” word leads to the second point I'd make, which is weaponized interdependence. And as many listeners know, that's a wonderful set of concepts that Henry Farrell and Abraham Newman have built out, and I highly commend the several books and papers that really dig into this. There's also an edited volume they did that Adam Segal and some others contributed to, with tech things specifically. So you can actually go read, kind of, their framework applied directly to technology supply chains.

But I mentioned those two because we're seeing the administration really pull back from a number of these programs, whether it's, you know, gutting some of these offices or indicating at the FCC that the move is more towards this sort of trusted vendor idea. So rather than doing the same level of screening of every actor through Team Telecom, we're gonna designate some of these as trusted.

As I constantly point out, the flip side problem with that is then the adversary says, oh, so you're not really looking over here. And then that's probably where they'll go.

So there's concerns there. But I'd also say, as there's this step back from these programs, I would kinda make the point that it's really important to understand how entangled and how interdependent these systems are, how that interdependence can be weaponized, to go back to Farrell and Newman's concept, because these are not decisions you can easily reverse, right?

It's not the case that you can kind of let these programs sit here for a year, then pick them back up and nothing's changed, right? This isn't, you know, pausing like a Netflix episode. I mean, this is a highly sophisticated set of regulations. Adversaries are not taking a holiday because the offices are not staffed or because they're distracted with other nonsense.

So I'm gonna start editorializing a little bit more here. That really would be my main point: look, on a lot of these issues, we're being told, we'll get to it later, right? We're gonna negotiate with China on trade first, then we will get later to these security questions. Or we're gonna deal with the ceasefire, supposed ceasefire in quotes, with Ukraine right now, then we'll get to it later.

Like, that second part is really troubling because of the entanglement, because of the interdependence, and all of the adversary tech that can sink its hooks in deeper in the interim. That was nowhere close to an elevator pitch, so I'll try to be a little more pithy for these last two.

So you also mentioned members of Congress and industry. For Congress, I would say we've, as I mentioned, in some areas done a pretty good job working with older legal authorities, relatively older legal authorities, applying them to some of these modern tech issues, right? Whether that's with CFIUS, whether that's with export controls.

And in plenty of areas, Congress has done a good job passing new legislation to update those authorities. FIRRMA is a great example with CFIUS, right, really giving it that authority to look at tech, to also look at non-controlling investments, which is huge, right? Because, and obviously I'm not saying anything, you know, non-public here or anything, but you can get scenarios where, let's say hypothetically, the foreign actor would say, oh, perfect, if you're only looking at transactions over 30% ownership, I'm gonna invest 2%, but my 2% is gonna say, asterisk, I want control of the data center, or I want access to the R&D lab, or whatever, right?

So, anyway, Congress has done a good job updating these laws. We need to keep doing that. So don't let only the executive branch drive, but think about statutorily authorizing some of these. A lot of these are executive orders, the bulk data program, Team Telecom. Let's put that in statute. Let's assign some resources to it. Let's give it some teeth.

Last, industry. So the main point I make is: government can learn from industry, and industry can learn from government. Probably a wonky, both-sides-sounding kind of conclusion.

But just to say, right, there are areas, again, having read those decades and decades of regulatory filings, having sat in conversations with executives who have made good points, and executives who have said the most insane things to me about how little they care about China or spying or whatever, where government can help you better understand the risks, right? Some companies are super sophisticated on these issues, including 'cause they hire former U.S. government experts or consult with leading academics, right? Some are not.

And so actually these programs are a way for the U.S. government to provide that expertise and say, Hey, you know what? We know you're not doing this intentionally, but this kind of investment or this kind of sale could actually be a real risk. So there's things they can learn.

Conversely, I do think, as I mentioned, government can be more transparent. You're not gonna provide the list of everything that concerns you. That's ridiculous. It also doesn't exist, right? It's constantly evolving. But it could be more transparent to help industry navigate these programs with less uncertainty.

'Cause again, at the end of the day, like we said at the outset, and as you mentioned a few times rightfully throughout, the goal is to both have tech innovation, advancements in healthcare, responsibly applying new technologies in new areas, et cetera, et cetera, at the same time as you have national security, right? It's both, and it's not either-or. And so the more that, you know, we kind of strike those right balances, I think the better off we can be. Pick your tech area, pick your buzzword area, right? But no matter what it is, that balance is gonna be really critical going forward.

Jonathan Cedarbaum: Excellent. Thanks very much Justin. It's been a pleasure talking with you.

Justin Sherman: Thanks for having me.

[Outro]

Jonathan Cedarbaum: The Lawfare Podcast is produced by the Lawfare Institute. If you want to support the show and listen ad-free, you can become a Lawfare material supporter at lawfaremedia.org/support. Supporters also get access to special events and other bonus content we don't share anywhere else.

If you enjoy the podcast, please rate and review us wherever you listen. It really does help. And be sure to check out our other shows, including Rational Security, Allies, the Aftermath, and Escalation, our latest Lawfare Presents podcast series about the war in Ukraine. You can also find all of our written work at lawfaremedia.org.

The podcast is edited by Jen Patja with audio engineering by Cara Shillenn of Goat Rodeo. Our theme song is from Alibi music.

And as always, thank you for listening.


Jonathan G. Cedarbaum is a professor of practice at George Washington University Law School, affiliated with the program in national security, cybersecurity, and foreign relations law. During the first year of the Biden Administration he served as Deputy Counsel to the President and National Security Council Legal Advisor.
Justin Sherman is a contributing editor at Lawfare. He is also the founder and CEO of Global Cyber Strategies, a Washington, DC-based research and advisory firm; the scholar in residence at the Electronic Privacy Information Center; and a nonresident senior fellow at the Atlantic Council.
Jen Patja is the editor of the Lawfare Podcast and Rational Security, and serves as Lawfare’s Director of Audience Engagement. Previously, she was Co-Executive Director of Virginia Civics and Deputy Director of the Center for the Constitution at James Madison's Montpelier, where she worked to deepen public understanding of constitutional democracy and inspire meaningful civic participation.