Cybersecurity & Tech

Lawfare Daily: Scott Singer on AI and US-China Relations

Kevin Frazier, Scott Singer, Jen Patja
Wednesday, July 10, 2024, 8:01 AM
Discussing whether AI has increased tensions between China and the U.S.

Published by The Lawfare Institute

Scott Singer, Co-Founder and Director of the Oxford China Policy Lab, joins Kevin Frazier, a Tarbell Fellow at Lawfare, to discuss AI in the context of ongoing and, arguably, increasing tensions between China and the U.S. This conversation covers potential limits on China’s AI ambitions, the durability of the current bipartisan consensus among U.S. officials on the China question, and the factors that may accelerate the race to artificial general intelligence between China and the U.S.


Please note that the transcript was auto-generated and may contain errors.



Scott Singer: We're talking about the U.S.-China AI relationship. This is really a multilateral and global issue in part because when you have other actors in the system who can fill in key gaps if you decide to use economic statecraft in a certain way, then what they do matters.

Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, a Tarbell Fellow at Lawfare, with Scott Singer, co-founder and director of the Oxford China Policy Lab.

Scott Singer: Everything about AI in general, there's these three critical inputs. We have the algorithms, we have the data, and we have the compute, the hardware that is going into training all of these systems.

And so after the export controls, the story coming out of China is that there is not enough compute to train these models.

Kevin Frazier: Today we're talking about AI in the context of U.S.-China relations. There is no shortage of hot takes on the U.S.-China relationship, many of which seem to be full of hot air. As the director of the Oxford China Policy Lab, Scott, can you walk us through some of the practical difficulties of doing China policy research? Why is it so hard to find reliable and timely insights on China?

Scott Singer: Yeah. Great question, Kevin. So there are a few questions. One is how good is your data? There are issues in terms of getting data from the Chinese government and how reliable it is quantitatively, but also, thinking qualitatively, can you get on the ground?

If you are able to get on the ground, are people willing to talk to you? There are ethical concerns, for example, around what it would mean if you reported something that then put someone in a position where they're unsafe. In the field of AI, which is one that I focus on, people are willing to talk relatively transparently, off the record.

We couldn't report a lot of what they were saying because there were only so many people in the field who were working on China and AI at the time. You would easily be able to figure out, based on what they said, who this person was. Then there's the issue of whether the people who are doing research in the West have the sort of understanding and knowledge that it takes to produce excellent research on China.

And I think here we get into questions of skills gaps and really deep understanding --- and this has really been a problem in the post-COVID world, where we've seen such a drop-off in the number of people who have been able to go to the PRC and develop that substantial expertise and those relationships on the ground to really understand granularly what's going on --- and so from the perspective of a China policy researcher, if you're in China, there are definitely constraints on the activities and research that you're able to do. But if you're on the outside, really understanding what's going on in a system that is really a black box is quite difficult.

Kevin Frazier: So talking about black boxes, obviously AI is one of the chief concerns in China policy right now, as you hinted at. Can you give us a sense of just how many people are actually in the weeds of China AI policy and U.S.-China relations with respect to AI? I think from an outside perspective, we'd imagine dozens and dozens of people with real expertise on these questions who can give us robust insights.

Is that the case, or what is the nature of the actual number of people who have good, reliable information on this important policy question?

Scott Singer: Yeah, it's a great question. It's a little bit of a terrifying story here. There's a lot of people who touch the U.S.-China AI relationship, because all of a sudden, if they're coming from the AI side, China has now come up in their work, and they have to navigate geopolitical realities.

And people who grew up with more China backgrounds are increasingly forced to navigate the question of emerging technology competition. And so there is a synergy of these worlds, but they were quite small to begin with. And in terms of people who are really developing the skill sets required to understand both the U.S.-China relationship and domestic Chinese AI development, understanding both countries' contexts, the diplomatic side and the technological side, I would estimate that among U.S. allies and partners, the number with deep expertise is less than 10. That would be my estimate, at least outside of government. Inside government you have people who are literally engaging in negotiations and figuring this out, and there's probably a lot more variation country to country.

But in terms of who people are reading and who the sort of experts are in this field, it's a shockingly small field.

Kevin Frazier: So we can't even field a football team. I'm talking soccer for the American listeners. We can't even field a football team of outside AI researchers with a specialty on China. That's staggering.

And as you said, very concerning. So how is the Oxford China Policy Lab trying to address that shortage? What does that look like in terms of increasing the number of folks who can reliably contribute to this space?

Scott Singer: Yeah, so I would say there's a few ways. One is really upskilling people. For people who have really robust backgrounds, in our case mostly with China, providing them with basic training in what is happening at the frontier of AI and emerging technologies, and having them figure out how to leverage their backgrounds, is really, really critical. Another part of what we do is network construction, understanding who the different actors are in the space. There's a question we get a lot, which is, is it really important to have someone who can speak both languages, both Chinese and AI, super fluently? Or is it more important to have people who each have one of those skill sets but are in the same room and in contact with each other?

And the answer is probably both. And so a lot of what we try to do is bridge divides between experts who are filling in different areas of the ecosystem with policymakers who are forced to make really challenging decisions with a lot of limited data. And then, yeah, the last thing that we try to do is basically train the next generation.

So identifying talent when they're young, and identifying promising avenues for who could have impact tied to the U.S., China, and AI. A lot of our theory of change is really understanding the global context of U.S.-China AI competition, understanding that AI and its supply chains are deeply interconnected with the rest of the world.

Diffusion of AI technologies is going to take place not just in the U.S. and China, but everywhere. And the stakeholders are not just going to be the great powers. And so building a sort of global community of people who are thinking about this is really important.

Kevin Frazier: So given that it's hard to get at the key fundamentals of the current U.S.-China relationship and Chinese capacity with respect to AI, there have been a lot of conversations where folks just seem to be throwing out assessments of how the relationship currently looks and what China may or may not be doing with respect to AI. One of those general themes, I'd say, would be that we've potentially hit near rock bottom in U.S.-China relations. Where do you stand on that question? Are we at rock bottom? There seems to be plenty of evidence if you look at the ongoing trade conflicts, or at South China Sea tensions. You could certainly make a compelling case. Where do you land on that narrative?

Scott Singer: Yeah, so I think it depends on the time horizon you're looking at.

If we're talking about the last 50 or 100 years of U.S.-China relations, we're definitely quite near rock bottom. Rock bottom would probably have been around 2022 and 2023, with Nancy Pelosi's visit and the spy balloon, when there was very little dialogue going on between the U.S. and China in general, quite literally at the level of diplomatic meetings.

If we're talking about what's been happening over the last year and a half, we've seen marginal improvements: we have a few more students from the U.S. in China; from other Western countries, we see people who have greater access to visas; there are more conversations going on around emerging technologies and other pressing issues. But that marginal improvement is so small compared to the overall deterioration of the U.S.-China relationship. Which is not to say that there were not strategic reasons on both sides for what happened, or that the global context in which this relationship is occurring is not extremely challenging. But I think it could also get much worse, which is the scary thought.

If we think about the next five to ten years and what happens with Taiwan, there's been really excellent research coming out of CSIS, for example from the China Power Project, which examines not just the possibility of an invasion of Taiwan, which for many China analysts focused on the Taiwan Strait is the worst-case scenario.

But what happens if we see, for example, a quarantine, which could potentially draw the U.S. into a Taiwan conflict in ways it wasn't expecting, and maybe move the timescale sooner? What happens if, depending on the pace of AI development, things get way more intense and we have incentives to arms race?

There's a lot of really clear ways that the technologies that these states possess and the geopolitical motivations that they may have could make this relationship much more dangerous really fast.

Kevin Frazier: So obviously, a good way to hopefully relieve these tensions would be to understand a little bit more about why things have gotten so bad.

You pointed out that COVID obviously wasn't great for the U.S.-China relationship, but you've also flagged some potential misconceptions about why we've gotten to this point. What do you think are people maybe putting too much weight on when it comes to understanding why we are where we are right now?

Scott Singer: I don't know if the entire narrative is elite driven.

Like I think a lot of people said Donald Trump and Xi Jinping were really driving the sort of structural race for critical technologies, and also that we're looking at great power competition and the rise of China. Trump was really the one who set things in motion, and Biden continued it.

And Xi was the one who has global ambitions. But I think that actually a lot of this was more structural, where you see a rising power. And we know that oftentimes rising powers want to set global standards, for example, and have regional influence. That makes sense. On the flip side, I think that what the Biden administration shows us is that there's a bipartisan consensus on China, and it makes us think about to what extent individual leaders matter.

It seems individual leaders could matter in terms of how composed they are and to what extent they could be volatile in a crisis, but perhaps less in terms of U.S. left-right dynamics. And so I think that what we misunderstand is that this is not a Xi Jinping thing or a Donald Trump or Joe Biden thing at its core.

Those actors matter a ton and they could shape, the trajectories of conflict should they occur. But I don't think that this is like an individual, top-down mechanism that's driving the competition.

Kevin Frazier: So broadening that conception of the potential sources for these tensions and thinking about, for example, the publics in these respective countries who may be animating some of these narratives or exacerbating or perhaps improving relationships, how have your own studies informed that perspective?

What have you seen on the ground about how publics often play a role in shaping these national security geopolitical conversations?

Scott Singer: Yeah, in general, I think that we have this conception that in the U.S.-China relationship, AI technologies are being developed in situation rooms and boardrooms. That really what matters is those few experts, the 10 who are thinking about U.S.-China AI, plus the executives of the most important AI labs and people in the National Security Council, who are really the ones who are going to be calling the shots.

We actually know, from the history of national security policy and emerging technology policy, that's not actually always the case. So a really interesting historical example of this would be the space race, where you actually see the U.S. leaning into the fact that the Soviets had launched Sputnik to drive, essentially, the creation of DARPA, and JFK's goal to land a man on the moon and have that first person be an American.

That was really a public-driven mechanism. And so it's interesting because the public is this sort of very weird and strange stakeholder: they don't have access to information, and they probably think differently about these questions than the policymakers do, but public opinion can still constrain policymakers.

So that's a lot of what my own research focuses on and explores: how, across these different publics, we might imagine that public opinion could be constraining policy decisions.

Kevin Frazier: We're going to focus on the public understanding of this relationship and also the idea of either American superiority or Chinese superiority, mapping that onto the AI context.

How do you see the publics of these respective countries playing a role right now in the potential for some sort of racing dynamic between the countries to develop artificial general intelligence, or AGI?

Scott Singer: Yeah. So I think that there's maybe a few good reference points for this. One would be a paper that came out earlier this year by Josh Kertzer at Harvard and some others that explores essentially perspective taking: how you understand one another's actions in the context of, I believe it was, a South China Sea scenario. And basically what we see is this idea you see all the time internationally of a security dilemma, where one side's actions in the context of the South China Sea make the other less secure.

And I think in the case of AI, this can play out in a few different ways. One is that you basically just plug into this sort of rally-around-the-flag effect and you basically say, America has to win. And we see this happening a bit already. Right now it's happening a lot at the level of Foreign Affairs magazine, where you see top thinkers on China saying America has to win, and here's what winning means.

And so I think you can see it playing out like that. I think it also matters quite substantially, especially in the U.S., for industrial policy. These policies are super, super expensive; they cost billions of dollars. And so to what extent the public is actually willing to double down and pay for it is a really important question when it comes to AI, and especially, potentially, for compute.

I think that you could also see, depending on how these dynamics play out more broadly, what Josh describes in terms of security dilemma and arms-racing dynamics, where all of a sudden Chinese development becomes not just this niche thing that maybe you and I are talking about; maybe my parents or my cousins, people who aren't thinking about Chinese AI every day, are all of a sudden thinking about Chinese AI.

And so it's really as issues become much more salient and granular in the public eye that we might expect presidents and other national leaders to care the most. But it's really interesting when you look historically, because there was this idea in public opinion, in the academic literature, that there were two presidencies.

There was a president who cared about domestic politics, about economic issues, the things that we think drive elections, and there was a president who was in charge of foreign policy, and these issues were really separated. What we see now in the world of AI is that these worlds are increasingly intertwined.

So if you think, for example, about TikTok: when I call my friends at home, who are really smart people who don't think about AI all the time, they're just like, I love TikTok. I love using this app. This is so much fun. I don't really care if China sees my data. Why is the U.S. government trying to force a sale or block it? And so the two presidencies merge. And even without that sort of convergence, whether it be through dual-use concerns or other apps with data-transfer concerns, you also see, for example, Richard Nixon during Vietnam constantly asking for polling, and FDR in the lead-up to World War II constantly asking for polling, way before we had access to the amount of data that we do now. So public opinion is always there. And I think that in the U.S., where understanding of China is generally poor and things move in moods, it's a critical thing to pay attention to.

And also right now there's really a strong bipartisan consensus on China within the U.S. You look at the Select Committee on the Chinese Communist Party: we think about Congress right now as being this really bifurcated place, but the Democrats and Republicans on this committee seem to be like friends.

They seem to get on super, super well. And I think there's a chance that continues, including on the AI question. But as AI becomes more politicized, as potentially China becomes more politicized, then public opinion could matter there as well.

Kevin Frazier: So would it be a fair assessment, then, that right now the race narrative with respect to AI and the U.S. and China is predominantly elite driven, but perhaps we could see this become a public concern that really makes that race more worrying, particularly for those who fear AI and AGI becoming a major issue?

Scott Singer: I think it's both top down and bottom up. For example, when you get into the weeds of export control policy, small yard, high fence, most people are not thinking about that. But when it does come to things like your job, what happens if China takes our jobs, or concerns about democratic values, then those things become much more publicly salient and they feed back into the loop.

And so you see debates now on questions like biosecurity, where there are these elite concerns around what dependence on the PRC for certain biotechnologies and other parts of biosecurity supply chains means. But what the public takes from that is, oh my God, we depend on China for more technology, that's so scary.

And you fuel a narrative that already exists; there's pressure to understand that debate through a similar narrative. For both good and bad, there are analogies, and there are also differences.

Kevin Frazier: Yeah, I have yet to meet a random Joe or Jane on the streets of Miami who tells me, oh my gosh, can you believe those export controls?

So that day has not happened yet. I will call you if and when it does. But building off of export controls: the Biden administration recently issued draft rules banning, or requiring notification of, certain investments in AI and other emerging tech areas in China. This is meant to further U.S. national security interests and builds off of these export controls. Can you give us a sense of what these draft rules may look like in practice and how China has responded so far?

Scott Singer: This is a topic that has been discussed in the U.S. for several years now, this topic of outbound investment. And the idea is to basically constrain funding from U.S. persons, and I believe it covers AI, semiconductors, and quantum information technologies. The idea is basically that if you have U.S. investors who are putting money into the system, particularly connected to firms that may be engaging with the PLA, the People's Liberation Army, that's a bad thing and a U.S. national security risk. There have been questions in general around legal structures and how exactly you do this, but it seems like there's substantial progress being made here. So I think we're probably on a pathway to seeing, now that these preliminary rules are fully drafted, what this looks like in practice; enforcement, I think, is a question that we'll have to see down the line. And the other really big question, which is in a way similar to export controls, is what happens with allies and partners here. When we're talking about the U.S.-China AI relationship, this is really a multilateral and global issue, in part because, and this is a little bit different than export controls, when you have other actors in the system who can fill in key gaps if you decide to use economic statecraft in a certain way, then what they do matters too.

So what will, for example, the U.K. do on outbound investment? What about other actors in the Asia Pacific? What they do matters too, especially because these places in some cases are financial hubs where there might be substantial money flowing into China.

Kevin Frazier: So bringing in those other countries, reminding ourselves that the world does not solely consist of U.S. and China, how are other countries treating China as either a threat or a partner in this AI dynamic?

So obviously the U.K. has been very invested in AI. The EU also, at least from a regulatory standpoint, is very much committed to addressing AI. What has been the relationship between those third countries and this U.S.-China dynamic with respect to AI?

Scott Singer: Yeah, so I'll talk first about the U.K. because I think it actually plays a unique role and then I'll talk about maybe other European countries and then third countries outside of Europe.

So the U.K. is quite interesting because the U.K. was the country that had the first AI safety institute. It was the place where Bletchley happened. And so much of the U.K.'s unique role and comparative advantage was that it was able to facilitate a conversation and bring China to the table in a way that, if that initial AI safety summit had happened in the U.S. or in China, it's hard to imagine for political reasons that there would have been a statement signed, as one first example.

There's also this question of, as we move into dialogues, what role the U.K.'s technical experts will play in informing both U.S. and Chinese standards on AI and building consensus where it otherwise might be difficult. And so in terms of where this conversation goes, I think the conversation on AI in the U.K. is really robust, due to just the level of technical expertise that you have here and the amount of talent that places like the AI Safety Institute have been able to recruit.

It's very impressive. I think the China side of the story is very different. China expertise in the U.K. is severely underfunded. What that means is that the FCDO, and I don't know if it's the FCDO or the entire U.K. civil service, had something like fewer than 45 people with C1, which is professional working fluency, in Chinese. So if you're imagining who your China analysts are and whether they're able to engage, that becomes really challenging.

And so the AI relationship between the U.S. and China seems to be maybe a unique area where, because of its AI expertise, the U.K. is able to develop a policy that may end up being substantially more robust than in other areas. I think in other areas, the U.K. is trying to balance out a bunch of different relationships.

One of which would be the U.S. relationship, its special relationship, upon which it relies economically and also, frankly, politically. You then have its relationship with the EU, which has been fragmented, but they're still super, super close. There are other emerging relationships; you think about potentially U.K.-India, which is going to be interesting in the future. And how the U.K. navigates these tensions, both in AI but also more broadly, is an open question.

The EU is an interesting actor because we think of the EU as a monolith, but European countries have very different interests as they relate to China. You might have France, on the one hand, to the chagrin of the U.S. and others, perhaps trying to negotiate on AI regulatory and governance issues.

On the other hand, you have other countries in the EU, like Lithuania, that have super poor relations with China in general. You have acute interests in places like Germany, where they're trying to figure out the question of electric vehicles. The question of overcapacity is a really critical one in general right now, as well as what to do with Chinese electric vehicles.

And that is a pressure that is felt in Germany in a way that you would not feel, for example, in Spain. And so there is the broad regulatory body of the EU, but in terms of individual country interests, it seems like a very different question. And if we're looking to the rest of the world, the question is not necessarily AI safety.

In fact, this is not really a concern for a lot of these countries. If there is a concern about AI, the primary one is how can we make sure that we are able to enjoy AI's benefits? And this is an area where we hope in the future that the U.S. and China are able to race to the top, so to speak.

And by that I mean compete to deliver the best products and ensure that AI is being used to lift people out of poverty and provide employment opportunities, as opposed to, for example, exacerbating global inequality or causing the environmental harms that might be tied to AI.

And so for these third countries, it's at this point, less about frontier safety and much more about diffusion.

Kevin Frazier: So continuing with this race metaphor, we can get a sense of what's under the hood for the American race car, right? OpenAI is obviously leading. We've seen Anthropic develop increasingly sophisticated models.

Meta's open models have been regarded as pretty robust. What's under the hood in China? Is there a there there or are they running on one of those like toddler cars where it's actually just little feet powering the engine?

Scott Singer: Yeah, it's a great question. And there are some really amazing experts who are doing really great research on this.

So for example, we have Sihao Huang, who is now a non-resident expert at OCPL and who does really excellent research on this. And I'm definitely going to borrow insights from him, to use a line from him here, which is that China is really a leader in AI without the G. So thinking about particular use cases for AI and particular industries where it's going to lead. I think most people on this would say that China is somewhat behind the U.S. It was before, and perhaps even more so now that it's compute constrained following the export controls. With everything about AI in general, there are these three critical inputs: we have the algorithms, we have the data, and we have the compute, the hardware that is going into training all of these systems. And after the export controls, the story coming out of China is there is not enough compute to train these models. And so it seems to me that China is not necessarily at that same frontier as your OpenAIs and Anthropics. But it doesn't mean that China can't have a significant role to play both domestically and also internationally, because really advanced LLMs and foundation models, even if not at the very forefront, can be very powerful.

And it gets into the question of how accessible they are to these other actors, and how they're diffused into other systems and emerging technologies.

Kevin Frazier: And like politics at Thanksgiving dinner, one question that companies haven't been able to avoid is this U.S.-China relationship. And so I'd love to get at how companies like OpenAI, for example, and Meta are playing a role in shifting the capacity of these different countries.

So we're talking in late June of 2024. We learned recently that OpenAI has announced that it's going to ban access to its services in China, which some allege is going to set the scene for some internal industrial shakeup in China and perhaps lead to an increase in their domestic AI efforts or perhaps create more space for Meta to come in with their open source models.

How should we think about this new complex map of U.S. formal governance relationships, Chinese formal government relationships, and then now these companies who find themselves sitting in the middle, trying to decide to what extent they're going to move things in favor of one or the other?

Scott Singer: I think many U.S. tech firms and Chinese tech firms have been caught in the middle of this for many years now. If we're thinking about the watershed moments in the U.S.-China tech relationship, you start with a company like Huawei. Huawei and 5G was really that first case that changed the way that Americans think about critical dependencies and emerging technologies.

We see in general that U.S. social media platforms have long not been able to operate in the PRC. ChatGPT was banned in China before OpenAI decided to pull out. And so, does this create new opportunities for LLMs and chat interfaces within China? I think those opportunities already existed, frankly.

I think in general, we see a domestic Chinese ecosystem where there is a lot of domestic development and production. And yes, there's reliance on other actors in the supply chain, other countries, but a lot of this is really happening domestically and internally. And so I don't think that this necessarily is going to represent a radical change. At the company-by-company level, you might see some changes, but there's the broader ecosystem of firms navigating these tensions and basically being forced to choose: are you going to be an American or Western entity, or are you going to be a Chinese entity? Or do you have product stratification, where you have one product that is exclusively for Chinese audiences and another exclusively for the rest of the world?

It's always been difficult for these firms to comply with Chinese regulatory rules, especially when you get into issues around AI: the question of what content you are generating, and whether that content is going to be sensitive in the eyes of the Chinese Communist Party. And that has always been a concern.

And so I think that OpenAI's space in China has always been pretty constrained, at least as long as CCP regulators have been thinking about it.

Kevin Frazier: And you mentioned earlier the three typical lenses through which to think about AI governance: data, algorithms, and compute. The fourth leg of that stool, if you will, would be talent.

And obviously the U.S. has long held a number of experts in AI, but what does that look like on the China side of things? What is their AI talent pool like? And to what extent is that a boon or a bust for them when it comes to trying to keep pace with the U.S.?

Scott Singer: Yeah, Remco Zwetsloot has done a lot of excellent research on this. So to bring in some of his really important insights: China has been really investing in and producing STEM graduate students, both at the PhD level and at the master's level, and it is far outpacing what the U.S. is doing here. I believe that over the last five years China has 8x-ed its output of STEM graduates --- I don't know if it's PhDs or master's and PhDs combined --- from 10,000 to 80,000. In the U.S., the number has only doubled. So the U.S. is still increasing its talent pools, particularly in STEM, but the PRC is ramping up its technical experts much faster.

And as we think about the characteristics that are going to be very important within the talent competition, a lot of this is going to be intangible: having that top-of-the-line STEM talent that is able to figure out how best to deploy these algorithms and data, and build compute.

And so I think there is a substantial need to scale up our emerging-technology talent, and also to make sure that, when we do so, it's intersectional with geopolitical concerns. Do you have people who are able to understand the technical side of AI and who also have at least a basic grounding in geopolitics?

Do they have policy experience? Do they understand how the levers work? It's all well and good if you have technical experts, but if they don't understand what levers are at their disposal in policy, then their ability to have impact in that particular domain will be quite limited. So that would be my very bare-bones assessment of the talent competition.

And then it's also a question of what's happening in third countries, because third countries and their talent are going to be critical for AI governance too. So which countries, again, are producing leading STEM PhDs, and which countries are investing in their China-facing capabilities? You get a very varied map.

Kevin Frazier: And we've tried to be pretty dang descriptive for this first part of the podcast. Now I'm going to nudge you slightly into speculative land. What are some risks or potential scenarios that you're particularly concerned about, or that are on your radar, for perhaps pushing already high tensions to the next level of fervor?

Scott Singer: Yeah, so I would say a really big concern that I would be thinking about right now is structural decline, a structural decline in the relationship where things get worse, combined with what I call sleepwalking. There's this great book by Christopher Clark called “The Sleepwalkers,” which basically asks: why did World War I break out?

And you have all of these crazy actors in World War I. You have leaders who are cousins having conversations on the phone. Phones or telegrams, I don't remember which. But basically what you have is a situation where no one wants war, but war breaks out. Or look at a more recent historical situation like the Cuban Missile Crisis, where we didn't see a war, but if certain military officers had read decisions differently or not answered a certain call, then maybe we would have seen a war break out.

And what really concerns me is a potential sleepwalking scenario over Taiwan or something else. I think another general concern is just a crowding out of dialogue. You see it, I think, much more in places like the U.K., which have a less defined center for the U.S.-China debate. But there were definitely calls in the last year to cut off dialogues with China in general. And if you cut off dialogues in general, you really ramp up the possibilities of misunderstanding. You can see things going really badly that way. And frankly, I think the status quo of U.S.-China dialogue is already really bad.

The U.S. ambassador literally went on the record a few days ago talking about how frustrated he was that U.S. engagement in China has been, in his view, undermined. If that is the status quo of your relationship, and you're trying to figure out these really intensely difficult questions, like how to regulate artificial intelligence, but you're not willing to talk, or the people who do go in and try to engage can't make significant ground, or there are concerns about whether these people are spies or whatever else, it creates a space where AI's capabilities advance more quickly than our conversations about all the risks people are worried about.

Kevin Frazier: So one final speculative question: you mentioned earlier that there's a seemingly bipartisan consensus on how to approach this issue. We've seen folks like Senator Schumer and his bipartisan AI working group tout the need for the U.S. to lead in AI innovation. You mentioned earlier senators generally agreeing on being relatively skeptical of increased collaboration with China.

Do you see the relationship changing much regardless of how the presidential election goes in November? Or do you think this will just be a maintenance of the status quo?

Scott Singer: Yeah, I think it's a really challenging question, and we won't know until we see who wins and what the politics are behind that.

I think on the AI side, things are going to get politicized and polarized much more quickly, perhaps, than other parts of China policy. And it's because so much of AI is intersectional. So if you're concerned, for example, about AI and privacy, that already has a quite developed and clear constituency whose interests are going to matter.

And if you're talking about regulatory interests, you might see lobbies pushing for AI regulation in certain areas --- you could imagine AI technologies being used on the border for immigration, as they're being deployed now. And if you have AI crossed with immigration, then that is obviously going to be a really critical question as well.

I think AI coordination, or winning against China, however you want to frame it depending on your perspective, is probably more likely to outlast the election. But if you're thinking about a Trump administration, I think you could see Trump go either way when it comes to questions of AI safety and coordination.

The other thing I would say is, in international relations, there’s this idea of acting against type, which is a very fancy way of saying: Richard Nixon was a really hawkish dude, and he went with Henry Kissinger to China and really set the pathway for establishing diplomatic relations.

So sometimes it takes the person you would least expect to broker a peace or an agreement to actually make progress on a particular issue. Who knows what the next Trump administration or another Biden administration would bring. But sometimes it takes precisely the most hawkish person in the room to broker a very challenging agreement.

Kevin Frazier: We will have to leave it there. Thank you so much for joining, Scott.

Scott Singer: Thanks for having me, Kevin.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and The Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at The podcast is edited by Jen Patja, and your audio engineer this episode was Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.

Kevin Frazier is an Assistant Professor at St. Thomas University College of Law. He is writing for Lawfare as a Tarbell Fellow.
Scott Singer is a PhD candidate in International Relations and the co-founder and director of the Oxford China Policy Lab at the University of Oxford.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.

Subscribe to Lawfare