
Recent Developments in AI and National Security: What You Need to Know

Christopher Gorman
Thursday, March 3, 2022, 11:01 AM

Here’s an introduction to the revolutionary implications of artificial intelligence for national security, and a summary of recent articles in the space.

Code on a computer screen. (https://www.piqsels.com/en/public-domain-photo-jcunm)

Published by The Lawfare Institute in Cooperation With Brookings

What Is AI and Why Does It Matter for National Security?

Artificial intelligence (AI)—the ability for a machine to maximize its chance of achieving its goals by acting based on data collected from its environment—is emerging as the defining technology of the 21st century. Forms of AI have existed for decades. But the exponential increase in digital data, expanded computing power and advances in machine learning algorithms have resulted in significant adoption of AI across the public and private sectors over the past decade. AI systems are increasingly able to deliver results that exceed human performance for some of our more advanced biological capabilities, such as vision, hearing, language translation, driving, game playing and complex decision-making. Most modern AI systems rely on machine learning: The systems learn and adapt by using statistical models to draw inferences from patterns in data. A particular type of machine learning, called deep learning—which utilizes artificial neural networks that are inspired by the human brain’s distributed communication nodes—has been responsible for much of the progress in AI over the past decade. Deep learning has made it possible for humanity to automate many more activities than previous technologies allowed, resulting in substantial productivity gains across the public and private sectors. 
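The machine learning loop described above—a statistical model adjusting itself until it fits the patterns in its training data—can be made concrete with a toy sketch. The example below is purely illustrative and assumes nothing about the systems discussed in this article: a tiny two-layer neural network, written with NumPy, that learns the XOR pattern from four examples via gradient descent.

```python
# Minimal sketch of deep learning: a tiny neural network learns the
# XOR pattern from data. Illustrative only; real systems are vastly larger.
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs and the XOR pattern to be learned.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of weights: 2 inputs -> 8 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    return h, sigmoid(h @ W2 + b2)  # network output

_, p = forward(X)
initial_loss = float(np.mean((p - y) ** 2))

lr = 1.0
for _ in range(10000):
    h, p = forward(X)
    # Backpropagation: push the prediction error back through both layers.
    d_out = (p - y) * p * (1 - p)
    d_hid = d_out @ W2.T * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hid); b1 -= lr * d_hid.sum(axis=0)

_, p = forward(X)
final_loss = float(np.mean((p - y) ** 2))
print(f"loss before training: {initial_loss:.3f}, after: {final_loss:.3f}")
```

The network is not programmed with the XOR rule; it infers the rule from examples, which is the property that distinguishes machine learning from earlier rule-based automation.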

These recent advances in AI have revolutionary implications for national security. The National Security Commission on Artificial Intelligence (NSCAI) stated that AI is “world altering,” predicting that AI technologies “will be a source of enormous power for the companies and countries that harness them.” A Belfer Center report commissioned by the intelligence community determined that AI has the potential to be a transformative national security technology—on par with nuclear weapons, aircraft, computers and biotech—in large part due to its ability to drive military and information superiority. In the military context, AI is already being used to automate weapons systems and enable predictive maintenance by estimating failure likelihood on helicopter engines. Intelligence agencies, in particular, stand to benefit from advances in AI given that recent machine learning developments have centered around automating the analysis of images, audio, foreign language and other data. 

World leaders appreciate the transformational national security impact of AI. In 2017, Vladimir Putin declared that “whoever becomes the leader in [artificial intelligence] will become the ruler of the world.” That same year, the People’s Republic of China stated that it aims to lead the world in AI by 2030. The United States has made AI and national security a bipartisan priority. It established the NSCAI, launched the National Artificial Intelligence Initiative and created significant AI efforts in the Department of Defense and the intelligence community. Focus on AI advancement has not been limited to the U.S., China and Russia; more than 30 other countries have published national AI strategies. 

Beyond its direct national security applications, AI’s enormous economic potential further reinforces the technology’s criticality to long-term national power. Recent advances in deep learning have spurred a huge amount of private-sector investment in and adoption of AI technologies. Venture capitalists invested more than $75 billion in AI startups in 2020, according to a study from the Organization for Economic Cooperation and Development. A McKinsey global survey reported that 56 percent of respondents said their companies are using AI, with more than a quarter of those respondents stating that at least 5 percent of earnings are attributable to AI. Another study found that most companies accelerated AI adoption during the coronavirus crisis, with 86 percent of firms stating that AI “is becoming a ‘mainstream technology’” at their company. AI is also being applied to the world’s most pressing challenges. When the pandemic began in 2020, global private investment in AI projects in the “Drugs, Cancer, Molecular, Drug Discovery” focus area increased 450 percent year-over-year to $13.8 billion, according to a Stanford report.

These AI investments have already begun to generate significant economic benefits. Tesla promises “full self-driving capabilities” as a standard feature via its Autopilot AI system. AI-enabled voice assistants such as Amazon Alexa, Apple Siri and Google Assistant are used by more than 100 million people in the U.S., with 71 percent of consumers preferring to search with a voice assistant over physically typing a search. Amazon alone has deployed Alexa on hundreds of millions of devices. AI is also helping companies significantly with internal efficiency; for example, Alphabet realized a 15 percent reduction in data center energy use with a deep learning algorithm developed by AI subsidiary DeepMind, resulting in significant cost savings and environmental benefits. Outside of technology firms, the Washington Post is using an AI-based system, Heliograf, to provide “large-scale, data-driven coverage of major news events” through AI-written stories and AI-powered audio updates. And J.P. Morgan, alongside other financial services institutions, is employing AI for a range of use cases including anomaly detection, intelligent pricing and document analysis. Advances in AI adoption across industries have led management consultancies to estimate that AI could deliver additional economic output of more than $13 trillion to almost $16 trillion by 2030. The immense economic potential of AI could lead to drastic changes in the global balance of power.

Below is a news update on what’s happening in AI and national security. It provides an overview of the latest news, research papers and government activity in the area over the past three months.

What’s Happening in AI and National Security: Dec. 1, 2021, to Mar. 1, 2022

AI and National Security News and Commentary

Cynthia Strand, The Perfect Storm of Technology, Intelligence and AI, The Cipher Brief (Feb. 21, 2022). 

Strand, a former CIA executive, shares her perspective on the importance of AI, machine learning (ML) and natural language processing (NLP) to great power competition. Strand identifies AI use cases across core intelligence activities and support functions, and notes that acquisition and authority to operate are two critical barriers to implementing AI capabilities at mission speed. 

Diana Gehlhaus, To Get Better at AI, Get Better at Finding AI Talent, DefenseOne (Feb. 16, 2022).

Gehlhaus recommends the Defense Department work with the military services to establish AI-specific goals for cultivating technical talent. While recent research suggests that Defense is a top employer of technical talent, the department is underutilizing its expertise and having difficulty cultivating a skilled corps in AI. Gehlhaus thus makes three recommendations for workforce empowerment: measurable, service-level goals for AI expertise; role-agnostic AI education and assignments; and support to “AI rock stars” to facilitate AI adoption and technical talent development. 

Will Griffin, America Must Win the Race for A.I. Ethics, Fortune (Feb. 15, 2022).

Griffin identifies how recently enacted federal law defines AI ethics and recognizes its importance. He then shares three recommendations for embedding ethics into AI deployment: create an AI use case archive, harmonize existing AI ethics vetting frameworks and develop a public communications strategy for AI ethics. 

Jason Sherman, Russia-Ukraine Conflict Prompted U.S. to Develop Autonomous Drone Swarms, 1,000-Mile Cannon, Scientific American (Feb. 14, 2022).

Writing before Russia’s 2022 invasion of Ukraine, Sherman traces many recent U.S. military efforts to develop AI and autonomous systems to a U.S. Army study of Russia’s 2014 invasion of Ukraine’s Crimea and Donbas regions. The article notes the importance of AI and autonomy research for the development and deployment of drone swarms. 

Kayla Goode and Dahlia Peterson, The US Can Compete With China in AI Education—Here’s How, The Hill (Feb. 4, 2022).

Center for Security and Emerging Technology (CSET) analysts Goode and Peterson urge the U.S. to adopt coordinated AI education and workforce policies to maintain its competitive edge with China. They argue that China’s AI education system, where the most popular undergraduate major is AI and there is mandatory AI coursework in high school, dramatically eclipses comparable U.S. initiatives. The authors recommend a federally led national endeavor to coordinate AI education, training and workforce education policy.

Caitlin M. Kenney, Navy Puts AI, Unmanned Systems to the Test in Five-Sea, 60-Nation Exercise, DefenseOne (Feb. 3, 2022). 

The U.S. and nine partner nations conducted a large naval exercise that involved 80 air, surface and underwater unmanned systems. The unmanned systems ran through 14 training scenarios to test their utility for missions like rescuing overboard sailors and area monitoring. The unmanned system testing was part of a larger joint naval exercise between the Fifth and Sixth Fleets, which lasted 18 days and involved 9,000 participants from 60 nations.

Amy Zegart, American Spy Agencies Are Struggling in the Age of Data, Wired (Feb. 2, 2022). 

Zegart, in a story adapted from her recently released book “Spies, Lies, and Algorithms,” explains that rapid technological change poses three challenges to American intelligence agencies: Technological advances are increasing the diversity, capability and speed of adversaries; big data democratization is revolutionizing sensemaking and increasing the value of open-source insights; and intelligence agencies increasingly must sacrifice secrecy and engage with the outside world for data and innovation. 

Michael Martina, Former U.S. Security Officials Urge Congress to Act on China Legislation, Reuters (Feb. 1, 2022).

A bipartisan group of 16 former senior U.S. national security officials—including John Brennan, James Clapper, Michele Flournoy, Stephen Hadley, Jane Harman, Michael Hayden, Leon Panetta, Matthew Pottinger and Eric Schmidt—sent a letter to congressional leadership urging the passage of technology competitiveness legislation, which the former officials said was needed to “maintain strengths and comparative advantages against rising adversaries.” 

Amanda Miller, Turning Up the Heat on AI: The Pentagon Battles Its Own Inertia to Make Progress in Artificial Intelligence, Air Force Magazine (Jan. 19, 2022). 

This article explores the work of the Air Force’s AI Accelerator at MIT—a group of 16 Air Force personnel and 140 MIT researchers working on 10 projects to advance AI. The article notes that the Defense Department has more than 600 AI projects underway, funded in part by the $3 billion increase in science and technology research funding for fiscal year 2022. 

Sarah Bauerle-Danzman, Is the US Going to Screen Outbound Investment? Atlantic Council (Jan. 10, 2022). 

There is potential tension between the significant U.S.-China investment flows and the growing Washington consensus supporting decoupling to protect American strategic advantage. In this article, Bauerle-Danzman explores the provisions of the bipartisan National Critical Capabilities Defense Act of 2021—known as an “outbound CFIUS”—and examines three potential risks should the bill pass.

Justin Doubleday, NGA CIO Eyes Big Shifts for Cloud, Cybersecurity and Machine Learning in 2022, Federal News Network (Jan. 6, 2022). 

The National Geospatial-Intelligence Agency (NGA), a leader in the intelligence community in adopting advanced commercial technologies, continues to advance its AI/ML and commercial geospatial intelligence capabilities. Doubleday reports that the NGA’s chief information officer is updating the agency’s software strategy and shifting to a zero trust cybersecurity model.

Cincinnati Public Radio WVXU, Henry Kissinger and Former Google Head Eric Schmidt Say Diplomatic Dialogue to Control AI Needs to Happen Now (Jan. 3, 2022).

At a Council on Foreign Relations event for their new book—“The Age of AI: And Our Human Future”—Henry Kissinger and Eric Schmidt highlight the danger of automated “launch-on-warning systems” that could cause mass destruction without direct human involvement. To avoid war initiated by computers, Kissinger and Schmidt call for the immediate initiation of diplomatic discussions on the future of AI. 

Cate Cadell, China Harvests Masses of Data on Western Targets, Documents Show, Washington Post (Dec. 31, 2021). 

A Washington Post review of hundreds of Chinese bidding documents, contracts and company filings determined that the People’s Republic of China is collecting information from Western social media for surveillance and targeting purposes. The article highlights significant examples of Chinese data collection on foreign targets, including a program to create a database on foreign journalists and academics.

Stanford University Human-Centered Artificial Intelligence (HAI), Summary of AI Provisions From the National Defense Authorization Act 2022 (Dec. 27, 2021). 

Stanford HAI provides its yearly overview of AI provisions in the National Defense Authorization Act. The summary highlights initiatives for microelectronics research, AI performance evaluation, executive education on emerging technologies, novel acquisition practices for emerging technologies, a new occupational series for digital career fields and an AI data repository.

Recent Research Papers

Avi Goldfarb and John R. Lindsay, Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War, International Security (Feb. 25, 2022). 

Goldfarb and Lindsay argue that AI is not a substitute for human decision-making, as challenges of data quality and judgment persist. The authors thus identify two strategic implications. First, AI adoption will increase military organization complexity so that data and judgment challenges can be accommodated. Second, data and judgment will become attractive targets in strategic competition. Thus, AI will increase the importance of humans in war, with AI-enabled conflict involving uncertainty, organizational friction and chronic controversy. 

Jeffrey Edmonds and Samuel Bendett, Russian Military Autonomy in a Ukraine Conflict, CNA (Feb. 14, 2022).

Writing before Russia’s 2022 invasion of Ukraine, Edmonds and Bendett posit that Russia would use many unmanned systems in Ukraine that it has tested in Syria and other theaters. They assess that Russian military and political leadership view autonomous systems as key to military success, and that unmanned aerial vehicles (UAVs) are likely to play a large role in the conflict with Ukraine. The authors also observe that Russia is making headway in unmanned ground vehicles (UGVs) and unmanned underwater vehicles (UUVs).

Gregory S. Dawson and Kevin C. Desouza, How the U.S. Can Dominate in the Race to National AI Supremacy, Brookings (Feb. 3, 2022).

In this Brookings TechTank report, Dawson and Desouza suggest that America’s biggest issue in AI development is people, not spending or technology. They offer three options for the U.S. to achieve AI prominence: extract lessons from the space race for talent development, take a multinational consortium approach and create a robust partnership with one other country. The authors additionally recommend four action items for the U.S.: educate the population on the future of AI, create a sense of urgency, raise the profile of STEM and closely evaluate potential international partners. 

Nathaniel Allen and Marian “Ify” Okpali, Artificial Intelligence Creeps Onto the African Battlefield, Brookings (Feb. 2, 2022). 

AI is shaping conflict in Africa in two ways. First, AI-driven surveillance—enabled in part by Huawei systems—is empowering security services’ response to terrorist and organized crime activity. Second, AI-powered drones are beginning to be used in African conflict, such as the deployment of Turkish-made STM Kargu-2 drones against Libyan warlord Khalifa Haftar’s forces. The authors argue, however, that AI cannot address the fundamental drivers of armed conflict in Africa, given the nascent stage of technological deployment, the relative inapplicability of AI to fighting insurgencies and the inability of AI-driven security solutions to address underlying causes of insecurity. 

Sarah Kreps and Richard Li, Cascading Chaos: Nonstate Actors and AI on the Battlefield, Brookings (Feb. 1, 2022).

Kreps and Li posit that commercial off-the-shelf AI capabilities—such as drones, AI-enabled cyber weapons and large-scale disinformation tools—are leveling the playing field between state and non-state actors. For example, the authors observe significant terrorist and drug cartel use of drone attacks, which they believe AI will make more efficient and lethal. They make three recommendations for policymakers: work with private actors to shape AI technology, embrace norms that maintain a human in the loop and invest in technological competitiveness.

Seth Stodder and Thomas S. Warrick, Biometrics at the Border: Balancing Security, Convenience, and Civil Liberties, Atlantic Council (Jan. 31, 2022). 

This issue brief explains how the Department of Homeland Security is expanding its use of facial biometric technology at border crossings, which offers significant security and efficiency benefits yet poses three primary risks: the expansion of the surveillance state, cybersecurity risks to biometric data, and accuracy and bias concerns. The authors propose four recommendations: continue the biometrics program, improve its facial-comparison service, adopt the Homeland Security Advisor Council Biometrics Subcommittee recommendations on data retention and bias, and spend congressionally appropriated funds to expand border biometric capabilities. 

Michael F. Stumborg et al., Dimensions of Autonomous Decision-Making, CNA (Jan. 21, 2022). 

This CNA study identifies 13 dimensions of autonomous decision-making, developed from a list of 565 risk elements, that should be considered to employ legal, ethical and effective intelligent autonomous systems for military purposes. The study recommends that Defense Department acquisition professionals and military commanders utilize it as a risk assessment checklist to avoid unethical use of autonomous systems. 

Emily Harding, Move Over JARVIS, Meet OSCAR: Open-Source, Cloud-Based, AI-Enabled Reporting for the Intelligence Community, Center for Strategic & International Studies (Jan. 19, 2022).

Harding, the former deputy staff director of the Senate Select Committee on Intelligence, argues that the intelligence community must embrace the open-source intelligence (OSINT) revolution by applying AI/ML capabilities and unclassified cloud capabilities at scale. She envisions an open-source, cloud-based, AI-enabled reporting capability, “OSCAR,” to generate insights and save analyst time. Harding identifies culture, security and policy roadblocks to implementing systems like OSCAR, and issues 19 recommendations across five categories for the intelligence community to overcome them. Should the intelligence community fail to make significant progress on AI/ML and OSINT in the next year, Harding recommends five “bold steps,” including a parallel AI/ML and cloud acquisition process, a new “Indefinite Delivery/Outcome Oriented” (IDOO) contract category, an intelligence community innovation incubator, Office of the Director of National Intelligence total budget authority over AI/ML and cloud, and personnel incentive shifts.

Amy J. Nelson and Alexander H. Montgomery, Is the U.S. Military’s Futurism Obsession Hurting National Security? Brookings (Jan. 18, 2022). 

This Brookings post identifies four reasons why the U.S. military, and society more broadly, is “future obsessed”: the Jetsons effect, predictions of better technology, the apparent nature of catastrophic risks and overcorrection. It then examines four problems with this obsession: not preparing for the current crisis, kicking the can down the road on long-term issues, locking onto particular scenarios and focusing on escapism over engagement.

Andrew Lohn and Micah Musser, AI and Compute: How Much Longer Can Computing Power Drive Artificial Intelligence Progress? Center for Security and Emerging Technology (CSET) (Jan. 18, 2022).

This CSET analysis observes that the historical trendline of AI models doubling computing power usage every 3.4 months is “unsustainable” due to training cost, hardware availability and engineering difficulties. The authors recommend reorientation toward hardware and algorithm efficiency approaches as a way to offset the slowdown in the growth of computing power. Additionally, they observe that researchers may target specific applications as opposed to employing generalized, “brute-force” methods that drove much of AI’s advancement during the 2010s. 
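The trendline the authors cite compounds quickly: a doubling every 3.4 months implies more than a tenfold increase in training compute per year. The back-of-the-envelope sketch below is illustrative only; the function name and the five-year horizon are my choices, not figures from the CSET report.

```python
# Growth implied by the historical trendline cited above: AI training
# compute doubling every 3.4 months. A back-of-the-envelope illustration
# of why the authors call the trend "unsustainable."
def compute_multiplier(months: float, doubling_months: float = 3.4) -> float:
    """Factor by which training compute grows over a span of `months`."""
    return 2.0 ** (months / doubling_months)

one_year = compute_multiplier(12)    # roughly 11.5x in a single year
five_years = compute_multiplier(60)  # roughly 200,000x over five years
print(f"{one_year:.1f}x per year, {five_years:,.0f}x over five years")
```

Sustaining that curve would require training budgets and hardware supplies to grow by the same factors, which is the cost wall motivating the authors’ turn toward efficiency-oriented approaches.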

Samar Fatima et al., How Countries Are Leveraging Computing Power to Achieve Their National Artificial Intelligence Strategies, Brookings (Jan. 12, 2022). 

This report assesses countries’ technological preparedness for AI to visualize which countries are leading in AI based on their technology and research capabilities and levels of investment (see Figure 1).

A graphic showing where several countries fall on the technology and research and investment dimensions.
Source: Brookings

Ryan Hass et al., U.S.-China Technology Competition, Brookings (Dec. 23, 2021).

Thirteen Brookings scholars share their written assessments on the role of technology in U.S.-China competition. They collectively assert that sustaining the status quo, or doing more of the same, is “neither tenable nor attractive as a policy objective.” Rather, they argue, the U.S. needs greater clarity on strategic objectives and technological priorities, as well as increased reliance on international coalition-based approaches, to accelerate innovation and lead the development of global digital infrastructure that enables the free flow of information. 

Latest Government Moves

United States

John Keller, DARPA to Outfit F-16D Jet Fighter With Artificial Intelligence (AI) to Boost Trust in AI as a Human Partner, Military & Aerospace Electronics (Feb. 23, 2022).

The Defense Advanced Research Projects Agency (DARPA) has solicited industry proposals for converting existing F-16 aircraft into human-in-the-loop testbed aircraft to support teaming between manned and unmanned aircraft. DARPA believes it can improve U.S. dogfighting capabilities by empowering a human pilot to lead AI-powered semi-autonomous UAVs from the cockpit. 

Government Accountability Office (GAO), Artificial Intelligence: Status of Developing and Acquiring Capabilities for Weapon Systems (Feb. 17, 2022).

In this 53-page report, the GAO assessed the Defense Department’s pursuit of AI capabilities. The department had at least 685 AI projects as of April 2021. GAO found that most Defense AI capabilities are focused on analyzing intelligence, enhancing unmanned weapons systems and providing warfighting recommendations. It also found that the department’s AI efforts face traditional technology adoption challenges, such as long acquisition times, as well as novel ones, such as a paucity of training data to enable machine learning. 

Colin Demarest, Oracle Gets Go-Ahead to Host Top Secret Air Force Data, C4ISRNET (Feb. 15, 2022). 

Oracle’s cloud service was approved to host top secret/sensitive compartmented information and special access program data for the Air Force, according to an announcement by the company’s national security division. Oracle’s foray into hosting top-secret Defense Department information builds on its being one of five companies to be awarded the CIA’s Commercial Cloud Enterprise (C2E) contract, alongside Amazon, Google, IBM and Microsoft.

Brandi Vincent, AI Algorithms Could Rapidly Deploy to the Battlefield Under New Initiative, Nextgov (Feb. 9, 2022).

Defense Department Joint Artificial Intelligence Center (JAIC) Director Lt. Gen. Michael Groen announced that the center is developing a joint operating system and integration layer for the combatant commands to develop and deploy AI systems. The effort is part of JAIC’s Artificial Intelligence and Data Accelerator, which seeks to boost data-based decision-making by the combatant commands.

David Vergun, Artificial Intelligence, Autonomy Will Play Crucial Role in Warfare, General Says, DOD News (Feb. 8, 2022).

During a Senate Armed Services Committee nomination hearing for Lt. Gen. Michael Kurilla’s promotion to general and CENTCOM commander, Kurilla discussed how the U.S. military is using AI for F-35 target detection and prioritization. Kurilla stated that AI has helped the military identify hundreds of targets and prioritize them within “seconds versus what would normally take hours normally, or sometimes even days.”

Jackson Barnett, John Sherman Tapped to Be Acting Chief Digital and AI Officer at DOD, FedScoop (Feb. 2, 2022).

Defense Department Chief Information Officer John Sherman has been dual-hatted as the department’s acting chief digital and AI officer (CDAO). Sherman said the CDAO office creates a “collective ecosystem” by combining the Defense Digital Service’s software development talent, the Joint Artificial Intelligence Center’s AI initiatives and the chief data officer’s data management responsibilities. CDAO is set to be fully operational and led by a permanent CDAO by June 1, with candidate identification ongoing. 

Mark Pomerleau, NSA’s Cybersecurity Directorate Looks to Scale Up This Year, C4ISRNET (Feb. 2, 2022).

The National Security Agency’s (NSA’s) Cybersecurity Directorate is working to secure Defense Department machine learning and artificial intelligence systems, according to its technical director, Neal Ziring. Ziring noted that securing AI/ML systems requires extending security to the early stages of development, such as “gathering training data and training initial models.”

Deputy Secretary of Defense Kathleen Hicks, Establishment of the Chief Digital and Artificial Intelligence Officer, Department of Defense (Dec. 8, 2021).

The Defense Department consolidated the Joint Artificial Intelligence Center, the Defense Digital Service and the chief data officer under the chief digital and AI officer (CDAO), a new role that will “serve as the Department’s senior official responsible for strengthening and integrating data, artificial intelligence and digital solutions.”

China

Stephen Chen, Chinese AI Team Claims Big Win in Battle to Teach Dogfights to Drones, South China Morning Post (Jan. 30, 2022). 

Chinese researchers claim to have developed an AI system that helps Chinese drones win dogfights in fewer training simulations than a comparable American system. In a simulation against J-10 fighter jets, the human pilot was unable to evade the drone for more than 12 minutes. This success builds on the 2020 demonstration by Maryland-based Heron Systems, in which its AI defeated a human F-16 pilot in a simulated dogfighting competition. 

Stephen Chan, What You Need to Know About China’s AI Ethics Rules, TechBeacon (Jan. 24, 2022). 

This article summarizes the goals, standards and initiatives set out in the Chinese Ministry of Science and Technology’s “Ethical Norms for New Generation Artificial Intelligence,” released in 2021 (English translation of the document’s full text here). It lists the document’s six core principles; identifies three management standards; and explores other issues in the document pertaining to research, development and quality control.

Bloomberg News, China Calls on Nuclear-Armed Nations to Focus on AI, Space (Jan. 4, 2022).

The director-general of the Chinese Foreign Ministry’s Arms Control Department called for the five permanent members of the U.N. Security Council—China, France, Russia, the U.S. and the U.K.—to talk more directly about non-nuclear strategic stability matters, including AI, space and missile defense. 

Matt Sheehan, China’s New AI Governance Initiatives Shouldn’t Be Ignored, Carnegie Endowment for International Peace (Jan. 4, 2022). 

This article analyzes the three different approaches to Chinese AI governance taken by three separate People’s Republic of China bodies: the Cyberspace Administration of China, the China Academy of Information and Communication Technology, and the Ministry of Science and Technology. It observes that the Cyberspace Administration of China has made the most mature, rule-based and influential regulations of AI in the past year, while the Ministry of Science and Technology has taken the lightest approach to AI governance. 

CNA, The China AI and Autonomy Report: Issue 9 (Feb. 24, 2022); Issue 8 (Feb. 10, 2022); Issue 7 (Jan. 27, 2022); Issue 6 (Jan. 13, 2022); Issue 5 (Dec. 16, 2021); Issue 4 (Dec. 2, 2021).

CNA’s recent newsletters on AI and autonomy in China highlight a few recent articles on China’s use of AI for national security, including the People’s Republic of China’s 14th Five-Year Plan for National Informatization, a Washington Post article that suggests foreign investors may be helping China improve its AI-enabled military surveillance capabilities, and a South China Morning Post article discussing China’s submission of a position paper on military AI to the United Nations.

Russia

Anna Nadibaidze, Russian Perceptions of Military AI, Automation, and Autonomy, Foreign Policy Research Institute (FPRI) (Jan. 27, 2022).

This FPRI report explains what’s behind Russian leadership’s pursuit of weaponized AI. It examines Russian motivations, plans, capabilities and ethical considerations in the country’s pursuit of military AI and autonomous systems.

CNA, AI and Autonomy in Russia: Issue 31 (Feb. 7, 2022); Issue 30 (Jan. 24, 2022); Issue 29 (Jan. 10, 2022); Issue 28 (Dec. 17, 2021); Issue 27 (Dec. 6, 2021).

CNA’s recent biweekly reports on Russia’s use of AI highlight a few ongoing efforts in the field, including the pursuit of an AI-enabled system to analyze foreign policy for the Ministry of Foreign Affairs, a draft bill to regulate AI-human relations and the development of an autonomous underwater unmanned system. 

Other Governments

Joe Saballa, India “Increasingly Focusing” on AI for Military Applications, The Defense Post (Feb. 14, 2022). 

India has established a Defense Artificial Intelligence Council led by Defence Minister Rajnath Singh, who announced that the country plans to develop 25 defense-specific AI products by 2024. The Indian navy reportedly has 30 AI projects underway, and the country has stood up a Defence AI Project Agency with $13.2 million in funding. 

Seth J. Frantzman, Israel Unveils Artificial Intelligence Strategy for Armed Forces, Defense News (Feb. 11, 2022). 

Israel Defense Forces (IDF) have created an AI strategy and released an unclassified version of the document, according to a senior IDF official. Digital transformation has been a centerpiece of IDF strategy over the past several years, and AI “played a key role” in the 2021 Israel-Palestinian conflict. 

Tehran Times, Iran Plans to Become a Leading Country in AI (Jan. 30, 2022). 

Iran’s Information and Communication Technology Institute completed its national AI development road map in November 2021, according to the Tehran Times. The article states that Iran’s AI road map calls for $8 billion of investment in AI. 

U.K. Department for Digital, Culture, Media & Sport, Office for Artificial Intelligence and Chris Philp MP, New UK Initiative to Shape Global Standards for Artificial Intelligence (Jan. 12, 2022). 

The U.K. announced that the Alan Turing Institute will create an AI Standards Hub to help shape global technical standards for AI. The hub’s launch is a part of the U.K.’s National AI Strategy, “a ten-year plan to strengthen the country’s position as a global science superpower and harness AI to transform the economy and society while leading governance and standards to ensure everyone benefits.”


Chris Gorman is a student at Harvard Law School, where he is on the executive boards of the National Security Journal and the National Security & Law Association. Previously, Chris worked as a management consultant, advising public sector defense and security organizations on strategy and technology issues.
