Cybersecurity Implications of the First-Ever U.S. National Security Memorandum on Artificial Intelligence

By Haiman Wong, R Street

On Oct. 24, 2024, the White House issued the first-ever National Security Memorandum on Artificial Intelligence (AI), which outlines a comprehensive strategy for harnessing AI to fulfill U.S. national security needs while prioritizing its safety, security, and trustworthiness. This directive also aims to maintain U.S. leadership in advancing international consensus and governance around AI, building on progress made over the past year at the United Nations, as well as the AI Safety Summits in Bletchley and Seoul. Most notably, this memorandum directly fulfills the obligation to offer further direction for AI use in national security systems, as defined in subsection 4.8 of the Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI released last October.

The directive underscores the need to balance responsible AI use with flexibility, ensuring its potential is not unduly limited—particularly in high-stakes national security applications. While the memorandum holds broader implications for AI governance, the following cybersecurity-related measures are particularly noteworthy and essential to advancing AI resilience in national security applications:  

1. Establish a Comprehensive Framework to Advance AI Governance and Risk Management in National Security

A central pillar of this memorandum is the introduction of the Framework to Advance AI Governance and Risk Management in National Security. This accompanying framework, paralleling the Office of Management and Budget’s earlier memorandum on Advancing the Responsible Acquisition of AI in Government, provides a structured, comprehensive approach to managing the layered risks associated with AI use. For example, the framework mandates the continued testing, monitoring, and evaluation of AI systems, ensuring vulnerability assessments and security compliance throughout the AI lifecycle. The framework also requires robust data management standards, including the secure handling, documentation, and retention of AI models, alongside standardized practices for data quality assessment post-deployment.

Crucially, the framework offers targeted guidance for determining prohibited AI use and managing “high-impact” AI systems. This approach ensures that agencies employ stringent and holistic risk management practices, especially when deploying AI applications that significantly impact U.S. national security.

2. Safeguard AI System Security and Integrity from Foreign Interference Risks and Cyber Threats

Recognizing that foreign adversaries are increasingly targeting AI innovations to advance their own national objectives, the memorandum tasks the National Security Council and the Office of the Director of National Intelligence (ODNI) with reviewing national intelligence priorities to improve the identification and assessment of foreign intelligence threats targeting the U.S. AI ecosystem (Section 3.2(b)(i)). Moreover, the ODNI, in coordination with the Department of Defense (DOD), Department of Justice, and other agencies, is responsible for identifying critical nodes in the AI supply chain that could be disrupted or compromised by foreign actors, ensuring that proactive and coordinated measures are in place to mitigate such risks (Section 3.2(b)(ii)). To counter gray-zone methods, the Committee on Foreign Investment in the United States is also directed to assess whether foreign access to U.S. AI proprietary information poses a security threat, providing a regulatory mechanism to block harmful transactions (Section 3.2(d)(i)).

Notably, the Artificial Intelligence Safety Institute (AISI) assumes expanded responsibilities to advance AI resilience. In particular, AISI is tasked with issuing specialized guidance to AI developers on managing safety, security, and trustworthiness risks in dual-use models; establishing benchmarks for AI capability evaluations; and serving as the primary conduit for communicating risk mitigation recommendations (Section 3.3(e)). Through these combined efforts to detect, assess, and block supply chain risks, the United States reinforces its commitment to protecting its technological edge and leadership.  

3. Leverage AI’s Potential in Offensive and Defensive U.S. Cyber Operations

To harness AI’s potential to enhance both offensive and defensive U.S. cyber operations, the memorandum tasks the Department of Energy (DOE) with launching a pilot project to evaluate the performance and efficiency of federated AI and data sources, which are essential for frontier AI-scale training, fine-tuning, and inference (Section 3.1(e)(iii)). This project aims to refine AI capabilities that could improve cyber threat detection, response, and offensive operations against potential adversaries, aligning with the findings presented in the Senate’s roadmap for AI policy.

Additionally, where appropriate, the Department of Homeland Security (DHS), Federal Bureau of Investigation, National Security Agency, and DOD are tasked with publishing unclassified guidance on known AI cybersecurity vulnerabilities, threats, and best practices for avoiding, detecting, and mitigating these risks during AI model training and deployment (Section 3.3(h)(ii)). This guidance is also expected to cover the integration of AI into other software systems, contributing to the secure deployment of AI in operational settings. Together, these actions have the potential to strengthen the United States’ ability to leverage AI for cyber operations, helping to maintain a decisive technological advantage over adversaries who are actively seeking to use AI to undermine our security.

4. Secure AI in Critical Infrastructure

The memorandum also underscores the importance of securing AI within U.S. critical infrastructure, recognizing the risks AI can pose in sensitive sectors, including nuclear, biological, and chemical environments. In collaboration with the National Nuclear Security Administration and other agencies, the DOE is tasked with developing infrastructure capable of systematically testing AI models to assess their potential to generate or exacerbate nuclear and radiological risks (Section 3.3(f)(iii)). This initiative includes maintaining classified and unclassified testing capabilities, incorporating red-teaming exercises, and ensuring the secure transfer and evaluation of AI models.

Furthermore, the memorandum requires the DOE, along with the DHS and other agencies, to develop a roadmap for classified evaluations of AI’s potential to create new or amplify existing chemical and biological threats, ensuring rigorous testing and the proactive safeguarding of sensitive information (Section 3.3(g)(i)). Through these efforts, the memorandum aims to protect the United States’ critical infrastructure from emerging AI-related vulnerabilities, ensuring resilience against both unintentional risks and deliberate attacks.

5. Attract, Build, and Retain a Top-Tier AI Workforce

The memorandum underscores the critical importance of cultivating and retaining a robust AI talent pipeline to maintain expertise vital to national security—a long-standing struggle, especially in cybersecurity, where the administration has already launched targeted hiring initiatives to close talent gaps. For instance, Sections 3.1(c)(i) and 4.1(c) outline provisions to attract international AI experts, including expediting visa processes and addressing hiring hurdles. Specifically, the DOD, Department of State, and DHS are instructed to revise hiring policies to ensure they attract AI-related technical talent and align with national security missions. This includes offering expedited security clearances and scholarship programs aimed at building technical expertise within the government.

These workforce initiatives also align with findings from the Senate AI Insight Forums, which stressed the need to provide pathways for international students and entrepreneurs to remain in the United States post-education, along with tax incentives and strong protections for patents and intellectual property to foster innovation.

Looking Ahead

Given the rapid pace at which foreign adversaries are deploying AI to erode U.S. technological leadership, military advantage, and international influence, the release of this highly anticipated memorandum marks a significant and strategic milestone in AI governance and cybersecurity. By aligning ambitious AI innovation and integration goals with targeted cybersecurity and national security guidance, the memorandum takes a balanced approach that avoids the dangers of self-imposed barriers, such as overregulation and bureaucratic delays, while preserving the nation’s technological edge.

Highly responsive to insights from recent forums and working groups, the memorandum signals an ongoing commitment to refining AI governance through collaboration and cutting-edge research. However, as AI technology and global threats evolve, regular reassessment will be essential to preserve the memorandum’s balance among fostering innovation, enabling swift integration, and safeguarding national security interests. Sustaining this momentum will be imperative to fully realizing the objectives set forth in this memorandum.

Haiman Wong is Resident Fellow, Cybersecurity and Emerging Threats at R Street.