"People can die if we do this wrong." This stark pronouncement, delivered by Mark Myshatyn, Enterprise AI Architect at Los Alamos National Laboratory, at the 2025 AI Engineer World's Fair, cuts through the hype surrounding artificial intelligence to reveal the profound responsibilities inherent in its government applications. His lightning talk provided a rare glimpse into how a venerable institution like Los Alamos is not just adopting but actively shaping the future of AI, particularly in the high-stakes arena of national security.
Myshatyn’s presentation, delivered to an audience of founders, VCs, and AI professionals, underscored Los Alamos’ deep and often overlooked history in the field. Far from being new to AI, "Los Alamos has been part of the AI/ML world for 69 years," he noted, showcasing a black-and-white photo of a scientist playing "Los Alamos Chess" against an early supercomputer in 1956. This rich heritage in applied statistics and machine learning provides a unique foundation as the lab now navigates the complexities of modern AI agents.
The core of Los Alamos’ current AI thrust is its "AI Scientist Model," an agentic workflow designed to accelerate scientific discovery. Myshatyn demonstrated an AI system tasked with designing an inertial confinement fusion (ICF) capsule for its sister laboratory, Lawrence Livermore. The agents read scientific papers, search the web, generate and critique ideas, propose designs, and, crucially, run 1D simulations on high-performance computing assets. This iterative loop lets the AI evaluate failures, develop new designs, and optimize for maximum yield. The entire design-and-simulation process, work that would take human teams far longer, completed in just two hours with no human intervention. The capability shows how AI agents can act as powerful force multipliers, compressing scientific discovery cycles and enabling work that was previously out of reach. "What we can let a model know was really the change that happened here," Myshatyn stated, emphasizing the paradigm shift from models that simply know to models that actively explore and iterate.
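To make the shape of that loop concrete, here is a minimal, runnable sketch of a propose-simulate-critique cycle. It is an illustration only: the LLM agents and the HPC simulation are replaced with toy stand-ins, and every function name and parameter below is hypothetical rather than drawn from Los Alamos’ actual system.

```python
# Toy sketch of an iterative propose -> simulate -> critique -> optimize loop.
# The real system calls LLM agents and HPC simulations; here those are
# stand-ins so the example runs on its own. All names are hypothetical.

import random
from dataclasses import dataclass


@dataclass
class Attempt:
    design: dict[str, float]
    yield_score: float
    critique: str


def run_1d_simulation(design: dict[str, float]) -> float:
    # Stand-in for an HPC 1D simulation: a toy objective with an
    # arbitrary optimum at shell_thickness=0.3, fill_density=1.2.
    return -((design["shell_thickness"] - 0.3) ** 2
             + (design["fill_density"] - 1.2) ** 2)


def critique(design: dict[str, float], score: float) -> str:
    # Stand-in for a critic agent explaining why a design underperformed.
    return f"score {score:.4f}; consider adjusting parameters"


def propose_design(history: list[Attempt]) -> dict[str, float]:
    # Stand-in for the proposer agent: perturb the best design seen so far.
    if not history:
        return {"shell_thickness": random.uniform(0.0, 1.0),
                "fill_density": random.uniform(0.0, 2.0)}
    best = max(history, key=lambda a: a.yield_score)
    return {k: v + random.gauss(0, 0.05) for k, v in best.design.items()}


def optimize(iters: int = 200) -> Attempt:
    # The loop itself: propose, simulate, critique, keep history, repeat.
    history: list[Attempt] = []
    for _ in range(iters):
        design = propose_design(history)
        score = run_1d_simulation(design)
        history.append(Attempt(design, score, critique(design, score)))
    return max(history, key=lambda a: a.yield_score)


if __name__ == "__main__":
    best = optimize()
    print(best.design, best.yield_score)
```

The point of the sketch is the structure, not the stand-ins: each iteration conditions the next proposal on the accumulated history of designs, scores, and critiques, which is what lets an agentic system improve on its own failures without human intervention.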
Los Alamos’ National Security AI (NSAI) Office operates with a clear mandate: to push the boundaries of AI science for national security, build strong ties with commercial and academic partners, and support the right tools for science and operations. The scope of their work is immense, spanning 40 square miles of labs and test sites, including 13 nuclear facilities. This vast ecosystem requires AI not just for scientific advancement but for critical operational tasks like payroll, procurement, and cybersecurity.
The deployment of AI in such a critical environment introduces a distinct set of challenges, particularly concerning trust and responsibility. Unlike commercial applications, where errors typically mean financial losses, errors in government AI can directly affect national security and human lives. This dual imperative of accelerating innovation while safeguarding against catastrophic failure is a central theme of the lab's approach.
The regulatory landscape for government AI is evolving rapidly, presenting both obstacles and opportunities. Myshatyn highlighted recent Office of Management and Budget (OMB) memoranda (M-25-21 and M-25-22) that push federal agencies to integrate AI more deeply into their operations while emphasizing the protection of civil rights, liberties, and safety. The framework, still early in its development, acts as a "clean sheet of paper" on which agencies can define their AI strategies. Existing compliance frameworks such as FedRAMP and the DoD Cloud Computing Security Requirements Guide (CC SRG) are extensive, sometimes exceeding a thousand controls, and have historically been slow to adapt. Newer initiatives like FedRAMP 20x, however, aim to streamline and automate compliance.
For commercial partners looking to engage with federal agencies, Myshatyn outlined key architectural considerations: build for explainability, isolation, and governance. Explainability is crucial for auditing decisions and ensuring accountability, especially under the new guidance's standard that "AI is considered high-impact when its output serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety." Isolation is vital for containing potential breaches in sensitive environments and is non-negotiable for national security work. Governance, including robust open-source dependency management and patching plans, ensures the integrity and trustworthiness of AI systems. Finally, speed still matters: federal agencies cannot afford to lag behind in AI development.
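As one illustration of what "build for explainability" can look like in practice, the sketch below wraps each model call in a structured audit record so a reviewer can later trace which model version produced which output from which input. The wrapper, the record fields, and the file-based log are assumptions made for demonstration, not a federal requirement or anything Myshatyn specified.

```python
# Hypothetical sketch: logging every model decision to an append-only
# JSONL audit trail. Field names and the wrapper are illustrative only.

import hashlib
import json
import time
import uuid
from typing import Callable


def audited_call(model_fn: Callable[[str], str], prompt: str,
                 model_id: str, log_path: str = "audit.jsonl") -> str:
    """Run a model call and append a record linking input to output."""
    output = model_fn(prompt)
    record = {
        "request_id": str(uuid.uuid4()),          # unique ID for this decision
        "timestamp": time.time(),
        "model_id": model_id,                     # which model/version decided
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "output": output,                         # or a redacted summary
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return output


if __name__ == "__main__":
    # Usage with a stub model standing in for a real inference call.
    answer = audited_call(lambda p: p.upper(),
                          "approve purchase order 42?",
                          model_id="stub-v1")
    print(answer)
```

Hashing the prompt and output lets auditors verify that a logged decision matches the artifacts it references without necessarily storing sensitive inputs in the clear, one plausible way to reconcile accountability with the isolation requirements of classified environments.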
Los Alamos actively seeks partnerships with commercial and academic entities to tackle problems that are either incredibly hard or impossible within commercial industry due to their scale, sensitivity, or specialized nature. They possess petabytes of data that have never and will never touch the internet, along with unparalleled subject matter expertise in diverse scientific domains. This unique environment offers a fertile ground for collaboration, where the right application of math and science can indeed change the world, as Los Alamos has demonstrated throughout its history. The opportunity to help shape the future of secure, trustworthy, and impactful AI for national security is immense, and the lab extends an open invitation to those willing to build that future together.

