"People can die if we do this wrong." This stark reality underscores the unique challenges of integrating AI into national security, a point emphatically made by Mark Myshatyn, Enterprise AI Architect at Los Alamos National Laboratory (LANL).
Myshatyn spoke at the AI Engineer World's Fair, offering a rare glimpse into how a venerable institution like LANL, a cornerstone of U.S. nuclear security and scientific research, navigates the rapidly evolving landscape of artificial intelligence. His talk focused on AI agents and the stringent regulatory environment in which they operate.
Far from being a newcomer to AI, LANL has a deep-seated history in the field. "We've actually been doing applied AI/ML for almost 70 years," Myshatyn noted, citing early work in 1956 on Los Alamos Chess and Monte Carlo methods, long before the current generative AI explosion. This legacy gives the lab an unusually grounded understanding of AI's foundational principles and practical applications.
For LANL, the advent of AI agents represents a significant leap. It’s about empowering AI models to move beyond mere knowledge retrieval to active problem-solving and execution. Myshatyn highlighted how this shift allows them to "move science faster" and address complex national security challenges with unprecedented efficiency, a critical capability when facing demands to achieve "better, faster, cheaper, and more to protect our country."
Integrating these advanced AI capabilities into government workflows comes with immense regulatory hurdles. Myshatyn detailed the layers of compliance, including NIST 800-53 and FedRAMP, that dictate how AI systems must handle sensitive, controlled, and classified data. This isn't simple office automation; it's managing geopolitical and kinetic risks.
Despite their internal expertise and vast resources, LANL recognizes the necessity of collaboration. They actively seek partnerships with commercial industry and academia, understanding that they "absolutely need those partnerships." This shared responsibility is critical for pushing the boundaries of AI science, especially in areas where commercial entities may not operate due to the unique risks and data classifications. "The responsibility of what our AI tools and services do starts to matter," Myshatyn stated, emphasizing the need for robust governance and trust in AI systems handling critical national security missions.
LANL’s mission necessitates working with petabytes of highly sensitive data that will never touch the public internet. This unique context means they cannot simply adopt commercial off-the-shelf solutions. They must develop their own models and tools or work with partners who can meet the highest standards of isolation and security. This environment fosters innovation in areas often inaccessible to the broader tech industry, pushing the very limits of what AI can achieve under the most demanding conditions.