Open Source AI: Boon or Bane for Security?

IBM's Martin Keen and Gabe Goodhart discuss the security implications of open-source AI, balancing innovation with risk.

Four panelists on a video call discussing AI and open source security.
Image credit: Security Intelligence / IBM

In a recent discussion on the Security Intelligence podcast, IBM Master Inventor Martin Keen and Chief Architect of AI Open Innovation Gabe Goodhart explored the complex relationship between open-source AI and security. The conversation, hosted by Matt Kosinski, delved into the inherent advantages of open-source models, such as fostering innovation and transparency, while also highlighting the significant security challenges they present.

The Experts' Perspectives

Martin Keen, a Master Inventor at IBM, brings a wealth of experience in technological innovation. His perspective often focuses on the practical application and long-term implications of emerging technologies. In this discussion, Keen acknowledged the widespread adoption and benefits of open-source models but cautioned against oversimplifying their security implications.

Gabe Goodhart, Chief Architect of AI Open Innovation at IBM, offered a deep dive into the architectural and strategic considerations of AI development. His role involves navigating the cutting edge of AI, including the security challenges that arise from open innovation. Goodhart emphasized the need for robust security practices, even when leveraging the collaborative power of open source.

The full discussion can be found on IBM's YouTube channel.

Is open source safe? Featuring Mixture of Experts - IBM

The Double-Edged Sword of Open Source AI

The core of the discussion revolved around the inherent tension between the benefits of open-source AI and its potential security vulnerabilities. While open-source models allow for rapid development, broader access, and collaborative improvement, they also expose the underlying code and architecture to a wider audience, including those with malicious intent.

Kosinski initiated the conversation by asking Goodhart about his stance on open-source AI. Goodhart expressed his full commitment, stating, "I've got to be the person you're referencing on Mixture of Experts that's all in, all the time, on open source. That's where I live, it's what I do." He elaborated that while open source is generally positive, it comes with a critical caveat: "You know, open source is great, open source is terrible when it comes to cybersecurity people like Jeff here... you tell them what you're doing and how good it is and how useful it is, they will find a way to tell you that this thing is dangerous, it's reckless, and that they solved this problem 30 years ago and nobody was listening."

Keen echoed this sentiment, highlighting the security implications. He stated, "Security through obscurity is not a viable strategy." He argued that while closed-source models might seem more secure due to their proprietary nature, true security comes from transparency and rigorous examination. He pointed out that even closed systems can have vulnerabilities, and the openness of open-source allows for more eyes to scrutinize the code, potentially leading to faster identification and patching of flaws. However, he also acknowledged the risk: "The flip side is that the bad guys can also find those vulnerabilities."

The Trust Factor and Unseen Vulnerabilities

A significant point of discussion was the concept of trust in AI systems. Both experts agreed that for AI to be widely adopted and trusted, particularly in sensitive applications, transparency is paramount. However, the very nature of complex AI models, especially those with billions of parameters, makes complete transparency a challenge.

Keen drew a parallel to cryptography, invoking Kerckhoffs's principle: the security of a system should rely on the secrecy of the key, not the obscurity of the algorithm. He applied this to AI, suggesting that the algorithms and their weights should ideally be transparent, while the keys to their operation and the sensitive data they process must remain secret. He noted that keeping AI models secret for security reasons often provides only a false sense of security, since vulnerabilities can still exist and be exploited.
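Kerckhoffs's principle can be illustrated with a minimal sketch (not from the podcast, just a standard cryptography example): the HMAC-SHA256 algorithm below is fully public and open to scrutiny, yet message authentication stays secure so long as the key is secret. The key value shown is a placeholder for illustration.

```python
import hmac
import hashlib

# The algorithm (HMAC-SHA256) is completely open; only the key is secret.
SECRET_KEY = b"replace-with-a-randomly-generated-key"  # placeholder

def sign(message: bytes) -> str:
    """Produce an authentication tag; anyone may know how, only key-holders can."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time comparison avoids leaking information via timing."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"model weights are public")
print(verify(b"model weights are public", tag))  # True: key-holder validates
print(verify(b"tampered message", tag))          # False: tampering detected
```

Publishing this code costs the scheme nothing; only leaking `SECRET_KEY` would, which is the distinction Keen draws between transparent algorithms and secret keys.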

Goodhart added a crucial layer to this, discussing the difficulty of understanding and auditing the behavior of very large models. He stated, "We're sitting in today, and I think there's a lot of debate around questions of who gets access to what models and when... and we've been talking about the benefits and the risks that come along with that. But what needs to be managed, what needs to be thought about, is how do we, you know, manage those risks?" He highlighted the concern that as models become more complex and autonomous, ensuring their behavior remains aligned with human intentions becomes increasingly difficult.

The Path Forward

The conversation concluded with a consensus that while open-source AI offers immense potential, a proactive and vigilant approach to security is essential. The rapid pace of AI development means that security considerations must be integrated from the outset, not as an afterthought. This includes not only securing the models themselves but also the infrastructure and processes surrounding their deployment and use.

Both Keen and Goodhart emphasized the need for continued research into AI interpretability and security. As AI systems become more powerful and integrated into critical infrastructure, understanding their decision-making processes and mitigating potential risks will be crucial for ensuring their safe and beneficial deployment.

© 2026 StartupHub.ai. All rights reserved.