Erica Brescia: AI models need responsible development

Erica Brescia of Redpoint Ventures discusses how AI can bolster cybersecurity through responsible development and open-source collaboration.

Erica Brescia, Managing Director at Redpoint Ventures, in a video interview discussing AI and cybersecurity. (Image credit: Bloomberg Technology)

Erica Brescia, Managing Director at Redpoint Ventures, discusses the critical intersection of artificial intelligence and cybersecurity in a recent Bloomberg Tech segment. Brescia, a prominent figure in venture capital with a portfolio including companies like Snowflake, Stripe, and OpenAI, emphasizes the growing need for responsible development of increasingly powerful AI models.

The full discussion can be found on Bloomberg Technology's YouTube channel.

AI Cyber Race Is a Fight for Governance, Says Redpoint’s Brescia - Bloomberg Technology

Erica Brescia's Perspective

Brescia's extensive experience in the technology and venture capital sectors, particularly her involvement with companies at the forefront of AI and cybersecurity, provides a unique vantage point. Her firm, Redpoint Ventures, has a diverse portfolio that includes successful tech giants and emerging startups, indicating a broad understanding of market trends and technological advancements. Her insights are particularly valuable given the current rapid evolution of AI capabilities and the associated security challenges.


AI's Dual Role in Cybersecurity

The conversation highlights the dual nature of AI in cybersecurity. On one hand, powerful AI models can be used by malicious actors to discover vulnerabilities and launch sophisticated attacks. On the other hand, these same models can be instrumental in identifying and mitigating these threats. Brescia points out that the development of AI models themselves needs to be approached with a strong emphasis on security and ethical considerations.

"We've always known that these models were going to get increasingly powerful, and we need to work together across the private and public sectors to make sure that we keep people safe," Brescia stated. This collaborative approach is crucial for navigating the complex security challenges posed by advanced AI.

The Advantage of Open-Source AI

Brescia suggests that open-source AI models offer a unique advantage in the cybersecurity realm. Because these models are accessible, researchers and developers can more readily identify vulnerabilities and contribute to their remediation. "I think the best companies are the ones that are going to manage to get the value out of AI across the stack, and not only code, but also support, etc.," she explained. In other words, the companies that benefit most will apply AI broadly across their operations, from code to customer support, rather than confining it to a single function.

She further elaborated on the benefits of open-source AI for security: "AI can do a lot to improve open source. Using these models can help us find vulnerabilities that we didn't know were there. It can help them patch vulnerabilities more quickly. So, I think it's great for the world." This highlights AI's potential to bolster the security of the internet's foundational infrastructure.

Responsible AI Development and Deployment

Brescia stressed the importance of responsible disclosure and collaboration when vulnerabilities are discovered. "When you find a zero day, you have to report it responsibly before you share your work with the world," she noted. This principle extends to the deployment of AI, where proactive security measures are paramount.

Regarding the adoption of AI within companies, Brescia observed that the most successful ones are building their own internal tooling. "Companies that are going to get ahead are the ones that are going to build their own tooling for their own work in venture. For example, we have companies like Ramp, which is a Series B company, and they're using AI to manage their entire financial operations," she said. This illustrates that AI's impact extends beyond coding to operational efficiency and security across business functions.

Brescia concluded by emphasizing the proactive stance needed: "We need to be responsible with powerful AI models. It reminds me of responsible disclosure in cyber. When you find a vulnerability, you have to report it responsibly before you share your work with the world." This sentiment underscores the ongoing need for ethical considerations and security-first approaches as AI technology continues to advance and permeate more aspects of business and society.
