The recent CNBC interview with Jan Oberhauser, founder and CEO of the workflow automation platform n8n, offered a revealing glimpse into the evolving landscape of enterprise AI. Oberhauser discussed n8n's latest funding round, which notably included backing from Nvidia's venture arm, NVentures, alongside other prominent investors such as Accel and Sequoia. The conversation centered on n8n's position in helping businesses deploy AI agents effectively, emphasizing an approach that prioritizes flexibility, independence, and the seamless integration of human oversight.
n8n positions itself not as a developer of foundational AI models, but as the orchestration layer that brings those models into practical, production-ready workflows. Oberhauser put the vision plainly: "The nice thing about something like n8n is [that] it allows [you] to combine humans, AI, and code." The statement underscores a critical insight: for AI to deliver real value in complex enterprise environments, it cannot operate in a vacuum. It needs a framework that integrates disparate systems, allows for custom logic (code), and, crucially, preserves human agency wherever nuanced judgment or oversight is required. AI, as Oberhauser suggests, is not a monolithic solution but a powerful component within a broader, intelligent system.
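Stripped to its essentials, the "humans, AI, and code" framing can be pictured as a pipeline whose steps come in three kinds: deterministic code, a model call, and a human approval gate. The TypeScript sketch below is purely illustrative and assumes nothing about n8n's actual node API; `callModel` and `askHuman` are hypothetical stubs standing in for a real model client and a real review queue.

```typescript
// Illustrative only: a workflow modeled as an ordered list of steps, where each
// step is plain code, a call to an AI model, or a human approval gate.
// None of these names come from n8n's actual API; callModel and askHuman are stubs.

type Step =
  | { kind: "code"; run: (input: string) => string }
  | { kind: "ai"; prompt: (input: string) => string }
  | { kind: "human"; question: string };

// Stand-in for a real model client.
async function callModel(prompt: string): Promise<string> {
  return `model answer for: ${prompt}`;
}

// Stand-in for a real review queue; a person would edit or approve the draft here.
async function askHuman(question: string, draft: string): Promise<string> {
  console.log(`[review requested] ${question}`);
  return draft;
}

async function runWorkflow(steps: Step[], input: string): Promise<string> {
  let current = input;
  for (const step of steps) {
    if (step.kind === "code") {
      current = step.run(current);                      // deterministic logic
    } else if (step.kind === "ai") {
      current = await callModel(step.prompt(current));  // model judgment
    } else {
      current = await askHuman(step.question, current); // human oversight
    }
  }
  return current;
}

// Example: normalize input with code, draft a summary with AI, confirm with a human.
runWorkflow(
  [
    { kind: "code", run: (s) => s.trim() },
    { kind: "ai", prompt: (s) => `Summarize for the account team: ${s}` },
    { kind: "human", question: "Approve this summary before it is sent?" },
  ],
  "  Quarterly ticket volume fell 12% after the self-service portal launch.  "
).then(console.log);
```

The point of this shape is the mix: deterministic steps where rules suffice, model steps where judgment helps, and a human gate where the stakes demand it.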
The strategic emphasis on an open, independent platform is perhaps n8n's most compelling differentiator, and a core insight for any founder navigating the current AI race. When pressed on the advantage of an open-source-like model in an industry often driven by proprietary ecosystems, Oberhauser clarified n8n's stance: "We are not locked into any LLM, [and] neither are our customers. [That] means they don't have to buy chips. They don't have to train [their] own models. We ourselves and our customers can use whatever model they want to use, and they can host it wherever they want." This commitment to flexibility directly addresses a major enterprise concern: vendor lock-in. By letting businesses integrate large language models (LLMs) from different providers, whether OpenAI, Google, or others, n8n gives its users choice and a degree of future-proofing against rapid technological shifts or changes in provider offerings. The same independence extends to data sovereignty: companies can host their data on their own servers, a critical factor for security, compliance, and competitive advantage.
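Read in engineering terms, that independence usually comes down to coding against a thin, provider-neutral interface and choosing the concrete model at configuration time. The sketch below is a hedged illustration only: the `ChatModel` interface, the `/complete` endpoint, the response shape, and the environment variables are invented for this example and do not describe n8n's internals or any vendor's real API.

```typescript
// Illustrative only: application code depends on a narrow ChatModel interface,
// so the concrete provider (hosted API or self-hosted endpoint) is a config choice.
// The /complete path, request/response shapes, and env vars are invented here.

interface ChatModel {
  complete(prompt: string): Promise<string>;
}

// Variant A: a hosted provider reached over HTTPS with an API key.
class HostedModel implements ChatModel {
  constructor(private baseUrl: string, private apiKey: string) {}
  async complete(prompt: string): Promise<string> {
    const res = await fetch(`${this.baseUrl}/complete`, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${this.apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ prompt }),
    });
    const data = (await res.json()) as { text: string };
    return data.text;
  }
}

// Variant B: a model served on the company's own infrastructure, no third party involved.
class SelfHostedModel implements ChatModel {
  constructor(private baseUrl: string) {}
  async complete(prompt: string): Promise<string> {
    const res = await fetch(`${this.baseUrl}/complete`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    });
    const data = (await res.json()) as { text: string };
    return data.text;
  }
}

// The rest of the workflow never knows which implementation it received.
function modelFromEnv(): ChatModel {
  const selfHosted = process.env.SELF_HOSTED_LLM_URL;
  return selfHosted
    ? new SelfHostedModel(selfHosted)
    : new HostedModel(process.env.LLM_API_URL ?? "", process.env.LLM_API_KEY ?? "");
}
```

Because the rest of a workflow only ever sees `ChatModel`, swapping a hosted provider for a self-hosted one (or vice versa) becomes a configuration change rather than a rewrite, which is the practical meaning of avoiding lock-in.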
Nvidia’s investment in n8n, through its NVentures arm, further illuminates this strategic shift. While Nvidia is predominantly known for its indispensable AI chips, its backing of n8n signals a recognition that the value chain of AI extends far beyond hardware. The ability to effectively deploy and manage AI agents in diverse enterprise settings is becoming as crucial as the underlying computational power. Oberhauser confirmed that n8n does not directly purchase Nvidia chips, reinforcing the idea that NVentures' investment is not about creating a captive customer, but rather about fostering an ecosystem where AI deployment can thrive, regardless of the specific hardware or models employed. This move by Nvidia suggests a sophisticated understanding of the AI market's maturation, where enabling broader adoption through flexible software platforms is key to long-term growth.
The immediate, tangible applications of n8n's technology highlight another vital insight: AI's impact is already being realized in automating mundane yet critical business functions. Oberhauser gave a clear example from customer support: "a new support email arrives, n8n picks the email up, it scans the content, checks what's actually happening in there, then it checks all of your data sources like Zendesk, Notion, whatever you have, [and] can automatically answer the ticket." The process can run fully automated for low-risk inquiries or include a "human in the loop" for complex cases, ensuring accuracy and maintaining quality. Such automation translates directly into efficiency gains, freeing human employees to focus on higher-value work and reducing operational costs.
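As a rough illustration of that flow, and only that, the steps map naturally onto a small pipeline with a risk gate deciding between an automatic reply and human review. Every function name and the risk rule below are invented for this sketch; they are not n8n nodes or real Zendesk/Notion client calls.

```typescript
// Illustrative only: the support-triage flow described in the interview, reduced
// to classify -> gather context -> draft -> auto-send or escalate. All functions
// are hypothetical stand-ins, not n8n nodes or real Zendesk/Notion client calls.

interface SupportEmail {
  from: string;
  subject: string;
  body: string;
}

// Stand-in for an LLM or rules engine deciding topic and risk level.
async function classify(email: SupportEmail): Promise<{ topic: string; risk: "low" | "high" }> {
  const risky = /refund|legal|outage/i.test(email.body);
  return { topic: "account-access", risk: risky ? "high" : "low" };
}

// Stand-in for lookups against sources such as a ticketing system or internal wiki.
async function gatherContext(topic: string): Promise<string[]> {
  return [`knowledge-base articles tagged "${topic}"`];
}

// Stand-in for an LLM call that drafts an answer from the retrieved context.
async function draftReply(email: SupportEmail, context: string[]): Promise<string> {
  return `Hi, regarding "${email.subject}": based on ${context.length} source(s), here is what to try...`;
}

async function handleTicket(email: SupportEmail): Promise<void> {
  const { topic, risk } = await classify(email);
  const context = await gatherContext(topic);
  const draft = await draftReply(email, context);

  if (risk === "low") {
    console.log(`[auto-sent] ${draft}`);                // fully automated path
  } else {
    console.log(`[queued for human review] ${draft}`);  // human in the loop
  }
}

handleTicket({
  from: "customer@example.com",
  subject: "Can't log in after password reset",
  body: "I reset my password yesterday and still can't sign in.",
});
```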
Beyond customer support, n8n's platform is also being leveraged for more complex, high-stakes applications. Oberhauser mentioned that even a "Ministry of Defense" uses the platform, alongside large corporations such as Vodafone. This breadth of adoption, from routine customer service to national security, underscores the platform's versatility and the universal need for robust, adaptable AI deployment tools. The ability to orchestrate complex sequences of tasks, integrating various AI models, existing data sources, and human intervention, is what truly unlocks the transformative potential of AI for enterprises.
The interview with Jan Oberhauser thus serves as a compelling narrative on the pragmatic realities of AI integration. It’s not just about building the most powerful AI models, but about building the infrastructure that allows these models to be seamlessly integrated, managed, and optimized within existing business operations. n8n's emphasis on an independent, flexible, and human-centric approach to AI workflow automation offers a blueprint for how enterprises can harness AI's power without sacrificing control, security, or adaptability. The investment from a hardware giant like Nvidia further validates this approach, signaling a broader industry understanding that the future of AI lies in its intelligent deployment.

