With over 80% of enterprises now turning to open-source Large Language Models (Open LLMs), the challenge isn't accessing powerful models; it's deploying them efficiently and securely. Today, companies must stitch together more than 10 different components just to begin deployment, and each optimization can demand thousands of engineering hours. AI-infrastructure startup Pipeshift today announced a $2.5 million seed round to solve this challenge, launching a new-age Platform-as-a-Service (PaaS) that enables engineering teams to orchestrate AI workloads across any infrastructure, cloud or on-premises, with unprecedented speed and control.
The round was led by Y Combinator and SenseAI Ventures, with additional participation from Arka Venture Labs, Good News Ventures, Nivesha Ventures, Astir VC, GradCapital, and MyAsiaVC. Seasoned Silicon Valley angels also joined the round, including Kulveer Taggar (CEO of Zeus), Umur Cubukcu (CEO of Ubicloud and former Head of PostgreSQL at Azure), and Krishna Mehra (former Head of Engineering at Meta and co-founder of Capillary Technologies).

Unlike existing players, which act as GPU brokers offering one-size-fits-all solutions, Pipeshift understands the enterprise need for control and flexibility over infrastructure. It offers an end-to-end MLOps stack for enterprises to train, deploy and scale open-source GenAI models (LLMs, vision models, audio models and image models) across any cloud or on-prem GPUs. As a result, enterprises can bring their AI workloads to production faster and more reliably. And as more model and hardware architectures come to market, Pipeshift future-proofs infrastructure investments: its modular MLOps stack gives enterprises the flexibility to bring down GPU infrastructure costs without additional engineering effort.
“2025 marks the year when Generative AI transitions into production and engineering teams are witnessing the benefits of using open-source models in-house. This offers high levels of privacy and control alongside enhanced performance and lower costs. However, this is a complex and expensive process involving multiple components being stitched together,” said Arko Chattopadhyay, co-founder and CEO of Pipeshift. He added, “Pipeshift’s enterprise-grade orchestration platform eradicates the need for such extensive engineering investments by not only simplifying deployment but also maximizing production throughput.”
Pipeshift’s founding team has been working on this problem for over a year. Co-founders Arko Chattopadhyay, Enrique Ferrao and Pranav Reddy met during their undergraduate studies at Manipal Institute of Technology, where they led a defence robotics non-profit supported by NVIDIA, Dassault Systèmes and SICK Sensor Intelligence. The team focused on deploying machine learning models in the cloud and at the edge to process real-time sensor data and run task-specific vision models. In 2023, they scaled a Llama 2-powered enterprise search app, completely on-prem, within an organization of over 1,000 employees. While building this, they saw first-hand the challenges of running and scaling private AI workloads in production, which pushed them to tackle those problems at Pipeshift.
The timing for Pipeshift is significant. As AI reshapes markets and redefines competition, companies know the rewards for seizing the moment are immense. However, security and data-privacy risks loom large, demanding protection for proprietary IP. These challenges compound in a rapidly evolving technology landscape where missteps lead to expensive delays and lost opportunities. Adding to this complexity is the uniqueness of every business problem: no two AI strategies are the same, and every deployment must align with the distinct needs of the organization. Pipeshift addresses this by combining the flexibility and precision of open LLMs with the scalability of its enterprise MLOps platform, letting businesses overcome these challenges while managing resource demands and ensuring compliance, all without losing sight of their broader goals.
“Enterprises prefer open-source GenAI for the benefits of privacy, model ownership and lower costs. However, transitioning GenAI to production remains a complex and expensive process requiring multiple components to be stitched,” said Rahul Agarwalla, Managing Partner of SenseAI Ventures.
Having already worked with over 30 companies, including NetApp, Pipeshift aims to become the trusted partner for organizations looking to unlock AI's potential while maintaining control of their infrastructure and data.