Cedric Clyburn on Models as a Service

Red Hat's Cedric Clyburn discusses the evolution of AI from code assistants to Models as a Service (MaaS), highlighting on-premise and hybrid deployments with Kubernetes and OpenShift.


Cedric Clyburn, Senior Developer Advocate at Red Hat, discusses the evolving landscape of generative AI and the rise of "Models as a Service" (MaaS). Clyburn highlights how developers have progressed from using AI as simple coding assistants to integrating sophisticated models into their applications through APIs. This shift is driven by the need for greater control over data, privacy, and cost-efficiency, particularly in sensitive sectors like healthcare and finance.

The Evolution of AI Integration

Clyburn traces the adoption of generative AI, noting its progression from basic code-completion tools in 2022 to more advanced applications in 2023. Initially, developers accessed AI models through public APIs, but demand has since shifted toward more sovereign, private AI solutions. This transition is fueled by the growing need to manage sensitive data and ensure compliance with various regulations.

Models as a Service (MaaS) Paradigm

The core of Clyburn's discussion is the Models as a Service (MaaS) paradigm. He explains that MaaS allows organizations to access and manage a variety of AI models, such as those for natural language processing or computer vision, through a unified API. This approach offers significant advantages, including:

  • Transparency: Clear insight into billing and the cost of GPU utilization for each model.
  • Data Privacy and Governance: Organizations maintain control over their data and can adhere to regulatory requirements.
  • Scalability: AI resources scale efficiently to meet fluctuating demand.
  • Observability: Detailed telemetry for token usage, logging, and tracing across the AI model lifecycle.

The full discussion, "AI Models as a Service: Powering Agentic AI, Privacy, & RAG," is available on IBM's YouTube channel.
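To make the unified-API idea concrete, here is a minimal, hypothetical sketch of a MaaS-style gateway: several model backends registered behind one interface, with crude per-model token metering to illustrate the billing transparency and observability points above. The class, method names, and whitespace-based token counting are all illustrative assumptions, not a real Red Hat or IBM API.

```python
# Illustrative sketch only: a toy MaaS gateway with per-model token metering.
# All names here are hypothetical; real platforms meter with tokenizers, not
# whitespace splitting.
from dataclasses import dataclass, field


@dataclass
class ModelGateway:
    """Routes requests to registered model backends and meters usage."""
    backends: dict = field(default_factory=dict)     # model name -> callable
    token_usage: dict = field(default_factory=dict)  # model name -> token count

    def register(self, name, backend):
        self.backends[name] = backend

    def generate(self, model, prompt):
        if model not in self.backends:
            raise KeyError(f"unknown model: {model}")
        reply = self.backends[model](prompt)
        # Crude metering: count whitespace-delimited tokens in and out.
        used = len(prompt.split()) + len(reply.split())
        self.token_usage[model] = self.token_usage.get(model, 0) + used
        return reply


# Usage: two stand-in "models" served behind one interface.
gateway = ModelGateway()
gateway.register("summarizer", lambda p: "summary of: " + p)
gateway.register("classifier", lambda p: "label: neutral")
print(gateway.generate("summarizer", "quarterly revenue report"))
print(gateway.token_usage)  # per-model counts for billing transparency
```

The single `generate` entry point is the key design idea: callers pick a model by name, while accounting and governance live in one place.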

Clyburn emphasizes that MaaS is becoming a de facto standard for deploying AI, allowing teams to leverage multiple models through a single, manageable interface.

On-Premise and Hybrid AI Deployments

A significant aspect of the MaaS discussion is its applicability in diverse environments. Clyburn highlights that organizations can deploy AI models using technologies like Kubernetes and OpenShift, enabling them to run workloads either on-premise, in the cloud, or at the edge. This flexibility is crucial for applications dealing with sensitive data or requiring low-latency processing.
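On Kubernetes or OpenShift, a model server of this kind is described declaratively. The manifest below is a hedged sketch, not a vendor-provided example: the Deployment name, image, and replica count are placeholders, and the GPU request uses the standard `nvidia.com/gpu` extended resource that GPU-enabled clusters expose.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-server               # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: llm-server
  template:
    metadata:
      labels:
        app: llm-server
    spec:
      containers:
        - name: server
          image: example.com/llm-server:latest   # placeholder image
          resources:
            limits:
              nvidia.com/gpu: 1  # schedule onto a node with a free GPU
```

Because the desired state is declarative, the same manifest can target an on-premise cluster, a managed cloud cluster, or an edge deployment, which is the flexibility Clyburn highlights.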

He contrasts this with the traditional approach of relying solely on third-party LLM providers, which can introduce risks related to data privacy, security breaches, and unpredictable costs. By bringing AI models in-house or managing them through a controlled platform, organizations gain greater oversight and can mitigate these risks.

Open Source Technologies for AI Infrastructure

Clyburn points to the importance of open-source technologies in building this AI infrastructure. Platforms like Kubernetes and OpenShift provide the orchestration capabilities needed to manage complex AI workloads, including the efficient deployment and scaling of models across various hardware, such as GPUs.

He notes that while vanilla Kubernetes can be used, solutions like Red Hat OpenShift offer additional enterprise-grade features crucial for AI development and deployment. These include enhanced security, centralized management, and robust observability tools like Prometheus and Grafana, which are essential for monitoring model performance and resource utilization.
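As a small illustration of that monitoring, a Grafana panel backed by Prometheus might chart GPU utilization with a query like the one below. This assumes NVIDIA's DCGM exporter is running and exposing its `DCGM_FI_DEV_GPU_UTIL` metric; the query is a sketch, and metric names differ with other exporters.

```promql
# Average GPU utilization per scrape target over the last 5 minutes,
# assuming the NVIDIA DCGM exporter is deployed on GPU nodes.
avg by (instance) (avg_over_time(DCGM_FI_DEV_GPU_UTIL[5m]))
```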

The Future of AI Development

The shift towards Models as a Service signifies a maturation of the AI deployment landscape. It empowers organizations to build and scale AI applications more effectively, with greater control over their data and infrastructure. Clyburn concludes by encouraging viewers to consider the benefits of this approach for their own AI initiatives, emphasizing the move towards more sovereign and manageable AI solutions.
