nOps, a Databricks Built On partner managing over $4 billion in annual cloud spend, has rebuilt its production application on Databricks Lakebase. This strategic pivot streamlines its architecture, eliminating the need for separate systems to manage application data and analytics.
The company's previous setup mirrored a common challenge for Independent Software Vendors (ISVs) on Databricks: analytics live in the Lakehouse while the application requires a separate relational database. Keeping the two in sync often means maintaining complex ETL pipelines and cron jobs, causing delays and adding operational overhead.
nOps operates an automated cloud cost optimization platform, constantly monitoring, purchasing, and exchanging cloud commitments across AWS, GCP, and Azure. Its model relies on real-time analysis of usage patterns and commitment portfolios to maximize savings while minimizing risk. This data-intensive operation has long leveraged the Databricks Lakehouse for its analytical backbone.
The Two-World Problem
However, the customer-facing application, where users manage budgets and view savings, relied on a separate relational database. This created a disconnect, with data latency between the application and the Lakehouse.
Expanding to multi-cloud support strained this existing architecture. nOps recognized the need for a unified platform that could handle both operational and analytical workloads efficiently.
Choosing Lakebase
The decision to adopt Databricks Lakebase, a managed PostgreSQL database integrated with the Lakehouse, was driven by three key factors: tight coupling to the Lakehouse, auto-scaling capabilities, and ease of adoption.
Jordan Stein, Director of Product at nOps, highlighted immediate access to frequently changing customer data from Lakehouse pipelines, with no scheduled jobs or lag: data now flows to the application in real time.
Lakebase's serverless, auto-scaling compute, which scales to zero when idle, impressed the nOps team. This aligns perfectly with the cost-optimization principles of a company that practices what it preaches.
Familiarity with the Databricks workspace and features like point-in-time restore and flexible OAuth roles simplified the transition. It meant no new tools for their teams to learn.
A Unified Architecture
The new architecture positions Lakebase as the central PostgreSQL database and the single source of truth for both the front-end application and AI infrastructure. Databricks Lakehouse continuously consumes data from Lakebase for analysis and metric computation.
Standardized metrics computed in the Lakehouse are surfaced directly in the front-end via Databricks Metric Views. Data flows unidirectionally from Lakebase into the Lakehouse for analytics, ensuring an unambiguous source of truth and a clean architecture.
This approach eliminates the "sync tax" – the costly code required to move data between disparate systems. Lakebase's native integration with Unity Catalog and Delta Lake sync replaces custom ETL pipelines with managed infrastructure, freeing up engineering time.
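The "sync tax" is concrete: before managed sync, teams hand-roll jobs that extract changed rows from the app database and load them into the analytics store on a cron schedule. The sketch below is an illustrative stand-in, not nOps's actual code (sqlite3 plays both roles here; in the real stack these would be PostgreSQL and Delta tables), showing the category of glue code that Lakebase's managed sync into Unity Catalog makes unnecessary.

```python
import sqlite3

# Stand-ins for the two systems the old architecture had to keep in sync:
# 'app' plays the operational database, 'lake' plays the analytics store.
app = sqlite3.connect(":memory:")
lake = sqlite3.connect(":memory:")

app.execute("CREATE TABLE budgets (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)")
lake.execute("CREATE TABLE budgets (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)")
app.executemany(
    "INSERT INTO budgets VALUES (?, ?, ?)",
    [(1, 500.0, "2024-01-01"), (2, 750.0, "2024-01-02")],
)

def sync_budgets(src, dst, since):
    """The hand-written 'cron job': copy rows changed since the last run."""
    rows = src.execute(
        "SELECT id, amount, updated_at FROM budgets WHERE updated_at > ?", (since,)
    ).fetchall()
    dst.executemany("INSERT OR REPLACE INTO budgets VALUES (?, ?, ?)", rows)
    return len(rows)

synced = sync_budgets(app, lake, since="2023-12-31")
# With managed sync, this entire job, along with its scheduling,
# retries, and monitoring, disappears from the engineering backlog.
```

With managed infrastructure, the scheduling, retry, and monitoring burden around jobs like this is absorbed by the platform rather than the application team.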
Unified governance across operational and analytical data is another significant benefit. When the operational database is a Unity Catalog asset, security policies and lineage are managed in one place.
As a fully managed PostgreSQL service, Lakebase is compatible with existing libraries, ORMs, and SQL tools, minimizing rewrite effort. Migrating involves updating a connection string, not redeveloping queries. This stands in stark contrast to managing a separate database stack, a burden many ISVs building on Databricks face. For instance, Backstage similarly moved its PostgreSQL workload onto Databricks Lakebase, underscoring the trend toward consolidation.
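Because Lakebase speaks the standard PostgreSQL protocol, the migration surface is, in principle, just the connection string. A minimal sketch of that idea (the hostnames, credentials, and database names below are hypothetical placeholders, not real Lakebase endpoints):

```python
from urllib.parse import urlsplit

# Before: self-managed Postgres. After: Lakebase. Only the DSN changes;
# application queries, ORM models, and SQL tooling stay untouched.
OLD_DSN = "postgresql://app_user:secret@legacy-db.internal:5432/nops_app"
NEW_DSN = "postgresql://app_user:token@my-instance.example-lakebase-host.com:5432/nops_app"  # hypothetical host

def db_target(dsn: str) -> tuple[str, str]:
    """Return (host, database) so deploy scripts can log where writes go."""
    parts = urlsplit(dsn)
    return parts.hostname, parts.path.lstrip("/")

# The same helper works on both DSNs: application code is DSN-agnostic,
# so cutting over is a configuration change, not a rewrite.
print(db_target(OLD_DSN))
print(db_target(NEW_DSN))
```

In practice the DSN would live in an environment variable or secret store, so the cutover is a deploy-time configuration change rather than a code change.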
Usage-based pricing with scale-to-zero offers economic advantages for variable workloads. This allows ISVs to pay only for capacity used, directly impacting unit economics.
The ISV Advantage
nOps's adoption demonstrates how Lakebase resolves the tension between OLTP and analytics. By integrating the operational database directly into the Lakehouse, ISVs can ship features faster, as a significant category of integration work is eliminated.
Early adopters like nOps gain a competitive edge by building on a more integrated and simpler platform. Their willingness to embrace Lakebase early has resulted in faster data pipelines, reduced operational overhead, and an improved customer experience.
This move by nOps, which is detailed further on the Databricks blog, showcases a powerful architectural pattern for other ISVs looking to consolidate their data infrastructure.