Mobile gaming giant HARDlight has overhauled its A/B testing analysis with a custom framework built on Databricks. This move addresses a common bottleneck in live-service games: the slow, manual process of analyzing experimental results, which often leads to delayed decisions and eroded confidence in data-driven iteration. The new system aims to standardize analysis, accelerate insight delivery, and democratize access to experiment outcomes across the organization.
The challenge for HARDlight was not just speed but also trust. Inconsistent analytical approaches led to differing interpretations of the same experiments, hindering alignment and weakening A/B testing's role as a scientific decision-making tool. Different stakeholders also required varying levels of detail, from daily status updates to deep dives into player behavior, a spectrum that their existing dashboards struggled to serve effectively.
To scale experimentation, HARDlight needed a unified approach to inference and accessible results. They developed a Databricks-native A/B testing analysis framework that automates the entire process from data ingestion to decision support. Statistical modeling is now applied consistently and transparently upstream, with results published to a daily-refreshing dashboard. This dashboard begins with an LLM-generated summary and allows for deeper exploration of metrics, diagnostics, and recommended actions.
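To make the idea of "consistent, upstream statistical modeling published to a daily dashboard" concrete, the sketch below shows what a nightly Databricks analysis job of this kind might look like. It is a minimal illustration, not HARDlight's actual implementation: the table names (`analytics.ab_test_events`, `analytics.ab_test_results`), the column names, and the choice of a two-proportion z-test are all assumptions made for the example.

```python
# Hypothetical sketch of a nightly A/B analysis job on Databricks.
# Table names, columns, and the two-proportion z-test are illustrative
# assumptions, not HARDlight's actual framework.
from pyspark.sql import SparkSession, functions as F
from statsmodels.stats.proportion import proportions_ztest

spark = SparkSession.builder.getOrCreate()

# Aggregate per-variant player counts and conversions from an assumed events table.
agg = (
    spark.table("analytics.ab_test_events")  # hypothetical source table
    .groupBy("experiment_id", "variant")
    .agg(
        F.countDistinct("player_id").alias("players"),
        F.countDistinct(
            F.when(F.col("converted") == 1, F.col("player_id"))
        ).alias("conversions"),
    )
    .toPandas()
)

results = []
for exp_id, grp in agg.groupby("experiment_id"):
    control = grp[grp["variant"] == "control"].iloc[0]
    for _, test in grp[grp["variant"] != "control"].iterrows():
        # Apply the same test to every experiment so interpretation
        # is consistent across teams.
        stat, p_value = proportions_ztest(
            count=[test["conversions"], control["conversions"]],
            nobs=[test["players"], control["players"]],
        )
        results.append({
            "experiment_id": exp_id,
            "variant": test["variant"],
            "lift": test["conversions"] / test["players"]
                    - control["conversions"] / control["players"],
            "p_value": float(p_value),
        })

# Publish results to a table the daily-refreshing dashboard reads.
spark.createDataFrame(results).write.mode("overwrite").saveAsTable(
    "analytics.ab_test_results"
)
```

Because the statistics are computed once, upstream, and written to a shared table, every downstream view, from the LLM-generated summary to the detailed diagnostics, reports the same numbers.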