The current gold rush in artificial intelligence, marked by dizzying investments and transformative potential, faces a silent yet formidable hurdle: the inability to effectively measure its actual impact on enterprise productivity. This critical oversight forms the core of a compelling discussion between Russ Fradin, CEO and co-founder of Larridin, and Alex Rampell, General Partner at a16z, who spoke on the a16z podcast about the lessons AI can glean from the early days of internet advertising. Their conversation exposes a $700 billion problem: companies are spending vast sums on AI tools without a clear understanding of whether those tools are yielding tangible benefits, or even being used at all.
Fradin, drawing on his experience building measurement infrastructure in the ad tech boom, highlights a striking parallel: "Like ad tech, you’re trying to figure out, does the advertising work? A lot of ad tech is here's an advertisement, and there's this attribution problem." Today, a similar attribution challenge plagues AI. Enterprises are pouring capital into AI solutions, yet many lack the fundamental tools to assess return on investment, leaving a significant portion of these expenditures as speculative bets rather than strategic investments.
A pervasive issue is the clandestine adoption of AI tools by employees. Rampell notes that "there's somebody at every big company who has figured out, I could do something in one minute that used to take eight hours." These highly productive individuals, often fearing repercussions or being seen as circumventing official channels, frequently hide their AI usage from management. This creates a significant blind spot for companies, preventing them from understanding where genuine productivity gains are occurring and how best to scale these efficiencies.
The problem is compounded by the absence of a clear productivity baseline. Without knowing what productivity looked like before AI, companies struggle to quantify its benefits. Furthermore, relying on employee surveys to gauge productivity, as many do, falls prey to Goodhart's Law: when a measure becomes a target, it ceases to be a good measure. Employees, aware that their answers might reflect on the tools their bosses just bought, are incentivized to report positive outcomes regardless of actual impact. This creates a distorted view of AI's efficacy, masking both failures and true successes.
Larridin aims to address this by building a foundational measurement infrastructure for enterprise AI, moving beyond simple adoption rates to actionable insights. Their approach marries traditional productivity research with proprietary behavioral data, seeking to answer crucial questions for CFOs and CIOs. This isn't about stifling innovation but rather accelerating it by providing clarity. "Not with the goal of stopping anything," Fradin asserts, "frankly with the goal of accelerating it." Companies need to know what tools they have, who is using them, and critically, if that usage translates into actual, measurable productivity.
The impact extends beyond individual tasks to organizational responsiveness. Are departments collaborating more efficiently? Are engineers responding faster to product input? These real-world metrics, rather than lines of code written or emails sent, offer a more accurate picture of AI's value. The current lack of such granular, behavioral data means enterprises are flying blind, unable to discern which AI investments are truly moving the needle.
The challenge is immense, given the sheer volume of AI tools emerging and the diverse roles within large organizations. Companies struggle with how to train their workforce, manage security risks, and navigate regulatory complexities, particularly in regions like the EU. Employee anxiety about job security, or about appearing redundant, further complicates the picture, leading to a reluctance to openly embrace new AI workflows without clear guidance and a safe environment. Larridin's Nexus product, for example, is designed to provide safe spaces for AI usage, blocking illegal queries and ensuring data privacy, thereby fostering adoption without fear. This infrastructure is vital not just for measuring current impact, but for strategically planning future AI integration and ensuring competitive advantage in a rapidly evolving landscape.