The prevalent model of artificial intelligence, where developers constantly manage and prompt AI tools, imposes a significant "mental load" that stifles innovation. This was the central theme articulated by Kath Korevec, Director of Product at Google Labs, during her presentation at the AI Engineer Code Summit. Korevec argued for a paradigm shift towards "proactive agents," autonomous AI collaborators designed to anticipate needs and act intelligently within a developer's workflow, freeing human creativity from the mundane.
Korevec framed the current state of AI agents with a relatable household analogy: a broken dishwasher. While her husband offered to do the dishes, she found herself constantly reminding him, effectively carrying the "mental load" of the task despite not physically doing it. This, she explained, mirrors the developer experience with today's asynchronous AI agents. "They can handle some of the work," she noted, "but we're still the ones as developers carrying that mental load and monitoring them."
Humans, she asserted, are fundamentally "unitaskers," not parallel processors. While we may juggle multiple goals, our execution is sequential, and switching between tasks incurs a significant cognitive cost, potentially diminishing productivity by as much as 40%. The gap in attention required to manually kick off an AI task and then wait for its completion breaks flow and momentum. The solution, Korevec posited, lies in building trust with AI collaborators. Developers shouldn't be expected to babysit their agents; instead, they need systems that understand context, anticipate needs, and know precisely when to intervene. "We want Jules to do the dishes without being asked," she quipped, referring to Google Labs' coding agent.
