The prevalent model of artificial intelligence, where developers constantly manage and prompt AI tools, imposes a significant "mental load" that stifles innovation. This was the central theme articulated by Kath Korevec, Director of Product at Google Labs, during her presentation at the AI Engineer Code Summit. Korevec argued for a paradigm shift towards "proactive agents," autonomous AI collaborators designed to anticipate needs and act intelligently within a developer's workflow, freeing human creativity from the mundane.
Korevec framed the current state of AI agents with a relatable household analogy: a broken dishwasher. While her husband offered to do the dishes, she found herself constantly reminding him, effectively carrying the "mental load" of the task despite not physically doing it. This, she explained, mirrors the developer experience with today's asynchronous AI agents. "They can handle some of the work," she noted, "but we're still the ones as developers carrying that mental load and monitoring them."
Humans, she asserted, are fundamentally "unitaskers," not parallel processors. While we may juggle multiple goals, our execution is sequential, and switching between tasks incurs a significant cognitive cost, potentially diminishing productivity by up to 40%. The pause required to manually kick off an AI task and then wait for its completion breaks flow and momentum. The solution, Korevec posited, lies in building trust with AI collaborators. Developers shouldn't be expected to babysit their agents; instead, they need systems that understand context, anticipate needs, and know precisely when to intervene. "We want Jules to do the dishes without being asked," she quipped, referring to Google Labs' project.
Most current AI developer tools are reactive, requiring explicit prompts or actions to generate a response or suggestion. This model, while efficient in its compute usage, places the burden of task management squarely on the human. Imagine a future, Korevec urged, where compute is no longer a limiting factor. Instead of a single reactive assistant, developers could have dozens of small, proactive agents working in parallel, quietly identifying patterns, flagging friction, and handling tedious tasks before being asked.
These proactive systems, Korevec explained, hinge on four essential ingredients: Observation, Personalization, Timely Action, and Seamless Integration. Observation requires the agent to continuously understand the developer's code changes, patterns, and overall project context. Personalization means the agent must learn individual working styles, preferences, and even areas of code to avoid. Timely Action is crucial: an intervention that arrives too early is disruptive, while one that arrives too late misses the moment. Finally, Seamless Integration ensures the agent operates within existing developer environments—terminals, repositories, IDEs—rather than forcing users into new, isolated applications.
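To make the four ingredients concrete, here is a minimal sketch of how they might compose into a single decision loop. This is an illustrative toy, not Jules' actual architecture: the `DeveloperProfile`, `Observation`, and `ProactiveAgent` names, and the callback-based integration, are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DeveloperProfile:
    """Personalization: learned preferences for one developer (hypothetical)."""
    avoid_paths: set = field(default_factory=set)  # areas of code the agent must not touch
    quiet_while_typing: bool = True                # defer suggestions during active edits

@dataclass
class Observation:
    """Observation: one signal from the workspace, e.g. a detected missing test."""
    path: str
    kind: str            # "missing_test", "unused_dependency", "unsafe_pattern", ...
    developer_idle: bool # proxy signal for whether now is a good moment to interject

class ProactiveAgent:
    """Sketch of the four-ingredient loop: observe, personalize,
    act at the right time, and surface results in the existing environment."""

    def __init__(self, profile: DeveloperProfile, integration):
        self.profile = profile
        self.integration = integration  # Seamless Integration: callback into IDE/terminal

    def handle(self, obs: Observation) -> bool:
        # Personalization: respect areas of code the developer wants left alone.
        if obs.path in self.profile.avoid_paths:
            return False
        # Timely Action: intervening mid-edit is too early and disruptive.
        if self.profile.quiet_while_typing and not obs.developer_idle:
            return False
        # Seamless Integration: deliver the suggestion where the developer already works.
        self.integration(f"suggest fix for {obs.kind} in {obs.path}")
        return True

# Toy usage: one suggestion lands, one is suppressed by personalization.
log = []
agent = ProactiveAgent(DeveloperProfile(avoid_paths={"legacy/"}), log.append)
agent.handle(Observation("app.py", "missing_test", developer_idle=True))
agent.handle(Observation("legacy/", "unused_dependency", developer_idle=True))
```

The design choice worth noting is that the agent never opens its own surface; it only pushes into a channel the developer already watches, which is what "seamless" means in practice.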
The vision unfolds in three levels of proactivity, akin to a kitchen hierarchy. Level one, the "attentive sous chef," represents the current capabilities of Jules. It detects missing tests, unused dependencies, or unsafe patterns and proactively suggests or implements fixes. This phase focuses on maintaining code quality and tidiness, allowing the developer to concentrate on more complex tasks.
Level two elevates the agent to a "kitchen manager" role, where it becomes contextually aware of the entire project. It learns the developer's frameworks, deployment styles, and code structure. This manager anticipates needs, keeping the workflow rhythmic and efficient by understanding broader project goals.
Level three is the ultimate ambition: an orchestrator that understands not just context but also consequence. At this level, agents like Jules, Stitch (a design agent), and Insights (a data agent) collaborate, connecting real-world signals like analytics and telemetry to propose improvements across the entire application. This collective intelligence identifies performance fixes, prevents regressions through design changes, and organizes actions based on live data, all while keeping the human firmly in the loop for refinement and redirection.
Google's Jules project is actively pursuing this evolution. Current developments include memory capabilities, allowing Jules to write and edit its own knowledge base, and a critic agent for adversarial code review. Future iterations will introduce a "to-do bot" that proactively addresses flagged tasks and an environment agent to streamline setup. This move towards system awareness aims to transform Jules from a reactive assistant into a truly proactive teammate.
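The critic-agent idea described above can be sketched as a generate-review loop: one agent drafts a patch, an adversarial critic either approves it or returns objections that seed the next draft, and the human is brought back in if no draft survives review. The function below is a hedged illustration of that pattern; `propose`, `critique`, and the round limit are assumptions, not Jules' actual interfaces.

```python
def adversarial_review(propose, critique, max_rounds: int = 3):
    """Generator/critic loop: draft a patch, have a critic agent review it,
    and feed objections back into the next draft until approval or a cap.

    Returns (patch, rounds_used) on approval, or (None, max_rounds) so the
    human stays in the loop when the critic never signs off."""
    feedback = None
    for round_num in range(max_rounds):
        patch = propose(feedback)       # coding agent drafts (or redrafts) a patch
        ok, feedback = critique(patch)  # critic agent approves or explains objections
        if ok:
            return patch, round_num + 1
    return None, max_rounds

# Toy usage: the first draft is rejected for lacking tests; the redraft passes.
drafts = iter(["unsafe patch", "patch with tests"])

def propose(feedback):
    return next(drafts)

def critique(patch):
    return ("tests" in patch, "add tests before merging")

patch, rounds = adversarial_review(propose, critique)
```

Capping the rounds is the key safeguard: rather than looping forever, an undecided review escalates to the developer, matching the talk's insistence on keeping the human in the loop.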
Korevec concluded with a powerful call to action: "The product we build today actually won't be the product we have in the future… We get to invent the future right now." The challenge for founders, VCs, and AI professionals is to embrace this opportunity, to "take bold steps" and "question everything" about traditional software development. The future of AI is not merely about autonomous systems, but about a symbiotic alignment between human and artificial intelligence, collaborating across the full lifecycle of a project.

