Lisa Orr, Product Leader at Zapier, described how her company is using artificial intelligence to transform its support operations, enabling the support team to ship code themselves.
The core problem was the sheer volume of support tickets generated by API changes, which overwhelmed traditional support workflows. Zapier's journey began with a clear objective: resolve integration issues faster by equipping the support team with AI-driven tools.
The initial approach involved shadowing engineers to understand their workflows and identify opportunities for AI intervention. This led to the creation of an "API playground" equipped with AI tools for diagnosis and test generation. However, this proved ineffective because it disrupted engineers' existing workflows, pulling them away from their integrated development environments (IDEs). Orr highlighted this early misstep, noting that the tools "pulled builders out of their workflows." This realization was a critical pivot point, shifting the focus towards embedding AI capabilities directly into the tools engineers already used.
The next iteration involved developing "MCP tools" designed for direct use within IDEs. While these tools saw some adoption, the most promising, "Diagnosis," was too slow, and engineers were unwilling to wait for its lengthy processing time. This frustration with synchronous processes is a common hurdle in AI adoption, where immediate feedback is often desired but not always feasible for complex tasks. The team recognized that "Engineers wouldn't wait for it, revealing we needed an asynchronous approach."
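To make the asynchronous idea concrete, here is a minimal sketch of a fire-and-poll tool pair, assuming the open-source MCP Python SDK (the `mcp` package with `FastMCP`); the tool names, the `run_diagnosis` helper, and the in-memory job store are hypothetical and not Zapier's implementation.

```python
# Minimal sketch of an asynchronous "start, then poll" diagnosis tool.
# Assumes the MCP Python SDK; everything else here is illustrative.
import threading
import uuid

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("scout-tools")
jobs: dict[str, dict] = {}  # job_id -> {"status": ..., "result": ...}


def run_diagnosis(ticket_id: str) -> str:
    """Stand-in for the slow diagnosis step (log analysis, LLM calls, etc.)."""
    return f"diagnosis for ticket {ticket_id}"


@mcp.tool()
def start_diagnosis(ticket_id: str) -> str:
    """Kick off a diagnosis in the background and return a job id immediately."""
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"status": "running", "result": None}

    def worker() -> None:
        jobs[job_id] = {"status": "done", "result": run_diagnosis(ticket_id)}

    threading.Thread(target=worker, daemon=True).start()
    return job_id


@mcp.tool()
def check_diagnosis(job_id: str) -> dict:
    """Poll for the result so the engineer's IDE session is never blocked."""
    return jobs.get(job_id, {"status": "unknown", "result": None})


if __name__ == "__main__":
    mcp.run()
```

With this shape, the IDE gets a job id back immediately and can surface the diagnosis whenever the slow step finishes, rather than blocking the engineer.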
This work culminated in the Scout Agent, a system that chains AI tools together autonomously. The agent reads support tickets, gathers relevant context, generates potential fixes along with corresponding tests, and submits them as merge requests (MRs) ready for review. This autonomous capability has significantly boosted the support team's capacity to handle high ticket volumes. In Orr's words, "an MR ready for review means they can validate and ship a fix quickly before needing to jump on the next incoming ticket."
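A rough sketch of what such a chain can look like is below; every function and name here is a placeholder standing in for the steps described above, not Scout Agent's actual code.

```python
# Illustrative tool chain: every function below is a placeholder for a step
# described in the talk, not Scout Agent's internals.
from dataclasses import dataclass


@dataclass
class Ticket:
    id: str
    body: str


def gather_context(ticket: Ticket) -> str:
    """Collect API docs, integration code, and recent error logs for the ticket."""
    return f"context for {ticket.id}"


def generate_fix(ticket: Ticket, context: str) -> dict:
    """Ask a model for a patch plus matching tests, along with its reasoning."""
    return {"patch": "...", "tests": "...", "reasoning": "why this fix"}


def open_merge_request(ticket: Ticket, fix: dict) -> str:
    """Push a branch and open an MR so a support engineer can validate and ship."""
    return f"https://gitlab.example.com/-/merge_requests/{ticket.id}"


def handle_ticket(ticket: Ticket) -> str:
    """Chain the steps: read the ticket, gather context, generate a fix, open an MR."""
    context = gather_context(ticket)
    fix = generate_fix(ticket, context)
    return open_merge_request(ticket, fix)
```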
A fundamental insight from this process is that the true challenge lies not in code generation alone but in the surrounding ecosystem. Before Scout Agent can generate code, it must have the right context and show its reasoning to earn engineers' trust. After generation, engineers need a streamlined way to validate and correct proposed fixes; otherwise, MRs risk being abandoned. Embedding Scout Agent directly within GitLab addressed this by eliminating context switching, letting teams iterate on solutions without leaving the tool.
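For the GitLab piece specifically, a hedged sketch of how an agent might open an MR programmatically is shown below, using the python-gitlab client; the URL, token, project path, and branch names are placeholders, not Zapier's setup. Putting the agent's reasoning into the MR description is one way to surface its thinking where reviewers already work.

```python
# Hedged sketch: opening an MR with the python-gitlab client so review happens
# where engineers already work. URL, token, project path, and branches are
# placeholders.
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="REDACTED")
project = gl.projects.get("example-group/example-integration")

mr = project.mergerequests.create({
    "source_branch": "scout/fix-ticket-1234",  # branch the agent pushed its patch to
    "target_branch": "main",
    "title": "Scout: proposed fix for ticket #1234",
    # Putting the gathered context and reasoning in the description helps
    # reviewers trust (or correct) the proposed fix without context switching.
    "description": "Root cause, gathered context, and proposed fix explained here.",
})
print(mr.web_url)
```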
Zapier measures improvement through three key failure modes: categorization accuracy (whether Scout should even attempt a ticket), fixability assessment (whether a code fix is actually required), and solution quality (how effective the generated code is). Each metric offers a distinct avenue for improvement.
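As an illustration only, the three failure modes could be tracked with something like the record below; the field names and the idea of logging one outcome per ticket are assumptions made for the example.

```python
# Illustration only: one outcome record per ticket, and the three rates
# corresponding to the failure modes above. Field names are assumptions.
from dataclasses import dataclass


@dataclass
class TicketOutcome:
    correctly_categorized: bool  # should Scout have attempted this ticket at all?
    needed_code_fix: bool        # was a code fix actually required?
    fix_accepted: bool           # was the generated MR validated and shipped?


def failure_mode_rates(outcomes: list[TicketOutcome]) -> dict[str, float]:
    """Compute categorization accuracy, fixability rate, and solution quality."""
    total = len(outcomes) or 1
    needing_fix = [o for o in outcomes if o.needed_code_fix]
    return {
        "categorization_accuracy": sum(o.correctly_categorized for o in outcomes) / total,
        "fixability_rate": len(needing_fix) / total,
        "solution_quality": sum(o.fix_accepted for o in needing_fix) / (len(needing_fix) or 1),
    }
```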
Currently, Scout Agent handles 40% of the support team's integration fixes. Next steps include expanding into engineering teams and automating more of the downstream work, including testing, shipping, and migrations.

