“Context engineering is such a good term, I wish I came up with that term,” admitted Harrison Chase, co-founder of LangChain, reflecting on the industry’s accelerating focus on enabling artificial intelligence to tackle complex, multi-step tasks. He emphasized that the critical path to building reliable, long-horizon AI agents lies not solely in continual improvements to foundation models, but in mastering the complex infrastructure and feedback loops surrounding them. The conversation, hosted by Sonya Huang and Pat Grady of Sequoia Capital on the Training Data podcast, provided a sharp analysis of the architectural evolution required to move AI from single-turn prompts to autonomous systems capable of executing multi-day projects.
Chase, whose LangChain framework has become central to agent development, explained that the core challenge of long-horizon agents is managing state and context across extended time horizons and many interactions. Early attempts at agents, such as AutoGPT, proved the concept but were ultimately unreliable because the underlying models and scaffolding lacked the necessary robustness. Now, with more capable Large Language Models (LLMs), the industry is moving past simple frameworks toward what Sequoia has termed "agent harnesses"—opinionated, structured architectures designed to guide and constrain the non-deterministic nature of the underlying models.
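To make the "agent harness" idea concrete, here is a minimal sketch of a structured loop that owns the agent's state, compresses older context, and bounds the model's behavior with a step budget and a fixed output convention. The names (`AgentState`, `call_model`, `build_context`, `run_agent`) are hypothetical placeholders for illustration only; they are not LangChain APIs or anything described verbatim in the conversation.

```python
from dataclasses import dataclass, field


@dataclass
class AgentState:
    """Explicit, persistent state that outlives any single model call."""
    goal: str
    history: list[str] = field(default_factory=list)  # full event log
    summary: str = ""                                  # compressed older context


def call_model(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real client in practice."""
    return "FINISH: placeholder answer"


def build_context(state: AgentState, max_recent: int = 5) -> str:
    """Context engineering: show the model the goal, a summary of older
    steps, and only the most recent events verbatim."""
    recent = "\n".join(state.history[-max_recent:])
    return (
        f"Goal: {state.goal}\n"
        f"Summary so far: {state.summary}\n"
        f"Recent steps:\n{recent}"
    )


def run_agent(goal: str, max_steps: int = 20) -> str:
    """The harness: a bounded loop that constrains a non-deterministic
    model with explicit structure (step budget, fixed context format,
    a required FINISH marker)."""
    state = AgentState(goal=goal)
    for step in range(max_steps):
        action = call_model(build_context(state))
        state.history.append(f"[step {step}] {action}")
        if action.startswith("FINISH:"):
            return action.removeprefix("FINISH:").strip()
        # Periodically compress older history so long-horizon runs
        # do not overflow the context window.
        if len(state.history) > 10:
            state.summary = call_model(
                "Summarize: " + "\n".join(state.history[:-5])
            )
            state.history = state.history[-5:]
    return "Step budget exhausted"


if __name__ == "__main__":
    print(run_agent("Draft a project plan"))
```

The point of the sketch is the division of labor: the model proposes each step, while the surrounding harness decides what context it sees, how long it may run, and when accumulated state gets summarized rather than carried forward in full.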
