"Tools part refers to the ability to add capabilities to the models... LLMs can't do that on their own. They need tools to be able to perform those actions." This foundational insight, articulated by Katia Gil Guzman of OpenAI's Developer Experience team, sets the stage for a significant shift in AI development. During a recent OpenAI Build Hours session, Guzman, alongside Christine Jones from Startup Marketing, unveiled and demonstrated OpenAI's new suite of built-in tools, designed to empower developers with unprecedented capabilities for scaling AI applications. These tools abstract away much of the underlying complexity, allowing large language models (LLMs) to interact with the real world directly and autonomously.
The core innovation lies in bridging the inherent limitations of LLMs with practical, real-world functionalities. While LLMs excel at language understanding and generation, their knowledge is typically limited by their training data cutoff, and they lack native abilities to perform actions or access external, dynamic information. Built-in tools fundamentally change this paradigm. "Built-in part means you don't have to code anything... you can just use it," Guzman explained, highlighting the immediate accessibility and simplified integration that these hosted tools offer. This means developers can imbue their AI agents with powerful new skills without writing custom functions or managing complex infrastructure.
One of the most immediate benefits is the ability to overcome LLM knowledge cutoffs. For instance, the **Web Search** tool grants models the capacity to query the internet for up-to-date information, sidestepping the temporal limitations of their training data. This is crucial for applications requiring current events, real-time data, or dynamic facts. Complementing this is **File Search**, which allows models to pull relevant information from uploaded documents or vector stores. This feature significantly simplifies the implementation of Retrieval Augmented Generation (RAG), as developers no longer need to manually handle complex processes like chunking, embedding, and ranking proprietary data. "All of that can be automated with the File Search tool," Guzman affirmed, underscoring the efficiency gains.
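In practice, enabling these tools amounts to listing them in an API request. The sketch below builds such a request as a plain dictionary, assuming the OpenAI Responses API; the model name, query, and vector store ID are placeholders, and exact tool field names may vary by API version.

```python
# Sketch: a Responses API request that enables Web Search and File Search.
# Tool "type" strings follow OpenAI's documented tool types; the vector
# store ID below is a placeholder, not a real store.
web_search_tool = {"type": "web_search"}

file_search_tool = {
    "type": "file_search",
    "vector_store_ids": ["vs_example"],  # placeholder vector store ID
}

request = {
    "model": "gpt-4.1",
    "input": "What are the latest trends in retail e-commerce?",
    "tools": [web_search_tool, file_search_tool],
}

# With the official SDK, this request would be sent roughly as:
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.responses.create(**request)
```

The key point is what is absent: no chunking, embedding, or ranking code for RAG, and no scraping code for search; the hosted tools handle both behind a single declaration.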
Beyond information retrieval, built-in tools enable sophisticated interaction and computation. The **Model Context Protocol (MCP)** tool is particularly transformative, offering LLMs access to hundreds of remote MCP servers, effectively connecting them to a vast ecosystem of external applications and APIs, such as Shopify or Stripe. This allows models to perform actions like querying e-commerce inventories or checking financial balances. The **Code Interpreter** further expands this computational prowess, enabling models to generate and execute Python code directly within OpenAI's infrastructure. This capability is invaluable for complex tasks such as data analysis, mathematical computations, and deep image understanding, allowing agents to process and derive insights from structured data.
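Configuring these two tools follows the same declarative pattern. The sketch below is illustrative only: the MCP server URL and label are placeholders (a real integration would point at the provider's actual MCP endpoint), and the container setting reflects OpenAI's hosted sandbox model.

```python
# Sketch: tool configurations for a remote MCP server and Code Interpreter.
# The server URL and label are placeholders, not real endpoints.
mcp_tool = {
    "type": "mcp",
    "server_label": "stripe",                 # placeholder label
    "server_url": "https://mcp.example.com",  # placeholder MCP server URL
    "require_approval": "never",              # let the model call tools freely
}

code_interpreter_tool = {
    "type": "code_interpreter",
    "container": {"type": "auto"},  # OpenAI-hosted Python sandbox
}

request = {
    "model": "gpt-4.1",
    "input": "Check our current balance, then chart monthly totals.",
    "tools": [mcp_tool, code_interpreter_tool],
}
```

With this configuration, the model can decide at inference time whether a question needs a remote API call, a Python computation, or both.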
The true power of these built-in tools emerges from their seamless integration and interoperability. They foster the development of highly capable agentic workflows that can span multiple domains. Developers can combine Web Search for external data, MCP for online sales information (e.g., from Stripe), and Code Interpreter for analyzing offline sales data (e.g., from a CSV file) and visualizing trends. This multi-tool orchestration allows AI agents to tackle comprehensive tasks that would otherwise require extensive, bespoke coding.
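The orchestration described above can be sketched as a single request carrying all three tools plus one natural-language instruction. Everything here is a placeholder shape, not a working integration: the MCP URL, the uploaded-file ID for the offline-sales CSV, and the exact container fields are assumptions for illustration.

```python
# Sketch: one multi-tool request. The model may search the web, query a
# (placeholder) Stripe MCP server, and run Python over an uploaded CSV.
tools = [
    {"type": "web_search"},
    {
        "type": "mcp",
        "server_label": "stripe",
        "server_url": "https://mcp.example.com",  # placeholder endpoint
    },
    {
        "type": "code_interpreter",
        # placeholder file ID for the previously uploaded offline-sales CSV
        "container": {"type": "auto", "file_ids": ["file_offline_sales"]},
    },
]

prompt = (
    "Compare our online sales from Stripe with the offline sales CSV, "
    "pull in current market trends, and plot the combined picture."
)
```

The model, not the developer, decides which tool answers which part of the prompt; the developer's job reduces to declaring what is available.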
A live demonstration showcased the practical application of these tools in building a data exploration dashboard. The model, given a natural language query about sales data, intelligently leveraged Web Search for external market trends, used the Stripe MCP server to retrieve online sales figures, and employed the Code Interpreter to analyze local CSV files containing offline sales data. It then generated visualizations, presenting a holistic view of business performance. This capability was further illustrated by a customer spotlight on Hebbia, which uses Web Search for intricate finance and legal workflows, demonstrating tangible real-world impact.
This evolution significantly lowers the barrier to entry for advanced AI development. "Once you give the model access to the tool, it can decide when and how to use it and we automatically add the tool result to the conversation context," Guzman explained. This "all-in-one" approach, where OpenAI handles the execution and context management, democratizes the creation of sophisticated AI agents. Developers can now build more robust, dynamic, and context-aware applications with minimal coding, focusing their efforts on creative problem-solving rather than infrastructure plumbing.
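One consequence of this "all-in-one" handling is that a single API call can come back with the tool work already done: the response's output list interleaves completed tool-call records with the final message, with no client-side execution loop. The item shapes below are an illustrative approximation of that structure, not an exact schema, and the sample text is invented for the example.

```python
# Hedged sketch: what a Responses-style output list might look like after
# the API has executed a built-in tool on the model's behalf. Shapes and
# text are illustrative placeholders.
example_output = [
    {"type": "web_search_call", "status": "completed"},
    {
        "type": "message",
        "content": [{"type": "output_text", "text": "Sales are up this quarter."}],
    },
]

def final_text(output_items):
    """Return the assistant's final text from a Responses-style output list."""
    for item in output_items:
        if item.get("type") != "message":
            continue  # skip tool-call bookkeeping items
        for part in item.get("content", []):
            if part.get("type") == "output_text":
                return part["text"]
    return None
```

Contrast this with classic function calling, where the client must catch each tool call, execute it, and feed the result back in a loop; here that loop lives server-side.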

