Artificial intelligence systems, for all their impressive capabilities, are fundamentally limited by the quality and depth of the context they receive. Stephen Chin, VP of Developer Relations at Neo4j, presented "Context Engineering: Connecting the Dots with Graphs" at the AI Engineer Code Summit, making the case for an emerging discipline that tackles the inherent shortcomings of Large Language Models (LLMs) and elevates AI's reasoning, problem-solving, and explainability. He argued that moving beyond simple prompt engineering to a more structured, dynamic approach built on graph technology is not merely an enhancement but a foundational shift.
The discussion centered on the evolution from traditional prompt engineering—often a "one-shot, clever phrasing" approach—to a more sophisticated paradigm: context engineering. This evolution is necessitated by the growing complexity of AI agents, which demand dynamic, goal-driven, and selectively curated inputs. "This allows us to think not like prompt engineers, but like information architects," Chin explained, emphasizing the shift towards actively building the model's contextual understanding rather than just crafting clever queries. The objective is to provide LLMs with a rich, relevant, and structured informational landscape, moving from noisy, unfocused data to clear, actionable signals.
A core insight woven throughout the presentation is that LLM responses are only as good as the quality of the context they receive. Without robust context, even the most advanced models risk misinterpreting information, losing track of details, or generating unreliable conclusions—a phenomenon colloquially known as "garbage in, garbage out." Context engineering addresses this by integrating multiple sources of information: Retrieval Augmented Generation (RAG) for pulling in external data, managing state and history for memory, and structuring outputs for better interoperability. This holistic approach ensures that AI agents operate with a comprehensive understanding of their operational domain and user intent.
Memory stands as a critical pillar of effective context engineering. Chin delineated two primary categories: short-term and long-term memory. Short-term memory encompasses the current context, focusing on compressing relevant information and integrating tool results efficiently, preventing the context window from being flooded with noise. Long-term memory, on the other hand, captures learnings over extended interactions, comprising episodic (event-based), semantic/structural (meaning and relationships), and procedural/instructional (how-to knowledge) elements. By effectively managing both short and long-term memory, AI systems can fill informational gaps and avoid the common pitfalls of hallucination.
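The two memory tiers can be made concrete with a short sketch. This is a hypothetical illustration, not Neo4j's or Chin's implementation: `AgentMemory`, its category names, and the size limit are all assumptions chosen to mirror the short-term/long-term split described above.

```python
from collections import deque

class AgentMemory:
    """Hypothetical sketch of the two memory tiers described above."""

    def __init__(self, short_term_limit=5):
        # Short-term: a bounded window of recent turns and tool results,
        # so the context window is not flooded with noise.
        self.short_term = deque(maxlen=short_term_limit)
        # Long-term: learnings captured across interactions, split into
        # the three categories from the talk.
        self.long_term = {"episodic": [], "semantic": [], "procedural": []}

    def observe(self, item):
        """Record a turn or tool result; the oldest entry falls out automatically."""
        self.short_term.append(item)

    def consolidate(self, category, fact):
        """Promote a distilled learning into long-term memory."""
        self.long_term[category].append(fact)

    def build_context(self):
        """Assemble a compressed context: recent window plus stored learnings."""
        return {
            "recent": list(self.short_term),
            "learnings": {k: v for k, v in self.long_term.items() if v},
        }

memory = AgentMemory(short_term_limit=3)
for turn in ["q1", "a1", "q2", "a2"]:
    memory.observe(turn)  # "q1" has already dropped out of the 3-item window
memory.consolidate("semantic", "user prefers concise answers")
print(memory.build_context())
```

The bounded deque stands in for context compression: only the most recent exchanges survive, while durable facts are explicitly consolidated into the long-term store.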
The practical application of context engineering, particularly through knowledge graphs, offers tangible improvements. Knowledge graphs represent facts about people, places, events, or things as interconnected nodes and relationships, in a format readable by both humans and LLMs. This organizing principle provides a rich context for reasoning about data, effectively serving as a digital twin of an organization or domain. The fundamental components of a Neo4j graph, as Chin illustrated, are nodes (entities), relationships (associations), and properties (attributes, including vector embeddings for semantic search). This structured representation allows for complex queries and deep navigation, far surpassing the capabilities of traditional vector searches alone.
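These three building blocks can be sketched in a few lines of plain Python. This is a toy in-memory model for illustration only (the class names and example entities are invented), not the Neo4j driver API; the comment shows how the same pattern would read in Cypher.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                       # entity type, e.g. "Person"
    properties: dict = field(default_factory=dict)

@dataclass
class Relationship:
    start: Node
    rel_type: str                    # association, e.g. "WORKS_FOR"
    end: Node
    properties: dict = field(default_factory=dict)

# Properties are attributes on nodes or relationships; they can include
# a vector embedding, enabling semantic search over the same graph.
alice = Node("Person", {"name": "Alice", "embedding": [0.1, 0.9, 0.2]})
acme = Node("Company", {"name": "Acme"})

# In Cypher, this pattern would read:
#   (alice:Person)-[:WORKS_FOR {since: 2021}]->(acme:Company)
works_for = Relationship(alice, "WORKS_FOR", acme, {"since": 2021})

print(works_for.start.properties["name"], works_for.rel_type,
      works_for.end.properties["name"])
```

The point of the structure is that relationships are first-class data: a query can hop from entity to entity along typed edges rather than rely on similarity alone.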
The synergy between LLMs and knowledge graphs is where true power emerges. While LLMs excel in language understanding, reasoning, and creativity, knowledge graphs provide the essential knowledge, context, and enrichment. When combined, these technologies enable "GraphRAG," an evolution of Retrieval Augmented Generation. "GraphRAG is any retrieval pipeline which also uses graphs as part of the retrieval process," Chin clarified. This integration leads to significantly improved relevancy in answers, as it leverages not just semantic similarity but also factual, domain-specific, and structured knowledge. Furthermore, GraphRAG enhances explainability, allowing users to understand the reasoning behind an AI's responses by tracing the information flow within the knowledge graph. It also facilitates robust security through role-based access control directly on the graph data.
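Chin's definition of GraphRAG — any retrieval pipeline that also uses graphs — can be sketched as a two-step lookup: a vector-similarity search finds the seed entity, then a graph traversal expands it into structured facts. The toy graph, embeddings, and function name below are all hypothetical, chosen only to show the shape of the pipeline.

```python
import math

# Toy knowledge graph: entity -> embedding plus typed outgoing edges.
graph = {
    "Neo4j":  {"embedding": [1.0, 0.1],
               "edges": [("WRITTEN_IN", "Java"), ("SUPPORTS", "Cypher")]},
    "Cypher": {"embedding": [0.9, 0.3], "edges": [("QUERIES", "Neo4j")]},
    "Java":   {"embedding": [0.1, 1.0], "edges": []},
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def graph_rag_retrieve(query_embedding):
    # Step 1: semantic similarity picks a seed entity (plain vector RAG).
    seed = max(graph, key=lambda n: cosine(graph[n]["embedding"], query_embedding))
    # Step 2: expand along relationships -- the "graph" part of GraphRAG --
    # so retrieval returns traceable, structured facts, not just similar text.
    facts = [f"({seed})-[:{rel}]->({dst})" for rel, dst in graph[seed]["edges"]]
    return seed, facts

seed, facts = graph_rag_retrieve([1.0, 0.2])
print(seed, facts)
```

Because each returned fact is an explicit path in the graph, the answer's provenance can be traced hop by hop, which is exactly the explainability benefit described above.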
A demonstration showcased a practical GraphRAG implementation using Neo4j's knowledge graph builder and Claude Code. By ingesting SBOM (Software Bill of Materials) and VEX (Vulnerability Exploitability eXchange) documents into a Neo4j graph, the system could identify and interlink entities such as software components, vulnerabilities, and their relationships. When Claude was asked about a specific vulnerability, it leveraged the graph's schema and multi-step Cypher queries to retrieve highly detailed and contextual information, including CVE numbers, affected libraries, attack types, severity, technical descriptions, and remediation steps. This granular, verifiable information contrasts sharply with the often vague or hallucinated responses from LLMs lacking such structured context.
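The multi-step lookup in the demo can be approximated with a toy traversal. This is not the demo's actual data or Cypher; the component, library, and CVE identifiers below are placeholders, and the chained dictionary lookups stand in for the chained Cypher queries.

```python
# Toy SBOM/VEX-style graph: components depend on libraries, and libraries
# are affected by vulnerabilities carrying severity and remediation details.
# All identifiers here are hypothetical placeholders.
depends_on = {"my-service": ["libexample"]}
affected_by = {"libexample": ["CVE-0000-0001"]}
vulnerabilities = {
    "CVE-0000-0001": {
        "severity": "HIGH",
        "attack_type": "deserialization of untrusted data",
        "remediation": "upgrade libexample to a patched release",
    }
}

def vulnerability_report(component):
    """Multi-step traversal: component -> libraries -> CVEs -> details,
    mirroring the chained Cypher queries in the demo."""
    report = []
    for lib in depends_on.get(component, []):
        for cve in affected_by.get(lib, []):
            report.append({"library": lib, "cve": cve, **vulnerabilities[cve]})
    return report

print(vulnerability_report("my-service"))
```

Each hop in the traversal corresponds to a relationship in the graph, so every field in the final report can be traced back to an ingested SBOM or VEX record rather than a model's guess.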
This advanced approach allows for iterative navigation and information retrieval over the graph, enabling AI agents to engage in sophisticated reasoning. By storing learnings from user and agent interactions, visualizing conversation flows, and analyzing contextual data, graphs empower developers to build explainable AI systems. These systems can go beyond mere pattern matching, offering deeper insights, identifying improvement opportunities, and even preventing the propagation of misinformation. Ultimately, graphs provide the robust, interconnected framework necessary for AI to truly understand, reason, and act with precision and reliability in complex, real-world environments.

