“Just blocking them just because they’re AI is the wrong answer. You’ve really got to understand why you want them, what they’re doing, who they’re coming from, and then you can create these granular rules.” So declared Arcjet CEO David Mytton during a recent discussion with a16z partner Joel de la Garza that delved into the growing complexity of managing web access and the impact of artificial intelligence on web security. The conversation unpacked the critical shift from simplistic bot blocking to a sophisticated, context-aware defense, one that is essential for navigating an internet increasingly populated by AI agents.
The traditional landscape of web security, once dominated by volumetric DDoS attacks, has fundamentally changed. As Joel de la Garza noted, “It was very much using a hammer,” referring to legacy approaches that broadly blocked traffic based on IP addresses or user agents, often inadvertently penalizing legitimate users. This blunt instrument is no longer viable.
Today, automated traffic often constitutes the majority of website visits, and not all of it is malicious. The core challenge for developers and security teams now lies in discerning between beneficial AI agents and harmful bots, a distinction that demands far greater nuance than ever before.
The decision is no longer binary. Many AI bots act on behalf of human users, performing tasks like product research or even purchases. Blocking such traffic indiscriminately means lost revenue and diminished discoverability. As Mytton explained, if an e-commerce site blocks a legitimate AI-driven transaction, “your application will never see it, you never even know that that order was failed.” This underscores the need for security solutions to understand the full context of each request: who the user is, their session details, and their intent within the application.
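As a rough illustration of that kind of in-application, context-aware rule, the sketch below uses a hypothetical `RequestContext` shape and decision function of our own devising; none of the names or thresholds come from the conversation or from Arcjet's product.

```typescript
// Hypothetical shape of the context available to an in-application rule.
interface RequestContext {
  ip: string;
  userAgent: string;
  sessionId?: string;   // present when the request belongs to a logged-in session
  declaredBot?: string; // e.g. "GPTBot" when the client self-identifies
  intent: "browse" | "checkout" | "scrape-suspected";
}

type Decision = "allow" | "challenge" | "deny";

// Illustrative rule: never silently drop checkout traffic just because it is automated.
function decide(ctx: RequestContext): Decision {
  if (ctx.intent === "checkout" && ctx.sessionId) {
    // An AI agent buying on behalf of a real user: blocking it silently loses the order.
    return "allow";
  }
  if (ctx.declaredBot) {
    // Self-identified bots get verified separately (see the DNS sketch below).
    return "challenge";
  }
  if (ctx.intent === "scrape-suspected") {
    return "deny";
  }
  return "allow";
}
```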
To achieve this granular control at internet scale, the industry is moving towards sophisticated fingerprinting and real-time inference at the edge. This involves analyzing characteristics of a request, such as TLS handshakes and HTTP headers, to build unique client fingerprints. Good bots, like those from Google or OpenAI, often self-identify, allowing for verification through reverse DNS lookups. This layered approach enables sites to differentiate between benign crawlers, AI agents performing useful tasks, and genuinely malicious actors.
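As a concrete example of that verification step, the sketch below checks a client claiming to be Googlebot the way Google documents it: reverse-resolve the IP, confirm the hostname falls under googlebot.com or google.com, then forward-resolve the hostname and confirm it maps back to the same IP. The helper name and structure are ours, not from the discussion.

```typescript
import { reverse, resolve4 } from "node:dns/promises";

// Verify a client that claims to be Googlebot: reverse DNS on the IP,
// check the hostname belongs to Google, then forward-resolve the hostname
// and confirm it resolves back to the original IP.
async function isVerifiedGooglebot(ip: string): Promise<boolean> {
  try {
    const hostnames = await reverse(ip);
    for (const host of hostnames) {
      if (!/\.(googlebot|google)\.com$/.test(host)) continue;
      const forward = await resolve4(host);
      if (forward.includes(ip)) return true;
    }
  } catch {
    // Lookup failures are treated as "unverified", not as an error.
  }
  return false;
}

// Usage: only trust a "Googlebot" user agent if the IP checks out.
// isVerifiedGooglebot("66.249.66.1").then((ok) => console.log(ok));
```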
The future of web security hinges on the ability to embed deep analysis directly within applications. With inference costs plummeting and latency shrinking, models running at the edge can now return real-time answers to complex classification questions. This allows web properties to analyze every request with full context, deciding whether to allow, restrict, or flag traffic based on its specific behavior and intent rather than on broad, outdated classifications.
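One way such an edge check could be wired together, purely as a sketch: derive a fingerprint from stable connection properties, have a lightweight classifier score the request, and combine the score with context to produce an allow/restrict/flag outcome. The signal fields, the stubbed `classify` call, and the thresholds are all assumptions for illustration, not a description of any specific product.

```typescript
import { createHash } from "node:crypto";

// Assumed inputs: a few stable connection and header properties.
interface Signals {
  ip: string;
  tlsJa3?: string; // TLS handshake fingerprint, if the proxy exposes it
  userAgent: string;
  acceptLanguage?: string;
}

type Verdict = "allow" | "restrict" | "flag";

// Derive a stable client fingerprint by hashing the signals together.
function fingerprint(s: Signals): string {
  return createHash("sha256")
    .update([s.ip, s.tlsJa3 ?? "", s.userAgent, s.acceptLanguage ?? ""].join("|"))
    .digest("hex");
}

// Placeholder for a small edge-hosted model returning a 0..1 "automation" score.
// Stubbed here; a real deployment would call the model at this point.
async function classify(fp: string, s: Signals): Promise<number> {
  return s.tlsJa3 ? 0.2 : 0.7;
}

// Map score plus context to a verdict instead of a blanket block.
async function evaluate(s: Signals, verifiedBot: boolean): Promise<Verdict> {
  if (verifiedBot) return "allow"; // e.g. a DNS-verified crawler
  const score = await classify(fingerprint(s), s);
  if (score > 0.9) return "restrict"; // rate-limit or challenge, not a silent drop
  if (score > 0.6) return "flag";     // log for review with full context
  return "allow";
}
```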

