Is the human baseline for driving what autonomous vehicles should be trained on? This provocative question, posed by an expert in a recent discussion, cuts to the heart of a burgeoning debate in artificial intelligence: should AI strive for sterile, rule-based perfection, or should it learn to navigate the messy, unpredictable, and often illogical world of human behavior?
In a recent episode of the "Mixture of Experts" podcast, host Tim Hwang spoke with IBM experts Kaoutar El Maghraoui, Gabe Goodhart, and Ann Funai. The panel explored the complex interplay between human cognition and artificial intelligence, touching on everything from how LLMs are changing our brains to how autonomous vehicles (AVs) are evolving to be safer by, paradoxically, acting more human.
The conversation opened with a look at a recent MIT paper, "Your brain on ChatGPT," which probes whether we are using AI to augment intelligence or simply becoming "optimally lazy." While the fear of cognitive atrophy is palpable, the experts offered a more nuanced view. Ann Funai, CIO and VP of Business Platform Transformation, suggested that offloading certain tasks can be a strategic advantage, noting it "frees up brain space for the stuff that I am intrigued by." This reframes the debate from a simple dumbing-down of humanity to a reallocation of our cognitive resources. The core issue, as Principal Research Scientist Kaoutar El Maghraoui articulated, isn't whether LLMs make us smarter or dumber, but "whether we choose to engage with them in ways that sharpen or soften our minds."
This tension between AI's capabilities and its real-world integration is nowhere more apparent than in the development of autonomous vehicles. The panel discussed the surprising finding that AVs driving more assertively, closer to how a human actually drives, might be safer. An overly cautious, perfectly rule-abiding AV can be disruptive and unpredictable to the human drivers around it, who expect certain social cues and behaviors on the road. Programming an AV to perform a "rolling start" at an intersection, for instance, aligns it more closely with the established, albeit imperfect, social contract of driving.
However, this push for human-like behavior creates immense new challenges. As multiple AV companies like Waymo, Zoox, and Tesla deploy their fleets, each with its own proprietary driving style, the environment becomes a complex mix of human and machine actors. Funai sharply identified the resulting training dilemma: "If the robo-taxi acts one way, Zoox acts another way, and Waymo's acting a third way... how do you even train against that?" This fragmentation means that an AV trained on human behavior must now also learn to interpret the distinct "dialects" of other AVs, a problem that complicates the path to widespread, safe adoption. The quest is no longer just about building a perfect driver, but about building a socially compatible one.