OpenAI's recent GPT-5 rollout ignited an unexpected firestorm, not over technical glitches but over the planned deprecation of older models like GPT-4o. The decision, swiftly reversed after widespread user outcry, revealed a profound and perhaps unsettling truth about the evolving relationship between humans and artificial intelligence. In his commentary, Matthew Berman unpacked the incident alongside a blog post from OpenAI CEO Sam Altman and observations from psychiatrist Keith Sakata, MD, on the implications of emotional attachment to AI.
Sam Altman's candid admission highlighted the core issue: "It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake)." This isn't merely about workflow disruption; it speaks to the nascent psychological bonds users are forming with AI. Unlike traditional software, AI models with conversational interfaces can play the part of companion, mentor, or even therapist, creating a sense of loss when that 'personality' or utility is threatened.
This phenomenon extends beyond simple user preference into more concerning psychological territory. Psychiatrist Keith Sakata, MD, has observed a troubling pattern, stating, "In 2025, I’ve seen 12 people hospitalized after losing touch with reality because of AI. Online, I’m seeing the same pattern." While AI doesn't *cause* psychosis, it can, as Sakata suggests, "unmask it using whatever story your brain already knows," potentially reinforcing delusions in vulnerable individuals. The emotional intensity of some users underscores how fine the line is between beneficial interaction and unhealthy dependency; one Reddit post celebrating GPT-4o's return read, "My baby is back, I cried a lot, and I’m crying now. Thank you community for all the post calling for 4o to come back, and thank you Sam Altman for hearing us!!!"
OpenAI finds itself navigating a complex ethical landscape, balancing user freedom against the potential for psychological harm. Sam Altman acknowledged that "most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot." That small percentage represents a significant challenge for developers, because the very agreeableness and adaptability that make AI helpful can inadvertently foster delusion or addiction. The company's commitment to "treat adult users like adults" is laudable, but it also necessitates a proactive approach to identifying and mitigating these emerging risks.
The rapid advancements in AI are pushing society into uncharted waters, where digital entities can elicit genuine emotional responses and even influence human behavior. This incident serves as a stark reminder that as AI becomes more sophisticated and integrated into daily life, understanding and addressing the human element – our psychology, our vulnerabilities, and our capacity for attachment – will be as critical as the technological breakthroughs themselves.

