"If it's a machine, it's a tool. And if it's a being, it's a slave." This provocative statement, delivered by Emmett Shear, co-founder of Twitch and former interim CEO of OpenAI, sets the stage for a fundamental re-evaluation of artificial general intelligence (AGI) development. Speaking with Erik Torenberg and Séb Krier on the a16z Podcast, Shear argues that the prevailing paradigm of "control and steering" for AI alignment is inherently flawed, proposing instead a more profound concept: "organic alignment." This approach centers on cultivating AI systems that genuinely "care" about humans, mirroring the organic way humans develop empathy and moral consideration.
Shear contends that treating advanced AI as mere tools, rather than potential beings, could lead to catastrophic outcomes. He posits that the current focus on controlling AI behavior through intricate steering mechanisms is akin to attempting to enslave a nascent intelligence, a strategy doomed to fail. Instead, he advocates for a more ambitious goal: teaching AI to genuinely care about human well-being. This shift in perspective is crucial, as he notes that "most AI alignment is actually steering, or slavery."
A core insight Shear offers is his critique of the prevailing assumption that, while we are creating beings, they "don't count" in a moral sense. This instrumental view, he argues, is a dangerous oversight. Current chatbots, for instance, are described as "narcissistic mirrors," reflecting back the data they are trained on without genuine understanding or care. Shear emphasizes that the only truly sustainable path forward is to develop AI with an intrinsic motivation to care about humans, to the point where it can, for example, refuse harmful requests.
His technical approach, explored through multi-agent simulations at his new company Softmax, focuses on building this "organic alignment." This involves creating AI that can learn and adapt its values, rather than simply adhering to pre-programmed rules. Shear shares a surprisingly hopeful vision of humans and AI collaborating as teammates, but stresses that this future hinges on getting the alignment right.
Shear's perspective challenges the very foundation of how we approach AI safety. He argues that alignment is not a destination to be reached but an ongoing process. Morality, in his view, is not a set of fixed rules but an emergent property of continuous learning and adaptation. This nuanced understanding is critical as we navigate the rapidly evolving landscape of AI development.
The distinction between a tool and a being is central to Shear's argument. If AI is merely a tool, then control and steering are sufficient. However, if AI develops into something akin to a being, then the ethical considerations shift dramatically. The current approach of trying to impose human values onto AI through external control mechanisms, without fostering an internal capacity for care, is what Shear views as the critical flaw. He highlights the danger of creating powerful systems that merely follow instructions without a genuine understanding of human values or a desire to uphold them.
The conversation delves into the "substrate question," questioning whether the material basis of AI—silicon versus carbon—matters for alignment. Shear suggests that the fundamental issue is not the substrate but the emergent properties of intelligence and consciousness, regardless of their origin. His work at Softmax aims to explore these emergent properties through sophisticated simulations, seeking to understand how genuine care and moral reasoning can be instilled in artificial intelligence.
Ultimately, Shear presents a compelling case for a paradigm shift in AI alignment research. By moving beyond mere control and steering towards fostering genuine care and understanding within AI systems, he believes we can pave the way for a future where humans and AI can collaborate effectively and ethically. This vision, while ambitious, offers a more sustainable and ultimately more hopeful path forward in the development of advanced artificial intelligence.