Cristopher Moore, a professor at the Santa Fe Institute with a background spanning physics, computer science, and machine learning, recently sat down with Machine Learning Street Talk at the Diverse Intelligences Summer Institute. The conversation explored the surprising efficacy of current AI models, the inherent structure of the real world, and the boundaries of computability, all viewed through Moore's self-described "frog" philosophy of diving deep into specific, concrete problems.
Moore argues that the remarkable success of models like transformers isn't merely about brute-force computation or statistical luck. Instead, it stems from a fundamental truth: "Real-world data is not designed by an adversary to be as tricky as possible." In other words, while theoretical computer science often judges algorithms against worst-case, adversarially constructed inputs, the instances we actually encounter are far more benign. The world, he posits, is not random; it is imbued with rich, exploitable structure that these models learn to leverage. This intrinsic order, rather than adversarial complexity, is the wellspring of AI's current capabilities.
