"I think the bias-variance trade-off is an incredible misnomer. There doesn't actually have to be a trade-off." This provocative statement from Professor Andrew Wilson of NYU, articulated during his interview with MLST, encapsulates a fundamental challenge to decades of machine learning orthodoxy. Wilson, a distinguished figure in AI research, spoke with the interviewer about the prevailing misconceptions surrounding model complexity, generalization, and the very nature of artificial intelligence, arguing that many deeply held beliefs are not only wrong but actively hinder progress.
The prevailing wisdom in machine learning has long dictated a cautious approach to model complexity. The classic bias-variance trade-off posits a delicate balancing act: a model that is too simple (high bias) might underfit, failing to capture underlying patterns, while one that is too complex (high variance) might overfit, essentially memorizing data and performing poorly on unseen examples. For years, this trade-off has guided model design, pushing practitioners to fear overly expressive models.
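The classic picture can be illustrated with a small experiment. The sketch below (my own illustration, not an example from the interview) fits polynomials of increasing degree to noisy samples of a sine curve: a degree-1 fit underfits (high error everywhere), while a high-degree fit drives training error down, the regime the textbook account warns will inflate test error.

```python
import numpy as np

def poly_mse(degree, x_train, y_train, x_test, y_test):
    # Least-squares polynomial fit; returns (train MSE, test MSE).
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

# Synthetic data: noisy observations of sin(2*pi*x) on [0, 1].
rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 1, 30))
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, 30)
x_test = np.sort(rng.uniform(0, 1, 200))
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.3, 200)

# Degree 1 underfits; degree 9 has far more capacity than the data needs.
for d in (1, 3, 9):
    tr, te = poly_mse(d, x_train, y_train, x_test, y_test)
    print(f"degree {d}: train MSE {tr:.3f}, test MSE {te:.3f}")
```

Training error necessarily shrinks as the degree grows (each lower-degree model is nested in the higher-degree one); the traditional trade-off story is precisely the claim that this extra capacity is bought at the cost of variance on unseen data, which is the claim Wilson disputes for modern models.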
