Modern deep neural networks owe much of their efficacy to residual connections, which stabilize optimization and preserve information flow. The conventional approach of stacking many independent layers, however, carries significant computational overhead. This paper introduces SCORE (Skip-Connection ODE Recurrent Embedding), a discrete recurrent alternative that reframes network depth as an iterative refinement process: a single shared neural block is applied repeatedly, inspired by Ordinary Differential Equations (ODEs) but deliberately avoiding their solvers.
Iterative Refinement Over Stacking
SCORE replaces the sequential composition of independent layers with the recurrent application of a single shared neural block F. The core update rule, h_{t+1} = (1 - d_t) * h_t + d_t * F(h_t), uses a step size d_t to control the magnitude and stability of each update: d_t = 0 leaves the state unchanged, while d_t = 1 applies F fully, so network depth becomes a discrete iterative process. Unlike continuous Neural ODEs, SCORE runs a fixed number of iterations and trains with standard backpropagation, which simplifies implementation and eliminates the need for ODE solvers or adjoint methods. This iterative-depth strategy marks a clear departure from conventional layer stacking.
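To make the update concrete, here is a minimal PyTorch sketch of the recurrence. Only the update rule itself comes from the text above; the choice of F (a two-layer MLP here), the per-iteration learnable step sizes, and the sigmoid squashing that keeps d_t in (0, 1) are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class SCOREBlock(nn.Module):
    """Hypothetical sketch: one shared block F applied recurrently."""

    def __init__(self, dim: int, num_iters: int = 8):
        super().__init__()
        # F: a single shared transformation reused at every iteration
        # (a two-layer MLP here; the paper's actual F may differ).
        self.f = nn.Sequential(
            nn.Linear(dim, dim),
            nn.GELU(),
            nn.Linear(dim, dim),
        )
        # One learnable step size per iteration, squashed to (0, 1)
        # to keep the convex update stable (an assumption).
        self.step_logits = nn.Parameter(torch.zeros(num_iters))
        self.num_iters = num_iters

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        for t in range(self.num_iters):
            d_t = torch.sigmoid(self.step_logits[t])
            # Core update: h_{t+1} = (1 - d_t) * h_t + d_t * F(h_t)
            h = (1 - d_t) * h + d_t * self.f(h)
        return h

# Because the iteration count is fixed, this is an ordinary module
# trainable with plain autograd: no ODE solver or adjoint method.
model = SCOREBlock(dim=64, num_iters=8)
h0 = torch.randn(32, 64)   # batch of initial embeddings
out = model(h0)
out.sum().backward()       # standard backpropagation
```

Note that the parameter count is that of a single block regardless of how many refinement iterations run, which is where the savings over stacking independent layers comes from.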