The complexity of neural activity captured in electroencephalography (EEG) signals remains a significant hurdle in translating neuroscience findings into actionable AI applications. Existing foundation models, while advancing generalized EEG decoding, often exhibit a bias towards high-frequency oscillations because of the conventional masking strategies used during self-supervised pretraining, leaving crucial low-frequency rhythmic patterns under-explored.
Challenging the Reconstruction Objective with Gaussian Smoothing
To address this, Darankoum et al. introduce a foundation model employing a novel Gaussian-smoothed masking strategy on Short-Time Fourier Transform (STFT) maps. By applying joint time, frequency, and time-frequency Gaussian masks, the reconstruction task becomes substantially harder. This forces the model to learn more intricate neural patterns spanning both high- and low-frequency domains, a critical step for comprehensive EEG analysis. This refined pretraining approach is central to the effectiveness of SpecHi-Net EEG decoding.
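The joint masking idea can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the mask ratios, smoothing widths, and the choice to blur a random binary mask with a Gaussian kernel and re-threshold it are all assumptions made for illustration. It shows the general shape of the technique: compute an STFT map of an EEG segment, build Gaussian-smoothed masks along the frequency axis, the time axis, and the full time-frequency plane, and zero out the masked bins so the original map becomes the reconstruction target.

```python
import numpy as np
from scipy.signal import stft
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic single-channel "EEG" segment: 4 s at 256 Hz (stand-in data).
fs = 256
x = rng.standard_normal(4 * fs)

# STFT magnitude map, shape (freq_bins, time_frames).
_, _, Z = stft(x, fs=fs, nperseg=128)
S = np.abs(Z)

def gaussian_smoothed_mask(shape, drop_frac, sigma, rng):
    """Random binary mask blurred with a Gaussian kernel, then
    re-thresholded so roughly `drop_frac` of bins are masked with
    soft, blob-like boundaries instead of isolated speckles.
    (Illustrative construction, not the paper's exact procedure.)"""
    hard = rng.random(shape) < drop_frac
    soft = gaussian_filter(hard.astype(float), sigma=sigma)
    return soft > np.quantile(soft, 1 - drop_frac)

# Joint masking: whole frequency bands (rows), whole time frames
# (columns), and 2-D time-frequency patches. Ratios are assumptions.
freq_mask = gaussian_smoothed_mask((S.shape[0], 1), 0.2, 2.0, rng)
time_mask = gaussian_smoothed_mask((1, S.shape[1]), 0.2, 2.0, rng)
tf_mask = gaussian_smoothed_mask(S.shape, 0.3, 1.5, rng)

# Union of the three masks, broadcast to the full map; the model
# would be trained to reconstruct S from masked_S.
combined = freq_mask | time_mask | tf_mask
masked_S = np.where(combined, 0.0, S)
```

Because the frequency-axis mask removes entire bands (including low-frequency rows) rather than scattered bins, the model cannot rely on neighboring high-frequency detail alone to fill the gaps, which is the intended effect of making the reconstruction objective harder.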