The standard approach to adapting large pretrained models for new tasks applies parameter-efficient fine-tuning (PEFT) and low-rank compression sequentially. This decoupled strategy risks suboptimal compression: the compressed subspace is never aligned with the downstream objective, so a constrained global parameter budget can be spent on directions the task does not need. To address this, researchers have introduced JACTUS (Joint Adaptation and Compression with a Task-aware Union of Subspaces), a framework that unifies these two processes.
Unified Subspace for Efficient Adaptation
JACTUS first estimates input and pre-activation gradient covariances from a small calibration set, then forms an orthonormal basis for the union of the leading subspaces of these covariances and of the pretrained weights. Each weight matrix is projected into this union subspace and approximated at low rank there, so the directions preserved by compression are explicitly coupled to the directions adaptation needs (a minimal sketch of this step appears below). On top of the per-layer factorization, JACTUS allocates ranks globally by marginal gain per parameter and trains only a compact core matrix per layer (see the allocation sketch that follows). This coupling mitigates the misalignment of the decoupled pipeline and yields a deployable low-rank model that retains no full frozen weights and supports rapid, robust tuning.
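The exact JACTUS procedure is not spelled out here, but the union-subspace step can be sketched for a single linear layer. The function name and the variables `W`, `X_cal`, `G_cal`, and `k` below are hypothetical, not JACTUS API; this is a minimal illustration under the assumption that "union" means stacking task and weight directions and orthonormalizing them:

```python
import torch

def union_subspace_factorization(W, X_cal, G_cal, k):
    """Hypothetical sketch of a JACTUS-style projected low-rank step.

    W      -- (d_out, d_in) pretrained weight matrix
    X_cal  -- (n, d_in)  calibration inputs to this layer
    G_cal  -- (n, d_out) pre-activation gradients on the calibration set
    k      -- per-side subspace size for this layer
    """
    n = X_cal.shape[0]
    # Input and pre-activation-gradient covariances from the calibration set.
    C_in = X_cal.T @ X_cal / n                      # (d_in, d_in)
    C_out = G_cal.T @ G_cal / n                     # (d_out, d_out)

    # Leading eigenvectors of each covariance (eigh returns ascending order)
    # plus the weight's own singular subspaces.
    _, V_in = torch.linalg.eigh(C_in)
    _, V_out = torch.linalg.eigh(C_out)
    U_w, _, Vh_w = torch.linalg.svd(W, full_matrices=False)

    # Orthonormal bases for the union of task and weight directions on each
    # side (overlapping directions could be pruned in practice).
    B_in = torch.linalg.qr(torch.cat([V_in[:, -k:], Vh_w[:k].T], dim=1)).Q
    B_out = torch.linalg.qr(torch.cat([V_out[:, -k:], U_w[:, :k]], dim=1)).Q

    # Project W into the union subspace; the small core is the only matrix
    # that gets trained, with B_out and B_in kept frozen.
    core = B_out.T @ W @ B_in                       # (2k, 2k)
    return B_out, core, B_in                        # W ≈ B_out @ core @ B_in.T
```

A layer factorized this way would compute `B_out @ (core @ (B_in.T @ x))` at deployment, which is how the full frozen `W` can be discarded: only the two thin bases and the trainable core remain.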
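The global rank allocation can plausibly be read as a greedy loop: repeatedly grant one more unit of rank to whichever layer currently offers the best marginal gain per added parameter, until the global budget is exhausted. The sketch below follows that reading; the `gains` and `cost` fields are assumptions (e.g., per-rank captured spectral energy and `d_out + d_in` parameters per rank unit), not the paper's interface:

```python
import heapq

def allocate_ranks(layers, budget):
    """Greedy global rank allocation by marginal gain per parameter.

    layers -- list of dicts with
        'gains' : gains[r] = objective improvement from raising this
                  layer's rank from r to r + 1 (assumed precomputed)
        'cost'  : parameters added per unit of rank
    budget -- total parameter budget across all layers
    Returns one allocated rank per layer.
    """
    ranks = [0] * len(layers)
    heap = []
    for i, layer in enumerate(layers):
        # Marginal gain per parameter of this layer's first rank unit.
        heapq.heappush(heap, (-layer['gains'][0] / layer['cost'], i))

    spent = 0
    while heap:
        _, i = heapq.heappop(heap)
        cost = layers[i]['cost']
        if spent + cost > budget:
            continue  # this increment never fits again; try cheaper layers
        spent += cost
        ranks[i] += 1
        r = ranks[i]
        if r < len(layers[i]['gains']):
            # Re-queue the layer with its next marginal gain.
            heapq.heappush(heap, (-layers[i]['gains'][r] / cost, i))
    return ranks
```

Greedy selection by marginal gain per parameter is the natural choice here because, when per-layer gain curves are diminishing, it spends a shared budget where each additional parameter buys the most, rather than fixing a uniform rank across layers.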