JACTUS AI Unifies Compression and Adaptation

JACTUS AI unifies parameter compression and task adaptation, outperforming sequential methods with fewer retained parameters across vision and language tasks.

Figure: The JACTUS AI framework unifies adaptation and compression for enhanced efficiency.

The standard approach to adapting large pretrained models for new tasks applies parameter-efficient fine-tuning (PEFT) and low-rank compression sequentially. This decoupled strategy risks suboptimal compression because the compressed subspace is never aligned with the downstream objective, wasting part of the global parameter budget. To address this, researchers have introduced JACTUS (Joint Adaptation and Compression with a Task-aware Union of Subspaces), a framework that unifies these two processes.
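For context, the decoupled compress-then-finetune pipeline described above can be sketched in a few lines. This is an illustrative baseline, not part of JACTUS; the function name, the choice of truncated SVD for compression, and the LoRA-style adapter are assumptions made purely for the example.

```python
import torch

def compress_then_finetune(W: torch.Tensor, rank: int):
    """Illustrative decoupled baseline (not JACTUS): compress a weight
    matrix to a task-agnostic low-rank subspace, then attach a separate
    LoRA-style adapter that is trained afterwards on the downstream task."""
    # Step 1: task-agnostic compression via truncated SVD of the pretrained weight.
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    W_compressed = U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]

    # Step 2: separate PEFT adapter; its subspace is chosen independently
    # of the compression step, which is the misalignment JACTUS targets.
    d_out, d_in = W.shape
    lora_A = (0.01 * torch.randn(rank, d_in)).requires_grad_(True)
    lora_B = torch.zeros(d_out, rank, requires_grad=True)
    # Effective adapted weight: W_compressed + lora_B @ lora_A
    return W_compressed, lora_A, lora_B
```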

Unified Subspace for Efficient Adaptation

JACTUS first estimates input and pre-activation gradient covariances from a small calibration set, then forms an orthogonal union of the subspaces spanned by these covariances and by the pretrained weights. A projected low-rank approximation is performed within this unified subspace. Crucially, JACTUS allocates ranks globally according to the marginal gain per parameter and trains only a compact core matrix. By explicitly coupling the directions preserved for compression with those required for adaptation, JACTUS mitigates misalignment and yields a deployable low-rank model that does not retain the full frozen weights and can be tuned rapidly and robustly.
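A minimal sketch of the unified-subspace step is shown below, assuming a single linear layer, simple top-eigenvector covariance estimators, and a fixed per-layer rank. All function and variable names are illustrative; the exact estimators, weighting, and orthogonalization used by JACTUS may differ.

```python
import torch

def jactus_subspace_sketch(W, calib_inputs, calib_grads, rank):
    """Sketch of a task-aware union of subspaces for one linear layer.

    W            : (d_out, d_in) pretrained weight
    calib_inputs : (n, d_in) layer inputs from the calibration set
    calib_grads  : (n, d_out) pre-activation gradients from the calibration set
    """
    # 1. Covariance estimates from the small calibration set.
    C_in = calib_inputs.T @ calib_inputs / calib_inputs.shape[0]   # input covariance
    C_out = calib_grads.T @ calib_grads / calib_grads.shape[0]     # gradient covariance

    # 2. Orthogonal union: stack leading eigenvectors of the covariances with the
    #    pretrained weight's singular directions, then re-orthogonalize via QR.
    evecs_in = torch.linalg.eigh(C_in).eigenvectors[:, -rank:]     # (d_in, r)
    evecs_out = torch.linalg.eigh(C_out).eigenvectors[:, -rank:]   # (d_out, r)
    U_w, _, Vh_w = torch.linalg.svd(W, full_matrices=False)
    right_basis, _ = torch.linalg.qr(torch.cat([evecs_in, Vh_w[:rank].T], dim=1))
    left_basis, _ = torch.linalg.qr(torch.cat([evecs_out, U_w[:, :rank]], dim=1))

    # 3. Projected low-rank approximation: only the small core matrix is trainable.
    core = (left_basis.T @ W @ right_basis).clone().requires_grad_(True)
    W_approx = left_basis @ core @ right_basis.T   # deployable low-rank reconstruction
    return left_basis, core, right_basis, W_approx
```

Per the article, the ranks are not fixed per layer as in this sketch: they are allocated globally by comparing the marginal gain per added parameter across layers, and only the core matrices are updated during fine-tuning.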

State-of-the-Art Performance on Vision and Language

The efficacy of JACTUS AI is demonstrated across both vision and language domains. On vision tasks using ViT-Base, JACTUS achieves an average accuracy of 89.2% across eight datasets while retaining only 80% of parameters. This performance surpasses strong 100% PEFT baselines like DoRA (87.9%). In the language domain, JACTUS applied to Llama2-7B for commonsense QA tasks reaches an average accuracy of 80.9% with the same 80% retained-parameter budget. This outpaces 100% PEFT methods such as DoRA (79.7%) and outperforms prior compress-then-finetune pipelines under identical budget constraints. The researchers plan to release the code for JACTUS AI.
