LASER-RMT: Optimizing Models Through Random Matrix Theory

Cognitive Computations presents an implementation of Layer-Selective Rank Reduction (LASER) for optimizing large language models. Where the original LASER relies on a brute-force search to decide how far to reduce the rank of each layer's weight matrices, this adaptation uses the Marchenko-Pastur law from Random Matrix Theory to set the cutoff analytically: singular values that fall within the noise bulk predicted by the law can be discarded as carrying no signal. This principled reduction in model complexity maintains, and in some cases improves, performance metrics such as perplexity, while avoiding the expensive trial-and-error of the original approach.
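To make the Marchenko-Pastur cutoff concrete, here is a minimal PyTorch sketch of the idea, not the repository's actual code. It truncates a weight matrix's singular values at the upper edge of the Marchenko-Pastur bulk, sigma * (sqrt(m) + sqrt(n)), using the entrywise standard deviation as a crude noise estimate for sigma (an assumption made for illustration; laserRMT may estimate sigma differently):

```python
import torch

def marchenko_pastur_threshold(sigma: float, n_rows: int, n_cols: int) -> float:
    # Upper edge of the Marchenko-Pastur bulk for the singular values of an
    # n_rows x n_cols matrix with i.i.d. noise entries of std sigma.
    # Singular values below this edge are indistinguishable from noise.
    return sigma * (n_rows ** 0.5 + n_cols ** 0.5)

@torch.no_grad()
def mp_rank_reduce(weight: torch.Tensor) -> torch.Tensor:
    """Return a low-rank copy of `weight`, keeping only the singular
    values above the Marchenko-Pastur noise edge."""
    U, S, Vh = torch.linalg.svd(weight.float(), full_matrices=False)
    # Crude noise estimate: entrywise std of the matrix (an illustrative
    # assumption; a real implementation may estimate sigma from the bulk
    # of the singular spectrum instead).
    sigma = weight.float().std().item()
    threshold = marchenko_pastur_threshold(sigma, *weight.shape)
    k = int((S > threshold).sum().item())  # number of singular values kept
    # Reassemble the rank-k approximation from the truncated SVD factors.
    return (U[:, :k] * S[:k]) @ Vh[:k, :]
```

In practice such a reduction would be applied layer by layer, re-measuring perplexity after each truncation to confirm that discarding the sub-threshold singular values helps rather than hurts.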

Documentation and code are available in the project's GitHub repository.