# awesome-LoRA

A curated list of Parameter Efficient Fine-tuning papers with a TL;DR.

## Low-Rank Decomposition

| Title & Authors | TL;DR | Links |
| --- | --- | --- |
| **LoRA: Low-Rank Adaptation of Large Language Models**<br>Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen | $h = W_0x + BAx$, with $B \in \mathbb{R}^{d\times r}$, $A \in \mathbb{R}^{r\times d}$, $r \ll d$ (see the sketch below this table) | Github<br>Paper |
| **DoRA: Weight-Decomposed Low-Rank Adaptation**<br>Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, Min-Hung Chen | DoRA decomposes the pre-trained weight into two components, magnitude and direction, and uses LoRA to adapt the direction component | Github<br>Paper |
| **VeRA: Vector-based Random Matrix Adaptation**<br>Dawid J. Kopiczko, Tijmen Blankevoort, Yuki M. Asano | VeRA leverages frozen random projections to further reduce the number of trainable parameters | Github<br>Paper |
| **AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning**<br>Qingru Zhang, Minshuo Chen, Alexander Bukharin, Nikos Karampatziakis, Pengcheng He, Yu Cheng, Weizhu Chen, Tuo Zhao | $h = W_0x + U\Sigma V^\top x$; prunes the singular values of unimportant updates to adaptively allocate the parameter budget | Github<br>Paper |
| **Mixture-of-Subspaces in Low-Rank Adaptation**<br>Taiqiang Wu, Jiahao Wang, Zhe Zhao, Ngai Wong | $h = W_0x + BSAx$, with $B \in \mathbb{R}^{d\times r}$, $A \in \mathbb{R}^{r\times d}$, $S \in \mathbb{R}^{r\times r}$, $r \ll d$ | Github<br>Paper |
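
A minimal PyTorch sketch of the LoRA update $h = W_0x + BAx$ from the table above. This is an illustration, not the official implementation; the class name, initialization, and `alpha` scaling are assumptions based on the paper's description.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W0 plus a trainable low-rank update BA (illustrative)."""
    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)               # W0 stays frozen
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # A in R^{r x d}
        self.B = nn.Parameter(torch.zeros(d_out, r))         # B in R^{d x r}; zero init so BA = 0 at start
        self.scale = alpha / r                               # LoRA's alpha/r scaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # h = W0 x + (alpha/r) * B A x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

MoSLoRA would insert a small trainable $r \times r$ mixer $S$ between $A$ and $B$ (giving $BSAx$), and AdaLoRA instead parametrizes the update as $U\Sigma V^\top$ and prunes entries of $\Sigma$ during training.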

## Orthogonal Finetuning

| Title & Authors | TL;DR | Links |
| --- | --- | --- |
| **Controlling Text-to-Image Diffusion by Orthogonal Finetuning**<br>Zeju Qiu, Weiyang Liu, Haiwen Feng, Yuxuan Xue, Yao Feng, Zhen Liu, Dan Zhang, Adrian Weller, Bernhard Schölkopf | $h = \mathbf{R} W_0 x$ with $\mathbf{R}\mathbf{R}^\top = \mathbf{I}$, parametrized via the Cayley transform $\mathbf{R} = (I + Q)(I - Q)^{-1}$, where $Q$ is a skew-symmetric matrix satisfying $Q = -Q^\top$ (see the sketch below this table) | Github<br>Paper |
| **Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization**<br>Weiyang Liu, Zeju Qiu, Yao Feng, Yuliang Xiu, Yuxuan Xue, Longhui Yu, Haiwen Feng, Zhen Liu, Juyeon Heo, Songyou Peng, Yandong Wen, Michael J. Black, Adrian Weller, Bernhard Schölkopf | A parameter-efficient butterfly factorization of $\mathbf{R}$, inspired by the FFT algorithm | Github<br>Paper |
| **Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation**<br>Xinyu Ma, Xu Chu, Zhibang Yang, Yang Lin, Xin Gao, Junfeng Zhao | The rotation matrix $\mathbf{R}$ is represented as a product of Givens rotations | Github<br>Paper |
| **Bridging The Gap between Low-rank and Orthogonal Adaptation via Householder Reflection Adaptation**<br>Shen Yuan, Haotian Liu, Hongteng Xu | The rotation matrix $\mathbf{R}$ is represented as a product of Householder reflections $H = I - 2uu^\top$ | Github<br>Paper |
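
A minimal PyTorch sketch of Cayley-parametrized orthogonal finetuning, $h = \mathbf{R}W_0x$, following the formulas in the OFT row above. This is an assumption-laden illustration: the actual method keeps $\mathbf{R}$ block-diagonal for parameter efficiency, which this dense version omits.

```python
import torch
import torch.nn as nn

class CayleyOFTLinear(nn.Module):
    """Rotate a frozen weight: h = R W0 x, with R orthogonal by construction (illustrative)."""
    def __init__(self, d: int):
        super().__init__()
        self.base = nn.Linear(d, d, bias=False)
        self.base.weight.requires_grad_(False)    # W0 stays frozen
        self.q = nn.Parameter(torch.zeros(d, d))  # unconstrained parameters, skew-symmetrized below

    def rotation(self) -> torch.Tensor:
        Q = self.q - self.q.T                     # enforce Q = -Q^T (skew-symmetric)
        I = torch.eye(Q.shape[0], device=Q.device)
        return (I + Q) @ torch.linalg.inv(I - Q)  # Cayley transform: orthogonal for any skew Q

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        W = self.rotation() @ self.base.weight    # R W0
        return x @ W.T
```

Since $Q$ is initialized to zero, $\mathbf{R} = I$ and training starts exactly at the pre-trained model. BOFT makes the same idea cheaper with a butterfly factorization of $\mathbf{R}$, and HRA instead composes $\mathbf{R}$ from Householder reflections $H = I - 2uu^\top$.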

## Theoretical Analysis of LoRA

| Title & Authors | TL;DR | Links |
| --- | --- | --- |
| **Asymmetry in Low-Rank Adapters of Foundation Models**<br>Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi, Haitz Sáez de Ocáriz Borde, Rickard Brüel Gabrielsson, Leshem Choshen, Marzyeh Ghassemi, Mikhail Yurochkin, Justin Solomon | Tuning $B$ is more impactful than tuning $A$ (see the sketch below this table) | Github<br>Paper |
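
One way to read this result in code: keep $A$ as a frozen random projection and spend the trainable budget on $B$ alone. A minimal sketch under that assumption (not the paper's released code):

```python
import torch
import torch.nn as nn

class BOnlyLoRA(nn.Module):
    """Asymmetric LoRA sketch: A is a frozen random projection, only B is trained."""
    def __init__(self, d: int, r: int = 8):
        super().__init__()
        self.base = nn.Linear(d, d, bias=False)
        self.base.weight.requires_grad_(False)                   # W0 stays frozen
        self.register_buffer("A", torch.randn(r, d) / r ** 0.5)  # frozen random A (a buffer, not a Parameter)
        self.B = nn.Parameter(torch.zeros(d, r))                 # only B receives gradients

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.A.T @ self.B.T
```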
