Unlocking Tuning-Free Few-Shot Adaptability in Visual Foundation Models by Recycling Pre-Tuned LoRAs
Categories: cs.CV, cs.AI, cs.LG
Released Date: December 3, 2024
Authors: Zixuan Hu¹, Yongxian Wei², Li Shen³, Chun Yuan², Dacheng Tao¹
Affiliations: ¹Nanyang Technological University; ²Tsinghua University; ³Sun Yat-sen University

Few-shot classification accuracy (%) on four benchmarks, for fine-tuning-based (FT) and fine-tuning-free (FTF) methods:

| | Method | CIFAR-FS 1-shot | CIFAR-FS 5-shot | MiniImageNet 1-shot | MiniImageNet 5-shot | VGG-Flower 1-shot | VGG-Flower 5-shot | CUB 1-shot | CUB 5-shot |
|---|---|---|---|---|---|---|---|---|---|
| FT | Full Finetuning | 22.81 | 28.33 | 21.16 | 23.60 | 23.11 | 31.25 | 21.27 | 24.47 |
| | Linear-probe | 80.06 | 95.49 | 82.04 | 94.12 | 89.65 | 97.77 | 85.84 | 97.40 |
| | LoRA + Linear | 79.29 | 95.43 | 82.00 | 94.83 | 88.47 | 97.63 | 85.87 | 97.32 |
| | | 79.54 | 95.62 | 82.77 | 95.12 | 89.32 | 97.65 | 86.12 | 97.38 |
| | LoRAs Avg + Linear | 80.25 | 96.07 | 83.59 | 95.43 | 90.05 | 97.73 | 87.13 | 97.49 |
| | MOLE | 80.31 | 96.11 | 83.53 | 95.41 | 90.14 | 97.68 | 87.07 | 97.21 |
| | LoRAHub | 81.23 | 96.24 | 83.68 | 95.72 | 90.89 | 97.75 | 87.22 | 97.51 |
| FTF | NN | 78.06 | 94.09 | 81.08 | 93.85 | 89.75 | 97.78 | 85.11 | 96.09 |
| | LoRAs Avg + NN | 79.37 | 93.45 | 81.72 | 94.64 | 90.08 | 97.92 | 85.16 | 97.23 |
| | CMAL | 81.02 | 93.59 | 81.89 | 94.81 | 91.10 | 97.98 | 86.51 | 97.32 |
| | LoRA Recycle | 89.69 | 97.05 | 88.60 | 96.12 | 94.53 | 98.59 | 91.12 | 97.67 |
| | LoRA Recycle | 91.03 | 96.53 | 87.51 | 96.25 | 94.38 | 98.53 | 90.16 | 97.48 |
| | LoRA Recycle | 90.91 | 96.08 | 87.21 | 95.85 | 94.05 | 98.56 | 90.65 | 97.41 |
| | LoRA Recycle | 89.70 | 96.69 | 87.36 | 96.05 | 94.28 | 98.76 | 91.21 | 98.23 |