FedCoLLM: A Parameter-Efficient Federated Co-tuning Framework for Large and Small Language Models
Subjects: cs.CL, cs.AI
Release Date: November 18, 2024
Authors: Tao Fan¹, Yan Kang², Guoqiang Ma², Lixin Fan², Kai Chen¹, Qiang Yang¹
Affiliations: ¹Department of Computer Science and Engineering, HKUST, Hong Kong; ²WeBank, China
| Task  | Method      | S1: Server GPT-2-Large | S2: Server OPT-6.7B | S3: Server LLaMa2-7B |
|-------|-------------|------------------------|---------------------|----------------------|
| CQA   | Zero-Shot   | 36.3                   | 48.7                | 39.5                 |
| CQA   | Centralized | 54.7                   | 68.6                | 69.0                 |
| CQA   | FedCoLLM    | 53.5                   | 68.1                | 67.1                 |
| OBQA  | Zero-Shot   | 19.4                   | 27.6                | 31.8                 |
| OBQA  | Centralized | 28.2                   | 34.0                | 39.8                 |
| OBQA  | FedCoLLM    | 25.4                   | 34.4                | 37.8                 |
| ARC-C | Zero-Shot   | 21.7                   | 30.7                | 40.0                 |
| ARC-C | Centralized | 28.8                   | 37.1                | 49.0                 |
| ARC-C | FedCoLLM    | 27.4                   | 36.0                | 45.4                 |
| ARC-E | Zero-Shot   | 53.2                   | 65.6                | 69.3                 |
| ARC-E | Centralized | 59.5                   | 70.2                | 76.8                 |
| ARC-E | FedCoLLM    | 59.5                   | 69.3                | 75.2                 |