Federated Continual Learning for Edge-AI: A Comprehensive Survey
Subjects: cs.AI, cs.DC, cs.NI
Release Date: November 20, 2024
Authors: Zi Wang, Fei Wu, Feng Yu, Yurui Zhou, Jia Hu, Geyong Min
Affiliation: Department of Computer Science, Faculty of Environment, Science and Economy, University of Exeter, United Kingdom

| Approach | Paper | Key Contribution |
| --- | --- | --- |
| Generative Replay (Section 2.1) | (shenaj2023asynchronous) | Asynchronous FCL with class-prototype replay |
| | (hendryx2021federated) | Federated prototypical networks |
| | (qi2022better) | Model consolidation and consistency enforcement |
| | (babakniya2023don; babakniya2023data) | Compensates for the absence of old data via data-free generative replay |
| | (zhang2023target) | Generative replay and knowledge distillation within an exemplar-free continual learning framework |
| Parameter Regularization (Section 2.2) | (dong2022federated) | The first work to alleviate local and global forgetting in FCCL |
| | (dong2023no) | A category-balanced gradient-adaptive compensation loss and a category gradient-induced semantic distillation loss |
| | (dong2023federated) | The first global continual segmentation model for FISS |
| | (legate2023re) | Re-weights the softmax logits prior to computing the loss |
| | (hu2022federated) | A channel-attention neural network model and a federated aggregation algorithm based on the feature attention mechanism |
| | (yao2020continual) | An importance weight matrix for better initialization of federated models |
| Parameter Decomposition (Section 2.3) | (yoon2021federated) | Weighted inter-client transfer based on task-specific parameters |
| | (zhang2022cross) | Task-specific parameter aggregation and cross-edge strategies for initial decisions of federated models |
| | (luopan2023fedknow) | Knowledge extraction and gradient restoration based on weight-based pruning and gradient integration |
| Prompt-Based Methods (Section 2.4) | (halbe2023hepco) | A lightweight generation and distillation scheme to consolidate client models at the server based on prompting |
| | (bagwe2023fedcprompt) | Asynchronous prompt learning and a contrastive continual loss |
| | (liu2023federated) | A rehearsal-free FCL method based on prompting that accounts for privacy and limited memory |
| Knowledge Distillation (Section 2.5) | (usmanova2022federated; usmanova2021distillation) | The first work to extend LwF to the federated setting |
| | (ma2022continual) | A client division mechanism and server distillation with an unlabeled surrogate dataset |
| | (wei2022knowledge) | Overcoming server knowledge forgetting caused by data isolation |
| | (jin2024fl) | A sample-label-smoothing loss function leveraging KD to enhance local model memory |
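Several rows above build on the Learning-without-Forgetting (LwF) idea: a client trains on its current task with a standard cross-entropy loss while a distillation term keeps its softened outputs close to those of a frozen copy of the previous model. The sketch below illustrates that combined loss for a single example; it is a minimal, dependency-free illustration of the general LwF-style objective, not an implementation of any specific paper in the table, and the function names and default hyperparameters (`temperature=2.0`, `alpha=0.5`) are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def lwf_loss(new_logits, old_logits, label, temperature=2.0, alpha=0.5):
    """LwF-style objective: cross-entropy on the current task plus a
    distillation term that penalizes drift from the frozen old model.

    new_logits: outputs of the model being trained
    old_logits: outputs of the frozen pre-update model on the same input
    label:      ground-truth class index for the current task
    """
    # Cross-entropy on the current task's hard label.
    ce = -math.log(softmax(new_logits)[label])
    # Distillation: KL divergence from the old model's softened
    # distribution to the new model's, scaled by T^2 as in Hinton-style KD.
    p_old = softmax(old_logits, temperature)
    p_new = softmax(new_logits, temperature)
    kd = sum(po * math.log(po / pn) for po, pn in zip(p_old, p_new))
    kd *= temperature ** 2
    return alpha * ce + (1 - alpha) * kd
```

In a federated continual setting, each client would apply such a loss locally before its update is aggregated at the server; when the old and new models agree exactly, the distillation term vanishes and only the current-task loss remains.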