| Algorithms | Parameter for cooperation tuning | Personalization is performed at | Approach for mitigating poisoning attacks on local models |
|---|---|---|---|
| Clustering [ghosh2020efficient] | Number of clusters | Server | Perform fine-grained clustering by increasing the number of clusters, so that attackers are grouped into one category |
| Parameter decoupling [arivazhagan2019federated] | Number of globally shared parameters or layers | Server and client | Decrease the number of globally shared parameters and keep poisoned parameters local |
| Model interpolation [hanzely2020federated] | Proportion of the global model and local models in the mixture | Server and client | Decrease the proportion of the global model, so that benign local models are less affected by attacks |
| Multi-task learning [smith2017federated] | Coefficients of the linear combination that makes up the cloud model | Client | Build cloud models for benign participants using only other benign participants, based on model similarity measurements |
| Transfer learning [chen2020fedhealth] | Iterations of local model adaptation | Client | Perform more iterations of local adaptation to correct attacks inherited from the global model |
| Knowledge distillation [zhu2021data] | Threshold that controls the direction of knowledge flow | Server and client | Distill only useful knowledge and filter out attacks during knowledge distillation |
| Reward shaping [hu2021reward] | Proportion of the global model-based reward | Client | Decrease the proportion of the global model-based reward, so that benign local models are less affected by attacks |
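The sketches below illustrate, row by row, how each cooperation parameter in the table above can be tuned against poisoning; all function and variable names are illustrative assumptions, not code from the cited papers. For the clustering row, a minimal sketch with scikit-learn's KMeans: with a larger number of clusters, colluding attackers whose poisoned updates resemble one another tend to be isolated in a cluster of their own, and aggregation then proceeds per cluster.

```python
# Hypothetical sketch: fine-grained clustering of client model updates.
# Increasing n_clusters makes it more likely that similar poisoned
# updates end up isolated in their own cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.1, size=(8, 20))    # 8 benign client updates
poisoned = rng.normal(5.0, 0.1, size=(2, 20))  # 2 colluding attackers
updates = np.vstack([benign, poisoned])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(updates)

# Aggregate per cluster; the attackers' cluster no longer contaminates
# the benign clusters' averages.
cluster_models = {c: updates[labels == c].mean(axis=0)
                  for c in np.unique(labels)}
print(labels)  # the two attackers share one label
```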
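The parameter-decoupling row can be sketched as follows (the layer names and the `SHARED` set are hypothetical, not FedPer's actual code): only the layers listed in `SHARED` are averaged at the server, while the remaining personal layers never leave the client, so shrinking `SHARED` shrinks the surface that poisoned aggregates can reach.

```python
# Hypothetical sketch of parameter decoupling: only layers named in
# SHARED are federated; personal layers stay on-device.
import numpy as np

SHARED = {"base.w", "base.b"}  # fewer shared layers => smaller attack surface

def aggregate(client_models):
    """Average only the globally shared parameters across clients."""
    return {k: np.mean([m[k] for m in client_models], axis=0) for k in SHARED}

def merge(local_model, global_shared):
    """Overwrite shared layers with the global average; keep the rest local."""
    return {k: (global_shared[k] if k in SHARED else v)
            for k, v in local_model.items()}

clients = [{"base.w": np.ones(4) * i, "base.b": np.zeros(2),
            "head.w": np.full(3, i)} for i in range(3)]
global_shared = aggregate(clients)
personalized = [merge(m, global_shared) for m in clients]
```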
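The model-interpolation row reduces to a convex combination. In this illustrative sketch, lowering `alpha` (the proportion of the global model in the mixture) bounds how far a poisoned global model can pull a benign client's personalized model away from its local solution.

```python
# Hypothetical sketch of model interpolation: the personalized model is
# a convex mixture of the global and local models.
import numpy as np

def interpolate(w_global, w_local, alpha):
    """alpha = proportion of the global model in the mixture."""
    return alpha * w_global + (1.0 - alpha) * w_local

w_local = np.array([1.0, 1.0, 1.0])
w_poisoned_global = np.array([10.0, -10.0, 10.0])  # attacked aggregate

for alpha in (0.9, 0.5, 0.1):
    w = interpolate(w_poisoned_global, w_local, alpha)
    # smaller alpha => smaller deviation caused by the poisoned global model
    print(alpha, np.linalg.norm(w - w_local))
```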
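For the multi-task learning row, a sketch of the idea (the similarity-thresholding rule is an assumption for illustration, not MOCHA's actual solver): each participant's cloud model is a linear combination of peer models whose coefficients come from cosine similarity and are zeroed out for peers that look too dissimilar, so models far from the benign majority are excluded from the combination.

```python
# Hypothetical sketch: build each client's "cloud model" as a linear
# combination of sufficiently similar peer models only.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def cloud_model(models, i, sim_threshold=0.5):
    sims = np.array([cosine(models[i], m) for m in models])
    coeff = np.where(sims >= sim_threshold, sims, 0.0)  # drop dissimilar peers
    coeff /= coeff.sum()
    return coeff @ np.stack(models)                     # linear combination

benign = [np.array([1.0, 1.0]) + 0.05 * k for k in range(4)]
poisoned = [np.array([-5.0, 5.0])]
models = benign + poisoned
print(cloud_model(models, i=0))  # the attacker gets a zero coefficient
```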
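The transfer-learning row: initialize from the (possibly poisoned) global model and run extra local-adaptation iterations on clean local data. In this toy linear-regression sketch (illustrative only, not FedHealth's pipeline), more iterations pull the model back toward the local optimum and wash out the poisoned initialization.

```python
# Hypothetical sketch: more local-adaptation iterations correct a
# poisoned initialization inherited from the global model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true                             # clean local data

w = np.array([10.0, 10.0, 10.0])           # poisoned global initialization
for step in range(200):                    # local adaptation iterations
    grad = 2 * X.T @ (X @ w - y) / len(X)  # squared-error gradient
    w -= 0.1 * grad

print(np.round(w, 3))                      # ~= w_true after enough iterations
```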
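The knowledge-distillation row: a sketch of threshold-gated knowledge flow (the confidence-gating rule is an assumption for illustration, not the mechanism in [zhu2021data]). The local student only absorbs teacher predictions whose confidence clears a threshold; low-confidence, potentially poisoned soft labels are filtered out of the flow, and the student keeps its own predictions for those samples.

```python
# Hypothetical sketch: distill only teacher predictions whose confidence
# clears a threshold; everything else is filtered from the knowledge flow.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def filtered_distillation_targets(teacher_logits, tau=0.8):
    """Return (mask, soft_targets): only confident teacher outputs flow."""
    probs = softmax(teacher_logits)
    mask = probs.max(axis=1) >= tau  # tau gates the knowledge flow
    return mask, probs[mask]

teacher_logits = np.array([[4.0, 0.0, 0.0],   # confident  -> distilled
                           [0.4, 0.3, 0.3]])  # uncertain  -> filtered
mask, targets = filtered_distillation_targets(teacher_logits)
print(mask)  # [ True False]
```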
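Finally, the reward-shaping row: in this illustrative sketch (the additive decomposition is an assumption about the setup in [hu2021reward]), a client's training reward mixes a global-model-based term with a purely local term, and decreasing `beta` limits how much a poisoned global model can steer benign local training.

```python
# Hypothetical sketch of reward shaping: the total reward mixes a
# global-model-based term with a purely local term.
def shaped_reward(r_global, r_local, beta):
    """beta = proportion of the global-model-based reward."""
    return beta * r_global + (1.0 - beta) * r_local

r_local = 1.0    # reward from the client's own data and objective
r_global = -5.0  # misleading reward induced by a poisoned global model

for beta in (0.9, 0.5, 0.1):
    # smaller beta => the poisoned global signal contributes less
    print(beta, shaped_reward(r_global, r_local, beta))
```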