Interplay between Federated Learning and Explainable Artificial Intelligence: a Scoping Review
Subjects: cs.LG, cs.AI
Release Date: November 7, 2024
Authors: Luis M. Lopez-Ramos, Florian Leiser, Aditya Rastogi, Steven Hicks, Inga Strümke, Vince I. Madai, Tobias Budig, Ali Sunyaev, Adam Hilbert

| Technique | Description | Manifestations in sample |
| --- | --- | --- |
| Feature relevance | Calculate relevance scores for the model's variables. | e.g., SHAP [36, 41, 56], custom builds [55, 63] (see the sketch below) |
| Local explanations | Explain less complex subspaces of the solution space that are relevant to the whole model. | e.g., GradCAM [64, 52, 65], heatmaps [66] |
| Simplification | Simplify the model while maintaining performance. | e.g., simpler models [67, 60] |
| Text explanations | Generate symbols that explain the model's results. | - |
| Visual explanations | Visualize the model's inference process. | - |
| Explanations by example | Provide representative examples that give insight into the model. | - |
| Algorithmic transparency | Enable users to follow and understand the processes carried out by the model. | e.g., linear regression [68], decision trees [69, 70] |
| Decomposability | Explain each part of the model separately for full comprehension. | e.g., inference splits [71] |
| Simulatability | The model's inference can be simulated by a human. | e.g., rule-based systems [72] |
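
As an illustration of the feature-relevance technique in the first row, the sketch below computes SHAP relevance scores for a generic tree ensemble. This is a minimal example assuming the `shap` and `scikit-learn` packages are installed; the model and dataset are illustrative stand-ins, not artifacts from the reviewed papers.

```python
# Minimal feature-relevance sketch using SHAP (assumes the `shap` and
# `scikit-learn` packages; the model and data are illustrative only).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train any model; TreeExplainer is exact and fast for tree ensembles.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Per-feature relevance scores (SHAP values) for each individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])  # shape: (100, n_features)

# Global summary: features ranked by mean absolute SHAP value.
shap.summary_plot(shap_values, X.iloc[:100])
```

`TreeExplainer` is chosen here because it is exact for tree models; for other model classes, the generic `shap.Explainer` interface would select an appropriate estimation algorithm.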