Emerging Challenges and Future Directions in Federated Unlearning
DOI: https://doi.org/10.48001/jocsvl.2024.127-14

Keywords: Federated Learning (FL), Federated unlearning, Incentive mechanisms, Machine Learning (ML), Privacy protection

Abstract
Federated Unlearning (FU) is becoming increasingly important in Federated Learning (FL) for safeguarding privacy by enabling the removal of specific client data from trained models. This review delves into the key challenges and emerging directions in FU, particularly around incentive mechanisms, environmental sustainability, and its use in large foundation models. We explore how personalized incentives can help retain users and examine strategies to reduce energy consumption in unlearning processes, promoting greener AI. Additionally, we address the complexities of implementing FU in large-scale models, where partial retraining may cause unpredictable impacts on model performance. Tackling these issues is crucial for advancing FU solutions that are scalable, efficient, and sustainable, especially as FL expands across diverse applications.
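To make the core idea concrete, the sketch below shows the simplest form of federated unlearning: re-aggregating stored client updates with the forgotten client excluded. This is an illustrative toy under the assumption that the server retains per-client updates for one aggregation round; the function names (`fedavg`, `unlearn_client`) are hypothetical, and, as the abstract notes, realistic FU must additionally undo the forgotten client's influence on subsequent rounds, e.g. via partial retraining.

```python
def fedavg(updates, weights):
    """FedAvg-style aggregation: weighted average of client model updates.
    Each update is a flat list of parameters; weights are client data sizes."""
    total = sum(weights)
    dim = len(updates[0])
    return [sum(w * u[j] for u, w in zip(updates, weights)) / total
            for j in range(dim)]

def unlearn_client(updates, weights, forget_idx):
    """Naive one-round unlearning: re-aggregate the stored client updates
    with the forgotten client's contribution removed. Illustrative only --
    it does not account for the forgotten client's indirect influence on
    later training rounds."""
    kept_updates = [u for i, u in enumerate(updates) if i != forget_idx]
    kept_weights = [w for i, w in enumerate(weights) if i != forget_idx]
    return fedavg(kept_updates, kept_weights)
```

Even this toy version shows why FU is costly at scale: the server must either store every client's update or retrain from a checkpoint, which motivates the efficiency and sustainability concerns discussed above.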