Federated Machine Learning Performance Evaluation with Flower: MNIST vs CIFAR-10

Authors

  • Singanamalla Jaya Mohnish, Gandra Shiva Krishna, Kota Venkata Narayana, Jonnalagadda Surya Kiran, Thatavarti Satish, Inbarajan P.

Keywords:

Federated Learning, Flower Federated Framework, Machine Learning, TensorFlow, MNIST, CIFAR-10.

Abstract

This paper examines Federated Learning, an innovative approach to distributed machine learning that enables model training on decentralized data, eliminating the need for centralized aggregation. The primary objective is to conduct extensive experiments with image datasets to determine which training methodology, federated or non-federated, performs better. The investigation uses Flower, an open-source federated learning framework that integrates seamlessly with the TensorFlow machine learning platform. The core of the study is a comparative analysis of federated and non-federated configurations on the well-known MNIST and CIFAR-10 datasets, focusing on key performance metrics such as precision, recall, and F1 score. The experiment uses a federated setup with two clients, each holding a random 50% of the data from every dataset class; the clients are connected to a server where the global model is stored and updated. The non-federated setup uses a single client for model training and testing. Over various rounds and 25 to 50 epochs, the federated setup achieves 92%-96% accuracy on MNIST and around 45%-50% on CIFAR-10. In contrast, the non-federated setup achieves around 96% on MNIST and 85% on CIFAR-10. These results represent a brief experimentation phase with a small number of clients, and similar behavior is expected for larger client counts. However, it is important to note that the field of federated learning is evolving rapidly, and future advances may narrow, if not close, the existing performance gap. The empirical analysis concludes that, at present, traditional setups outperform federated configurations.
Nevertheless, the evolving nature of federated learning warrants continued investigation and experimentation, as it holds significant potential to transform the landscape of distributed machine learning in the near future.
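The server-side aggregation step described in the abstract can be sketched without the Flower framework. The following is a minimal, framework-free illustration of FedAvg-style weighted averaging over two clients; the function name `fedavg` and the toy parameter vectors are illustrative assumptions, not code from the paper (in the actual experiments, Flower and TensorFlow handle this exchange).

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters.

    client_weights: list of flat parameter lists, one per client.
    client_sizes: number of training samples held by each client.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n / total for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

# Two clients, each holding 50% of the data, as in the paper's setup.
client_a = [0.2, 0.4, 0.6]  # parameters after local training on client A
client_b = [0.4, 0.2, 0.8]  # parameters after local training on client B

# Equal-sized clients, so this reduces to a plain per-parameter mean.
global_model = fedavg([client_a, client_b], client_sizes=[100, 100])
```

In each communication round, the server distributes `global_model` back to both clients, which train locally and return updated parameters for the next aggregation.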


References

Chan, Y. H., & Ngai, E. C. (2021). FedHe: Heterogeneous models and communication-efficient federated learning. 2021 17th International Conference on Mobility, Sensing and Networking (MSN). https://doi.org/10.1109/msn53354.2021.00043

Doon, R., Kumar Rawat, T., & Gautam, S. (2018). CIFAR-10 classification using deep convolutional neural network. 2018 IEEE Punecon. https://doi.org/10.1109/punecon.2018.8745428

Ghosh, S. (2022). Comparative analysis of boosting algorithms over MNIST handwritten digit dataset. Evolutionary Computing and Mobile Sustainable Networks, 985-995. https://doi.org/10.1007/978-981-16-

Grafberger, A., Chadha, M., Jindal, A., Gu, J., & Gerndt, M. (2021). FedLess: Secure and scalable federated learning using serverless computing. 2021 IEEE International Conference on Big Data (BigData). https://doi.org/10.1109/bigdata52589.2021.9672067

Ilias, C., & Georgios, S. (2019). Machine learning for all: A more robust federated learning framework. Proceedings of the 5th International Conference on Information Systems Security and Privacy. https://doi.org/10.5220/0007571705440551

Kar, B., Yahya, W., Lin, Y., & Ali, A. (2023). Offloading using traditional optimization and machine learning in federated cloud-edge-fog systems: A survey. IEEE Communications Surveys & Tutorials, 25(2), 1199-1226. https://doi.org/10.1109/comst.2023.3239579

Lee, C., & Lee, W. (2022). Participant selection scheme of federated learning in non-IID data distribution environment. The Journal of Next Generation Convergence Technology Association, 6(11), 2063-2075. https://doi.org/10.33097/jncta.2022.06.11.2063

Li, K. H., De Gusmão, P. P., Beutel, D. J., & Lane, N. D. (2021). Secure aggregation for federated learning in Flower. Proceedings of the 2nd ACM International Workshop on Distributed Machine Learning. https://doi.org/10.1145/3488659.3493776

Lin, S., Zhou, Z., Zhang, Z., Chen, X., & Zhang, J. (2021). Edge intelligence via federated meta-learning. Edge Intelligence in the Making, 53-79. https://doi.org/10.1007/978-3-031-02380-4_3

Mahanan, W., Chaovalitwongse, W. A., & Natwichai, J. (2021). Data privacy preservation algorithm with K-anonymity. World Wide Web, 24(5), 1551-1561. https://doi.org/10.1007/s11280-021-00922-2

Ram Mohan Rao, P., Murali Krishna, S., & Siva Kumar, A. P. (2018). Privacy preservation techniques in big data analytics: A survey. Journal of Big Data, 5(1). https://doi.org/10.1186/s40537-018-0141-8

Sun, G., Cong, Y., Dong, J., Wang, Q., Lyu, L., & Liu, J. (2022). Data poisoning attacks on federated machine learning. IEEE Internet of Things Journal, 9(13), 11365-11375. https://doi.org/10.1109/jiot.2021.3128646

Sandholm, T., Mukherjee, S., & Huberman, B. A. (2022). Demo - SPoKE. Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security. https://doi.org/10.1145/3548606.3563701

Tsuji, K., Imai, S., Takao, R., Kimura, T., Kondo, H., & Kamiya, Y. (2021). A machine sound monitoring for predictive maintenance focusing on very low frequency band. SICE Journal of Control, Measurement, and System Integration, 14(1), 27-38. https://doi.org/10.1080/18824889.2020.1863611

Turina, V., Zhang, Z., Esposito, F., & Matta, I. (2020). Combining split and federated architectures for efficiency and privacy in deep learning. Proceedings of the 16th International Conference on emerging Networking Experiments and Technologies. https://doi.org/10.1145/3386367.3431678

Wahab, O. A., Mourad, A., Otrok, H., & Taleb, T. (2021). Federated machine learning: Survey, multi-level classification, desirable criteria and future directions in communication and networking systems. IEEE Communications Surveys & Tutorials, 23(2), 1342-1397. https://doi.org/10.1109/comst.2021.3058573

Wang, P., Fan, E., & Wang, P. (2021). Comparative analysis of image classification algorithms based on traditional machine learning and deep learning. Pattern Recognition Letters, 141, 61-67. https://doi.org/10.1016/j.patrec.2020.07.042

Yang, Q., Liu, Y., Chen, T., & Tong, Y. (2019). Federated machine learning. ACM Transactions on Intelligent Systems and Technology, 10(2), 1-19. https://doi.org/10.1145/3298981


Published

24.03.2024

How to Cite

Singanamalla, J. M., Gandra, S. K., Kota, V. N., Jonnalagadda, S. K., Thatavarti, S., & Inbarajan, P. (2024). Federated Machine Learning Performance Evaluation with Flower: MNIST vs CIFAR-10. International Journal of Intelligent Systems and Applications in Engineering, 12(3), 2535-2544. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/5725

Issue

Section

Research Article