Author(s)
Dr. M. Munafur Hussaina, Ms. K. Rumana
- Manuscript ID: 140067
- Volume: 2
- Issue: 1
- Pages: 417–430
- Subject Area: Computer Science
Abstract
With the expansion of distributed network systems and the increasing demand for efficient network resource management, traditional centralized optimization approaches face significant challenges, particularly regarding data privacy and scalability. Federated learning (FL), an emerging paradigm in machine learning, enables collaborative model training across multiple decentralized nodes while ensuring that sensitive data remains local, thereby preserving user privacy. This paper explores the application of federated learning techniques for network optimization in distributed environments, focusing on maintaining optimal network performance without compromising privacy. We investigate the integration of FL with network intelligence frameworks to enable adaptive and efficient resource allocation, congestion control, and fault management across heterogeneous network nodes. Our study includes a comprehensive review of existing federated learning algorithms and their suitability for network scenarios, followed by the design of a novel FL-based optimization model tailored to dynamic and large-scale networks. Experimental results demonstrate that the proposed model achieves significant improvements in network throughput, latency reduction, and resilience against privacy attacks compared to traditional centralized and decentralized optimization methods. Furthermore, we address the challenges related to communication overhead, model convergence, and heterogeneity of network devices in FL deployment. This research highlights the potential of federated learning as a privacy-preserving solution for next-generation intelligent network management, paving the way for scalable and adaptive network optimization in future distributed systems.
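
To illustrate the collaborative training pattern the abstract describes, the sketch below shows a generic federated averaging (FedAvg-style) loop, not the paper's specific optimization model. Each node trains on its private local data and only model weights are shared with the aggregator. The linear model, synthetic node datasets, learning rate, and round count are illustrative assumptions.

```python
# Minimal FedAvg-style sketch: nodes train locally on private data;
# only model weights (never raw data) are sent to the aggregator.
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One node's local training: a few epochs of gradient descent on local data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient for a linear model
        w -= lr * grad
    return w

def federated_average(local_weights, sample_counts):
    """Aggregate node models, weighting each by its local sample count."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

# Illustrative setup: three nodes, each holding a private synthetic dataset.
rng = np.random.default_rng(0)
dim = 4
global_w = np.zeros(dim)
nodes = [(rng.normal(size=(50, dim)), rng.normal(size=50)) for _ in range(3)]

for round_idx in range(10):                      # communication rounds
    updates = [local_update(global_w, X, y) for X, y in nodes]
    global_w = federated_average(updates, [len(y) for _, y in nodes])
```

In a network-optimization setting, the local datasets would instead be per-node telemetry (e.g., traffic or congestion measurements), and the aggregation step is where communication overhead and device heterogeneity, discussed in the abstract, become the practical constraints.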