CSTAR-FL: Stochastic Client Selection for Tree All-Reduce Federated Learning
Abstract
Federated Learning (FL) is widely applied in privacy-sensitive domains such as healthcare, finance, and education because of its privacy-preserving properties. However, deploying FL over dynamic wireless networks poses substantial communication challenges: communication strategies must adapt to fluctuating network conditions and a growing number of participating devices, which can otherwise lead to unacceptable communication delays. In this article, we propose Stochastic Client Selection for Tree All-Reduce Federated Learning (CSTAR-FL), a novel approach that combines a probabilistic User Device (UD) selection strategy with a tree-based communication architecture to improve communication efficiency in FL over densely populated wireless networks. By optimizing UD selection for effective model aggregation and employing an efficient data-transmission structure, CSTAR-FL significantly reduces communication time and improves FL efficiency. Moreover, our approach maintains high global model accuracy even when data is distributed heterogeneously across UDs. Extensive simulations in dynamic wireless network scenarios demonstrate that CSTAR-FL outperforms existing state-of-the-art methods, reducing model convergence time by up to 40% without degrading global model accuracy. These results make CSTAR-FL a robust solution for efficient and scalable FL deployment in high-density environments.
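The abstract does not specify the exact selection probabilities or tree topology, but the two building blocks it names can be illustrated with a minimal sketch: clients are sampled with probability proportional to a per-device score, and their model updates are averaged level by level up a binary reduction tree in O(log n) steps rather than uploaded one by one to a central server. All names and parameters below (select_clients, tree_aggregate, the link-quality scores) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def select_clients(scores, k, rng):
    """Stochastic client selection: sample k client indices without
    replacement, with probability proportional to each client's score
    (hypothetical stand-in for, e.g., link quality or data utility)."""
    p = np.asarray(scores, dtype=float)
    p /= p.sum()
    return rng.choice(len(scores), size=k, replace=False, p=p)

def tree_aggregate(updates):
    """Tree all-reduce style aggregation: pairwise weighted averaging of
    model updates, one tree level per round, yielding the global mean in
    O(log n) rounds instead of n sequential uploads."""
    vecs = [np.asarray(u, dtype=float) for u in updates]
    weights = [1.0] * len(vecs)
    while len(vecs) > 1:
        next_vecs, next_weights = [], []
        for i in range(0, len(vecs) - 1, 2):
            w = weights[i] + weights[i + 1]
            next_vecs.append((weights[i] * vecs[i] + weights[i + 1] * vecs[i + 1]) / w)
            next_weights.append(w)
        if len(vecs) % 2:  # odd node carries over to the next tree level
            next_vecs.append(vecs[-1])
            next_weights.append(weights[-1])
        vecs, weights = next_vecs, next_weights
    return vecs[0]

rng = np.random.default_rng(0)
scores = [0.9, 0.4, 0.7, 0.2, 0.8, 0.5]          # hypothetical per-UD scores
chosen = select_clients(scores, k=4, rng=rng)
updates = [rng.normal(size=8) for _ in chosen]   # stand-in model updates
global_update = tree_aggregate(updates)          # equals the plain mean here
```

With equal initial weights, the pairwise tree reduction reproduces the ordinary FedAvg mean of the selected updates; the benefit is that each round of the tree can run in parallel across device pairs, which is where the communication-time savings the abstract reports would come from.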
Type
Publication
IEEE Transactions on Mobile Computing