Privacy-Aware Federated Learning with Adaptive User Selection

Edge AI, Distributed Networks, Privacy-Sensitive AI Systems
Federated Learning, Privacy-Preserving ML, Edge Computing, Distributed Systems


Project Overview

Federated learning (FL) enables multiple edge devices to collaboratively train a machine learning model without sharing potentially private data. FL proceeds through iterative exchanges of model updates, which pose two key challenges: (i) the accumulation of privacy leakage over time and (ii) communication latency. These two limitations are typically addressed separately: (i) via perturbed updates to enhance privacy and (ii) via user selection to mitigate latency, both at the expense of accuracy. In this work, we propose a method that jointly addresses the accumulation of privacy leakage and communication latency via active user selection, aiming to improve the trade-off among privacy, latency, and model performance. To achieve this, we construct a reward function that accounts for these three objectives. Building on this reward, we propose a multi-armed bandit (MAB)-based algorithm, termed privacy-aware active user selection (PAUSE), which dynamically selects a subset of users each round while ensuring bounded overall privacy leakage. We establish a theoretical analysis, systematically showing that the regret growth rate of PAUSE matches the best-known rate in the MAB literature. To address the complexity overhead of active user selection, we propose a simulated annealing-based relaxation of PAUSE and analyze its ability to approximate the reward-maximizing policy at reduced complexity. We numerically validate the privacy leakage, improved latency, and accuracy gains of our methods for federated training in various scenarios.
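To make the selection mechanism concrete, here is a minimal sketch of one round of a PAUSE-style bandit selection in Python. It is an illustration under stated assumptions, not the paper's exact algorithm: the UCB-style exploration bonus, the constant `c`, and the per-round privacy cost `eps_cost` are assumptions standing in for the reward function and privacy accounting defined in the paper.

```python
import numpy as np

def pause_round(counts, means, t, budgets, eps_cost, m, c=2.0):
    """One selection round of a PAUSE-style bandit (illustrative sketch).

    counts   : times each user has been selected so far
    means    : empirical per-user reward (e.g., accuracy gain vs. latency)
    t        : current round index (1-based)
    budgets  : remaining privacy budget per user
    eps_cost : privacy budget spent per participation (assumed constant here)
    m        : number of users to select this round
    """
    # UCB score: exploit the empirical reward, explore rarely-chosen users.
    ucb = means + np.sqrt(c * np.log(t) / np.maximum(counts, 1))
    ucb = ucb.astype(float)
    ucb[counts == 0] = np.inf          # force at least one pull per user
    ucb[budgets < eps_cost] = -np.inf  # exclude privacy-exhausted users
    chosen = np.argsort(ucb)[-m:]      # top-m users by score
    budgets[chosen] -= eps_cost        # charge the privacy budget
    counts[chosen] += 1
    return chosen
```

The key point the sketch conveys is the coupling: the same selection step that trades exploration against exploitation also enforces the per-user privacy budget, so overall leakage stays bounded by construction.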

The Challenge

Federated learning systems must balance model accuracy, communication latency, and privacy across distributed devices. Involving all users in training increases communication cost and can expose more private information. Additionally, selecting participants without a principled strategy leads to inefficient training and unstable performance.

Our Solution

A dynamic selection method that chooses which users participate in each federated learning round while controlling privacy leakage and communication latency. The approach jointly optimizes user selection and learning performance over multiple training rounds, and it comes with theoretical guarantees on the balance among accuracy, efficiency, and privacy.
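The abstract also mentions a simulated annealing-based relaxation that approximates the reward-maximizing user subset at reduced complexity. The sketch below shows the general simulated-annealing idea over m-user subsets; the reward function, cooling schedule, and swap-based neighborhood are illustrative assumptions, not the paper's exact construction.

```python
import math
import random

def sa_select(reward_fn, n_users, m, iters=200, T0=1.0, seed=0):
    """Approximate the reward-maximizing m-user subset by simulated
    annealing over single-user swaps (sketch; requires m < n_users)."""
    rng = random.Random(seed)
    current = set(rng.sample(range(n_users), m))
    cur_r = reward_fn(current)
    best, best_r = set(current), cur_r
    for k in range(1, iters + 1):
        T = T0 / k  # simple 1/k cooling schedule (an assumption)
        # Neighbor: swap one selected user for one unselected user.
        out_u = rng.choice(sorted(current))
        in_u = rng.choice(sorted(set(range(n_users)) - current))
        cand = (current - {out_u}) | {in_u}
        r = reward_fn(cand)
        # Accept improvements always; accept worse moves with
        # probability exp((r - cur_r) / T), which shrinks as T cools.
        if r > cur_r or rng.random() < math.exp((r - cur_r) / T):
            current, cur_r = cand, r
            if r > best_r:
                best, best_r = set(cand), r
    return best, best_r
```

Because each step evaluates only one candidate subset instead of scoring all C(n, m) possibilities, the per-round cost stays linear in the iteration budget, which is the complexity reduction the relaxation targets.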

Technology Stack

Python

Results That Matter

Measurable impact delivered through innovative AI solutions

Goal 1: Bounded privacy leakage

Goal 2: Better accuracy–latency balance


Project Gallery

Visual highlights from the implementation


Ready to Transform Your Business?

Let's discuss how we can create a custom AI solution that delivers measurable results for your organization.