Blog
July 25, 2023
15 mins

Defending against attacks on federated learning using secure aggregation

Federated learning is one of the most proven privacy enhancing technologies used to enable collaborative research projects across multiple data sources or collaborators. As adoption of this new technology grows, so do questions about the limits of its actual privacy guarantees. Owkin experts have explored how we can make federated learning even more secure by adding more layers of privacy enhancing technologies and testing their effectiveness against cyberattacks.

Common attacks on federated learning projects

In 2023, healthcare became the third most targeted industry for cyberattacks. Healthcare data are highly sensitive, and the sector is considered an easier target than many others. A recent survey found that 73% of healthcare provider organizations use legacy IT systems - which often have gaps in security - alongside other issues in how databases and devices are managed.

Recent developments in privacy enhancing technologies such as federated learning (FL) can provide more secure ways to do research on patient data, but they must be thoroughly tested. Although FL is privacy enhancing by design, as machine learning models are trained across multiple data holders without centralizing data points, cyberattackers could theoretically observe the model updates being exchanged and learn some information about the underlying data. To understand how to defend against potential attacks, researchers first explore how security vulnerabilities could arise in a variety of settings. There is a well-established scientific literature on FL attacks, which broadly fall into four categories:

Property attack

The goal of property attacks is to infer meta characteristics of other participants’ training data. Cyberattackers could theoretically exploit vulnerabilities to understand specific data properties such as demographic information or medical conditions.

Membership attack 

The goal of a membership attack is to determine whether a specific data sample was used to train the network. Cyberattackers could theoretically exploit differences in the model’s behavior when trained on certain data points to identify whether a particular data sample was included in the training dataset or not. 
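
As a rough illustration of the idea (a toy heuristic, not a description of a specific attack, and assuming a hypothetical model object with a scikit-learn-style predict_proba method), membership inference is often sketched as a simple loss-thresholding test:

```python
import numpy as np

def per_sample_loss(model, x, y):
    """Hypothetical helper: cross-entropy loss of the model's prediction for one sample.
    `model` is assumed to expose a scikit-learn-style predict_proba method."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    return -np.log(probs[y] + 1e-12)

def guess_membership(model, x, y, threshold=0.5):
    """Loss-thresholding heuristic: samples seen during training tend to incur
    a lower loss than unseen ones, so a loss below a calibrated threshold is
    taken as (weak) evidence that the sample was in the training set."""
    return per_sample_loss(model, x, y) < threshold
```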

Reconstruction attack

The goal of a reconstruction attack is to reconstruct or obtain the data samples used for training. Cyberattackers could theoretically analyze changes made to the model and then try to reconstruct individual training samples by leveraging the model updates received during the training process.

Reattribution attack

The goal of a reattribution attack is to determine which participant contributed a specific piece of data or update. Cyberattackers could theoretically exploit patterns in the updates exchanged during the training process to expose sensitive information about individual participants - potentially re-identifying patients.

What is data reconstruction?

Data reconstruction is the process of restoring incomplete data back to its original state using computational techniques. It’s often used to estimate missing values, recover damaged data or enhance image quality. Cyberattackers may use these techniques to attempt to rebuild the original data samples based on incomplete information exchanged during the model training process.

How to defend against attacks on federated learning projects

Fig 1: Secure multiparty computation enables model training on distributed data without any individual participant being able to see the others’ data.

What is differential privacy?

Differential privacy is a methodology that aims to protect the confidentiality of individual data points while allowing for model training on the aggregated data. It introduces controlled noise or randomness, which makes it more difficult to identify specific information about any individual.

However, it can be challenging to use in practice when doing research on healthcare data in real world settings. Adding too much noise or randomness to the data used for training can negatively impact model performance, making it difficult to draw conclusions that are not confounded by the added noise.
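
As a rough sketch of the mechanism and its trade-off (illustrative values only, not Owkin's implementation), the snippet below shows the common pattern of clipping a model update and adding Gaussian noise calibrated to that clipping bound:

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1):
    """Clip an update to bound each participant's influence, then add Gaussian
    noise scaled to that bound (the DP-SGD-style Gaussian mechanism).
    More noise means stronger privacy but lower model utility."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Illustrative usage: a fake gradient vector from one training step.
gradient = rng.normal(size=10)
print(privatize_update(gradient))
```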

What is a cryptographic protocol?

A cryptographic protocol enables the secure exchange of information through a set of rules and procedures to protect data confidentiality and integrity. There are a wide range of cryptographic protocols.

A range of privacy enhancing techniques such as secure multiparty computation, differential privacy and secure aggregation can help mitigate the risks of cyberattacks on federated learning projects. Secure multiparty computation enables data scientists to perform research on distributed data without exposing or moving it from the server on which it is hosted. 

Secure aggregation is a cryptographic protocol that hides an individual center’s contributions without negatively impacting model performance. As the individual model updates are masked during the training process, the aggregator cannot learn anything from them.

After a round of computation is complete, the final results of model training are shared without the collaborators ever accessing the individual data points contributed by others.  This virtually increases the number of accumulated gradients in one update, which makes gradient and update-based attacks more difficult and prevents reconstructed samples from being attributed to specific collaborators. By hiding individual contributors in this way, secure aggregation makes it extremely difficult for attackers to be able to reconstruct the underlying data. 
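
To make the idea concrete, here is a minimal sketch, under simplified assumptions (honest participants, no dropouts, plain pseudorandom masks rather than a full key-agreement protocol), of the pairwise-masking trick that underpins secure aggregation: each pair of centers shares a random mask that one adds and the other subtracts, so individual updates are hidden while the masks cancel in the sum.

```python
import numpy as np

rng = np.random.default_rng(42)
n_centers, dim = 3, 5

# Each center's true (private) model update.
true_updates = [rng.normal(size=dim) for _ in range(n_centers)]

# Pairwise random masks: center i adds mask (i, j), center j subtracts it,
# so the masks cancel out when all masked updates are summed.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_centers) for j in range(i + 1, n_centers)}

masked_updates = []
for i in range(n_centers):
    masked = true_updates[i].copy()
    for (a, b), m in masks.items():
        if a == i:
            masked += m
        elif b == i:
            masked -= m
    masked_updates.append(masked)

# The aggregator only ever sees masked updates, yet their sum equals the true sum.
assert np.allclose(sum(masked_updates), sum(true_updates))
```

In real deployments, the pairwise masks are derived from key agreement between participants and combined with secret sharing so the protocol can tolerate participants dropping out mid-round.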

Until recently, it was commonly thought that secure aggregation prevented linking data samples back to their respective sources. However, as a leader in the field, Owkin is always pushing boundaries and asking: how can we make federated learning more secure?

Testing the limits of federated learning using secure aggregation

Owkin experts tested various attacks relying only on aggregated models, using an ‘honest-but-curious’ approach. For the first time, we demonstrated that the nature of federated learning updates does allow one to link samples, despite the use of secure aggregation, assuming the model has a fully connected first layer and that cross-silo federated learning is used - which is the setting under which all multicentric healthcare projects take place.

We discovered an unforeseen attack, which we named SRATTA (Sample Re-ATTribution Attack of Secure Aggregation in Federated Learning). Under realistic assumptions, SRATTA can recover data samples from different sources and group data samples coming from the same source. Sample recovery is a specific type of reconstruction attack that has already been explored in a federated learning setting in the scientific literature. But the ability to group samples (despite the use of secure aggregation) is novel.

Although, mathematically speaking, secure aggregation ensures that you cannot tell whether a contribution comes from a single center, it is still possible to cluster samples from the same center and make an educated guess about where they come from.

What is honest-but-curious?

In cryptography, an ‘honest-but-curious’ cyberattacker is a participant in the machine learning process who follows the protocol, but wants to learn as much as possible from the data they’re theoretically allowed to see during the protocol. They post-process this data as much as possible to dig for information that other participants may not have thought was reachable. This is in contrast to a ‘malicious’ cyberattacker, who seeks to gain unauthorized access to private data for personal gain or malicious intent. The key difference lies in their intentions; in both cases, it is critical to implement privacy enhancing measures to protect confidential data from potential cyberattackers.

Fig 2: How to recover samples from federated learning updates.

SRATTA makes it possible for a cyberattacker to cluster samples that come from the same center. Thanks to secure aggregation, it is still impossible to directly reattribute the center’s identity, but clustering allows a cyberattacker to make educated guesses based on assumptions about the clustered data.
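
To illustrate the general principle behind recovering samples from updates to a fully connected first layer (a single-sample toy case with an illustrative loss, not the full SRATTA procedure), note that for a layer y = Wx + b the weight gradient of each row is the input scaled by the corresponding bias gradient, so dividing one by the other recovers the input:

```python
import numpy as np

rng = np.random.default_rng(7)
dim_in, dim_out = 5, 3

# Toy setting: a fully connected first layer y = W x + b and a single sample x.
W, b = rng.normal(size=(dim_out, dim_in)), rng.normal(size=dim_out)
x = rng.normal(size=dim_in)
y = W @ x + b

# For any differentiable loss L(y): dL/dW[i] = dL/dy[i] * x and dL/db[i] = dL/dy[i].
# Here we use a simple illustrative loss L = 0.5 * ||y||^2, so dL/dy = y.
grad_y = y
grad_W = np.outer(grad_y, x)
grad_b = grad_y

# Any row with a non-zero bias gradient lets us recover the input exactly.
i = int(np.argmax(np.abs(grad_b)))
recovered_x = grad_W[i] / grad_b[i]
assert np.allclose(recovered_x, x)
```

SRATTA builds on this kind of observation and, in addition, shows how recovered samples can be grouped by source despite secure aggregation.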

How to protect sensitive data from attacks in federated learning

A cyberattacker could theoretically use SRATTA only under three specific conditions:

  • The machine learning model must start with a fully connected layer
  • The number of local updates performed before each aggregation must be greater than 2
  • The cyberattacker needs to have a precise idea of the data shape

By creating specific research settings that avoid these conditions, it’s possible to prevent the possibility of this attack from occurring in the first place. Understanding SRATTA also makes it possible for individual data collaborators to proactively put additional measures in place to protect their data. 

Data collaborators can defend against SRATTA by tracking how many samples activate each neuron. If too few samples (fewer than the hyperparameter q in Fig 3 below) intervene in the update for a given neuron, that neuron’s update is censored (its weight update is set to 0), which effectively excludes it from that round of training. We have demonstrated in our paper that this strategy does not decrease model performance.

Fig 3: Neuron tracking can prevent attacks against secure aggregation in federated learning.
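
Below is a minimal sketch of what such a counter-measure could look like on the client side (an illustrative simplification, not the exact implementation from the paper, assuming a ReLU first layer where ‘activating’ means a strictly positive pre-activation):

```python
import numpy as np

def censor_first_layer_update(weight_grad, bias_grad, pre_activations, q=5):
    """Count how many samples activate each neuron of the first (ReLU) layer
    and zero out the update of any neuron activated by fewer than q samples,
    so its gradient cannot be used to recover individual inputs.

    weight_grad: (n_neurons, dim_in) aggregated gradient for the layer's weights
    bias_grad: (n_neurons,) aggregated gradient for the layer's biases
    pre_activations: (n_samples, n_neurons) pre-activations of the local batch
    """
    active_counts = (pre_activations > 0).sum(axis=0)  # samples per neuron
    censored = active_counts < q                        # too few contributors
    weight_grad = weight_grad.copy()
    bias_grad = bias_grad.copy()
    weight_grad[censored] = 0.0
    bias_grad[censored] = 0.0
    return weight_grad, bias_grad
```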

At Owkin, we continue to explore the future of federation in real healthcare settings, while securing patient privacy and confidentiality through continuous security testing and experimentation. 

Learn more about SRATTA at ICML 2023

We would like to thank all participants in the SRATTA project for their contributions:

Authors
Tanguy Marchand
Régis Loeb
Ulysse Marteau-Ferey
Jean Ogier du Terrail
Arthur Pignet
Liz Allen