Poisoning Attacks & Robust Aggregation

A data poisoning attack is one in which an attacker corrupts the training data to cause undesirable behaviour in the trained model. This behaviour can take many forms, such as inserting backdoors, degrading overall accuracy, or misclassifying specific inputs.
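
As a toy illustration (not an attack from the papers below), the sketch assumes a simple label-flipping poisoner that relabels a small fraction of training examples to an attacker-chosen class; the names `flip_labels` and `poison_fraction` are purely illustrative.

```python
import numpy as np

def flip_labels(y, poison_fraction=0.1, target_label=0, rng=None):
    """Illustrative label-flipping poisoning: relabel a random fraction
    of the training labels to a single attacker-chosen class."""
    rng = rng or np.random.default_rng(0)
    y_poisoned = y.copy()
    n_poison = int(poison_fraction * len(y))
    idx = rng.choice(len(y), size=n_poison, replace=False)
    y_poisoned[idx] = target_label
    return y_poisoned

# Example: poison 10% of a toy 10-class label vector.
y_clean = np.random.default_rng(1).integers(0, 10, size=1000)
y_bad = flip_labels(y_clean, poison_fraction=0.1, target_label=3)
print((y_clean != y_bad).sum(), "labels changed")
```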

We were the first to show the practical impact of these attacks in federated learning, in 2016. Since then, there have been many attempts to find optimal defenses against poisoning attacks. In our recent work, we show that optimal solutions for filtering outliers in training data, such as Byzantine robust aggregation, have intractable running times for practical ML models.
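
As a rough illustration of the idea behind robust aggregation (a deliberate simplification, not one of the aggregators analysed in the S&P 2024 paper), the sketch below contrasts plain averaging of client updates with the coordinate-wise median: a single corrupted update can drag the average arbitrarily far, while the median stays close to the honest updates.

```python
import numpy as np

def mean_aggregate(updates):
    """Plain averaging: one corrupted update can shift every
    coordinate of the aggregate arbitrarily far."""
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    """A simple robust aggregator: the coordinate-wise median limits
    the influence of a minority of malicious clients."""
    return np.median(updates, axis=0)

rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(9, 5))   # 9 honest client updates
malicious = np.full((1, 5), 100.0)           # 1 poisoned update
updates = np.vstack([honest, malicious])

print("mean:  ", mean_aggregate(updates))    # pulled toward the attacker
print("median:", coordinate_median(updates)) # stays near the honest updates
```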

Relevant Publications
Attacking Byzantine Robust Aggregation in High Dimensions
Sarthak Choudhary*, Aashish Kolluri*, Prateek Saxena
IEEE Symposium on Security and Privacy (S&P 2024). Oakland, CA, May 2024.
AUROR: Defending Against Poisoning Attacks in Collaborative Deep Learning Systems
Shiqi Shen, Shruti Tople, Prateek Saxena
Annual Computer Security Applications Conference (ACSAC 2016). Los Angeles, CA, Dec 2016.