MAPS
Machine learning and Algorithms for Practical Security

Project team:
Teodora Baluta,
Aashish Kolluri,
Kareem Shehata,
Ivica Nikolić,
Louise Xu,
Mallika Prabhakar
Machine learning tools have become widely accessible over the past decade, but their security remains an ongoing challenge. OWASP has summarized the ‘Top 10’ practical problems in machine learning (ML) security. However, research in each sub-problem is an ongoing race between attacks and defenses. Does this cat-and-mouse race have an end? Are there optimal defense strategies such that all attacks bounded by certain costs become impractical?
The MAPS project aims to answer these questions in a principled manner by identifying the inherent limitations of current schemes and drawing from cryptographically hard problems to establish robust security guarantees. Specifically, we study four main areas:
- the practical impact of data poisoning attacks in federated settings and the computational limitations of robust aggregation defenses against such attacks;
- watermarking schemes for AI-generated content that are provably secure against all possible attacks;
- defenses against model inversion attacks, including a cryptographic primitive that prevents the recovery of sensitive inputs;
- verification of desired properties of ML systems, and practical differential privacy in federated networks and graph neural networks (GNNs).
MAPS is a project under the KISP NUS Lab.
Poisoning Attacks & Robust Aggregation
Data poisoning is an integrity attack on ML model training, wherein certain corrupted data samples force the trained model to surreptitiously contain backdoors, misclassify certain inputs, and more. We were the first to show the practical impact of these attacks in federated learning in 2016. A generic defense for various... [Read More]
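As a toy illustration of why the choice of aggregator matters, the sketch below (a minimal example assuming NumPy, not the attack or defense from our papers) contrasts naive averaging of client updates with a coordinate-wise median: a single malicious client can drag the mean arbitrarily far, while the median stays near the honest updates.

```python
import numpy as np

def aggregate_mean(updates):
    """Naive aggregation: one poisoned update can shift the result arbitrarily."""
    return np.mean(updates, axis=0)

def aggregate_median(updates):
    """Coordinate-wise median: robust to a minority of corrupted updates."""
    return np.median(updates, axis=0)

rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(9, 4))  # 9 honest client updates
poisoned = np.full((1, 4), -100.0)                    # 1 malicious client update
updates = np.vstack([honest, poisoned])

print("mean:  ", aggregate_mean(updates))    # dragged far from the honest value ~1.0
print("median:", aggregate_median(updates))  # stays close to 1.0
```

Our S&P 2024 paper (listed below) examines the limits of such robust aggregators in high dimensions.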
Watermarking AI
How can one verify whether a data sample (text/image/sound) comes from a generative AI model? One way is to insert “watermarks” into AI-generated content at the source, which the end user’s device can later verify using a secret key. Doing so has many security applications: limiting nefarious use of... [Read More]
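To make the embed-then-verify idea concrete, here is a toy sketch of a keyed watermark over token sequences, in the spirit of “green list” watermarking schemes; it is purely illustrative and is not the CLWE-based construction in CLUE-Mark. The vocabulary, key, and generator are all hypothetical stand-ins.

```python
import hashlib
import random

VOCAB = list(range(1000))  # hypothetical token ids

def green_list(secret_key: bytes, prev_token: int, frac: float = 0.5):
    """Pseudorandomly partition the vocabulary using the key and previous token."""
    seed = hashlib.sha256(secret_key + prev_token.to_bytes(4, "big")).digest()
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(frac * len(VOCAB))])

def generate_watermarked(secret_key: bytes, length: int = 200):
    """Stand-in for a generator: always pick the next token from the green list."""
    rng, tokens = random.Random(42), [0]
    for _ in range(length):
        greens = green_list(secret_key, tokens[-1])
        tokens.append(rng.choice(sorted(greens)))
    return tokens[1:]

def detect(secret_key: bytes, tokens):
    """Verifier with the secret key: fraction of tokens that land in the green list."""
    hits = sum(t in green_list(secret_key, prev)
               for prev, t in zip([0] + tokens, tokens))
    return hits / len(tokens)  # ~1.0 if watermarked, ~0.5 for unrelated text

key = b"secret-watermark-key"
print("watermarked hit rate:", detect(key, generate_watermarked(key)))

rnd = random.Random(7)
print("random text hit rate:", detect(key, [rnd.choice(VOCAB) for _ in range(200)]))
```

Real schemes bias the model's sampling rather than forcing it, and a statistical test over the hit rate decides whether the content carries the watermark.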
Defeating Model Inversion
Another prominent attack on the OWASP Top 10 list is model inversion: given the outputs of an ML model, the attack approximately recovers its inputs. This is a serious security and privacy concern. Consider a face authentication service. When you enroll for face authentication, it uses a face recognition model to... [Read More]
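The sketch below shows the core mechanic on a toy linear embedding model (assuming NumPy; real face recognizers are nonlinear and lower-dimensional, so attacks recover inputs only approximately): given a leaked embedding, the attacker gradient-descends on a candidate input until its embedding matches.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(128, 32))  # toy linear "embedding model": f(x) = W @ x
x_secret = rng.normal(size=32)  # sensitive input (stand-in for a face image)
leaked = W @ x_secret           # embedding the attacker observes

# Attacker: minimize ||W @ x - leaked||^2 by gradient descent from a random start.
x = rng.normal(size=32)
for _ in range(2000):
    grad = 2 * W.T @ (W @ x - leaked)  # gradient of the squared reconstruction error
    x -= 1e-3 * grad

cos = x @ x_secret / (np.linalg.norm(x) * np.linalg.norm(x_secret))
print(f"cosine similarity to the secret input: {cos:.3f}")  # ~1.0 => recovered
```

With a real recognizer the attacker obtains an approximation of the enrolled face rather than an exact copy, which is still enough to be dangerous.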
Robust Verification and Privacy
Our work offers further examples of principled approaches to analyzing machine learning security. We have applied quantitative verification to the fairness and adversarial robustness of neural networks. We have also analyzed when membership inference tests are conclusive; our work shows the importance of causal reasoning here, as opposed to drawing conclusions... [Read More]
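For background, the snippet below sketches the classic loss-threshold membership inference baseline (a standard baseline, not the causal analysis from our CCS 2022 paper; the loss distributions are synthetic): a sample with unusually low loss is guessed to be a training member.

```python
import numpy as np

def membership_guess(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Guess 'member' (True) when the model's loss on a sample is below threshold."""
    return losses < threshold

rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=1000)     # overfitting => low train loss
nonmember_losses = rng.exponential(scale=1.0, size=1000)  # higher loss on unseen data

threshold = 0.4
tpr = membership_guess(member_losses, threshold).mean()
fpr = membership_guess(nonmember_losses, threshold).mean()
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}")  # the gap shrinks as the model generalizes better
```

Whether such a test is actually conclusive depends on how the loss distributions arise, which is where the causal perspective above comes in.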
Others
Can we learn, from observations, rules that are useful for security analysis, such as taint analysis rules? In this line of work, we use ML to power such security analyses.
Publications
Recent
Attacking Byzantine Robust Aggregation in High Dimensions
Sarthak Choudhary*, Aashish Kolluri*, Prateek Saxena
IEEE Symposium on Security and Privacy (S&P 2024). San Francisco, CA,
May 2024.
On Inversion Attacks and Countermeasures for Leaked Vector Representations
Louise Xu, Mallika Prabhakar, Prateek Saxena
In Review, 2024.
CLUE-Mark: Watermarking Diffusion Models using CLWE
Kareem Shehata, Aashish Kolluri, Prateek Saxena
In Review, 2024.
Unforgeability in Stochastic Gradient Descent
Teodora Baluta, Ivica Nikolić, Racchit Jain, Divesh Aggarwal, Prateek Saxena
ACM Conference on Computer and Communications Security (CCS 2023). Copenhagen, Denmark,
Nov 2023.
LPGNet: Link Private Graph Networks for Node Classification
Aashish Kolluri, Teodora Baluta, Prateek Saxena
ACM Conference on Computer and Communications Security (CCS 2022). Los Angeles, CA,
Nov 2022.
Membership Inference Attacks and Generalization: A Causal Perspective
Teodora Baluta, Shiqi Shen, S. Hitarth, Shruti Tople, Prateek Saxena
ACM Conference on Computer and Communications Security (CCS 2022). Los Angeles, CA,
Nov 2022.
Private Hierarchical Clustering in Federated Networks
Aashish Kolluri, Teodora Baluta, Prateek Saxena
ACM Conference on Computer and Communications Security (CCS 2021). South Korea,
Nov 2021.
Scalable Quantitative Verification For Deep Neural Networks
Teodora Baluta, Zheng Leong Chua, Kuldeep S. Meel, Prateek Saxena
International Conference on Software Engineering (ICSE 2021). Madrid, Spain,
May 2021.