MAPS
Machine learning and Algorithms for Practical Security

Machine learning tools have become widely accessible over the past decade, but their security remains an ongoing challenge. OWASP has summarized the ‘Top 10’ practical problems in machine learning (ML) security, yet research on each sub-problem remains an ongoing race between attacks and defenses. Does this cat-and-mouse race have an end? Are there optimal defense strategies under which every attack within a given cost budget becomes impractical?
The MAPS project aims to answer these questions in a principled manner by identifying the inherent limitations of current schemes and drawing from cryptographically hard problems to establish robust security guarantees. Specifically, we study four main areas:
- the practical impact of data poisoning attacks in federated settings, and the computational limits of robust aggregation defenses against them;
- watermarking schemes for AI-generated content that are provably secure against all possible attacks;
- quantitative verification of desired properties of ML systems;
- practical differential privacy in federated networks and graph neural networks (GNNs).
MAPS is a project under the KISP NUS Lab.
Poisoning Attacks & Provable Defenses
In a data poisoning attack, an adversary corrupts the training data to induce undesirable behaviour in the trained model, including, but not limited to, inserting backdoors, degrading overall accuracy, and misclassifying targeted inputs. [Read More]
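For intuition, the sketch below (illustrative only, not the project's own code) shows a label-flipping poison on toy data, then contrasts plain averaging of client updates with a coordinate-wise median, one standard robust-aggregation defense in federated learning. All names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Label-flipping poisoning on a toy dataset ---
X = rng.normal(size=(100, 5))
y = (X[:, 0] > 0).astype(int)            # clean labels

poison_frac = 0.2                        # attacker controls 20% of the data
idx = rng.choice(len(y), size=int(poison_frac * len(y)), replace=False)
y_poisoned = y.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]    # flip labels on the controlled points

# --- Robust aggregation in one federated round ---
# Ten clients each submit a 5-dimensional model update; one is malicious.
updates = rng.normal(size=(10, 5))
updates[0] = 100.0                       # malicious client sends a huge update

mean_agg = updates.mean(axis=0)          # the plain average is dragged off
median_agg = np.median(updates, axis=0)  # coordinate-wise median stays close

print("mean aggregate:  ", np.round(mean_agg, 2))
print("median aggregate:", np.round(median_agg, 2))
```

The median bounds the influence of a minority of malicious clients, which is exactly the kind of guarantee whose computational cost this project studies.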
Cryptographic Primitives for ML Security
ML systems are increasingly deployed in security-critical applications, but traditional defenses often rely on heuristics that lack formal guarantees. In this project, we explore how cryptographic primitives can provide provable guarantees for model integrity, provenance, and privacy. [Read More]
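As a minimal sketch of the idea (not the project's scheme), an off-the-shelf primitive such as an HMAC over serialized weights already yields a provable integrity check for a model; the key handling and function names below are hypothetical.

```python
import hmac
import hashlib
import numpy as np

def weights_tag(weights: list, key: bytes) -> str:
    """HMAC-SHA256 tag over serialized model weights."""
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for w in weights:
        mac.update(w.tobytes())
    return mac.hexdigest()

key = b"shared-secret-key"               # hypothetical; use a real key store
weights = [np.ones((2, 2)), np.zeros(3)]

tag = weights_tag(weights, key)          # published alongside the model
assert hmac.compare_digest(tag, weights_tag(weights, key))

weights[0][0, 0] = 5.0                   # any tampering invalidates the tag
assert not hmac.compare_digest(tag, weights_tag(weights, key))
print("integrity check behaves as expected")
```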
Privacy-Preserving Graph Learning
Graph-structured data is ubiquitous, with applications ranging from social networks to traffic prediction, so graph learning tasks such as node classification have become increasingly important. However, graph data often encodes sensitive information and is frequently distributed across multiple machines to comply with data residency and privacy laws. Motivated by these... [Read More]
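To make the privacy angle concrete, here is a small sketch (illustrative, not the project's method) of releasing a graph statistic under edge-level differential privacy with the standard Laplace mechanism; the graph and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_release(value: float, sensitivity: float, epsilon: float) -> float:
    """Epsilon-DP release of a statistic via the Laplace mechanism."""
    return value + rng.laplace(scale=sensitivity / epsilon)

# Toy undirected graph as an adjacency matrix.  Adding or removing one edge
# changes a node's degree by at most 1, so a degree query has sensitivity 1.
adj = rng.integers(0, 2, size=(8, 8))
adj = np.triu(adj, 1)
adj = adj + adj.T                        # symmetric, no self-loops

true_degree = int(adj[0].sum())
noisy_degree = dp_release(true_degree, sensitivity=1.0, epsilon=0.5)
print("true degree:", true_degree, " private release:", round(noisy_degree, 2))
```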
Quantitative Verification of Privacy Properties
As machine learning is increasingly used in safety-critical and privacy-sensitive applications, it is crucial to analyze the models’ security properties in a principled manner. [Read More]
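One simple instance of quantitative verification (a sketch under assumed definitions, not the project's tool) is estimating the probability that a property holds by sampling, with a Hoeffding confidence bound; the classifier and the robustness-style property below are hypothetical placeholders.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def estimate_probability(check, sample, n: int, delta: float):
    """Monte Carlo estimate of Pr[check(sample()) holds], with a Hoeffding
    half-width that is valid with probability at least 1 - delta."""
    hits = sum(check(sample()) for _ in range(n))
    p_hat = hits / n
    half_width = math.sqrt(math.log(2 / delta) / (2 * n))
    return p_hat, half_width

# Hypothetical property: a toy linear classifier keeps its prediction under
# small uniform input noise.
w = rng.normal(size=5)
predict = lambda x: float(w @ x) > 0
def stable(x, eps=0.1):
    return predict(x) == predict(x + rng.uniform(-eps, eps, size=5))

p, hw = estimate_probability(stable, lambda: rng.normal(size=5), 10_000, 0.01)
print(f"Pr[stable] ~ {p:.3f} +/- {hw:.3f} (99% confidence)")
```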
Publications
Current members
We gratefully acknowledge the support of Crystal Center.