Robust Verification and Privacy
Our work includes further examples of principled approaches to analyzing machine learning security. We have applied quantitative verification to analyze the fairness and adversarial robustness of neural networks, and we have characterized when membership inference tests are conclusive. The latter work highlights the importance of causal reasoning, as opposed to drawing conclusions purely from observed statistical correlations. We have also worked on making differential privacy practical for fully distributed graph processing, e.g., for hierarchical clustering and for training graph neural networks (GNNs). Lastly, we are studying data repudiation, i.e., convincing a verifier that a given data point was not used in training.
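As a concrete illustration of what a membership inference test looks like, below is a minimal sketch of the classic loss-threshold attack (in the style of Yeom et al.): a sample is predicted to be a training member if the model's loss on it falls below a calibrated threshold. The synthetic loss distributions and the median-based threshold are hypothetical placeholders, not the setup from our papers; the sketch only shows why such a test can be inconclusive when members and non-members have similar loss distributions.

import numpy as np

def loss_threshold_mi_test(losses, threshold):
    # Predict 'member' (True) when the per-sample loss is below the threshold.
    # Low loss correlates with membership, but on a well-generalizing model
    # members and non-members have similar losses, so the test says little.
    return np.asarray(losses) < threshold

# Hypothetical evaluation on synthetic per-sample losses.
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.1, size=1000)     # members: lower loss
nonmember_losses = rng.exponential(scale=0.3, size=1000)  # non-members: higher loss

threshold = np.median(np.concatenate([member_losses, nonmember_losses]))
tpr = loss_threshold_mi_test(member_losses, threshold).mean()
fpr = loss_threshold_mi_test(nonmember_losses, threshold).mean()
print(f"attack TPR = {tpr:.2f}, FPR = {fpr:.2f}")  # gap tracks generalization gap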
Relevant Publications
Unforgeability in Stochastic Gradient Descent
Teodora Baluta, Ivica Nikolic, Racchit Jain, Divesh Aggarwal, Prateek Saxena
ACM Conference on Computer and Communications Security (CCS 2023). Copenhagen, Denmark, Nov 2023.
LPGNet: Link Private Graph Networks for Node Classification
Aashish Kolluri, Teodora Baluta, Prateek Saxena
ACM Conference on Computer and Communications Security (CCS 2022). Los Angeles, CA, Nov 2022.
Membership Inference Attacks and Generalization: A Causal Perspective
Teodora Baluta, Shiqi Shen, S. Hitarth, Shruti Tople, Prateek Saxena
ACM Conference on Computer and Communications Security (CCS 2022). Los Angeles, CA, Nov 2022.
Private Hierarchical Clustering in Federated Networks
Aashish Kolluri, Teodora Baluta, Prateek Saxena
ACM Conference on Computer and Communications Security (CCS 2021). Seoul, South Korea, Nov 2021.
Scalable Quantitative Verification For Deep Neural Networks
Teodora Baluta, Zheng Leong Chua, Kuldeep S. Meel, Prateek Saxena
International Conference on Software Engineering (ICSE 2021). Madrid, Spain, May 2021.
Quantitative Verification of Neural Networks and Its Security Applications
Teodora Baluta, Shiqi Shen, Shweta Shinde, Kuldeep S. Meel, Prateek Saxena
ACM Conference on Computer and Communications Security (CCS 2019). London, UK, Nov 2019.