Quantitative Verification of Privacy Properties
As machine learning is increasingly used in safety-critical and privacy-sensitive applications, it is crucial to analyze the security and privacy properties of the resulting models in a principled manner.
We develop techniques for quantitative verification that go beyond qualitative yes/no testing and instead estimate the probability that a neural network satisfies a given property. For example, we have applied quantitative verification to check fairness and adversarial robustness of neural networks in a sound and scalable manner. We have also quantitatively analyzed the root causes of membership inference attacks through a causal framework, showing why causal reasoning, rather than conclusions drawn from statistical correlations alone, is needed for reliable analysis.
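To make the estimation idea concrete, below is a minimal sketch of statistical quantitative verification: it estimates the probability that a toy, hypothetical model satisfies a fairness-style property by sampling inputs, with a Hoeffding bound choosing the sample size for an (eps, delta) guarantee. This is only an illustration of the quantitative-estimation idea; the papers listed below obtain their guarantees differently, by reducing the network and property to approximate model counting (e.g., over binarized networks), not by naive sampling. All names here (estimate_satisfaction, sample_input, unfair_flip) are hypothetical.

    import numpy as np

    def estimate_satisfaction(model, sample_input, prop, eps=0.01, delta=0.001):
        # Hoeffding bound: n >= ln(2/delta) / (2*eps^2) samples suffice for an
        # estimate within +/- eps of Pr[prop holds], with probability >= 1 - delta.
        n = int(np.ceil(np.log(2.0 / delta) / (2.0 * eps ** 2)))
        hits = 0
        for _ in range(n):
            x = sample_input()
            hits += prop(x, model(x))
        return hits / n

    # Toy linear "network" over 8 binary features; all of this is illustrative.
    rng = np.random.default_rng(0)
    w = rng.normal(size=8)
    model = lambda x: float(x @ w > 0)
    sample_input = lambda: rng.integers(0, 2, size=8).astype(float)

    def unfair_flip(x, y):
        # Fairness-style property: flipping the (hypothetical) sensitive
        # feature x[0] does not change the model's decision.
        x2 = x.copy()
        x2[0] = 1.0 - x2[0]
        return model(x2) == y

    # Estimated probability that the property holds over the input distribution.
    print(estimate_satisfaction(model, sample_input, unfair_flip))

The design point the sketch illustrates is the shift from a yes/no verdict ("does a violating input exist?") to a quantitative one ("how often is the property violated?"), which is what makes properties like fairness and attack success rates meaningful to measure.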
Relevant Publications
Membership Inference Attacks and Generalization: A Causal Perspective
Teodora Baluta, Shiqi Shen, S. Hitarth, Shruti Tople, Prateek Saxena
ACM Conference on Computer and Communications Security (CCS 2022), Los Angeles, CA, Nov 2022.
Scalable Quantitative Verification For Deep Neural Networks
Teodora Baluta, Zheng Leong Chua, Kuldeep S. Meel, Prateek Saxena
International Conference on Software Engineering (ICSE 2021), Madrid, Spain, May 2021.
Quantitative Verification of Neural Networks and Its Security Applications
Teodora Baluta, Shiqi Shen, Shweta Shinde, Kuldeep S. Meel, Prateek Saxena
ACM Conference on Computer and Communications Security (CCS 2019), London, UK, Nov 2019.