Quantitative Verification of Security Properties

As machine learning is increasingly deployed in safety-critical and privacy-sensitive applications, it is crucial to analyze the security properties of models in a principled manner.

We develop techniques for quantitative verification that go beyond qualitative yes/no testing and instead estimate the probability that a neural network satisfies a given property. For example, we have applied quantitative verification to check the fairness and adversarial robustness of neural networks in a sound and scalable manner. We have also quantitatively analyzed the root causes of membership inference attacks through a causal framework, highlighting the importance of causal reasoning over conclusions drawn purely from observed statistical correlations. In addition, we have developed a framework for testing the provenance of large language models via hypothesis testing, using only query access to the models.
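To convey the quantitative flavor of these analyses, here is a minimal sketch of estimating the probability that a property holds by sampling, with a Hoeffding-style confidence interval. It is illustrative only: the function names, the input-sampling distribution, and the plain Monte Carlo estimator are assumptions made for this sketch, whereas the papers below develop sound estimation procedures with formal guarantees.

```python
import numpy as np

def estimate_property_probability(model, sample_inputs, property_holds,
                                  n_samples=10_000, confidence=0.95):
    """Monte Carlo estimate of Pr[property holds] with a Hoeffding bound.

    model          -- callable mapping one input to a prediction
    sample_inputs  -- callable returning n inputs drawn from the distribution
                      of interest (e.g., perturbations around a seed input)
    property_holds -- callable (input, prediction) -> bool
    Returns (point estimate, (lower, upper)) at the requested confidence.
    """
    xs = sample_inputs(n_samples)
    hits = sum(bool(property_holds(x, model(x))) for x in xs)
    p_hat = hits / n_samples
    # Hoeffding: Pr[|p_hat - p| >= eps] <= 2 * exp(-2 * n * eps^2)
    delta = 1.0 - confidence
    eps = np.sqrt(np.log(2.0 / delta) / (2.0 * n_samples))
    return p_hat, (max(0.0, p_hat - eps), min(1.0, p_hat + eps))
```

Model provenance testing admits a similarly simple sketch: query both models on a set of probe prompts and test whether their agreement rate is higher than one would expect between unrelated models. The probe set, the exact-match agreement measure, and the baseline agreement rate under the null hypothesis are hypothetical placeholders here; the NeurIPS 2025 paper below describes the actual test.

```python
from scipy.stats import binomtest

def provenance_test(candidate, base, probes, baseline_agreement=0.5, alpha=0.01):
    """Hypothesis test for model provenance using only query access.

    candidate, base    -- callables mapping a prompt to a text output
    probes             -- list of probe prompts
    baseline_agreement -- assumed agreement rate between unrelated models (null)
    Returns (reject_null, p_value); rejecting suggests shared provenance.
    """
    agree = sum(candidate(p) == base(p) for p in probes)
    result = binomtest(agree, n=len(probes), p=baseline_agreement,
                       alternative="greater")
    return result.pvalue < alpha, result.pvalue
```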

Relevant Publications
Model Provenance Testing for Large Language Models
Ivica Nikolić, Teodora Baluta, Prateek Saxena
Conference on Neural Information Processing Systems (NeurIPS 2025). San Diego, CA, Dec 2025.
PDF
Membership Inference Attacks and Generalization: A Causal Perspective
Teodora Baluta, Shiqi Shen, S. Hitarth, Shruti Tople, Prateek Saxena
ACM Conference on Computer and Communications Security (CCS 2022). Los Angeles, CA, Nov 2022.
PDF GitHub
Scalable Quantitative Verification For Deep Neural Networks
Teodora Baluta, Zheng Leong Chua, Kuldeep S. Meel, Prateek Saxena
International Conference on Software Engineering (ICSE 2021). Madrid, Spain, May 2021.
Quantitative verification of neural networks and its security applications
Teodora Baluta, Shiqi Shen, Shweta Shinde, Kuldeep S. Meel, Prateek Saxena
ACM Conference on Computer and Communications Security (CCS 2019). London, UK, Nov 2019.