Quantitative Verification of Security Properties
As machine learning is increasingly deployed in safety-critical and privacy-sensitive applications, it is crucial to analyze models’ security properties in a principled manner.
We develop techniques for quantitative verification that go beyond qualitative yes/no testing and instead estimate how likely a neural network is to satisfy a given property. For example, we have applied quantitative verification to check the fairness and adversarial robustness of neural networks in a sound and scalable manner. We have also quantitatively analyzed the root causes of membership inference attacks through a causal framework, showcasing the importance of causal reasoning over conclusions drawn purely from observed statistical correlations. Finally, we have developed a framework for testing model provenance for LLMs via hypothesis testing, requiring only query access to the models.
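To give a flavor of the estimation task, the following Python sketch approximates the probability that a property holds by Monte Carlo sampling with a Hoeffding confidence bound. It is an illustrative simplification, not our actual sound verification procedure, and `model`, `sample_input`, and `satisfies` are hypothetical stand-ins for a trained network, an input sampler, and a property checker.

```python
import numpy as np

def estimate_property_probability(model, sample_input, satisfies,
                                  n=10_000, delta=0.01):
    """Monte Carlo estimate of Pr[property holds], with a Hoeffding bound.

    sample_input() draws an input from the distribution of interest;
    satisfies(model, x) checks the property (e.g., local robustness) at x.
    Both are hypothetical callables used only for illustration.
    """
    hits = sum(satisfies(model, sample_input()) for _ in range(n))
    p_hat = hits / n
    # Hoeffding's inequality: |p_hat - p| <= eps with probability >= 1 - delta.
    eps = np.sqrt(np.log(2 / delta) / (2 * n))
    return p_hat, eps
```

The returned pair (p_hat, eps) gives a probabilistic guarantee that the true satisfaction probability lies within p_hat ± eps, which is the quantitative analogue of a yes/no verification verdict.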
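In the same spirit, a minimal sketch of query-only provenance testing: it queries a suspect model and a reference model on probe inputs and runs a one-sided test of whether their agreement rate exceeds what unrelated models would show. The query functions, probe set, and null agreement rate `p0` are assumptions made for illustration; the actual framework's test statistic and null model differ.

```python
import math

def provenance_test(query_suspect, query_reference, probes,
                    p0=0.5, alpha=0.05):
    """Hypothesis-testing sketch for model provenance with query access only.

    H0: agreement rate <= p0 (suspect is unrelated to the reference)
    H1: agreement rate  > p0 (suspect plausibly derives from the reference)

    query_suspect / query_reference are hypothetical black-box query
    functions returning labels; probes is a list of probe inputs.
    """
    n = len(probes)
    agree = sum(query_suspect(x) == query_reference(x) for x in probes)
    p_hat = agree / n
    # One-sided z-test via the normal approximation to the binomial.
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    p_value = 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return p_value < alpha, p_hat, p_value
```

Rejecting H0 at level alpha offers statistical evidence of shared provenance without requiring access to model weights, which is what makes a query-only setting viable.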