Others
We also present principled approaches to analyzing machine learning security along several other dimensions, such as robust verification and privacy. For example, we applied quantitative verification to check the fairness and adversarial robustness of neural networks in a sound and scalable manner. We also analyzed the root causes of membership inference attacks through a causal framework, showcasing the importance of causal reasoning over conclusions drawn purely from observed statistical correlations. We worked on making differential privacy more practical for fully distributed graph processing, e.g., for hierarchical clustering and for training GNNs. We are also studying data repudiation, i.e., convincing a verifier that a given data point was not used in training. Lastly, we used ML for security analysis itself, such as learning taint rules to perform dynamic binary taint analysis.
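To make the differential-privacy building block concrete, the following is a minimal sketch of the standard Laplace mechanism for releasing a single statistic, such as a node-degree count in a graph. This is only an illustration of the generic mechanism, not our distributed protocol; the function name and parameters are chosen here for exposition.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise with scale sensitivity / epsilon."""
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via inverse transform sampling:
    # u is uniform on (-0.5, 0.5), and sign(u) * log(1 - 2|u|) maps it
    # to a standard Laplace draw.
    u = random.random() - 0.5
    return true_value - scale * math.copysign(math.log(1 - 2 * abs(u)), u)

# Example: privately release a degree count. Sensitivity is 1 because
# adding or removing one edge changes any node's degree by at most 1.
noisy_degree = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
```

Smaller values of `epsilon` give stronger privacy at the cost of larger noise; in the distributed graph setting, the main additional challenge is applying such noise without any party ever seeing the raw statistic.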