[Colloquium] Huiying Li Dissertation Defense/Mar 6, 2023

Megan Woodward meganwoodward at uchicago.edu
Fri Mar 3 09:17:11 CST 2023


This is an announcement of Huiying Li's Dissertation Defense.
===============================================
Candidate: Huiying Li

Date: Monday, March 06, 2023

Time: 10:30 am CST

Location: JCL 298

Title: Revealing and Mitigating Vulnerabilities of Deep Neural Networks in the Wild

Abstract:
Although Deep Neural Networks (DNNs) are widely used in applications such as facial or iris recognition and language translation, there is growing concern about their reliability in safety-critical or security-critical contexts. Researchers have found that DNNs can be manipulated by poisoning attacks such as backdoor attacks and are vulnerable to evasion attacks such as adversarial examples. Attackers can compromise DNN models by injecting backdoors during the training phase or by adding imperceptible adversarial perturbations to model inputs at inference time. To ensure secure and reliable deep learning systems, it is crucial to identify and mitigate these vulnerabilities. Despite active efforts within the adversarial machine learning community to identify DNN vulnerabilities, there remains a significant gap between current research and the practical deployment of these systems in the real world. Recent studies show that model practitioners often do not anticipate attacks on their models in the near future, largely because prior research on machine learning security relies on oversimplified threat models that do not accurately reflect real-world scenarios.
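As a rough illustration of the evasion attacks mentioned above (not of the specific attacks studied in the dissertation), the following sketch crafts an imperceptible perturbation with the standard FGSM method in PyTorch; the model, the labeled batch (x, y), and the epsilon budget are placeholders:

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        # Craft an adversarial copy of x whose perturbation stays within an
        # L-infinity budget of epsilon (FGSM, a basic white-box evasion attack).
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction that increases the loss, then clip to a valid pixel range.
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

A black-box attacker querying an MLaaS platform cannot compute this gradient directly and must estimate it through repeated queries, which is what makes the query-level defense discussed in the abstract relevant.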

In this dissertation, I seek to reveal and mitigate DNN vulnerabilities in practical settings by designing and measuring attacks and defenses against DNNs under realistic threat models. Specifically, my dissertation consists of three components that target different practical constraints. The first two components focus on uncovering and mitigating DNN backdoor attacks under practical constraints, one during the attack injection phase and one after injection. Because training a production model from scratch is resource-intensive, entities often build their models through transfer learning, which breaks existing backdoors. I propose a more advanced backdoor attack, the latent backdoor, that can survive transfer learning. Next, although existing work assumes models are static and injected backdoors stay in place permanently, production models are usually updated periodically to address data distribution drift. I conduct a comprehensive study of how backdoor attacks behave on these time-varying models and propose a smart training strategy that significantly reduces backdoor survivability with negligible overhead. The last component focuses on defending against black-box adversarial attacks on real-world machine-learning-as-a-service (MLaaS) platforms. To address the challenge that today's MLaaS platforms receive millions of queries per day, I design and implement a scalable and robust defense system against black-box adversarial attacks on DNNs. Finally, I summarize my work on revealing and mitigating real-world DNN attacks under practical constraints and discuss my insights in this area. I hope my work can bridge the gap between the exploration of DNN attacks and defenses and their application in real-world systems, and inspire further research on DNN vulnerabilities in real-world scenarios.
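For readers unfamiliar with backdoor (poisoning) attacks, the minimal sketch below stamps a trigger patch onto training images and relabels them to an attacker-chosen class. The patch size, position, and target label are arbitrary illustrative choices, and this is the basic BadNets-style attack rather than the latent backdoor construction proposed in the dissertation:

    import torch

    def poison_batch(images, labels, target_label, patch=3, trigger_value=1.0):
        # Stamp a small square trigger in the bottom-right corner of each image
        # (shape: N x C x H x W) and relabel the batch to the attacker's target class.
        poisoned = images.clone()
        poisoned[:, :, -patch:, -patch:] = trigger_value
        poisoned_labels = torch.full_like(labels, target_label)
        return poisoned, poisoned_labels

    # A model trained on a mix of clean and poisoned data behaves normally on clean
    # inputs but predicts target_label whenever the trigger is present.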

Advisors: Ben Zhao and Heather Zheng

Committee Members: Ben Zhao, Heather Zheng, and Rana Hanocka



