[Colloquium] Reminder - Huiying Li Candidacy Exam/Jun 3, 2022

Megan Woodward meganwoodward at uchicago.edu
Fri Jun 3 08:20:35 CDT 2022


This is an announcement of Huiying Li's Candidacy Exam.
===============================================
Candidate: Huiying Li

Date: Friday, June 03, 2022

Time: 1 pm CDT

Remote Location: https://zoom.us/j/95957583270?pwd=cnN2cXduWi9lUTlGM3hMdHVBY2ZIQT09  Meeting ID: 959 5758 3270 Passcode: 94QXmw

Location: JCL 346

Title: Discover, Understand and Mitigate Attacks on Deep Neural Networks

Abstract: Deep Neural Networks (DNNs) play an essential role in our daily lives. They are widely deployed in security- and safety-critical applications such as face authentication, self-driving algorithms, and financial services. Researchers have found that DNNs are vulnerable to a range of attacks, especially evasion attacks and poisoning attacks. My research focuses on revealing, analyzing, and mitigating these vulnerabilities to make DNNs more secure and robust. In this thesis proposal, I will introduce my work on discovering, understanding, and mitigating attacks on DNNs.
I will first present Blacklight, a scalable defense system that protects Deep Neural Networks against query-based black-box adversarial attacks. Query-based black-box adversarial attacks are a class of evasion attacks in which the attacker crafts an adversarial example by repeatedly querying the target model and observing its outputs. The fundamental insight driving our design is that, to compute adversarial examples, these attacks perform iterative optimization over the network, producing sequences of image queries that are highly similar in the input space. The key challenge is that such a defense must scale efficiently to industry production systems handling millions of queries per day. Blacklight overcomes this challenge by applying probabilistic fingerprinting to detect highly similar images. By rejecting all detected queries, Blacklight prevents any attack from completing, even when attackers persist in submitting queries after account bans or query rejections.
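To illustrate the fingerprinting idea at a high level, the minimal sketch below hashes fixed-length segments of a quantized query image, keeps a small set of hashes as the query's fingerprint, and flags a new query whose fingerprint overlaps heavily with a previously seen one. The quantization step, segment length, fingerprint size, and match threshold here are illustrative assumptions, not the actual Blacklight implementation.

    # Sketch of probabilistic fingerprinting for detecting highly similar
    # queries (illustrative parameters; not the exact Blacklight code).
    import hashlib
    import numpy as np

    QUANT_STEP = 50       # quantization step for pixel values (assumed)
    WINDOW = 20           # length of each hashed pixel segment (assumed)
    TOP_N = 50            # number of hashes kept per fingerprint (assumed)
    MATCH_THRESHOLD = 25  # overlap count that flags an attack query (assumed)

    def fingerprint(image: np.ndarray) -> set:
        """Hash segments of the quantized image and keep the numerically
        smallest TOP_N hashes as a compact probabilistic fingerprint."""
        q = (image.flatten() // QUANT_STEP).astype(np.uint8).tobytes()
        hashes = [hashlib.sha256(q[i:i + WINDOW]).hexdigest()
                  for i in range(0, len(q) - WINDOW, WINDOW)]
        return set(sorted(hashes)[:TOP_N])

    class SimilarityDetector:
        """Store fingerprints of past queries; flag a new query whose
        fingerprint overlaps heavily with any stored one."""
        def __init__(self):
            self.seen = []  # a production system would index hashes instead

        def is_attack_query(self, image: np.ndarray) -> bool:
            fp = fingerprint(image)
            for prev in self.seen:
                if len(fp & prev) >= MATCH_THRESHOLD:
                    return True
            self.seen.append(fp)
            return False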
Next, I consider DNN backdoor attacks, a stealthy yet powerful class of poisoning attacks. Backdoors are hidden malicious behaviors injected into models by poisoning their training data. Here, I will present our recent work on latent backdoor attacks, a more powerful and stealthy variant of backdoor attacks that functions under transfer learning. Our proposed latent backdoor attacks embed incomplete backdoors into a “Teacher” model, which are then automatically inherited by multiple “Student” models through transfer learning.
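For context, the sketch below shows conventional trigger-based data poisoning, the basic mechanism by which a backdoor is injected into a model; it is not the latent backdoor construction itself. The trigger shape and placement, poison rate, and target label are illustrative assumptions.

    # Sketch of conventional trigger-based backdoor poisoning
    # (illustrative only; not the latent/transfer-learning variant).
    import numpy as np

    def stamp_trigger(image: np.ndarray) -> np.ndarray:
        """Place a small white square (the trigger) in the bottom-right
        corner of an HxWxC image."""
        poisoned = image.copy()
        poisoned[-4:, -4:, :] = 255
        return poisoned

    def poison_dataset(images: np.ndarray, labels: np.ndarray,
                       target_label: int = 0, poison_rate: float = 0.05):
        """Stamp the trigger on a small fraction of training images and
        relabel them to the attacker's target class."""
        images, labels = images.copy(), labels.copy()
        n_poison = int(len(images) * poison_rate)
        idx = np.random.choice(len(images), n_poison, replace=False)
        for i in idx:
            images[i] = stamp_trigger(images[i])
            labels[i] = target_label
        return images, labels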
Finally, I will briefly introduce my ongoing and future work on understanding backdoor survivability in time-varying models, along with the corresponding research plan. Production models are usually time-varying: their weights are updated over time to handle drift in the data distribution. We observe that time-varying models gradually forget backdoors once the poisoning stops. Thus, to better protect DNN models from backdoor attacks, we need to understand how backdoors behave in these time-varying models. Specifically, we need to empirically quantify the “survivability” of a backdoor against model updates, and examine how attack parameters, model update strategies, and data drift behaviors affect backdoor survivability.
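As a rough illustration of how survivability might be quantified, the sketch below tracks the attack success rate on trigger-stamped inputs after each clean model update once poisoning has stopped. The update/predict interfaces and the threshold below which the backdoor is considered forgotten are assumptions made for illustration, not the proposal's actual methodology.

    # Sketch: quantify backdoor "survivability" as the number of clean
    # model updates the backdoor withstands (interfaces/threshold assumed).
    from typing import Callable
    import numpy as np

    def attack_success_rate(predict_fn: Callable[[np.ndarray], np.ndarray],
                            triggered_inputs: np.ndarray,
                            target_label: int) -> float:
        """Fraction of triggered inputs classified as the attacker's target."""
        preds = predict_fn(triggered_inputs)
        return float(np.mean(preds == target_label))

    def measure_survivability(update_fn: Callable[[int], Callable],
                              triggered_inputs: np.ndarray,
                              target_label: int,
                              n_updates: int = 10,
                              dead_threshold: float = 0.1) -> int:
        """Apply n_updates poison-free updates; return how many updates the
        backdoor survives before its success rate drops below dead_threshold.
        update_fn(t) performs update t and returns the updated predict_fn."""
        for t in range(1, n_updates + 1):
            predict_fn = update_fn(t)  # one clean update on drifted data
            asr = attack_success_rate(predict_fn, triggered_inputs, target_label)
            if asr < dead_threshold:
                return t - 1           # backdoor considered forgotten
        return n_updates               # survived all measured updates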

Advisors: Ben Zhao and Heather Zheng

Committee Members: Ben Zhao, Heather Zheng, and Rana Hanocka
