<div dir="ltr">Hi all,<div><br></div><div>The Zoom link is available for anyone who prefers to join remotely.</div><div><a href="https://www.google.com/url?q=https://uchicago.zoom.us/j/6432681009?pwd%3DbUllY1JOVE9objEwUE5QMkIySjUrZz09&sa=D&source=calendar&ust=1730216558879674&usg=AOvVaw3KFYCX1W9n0S_dqj4BTvie" target="_blank" style="color:rgb(26,115,232);font-family:Roboto,Arial,sans-serif;font-size:14px;letter-spacing:0.2px">https://uchicago.zoom.us/j/6432681009?pwd=bUllY1JOVE9objEwUE5QMkIySjUrZz09</a><br></div><div><br></div><div>Best,</div><div>Xiao</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Oct 14, 2024 at 12:08 PM via cs <<a href="mailto:cs@mailman.cs.uchicago.edu">cs@mailman.cs.uchicago.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">This is an announcement of Xiao Zhang's Candidacy Exam.<br>
===============================================
Candidate: Xiao Zhang

Date: Thursday, October 24

Time: 2:00-3:00 pm CT

Location: JCL 223

Title: Representation Learning from and for Generative Models

Abstract: In this talk, I will present my research, which aims to connect self-supervised representation learning and generative modeling, two crucial concepts in modern computer vision. I'll demonstrate how generative models acquire strong visual representations and how improving representation learning can further enhance image generation quality. Real-world images have complex visual structures, and to recreate them, generative models must encode these visual representations internally. We validate this by developing a scalable compression technique that extracts meaningful low-dimensional semantic representations from all layers of deep generative models. This technique also helps us interpret the internal workings of these models, revealing that their computational pathways resemble the 'what' and 'where' visual processing paths found in human perception. To further enhance representation learning in generative models, we identify a key design flaw in residual connections that hinders generative feature learning. We address it with a new network design, decayed residual connections, which gradually reduce the influence of skip connections in residual networks, promoting low-rank representations in the bottleneck. This design significantly boosts feature learning in masked autoencoders and improves the generation quality of diffusion models, all without adding new parameters.
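
For readers curious how a decayed residual connection might look concretely, here is a minimal PyTorch sketch of the idea described in the abstract: a residual block whose skip path is scaled by a decay factor, so deeper blocks rely less on the identity shortcut. The block internals, the geometric-in-depth decay schedule, and all names here are illustrative assumptions, not the candidate's actual design.

# Minimal sketch only; the real decayed-residual design in the talk may differ.
import torch
import torch.nn as nn

class DecayedResidualBlock(nn.Module):
    def __init__(self, dim, alpha):
        super().__init__()
        self.alpha = alpha  # weight on the skip path; values < 1 decay its influence
        self.body = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, dim),
            nn.GELU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x):
        # Standard residual: x + body(x). Decayed residual: alpha * x + body(x).
        return self.alpha * x + self.body(x)

# Hypothetical schedule: the skip weight shrinks geometrically with depth.
depth, dim, gamma = 8, 256, 0.9
blocks = nn.Sequential(*[DecayedResidualBlock(dim, gamma ** i) for i in range(depth)])
x = torch.randn(4, dim)
print(blocks(x).shape)  # torch.Size([4, 256])

Note that this adds no new parameters relative to a plain residual stack; only the fixed skip weights change, which matches the parameter-free claim in the abstract.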

Advisor: Michael Maire

Committee members:
Michael Maire, Rebecca Willett, David Forsyth, Greg Shakhnarovich, Anand Bhattad