<div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-size:small"><div><p style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;line-height:normal;color:rgb(80,0,80);margin:0px"><font face="arial, sans-serif" color="#000000"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>When:</b>    </font></font><font style="vertical-align:inherit"><font style="vertical-align:inherit">    Tuesday, February 8th at<b> <span style="background-color:rgb(255,255,0)">11:00 am CT</span></b></font></font><br></font></p><p style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;line-height:normal;color:rgb(80,0,80);margin:0px"><font face="arial, sans-serif" color="#000000"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b><span style="background-color:rgb(255,255,0)"><br></span></b></font></font></font></p><div class="gmail_default"><b style="font-family:arial,sans-serif">Where:       </b><span style="background-color:rgb(255,255,0)"><font color="#500050" style="font-family:arial,sans-serif">Talk will be given </font><font color="#0000ff" face="verdana, sans-serif" style="font-weight:bold"><u>live, in-person</u></font><font color="#0000ff" style="font-family:arial,sans-serif;font-weight:bold"> </font><font color="#000000" style="font-family:arial,sans-serif">at</font></span></div><p class="MsoNormal" style="margin:0in;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif">                   TTIC, 6045 S. Kenwood Avenue</font></p><p class="MsoNormal" style="margin:0in;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif">                   5th Floor, Room 530<b><span style="color:black"> </span></b></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif" color="#000000"> </font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="color:rgb(0,0,0);vertical-align:inherit"><font style="vertical-align:inherit"><b>Where:</b>       </font></font><font style="color:rgb(80,0,80)">Zoom Virtual Talk (</font><a href="https://uchicagogroup.zoom.us/webinar/register/WN_6xGmjB73Qz2znt5Bm7IsDA" target="_blank"><b><font color="#0000ff">register in advance here</font></b></a><font style="color:rgb(80,0,80)">)</font></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><br></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><font color="#000000"><b>Who: </b> </font><font color="#500050">    </font><font color="#000000">    </font></font></font></font>Qi Lei, Princeton University</p></div><div><br></div><div><br></div><div><div><font face="arial, sans-serif"><b>Title: </b>         Theoretical Foundations of Pre-trained Models</font></div><div><font face="arial, sans-serif"><br><b>Abstract:</b>  A pre-trained model refers to any model trained on broad data at scale and can be adapted (e.g., fine-tuned) to a wide range of downstream tasks. The rise of pre-trained models (e.g., BERT, GPT-3, CLIP, Codex, MAE) transforms applications in various domains and aligns with how humans learn. Humans and animals first establish their concepts or impressions from different data domains and data modalities. The learned concepts then help them learn specific tasks with minimal external instructions. Accordingly, we argue that a pre-trained model follows a similar procedure through the lens of deep representation learning. 1) Learn a data representation that filters out irrelevant information from the training tasks; 2) Transfer the data representation to downstream tasks with few labeled samples and simple models.</font></div><div><font face="arial, sans-serif"><br>This talk establishes some theoretical understanding for pre-trained models under different settings, ranging from supervised pre-training, meta-learning, and self-supervised learning to domain adaptation or domain generalization. I will discuss the sufficient (and sometimes necessary) conditions for pre-trained models to work based on the statistical relation between training and downstream tasks. The theoretical analyses partly answer how they work, when they fail, guide technical decisions for future work, and inspire new methods in pre-trained models.</font></div><div><font face="arial, sans-serif"><br><b>Bio: </b>Qi Lei is an associate research scholar at the ECE department of Princeton University. She received her Ph.D. from Oden Institute for Computational Engineering & Sciences at UT Austin. She visited the Institute for Advanced Study (IAS)/Princeton for the Theoretical Machine Learning Program from 2019-2020. Before that, she was a research fellow at Simons Institute for the Foundations of Deep Learning Program. Her research aims to develop sample- and computationally efficient machine learning algorithms and bridge the theoretical and empirical gap in machine learning. Qi has received several awards, including the Outstanding Dissertation Award, National Initiative for Modeling and Simulation Graduate Research Fellowship, Computing Innovative Fellowship, and Simons-Berkeley Research Fellowship.<br></font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif"><b>Host: </b><a href="mailto:nati@ttic.edu" target="_blank"><b>Nathan Srebro</b></a></font></div><div><br style="color:rgb(80,0,80)"></div></div></div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><span style="font-family:arial,helvetica,sans-serif;font-size:x-small">Mary C. Marre</span><br></div><div><div><font face="arial, helvetica, sans-serif" size="1">Faculty Administrative Support</font></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6" size="1"><b>Toyota Technological Institute</b></font></i></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6" size="1">6045 S. Kenwood Avenue</font></i></div><div><font size="1"><i><font face="arial, helvetica, sans-serif" color="#3d85c6">Chicago, IL  60637</font></i><br></font></div><div><b><i><a href="mailto:mmarre@ttic.edu" target="_blank"><font face="arial, helvetica, sans-serif" size="1">mmarre@ttic.edu</font></a></i></b></div></div></div></div></div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Feb 8, 2022 at 10:12 AM Mary Marre <<a href="mailto:mmarre@ttic.edu">mmarre@ttic.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div style="font-size:small"><div><p style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;line-height:normal;color:rgb(80,0,80);margin:0px"><font face="arial, sans-serif" color="#000000"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>When:</b>    </font></font><font style="vertical-align:inherit"><font style="vertical-align:inherit">    Tuesday, February 8th at<b> <span style="background-color:rgb(255,255,0)">11:00 am CT</span></b></font></font><br></font></p><p style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;line-height:normal;color:rgb(80,0,80);margin:0px"><font face="arial, sans-serif" color="#000000"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b><span style="background-color:rgb(255,255,0)"><br></span></b></font></font></font></p><div><b style="font-family:arial,sans-serif">Where:       </b><span style="background-color:rgb(255,255,0)"><font color="#500050" style="font-family:arial,sans-serif">Talk will be given </font><font color="#0000ff" face="verdana, sans-serif" style="font-weight:bold"><u>live, in-person</u></font><font color="#0000ff" style="font-family:arial,sans-serif;font-weight:bold"> </font><font color="#000000" style="font-family:arial,sans-serif">at</font></span></div><p class="MsoNormal" style="margin:0in;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif">                   TTIC, 6045 S. Kenwood Avenue</font></p><p class="MsoNormal" style="margin:0in;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif">                   5th Floor, Room 530<b><span style="color:black"> </span></b></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif" color="#000000"> </font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="color:rgb(0,0,0);vertical-align:inherit"><font style="vertical-align:inherit"><b>Where:</b>       </font></font><font style="color:rgb(80,0,80)">Zoom Virtual Talk (</font><a href="https://uchicagogroup.zoom.us/webinar/register/WN_6xGmjB73Qz2znt5Bm7IsDA" target="_blank"><b><font color="#0000ff">register in advance here</font></b></a><font style="color:rgb(80,0,80)">)</font></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><br></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><font color="#000000"><b>Who: </b> </font><font color="#500050">    </font><font color="#000000">    </font></font></font></font>Qi Lei, Princeton University</p></div><div><br></div><div><br></div><div><div><font face="arial, sans-serif"><b>Title: </b>         Theoretical Foundations of Pre-trained Models</font></div><div><font face="arial, sans-serif"><br><b>Abstract:</b>  A pre-trained model refers to any model trained on broad data at scale and can be adapted (e.g., fine-tuned) to a wide range of downstream tasks. The rise of pre-trained models (e.g., BERT, GPT-3, CLIP, Codex, MAE) transforms applications in various domains and aligns with how humans learn. Humans and animals first establish their concepts or impressions from different data domains and data modalities. The learned concepts then help them learn specific tasks with minimal external instructions. Accordingly, we argue that a pre-trained model follows a similar procedure through the lens of deep representation learning. 1) Learn a data representation that filters out irrelevant information from the training tasks; 2) Transfer the data representation to downstream tasks with few labeled samples and simple models.</font></div><div><font face="arial, sans-serif"><br>This talk establishes some theoretical understanding for pre-trained models under different settings, ranging from supervised pre-training, meta-learning, and self-supervised learning to domain adaptation or domain generalization. I will discuss the sufficient (and sometimes necessary) conditions for pre-trained models to work based on the statistical relation between training and downstream tasks. The theoretical analyses partly answer how they work, when they fail, guide technical decisions for future work, and inspire new methods in pre-trained models.</font></div><div><font face="arial, sans-serif"><br><b>Bio: </b>Qi Lei is an associate research scholar at the ECE department of Princeton University. She received her Ph.D. from Oden Institute for Computational Engineering & Sciences at UT Austin. She visited the Institute for Advanced Study (IAS)/Princeton for the Theoretical Machine Learning Program from 2019-2020. Before that, she was a research fellow at Simons Institute for the Foundations of Deep Learning Program. Her research aims to develop sample- and computationally efficient machine learning algorithms and bridge the theoretical and empirical gap in machine learning. Qi has received several awards, including the Outstanding Dissertation Award, National Initiative for Modeling and Simulation Graduate Research Fellowship, Computing Innovative Fellowship, and Simons-Berkeley Research Fellowship.<br></font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif"><b>Host: </b><a href="mailto:nati@ttic.edu" target="_blank"><b>Nathan Srebro</b></a></font></div><div><br></div></div><div><br></div><div><br></div></div><div><div dir="ltr"><div dir="ltr"><div><span style="font-family:arial,helvetica,sans-serif;font-size:x-small">Mary C. Marre</span><br></div><div><div><font face="arial, helvetica, sans-serif" size="1">Faculty Administrative Support</font></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6" size="1"><b>Toyota Technological Institute</b></font></i></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6" size="1">6045 S. Kenwood Avenue</font></i></div><div><font size="1"><i><font face="arial, helvetica, sans-serif" color="#3d85c6">Chicago, IL  60637</font></i><br></font></div><div><b><i><a href="mailto:mmarre@ttic.edu" target="_blank"><font face="arial, helvetica, sans-serif" size="1">mmarre@ttic.edu</font></a></i></b></div></div></div></div></div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Feb 7, 2022 at 4:01 PM Mary Marre <<a href="mailto:mmarre@ttic.edu" target="_blank">mmarre@ttic.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div style="font-size:small"><div><p style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;line-height:normal;color:rgb(80,0,80);margin:0px"><font face="arial, sans-serif" color="#000000"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>When:</b>    </font></font><font style="vertical-align:inherit"><font style="vertical-align:inherit">    Tuesday, February 8th at<b> <span style="background-color:rgb(255,255,0)">11:00 am CT</span></b></font></font><br></font></p><p style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;line-height:normal;color:rgb(80,0,80);margin:0px"><font face="arial, sans-serif" color="#000000"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b><span style="background-color:rgb(255,255,0)"><br></span></b></font></font></font></p><div><b style="font-family:arial,sans-serif">Where:       </b><span style="background-color:rgb(255,255,0)"><font color="#500050" style="font-family:arial,sans-serif">Talk will be given </font><font color="#0000ff" style="font-weight:bold" face="verdana, sans-serif"><u>live, in-person</u></font><font color="#0000ff" style="font-family:arial,sans-serif;font-weight:bold"> </font><font color="#000000" style="font-family:arial,sans-serif">at</font></span></div><p class="MsoNormal" style="margin:0in;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif">                   TTIC, 6045 S. Kenwood Avenue</font></p><p class="MsoNormal" style="margin:0in;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif">                   5th Floor, Room 530<b><span style="color:black"> </span></b></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif" color="#000000"> </font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="color:rgb(0,0,0);vertical-align:inherit"><font style="vertical-align:inherit"><b>Where:</b>       </font></font><font style="color:rgb(80,0,80)">Zoom Virtual Talk (</font><a href="https://uchicagogroup.zoom.us/webinar/register/WN_6xGmjB73Qz2znt5Bm7IsDA" target="_blank"><b><font color="#0000ff">register in advance here</font></b></a><font style="color:rgb(80,0,80)">)</font></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><br></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><font color="#000000"><b>Who: </b> </font><font color="#500050">    </font><font color="#000000">    </font></font></font></font>Qi Lei, Princeton University</p></div><div><br></div><div><br></div><div><div><font face="arial, sans-serif"><b>Title: </b>         Theoretical Foundations of Pre-trained Models</font></div><div><font face="arial, sans-serif"><br><b>Abstract:</b>  A pre-trained model refers to any model trained on broad data at scale and can be adapted (e.g., fine-tuned) to a wide range of downstream tasks. The rise of pre-trained models (e.g., BERT, GPT-3, CLIP, Codex, MAE) transforms applications in various domains and aligns with how humans learn. Humans and animals first establish their concepts or impressions from different data domains and data modalities. The learned concepts then help them learn specific tasks with minimal external instructions. Accordingly, we argue that a pre-trained model follows a similar procedure through the lens of deep representation learning. 1) Learn a data representation that filters out irrelevant information from the training tasks; <span>2</span>) Transfer the data representation to downstream tasks with few labeled samples and simple models.</font></div><div><font face="arial, sans-serif"><br>This talk establishes some theoretical understanding for pre-trained models under different settings, ranging from supervised pre-training, meta-learning, and self-supervised learning to domain adaptation or domain generalization. I will discuss the sufficient (and sometimes necessary) conditions for pre-trained models to work based on the statistical relation between training and downstream tasks. The theoretical analyses partly answer how they work, when they fail, guide technical decisions for future work, and inspire new methods in pre-trained models.</font></div><div><font face="arial, sans-serif"><br><b>Bio: </b>Qi Lei is an associate research scholar at the ECE department of Princeton University. She received her Ph.D. from Oden Institute for Computational Engineering & Sciences at UT Austin. She visited the Institute for Advanced Study (IAS)/Princeton for the Theoretical Machine Learning Program from 2019-2020. Before that, she was a research fellow at Simons Institute for the Foundations of Deep Learning Program. Her research aims to develop sample- and computationally efficient machine learning algorithms and bridge the theoretical and empirical gap in machine learning. Qi has received several awards, including the Outstanding Dissertation Award, National Initiative for Modeling and Simulation Graduate Research Fellowship, Computing Innovative Fellowship, and Simons-Berkeley Research Fellowship.<br></font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif"><b>Host: </b><a href="mailto:nati@ttic.edu" target="_blank"><b>Nathan Srebro</b></a></font></div><div><br></div></div><div><br></div><div><br></div><div><br></div></div><div><div dir="ltr"><div dir="ltr"><div><span style="font-family:arial,helvetica,sans-serif;font-size:x-small">Mary C. Marre</span><br></div><div><div><font face="arial, helvetica, sans-serif" size="1">Faculty Administrative Support</font></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6" size="1"><b>Toyota Technological Institute</b></font></i></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6" size="1">6045 S. Kenwood Avenue</font></i></div><div><font size="1"><i><font face="arial, helvetica, sans-serif" color="#3d85c6">Chicago, IL  60637</font></i><br></font></div><div><b><i><a href="mailto:mmarre@ttic.edu" target="_blank"><font face="arial, helvetica, sans-serif" size="1">mmarre@ttic.edu</font></a></i></b></div></div></div></div></div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Feb 3, 2022 at 6:11 PM Mary Marre <<a href="mailto:mmarre@ttic.edu" target="_blank">mmarre@ttic.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div style="font-size:small"><div><p style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;line-height:normal;color:rgb(80,0,80);margin:0px"><font face="arial, sans-serif" color="#000000"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b><span class="gmail_default"></span>When:</b>    </font></font><font style="vertical-align:inherit"><font style="vertical-align:inherit">    Tuesday, February 8th at<b> <span style="background-color:rgb(255,255,0)">11:00 am CT</span></b></font></font><br></font></p><p style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;line-height:normal;color:rgb(80,0,80);margin:0px"><font face="arial, sans-serif" color="#000000"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b><span style="background-color:rgb(255,255,0)"><br></span></b></font></font></font></p><div><font face="arial, sans-serif"><b>Where:       </b><font color="#500050">Talk will be given </font><font color="#0000ff" style="font-weight:bold"><u>live, in-person</u></font><font color="#0000ff" style="font-weight:bold"> </font><font color="#000000">at</font></font></div><p class="MsoNormal" style="margin:0in;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif">                   TTIC, 6045 S. Kenwood Avenue</font></p><p class="MsoNormal" style="margin:0in;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif">                   5th Floor, Room 530<b><span style="color:black"> </span></b></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif" color="#000000"> </font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="color:rgb(0,0,0);vertical-align:inherit"><font style="vertical-align:inherit"><b>Where:</b>      </font></font><font style="color:rgb(80,0,80)">Zoom Virtual Talk (</font><a href="https://uchicagogroup.zoom.us/webinar/register/WN_6xGmjB73Qz2znt5Bm7IsDA" target="_blank"><b><font color="#0000ff">register in advance here</font></b></a><font style="color:rgb(80,0,80)">)</font></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><br></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><font color="#000000"><b>Who: </b> </font><font color="#500050">    </font><font color="#000000">   </font></font></font></font>Qi Lei, Princeton University</p></div><div><br></div><div><br></div><div><div><font face="arial, sans-serif"><b>Title: </b>       Theoretical Foundations of Pre-trained Models</font></div><div><font face="arial, sans-serif"><br><b>Abstract:</b>  A pre-trained model refers to any model trained on broad data at scale and can be adapted (e.g., fine-tuned) to a wide range of downstream tasks. The rise of pre-trained models (e.g., BERT, GPT-3, CLIP, Codex, MAE) transforms applications in various domains and aligns with how humans learn. Humans and animals first establish their concepts or impressions from different data domains and data modalities. The learned concepts then help them learn specific tasks with minimal external instructions. Accordingly, we argue that a pre-trained model follows a similar procedure through the lens of deep representation learning. 1) Learn a data representation that filters out irrelevant information from the training tasks; 2) Transfer the data representation to downstream tasks with few labeled samples and simple models.</font></div><div><font face="arial, sans-serif"><br>This talk establishes some theoretical understanding for pre-trained models under different settings, ranging from supervised pre-training, meta-learning, and self-supervised learning to domain adaptation or domain generalization. I will discuss the sufficient (and sometimes necessary) conditions for pre-trained models to work based on the statistical relation between training and downstream tasks. The theoretical analyses partly answer how they work, when they fail, guide technical decisions for future work, and inspire new methods in pre-trained models.</font></div><div><font face="arial, sans-serif"><br><b>Bio: </b>Qi Lei is an associate research scholar at the ECE department of Princeton University. She received her Ph.D. from Oden Institute for Computational Engineering & Sciences at UT Austin. She visited the Institute for Advanced Study (IAS)/Princeton for the Theoretical Machine Learning Program from 2019-2020. Before that, she was a research fellow at Simons Institute for the Foundations of Deep Learning Program. Her research aims to develop sample- and computationally efficient machine learning algorithms and bridge the theoretical and empirical gap in machine learning. Qi has received several awards, including the Outstanding Dissertation Award, National Initiative for Modeling and Simulation Graduate Research Fellowship, Computing Innovative Fellowship, and Simons-Berkeley Research Fellowship.<br></font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif"><b>Host: </b><a href="mailto:nati@ttic.edu" target="_blank"><b>Nathan Srebro</b></a></font></div><div><br></div><div><br></div></div><div><br></div></div><div><div dir="ltr"><div dir="ltr"><div><span style="font-family:arial,helvetica,sans-serif;font-size:x-small">Mary C. Marre</span><br></div><div><div><font face="arial, helvetica, sans-serif" size="1">Faculty Administrative Support</font></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6" size="1"><b>Toyota Technological Institute</b></font></i></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6" size="1">6045 S. Kenwood Avenue</font></i></div><div><font size="1"><i><font face="arial, helvetica, sans-serif" color="#3d85c6">Chicago, IL  60637</font></i><br></font></div><div><b><i><a href="mailto:mmarre@ttic.edu" target="_blank"><font face="arial, helvetica, sans-serif" size="1">mmarre@ttic.edu</font></a></i></b></div></div></div></div></div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Feb 1, 2022 at 9:47 PM Mary Marre <<a href="mailto:mmarre@ttic.edu" target="_blank">mmarre@ttic.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div style="font-size:small"><p style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;line-height:normal;color:rgb(80,0,80);margin:0px"><font face="arial, sans-serif" color="#000000"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b><span class="gmail_default" style="font-size:small"></span>When:</b>    </font></font><font style="vertical-align:inherit"><font style="vertical-align:inherit">  Tuesday, February 8th at<b> <span style="background-color:rgb(255,255,0)">11:00 am CT</span></b></font></font><br></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif" color="#000000"> </font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="color:rgb(0,0,0);vertical-align:inherit"><font style="vertical-align:inherit"><b>Where:</b>     </font></font><font style="color:rgb(80,0,80)">Zoom Virtual Talk (</font><a href="https://uchicagogroup.zoom.us/webinar/register/WN_6xGmjB73Qz2znt5Bm7IsDA" target="_blank"><b><font color="#0000ff">register in advance here</font></b></a><font style="color:rgb(80,0,80)">)</font></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><br></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><font color="#000000"><b>Who: </b> </font><font color="#500050">    </font><font color="#000000">   </font></font></font></font>Qi Lei, Princeton University</p></div><div style="font-size:small"><br></div><div style="font-size:small"><br></div><div><div><font face="arial, sans-serif"><b>Title: </b>       Theoretical Foundations of Pre-trained Models</font></div><div><font face="arial, sans-serif"><br><b>Abstract:</b>  A pre-trained model refers to any model trained on broad data at scale and can be adapted (e.g., fine-tuned) to a wide range of downstream tasks. The rise of pre-trained models (e.g., BERT, GPT-3, CLIP, Codex, MAE) transforms applications in various domains and aligns with how humans learn. Humans and animals first establish their concepts or impressions from different data domains and data modalities. The learned concepts then help them learn specific tasks with minimal external instructions. Accordingly, we argue that a pre-trained model follows a similar procedure through the lens of deep representation learning. 1) Learn a data representation that filters out irrelevant information from the training tasks; 2) Transfer the data representation to downstream tasks with few labeled samples and simple models.</font></div><div><font face="arial, sans-serif"><br>This talk establishes some theoretical understanding for pre-trained models under different settings, ranging from supervised pre-training, meta-learning, and self-supervised learning to domain adaptation or domain generalization. I will discuss the sufficient (and sometimes necessary) conditions for pre-trained models to work based on the statistical relation between training and downstream tasks. The theoretical analyses partly answer how they work, when they fail, guide technical decisions for future work, and inspire new methods in pre-trained models.</font></div><div><font face="arial, sans-serif"><br><b>Bio: </b>Qi Lei is an associate research scholar at the ECE department of Princeton University. She received her Ph.D. from Oden Institute for Computational Engineering & Sciences at UT Austin. She visited the Institute for Advanced Study (IAS)/Princeton for the Theoretical Machine Learning Program from 2019-2020. Before that, she was a research fellow at Simons Institute for the Foundations of Deep Learning Program. Her research aims to develop sample- and computationally efficient machine learning algorithms and bridge the theoretical and empirical gap in machine learning. Qi has received several awards, including the Outstanding Dissertation Award, National Initiative for Modeling and Simulation Graduate Research Fellowship, Computing Innovative Fellowship, and Simons-Berkeley Research Fellowship.<br></font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif"><b>Host: </b><a href="mailto:nati@ttic.edu" target="_blank"><b>Nathan Srebro</b></a></font></div><div style="font-size:small"><br></div><div style="font-size:small"><br></div><div style="font-size:small"><br></div></div><div><div dir="ltr"><div dir="ltr"><div><span style="font-family:arial,helvetica,sans-serif;font-size:x-small">Mary C. Marre</span><br></div><div><div><font face="arial, helvetica, sans-serif" size="1">Faculty Administrative Support</font></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6" size="1"><b>Toyota Technological Institute</b></font></i></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6" size="1">6045 S. Kenwood Avenue</font></i></div><div><font size="1"><i><font face="arial, helvetica, sans-serif" color="#3d85c6">Chicago, IL  60637</font></i><br></font></div><div><b><i><a href="mailto:mmarre@ttic.edu" target="_blank"><font face="arial, helvetica, sans-serif" size="1">mmarre@ttic.edu</font></a></i></b></div></div></div></div></div></div>
</blockquote></div></div>
</blockquote></div></div>
</blockquote></div></div>
</blockquote></div></div>