<div dir="ltr"><div class="gmail_default" style="font-family:georgia,serif;font-size:small"><font color="#000000" face="georgia, serif"><b>When:</b>         Monday, March 27th<b> at <span style="background-color:rgb(255,255,0)">11:30AM CT  </span></b></font><div dir="ltr" style="font-family:Arial,Helvetica,sans-serif"><p class="MsoNormal" style="margin:0in;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font color="#000000" face="georgia, serif"> <br><b>Where:</b><b>  </b>     Talk will be given<span style="background-color:rgb(255,255,0)"> </span><span style="background-color:rgb(255,255,0)"><font style="font-weight:bold"><u>live, in-person</u></font><font style="font-weight:bold"> </font></span>at</font></p><p class="MsoNormal" style="margin:0in;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font color="#000000" face="georgia, serif">                       TTIC, 6045 S. Kenwood Avenue</font></p><p class="MsoNormal" style="margin:0in;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font color="#000000" face="georgia, serif">                       5th Floor, Room 530<b>              </b>   </font></p><p class="MsoNormal" style="margin:0in;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font color="#000000" face="georgia, serif"><br></font></p><p class="MsoNormal" style="margin:0in;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="georgia, serif" color="#000000"><b><span class="gmail_default"></span>Virtually:</b>     via Panopto (<a href="https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=54653fb2-45ec-4212-8d8b-af790078a0cf" target="_blank">Livestream</a>)</font></p><p class="MsoNormal" style="margin:0in;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="georgia, serif" color="#000000"><br></font></p><p class="MsoNormal" style="margin:0in;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="georgia, serif"><font color="#000000"><b>Who:          </b>Ohad Shamir, </font>Weizmann Institute</font></p><p class="MsoNormal" style="margin:0in;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="georgia, serif" color="#000000"><b><br></b></font></p><p class="MsoNormal" style="margin:0in;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="georgia, serif" color="#000000"><b>Title:</b>           Implicit bias in machine learning</font></p><p class="MsoNormal" 
style="margin:0in;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="georgia, serif" color="#000000"><br></font></p><font face="georgia, serif" color="#000000"><b>Abstract:  </b> Most practical algorithms for supervised machine learning boil down to optimizing the average performance over a training dataset. However, it is increasingly recognized that although the optimization objective is the same, the manner in which it is optimized plays a decisive role in the properties of the resulting predictor. For example, when training large neural networks, there are generally many weight combinations that will perfectly fit the training data. However, gradient-based training methods somehow tend to reach those which, for example, do not overfit; are brittle to adversarially crafted examples; or have other interesting properties. In this talk, I'll describe several recent theoretical and empirical results related to this question.</font></div></div><div><br></div><span>-- </span><br><div dir="ltr" data-smartmail="gmail_signature"><div dir="ltr"><b style="background-color:rgb(255,255,255)"><font color="#3d85c6">Brandie Jones </font></b><div><div><div><font color="#3d85c6"><b><i>Executive </i></b></font><b style="color:rgb(61,133,198)"><i>Administrative Assistant</i></b></div></div><div><span style="background-color:rgb(255,255,255)"><font color="#3d85c6">Toyota Technological Institute</font></span></div><div><span style="background-color:rgb(255,255,255)"><font color="#3d85c6">6045 S. Kenwood Avenue</font></span></div><div><span style="background-color:rgb(255,255,255)"><font color="#3d85c6">Chicago, IL  60637</font></span></div></div><div><span style="background-color:rgb(255,255,255)"><font color="#3d85c6"><a href="http://www.ttic.edu" target="_blank">www.ttic.edu</a> </font></span></div><div><font color="#3d85c6"><div style="background-color:rgb(238,238,238)"><div style="color:rgb(32,33,36)"><span style="color:rgb(61,133,198)">Working Remotely on Tuesdays</span></div></div></font></div></div></div></div>