<div dir="ltr">
<p dir="ltr"><b>When:</b>    Wednesday, December 4th, 2019 at 11:00am</p>
<p dir="ltr"><b>Where:</b>   TTIC, 6045 S Kenwood Avenue, 5th Floor, Room 526</p>
<p dir="ltr"><b>Who:</b>     Surbhi Goel, University of Texas</p>
<p dir="ltr"><b>Title:</b>   Exploring Surrogate Losses for Learning Neural Networks</p>
<p dir="ltr"><b>Abstract:</b> Developing provably efficient algorithms for learning commonly used neural network architectures continues to be a core challenge in machine learning. The underlying difficulty arises from the highly non-convex nature of the optimization problems posed by neural networks. In this talk, I will discuss the power of convex surrogate losses for tackling this non-convexity. I will focus on the setting of ReLU regression and show how a convex surrogate yields approximate guarantees in the challenging agnostic model. I will further show how these techniques give positive results for simple convolutional and fully connected architectures.</p>
<p dir="ltr"><b>Host:</b>    <a href="mailto:nati@ttic.edu" target="_blank">Nati Srebro</a></p>
<div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature">
<p>Thank you,</p>
<p>Amanda Kolstad<br>Executive Assistant<br><a href="http://www.ttic.edu" target="_blank">Toyota Technological Institute at Chicago</a><br>6045 S. Kenwood Avenue<br>Chicago, IL 60637</p>
</div>
</div>
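To give a flavor of the idea in the abstract: one standard convex surrogate for ReLU regression is the "matching" loss ℓ(w; x, y) = ∫₀^⟨w,x⟩ relu(z) dz − y⟨w,x⟩, which is convex in w because relu is nondecreasing, and whose gradient is (relu(⟨w,x⟩) − y)x. The sketch below runs gradient descent on this surrogate on synthetic realizable data; it is an illustrative example of the general technique, not the specific algorithm or guarantees presented in the talk.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def surrogate_grad(w, X, y):
    # Gradient of the convex matching surrogate for ReLU regression:
    #   ell(w; x, y) = integral_0^{<w,x>} relu(z) dz - y * <w,x>,
    # so grad ell = (relu(<w,x>) - y) * x. Convexity holds because
    # relu is a nondecreasing activation.
    return (relu(X @ w) - y) @ X / len(y)

def fit(X, y, lr=0.1, steps=500):
    # Plain gradient descent on the convex surrogate (illustrative sketch).
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * surrogate_grad(w, X, y)
    return w

# Synthetic realizable data: y = relu(<w_true, x>), no label noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = relu(X @ w_true)

w_hat = fit(X, y)
mse = np.mean((relu(X @ w_hat) - y) ** 2)
print(mse)
```

Because the surrogate is convex and its minimizer here coincides with `w_true`, vanilla gradient descent suffices; the agnostic-noise analysis discussed in the talk is the substantially harder part.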