<div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-size:small"><div class="gmail_default"><p style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;line-height:normal;margin:0px"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>When:</b> </font></font><font style="vertical-align:inherit"><font style="vertical-align:inherit"> Monday, January 11th at<b> 11:10 am CT</b></font></font><br></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"> </font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>Where:</b> </font></font></font><font color="#000000" style="font-family:arial,sans-serif">Zoom Virtual Talk (</font><b style="font-family:arial,sans-serif"><font color="#0000ff"><a href="https://uchicagogroup.zoom.us/webinar/register/WN_96FR8EkDRxS7o7c4oO4nxw" target="_blank">register in advance here</a></font></b><font color="#000000" style="font-family:arial,sans-serif">)</font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"> </font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font 
face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>Who: </b> </font></font></font>Sitan Chen, MIT</p></div><div class="gmail_default"><br></div><div class="gmail_default"><br></div><div class="gmail_default"><div><b>Title: </b>Learning When Gradient Descent Fails <font face="arial, sans-serif"><br></font><br><b>Abstract: </b>What are the most powerful algorithms for taking labeled examples and producing classifiers with good out-of-sample performance? Stochastic gradient descent is the practitioner's tool of choice, and an explosion of work in recent years has sought theoretical justification for the striking empirical successes of this approach. Yet one can also ask an orthogonal question: are there natural learning problems where one can provably do even better?</div><br>In this talk, I will first give an overview of my work on supervised learning in the presence of corruptions, focusing on the classic problem of learning a halfspace under Massart noise. In this setting, an adversary can arbitrarily corrupt a randomly chosen fraction of the labels, and minimizing any convex surrogate loss provably fails. We give a simple and practical algorithm, which is also the first proper learning algorithm for this problem that works without any assumptions on the input distribution. Time permitting, I will mention some of our related results on generalized linear models and contextual bandits.<br><br>In the second half, I will present progress on the well-studied problem of learning a neural network over Gaussian space: given normally distributed examples labeled by an unknown teacher network, output a network with high test accuracy. All previous work on this problem pertains to depth-two networks and makes assumptions about the weights. We give the first algorithm that works for arbitrary depths, makes no such assumptions, and runs in time polynomial in the dimension for any bounded-size network. 
Notably, a large class of algorithms, including gradient descent on an overparametrized student network, provably cannot achieve such a guarantee.</div><div class="gmail_default"><br>Based on joint works with Adam Klivans, Frederic Koehler, Raghu Meka, Ankur Moitra, and Morris Yau.<div><br></div><div><div dir="ltr"><div><b>Bio</b>: <span style="font-family:arial,sans-serif;color:black;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">Sitan Chen is a fifth-year graduate student in theoretical computer science at MIT, advised by Ankur Moitra</span><span style="font-family:arial,sans-serif;color:black;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">. His research focuses on designing algorithms with provable guarantees for fundamental learning problems, with an emphasis on robustness, multi-index regression, and mixture models. He has been supported by a Paul and Daisy Soros Fellowship and an MIT Presidential Fellowship.</span></div><div><span style="font-family:arial,sans-serif;color:black;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><br></span></div></div></div><div><b>Host: </b><a href="mailto:madhurt@ttic.edu" target="_blank">Madhur Tulsiani</a></div><div><br></div><div><br></div><div><br></div></div></div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><font face="arial, helvetica, sans-serif">Mary C. 
Marre</font><div><font face="arial, helvetica, sans-serif">Faculty Administrative Support</font></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6"><b>Toyota Technological Institute</b></font></i></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6">6045 S. Kenwood Avenue</font></i></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6">Room 517</font></i></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6">Chicago, IL 60637</font></i></div><div><i><font face="arial, helvetica, sans-serif">p:(773) 834-1757</font></i></div><div><i><font face="arial, helvetica, sans-serif">f: (773) 357-6970</font></i></div><div><b><i><a href="mailto:mmarre@ttic.edu" target="_blank"><font face="arial, helvetica, sans-serif">mmarre@ttic.edu</font></a></i></b></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div><br></div></div>