<div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-size:small"><div class="gmail_default"><p style="color:rgb(80,0,80);font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;line-height:normal;margin:0px"><font face="arial, sans-serif" color="#000000"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>When:</b>    </font></font><font style="vertical-align:inherit"><font style="vertical-align:inherit">    Thursday, March 3rd at<b> <span style="background-color:rgb(255,255,0)">11:00 am CT</span></b></font></font><br></font></p><p style="color:rgb(80,0,80);font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;line-height:normal;margin:0px"><br></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="color:rgb(0,0,0);vertical-align:inherit"><font style="vertical-align:inherit"><b>Where:</b>       </font></font>Zoom Virtual Talk (<b><a href="https://uchicagogroup.zoom.us/webinar/register/WN_LKrFtVcBQeGqoexxPFsJew" target="_blank"><font color="#0000ff">register in advance here</font></a></b>)</font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><br></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;color:rgb(80,0,80);line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><font 
color="#000000"><b>Who: </b> </font><font color="#500050">    </font><font color="#000000">    </font></font></font></font><span style="color:rgb(34,34,34)">Hao Peng, University of Washington</span></p></div><div><div dir="ltr"><div dir="ltr"><br></div><div dir="ltr"><div><b><br></b></div><div><b>Title:</b>          Towards Efficient and Generalizable Natural Language Processing</div><div><br></div></div><div dir="ltr"><font face="arial, sans-serif"><b>Abstract:</b> <span id="gmail-m_4646206754371319907gmail-m_-9204546047999178607gmail-m_-2576008533024500954m_-7660309947316228296gmail-m_-972625177996099932docs-internal-guid-54b731ce-7fff-178d-6bf2-2d95ba592af8"><div style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Large-scale deep learning models have become the foundation of today’s natural language processing (NLP). Despite their recent, tremendous success, they, like their predecessors, struggle with generalization in real-world settings. Moreover, their sheer scale brings new challenges: the growing computational cost raises the barrier to entry to NLP research.</span></div><br><div style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">The first part of the talk will discuss innovations in neural architectures that help address the efficiency concerns of today’s NLP. I will present algorithms that reduce state-of-the-art NLP models’ overhead from quadratic to linear in input length without hurting accuracy. In the second part, I will turn to inductive biases grounded in the inherent structure of natural language sentences, which can help machine learning models generalize. I will discuss the integration of discrete, symbolic structure prediction into modern deep learning. 
</span></div><br><div style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">I will conclude with future directions toward making cutting-edge NLP more efficient and improving its generalization to serve today’s language technology applications and those to come.</span></div><div style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><br></span></div><div style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><b>Bio:</b></span></div><div style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><span id="gmail-m_4646206754371319907gmail-m_-9204546047999178607gmail-m_-2576008533024500954m_-7660309947316228296gmail-m_-972625177996099932docs-internal-guid-94b2366a-7fff-af2d-d1e4-855c93955370" style="white-space:normal"><div style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Hao Peng is a final-year PhD student in Computer Science &amp; Engineering at the University of Washington, advised by Noah A. Smith. His research focuses on building efficient, generalizable, and interpretable machine learning models for natural language processing. 
His research has been presented at top-tier natural language processing and machine learning venues, and recognized with a Google PhD Fellowship and a best paper honorable mention at ACL 2018.</span></div></span></span></div></span></font><br></div><div><b>Host:</b> <a href="mailto:klivescu@ttic.edu" target="_blank"><b>Karen Livescu</b></a></div><div><br style="color:rgb(80,0,80)"></div></div></div></div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><span style="font-family:arial,helvetica,sans-serif;font-size:x-small">Mary C. Marre</span><br></div><div><div><font face="arial, helvetica, sans-serif" size="1">Faculty Administrative Support</font></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6" size="1"><b>Toyota Technological Institute</b></font></i></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6" size="1">6045 S. Kenwood Avenue</font></i></div><div><font size="1"><i><font face="arial, helvetica, sans-serif" color="#3d85c6">Chicago, IL  60637</font></i><br></font></div><div><b><i><a href="mailto:mmarre@ttic.edu" target="_blank"><font face="arial, helvetica, sans-serif" size="1">mmarre@ttic.edu</font></a></i></b></div></div></div></div></div><br></div></div>