<div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-size:small"><div dir="ltr"><div><b style="font-family:verdana,sans-serif;font-size:large;color:rgb(80,0,80);background-color:rgb(207,226,243)"><span class="gmail-il">Thesis</span> Defense: Mingda Chen, TTIC</b><br></div></div><div dir="ltr"><div><div style="color:rgb(80,0,80);font-family:arial,helvetica,sans-serif"><br></div><p class="MsoNormal" style="margin:0in;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><b><span style="color:black">When:  </span></b><span style="color:black">     Friday, July 8th at <b style="background-color:rgb(255,255,0)">12:00 - 2:00 pm CT</b></span></font></p><p class="MsoNormal" style="margin:0in;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"> </font></p><p class="MsoNormal" style="margin:0in;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><span style="color:black"><b>Virtually: </b>  </span><b><u><span style="color:blue"><a href="https://uchicago.zoom.us/j/98474647709?pwd=SnFHdzh0VFkvT1k2UndLdkJMTmpadz09" target="_blank">attend virtually here</a></span></u></b></font></p><p class="MsoNormal" 
style="margin:0in;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"> </font></p><p class="MsoNormal" style="margin:0in;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><b><span style="color:black">Who: </span></b><span style="color:black">       <span class="gmail_default"> Mingda Chen</span>, TTIC</span></font><span style="font-family:Arial,sans-serif;font-size:12pt"></span></p></div><span style="color:rgb(80,0,80)"><div><font color="#000000"><br></font></div><div><font color="#000000"><b><br></b></font></div><div><div style="color:rgb(34,34,34)"><b><span class="gmail-il">Thesis</span> title: </b>Leveraging Natural Supervision for Language Representation Learning and Generation</div><div style="color:rgb(34,34,34)"><br></div><div style="color:rgb(34,34,34)"><b>Abstract: </b>Recent breakthroughs in Natural Language Processing (NLP) have been driven by language models trained on massive amounts of plain text. While these models are powerful, how best to derive supervision from textual resources remains an open question. For example, language model pretraining often neglects the rich, freely available structures in textual data. In this <span class="gmail-il">thesis</span>, we describe three lines of work that seek to improve the training and evaluation of neural models using naturally occurring supervision.</div><div style="color:rgb(34,34,34)"><br></div><div style="color:rgb(34,34,34)">We first investigate self-supervised training losses to enhance the performance of pretrained language models on various NLP tasks. Specifically, we alter the sentence prediction loss to make it better suited to other pretraining losses and more challenging to solve. 
We design an intermediate finetuning step that uses self-supervised training to improve models' ability to generalize across tasks.</div><div style="color:rgb(34,34,34)"><br></div><div style="color:rgb(34,34,34)">Then we describe methods that leverage the structures in Wikipedia and in paraphrases. In particular, we propose training losses that exploit hyperlinks, article structures, and article category graphs to learn entity-, discourse-, and entailment-related knowledge. We also propose a framework that uses paraphrase pairs to disentangle semantics and syntax in sentence representations, and we extend the framework to a novel generation task that controls the syntax of the output text with a sentential exemplar.</div><div style="color:rgb(34,34,34)"><br></div><div style="color:rgb(34,34,34)">Lastly, we discuss our work on tailoring textual resources to build challenging evaluation tasks. We introduce three datasets that define novel tasks over fan-contributed websites: a long-form data-to-text generation dataset, a screenplay summarization dataset, and a long-form story generation dataset. These datasets have unique characteristics that pose challenges for future work in their respective task settings. 
</div><div style="color:rgb(34,34,34)"><b><br></b></div><div style="color:rgb(34,34,34)"><b><span class="gmail-il">Thesis</span> committee:</b> Karen Livescu, Sam Wiseman, Luke Zettlemoyer</div><br><b style="font-family:arial,sans-serif"><font color="#000000"><span class="gmail-il">Thesis</span> Advisor:</font></b><font color="#500050" style="font-family:arial,sans-serif"> </font><a href="mailto:kgimpel@ttic.edu" target="_blank" style="font-family:arial,sans-serif"><b><font color="#0000ff">Kevin Gimpel</font></b></a></div><div><br></div><div><br></div></span></div></div></div><div dir="ltr"><div style="font-size:small"><div dir="ltr"><div><br></div></div><div dir="ltr"><span style="color:rgb(80,0,80)"><div><br></div></span></div></div><div><div dir="ltr"><div dir="ltr"><div><span style="font-family:arial,helvetica,sans-serif;font-size:x-small">Mary C. Marre</span><br></div><div><div><font face="arial, helvetica, sans-serif" size="1">Faculty Administrative Support</font></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6" size="1"><b>Toyota Technological Institute</b></font></i></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6" size="1">6045 S. 
Kenwood Avenue</font></i></div><div><font size="1"><i><font face="arial, helvetica, sans-serif" color="#3d85c6">Chicago, IL  60637</font></i><br></font></div><div><b><i><a href="mailto:mmarre@ttic.edu" target="_blank"><font face="arial, helvetica, sans-serif" size="1">mmarre@ttic.edu</font></a></i></b></div></div></div></div></div><br></div>
</div>