[Theory] REMINDER: 3/3 Talks at TTIC: Hao Peng, University of Washington

Mary Marre mmarre at ttic.edu
Thu Mar 3 10:08:52 CST 2022


*When:*        Thursday, March 3rd at *11:00 am CT*


*Where:*       Zoom Virtual Talk (*register in advance here
<https://uchicagogroup.zoom.us/webinar/register/WN_LKrFtVcBQeGqoexxPFsJew>*)


*Who:*          Hao Peng, University of Washington


*Title:*          Towards Efficient and Generalizable Natural Language
Processing

*Abstract:*
Large-scale deep learning models have become the foundation of today’s
natural language processing (NLP). Despite their recent, tremendous
success, they struggle with generalization in real-world settings, just as
their predecessors did. Their sheer scale also brings new challenges: the
rising computational cost raises the barrier to entry for NLP research.

The first part of the talk will discuss innovations in neural architectures
that address the efficiency concerns of today’s NLP. I will present
algorithms that reduce state-of-the-art NLP models’ overhead from quadratic
to linear in input length without hurting accuracy. In the second part, I
will turn to inductive biases grounded in the inherent structure of natural
language sentences, which can help machine learning models generalize, and
discuss integrating discrete, symbolic structure prediction into modern
deep learning.
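
[A note for readers: to make the quadratic-to-linear claim concrete, below
is a minimal NumPy sketch of the kernelized ("linear") attention idea, in
which the attention product is reassociated so the n-by-n score matrix is
never materialized. This is a generic illustration under assumed details
(the feature map phi and the dimensions are illustrative placeholders), not
necessarily the exact algorithm presented in the talk.]

    import numpy as np

    def softmax_attention(Q, K, V):
        # Standard attention: builds an (n, n) score matrix -> O(n^2) in length.
        scores = Q @ K.T / np.sqrt(Q.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
        # Kernelized attention: phi(Q) @ (phi(K)^T V). Reassociating the
        # product avoids the (n, n) matrix; cost is O(n * d^2), linear in n.
        # The feature map phi (ReLU + epsilon) is an illustrative choice.
        Qf, Kf = phi(Q), phi(K)          # (n, d) feature maps
        KV = Kf.T @ V                    # (d, d), computed once for all queries
        Z = Qf @ Kf.sum(axis=0)          # (n,) normalizers
        return (Qf @ KV) / Z[:, None]

    n, d = 128, 16
    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
    out = linear_attention(Q, K, V)      # (128, 16); no (n, n) matrix built

[The kernelized variant computes a different, but trainable, attention than
exact softmax; choosing phi as a random-feature map is one route the
literature takes to approximate the softmax kernel while keeping time and
memory linear in sentence length.]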

I will conclude with future directions for making cutting-edge NLP more
efficient and for improving its generalization, in service of today’s
language technology applications and those to come.

*Bio:*
Hao Peng is a final-year PhD student in Computer Science & Engineering at
the University of Washington, advised by Noah A. Smith. His research
focuses on building efficient, generalizable, and interpretable machine
learning models for natural language processing. His work has been
presented at top-tier natural language processing and machine learning
venues and recognized with a Google PhD Fellowship and a best paper
honorable mention at ACL 2018.

*Host:* *Karen Livescu* <klivescu at ttic.edu>






Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue*
*Chicago, IL  60637*
*mmarre at ttic.edu <mmarre at ttic.edu>*

