[Theory] REMINDER: 1/14 Talks at TTIC: Wenhu Chen, UCSB

Mary Marre mmarre at ttic.edu
Wed Jan 13 14:00:00 CST 2021


*When:*      Thursday, January 14th at *11:10 am CT*



*Where:*     Zoom Virtual Talk (*register in advance here
<https://uchicagogroup.zoom.us/webinar/register/WN_6aa2ERkDRa2FZf-er2NklA>*)



*Who:*        Wenhu Chen, UCSB


*Title:*  Knowledge-Grounded Natural Language Processing

*Abstract:* One of the ultimate goals of artificial intelligence is to
build a knowledgeable virtual assistant that can understand natural
language queries and search over the Web to provide information to humans.
Building such a virtual assistant requires models capable of two things:
1) reasoning over massive Web knowledge to derive supporting facts, and
2) grounding on those supporting facts to generate natural language.

In the first part of the talk, I will discuss how to build neural models
that can automatically deduce logical rules to reason over structured Web
knowledge (knowledge graphs). However, because Web knowledge is highly
heterogeneous and distributed in both structured and unstructured forms,
using only structured knowledge can lead to severe coverage issues. To
address these issues, I will further demonstrate how to build a unified
model that reasons over both structured and unstructured Web knowledge
and integrates their information to derive the supporting facts. In the
second part of the talk, I will describe how to use large-scale Web data
to pre-train knowledge-grounded text generation models, which generalize
well across domains and produce natural language that is highly
consistent with the given supporting facts.

Finally, I will conclude my talk by proposing future directions for
knowledge-grounded natural language processing.

*Bio:* Wenhu Chen is a fourth-year Ph.D. student at the University of
California, Santa Barbara, advised by William Yang Wang and Xifeng Yan.
His research interests cover natural language processing, deep learning,
and knowledge representation. Specifically, he aims to develop models
that can ground in and reason over external world knowledge to understand
human language and communicate with humans. He is also interested in
multi-modal problems such as visual question answering and image/video
captioning. He has interned at several companies, including Google
Research, Microsoft AI & Research, Samsung Research America, and eBay
Research. He publishes in and serves on the program committees of ACL,
NAACL, EMNLP, ICLR, NeurIPS, and CVPR. He was recognized as a top
reviewer at NeurIPS 2019 and received a best student paper honorable
mention at WACV 2021.

*Host:*      Kevin Gimpel <kgimpel at ttic.edu>


Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue*
*Room 517*
*Chicago, IL  60637*
*p: (773) 834-1757*
*f: (773) 357-6970*
*mmarre at ttic.edu <mmarre at ttic.edu>*

