[Colloquium] NOW: 2/25 Talks at TTIC: Swabha Swayamdipta, AI2

Mary Marre mmarre at ttic.edu
Thu Feb 25 11:10:09 CST 2021


*When:*      Thursday, February 25th at *11:10 am CT*



*Where:*     Zoom Virtual Talk (register in advance here:
<https://uchicagogroup.zoom.us/webinar/register/WN_tgnv3lKkQImWdKRJ-ZVV_Q>)



*Who:*       Swabha Swayamdipta, AI2



*Title:*     Addressing Biases for Robust, Generalizable AI

*Abstract:* Artificial Intelligence has made unprecedented progress in the
past decade. However, a large gap remains between the decision-making
capabilities of humans and machines. In this talk, I will investigate two
factors to explain why. First, I will discuss the presence of undesirable
biases in datasets, which ultimately hurt generalization. I will then
present bias-mitigation algorithms that boost the ability of AI models to
generalize to unseen data. Second, I will explore task-specific prior
knowledge that aids robust generalization but is often ignored when
training modern AI architectures. Throughout this discussion, I will focus
on language applications and show how certain underlying structures can
provide useful inductive biases for inferring meaning in natural language.
I will conclude with a discussion of how the broader framework of dataset
and model biases will play a critical role in the societal impact of AI
going forward.

*Bio:* Swabha Swayamdipta is a postdoctoral investigator at the Allen
Institute for AI, working with Yejin Choi. Her research focuses on natural
language processing, where she explores dataset and linguistic structural
biases, as well as model interpretability. Swabha received her Ph.D. from
Carnegie Mellon University under the supervision of Noah A. Smith and Chris
Dyer; during most of her Ph.D., she was a visiting student at the
University of Washington. She holds a master's degree from Columbia
University, where she was advised by Owen Rambow. Her research has been
published at leading NLP and machine learning conferences and received an
honorable mention for best paper at ACL 2020.

*Host:* Kevin Gimpel <kgimpel at ttic.edu>

Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue*
*Room 517*
*Chicago, IL  60637*
*p:(773) 834-1757*
*f: (773) 357-6970*
*mmarre at ttic.edu <mmarre at ttic.edu>*



