[Theory] NOW: 1/25 TTIC Colloquium: Percy Liang, Stanford University
Mary Marre
mmarre at ttic.edu
Mon Jan 25 11:07:00 CST 2021
*When:* Monday, January 25th at 11:10 am CT
*Where:* Zoom Virtual Talk (*register in advance here
<https://uchicagogroup.zoom.us/webinar/register/WN_4ihYHm5YQbi6dXgdeOD8BQ>*)
*Who:* Percy Liang, Stanford University
*Talk:* Surprises in the Quest for Robust Machine Learning
*Abstract:* Standard machine learning produces models that are accurate on
average but degrade dramatically when the test distribution of interest
deviates from the training distribution. We consider three settings where
this happens: when test inputs are subject to adversarial attacks, when we
are concerned with performance on minority subpopulations, and when the
world simply changes (classic domain shift). Our aim is to produce methods
that are provably robust to such deviations. In this talk, I will provide
an overview of the work my group has done on this topic over the last three
years. We have found many surprises in our quest for robustness: for
example, that the "more data" and "bigger models" strategy that works so
well for average accuracy sometimes fails out-of-domain. On the other hand,
we have found that certain tools such as analysis of linear regression and
use of unlabeled data (e.g., robust self-training) have reliably delivered
promising results across a number of different settings.
*Bio:*
Percy Liang is an Associate Professor of Computer Science at Stanford
University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His
research spans machine learning and natural language processing, with the
goal of developing trustworthy agents that can communicate effectively with
people and improve over time through interaction. Specific topics include
question answering, dialogue, program induction, interactive learning, and
reliable machine learning. His awards include the IJCAI Computers and
Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research
Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).
*Host: **David McAllester* <mcallester at ttic.edu>
For more information on the colloquium series or to subscribe to the
mailing list, please see http://www.ttic.edu/colloquium.php
Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue*
*Room 517*
*Chicago, IL 60637*
*p: (773) 834-1757*
*f: (773) 357-6970*
*mmarre at ttic.edu <mmarre at ttic.edu>*