[Theory] REMINDER: 1/18 Talks at TTIC: Sarah Wiegreffe, Georgia Institute of Technology

Mary Marre mmarre at ttic.edu
Tue Jan 18 10:00:00 CST 2022


*When:*      Tuesday, January 18th at *11:00 am CT*



*Where:*     Zoom Virtual Talk (*register in advance here*
<https://uchicagogroup.zoom.us/webinar/register/WN_oI3fDnOfTPKe_jbXvcSOHw>)


*Who:*        Sarah Wiegreffe, Georgia Institute of Technology



*Title:*        Explaining Machine Learning Systems *for* and *with* Natural
Language


*Abstract:* The widespread adoption of deep learning in the field of
natural language processing (NLP) has led to applications with real-world
consequences, such as fact-checking, fake news detection, and medical
decision support. However, the increasing size and nonlinearity of deep
learning systems result in an opacity that hinders efforts by
practitioners and lay users alike to understand them.

In this talk, I will outline my foundational contributions to the field of
explainable NLP, particularly in providing textual explanations meaningful
to human users. I will introduce two definitions of meaning—faithfulness
and human acceptability—and test suites for measuring each. I will then
present lessons learned from analyzing textual explanations produced by
deep learning models, and propose methods to improve their quality.
Finally, I will conclude with future directions for the field.

*Host:*      *Karen Livescu* <klivescu at ttic.edu>



Mary C. Marre
Faculty Administrative Support
Toyota Technological Institute
6045 S. Kenwood Avenue
Chicago, IL 60637
mmarre at ttic.edu

