[Colloquium] NOW: 2/13 Talks at TTIC: Ariel Holtzman, University of Washington

Mary Marre mmarre at ttic.edu
Mon Feb 13 11:30:10 CST 2023


*When:*        Monday, February 13, 2023 at 11:30 am CT


*Where:*       Talk will be given *live, in-person* at

                   TTIC, 6045 S. Kenwood Avenue

                   5th Floor, Room 530


*Virtually:*   via Panopto (*livestream*
<https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=2f4d19c4-1559-40aa-aed9-afa101738b02>)


*Who:*         Ariel Holtzman, University of Washington


------------------------------

*Title:* Controlling Large Language Models: Generating (Useful) Text from
Models We Don’t Fully Understand

*Abstract:* Generative language models have recently exploded in
popularity, with services such as ChatGPT deployed to millions of users.
These neural models are fascinating, useful, and incredibly mysterious:
rather than designing what we want them to do, we nudge them in the right
direction and must discover what they are capable of. But how can we rely
on such inscrutable systems?

This talk will describe a number of key characteristics we want from
generative models of text, such as coherence and correctness, and show how
we can design algorithms to more reliably generate text with these
properties. We will also highlight some of the challenges of using such
models, including the need to discover and name new and often unexpected
emergent behavior. Finally, we will discuss the implications this has for
the grand challenge of understanding models at a level where we can safely
control their behavior.

*Bio:* Ari Holtzman is a PhD student at the University of Washington. His
research has focused broadly on generative models of text: how we can use
them and how we can understand them better. His research interests have
spanned everything from dialogue, including winning the first Amazon Alexa
Prize in 2017, to fundamental research on text generation, such as
proposing Nucleus Sampling, a decoding algorithm used broadly both in
deployed systems, such as the GPT-3 API, and in academic research. Ari
completed an interdisciplinary degree at NYU combining Computer Science
and the Philosophy of Language.
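
For readers unfamiliar with it, nucleus (or "top-p") sampling draws the
next token from the smallest set of candidate tokens whose cumulative
probability reaches a threshold p, renormalizing within that set; this
truncates the unreliable low-probability tail of the model's distribution
while preserving diversity. A minimal illustrative sketch in Python
follows (this is not the speaker's implementation, and the function and
variable names are ours):

import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    # Draw one token index from the smallest set of tokens whose
    # cumulative probability mass reaches p, renormalized over that set.
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(probs)[::-1]              # token indices, most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # size of the smallest "nucleus"
    nucleus = order[:cutoff]
    renormalized = probs[nucleus] / probs[nucleus].sum()
    return rng.choice(nucleus, p=renormalized)

# Toy example: a 5-token vocabulary distribution.
probs = np.array([0.4, 0.3, 0.2, 0.07, 0.03])
token = nucleus_sample(probs, p=0.8)  # samples from the top 3 tokens only

Raising p toward 1 recovers sampling from the full distribution, while
lowering it approaches greedy decoding.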

*Host:* Karen Livescu <klivescu at ttic.edu>


Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue, Rm 517*
*Chicago, IL  60637*
*773-834-1757*
*mmarre at ttic.edu <mmarre at ttic.edu>*

