[Colloquium] REMINDER: 2/8 Talks at TTIC: Yonatan Belinkov, MIT

Mary Marre via Colloquium colloquium at mailman.cs.uchicago.edu
Wed Feb 7 14:08:01 CST 2018


When:     Thursday, February 8th at 11:00am

Where:    TTIC, 6045 S Kenwood Avenue, 5th Floor, Room 526

Who:       Yonatan Belinkov, MIT


Title:       Internal Representations in Deep Learning for Language and
Speech Processing

Abstract: Language technology has become pervasive in everyday life,
powering applications like Apple’s Siri or Google’s Assistant. Neural
networks are a key component in these systems thanks to their ability to
model large amounts of data. Unlike traditional systems, models based
on deep neural networks (a.k.a. deep learning) can be trained in an
end-to-end fashion on input-output pairs, such as a sentence in one
language and its translation in another language, or a speech utterance and
its transcription. The end-to-end training paradigm simplifies the
engineering process while giving the model flexibility to optimize for the
desired task. This, however, often comes at the expense of model
interpretability: understanding the role of different parts of the deep
neural network is difficult, and such models are often perceived as
“black boxes.” In this work, we study deep learning models for two core
language technology tasks: machine translation and speech recognition. We
advocate an approach that attempts to decode the information encoded in
such models while they are being trained. We perform a range of experiments
comparing different modules, layers, and representations in the end-to-end
models. Our analyses illuminate the inner workings of end-to-end machine
translation and speech recognition systems, explain how they capture
different language properties, and suggest potential directions for
improving them. The methodology is also applicable to other tasks in the
language domain and beyond.
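
(For context, one common way such decoding analyses are carried out, sketched
here purely for illustration and not taken from the talk itself, is to freeze
the trained end-to-end model, extract hidden-layer activations, and train a
small "probing" classifier to predict a linguistic property from them. The
Python snippet below shows a minimal version with scikit-learn; the
probe_layer helper and the random stand-in data are hypothetical placeholders.)

    # Minimal probing-classifier sketch (illustrative only; the names and
    # data below are hypothetical, not from the talk or the speaker's code).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def probe_layer(hidden_states, labels):
        """Train a linear classifier to decode a property (e.g. POS tags)
        from per-token hidden states of a frozen end-to-end model.

        hidden_states: (num_tokens, hidden_dim) array of activations
        labels:        (num_tokens,) array of property labels
        Returns held-out accuracy; higher accuracy suggests the layer
        encodes more information about that property.
        """
        X_train, X_test, y_train, y_test = train_test_split(
            hidden_states, labels, test_size=0.2, random_state=0)
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_train, y_train)
        return clf.score(X_test, y_test)

    # Example with random stand-in data: 1000 tokens, 512-dim states, 10 tags.
    states = np.random.randn(1000, 512)
    tags = np.random.randint(0, 10, size=1000)
    print("probe accuracy:", probe_layer(states, tags))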

Bio:
Yonatan Belinkov is a PhD candidate at the MIT Computer Science and
Artificial Intelligence Laboratory (CSAIL), working on language and speech
processing. His recent research interests focus on representations of
language in neural network models. His research has been published at ACL,
EMNLP, TACL, ICLR, and NIPS. He received an SM degree from MIT in 2014 and
prior to that a BSc in Mathematics and an MA in Arabic Studies, both from
Tel Aviv University.

Host: Karen Livescu <klivescu at ttic.edu>



Mary C. Marre
Administrative Assistant
Toyota Technological Institute
6045 S. Kenwood Avenue
Room 504
Chicago, IL 60637
p: (773) 834-1757
f: (773) 357-6970
mmarre at ttic.edu
