[Theory] NOW: 12/7 Thesis Defense: Qingming Tang, TTIC

Mary Marre mmarre at ttic.edu
Wed Dec 7 12:57:54 CST 2022


*When*:     Wednesday, December 7th from *1:00 - 3:00 pm CT*

*Virtually*: attend virtually *here
<https://uchicagogroup.zoom.us/meeting/register/tJYodemsrzwjHdZUCfy-hT3ul111h5LP-JIO>*

*Who*:       Qingming Tang, TTIC

------------------------------

*Thesis Title*:   Representation Learning for Speech Data

*Abstract*: Supervised learning has been the dominant approach to training
deep neural networks for learning good representations. However, one
factor limiting the scaling of supervised learning is the scarcity of
annotated data. Motivated by this challenge, it is natural to explore
methods that learn generic information from large amounts of unlabeled
data. I describe the broad study of representation learning for speech data
I have conducted. Unlike most other work, which focuses on one or a few
learning settings, this thesis studies multiple settings: supervised
learning with auxiliary losses, unsupervised learning, semi-supervised
learning, and multi-view learning. Beyond these different learning
problems, I also explore multiple approaches to representation learning.
For example, I will present our bidirectional contextual encoder for
learning speech representations (introduced in 2018), the use of
informative prior distributions to assist autoencoding-style representation
learning, several techniques for variational representation learning
models, and multi-view masked reconstruction for self-supervised learning.

*Thesis Advisor*: *Karen Livescu* <klivescu at ttic.edu>


Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue, Rm 517*
*Chicago, IL  60637*
*773-834-1757*
*mmarre at ttic.edu <mmarre at ttic.edu>*

