[Theory] REMINDER: 3/29 Talks at TTIC: Wei Hu, Princeton University
Mary Marre
mmarre at ttic.edu
Mon Mar 29 10:00:00 CDT 2021
*When:* Monday, March 29th at *11:10 am CT*
*Where:* Zoom Virtual Talk (*register in advance here
<https://uchicagogroup.zoom.us/webinar/register/WN_WE0hdZNgQGmC-3e5vde_zA>*)
*Who:* Wei Hu, Princeton University
*Title*: Opening the Black Box: Towards Theoretical Understanding
of Deep Learning
*Abstract*: Despite the phenomenal empirical successes of deep learning in
many application domains, its underlying mathematical mechanisms remain
poorly understood. Mysteriously, deep neural networks in practice can often
fit training data almost perfectly and generalize remarkably well to unseen
test data, despite highly non-convex optimization landscapes and
significant over-parameterization. A solid theory can not only help us
understand such mysteries, but will also be key to making the practice of
deep learning more principled, reliable, and easy to use.
In this talk, I will present our recent progress on building the
theoretical foundations of deep learning by opening the black box of the
interactions among data, model architecture, and training algorithm. First,
I will examine the effect of making the network deeper and show that
gradient descent on deep linear neural networks induces an implicit bias
towards low-rank solutions, which leads to an improved method for the
classical low-rank matrix completion problem.
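To make the implicit-bias phenomenon concrete, here is a minimal numerical
sketch in the spirit of deep matrix factorization (an illustration only, not
the method from the talk; the matrix size, depth, step size, and
initialization scale below are assumptions chosen for demonstration):

import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 2                                  # matrix size and true rank (assumed)
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 target
mask = rng.random((n, n)) < 0.5               # observe roughly half the entries

# Depth-3 linear network X = W3 @ W2 @ W1 with small, near-zero initialization;
# the small initialization is what drives gradient descent towards low rank.
W1, W2, W3 = (1e-2 * rng.standard_normal((n, n)) for _ in range(3))

lr = 0.01                                     # illustrative; tune if needed
for _ in range(50_000):
    E = mask * (W3 @ W2 @ W1 - M)             # residual on observed entries only
    g3, g2, g1 = E @ (W2 @ W1).T, W3.T @ E @ W1.T, (W3 @ W2).T @ E
    W3 -= lr * g3
    W2 -= lr * g2
    W1 -= lr * g1

# Nothing in the objective penalizes rank, yet the recovered matrix is
# essentially rank 2: all but the top singular values are near zero.
print(np.round(np.linalg.svd(W3 @ W2 @ W1, compute_uv=False)[:5], 3))

Note that with a depth-1 parameterization (a single weight matrix), the
gradient touches only the observed entries, so the unobserved entries stay
near their initialization and the matrix is never completed; the low-rank
bias that enables completion comes from depth.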
Next, turning to nonlinear deep neural networks, I will describe a line of
work on wide neural networks, where a connection to neural tangent kernels
lets us answer questions such as how the training loss is minimized, why
the trained network generalizes well, and why certain components of the
network architecture are useful; we also use these theoretical insights to
design a simple and effective new method for training on noisily labeled
datasets. In closing, I will discuss key questions going forward towards
building practically relevant theoretical foundations for modern machine
learning.
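For readers unfamiliar with neural tangent kernels: for a network f(x; θ),
the empirical NTK is K(x, x') = <∇_θ f(x; θ), ∇_θ f(x'; θ)>, and for
sufficiently wide networks this kernel stays nearly constant during
training, so training to zero squared loss behaves like kernel regression
with K. A small self-contained sketch for a two-layer ReLU network follows
(the width, data, and toy labeling rule are assumptions for illustration,
not constructions from the talk):

import numpy as np

rng = np.random.default_rng(1)
d, m = 5, 5000                   # input dim and width; wide => near-fixed kernel
W = rng.standard_normal((m, d))  # first-layer weights at initialization
a = rng.standard_normal(m)       # second-layer weights at initialization

def ntk(X1, X2):
    # Empirical NTK of f(x) = (1/sqrt(m)) * sum_r a_r * relu(w_r . x),
    # training both layers:
    #   df/dw_r = (1/sqrt(m)) * a_r * 1[w_r . x > 0] * x
    #   df/da_r = (1/sqrt(m)) * relu(w_r . x)
    Z1, Z2 = X1 @ W.T, X2 @ W.T                        # pre-activations
    A1, A2 = (Z1 > 0).astype(float), (Z2 > 0).astype(float)
    K_w = (X1 @ X2.T) * ((A1 * a) @ (A2 * a).T) / m    # first-layer part
    K_a = np.maximum(Z1, 0) @ np.maximum(Z2, 0).T / m  # second-layer part
    return K_w + K_a

Xtr = rng.standard_normal((40, d))
ytr = np.sign(Xtr[:, 0] * Xtr[:, 1])                   # toy labels (assumed)
Xte = rng.standard_normal((10, d))

# In the NTK regime, gradient descent to zero squared loss yields (up to the
# network's small output at initialization) the kernel-regression predictor:
K = ntk(Xtr, Xtr) + 1e-8 * np.eye(len(Xtr))            # tiny ridge for stability
pred = ntk(Xte, Xtr) @ np.linalg.solve(K, ytr)
print(np.round(pred, 2))

Because the predictor is a fixed kernel method, classical kernel analyses
can then be brought to bear on training-loss convergence and generalization,
which is the lever behind the wide-network results mentioned above.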
*Bio*: Wei Hu is a PhD candidate in the Department of Computer Science at
Princeton University, advised by Sanjeev Arora. Previously, he obtained his
bachelor's degree in Computer Science from Tsinghua University. He has also
spent time as a research intern at the research labs of Google and
Microsoft. His research interests lie broadly in the theoretical
foundations of modern machine learning; in particular, his main focus is on
obtaining a solid theoretical understanding of deep learning, as well as using
theoretical insights to design practical and principled machine learning
methods. He is a recipient of the Siebel Scholarship Class of 2021.
*Host:* *Avrim Blum <avrim at ttic.edu>*
Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue*
*Room 517*
*Chicago, IL 60637*
*p: (773) 834-1757*
*f: (773) 357-6970*
*mmarre at ttic.edu <mmarre at ttic.edu>*