[Colloquium] REMINDER: 10/19 Machine Learning Seminar Series: Sewoong Oh, Univ. of Illinois, Urbana Champaign

Mary Marre via Colloquium colloquium at mailman.cs.uchicago.edu
Thu Oct 18 15:58:01 CDT 2018


*When:         *Friday October 19, 11am-12pm
*Where:        *Room 526, TTIC 6045 S Kenwood Avenue
*Who:           *Sewoong Oh, University of Illinois at Urbana-Champaign

*Title:           *The Power of Multiple Samples in Generative Adversarial
Networks

*Abstract: *We bring the tools from Blackwell’s seminal 1953 result on
comparing two stochastic experiments to shine a new light on a modern
application of great interest: Generative Adversarial Networks (GANs).
Binary hypothesis testing is at the center of training GANs, where a
trained neural network (called a critic) determines whether a given sample
is from the real data or the generated (fake) data. By jointly training the
generator and the critic, the hope is that eventually the trained generator
will generate realistic samples. One of the major challenges in GANs is
known as “mode collapse”: a lack of diversity in the samples generated by
the trained generators. We propose a new training framework in which the
critic is fed multiple samples jointly (which we call packing), as
opposed to each sample separately as in standard GAN training. With
this simple but fundamental departure from existing GANs, experimental
results show that the diversity of the generated samples improves
significantly. We analyze this practical gain by first providing a formal
mathematical definition of mode collapse and then making a fundamental
connection between the idea of packing and the intensity of mode collapse.
Precisely, we show that the packed critic naturally penalizes mode
collapse, thus encouraging generators with less mode collapse. The analyses
critically rely on the operational interpretation of hypothesis testing and
the corresponding data processing inequalities, which lead to sharp
analyses with simple proofs. For this talk, I will assume no prior
background on GANs.
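
The core mechanical step of packing described above — presenting m samples to the critic jointly rather than one at a time — can be sketched in a few lines. This is a minimal, hypothetical illustration (the `pack` helper and shapes are assumptions for the sketch, not the speaker's implementation): a batch of samples is reshaped so that each critic input is the concatenation of m samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def pack(samples, m):
    """Group a batch of samples into packs of m, concatenating features.

    A batch of shape (n*m, d) becomes (n, m*d): each row of the packed
    batch is m samples presented jointly, so the critic can penalize a
    generator whose packs lack diversity (the mode-collapse signal).
    """
    n, d = samples.shape
    assert n % m == 0, "batch size must be divisible by the packing degree"
    return samples.reshape(n // m, m * d)

# Toy illustration: 8 two-dimensional samples, packing degree m = 2.
batch = rng.normal(size=(8, 2))
packed = pack(batch, m=2)
print(packed.shape)  # (4, 4): the critic now sees 2 samples at once
```

In a full training loop, both real and generated batches would be packed the same way before being fed to the (now wider-input) critic; the generator and critic objectives are otherwise unchanged.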

For more information on the machine learning seminar series (MLSS), please
request to join the group at
https://groups.google.com/a/ttic.edu/d/forum/mlss. If you are interested
in presenting in the seminar, please send an email to suriya at ttic.edu.




Mary C. Marre
Administrative Assistant
*Toyota Technological Institute*
*6045 S. Kenwood Avenue*
*Room 517*
*Chicago, IL  60637*
*p: (773) 834-1757*
*f: (773) 357-6970*
*mmarre at ttic.edu <mmarre at ttic.edu>*

