[Theory] REMINDER: 6/11 Thesis Defense: Mohammadreza Mostajabi, TTIC
Mary Marre
mmarre at ttic.edu
Mon Jun 10 15:30:29 CDT 2019
*When:* Tuesday, June 11th at 2:00 pm
*Where:* TTIC, 6045 S. Kenwood Avenue, 5th Floor, Room 526
*Who:* Mohammadreza Mostajabi, TTIC
*Title:* Learning Rich Representations for Structured Prediction Tasks
*Abstract:* We describe an approach to learning rich representations for
images that enable simple and effective predictors in a range of vision
tasks involving spatially structured maps. Examples of tasks where one can
leverage our approach include semantic segmentation, depth estimation, and
image colorization. Our key idea is to map small image elements (pixels or
superpixels) to feature representations extracted from a sequence of nested
regions of increasing extent. These regions are obtained by "zooming out"
from the superpixel all the way to scene-level resolution, and hence we
call these zoom-out features. Applied to semantic segmentation and other
structured prediction tasks, our approach exploits statistical structure in
the image and in the label space without setting up explicit structured
prediction mechanisms, and thus avoids complex and expensive inference.
Instead, image elements are classified by a feedforward multilayer network
with skip-layer connections spanning the zoom-out levels.
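As a rough illustration of the zoom-out idea described above, here is a
minimal, hypothetical PyTorch-style sketch: per-pixel features are pooled
from nested levels of context and classified by a simple feedforward head.
The backbone, feature dimensions, and module names are assumptions for
illustration only, not the implementation presented in the thesis.

    # Hypothetical sketch: each pixel is described by features from nested
    # "zoom-out" levels (local -> scene), then classified feedforward.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ZoomOutSketch(nn.Module):
        def __init__(self, backbone, feat_dims, num_classes):
            super().__init__()
            self.backbone = backbone          # assumed to return a list of feature maps
            self.classifier = nn.Sequential(  # simple per-pixel feedforward classifier
                nn.Conv2d(sum(feat_dims), 512, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(512, num_classes, kernel_size=1),
            )

        def forward(self, image):
            h, w = image.shape[-2:]
            maps = self.backbone(image)       # nested zoom-out levels, coarse to fine
            # Upsample every level to full resolution and concatenate, so each
            # pixel sees local, regional, and scene-level context at once.
            upsampled = [F.interpolate(m, size=(h, w), mode='bilinear',
                                       align_corners=False) for m in maps]
            zoomout = torch.cat(upsampled, dim=1)
            return self.classifier(zoomout)   # per-pixel class scores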
We describe extensive experiments showing the effectiveness of our simple
architecture design. When used in conjunction with modern neural
architectures such as ResNet, DenseNet, and NASNet (to which it is
complementary), our approach achieves competitive accuracy on segmentation
benchmarks. Finally, we introduce data-driven regularization functions for
the supervised training of CNNs. Our innovation takes the form of a
regularizer derived by learning an autoencoder over the set of annotations.
This approach further complements our zoom-out representation, leveraging
an improved representation of the label space to inform our extraction of
features from images.
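To make the regularizer idea concrete, the following is a hypothetical
sketch of one way an annotation-trained autoencoder could be used as an
extra penalty during segmentation training. The architecture, loss form,
and the weight value are illustrative assumptions, not the exact method
from the thesis.

    # Hypothetical sketch: an autoencoder is first trained on ground-truth
    # label maps; its reconstruction of the predicted label map is then added
    # to the usual cross-entropy loss as a data-driven regularizer.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LabelAutoencoder(nn.Module):
        def __init__(self, num_classes, hidden=64):
            super().__init__()
            # Assumes input height/width divisible by 4 so shapes round-trip.
            self.encoder = nn.Sequential(
                nn.Conv2d(num_classes, hidden, 3, stride=2, padding=1), nn.ReLU(True),
                nn.Conv2d(hidden, hidden, 3, stride=2, padding=1), nn.ReLU(True),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(True),
                nn.ConvTranspose2d(hidden, num_classes, 4, stride=2, padding=1),
            )

        def forward(self, label_probs):
            return self.decoder(self.encoder(label_probs))

    def regularized_loss(logits, target, label_ae, weight=0.1):
        # Standard per-pixel cross-entropy on the segmentation output ...
        ce = F.cross_entropy(logits, target)
        # ... plus a penalty keeping the predicted label map close to what the
        # annotation-trained autoencoder considers a plausible label map.
        probs = F.softmax(logits, dim=1)
        recon = label_ae(probs)
        reg = F.mse_loss(probs, recon.softmax(dim=1))
        return ce + weight * reg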
*Thesis Advisor:* Greg Shakhnarovich <greg at ttic.edu>
Mary C. Marre
Administrative Assistant
*Toyota Technological Institute*
*6045 S. Kenwood Avenue*
*Room 517*
*Chicago, IL 60637*
*p: (773) 834-1757*
*f: (773) 357-6970*
*mmarre at ttic.edu*