[Colloquium] Reminder: Shen/MS Presentation/Apr 23, 2015

Margaret Jaffey margaret at cs.uchicago.edu
Wed Apr 22 10:49:16 CDT 2015


This is a reminder about Jiajun's MS Presentation tomorrow.

------------------------------------------------------------------------------
Date:  Thursday, April 23, 2015

Time:  10:00 AM

Place:  Ryerson 255

M.S. Candidate:  Jiajun Shen

M.S. Paper Title: HIERARCHICAL STATISTICAL MODEL FOR UNSUPERVISED
REPRESENTATION LEARNING

Abstract:
The idea of learning multiple levels of representations has achieved
significant success, beating benchmarks in areas such as computer
vision and speech recognition. By building deep architectures that
contain multiple representation layers, deep learning methods offer
more flexibility than shallow architectures. They provide systematic
ways to build more complex structures in the higher layers by
combining simpler ones learned in the lower layers. While deep
learning methods have shown promise in a variety of machine learning
tasks, training deep architectures remains computationally intensive
and often requires a large quantity of labeled data as well as
careful fine-tuning. It is therefore desirable to build a
hierarchical framework that takes advantage of large amounts of
easily obtained unlabeled data for representation learning. Moreover,
it would be advantageous if the framework could be trained much
faster and more easily while retaining competitive performance on
classification tasks with limited labeled data. In this work, we
develop a hierarchical statistical model that learns multiple levels
of features representing complex structures from unlabeled data. We
present a novel extension of the part-based framework of Bernstein
and Amit that incorporates an intermediate feature-extraction layer
and a pooling layer that pools binary feature outputs in feature
space. With its ability to detect more complex structures and to
generalize concepts better, we hope the intermediate layer will
narrow the gap between the atomic-parts layer and the object-model
layer. We provide a way to learn a pooling operation that combines
similar filter outputs and produces locally invariant outputs. In
addition, our model retains the ability to incorporate the rotatable
part layer of Ng et al., making the learned representations
explicitly rotatable. Our model adheres to the statistical principle
of likelihood at every layer of the framework, which makes it readily
interpretable and easy to train with the EM algorithm. We apply our
model to the MNIST and MNIST-Rotation datasets and achieve
competitive results on classification tasks with a small amount of
labeled data.
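
The abstract does not give implementation details. As a loose
illustration of the kind of likelihood-based, EM-trained layer over
binary feature outputs it describes, here is a minimal sketch of EM
for a Bernoulli mixture, which groups similar binary filter outputs
into latent components. This is not the presented model; all names,
shapes, and hyperparameters below are assumptions for illustration.

# Hypothetical sketch: EM for a Bernoulli mixture over binary
# feature outputs (not the model from the paper).
import numpy as np

def em_bernoulli_mixture(X, n_components, n_iters=50, seed=0, eps=1e-6):
    """X: (n_samples, n_features) binary array. Returns mixing
    weights, per-component Bernoulli parameters, and soft
    assignments (responsibilities)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_components, 1.0 / n_components)            # mixing weights
    theta = rng.uniform(0.25, 0.75, size=(n_components, d))   # Bernoulli params

    for _ in range(n_iters):
        # E-step: posterior responsibility of each component per sample
        log_lik = (X @ np.log(theta.T + eps)
                   + (1 - X) @ np.log(1 - theta.T + eps)
                   + np.log(pi + eps))                         # (n, K)
        log_lik -= log_lik.max(axis=1, keepdims=True)          # stabilize
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: maximize the expected complete-data log-likelihood
        Nk = resp.sum(axis=0)
        pi = Nk / n
        theta = (resp.T @ X) / (Nk[:, None] + eps)

    return pi, theta, resp

# Toy usage: pool 32 binary "filter outputs" into 4 latent groups.
X = (np.random.default_rng(1).random((200, 32)) > 0.5).astype(float)
pi, theta, resp = em_bernoulli_mixture(X, n_components=4)
pooled = resp.argmax(axis=1)   # a crude invariant summary per sample

Each EM iteration increases the data likelihood, which is the sense
in which such a layer is "readily interpretable" and easy to train;
the paper's actual pooling and part layers are more structured.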

Jiajun's advisor is Prof. Yali Amit.

Log in to the Computer Science Department website for details:
 https://www.cs.uchicago.edu/phd/ms_announcements#jiajun

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Margaret P. Jaffey            margaret at cs.uchicago.edu
Department of Computer Science
Student Support Rep (Ry 156)               (773) 702-6011
The University of Chicago      http://www.cs.uchicago.edu
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

