[Colloquium] REMINDER: 2/13 Talks at TTIC: Dinesh Jayaraman, UT Austin

Mary Marre via Colloquium colloquium at mailman.cs.uchicago.edu
Sun Feb 12 21:45:24 CST 2017


When:     Monday, February 13th at 11:00 am

Where:    TTIC, 6045 S Kenwood Avenue, 5th Floor, Room 526

Who:       Dinesh Jayaraman, UT Austin


Talk title: Embodied learning for visual recognition

Abstract: Visual recognition methods have made great strides in recent
years by exploiting large manually curated and labeled datasets specialized
to various tasks. My research asks: could we do better than this
painstaking, manually supervised approach? In particular, could
embodied visual agents teach themselves through interaction with and
experimentation in their environments?

In this talk, I will present approaches that we have developed to model the
learning and performance of visual tasks by agents that have the ability to
act and move in their worlds. I will showcase results that indicate that
computer vision systems could benefit greatly from action and motion in the
world, with continuous self-acquired feedback. In particular, it is
possible for embodied visual agents to learn generic image representations
from unlabeled video, improve scene and object categorization performance
through intelligent exploration, and even learn to direct their cameras to
be effective videographers.


Host: Greg Shakhnarovich <greg at ttic.edu>


Mary C. Marre
Administrative Assistant
Toyota Technological Institute
6045 S. Kenwood Avenue
Room 504
Chicago, IL 60637
p: (773) 834-1757
f: (773) 357-6970
mmarre at ttic.edu
