[Colloquium] REMINDER: 11/2 TTIC Colloquium: Jeannette Bohg, Stanford University

Mary Marre mmarre at ttic.edu
Mon Nov 2 10:06:12 CST 2020


*When:*      Monday, November 2nd, at *11:10 am* CT



*Where:*     Zoom Virtual Talk (*register in advance here
<https://uchicagogroup.zoom.us/webinar/register/WN_mTNNuneYQQuXfXTVaI02og>*)



*Who:*        Jeannette Bohg, Stanford University



*Title:* Scaffolding, Abstraction and Learning from Demonstration - A Route
to Robot Learning


*Abstract:* Learning contact-rich robotic manipulation skills is a
challenging problem due to the high dimensionality of the state and action
spaces, as well as uncertainty from noisy sensors and inaccurate motor
control. In this talk, I want to show how principles of human learning can
be transferred to robots to combat these factors and achieve more robust
manipulation in a variety of tasks. The first principle is scaffolding.
Humans actively exploit contact constraints in the environment, and by
adopting a similar strategy, robots can also achieve more robust
manipulation. I will present an approach that enables a robot to
autonomously modify its environment and thereby discover how to ease
manipulation skill learning. Specifically, we provide the robot with
fixtures that it can freely place within the environment. These fixtures
impose hard constraints that limit the outcomes of robot actions; in doing
so, they funnel uncertainty from perception and motor control and scaffold
manipulation skill learning. We show that this form of scaffolding
dramatically speeds up manipulation skill learning.
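
To make the "funneling" intuition concrete, here is a toy numerical sketch
(my own illustration, not the speaker's method): a wall-like fixture placed
at the target acts as a hard stop that absorbs overshoot from noisy motor
control, so the spread of outcomes shrinks.

```python
# Toy illustration (not the talk's actual system): a fixture as a hard stop.
# We push an object toward a target at x = 1.0 with noisy motor control;
# a wall fixture at x = 1.0 clips any overshoot back to the target.
import numpy as np

rng = np.random.default_rng(0)
target = 1.0
outcomes = rng.normal(loc=target, scale=0.2, size=10_000)  # noisy pushes

free_space = outcomes                          # no fixture: full noise remains
with_fixture = np.minimum(outcomes, target)    # wall absorbs all overshoot

print(f"std without fixture:  {free_space.std():.3f}")
print(f"std with fixture:     {with_fixture.std():.3f}")
print(f"mean |error| without: {np.abs(free_space - target).mean():.3f}")
print(f"mean |error| with:    {np.abs(with_fixture - target).mean():.3f}")
```

The fixture collapses the overshooting half of the noise distribution onto
the target, which is one simple sense in which a hard constraint can funnel
uncertainty from motor control.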



The second principle is abstraction, in this case of manipulation skills.
Humans have gradually developed language, mastered complex motor skills,
and created and utilized sophisticated tools. The act of conceptualization
is fundamental to these abilities because it allows humans to mentally
represent, summarize, and abstract diverse knowledge and skills. By means
of abstraction, concepts learned from a limited number of examples can be
extended to a potentially infinite set of new and unanticipated situations.
Abstract concepts can also be more easily taught to others by
demonstration, the third principle. I will present work that gives robots
the ability to acquire a variety of manipulation concepts that act as
mental representations of verbs in natural language instructions. We
propose learning from human demonstrations of manipulation actions recorded
in large-scale video datasets annotated with natural language instructions.
In extensive simulation experiments, we show that the policy learned in
this way can perform a large percentage of the 78 different manipulation
tasks on which it was trained, that it generalizes over variations of the
environment, and that it can generalize to novel but similar instructions.
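
As a rough sketch of what a language-conditioned policy of this kind might
look like (all names and dimensions here are hypothetical, not the
architecture from the talk), one can pool an instruction's word embeddings
into a single vector and condition the action prediction on it, training by
behavior cloning against demonstrated actions:

```python
# Hypothetical sketch, not the speaker's model: an instruction-conditioned
# policy trained by behavior cloning on demonstrated actions.
import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, obs_dim=32, act_dim=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # word embeddings
        self.policy = nn.Sequential(
            nn.Linear(embed_dim + obs_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, instruction_tokens, observation):
        # Mean-pool token embeddings into one instruction vector, then
        # concatenate with the observation to predict an action.
        lang = self.embed(instruction_tokens).mean(dim=1)
        return self.policy(torch.cat([lang, observation], dim=-1))

# One behavior-cloning step on a stand-in demonstration batch:
policy = LanguageConditionedPolicy()
tokens = torch.randint(0, 1000, (8, 5))  # 8 instructions, 5 tokens each
obs = torch.randn(8, 32)                 # stand-in observations
expert_action = torch.randn(8, 7)        # stand-in demonstrated actions
loss = nn.functional.mse_loss(policy(tokens, obs), expert_action)
loss.backward()
```

Because the instruction embedding is a learned, continuous representation,
a policy of this general shape can in principle respond sensibly to novel
but similar instructions, which is the kind of generalization the abstract
reports.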


*Bio:* Jeannette Bohg is an Assistant Professor of Computer Science at
Stanford University. She was a group leader at the Autonomous Motion
Department (AMD) of the MPI for Intelligent Systems until September 2017.
Before joining AMD in January 2012, she was a PhD student at the Division
of Robotics, Perception and Learning (RPL) at KTH in Stockholm, where her
thesis proposed novel methods for multi-modal scene understanding for
robotic grasping. She also studied at Chalmers in Gothenburg and at the
Technical University of Dresden, where she received her Master in Art and
Technology and her Diploma in Computer Science, respectively. Her research
focuses on perception and learning for autonomous robotic manipulation and
grasping. She is specifically interested in developing methods that are
goal-directed, real-time, and multi-modal, so that they can provide
meaningful feedback for execution and learning. Jeannette Bohg has received
several awards, most notably the 2019 IEEE International Conference on
Robotics and Automation (ICRA) Best Paper Award, the 2019 IEEE Robotics and
Automation Society Early Career Award, and the 2017 IEEE Robotics and
Automation Letters (RA-L) Best Paper Award.




*Host:* Matthew Walter <mwalter at ttic.edu>

For more information on the *colloquium* series or to subscribe to the
mailing list, please see http://www.ttic.edu/colloquium.php





Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue*
*Room 517*
*Chicago, IL  60637*
*p: (773) 834-1757*
*f: (773) 357-6970*
*mmarre at ttic.edu <mmarre at ttic.edu>*

