[Theory] REMINDER: 2/19 Talks at TTIC: Gengshan Yang, Meta Reality Labs
Mary Marre
mmarre at ttic.edu
Sun Feb 18 23:18:09 CST 2024
*When:* Monday, February 19, 2024 at *10:00 am CT*
*Where:* Talk will be given *live, in-person* at
TTIC, 6045 S. Kenwood Avenue
5th Floor, Room 530
*Virtually:* via Panopto (*livestream*
<https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=f8671fe5-c78a-4bce-8f86-b1150046c417>)
* *limited access: see info below*
*Who:* Gengshan Yang, Meta Reality Labs
------------------------------
*Title*: Towards 4D Reconstruction in the Wild
*Abstract*: With the advances in VR/AR hardware, we are entering an era of
virtual presence. To populate the virtual world, one solution is to build
digital copies of the real world in 4D (3D + time). However, existing methods
for 4D reconstruction often require specialized sensors or body templates,
making them less applicable to the diverse objects and scenes one may see
in everyday life. In light of this, our goal is to reconstruct 4D
structures from videos in the wild. Although the problem is challenging due
to its under-constrained nature, recent advances in differentiable graphics
and data-driven vision priors allow us to approach it in an
analysis-by-synthesis framework. Guided by generic vision, motion, and
physics priors, our method searches for 4D structures that are faithful to
the video inputs via gradient-based optimization. Based on this framework, we
present methods to reconstruct deformable objects and their surrounding
scenes in 4D from in-the-wild video footage, which can be transferred to VR
and robot platforms.
*Bio*: Gengshan Yang is a research scientist at Meta's Reality Labs in
Pittsburgh. He received his PhD in Robotics from Carnegie Mellon
University, advised by Prof. Deva Ramanan. He is also a recipient of the
2021 Qualcomm Innovation Fellowship. His research is focused on 3D computer
vision, particularly on inferring structures (e.g., 3D, motion,
segmentation, physics) from videos.
*Host:* *David McAllester* <mcallester at ttic.edu>
*Access to this livestream is limited to TTIC / UChicago (click the Panopto
link and sign in to your UChicago account with your CNetID).*
Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue, Rm 517*
*Chicago, IL 60637*
*773-834-1757*
*mmarre at ttic.edu*