[Theory] REMINDER: 9/9 Thesis Defense: Sudarshan Babu, TTIC (Room 529)

Mary Marre via Theory theory at mailman.cs.uchicago.edu
Thu Sep 5 13:49:00 CDT 2024


*When*:    Monday, September 9th from *10:00 am - 11:00 am CT*

*Where*:   Talk will be given *live, in-person* at
               TTIC, 6045 S. Kenwood Avenue
               5th Floor, *CORRECTION*: talk will be in *Room 529*

*Virtually*: via *Zoom*
<https://uchicagogroup.zoom.us/j/91743879600?pwd=8yMHWukTKFDpazQ90gpnvmzIPkyUPR.1>


*Who*:      Sudarshan Babu, TTIC


*Title:* Acquiring and Adapting Priors for Novel Tasks via Neural
Meta-Architectures
*Abstract:* The ability to transfer knowledge from prior experiences to
novel tasks stands as a pivotal capability of intelligent agents, including
both humans and computational models. This foundational principle underlies
transfer learning, wherein pre-trained neural networks are fine-tuned to
adapt to downstream tasks, with tremendous success in both task-adaptation
speed and performance. However, there are several domains where, due to a
lack of data, building such foundation models (large models trained on
internet-scale data) is not possible – 3D vision, computational chemistry,
computational immunology, and medical imaging are examples. To address
these challenges, our work focuses on designing architectures that enable
faster and more efficient acquisition of priors when large amounts of data
are unavailable. In particular, we demonstrate that neural memory can
enable adaptation to non-stationary distributions with only a few samples.
We then demonstrate that hypernetwork designs (a hypernetwork is a network
that generates another network) can acquire more generalizable priors than
standard networks when trained with Model-Agnostic Meta-Learning (MAML).
Subsequently, we apply hypernetworks to 3D scene generation, demonstrating
that they can acquire priors efficiently from just a handful of training
scenes, thereby leading to faster text-to-3D generation. We then extend our
hypernetwork framework to perform 3D segmentation on novel scenes with
limited data by efficiently transferring priors from previously viewed
scenes. Finally, we propose a molecular generative pre-training task for
downstream tasks such as molecular property prediction and target-aware
drug generation, which are crucial in computational immunology.
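For readers unfamiliar with the hypernetworks mentioned in the abstract, here
is a minimal illustrative sketch of the idea: a small "hyper" map produces the
weights of a target layer from a task embedding. All names, shapes, and the
single-linear-map design are hypothetical simplifications, not taken from the
thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

task_dim, in_dim, out_dim = 4, 3, 2
n_target_params = in_dim * out_dim + out_dim  # target layer's weights + bias

# Hypernetwork parameters: here just one linear map, for brevity.
H = rng.normal(scale=0.1, size=(n_target_params, task_dim))

def target_forward(task_embedding, x):
    """Generate the target layer's parameters from the task embedding,
    then apply that generated layer to the input x."""
    params = H @ task_embedding                       # generated parameters
    W = params[: in_dim * out_dim].reshape(out_dim, in_dim)
    b = params[in_dim * out_dim :]
    return W @ x + b

z = rng.normal(size=task_dim)   # task embedding (e.g., one per scene/task)
x = rng.normal(size=in_dim)     # input to the generated target layer
y = target_forward(z, x)
print(y.shape)                  # (2,)
```

In meta-learning settings such as MAML, gradients flow back through the
generated parameters into the hypernetwork, so adapting to a new task can
amount to adjusting only the small task embedding.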

*Thesis Committee:* Michael Maire (Advisor), Greg Shakhnarovich
(Co-Advisor), David McAllester, Aly Khan.


Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue, Rm 517*
*Chicago, IL  60637*
*773-834-1757*
*mmarre at ttic.edu*



