[Theory] TOMORROW: 2/4 Talks at TTIC: Tianyu Gao, Princeton University
Mary Marre via Theory
theory at mailman.cs.uchicago.edu
Mon Feb 3 14:35:35 CST 2025
*When:* Tuesday, February 4, 2025 at 10:00 am CT
*Where:* Talk will be given *live, in-person* at
TTIC, 6045 S. Kenwood Avenue
5th Floor, Room 530
*Virtually:* via Panopto livestream:
<https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=d733c850-cd7f-4fd4-bcc7-b27501176412>
*Who:* Tianyu Gao, Princeton University
*Title:* Enabling Language Models to Process Information at Scale
*Abstract:* Language models (LMs) are highly effective at understanding and
generating text, holding immense potential as intuitive, personalized
interfaces for accessing information. Expanding their ability to gather and
synthesize large volumes of information will further unlock transformative
applications, ranging from generative search engines to AI literature
assistants. In this talk, I will present my research on advancing LMs for
information processing at scale. (1) I will introduce my foundational work
on using contrastive learning to produce performant text embeddings, which
form the cornerstone of effective and scalable search. (2) I will then
present my evaluation framework for LM-based information-seeking systems,
emphasizing the importance of providing citations so that readers can verify
model-generated answers. Our evaluation highlights shortcomings in LMs’
abilities to reliably process long-form texts (e.g., dozens of webpages),
which I address by developing state-of-the-art long-context LMs that
outperform leading industry efforts while using a small fraction of the
computational budget. (3) In addition to building systems that can process
large-scale information, I will discuss my contributions to creating
efficient pre-training and adaptation methods for LMs, which enable
scalable deployment of LM-powered applications across diverse settings.
Finally, I will share my vision for the next generation of autonomous
information processing systems and outline the foundational challenges that
must be addressed to realize this vision.
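
[Note for readers unfamiliar with point (1): text-embedding models of this
kind are typically trained with an in-batch contrastive (InfoNCE) objective.
The sketch below is illustrative only; the names (info_nce_loss, query_emb,
pos_emb, encoder) are assumptions for the example and are not taken from the
speaker's actual code.]

import torch
import torch.nn.functional as F

def info_nce_loss(query_emb: torch.Tensor,
                  pos_emb: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    # Normalize so dot products are cosine similarities.
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(pos_emb, dim=-1)
    # (B, B) similarity matrix: entry (i, j) compares query i to passage j.
    logits = q @ p.t() / temperature
    # The matching passage for query i is column i; every other column in
    # the batch serves as an in-batch negative.
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)

# Usage sketch: embeddings from any text encoder, e.g. a batch of
# query/passage pairs embedded into a shared vector space.
# loss = info_nce_loss(encoder(queries), encoder(passages))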
*Bio:* Tianyu Gao is a fifth-year PhD student in the Department of Computer
Science at Princeton University, advised by Danqi Chen. His research
focuses on developing principled methods for training and adapting language
models, many of which have been widely adopted across academia and
industry. Driven by transformative applications, such as using language
models as information-seeking tools, his work also advances robust
evaluation and fosters a deeper understanding to guide the future
development of language models. He led the first workshop on long-context
foundation models at ICML 2024. He won an outstanding paper award at ACL
2022 and received an IBM PhD Fellowship in 2023. Before Princeton, he
received his BEng from Tsinghua University in 2020.
*Host:* *Karen Livescu* <klivescu at ttic.edu>
Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue, Rm 517*
*Chicago, IL 60637*
*773-834-1757*
*mmarre at ttic.edu*