[Colloquium] [nlp] Today: UChicago/TTIC NLP Seminar: Peter West - Can Helpful Assistants be Unpredictable? Limits of Aligned LLMs

Yichen (Zach) Wang via Colloquium colloquium at mailman.cs.uchicago.edu
Thu Mar 6 08:00:00 CST 2025


Hi everyone,

Peter West's talk at the NLP Seminar is happening *today*. Please note
*a few changes*:

1. Location: *JCL Room 346* (a one-time change).

2. Time: *1 - 2 PM* today (March 6, Thursday). Note that there is *no lunch
today*.

3. A Zoom option is available; Peter is also joining us via Zoom. [Link
<https://uchicago.zoom.us/j/97708787366?pwd=qD9KdtzwGCTPuciAfr6iaap3NubZgy.1>
]

4. We also encourage you to join Akari Asai's talk afterward in JCL Room
390. Details are below.

Apologies for the duplicate notification.

*Title: Can Helpful Assistants be Unpredictable? Limits of Aligned LLMs*

*Peter West*
March 6, Thursday, *1 - 2 PM*

*JCL 346 [Zoom
<https://uchicago.zoom.us/j/97708787366?pwd=qD9KdtzwGCTPuciAfr6iaap3NubZgy.1>]*

*Abstract*: The majority of public-facing language models have undergone
some form of alignment--a family of techniques (e.g., reinforcement
learning from human feedback) that aim to make models safer, more honest,
and better at following instructions. In this talk, I will investigate the
downsides of aligning LLMs. While the process improves model performance
across a broad range of benchmark tasks, particularly those for which a
"correct" answer is clear, it seems to suppress some of the most
interesting aspects of LLMs, including unpredictability and the generation
of text that humans find creative.

*Bio*: Peter <https://peterwestai.notion.site/> is an Assistant Professor
at UBC and was recently a postdoctoral researcher at the Stanford Institute
for Human-Centered AI, working in Natural Language Processing. His research
broadly studies the capabilities and limits of large language models (and
other generative AI systems). His work has been recognized with multiple
awards, including best method paper at NAACL 2022, outstanding paper at ACL
2023, and outstanding paper at EMNLP 2023.

*Title: Beyond Scaling: Frontiers of Retrieval-Augmented Language Models*

*Akari Asai*
March 6, Thursday, *2 - 3 PM*
*JCL 390 [Zoom
<https://uchicagogroup.zoom.us/j/93903823738?pwd=luUQIjaxPnxs4yeHf5aT46jxAAtBFG.1>]*

*Abstract*: Large Language Models (LMs) have achieved remarkable progress
by scaling training data and model sizes. However, they continue to face
critical limitations, including hallucinations and outdated knowledge,
which hinder their reliability—especially in expert domains such as
scientific research and software development. In this talk, I will argue
that addressing these challenges requires moving beyond monolithic LMs and
toward Augmented LMs—a new AI paradigm that designs, trains, and deploys
LMs alongside complementary modules to enhance reliability and efficiency.
Focusing on my research on Retrieval-Augmented LMs, one of the most
impactful and widely adopted forms of Augmented LMs today, I will begin by
presenting systematic analyses of current LM shortcomings and demonstrating
how retrieval augmentation offers a more scalable and effective path
forward. I will then discuss my work on establishing new foundations for
these systems, including novel training approaches and retrieval mechanisms
that enable LMs to dynamically adapt to diverse inputs. Finally, I will
showcase the real-world impact of such models through OpenScholar, our
fully open Retrieval-Augmented LM for assisting scientists in synthesizing
literature—now used by over 30,000 researchers and practitioners worldwide.
I will conclude by outlining my vision for the future of Augmented LMs,
emphasizing advances in handling heterogeneous modalities,
more efficient and flexible integration with diverse components, and
rigorous evaluation through interdisciplinary collaboration.

*Bio*: Akari Asai is a Ph.D. candidate in the Paul G. Allen School of
Computer Science & Engineering at the University of Washington. Her
research focuses on overcoming the limitations of large language models
(LMs) by developing advanced systems, such as Retrieval-Augmented LMs, and
applying them to real-world challenges, including scientific research and
underrepresented languages. Her contributions have been widely recognized,
earning multiple paper awards at top NLP and ML conferences, the IBM Global
Fellowship, and industry grants. She was also named an EECS Rising Star
(2022) and one of MIT Technology Review's Innovators Under 35 Japan. Her
work has been featured in outlets such as Forbes and MIT Technology Review.
Beyond her research, Akari actively contributes to the NLP and ML
communities as a co-organizer of high-impact tutorials and workshops,
including the first tutorial on Retrieval-Augmented LMs at ACL 2023, as
well as workshops on Multilingual Information Access (NAACL 2022) and
Knowledge-Augmented NLP (NAACL 2025).