<div dir="ltr"><div dir="ltr"><div>Hi everyone,</div><div><br></div><div>Peter West's talk at the NLP Seminar is happening <b>today</b>. Please note <b>a few changes</b>:</div><div><br></div><div>1. Location: <b style="background-color:rgb(255,229,153)">JCL Room 346</b> (one-time change for today only).</div><div><br></div><div>2. Time: <b style="background-color:rgb(255,229,153)">1 - 2 PM</b> today (Thursday, March 6). Note that there is <b>no lunch today</b>.</div><div><br></div><div>3. A Zoom option is available; Peter will also join us via Zoom. [<a href="https://uchicago.zoom.us/j/97708787366?pwd=qD9KdtzwGCTPuciAfr6iaap3NubZgy.1" target="_blank">Link</a>]</div><div><br></div><div>4. We also encourage you to stay for Akari Asai's talk immediately afterward in JCL Room 390. Details are below.</div><div><br></div><div>Apologies for the duplicate notification.</div><div><br></div><div><b>Title: Can Helpful Assistants be Unpredictable? Limits of Aligned LLMs</b></div><div><b><br></b></div><div><b>Peter West</b></div><div>Thursday, March 6, <b>1 - 2 PM</b></div><div><b>JCL 346 [<a href="https://uchicago.zoom.us/j/97708787366?pwd=qD9KdtzwGCTPuciAfr6iaap3NubZgy.1" target="_blank">Zoom</a>]<br></b><br><b>Abstract</b>: The majority of public-facing language models have undergone some form of alignment--a family of techniques (e.g. reinforcement learning from human feedback) that aim to make models safer, more honest, and better at following instructions. In this talk, I will investigate the downsides of aligning LLMs. 
While the process improves model performance across a broad range of benchmark tasks, particularly those for which a "correct" answer is clear, it appears to suppress some of the most interesting aspects of LLMs, including unpredictability and the generation of text that humans find creative.<br><br><b>Bio</b>: <a href="https://peterwestai.notion.site/" target="_blank">Peter</a> is an Assistant Professor at UBC and was recently a postdoc at the Stanford Institute for Human-Centered AI, working in Natural Language Processing. His research broadly studies the capabilities and limits of large language models (and other generative AI systems). His work has been recognized with multiple awards, including best method paper at NAACL 2022, outstanding paper at ACL 2023, and outstanding paper at EMNLP 2023.</div><div><br></div><div><b>Title: Beyond Scaling: Frontiers of Retrieval-Augmented Language Models</b></div><div><b><br></b></div><div><b>Akari Asai</b></div><div>Thursday, March 6, <b>2 - 3 PM</b></div><div><b>JCL 390 [<a href="https://uchicagogroup.zoom.us/j/93903823738?pwd=luUQIjaxPnxs4yeHf5aT46jxAAtBFG.1" target="_blank">Zoom</a>]</b></div><div><b><br></b></div><div><b>Abstract</b>: Large Language Models (LMs) have achieved remarkable progress by scaling training data and model sizes. However, they continue to face critical limitations, including hallucinations and outdated knowledge, which hinder their reliability—especially in expert domains such as scientific research and software development. In this talk, I will argue that addressing these challenges requires moving beyond monolithic LMs and toward Augmented LMs—a new AI paradigm that designs, trains, and deploys LMs alongside complementary modules to enhance reliability and efficiency. 
Focusing on my research on Retrieval-Augmented LMs, one of the most impactful and widely adopted forms of Augmented LMs today, I will begin by presenting systematic analyses of current LM shortcomings and demonstrating how retrieval augmentation offers a more scalable and effective path forward. I will then discuss my work on establishing new foundations for these systems, including novel training approaches and retrieval mechanisms that enable LMs to dynamically adapt to diverse inputs. Finally, I will showcase the real-world impact of such models through OpenScholar, our fully open Retrieval-Augmented LM for assisting scientists in synthesizing literature—now used by over 30,000 researchers and practitioners worldwide. I will conclude by outlining my vision for the future of Augmented LMs, emphasizing advances in handling heterogeneous modalities, more efficient and flexible integration with diverse components, and rigorous evaluation through interdisciplinary collaboration.</div><div><br></div><div><b>Bio</b>: Akari Asai is a Ph.D. candidate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research focuses on overcoming the limitations of large language models (LMs) by developing advanced systems, such as Retrieval-Augmented LMs, and applying them to real-world challenges, including scientific research and underrepresented languages. Her contributions have been widely recognized, earning multiple paper awards at top NLP and ML conferences, the IBM Global Fellowship, and industry grants. She was also named an EECS Rising Star (2022) and one of MIT Technology Review's Innovators Under 35 Japan. Her work has been featured in outlets such as Forbes and MIT Technology Review. 
Beyond her research, Akari actively contributes to the NLP and ML communities as a co-organizer of high-impact tutorials and workshops, including the first tutorial on Retrieval-Augmented LMs at ACL 2023, as well as workshops on Multilingual Information Access (NAACL 2022) and Knowledge-Augmented NLP (NAACL 2025).</div><div><b><br></b></div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"></div></div></div></div>
</div>