[Colloquium] 2/20 Peter Hase (Anthropic/UNC) AI Safety Through Interpretable and Controllable Language Models

Holly Santos via Colloquium colloquium at mailman.cs.uchicago.edu
Thu Feb 13 15:08:01 CST 2025


Department of Computer Science and Data Science Institute Colloquium Presents

Peter Hase
Resident AI Researcher
Anthropic

Thursday, February 20th
2:00pm - 3:00pm 
In-Person: John Crerar Library Rm 390

https://uchicagogroup.zoom.us/j/97911474357?pwd=rzQjiCFi6bsk70Y5AI1S02bEyklAd7.1

Meeting ID: 979 1147 4357
Passcode: 412825

Title: AI Safety Through Interpretable and Controllable Language Models

Abstract: The AI research community has become increasingly concerned about risks arising from capable AI systems, ranging from misuse of generative models to misalignment of agents. My research aims to address problems in AI safety by tackling key issues with the interpretability and controllability of large language models (LLMs). In this talk, I present research showing that we are well beyond the point of thinking of AI systems as “black boxes.” AI models, and LLMs especially, are more interpretable than ever. Advances in interpretability have enabled us to control model reasoning and update knowledge in LLMs, among other promising applications. My work has also highlighted challenges that must be solved for interpretability to continue progressing. Building from this point, I argue that we can explain LLM behavior in terms of “beliefs”, meaning that core knowledge about the world determines downstream behavior of models. Furthermore, model editing techniques provide a toolkit for intervening on beliefs in LLMs in order to test theories about their behavior. By better understanding beliefs in LLMs and developing robust methods for controlling their behavior, we will create a scientific foundation for building powerful and safe AI systems. 

Bio: Peter Hase is an AI Resident at Anthropic. He recently completed his PhD at the University of North Carolina at Chapel Hill, advised by Mohit Bansal. His research focuses on NLP and AI safety, with the goal of explaining and controlling the behavior of machine learning models. He is a recipient of a Google PhD Fellowship and, before that, a Royster PhD Fellowship. While at UNC, he also worked at Meta, Google, and the Allen Institute for AI.



Host: Chenhao Tan

---
Holly Santos
Executive Assistant to Hank Hoffmann, Liew Family Chair
Department of Computer Science
The University of Chicago
5730 S Ellis Ave-217   Chicago, IL 60637
P: 773-834-8977
hsantos at uchicago.edu





