[CS] TOMORROW: Chenghao Yang Candidacy Exam, Oct 8, 2025
From: cs at mailman.cs.uchicago.edu
Date: Tue Oct 7 09:19:42 CDT 2025
This is an announcement of Chenghao Yang's Candidacy Exam.
===============================================
Candidate: Chenghao Yang
Date: Wednesday, October 08, 2025
Time: 2:00 pm CDT
Remote Location: https://uchicago.zoom.us/j/98533388283?pwd=UrhUTtawd8L10VH0ANPnRcad1t8O7z.1
Meeting ID: 985 3338 8283
Passcode: 936407
Location: JCL 298
Title: The Shrinking Landscape: A Mechanistic Study of Width, Depth, and Control in LLM Generation
Abstract: This thesis proposes a mechanistic study of large language model (LLM) output dynamics, investigating the generative process through the lens of width (generation diversity) and depth (sequence progression). Our central premise is that the LLM output space is not static but a dynamic landscape whose properties evolve during generation. To probe this landscape, we introduce the Branching Factor (BF), a metric that quantifies the probability concentration in the model's output distribution at each step.
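A note on instantiating BF: the abstract does not spell out the formula, so the minimal sketch below assumes a common proxy, the exponentiated Shannon entropy (perplexity) of the next-token distribution. A peaked distribution yields a BF near 1; a uniform distribution over V tokens yields a BF near V. All names are illustrative, not the candidate's implementation.

    import numpy as np

    def branching_factor(probs: np.ndarray, eps: float = 1e-12) -> float:
        """Assumed BF proxy: exp(entropy) of one next-token distribution."""
        p = probs / probs.sum()                 # normalize defensively
        entropy = -np.sum(p * np.log(p + eps))  # Shannon entropy in nats
        return float(np.exp(entropy))

    # Toy illustration of a "shrinking landscape": an early, wide step
    # versus a late, concentrated step.
    early = np.full(1000, 1 / 1000)               # ~1000 plausible continuations
    late = np.array([0.97] + [0.03 / 999] * 999)  # one dominant continuation
    print(branching_factor(early))  # ~1000
    print(branching_factor(late))   # ~1.4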
Our foundational work establishes that this landscape has a predictable dynamic: the width of plausible continuations, as measured by BF, systematically shrinks as generation depth increases. More critically, we find that alignment tuning acts as a global constraint, collapsing this width from the very first token and steering generation onto narrow, low-entropy trajectories. This insight reframes the challenge of LLM control: instead of trying to force diversity from aligned models whose output space is intrinsically constricted, we should design strategies that leverage these fundamental dynamics.
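Readers who want to reproduce the qualitative finding can run a minimal probe under the same assumed BF proxy: measure BF at the very first generation step for a base checkpoint and its aligned counterpart. The checkpoint pair below is a hypothetical example; the finding predicts the aligned model's first-token BF is far lower.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def first_token_bf(model_name: str, prompt: str) -> float:
        """Assumed probe: exp-entropy of the first next-token distribution."""
        tok = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(model_name)
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits[0, -1]  # logits for the next token
        logp = torch.log_softmax(logits, dim=-1)
        entropy = -(logp.exp() * logp).sum()
        return float(entropy.exp())

    # Hypothetical base/aligned pair; substitute any matched checkpoints.
    for name in ("meta-llama/Llama-2-7b-hf", "meta-llama/Llama-2-7b-chat-hf"):
        print(name, first_token_bf(name, "Write a story about"))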
This thesis builds on this finding by presenting two primary applications of this mechanistic understanding. First, we explore a base-aligned model collaboration framework. This approach treats the base model as a source of generative width and the aligned model as a structural guide, dynamically switching between them to navigate the vast, diverse regions of the base model's output space while maintaining coherence. Second, we introduce Annealed Sampling, an inference strategy that explicitly synchronizes with the natural flow of probability concentration. By front-loading exploration during the high-BF initial steps and transitioning to exploitation as the BF naturally decreases, this method enhances reasoning without complex heuristics.
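The abstract does not specify the annealing schedule, so the sketch below stands in a simple linear temperature decay over the step index: high temperature while the landscape is wide, low temperature as it concentrates. A faithful implementation would instead couple the schedule to the measured BF and recompute logits after each sampled token.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_annealed(step_logits, t_start=1.5, t_end=0.3):
        """Assumed schedule: linearly anneal sampling temperature from
        t_start (exploration) down to t_end (exploitation)."""
        n = len(step_logits)
        tokens = []
        for i, logits in enumerate(step_logits):
            temp = t_start + (t_end - t_start) * (i / max(n - 1, 1))
            z = logits / temp
            p = np.exp(z - z.max())
            p /= p.sum()                        # softmax at this temperature
            tokens.append(int(rng.choice(len(p), p=p)))
        return tokens

    # Hypothetical logits for a 5-step generation over an 8-token vocabulary.
    print(sample_annealed([rng.normal(size=8) for _ in range(5)]))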
Ultimately, this thesis aims to contribute a more fundamental, "physical" understanding of LLM generation. By characterizing the dynamic interplay of width and depth, we move beyond surface-level interventions and toward principled, dynamics-aware algorithms for controlling LLM behavior.
Advisor: Allyson Ettinger
Committee Members: Allyson Ettinger, Haifeng Xu, and Mina Lee