[CS] Chenghao Yang Dissertation Defense/Jan 30, 2026

via cs cs at mailman.cs.uchicago.edu
Fri Jan 16 09:09:21 CST 2026


This is an announcement of Chenghao Yang's Dissertation Defense.
===============================================
Candidate: Chenghao Yang

Date: Friday, January 30, 2026

Time:  2 pm CST

Remote Location: https://uchicago.zoom.us/j/96014992390?pwd=lHS7dFqdSXilLVkvkoqKougOfRAsHs.1
Meeting ID: 960 1499 2390
Passcode: 644684

Location: JCL 298

Title: Beyond Surface Alignment: Grounding the Dynamics of Situational Understanding and Generative Control in LLMs

Abstract: The current alignment tuning paradigm for Large Language Models (LLMs) prioritizes surface-level behaviors—fluency, safety, and tonal consistency. This thesis argues that, while effective for casual chat, such surface alignment masks a lack of grounding, creating models that are stylistically confident but situationally brittle. We investigate this disconnect through a dual analysis of how models process context (Input) and how they structure generation (Output).

First, we diagnose failures in Situational Grounding. Through SitTest, we reveal that despite massive context windows, state-of-the-art models struggle to maintain a consistent "mental model" of a changing environment, often hallucinating state updates. This fragility is quantified by ReCode, which demonstrates that models rely on surface heuristics (like variable names) rather than resolving syntactic dependencies, causing them to break under semantic-preserving perturbations. These findings highlight a critical gap: models "read" extensive histories without truly "understanding" the evolving situation.

Second, we investigate the dynamics of Generative Grounding. We introduce the Branching Factor (BF) to map the landscape of LLM generation. We find that standard alignment tuning artificially constricts this landscape, forcing models into low-entropy trajectories from the very first token. While this mimics decisive reasoning, we argue it represents a premature ``stylistic collapse'' that precludes the exploration necessary for complex tasks.
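One plausible way to operationalize a branching factor of this kind (not necessarily the thesis's exact definition, which the abstract does not specify) is the exponential of the Shannon entropy of the next-token distribution: a collapsed, one-hot distribution yields a branching factor of 1, while a uniform distribution over k continuations yields k. A minimal sketch under that assumption:

```python
import math

def branching_factor(probs):
    """Effective branching factor of a next-token distribution,
    computed as exp of its Shannon entropy (in nats).
    A one-hot distribution gives 1.0; a uniform distribution
    over k tokens gives k."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return math.exp(entropy)

# Collapsed distribution: only one continuation is possible.
print(branching_factor([1.0, 0.0, 0.0, 0.0]))      # 1.0
# Uniform over 4 tokens: four equally viable branches.
print(branching_factor([0.25, 0.25, 0.25, 0.25]))  # 4.0
```

Tracking this quantity token by token would show the low-entropy trajectories described above as a branching factor that drops toward 1 early in generation.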

To bridge these gaps, we propose mechanisms that restore dynamic grounding. We introduce Annealed Sampling and Base-Aligned Model Collaboration strategies that resist premature collapse by synchronizing exploration with the model’s intrinsic uncertainty. Finally, we demonstrate the viability of this approach with AI Realtor, an agentic framework that successfully balances user alignment with factual grounding. Collectively, this work moves beyond the facade of surface alignment, offering a roadmap for building agents robustly anchored in both their context and their generation.

Advisor: Allyson Ettinger

Committee Members: Allyson Ettinger, Haifeng Xu, and Mina Lee
