[CS] Fengxue Zhang Candidacy Exam/May 15, 2025
via cs
cs at mailman.cs.uchicago.edu
Tue May 6 15:32:02 CDT 2025
This is an announcement of Fengxue Zhang's Candidacy Exam.
===============================================
Candidate: Fengxue Zhang
Date: Thursday, May 15, 2025
Time: 11 am CDT
Remote Location: https://uchicago.zoom.us/j/2159183425?pwd=bEN2VVlGM0l5Y2U0L2l5TTVSdjRodz09
Location: JCL 298
Title: Learning for Efficient, Scalable, and Constrained Bayesian Optimization in Real-World Applications
Abstract: Bayesian Optimization (BO) is a critical framework for optimizing expensive black-box functions prevalent in real-world applications like hyperparameter tuning, medical therapy design, and scientific discovery. However, standard BO faces significant challenges related to sample efficiency, scalability, and the handling of complex constraints or multiple objectives often encountered in practice. This collection of work highlights the power of incorporating learning mechanisms to address these limitations and enhance BO performance.
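For readers unfamiliar with the framework, a minimal sketch of a standard BO loop follows. This is illustrative only and not taken from the talk: the objective, kernel length-scale, and the lower-confidence-bound acquisition rule are all assumptions chosen for a runnable toy, with a Gaussian-process surrogate fit over a 1-D candidate grid.

```python
import numpy as np

def rbf_kernel(A, B, ls=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-5):
    # Gaussian-process regression: posterior mean and std at candidates Xs.
    Kinv = np.linalg.inv(rbf_kernel(X, X) + noise * np.eye(len(X)))
    Ks = rbf_kernel(X, Xs)
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf_kernel(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

f = lambda x: (x - 0.33) ** 2          # stand-in for an expensive black box
grid = np.linspace(0.0, 1.0, 200)      # candidate pool
X = np.array([0.1, 0.5, 0.9])          # initial design
y = f(X)

for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)
    lcb = mu - 2.0 * sigma             # lower confidence bound (we minimize)
    x_next = grid[np.argmin(lcb)]      # query where the model is optimistic
    X, y = np.append(X, x_next), np.append(y, f(x_next))

best_x, best_y = X[np.argmin(y)], y.min()
```

The point of the sketch is the sample-efficiency motivation above: each loop iteration spends exactly one expensive evaluation, chosen by the surrogate's uncertainty-aware acquisition rather than by grid or random search.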
One major thrust focuses on intelligently learning and navigating the feasible search space under constraints. This includes approaches that jointly learn constraint boundaries and optimize the objective to efficiently identify robust interior optima in single-objective constrained problems, moving beyond heuristics by focusing on high-confidence regions of interest. This concept extends to constrained multi-objective settings, where algorithms actively balance learning the feasible level-set defined by multiple practical thresholds (e.g., safety constraints) with optimizing objectives within that learned region, improving sample efficiency with theoretical backing.
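One common way to make the idea of jointly learning constraints and objective concrete is to fit a second surrogate to the constraint and discount the acquisition by the modeled probability of feasibility. The sketch below is a generic illustration of that pattern, not the candidate's algorithm: the toy objective, the linear constraint, and the penalty-weighted acquisition are all assumptions.

```python
import math
import numpy as np

# Standard normal CDF, vectorized over arrays (avoids a SciPy dependency).
norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def gp_posterior(X, y, Xs, ls=0.2, noise=1e-5):
    # Minimal GP regression (RBF kernel): posterior mean and std at Xs.
    k = lambda A, B: np.exp(-0.5 * ((A[:, None] - B[None, :]) / ls) ** 2)
    Kinv = np.linalg.inv(k(X, X) + noise * np.eye(len(X)))
    Ks = k(X, Xs)
    mu = Ks.T @ Kinv @ y
    var = np.diag(k(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

f = lambda x: (x - 0.8) ** 2    # toy objective (minimize)
c = lambda x: x - 0.6           # toy constraint: feasible iff c(x) <= 0
grid = np.linspace(0.0, 1.0, 200)
X = np.array([0.1, 0.35, 0.65, 0.9])   # initial design
yf, yc = f(X), c(X)

for _ in range(15):
    mu_f, sd_f = gp_posterior(X, yf, grid)
    mu_c, sd_c = gp_posterior(X, yc, grid)
    p_feas = norm_cdf(-mu_c / sd_c)    # P(c(x) <= 0) under the constraint GP
    # LCB on the objective, penalized where feasibility is doubtful.
    acq = (mu_f - 2.0 * sd_f) + 10.0 * (1.0 - p_feas)
    x_next = grid[np.argmin(acq)]
    X = np.append(X, x_next)
    yf, yc = np.append(yf, f(x_next)), np.append(yc, c(x_next))

feas = yc <= 0
best_x = X[feas][np.argmin(yf[feas])]
```

Because the unconstrained optimum (x = 0.8) is infeasible, the loop has to learn the boundary and concentrate queries near the constrained optimum at x = 0.6, which mirrors the paragraph's point about focusing sampling on high-confidence feasible regions rather than discarding infeasible evaluations heuristically.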
Complementing the focus on feasible space learning, another innovative direction leverages deep learning to fundamentally improve the optimization policy itself. By training end-to-end decision transformers on diverse simulated BO trajectories generated via model ensembles and acquisition ensembles, it becomes possible to learn non-myopic strategies that directly optimize the final outcome. The end-to-end optimization for decision-making surpasses traditional manually specified surrogate models and hand-crafted acquisition functions, allowing for more integrated learning and decision making. This approach utilizes a dense offline training phase on simulations followed by sparse online refinement, leading to robust performance and lower regret, especially in noisy or higher-dimensional scenarios.
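The offline-simulation-then-deploy pattern described above can be illustrated with a deliberately tiny stand-in: instead of a decision transformer, a linear least-squares policy is trained on simulated trajectories from a family of cheap surrogate tasks, then proposes the next query on an unseen task in one shot. Every detail here (the quadratic task family, the fixed three-point probe design, the linear policy) is an assumption for the toy, not the method presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
probes = np.array([0.2, 0.5, 0.8])   # fixed initial design shared by all tasks

# Offline phase: simulate many tasks f_c(x) = (x - c)^2 with random minimizer c,
# and record (observation history, ideal next query) pairs; the ideal query is c.
cs = rng.uniform(0.0, 1.0, size=500)
Y = (probes[None, :] - cs[:, None]) ** 2     # observed values, shape (500, 3)
A = np.hstack([Y, np.ones((500, 1))])        # history features + bias term
w, *_ = np.linalg.lstsq(A, cs, rcond=None)   # fit the query policy end-to-end

# Online phase: on a new, unseen task the learned policy proposes the next
# query directly from the observation history, with no hand-crafted acquisition.
c_new = 0.37
y_new = (probes - c_new) ** 2
x_next = float(np.hstack([y_new, 1.0]) @ w)
```

The design choice being illustrated is the one the paragraph argues for: the mapping from observation history to next query is learned from simulated experience as a single trainable object, rather than assembled from a manually specified surrogate plus a hand-crafted acquisition function.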
Collectively, these advancements demonstrate how various forms of learning, from active learning of constraints and feasible regions to learning complex optimization policies via deep models trained on simulated experience, significantly boost the efficiency, robustness, scalability, and practical applicability of Bayesian optimization for challenging, real-world problems.
Advisor: Yuxin Chen
Committee Members: Yuxin Chen, Haifeng Xu, Rebecca Willett, and Thomas Anthony Desautels