From cs at mailman.cs.uchicago.edu Mon Aug 11 11:56:26 2025
From: cs at mailman.cs.uchicago.edu (via cs)
Date: Mon, 11 Aug 2025 11:56:26 -0500
Subject: [CS] Mike Tynes MS Presentation/Aug 12, 2025
Message-ID: <689a20ba26321_dba7c23e4d7c109433@gradmin.mail>

This is an announcement of Mike Tynes's MS Presentation
===============================================
Candidate: Mike Tynes

Date: Tuesday, August 12, 2025
Time: 1 pm CST
Location: JCL 298
Remote Location: https://uchicago.zoom.us/j/91533660351?pwd=bO0xgZOdFXHXrSm8LRensxxII2Rlo2.1

Title: On-the-fly training of machine-learned surrogates for dynamical simulations

Abstract: Training and deploying inexpensive machine-learned (ML) surrogates for computationally expensive subroutines "on-the-fly" (OTF) during physical simulations offers both potential advantages and unique drawbacks. In OTF learning, an ML surrogate is trained to replace a target subroutine as a simulation evolves. The advantages of OTF learning include reduced simulation error and model-training cost; the weaknesses include the possibility of introducing artifacts when adaptively updating the model that drives the physics of the simulation. This work extends an existing approach, Proxima, which uses a control system to train an OTF surrogate such that the average error over simulation steps meets a user-specified error bound. The existing Proxima approach is appropriate for non-dynamical state-space sampling methods (such as Monte Carlo methods), but our results show that it can introduce artifacts when simulations are explicitly evolved over time according to equations of motion. To solve this problem and extend Proxima to dynamic simulations, we introduce a "blending" procedure, called Proxima+Blend, that removes discontinuities when transitioning between the expensive subroutine and the surrogate by evolving the simulation according to a mixture of the forces obtained from both. The mixing coefficient varies smoothly between zero and one over time, according to both uncertainty quantification of the ML surrogate and the errors observed when target-subroutine data are available. We show that while the original Proxima implementation can shorten simulation runtime and accurately capture some macro-scale properties in molecular dynamics simulations, it introduces unphysical dynamics at short time and length scales, and for some dynamical properties it can introduce as much as 80% error. Our new approach, Proxima+Blend, delivers a 1.5x speedup over exclusive use of the target subroutine while measuring these same properties to within 5% error. Our implementation of Proxima+Blend can be deployed by simply replacing the existing subroutine with a wrapper that manages surrogate training and by specifying control and uncertainty signals for the simulation of interest.

Advisors: Ian Foster and Kyle Chard
Committee Members: Kyle Chard, Logan Ward, and Ian Foster
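The force-mixing idea in the abstract can be sketched in a few lines. The following Python sketch is purely illustrative: the names (blended_forces, update_mixing, the linear update rule, and the rate constant) are assumptions for exposition, not the actual Proxima+Blend API.

    # Hypothetical sketch of the blending idea described in the abstract.
    # None of these names come from the actual Proxima code base.
    import numpy as np

    def blended_forces(positions, surrogate, target, lam):
        """Evolve the simulation under lam * F_surrogate + (1 - lam) * F_target."""
        f_surr = surrogate.forces(positions)
        if lam >= 1.0:                        # surrogate fully trusted:
            return f_surr                     # skip the expensive target call
        f_targ = target.forces(positions)     # expensive reference subroutine
        return lam * f_surr + (1.0 - lam) * f_targ

    def update_mixing(lam, uncertainty, threshold, rate=0.05):
        """Move lam smoothly toward 1 while the surrogate's uncertainty
        stays below the user-set threshold, and back toward 0 otherwise."""
        step = rate if uncertainty < threshold else -rate
        return float(np.clip(lam + step, 0.0, 1.0))

Because lam changes by at most a small fixed amount per step, the forces driving the dynamics never jump discontinuously between the two sources, which is the kind of artifact the blending procedure is designed to remove.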
From cs at mailman.cs.uchicago.edu Mon Aug 11 11:56:30 2025
From: cs at mailman.cs.uchicago.edu (via cs)
Date: Mon, 11 Aug 2025 11:56:30 -0500
Subject: [CS] Alok Kamatar MS Presentation/Aug 14, 2025
Message-ID: <689a20be9c16e_dba7c23e4d7c10954e@gradmin.mail>

This is an announcement of Alok Kamatar's MS Presentation
===============================================
Candidate: Alok Kamatar

Date: Thursday, August 14, 2025
Time: 9 am CST
Location: JCL 346
Remote Location: https://uchicago.zoom.us/j/91535157991?pwd=IF5jKmrLcekb2oRL77AalbGJOgHwp0.1

Title: Core Hours and Carbon Credits: Investigating and Incentivizing Sustainability in HPC

Abstract: Growing demand, accompanied by slowing improvements in the efficiency of hardware and data centers, has reinvigorated research into energy-efficient and sustainable methods for computing. In this work, we begin by examining the effect that user decisions can have on the environmental impact of computing; specifically, how users' choices of which facility or machine to run jobs on can significantly affect the efficiency/performance trade-off of an application. Despite the considerable influence users have over the efficiency of their applications, we observe that they remain largely unconcerned with their energy use. In a survey of 300 HPC users, we find that fewer than 30% are aware of their energy consumption and that energy efficiency is a low-priority concern. To incentivize greater awareness of energy use and sustainability, we then propose two new multi-resource accounting methods that charge for computations based on their energy consumption or carbon footprint, respectively. We conduct both simulation studies and a user study to evaluate the impact of these two methods on user behavior. We find that providing users with feedback on their energy use alone had no impact on their behavior, whereas associating energy with cost incentivized users to select more efficient resources and to use 40% less energy.

Advisor: Kyle Chard
Committee Members: Kyle Chard and Ian Foster
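As a concrete illustration of the two accounting methods the abstract describes, here is a small Python sketch. The charging formulas are the obvious ones implied by the abstract (charge proportional to energy, or to energy times grid carbon intensity); the function names and rates are assumptions for illustration, not the thesis's actual implementation.

    # Hypothetical sketch of energy- and carbon-based job accounting.
    # Function names and rates are illustrative, not from the thesis.

    def energy_charge(energy_kwh: float, price_per_kwh: float) -> float:
        """Charge a job's allocation in proportion to its measured energy."""
        return energy_kwh * price_per_kwh

    def carbon_charge(energy_kwh: float,
                      grid_kg_co2_per_kwh: float,
                      price_per_kg_co2: float) -> float:
        """Charge by estimated carbon footprint: energy consumed times
        the carbon intensity of the grid powering the facility."""
        return energy_kwh * grid_kg_co2_per_kwh * price_per_kg_co2

    # A 1,000 kWh job, charged two ways:
    print(energy_charge(1000.0, 0.12))       # 120.0 "energy credits"
    print(carbon_charge(1000.0, 0.4, 0.05))  # 20.0 "carbon credits"

Under the carbon-based scheme, the same job costs less when run at a facility (or a time of day) with a cleaner grid, which is exactly the behavioral lever the abstract's user study evaluates.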
From cs at mailman.cs.uchicago.edu Mon Aug 11 14:11:14 2025
From: cs at mailman.cs.uchicago.edu (via cs)
Date: Mon, 11 Aug 2025 14:11:14 -0500
Subject: [CS] Daniel Grzenda MS Presentation/Aug 25, 2025
Message-ID: <689a405251c51_dba7c23e4d7c10981c@gradmin.mail>

This is an announcement of Daniel Grzenda's MS Presentation
===============================================
Candidate: Daniel Grzenda

Date: Monday, August 25, 2025
Time: 9 am CST
Location: JCL 298
Remote Location: https://uchicago.zoom.us/j/99397184323?pwd=JGUyq5VQcYZPJcDQR6hqbh8aav8Ted.1

Title: Multivariate Functional Approximation Neural Networks (MFANN)

Abstract: We present the Multivariate Functional Approximation Neural Network (MFANN), an architecture that combines the principles of multivariate functional approximation (MFA) with the iterative optimization techniques common in the neural network (NN) literature. MFA is a data modeling, compression, and visualization tool that uses tensor products of B-spline functions to build continuous, differentiable representations of input data. We extend MFA to use stochastic, iterative mini-batch optimization methods, periodically updating the spline-based models instead of numerically solving for the representation. Through an ablative analysis, we study the differences between these B-spline-based networks and traditional neural networks. Our analysis demonstrates that MFANN is less prone to overfitting, generalizing to multiple resolutions of input data while remaining flexible enough to fit complex analytical functions and real-world scientific data. We empirically compare MFANN to conventional MFA and to multilayer perceptrons (MLPs). Our work highlights MFANN as a promising paradigm for advancing the theory and practice of data-driven function approximation with a new class of neural networks.

Advisors: Ian Foster, Kyle Chard, and Rana Hanocka
Committee Members: Ian Foster, David Lenz, and Kyle Chard
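The core move in the abstract, treating B-spline control points as trainable weights and updating them with mini-batch gradient steps instead of a direct solve, can be shown in a one-dimensional toy. The spline degree, learning rate, and sine target below are assumptions chosen for brevity; the actual MFA/MFANN operates on tensor products of B-splines in multiple dimensions.

    # 1-D toy of the MFANN idea: fit B-spline control points by
    # mini-batch SGD rather than solving the linear system directly.
    import numpy as np
    from scipy.interpolate import BSpline

    k = 3                                                    # cubic B-splines
    t = np.r_[[0.0] * k, np.linspace(0, 1, 20), [1.0] * k]   # clamped knots
    c = np.zeros(len(t) - k - 1)                             # control points = weights

    rng = np.random.default_rng(0)
    x = rng.random(2000)
    y = np.sin(6 * np.pi * x)                                # toy target function

    lr, batch = 0.5, 64
    for step in range(2000):
        idx = rng.choice(len(x), batch, replace=False)
        order = np.argsort(x[idx])                  # design_matrix wants sorted x
        xb, yb = x[idx][order], y[idx][order]
        B = BSpline.design_matrix(xb, t, k).toarray()
        c -= lr * B.T @ (B @ c - yb) / batch        # SGD step on control points

    model = BSpline(t, c, k)   # continuous, differentiable fitted representation
    print(float(model(0.25)))  # should approach sin(1.5 * pi) = -1

Because the fitted object is an ordinary spline, the result remains a continuous, differentiable representation of the data, which is the MFA property the abstract says the architecture preserves.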