[Colloquium] Xin Yuan MS Presentation/Dec 8, 2022

nitayack at cs.uchicago.edu
Mon Nov 28 15:23:17 CST 2022


This is an announcement of Xin Yuan's MS Presentation
===============================================
Candidate: Xin Yuan

Date: Thursday, December 08, 2022

Time: 11 am CST

Location: JCL 298

M.S. Paper Title: Efficient Network Training by Growing

Abstract: Large deep networks have achieved impressive performance on various machine learning tasks.  Yet, the training process is usually compute-intensive.  Efficient and accurate training techniques have therefore become increasingly important for modern intelligent systems.  We propose two methods that progressively and dynamically grow neural networks, jointly optimizing architectures and parameters.
First, we develop an approach to growing deep network architectures over the course of training, driven by a principled combination of accuracy and sparsity objectives.  Unlike existing pruning or architecture search techniques that operate on full-sized models or supernet architectures, our method can start from a small, simple seed architecture and dynamically grow and prune both layers and filters.  By combining a continuous relaxation of discrete network structure optimization with a scheme for sampling sparse subnetworks, we produce compact, pruned networks, while also drastically reducing the computational expense of training.
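To make the idea concrete, below is a minimal, illustrative sketch in a PyTorch-style setup (an assumption for exposition; this is not the paper's implementation).  Each hidden unit carries a learnable gate, training adds an L1 sparsity penalty on the gates as a continuous relaxation of the discrete keep/drop decision, and a periodic step prunes units whose gates have collapsed while growing fresh ones.

    import torch
    import torch.nn as nn

    class GrowableMLP(nn.Module):
        """One gated hidden layer whose width can change during training."""
        def __init__(self, in_features, hidden, num_classes):
            super().__init__()
            self.hidden = nn.Linear(in_features, hidden)
            self.gates = nn.Parameter(torch.ones(hidden))  # relaxed keep/drop variables
            self.head = nn.Linear(hidden, num_classes)

        def forward(self, x):
            h = torch.relu(self.hidden(x)) * self.gates    # gate each hidden unit
            return self.head(h)

    def loss_fn(model, x, y, sparsity_weight=1e-2):
        # Accuracy term plus an L1 penalty pushing unneeded gates toward zero.
        return (nn.functional.cross_entropy(model(x), y)
                + sparsity_weight * model.gates.abs().sum())

    @torch.no_grad()
    def grow_and_prune(model, threshold=1e-2, grow=4):
        """Remove hidden units whose gates collapsed; append `grow` fresh ones."""
        keep = (model.gates.abs() > threshold).nonzero(as_tuple=True)[0]
        n_keep, n_new = len(keep), len(keep) + grow

        new_hidden = nn.Linear(model.hidden.in_features, n_new)
        new_hidden.weight[:n_keep] = model.hidden.weight[keep]
        new_hidden.bias[:n_keep] = model.hidden.bias[keep]

        new_head = nn.Linear(n_new, model.head.out_features)
        new_head.weight.zero_()                            # zero-init columns for new units
        new_head.weight[:, :n_keep] = model.head.weight[:, keep]
        new_head.bias.copy_(model.head.bias)

        new_gates = torch.ones(n_new)
        new_gates[:n_keep] = model.gates[keep]

        model.hidden, model.head = new_hidden, new_head
        model.gates = nn.Parameter(new_gates)

Zero-initializing the head columns for newly grown units leaves the network function unchanged at the moment of growth; the optimizer must be rebuilt after each structural change.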
Second, we develop another approach for efficient network growing, within which parameterization and optimization strategies are designed by considering their effects on the training dynamics.  Unlike existing growing methods, which follow simple replication heuristics or utilize auxiliary gradient-based local optimization, we craft a parameterization scheme that dynamically stabilizes weight, activation, and gradient scaling as the architecture evolves, and maintains the inference functionality of the network.  To address the optimization difficulty caused by the imbalance in training effort across subnetworks that fade in at different growth phases, we propose a learning rate adaptation mechanism that rebalances the gradient contributions of these separate subcomponents.  Our method achieves comparable or better accuracy than training large fixed-size models, while saving a substantial portion of the original computation budget for training.
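As a rough illustration of the learning rate adaptation idea only (the rebalancing rule below is an assumption chosen for exposition, not the rule from the paper, and the parameterization scheme itself is not shown), newly grown parameters can be registered as their own optimizer group tagged with the step at which they appeared, and each group's learning rate scaled in proportion to the training steps it missed.

    import torch

    def add_growth_phase(optimizer, new_params, base_lr, step_added):
        """Register newly grown parameters as their own group, tagged with when they appeared."""
        optimizer.add_param_group({
            "params": list(new_params),
            "lr": base_lr,
            "step_added": step_added,
        })

    def rebalance_lrs(optimizer, base_lr, total_steps):
        """Scale each group's lr by the fraction of training it will actually receive."""
        for group in optimizer.param_groups:
            remaining = max(total_steps - group.get("step_added", 0), 1)
            group["lr"] = base_lr * total_steps / remaining  # later groups learn faster

    # Usage: a seed model grows a hypothetical new block at step 500 of 2000.
    model = torch.nn.Linear(16, 16)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    opt.param_groups[0]["step_added"] = 0
    new_block = torch.nn.Linear(16, 16)        # stands in for a newly grown subnetwork
    add_growth_phase(opt, new_block.parameters(), base_lr=0.1, step_added=500)
    rebalance_lrs(opt, base_lr=0.1, total_steps=2000)

The intent is simply that subnetworks fading in late receive a proportionally larger gradient contribution over the remainder of the schedule.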

Advisor: Michael Maire


