[Colloquium] 1/27 Talks at TTIC: Mert Gurbuzbalaban, MIT

Mary Marre mmarre at ttic.edu
Wed Jan 20 18:20:24 CST 2016


When:     Wednesday, January 27th at 11:00 am

Where:    TTIC, 6045 S Kenwood Avenue, 5th Floor, Room 526

Who:       Mert Gurbuzbalaban, MIT


Title: Analyzing Complex Systems and Networks: Incremental Optimization and
Robustness



Abstract: Many emerging technologies and systems, including infrastructure
systems (communication, transportation and energy systems) and decision
networks (sensor and robotic networks), require rapid processing of large
amounts of data and involve dynamic interactions that necessitate
robustness to small errors, disturbances or outliers.

Motivated by large-scale data processing in such systems, we first consider
additive-cost convex optimization problems (where each component function
of the sum represents the loss associated with a data block) and propose and
analyze novel incremental gradient algorithms that process component
functions sequentially, one at a time, thus avoiding costly computation
of a full gradient step. We focus on two randomized incremental methods,
Stochastic Gradient Descent (SGD) and Random Reshuffling (RR), which have
been the most widely used optimization methods in machine learning practice
since the fifties. The only difference between the two methods is that RR
samples the component functions without replacement, whereas SGD samples
with replacement. Ample empirical evidence has suggested that RR is faster
than SGD, yet for a long time no successful attempt had been made to explain
and quantify this discrepancy. We provide the first theoretical convergence
rate result of O(1/k^(2s)) for any s in (1/2, 1) (and O(1/k^2) for a novel
bias-removed variant) with probability one for RR, showing its improvement
over the Ω(1/k) rate of SGD and highlighting the mechanism behind this
improvement. Our result relies on a detailed analysis of deterministic
incremental methods and a careful study of random gradient errors. We then
consider deterministic incremental gradient methods with memory and show
that they can achieve a much-improved linear rate using a delayed dynamical
system analysis.
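The with-replacement vs. without-replacement distinction between SGD and RR can be seen in a minimal sketch (an illustration of the sampling schemes, not the speaker's analysis or code; the toy problem, names, and the 1/k step-size schedule are assumptions chosen for clarity):

```python
import numpy as np

# Toy additive cost: f(x) = (1/n) * sum_i 0.5 * (a_i * x - b_i)^2.
# The problem, names, and step sizes here are illustrative assumptions.
rng = np.random.default_rng(0)
n = 50
a = rng.normal(1.0, 0.1, n)
b = rng.normal(2.0, 0.1, n)
x_star = np.sum(a * b) / np.sum(a * a)  # exact minimizer of the toy cost

def grad(i, x):
    # Gradient of the i-th component function.
    return a[i] * (a[i] * x - b[i])

def sgd(epochs):
    # With-replacement sampling: each step draws an index uniformly,
    # so some components may be revisited before others are seen.
    x, k = 0.0, 0
    for _ in range(epochs * n):
        k += 1
        i = rng.integers(n)
        x -= (1.0 / k) * grad(i, x)
    return x

def random_reshuffle(epochs):
    # Without-replacement sampling: reshuffle the indices each epoch,
    # then sweep through every component exactly once.
    x, k = 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            k += 1
            x -= (1.0 / k) * grad(i, x)
    return x

print(abs(sgd(200) - x_star), abs(random_reshuffle(200) - x_star))
```

Both iterations use the same component gradients and step sizes; only the order in which components are visited differs, which is exactly the discrepancy the convergence-rate results above quantify.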



In the second part, we focus on large-scale continuous-time and
discrete-time linear dynamical systems that model various interactions over
complex networks and systems. There are a number of different metrics that
can be used to quantify the robustness of such dynamical systems with
respect to input disturbance, noise or error. Some key metrics are the
H∞ norm and the stability radius of the transfer matrix associated with the
system. Algorithms to compute these metrics exist, but they are impractical
for large-scale complex networks and systems, since they scale poorly with
the problem dimension and do not exploit the sparsity patterns in the
network structure. We
develop and analyze the convergence of a novel scalable algorithm to
approximate both of the metrics for large-scale sparse networks. We then
illustrate the performance of our method on numerical examples and discuss
applications to design optimal control policies for dynamics over complex
networks and systems.
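For intuition, the H∞ norm of a stable continuous-time system (A, B, C, D) is the supremum over real frequencies w of the largest singular value of the transfer matrix G(iw) = C (iwI - A)^(-1) B + D. The brute-force frequency sweep below is only a sketch of this definition on a tiny example; it is precisely the kind of dense computation that becomes impractical at scale and that the talk's algorithm is designed to avoid:

```python
import numpy as np

def hinf_norm_grid(A, B, C, D, freqs):
    # Approximate the H-infinity norm by sweeping a frequency grid and
    # taking the largest spectral norm of G(iw) = C (iwI - A)^-1 B + D.
    n = A.shape[0]
    I = np.eye(n)
    best = 0.0
    for w in freqs:
        G = C @ np.linalg.solve(1j * w * I - A, B) + D
        best = max(best, np.linalg.norm(G, 2))  # largest singular value
    return best

# Small stable example: eigenvalues of A lie in the open left half-plane,
# and G(s) = (s + 2.5) / ((s + 1)(s + 2)), whose peak gain is at w = 0.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
freqs = np.linspace(0.0, 20.0, 2001)
print(hinf_norm_grid(A, B, C, D, freqs))  # -> 1.25, attained at w = 0
```

The grid sweep costs one linear solve per frequency, so its cost grows quickly with the state dimension and ignores any sparsity in A, which is why scalable sparse methods of the kind discussed in the talk are needed for large networks.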

Host: Nathan Srebro, nati at ttic.edu


Mary C. Marre
Administrative Assistant
Toyota Technological Institute
6045 S. Kenwood Avenue
Room 504
Chicago, IL 60637
p: (773) 834-1757
f: (773) 357-6970
mmarre at ttic.edu