[Colloquium] Reminder: Zheng/Dissertation Defense/Jul 14, 2017

Margaret Jaffey via Colloquium colloquium at mailman.cs.uchicago.edu
Thu Jul 13 09:14:47 CDT 2017


This is a reminder about Qinqing's defense tomorrow.

       Department of Computer Science/The University of Chicago

                     *** Dissertation Defense ***


Candidate:  Qinqing Zheng

Date:  Friday, July 14, 2017

Time:  3:00 PM

Place:  Ryerson 277

Title: First Order Methods for Nonconvex Optimization via Symmetric
Factorization

Abstract:
A growing body of recent research is shedding new light on the role of
nonconvex optimization for tackling large-scale problems in machine
learning, signal processing, and convex programming.

In this thesis, we discuss three problems that can be characterized by
estimating a semidefinite matrix. We develop nonconvex reformulations
of them by decomposing the semidefinite variable into structured
symmetric factors, and investigate first-order methods for optimizing
the transformed objectives. One central theme of our methods is to
exploit the structure of the factors so that the number of parameters
is reduced to the intrinsic degrees of freedom.
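As a rough illustration of this parameter reduction (the dimensions below are hypothetical, not taken from the thesis): a rank-$r$ positive semidefinite matrix $X$ can be written as $X = UU^T$ with $U$ of size $n \times r$, so the factored representation carries $nr$ parameters instead of $n^2$, close to the dimension of the rank-$r$ PSD manifold.

```python
# Hypothetical sizes for illustration only.
n, r = 1000, 5

full_params = n * n          # entries of the full matrix X
factored_params = n * r      # entries of the symmetric factor U

# The rank-r PSD matrices of size n x n form a manifold of dimension
# n*r - r*(r-1)/2: U has n*r entries, minus the r*(r-1)/2 dimensions
# of the orthogonal-rotation ambiguity U -> U Q.
intrinsic_dof = n * r - r * (r - 1) // 2
```

Here the factored parameterization uses 5,000 parameters in place of 1,000,000, within an $O(r^2)$ rotation ambiguity of the intrinsic degrees of freedom.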

The first part of this thesis focuses on the low rank structure. We
first consider a rank minimization problem and a closely related
family of semidefinite programs. We propose a simple, scalable, and
fast gradient descent algorithm for the nonconvex objective. With
$O(r^3 \kappa^2 n \log n)$ random measurements of a positive
semidefinite $n \times n$ matrix of rank $r$ and condition number
$\kappa$, our method is guaranteed to converge linearly to the global
optimum. We then address the rectangular matrix completion problem by
lifting the unknown matrix to a positive semidefinite matrix in higher
dimension, and optimizing a nonconvex objective over the semidefinite
factor using a simple gradient descent scheme. With $O(\mu r^2
\kappa^2 n \max(\mu, \log n))$ random observations of an $n_1 \times
n_2$ $\mu$-incoherent matrix of rank $r$ and condition number
$\kappa$, where $n = \max(n_1, n_2)$, the algorithm converges linearly
to the global optimum with high probability.
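A minimal sketch of this style of factored gradient descent, assuming full observation of the matrix rather than the random measurement model analyzed in the thesis (so this is an illustration of the technique, not the thesis's exact algorithm or guarantee):

```python
import numpy as np

# Sketch: gradient descent on the factored objective
#   f(U) = (1/4) * ||U U^T - M||_F^2
# for a rank-r PSD matrix M, optimizing over U in R^{n x r}.
rng = np.random.default_rng(0)
n, r = 30, 3

U_star = rng.standard_normal((n, r))
M = U_star @ U_star.T           # ground-truth PSD matrix of rank r

U = 0.1 * rng.standard_normal((n, r))   # small random initialization
step = 0.01
for _ in range(3000):
    grad = (U @ U.T - M) @ U    # gradient of f at U
    U -= step * grad

rel_err = np.linalg.norm(U @ U.T - M) / np.linalg.norm(M)
```

With full observations this factored objective has no spurious local minima, so plain gradient descent from a small random start drives the relative error to near zero; the thesis's setting replaces the full matrix with $O(r^3 \kappa^2 n \log n)$ random measurements.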

In the second part, we consider the problem of computing the fastest
mixing Markov chain on a given graph. This problem can be cast as
optimizing the eigenvalues of the graph Laplacian matrix. The graph
Laplacian is positive semidefinite and admits a sparse Cholesky
factorization. We develop a variant of nonconvex ADMM that searches
over Cholesky factors with a fixed sparsity pattern. Empirical
investigations confirm the convergence of this approach.
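To make the connection concrete, here is a small sketch (not the thesis's ADMM method) that builds a symmetric Markov chain from a weighted graph Laplacian and evaluates its mixing rate via the second-largest eigenvalue modulus (SLEM); the edge weights below are illustrative, not optimized:

```python
import numpy as np

def laplacian(n_nodes, edges, weights):
    """Weighted graph Laplacian: L = sum_e w_e (e_i - e_j)(e_i - e_j)^T."""
    L = np.zeros((n_nodes, n_nodes))
    for (i, j), w in zip(edges, weights):
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    return L

def slem(P):
    """Second-largest eigenvalue modulus of a symmetric stochastic matrix."""
    eig = np.sort(np.abs(np.linalg.eigvalsh(P)))
    return eig[-2]

# 4-node path graph; P = I - L(w) is a symmetric stochastic transition
# matrix when the weights keep all entries of P nonnegative.
edges = [(0, 1), (1, 2), (2, 3)]
P = np.eye(4) - laplacian(4, edges, [0.5, 0.5, 0.5])

rate = slem(P)   # smaller SLEM means faster mixing
```

The fastest mixing Markov chain problem chooses the edge weights $w$ to minimize this SLEM, which is what makes it an eigenvalue optimization problem over the graph Laplacian.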

While SDPs are usually used as convex surrogate relaxations of
difficult nonconvex problems, the methods proposed in this thesis
approach SDPs through nonconvex reformulation. Our analysis and
experimental results demonstrate that these simple methods can quickly
solve the nonconvex objectives. More importantly, this thesis
indicates that the road between nonconvex and convex approaches is in
fact bidirectional. The transformation between the two types of
methods, and the potential trade-off between statistical and
computational properties, merit further study.

Qinqing's advisor is Prof. John Lafferty

Login to the Computer Science Department website for details,
including a draft copy of the dissertation:

 https://www.cs.uchicago.edu/phd/phd_announcements#qinqing

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Margaret P. Jaffey            margaret at cs.uchicago.edu
Department of Computer Science
Student Support Rep (Ry 156)               (773) 702-6011
The University of Chicago      http://www.cs.uchicago.edu
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

