[Theory] Fwd: Seminar by Aditya Bhaskara at 11am Friday (4th June) on "Iterative Greedy Algorithms for Matrix Approximation"

Madhur Tulsiani madhurt at ttic.edu
Fri Jun 4 00:39:52 CDT 2021


Today's talk may be of interest to many…

Madhur 


Begin forwarded message:

> From: Aravindan Vijayaraghavan <v.aravindan at gmail.com>
> Date: June 4, 2021 at 7:56:36 AM GMT+5:30
> To: Varun <varun9upta at gmail.com>, Madhur Tulsiani <madhur.tulsiani at gmail.com>, Yury Makarychev <yury at ttic.edu>, Avrim Blum <avrim at ttic.edu>, Ali Vakilian <vakilian at mit.edu>, Jinshuo Dong <djs.pku at gmail.com>, rad at cs.stanford.edu
> Subject: Fwd: Seminar by Aditya Bhaskara at 11am Friday (4th June) on "Iterative Greedy Algorithms for Matrix Approximation"
> 
> 
> Hi all, 
> 
> Aditya Bhaskara (Asst. Prof., University of Utah) is giving a theory seminar talk at 11am CT on June 4th. The talk should appeal to anyone working in algorithms and/or ML theory; details are included below. Please feel free to forward this to others who may be interested.
> 
> Thanks,
> Aravindan
> -----------------------------------
> 
> FRIDAY / Theory Seminar  
> June 4 / 11:00 am (PLEASE NOTE THE TIME DIFFERENCE)  
> 
> “Iterative Greedy Algorithms for Matrix Approximation”
> Aditya Bhaskara, University of Utah
>  
> Zoom: 
> https://northwestern.zoom.us/j/96460224042?pwd=T3RSc1lRS2NqeUJDdFo2Z3BEajI5dz09
> Meeting ID: 964 6022 4042
> Passcode: 179234
> 
> Abstract:
> Approximating a matrix using "simpler" matrices is one of the fundamental problems in data analysis. The classic singular value decomposition (SVD) gives a way of obtaining the low-rank matrix that best approximates a given matrix in terms of Frobenius error. While an SVD can be computed efficiently, many natural variants turn out to be NP-hard. For instance, adding a weight to each entry results in the so-called weighted low-rank approximation problem. Requiring the approximation to lie in the span of a small number of _columns_ of the matrix leads to the column subset selection problem. Adding a sparsity constraint leads to the dictionary learning problem.
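> 
> [Editor's note: as a quick illustration of the SVD fact above, here is a minimal numpy sketch, not material from the talk; the matrix and the rank k are arbitrary placeholders. It checks the Eckart-Young statement that the rank-k truncated SVD attains a Frobenius error equal to the discarded squared singular values.]
> 
>     import numpy as np
> 
>     rng = np.random.default_rng(0)
>     A = rng.standard_normal((50, 30))   # arbitrary data matrix
>     k = 5                               # target rank
> 
>     # Truncated SVD: keep only the top-k singular triplets.
>     U, s, Vt = np.linalg.svd(A, full_matrices=False)
>     A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]
> 
>     # Eckart-Young: the squared Frobenius error of the best rank-k
>     # approximation equals the sum of the discarded squared singular values.
>     err = np.linalg.norm(A - A_k, "fro") ** 2
>     print(err, np.sum(s[k:] ** 2))      # the two values agree up to rounding
> 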
> In the talk, I will outline a simple greedy framework, inspired by the Frank-Wolfe method in optimization, that gives approximation guarantees for all of these problems. The resulting algorithms tend to be efficient and are also robust to the presence of outlier columns in the matrix.
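> 
> [Editor's note: to give a feel for iterative greedy schemes of this flavor, here is a generic greedy column subset selection sketch. It is an illustrative placeholder, not the specific algorithm from the talk: each round picks the column that explains the most of the current residual, then projects that column out.]
> 
>     import numpy as np
> 
>     def greedy_column_subset(A, k):
>         """Greedily pick k column indices of A (illustrative sketch only).
> 
>         Each round selects the column whose normalized residual captures
>         the most of the current residual matrix R, then projects R onto
>         the orthogonal complement of that column.
>         """
>         chosen = []
>         R = A.astype(float)                 # residual after projecting out picks
>         for _ in range(k):
>             norms = np.linalg.norm(R, axis=0)
>             norms[norms < 1e-12] = np.inf   # ignore (near-)exhausted columns
>             # score_j = ||R^T r_j|| / ||r_j||: residual mass captured by column j
>             scores = np.linalg.norm(R.T @ R, axis=0) / norms
>             scores[chosen] = -np.inf        # never re-pick a column
>             j = int(np.argmax(scores))
>             chosen.append(j)
>             q = R[:, j] / np.linalg.norm(R[:, j])
>             R = R - np.outer(q, q @ R)      # R <- (I - q q^T) R
>         return chosen
> 
>     # Usage: approximate A within the span of the 4 selected columns.
>     rng = np.random.default_rng(1)
>     A = rng.standard_normal((40, 20))
>     S = greedy_column_subset(A, 4)
>     C = A[:, S]
>     X = np.linalg.lstsq(C, A, rcond=None)[0]
>     print(S, np.linalg.norm(A - C @ X, "fro"))
> 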
> 
> Biography:
> Aditya Bhaskara is an Assistant Professor of Computer Science at the University of Utah. He received a Ph.D. in Computer Science from Princeton University and a Bachelor of Technology in Computer Science and Engineering from the Indian Institute of Technology Bombay, in Mumbai, India. Before joining Utah, he was a postdoctoral researcher at Google Research NYC and at EPFL, Switzerland. He has served on the program committees of conferences such as the IEEE Symposium on Foundations of Computer Science (FOCS) and the ACM-SIAM Symposium on Discrete Algorithms (SODA). He is a recipient of a Google Faculty Research Award and the NSF CAREER Award.
> 