[Colloquium] [TTIC Talks] 1/18 Talks at TTIC: John Wright, CMU

Madhur Tulsiani madhurt at ttic.edu
Mon Jan 18 09:09:24 CST 2016


Just a reminder - this talk is TODAY at 11 am. There will be someone near
the front entrance to let you in, in case you do not have access to the
building.

Madhur

On Wed, Jan 13, 2016 at 2:50 PM, Mary Marre <mmarre at ttic.edu> wrote:

> When:     Monday, January 18th at 11:00 am
>
> Where:    TTIC, 6045 S Kenwood Avenue, 5th Floor, Room 526
>
> Who:       John Wright, CMU
>
> Title:     Random Words, Longest Increasing Subsequences, and Quantum PCA
>
>
> Abstract:
> Suppose you have access to i.i.d. samples from an unknown probability
> distribution $p = (p_1, \ldots, p_d)$ on $[d]$, and you want to learn or test
> something about it.  For example, if you want to estimate $p$ itself, then
> the empirical distribution will suffice when the number of samples, $n$, is
> $O(d/\epsilon^2)$.  In general, you can ask many more specific questions
> about $p$ (Is it close to some known distribution $q$?  Does it have high
> entropy?  And so on), and for many of these questions the optimal sample
> complexity has only been determined over the last $10$ years in the
> computer science literature.
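> The $O(d/\epsilon^2)$ claim above can be checked with a short simulation;
> the distribution and sample sizes below are arbitrary illustrative choices,
> not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
p = rng.dirichlet(np.ones(d))  # an arbitrary "unknown" distribution on [d]

def empirical_estimate(p, n, rng):
    """Draw n i.i.d. samples from p and return the empirical distribution."""
    counts = np.bincount(rng.choice(len(p), size=n, p=p), minlength=len(p))
    return counts / n

# The total-variation error shrinks roughly like sqrt(d/n), so
# n = O(d/eps^2) samples suffice for TV error eps.
for n in [100, 10_000, 1_000_000]:
    q = empirical_estimate(p, n, rng)
    print(n, 0.5 * np.abs(p - q).sum())
```

> The printed total-variation distances drop by roughly a factor of $10$
> each time $n$ grows by $100$, consistent with the $\sqrt{d/n}$ rate.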
>
> The natural quantum version of these problems involves being given samples
> of an unknown $d$-dimensional quantum mixed state $\rho$, which is a $d
> \times d$ PSD matrix with trace $1$. Many questions from learning and
> testing probability carry over naturally to this setting.  In this talk, we
> will focus on the most basic of these questions: how many samples of $\rho$
> are necessary to produce a good approximation of it?  Our main result is an
> algorithm for learning $\rho$ with optimal sample complexity.  Furthermore,
> in the case when $\rho$ is almost low-rank, we show how to perform PCA on
> it with optimal sample complexity.
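> To make the objects concrete: a mixed state is a PSD matrix with unit
> trace, and the PCA in question keeps its top eigenvectors. A minimal
> numerical sketch (illustrative only; the talk's algorithm works from
> samples of $\rho$, not from $\rho$ itself):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 6, 2

# An arbitrary example mixed state: A A^dagger, normalized to trace 1,
# is automatically PSD with unit trace.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Rank-k truncation: keep the top-k eigenpairs. This is the classical
# operation that a quantum PCA procedure aims to approximate from samples.
evals, evecs = np.linalg.eigh(rho)  # eigenvalues in ascending order
rho_k = evecs[:, -k:] @ np.diag(evals[-k:]) @ evecs[:, -k:].conj().T

print(np.trace(rho_k).real)  # fraction of the state's mass in the top k
```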
>
> Surprisingly, we are able to reduce the analysis of our algorithm to
> questions dealing with the combinatorics of longest increasing subsequences
> (LISes) in random words.  In particular, the main technical question we
> have to solve is this: given a random word $w \in [d]^n$, where each letter
> $w_i$ is drawn i.i.d. from some distribution $p$, what do we expect
> $\mathrm{LIS}(w)$ to be? Answering this question requires diversions into
> the RSK algorithm, representation theory of the symmetric group, the theory
> of symmetric polynomials, and many other interesting areas.
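> For words with repeated letters, $\mathrm{LIS}(w)$ is usually taken to
> mean the longest weakly increasing (non-decreasing) subsequence, which
> the classic patience-sorting method computes in $O(n \log n)$ time. An
> illustrative sketch (not the talk's analysis; the word below is an
> arbitrary example):

```python
import bisect
import random

def lis_length(w):
    """Length of the longest weakly increasing (non-decreasing)
    subsequence of w, via patience sorting in O(n log n) time."""
    tails = []  # tails[k] = smallest possible last letter of a
                # weakly increasing subsequence of length k + 1
    for x in w:
        i = bisect.bisect_right(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

# A random word with n letters drawn i.i.d. from a distribution p on [d]:
d, n = 5, 1000
p = [0.4, 0.3, 0.15, 0.1, 0.05]  # arbitrary example distribution
random.seed(0)
w = random.choices(range(d), weights=p, k=n)
print(lis_length(w))
```

> Taking every occurrence of the single most likely letter already gives a
> weakly increasing subsequence of expected length $n \max_i p_i$, so the
> printed value is at least of that order.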
>
> This is joint work with Ryan O'Donnell.
>
>
>
> Host: Madhur Tulsiani, madhurt at ttic.edu
>
>
>
>
> Mary C. Marre
> Administrative Assistant
> Toyota Technological Institute
> 6045 S. Kenwood Avenue
> Room 504
> Chicago, IL 60637
> p: (773) 834-1757
> f: (773) 357-6970
> mmarre at ttic.edu
>

