<html><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; ">DEPARTMENT OF COMPUTER SCIENCE<br><br>UNIVERSITY OF CHICAGO<br><br>Date: Monday, March 3, 2008<br>Time: 2:30 p.m.<br>Place: Ryerson 251, 1100 E. 58th Street<br><br>----------------------------------------------<br><br>Speaker:<span class="Apple-tab-span" style="white-space: pre; ">        </span>Parikshit Gopalan<br><br>From:<span class="Apple-tab-span" style="white-space: pre; ">        </span><span class="Apple-tab-span" style="white-space: pre; ">        </span>University of Washington<br><br>Web page:<span class="Apple-tab-span" style="white-space: pre; ">        </span><a href="http://www.cs.washington.edu/homes/parik/">http://www.cs.washington.edu/homes/parik/</a><br><br>Title: Fitting Polynomials to Noisy Data<br><br>Abstract: The problem of finding the polynomial that best fits a noisy data set (polynomial reconstruction) has a long history, dating back to curve-fitting problems studied in the 1800s. In the last two decades, there has been tremendous progress on this problem in computer science, driven by the discovery of powerful new algorithms. These results have spurred exciting new developments in coding theory, computational learning, cryptography, and hardness of approximation. In this talk, we will explore this problem from the perspectives of coding theory and computational learning.<br><br>We begin with an algorithm for decoding a well-studied family of binary error-correcting codes called Reed-Muller codes, which are obtained from low-degree polynomials. The salient feature of this algorithm is that it works even when the number of errors far exceeds the so-called Johnson bound.<br><br>I will present an algorithm for agnostically learning decision trees under the uniform distribution. This is the first polynomial-time algorithm for learning decision trees in a harsh noise model. 
This algorithm solves the reconstruction problem for real polynomials using tools from convex optimization.<br><br>I will also discuss settings where the reconstruction problem seems intractable. We will see evidence that the notorious Noisy Parity problem is hard under the uniform distribution. We will present hardness results suggesting that learning simple concepts with noise is impossible for arbitrary distributions.<br><br>-----------------------------------------------<br><br>Host: Laci Babai<br></body></html>