[Colloquium] Learning Theory / Machine Learning Program Talks TODAY: Bickel & Steinwart & Zhou

Katherine Cumming kcumming at tti-c.org
Mon May 16 08:52:47 CDT 2005


TOYOTA TECHNOLOGICAL INSTITUTE TALKS TODAY (3)
Guest Speaker 
Speaker:  Peter Bickel, UC Berkeley
Speaker's homepage: http://stat-www.berkeley.edu/users/bickel/
 
Time:  Monday, May 16th at 2:00pm 
Location: International House-University of Chicago
Assembly Hall
 
Title: On the borders of Statistics and Computer Science
Abstract: Machine learning in computer science and prediction and
classification in statistics are essentially equivalent fields. I will
try to illustrate the relation between theory and practice in this huge
area with a few examples and results. In particular, I will try to
address an apparent puzzle: worst-case analyses, using empirical
process theory, seem to suggest that even for moderate data dimension
and reasonable sample sizes, good prediction (supervised learning)
should be very difficult. On the other hand, practice seems to indicate
that even when the number of dimensions is much higher than the number
of observations, we can often do very well. We will also discuss a new
method of dimension estimation and some features of cross-validation.
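 
For readers who want to see the puzzle concretely, here is a small
NumPy sketch (purely illustrative, written for this announcement and
not taken from the talk). It resolves the puzzle the way many practical
data sets do, via low intrinsic dimension; the talk may argue
differently. The model, sizes, and parameter values are all invented.

import numpy as np

rng = np.random.default_rng(0)
n, p, k = 50, 1000, 5           # 50 samples, 1000 ambient dims, 5 intrinsic
Z = rng.normal(size=(n, k))     # hidden low-dimensional coordinates
V = rng.normal(size=(k, p))     # embedding into the high-dimensional space
X = Z @ V + 0.01 * rng.normal(size=(n, p))
gamma = np.array([1.0, -1.0, 0.5, 0.0, 2.0])
y = Z @ gamma + 0.1 * rng.normal(size=n)

def ridge_fit(X, y, lam):
    # Dual (kernel) form w = X^T (X X^T + lam I)^{-1} y:
    # an n-by-n solve, cheap when n << p.
    alpha = np.linalg.solve(X @ X.T + lam * np.eye(len(y)), y)
    return X.T @ alpha

def cv_error(X, y, lam, folds=5):
    # Plain k-fold cross-validation estimate of prediction error.
    idx = np.arange(len(y))
    err = 0.0
    for fold in np.array_split(idx, folds):
        train = np.setdiff1d(idx, fold)
        w = ridge_fit(X[train], y[train], lam)
        err += np.mean((X[fold] @ w - y[fold]) ** 2)
    return err / folds

lams = [1e-2, 1e0, 1e2, 1e4, 1e6]
best = min(lams, key=lambda lam: cv_error(X, y, lam))
w = ridge_fit(X, y, best)

Zt = rng.normal(size=(200, k))  # fresh test data, same low-dim model
Xt = Zt @ V + 0.01 * rng.normal(size=(200, p))
yt = Zt @ gamma + 0.1 * rng.normal(size=200)
print("chosen lambda:", best)
print("test MSE:", np.mean((Xt @ w - yt) ** 2), " var(y):", np.var(yt))

Because the thousand ambient coordinates hide only five degrees of
freedom, the cross-validated fit predicts far better than a naive
worst-case count of dimensions would suggest.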
 
Guest Speaker 
Speaker:  Ingo Steinwart, Los Alamos National Laboratory
Speaker's homepage:  http://www.c3.lanl.gov/~ingo/
Time:  Monday, May 16th at 3:00pm
Location: International House-University of Chicago
Assembly Hall
 
Title: Some Aspects of Learning Rates for SVMs
Abstract: We present some learning rates for support vector machine
classification. In particular, we discuss a recently proposed geometric
noise assumption that makes it possible to bound the approximation
error for Gaussian RKHSs. Furthermore, we show how a noise assumption
proposed by Tsybakov can be used to obtain learning rates between
$1/\sqrt{n}$ and $1/n$. Finally, we describe the influence of the
approximation error on the overall learning rate.
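 
For context, one textbook form of Tsybakov's noise condition and the
kind of interpolated rate it yields are sketched below (these are the
standard exponents from Tsybakov's work on aggregation; the talk's
precise assumptions and rates may differ). With
$\eta(x) = P(Y = 1 \mid X = x)$, the condition requires, for some
$C > 0$ and noise exponent $q \ge 0$,

$$ P_X\big( |2\eta(x) - 1| \le t \big) \le C\, t^{q}
   \qquad \text{for all } t > 0. $$

Under such an assumption, the excess classification risk can decay like

$$ \mathcal{R}(f_n) - \mathcal{R}^{*} = O\big( n^{-(q+1)/(q+2)} \big), $$

which equals $1/\sqrt{n}$ at $q = 0$ (no assumption) and approaches
$1/n$ as $q \to \infty$ (labels bounded away from pure noise).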
 
Guest Speaker 
Speaker:  Ding-Xuan Zhou, City University of Hong Kong
Speaker's homepage:  http://www6.cityu.edu.hk/ma/people/dxzhou.html
Time: Monday, May 16th at 4:30pm 
Location:  International House-University of Chicago
 
Title: Learning Variable Covariances via Gradients
Abstract: Learning theory studies the learning of functional relations
from samples. In this talk we are interested in learning function
gradients from sample values. A least-squares-type learning algorithm
based on Tikhonov regularization in reproducing kernel Hilbert spaces
is proposed. We show, with error bounds, that the output function
converges to the gradient of the regression function as the sample size
becomes large. Hence variable selection and estimation of covariation
can be expected. An efficient method is provided to reduce the size of
the linear system when the number of variables is much larger than the
sample size. Some applications of our algorithm to gene expression
analysis will be mentioned. This is joint work with Sayan Mukherjee.
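 
Below is a rough NumPy sketch of a least-squares gradient-learning
objective with Tikhonov regularization in an RKHS, written from the
abstract alone: the exact objective, pair weights, and the
linear-system reduction of the actual algorithm may differ, and the toy
data, kernel widths, and regularization parameter are invented.

import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 4
X = rng.uniform(-1, 1, size=(n, p))
# Regression function depends on the first two variables only.
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.05 * rng.normal(size=n)

def gauss(A, B, s):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

K = gauss(X, X, 1.0)   # RKHS kernel Gram matrix
W = gauss(X, X, 0.75)  # locality weights for sample pairs
lam = 1e-3

# Model: vector-valued f(x) = sum_k K(x, x_k) c_k with c_k in R^p,
# so f(x) estimates the gradient of the regression function at x.
# First-order Taylor residual for the pair (i, j):
#   f(x_i) . (x_j - x_i) - (y_j - y_i)
A = np.zeros((n * n, n * p))
b = np.zeros(n * n)
w = np.zeros(n * n)
for i in range(n):
    for j in range(n):
        r = i * n + j
        d = X[j] - X[i]
        for k in range(n):
            A[r, k * p:(k + 1) * p] = K[i, k] * d
        b[r] = y[j] - y[i]
        w[r] = W[i, j]

# Tikhonov term ||f||_K^2 = vec(C)^T (K kron I_p) vec(C).
M = np.kron(K, np.eye(p))
lhs = A.T @ (w[:, None] * A) + lam * M
rhs = A.T @ (w * b)
C = np.linalg.solve(lhs, rhs).reshape(n, p)

# Per-variable magnitude of the learned gradient field; the first two
# coordinates should dominate, suggesting variable selection.
print(np.round(np.linalg.norm(K @ C, axis=0), 2))

The representer coefficients contribute one p-vector per sample, so the
naive linear system has size np; the reduction mentioned in the
abstract is exactly what one needs when the number of variables far
exceeds the sample size (the toy sizes here keep the naive solve
feasible).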
 
-----------------------------------------------------------------
If you have questions, or would like to meet a speaker, please contact
Katherine at 773-834-1994 or kcumming at tti-c.org. For information on
future TTI-C talks and events, please see the TTI-C Events page:
http://www.tti-c.org/events.html. TTI-C (1427 East 60th Street,
Chicago, IL  60637)