[Colloquium] Guest Speakers @ TTI-C Next Week (3/20/06 - 3/24/06)

Katherine Cumming kcumming at tti-c.org
Fri Mar 17 14:17:13 CST 2006


 
**********TTI-C Guest Speakers (2) Next Week***********
                      March 20 - March 24, 2006
        Presented by:  Toyota Technological Institute at Chicago
 
(1)
 
Speaker: Amit Deshpande, MIT
Speaker's home page: http://web.mit.edu/amitd/www/
 
Date: Wednesday, March 22, 2006 
Location: TTI-C Conference Room, part of the Theory Seminar
Time: 12:40 pm
Title: Sampling techniques in low-rank matrix approximation
Abstract:
We consider the problem of approximating a given m by n matrix A by another
matrix of rank k, where k is much smaller than m and n.  The "best" such
approximation can be computed using the Singular Value Decomposition (SVD),
but that takes time O(min{mn^2, nm^2}), which is too large for certain
applications.
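
For readers who want to experiment, here is a minimal sketch (my own code,
not part of the talk) of the "best" rank-k approximation via SVD in
Python/numpy; its cubic cost is exactly what the sampling methods below are
designed to avoid:

import numpy as np

def best_rank_k(A, k):
    # Eckart-Young: the best rank-k approximation (in Frobenius or
    # spectral norm) keeps the top k singular triples of A.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A = np.random.randn(100, 50)
print(np.linalg.matrix_rank(best_rank_k(A, 5)))  # prints 5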
Frieze-Kannan-Vempala (FKV) showed that from a small sample of the rows of A
one can compute a low-rank approximation which is (in expectation) only an
additive error worse than the "best" low-rank approximation.  This can be
turned into a randomized algorithm that computes such an additive low-rank
approximation in time "linear" in the number of non-zero entries of A (a toy
sketch of this sampling scheme follows the abstract).  In principle, though,
this additive error is unbounded relative to the error of the "best"
low-rank approximation.  Can we instead get a low-rank approximation within
a multiplicative error?
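
To make the two guarantees concrete (my notation, not the abstract's):
writing A_k for the best rank-k approximation and \tilde{A} for the sampled
one, FKV give an additive bound, whereas the goal here is a multiplicative
one:

\|A - \tilde{A}\|_F^2 \le \|A - A_k\|_F^2 + \epsilon \|A\|_F^2   (additive, FKV)

\|A - \tilde{A}\|_F^2 \le (1 + \epsilon) \|A - A_k\|_F^2   (multiplicative)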
Our results are the following.
1. Two different generalizations (Adaptive Sampling and Volume Sampling) of
the sampling technique of FKV.
2. A multiplicative approximation using almost the same number of rows as
the FKV result needs for an additive approximation.
3. An algorithm, based on these techniques, that computes a multiplicative
low-rank approximation in "linear" time.
(This is joint work with Luis Rademacher, Santosh Vempala, and Grant Wang.) 
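
For the curious, here is a rough sketch (my own code, assuming the standard
FKV "length-squared" row sampling; not the speaker's implementation) of an
additive-error low-rank approximation computed from a small sample of rows:

import numpy as np

def fkv_low_rank(A, k, s, rng=np.random.default_rng(0)):
    # Sample s rows with probability proportional to their squared
    # length, rescaled so that S^T S approximates A^T A in expectation.
    p = (A * A).sum(axis=1)
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    S = A[idx] / np.sqrt(s * p[idx])[:, None]
    # The top-k right singular vectors of the small sampled matrix S ...
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    V = Vt[:k].T
    # ... span a subspace onto which we project A.
    return (A @ V) @ V.T

A = np.random.randn(200, 80)
A_hat = fkv_low_rank(A, k=5, s=40)
print(np.linalg.norm(A - A_hat, 'fro'))  # additive-error rank-5 approximation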
 
(2)
 
Speaker:  Hal Daume III, USC
Speaker's home page: http://www.isi.edu/~hdaume/ 
 
Date: Thursday, March 23, 2006 
Location: TTI-C Conference Room
Time:  10:00 am
 
Title:   Flexible Machine Learning for Hard Language Problems
Abstract:
Natural language processing abounds with hard prediction problems for which
complex outputs are sought. In machine translation, one aims to produce a
coherent translated sentence; in document summarization, an entire short
document is required. Effective models for these and other problems rely
heavily on approximate search methods in order to find the best possible
output. Unfortunately, the fact that search is used in the final output
production is rarely taken into account when machine learning methods are
conceived and employed. This leads to complex algorithms with few
theoretical guarantees about performance on unseen test data.

I present a machine learning approach that directly solves "structured
prediction" problems by considering formal techniques that reduce structured
prediction to simple binary classification, within the context of search.
This reduction is error-limiting: it provides theoretical guarantees about
the performance of the structured prediction model on unseen test data. It
also lends itself to novel training methods for structured prediction
models, yielding efficient learning algorithms that perform well in
practice. I empirically evaluate this approach on two tasks: entity
detection and tracking, and automatic document summarization.
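
As a heavily simplified illustration (my own toy code, not the speaker's
system): in greedy left-to-right sequence labeling, each position becomes
one classification example whose features may include the labels predicted
so far; this is the sense in which structured prediction reduces to simple
classification inside search. The feature map and toy task below are
hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

def features(tokens, i, prev_label):
    # Hypothetical toy feature map: does the token start with a vowel,
    # and what label was assigned to the previous position?
    return [int(tokens[i][0] in "aeiou"), prev_label]

def make_examples(sentences, labelings):
    # One classification example per search step, trained here on the
    # gold history (pure imitation; the talk's approach goes further).
    X, y = [], []
    for toks, labs in zip(sentences, labelings):
        prev = -1
        for i, lab in enumerate(labs):
            X.append(features(toks, i, prev))
            y.append(lab)
            prev = lab
    return np.array(X), np.array(y)

def greedy_decode(clf, toks):
    # At test time the classifier conditions on its own predictions.
    out, prev = [], -1
    for i in range(len(toks)):
        lab = int(clf.predict([features(toks, i, prev)])[0])
        out.append(lab)
        prev = lab
    return out

sents = ["the cat ate".split(), "an old owl".split()]
labs = [[0, 0, 1], [1, 1, 1]]  # toy task: 1 = token starts with a vowel
X, y = make_examples(sents, labs)
clf = LogisticRegression().fit(X, y)
print(greedy_decode(clf, "the owl ate".split()))  # expect [0, 1, 1]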
----------------------------------------------------------------------------------
If you have questions, or would like to meet the speaker, please contact
Katherine at 773-834-1994 or kcumming at tti-c.org.   
For information on future TTI-C talks and events, please go to the TTI-C
Events page:  http://www.tti-c.org/events.html.  TTI-C (1427 East 60th
Street, Chicago, IL  60637)
 
 
 