[Colloquium] TTI-C Colloquium: Ali Rahimi, Intel

Julia MacGlashan macglashan at tti-c.org
Mon Nov 10 08:50:00 CST 2008


When:             TODAY: Monday, November 10 @ 2:00pm


Where:            TTI-C Conference Room: 1427 E. 60th St, 2nd Floor


Who:                Ali Rahimi, Intel


Title:                 Random Features: Replacing Optimization with Randomization in Learning


Training modern supervised learning models like weighted sums of kernels (as
in the Kernelized SVM) and ensembles of weak learners (as in AdaBoost)
typically requires carrying out a meticulous optimization over a large
number of parameters. But there is a much simpler way: instead of optimizing
over all the parameters, I propose to randomize over most of the parameters
and then carry out a much cheaper optimization over the rest. A theoretical
analysis of this Random Features trick, using the concentration of measure
phenomenon in Banach spaces, guarantees that it is almost as good as
carrying out the full optimization. The empirical performance is even
more surprising: on moderate-sized datasets (~60,000 examples), we get
speedups of three orders of magnitude with no loss in accuracy, and we can
train on datasets with millions of examples in a few minutes.
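
(For readers unfamiliar with the trick, below is a minimal sketch of one
instance of it: random Fourier features approximating the Gaussian (RBF)
kernel, followed by a cheap linear ridge-regression solve. The feature
count, bandwidth sigma, and penalty lam are illustrative assumptions, not
values from the talk.)

import numpy as np

rng = np.random.default_rng(0)

# Randomize over most of the parameters: draw random frequencies W and
# phases b once, so that z(x) = sqrt(2/D) * cos(W^T x + b) satisfies
# z(x) . z(y) ~= exp(-||x - y||^2 / (2 * sigma**2)), the RBF kernel.
d, D, sigma = 5, 500, 1.0   # input dim, number of random features, bandwidth
W = rng.normal(scale=1.0 / sigma, size=(d, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def features(X):
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Toy regression data.
X = rng.normal(size=(1000, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000)

# Carry out the much cheaper optimization over the rest: a D-dimensional
# ridge-regression solve in the random feature space, instead of fitting
# a kernel machine over the full n-by-n kernel matrix.
Z = features(X)
lam = 1e-3
w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)

# Predict on new points using the same fixed random features.
y_pred = features(rng.normal(size=(10, d))) @ w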


I'll also briefly mention other machine learning and vision projects at
Intel's Berkeley and Seattle lablets, including a real-time object
recognition system, a large-scale 3D reconstruction of Seattle, data-reduction
tricks to speed up large-scale clustering, theoretical guarantees
for kernel machines when the kernel is not positive definite, and an
analysis of the execution of a faulty CPU using online learning bounds.


Contact:          Nati Srebro, TTI-C         nati at tti-c.org         834-7493

