[Colloquium] Reminder: Kolchinski/MS Presentation/June 2, 2014

Margaret Jaffey margaret at cs.uchicago.edu
Fri May 30 09:34:09 CDT 2014


This is a reminder about Alex Kolchinski's MS Presentation on Monday at 11:00 am.

----------------------------------------------------------
Department of Computer Science
The University of Chicago

Date:  Monday, June 2, 2014

Time:  11:00 am 

Place:  Ryerson 277

Bx/MS Candidate:  Alex Kolchinski

MS Paper Title:  Convergence of Multi-Perceptron Classifiers and Support Vector Machines in Response to Training Data Features 

Abstract:
In supervised learning, linear classifiers like support vector machines (SVMs) and perceptrons learn how to classify training points by adapting vectors of weights or synapses to reflect the underlying structure of the training data. Significant research has been conducted into the stochastic methods that are typically used to train linear classifiers, but the way that the classifier weights actually adapt to training data has been much less thoroughly investigated. In short, there is no easy intuitive explanation for how SVMs draw hyperplanes separating disparate classes’ clusters of points, or how a probabilistically trained perceptron’s synapse weights converge to respond to the structure of a class of inputs. In this paper, I present an investigation of how a multi-perceptron classifier’s synapse weights converge to reflect training data, and how their convergence compares to that of an SVM’s hyperplane weights on the same data. Using both artificially generated vectors of Bernoulli-distributed random variables and handwritten digit data from the MNIST database, experiments show that perceptron synapses and SVM weights typically converge to values that differ from those which would be expected for the optimal Bayes-rule classifier. As perceptrons trained with a Hebbian-type learning rule can be interpreted in some ways to model the function of biological neurons, this analysis of their convergence patterns may be useful in furthering the understanding of how the brain adapts to reflect sensory data. In addition, a better understanding of the underlying functioning of perceptrons is likely to be useful with a view to designing better learning algorithms in the future.
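[Editor's illustrative sketch, not from the paper: the contrast the abstract draws, between weights learned by a Hebbian-style update and the weights of the optimal Bayes-rule classifier, can be seen even in a toy setting. The dimension, learning rate, decay term, and Bernoulli parameters below are all arbitrary assumptions, and the update rule is a generic Hebbian-style rule, not necessarily the one used in the paper.]

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
p0 = rng.uniform(0.1, 0.4, d)  # class-0 Bernoulli feature probabilities
p1 = rng.uniform(0.6, 0.9, d)  # class-1 Bernoulli feature probabilities

# Bayes-optimal linear weights for independent Bernoulli features:
# the per-feature log-odds difference between the two classes.
w_bayes = np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))

# Generic Hebbian-style training: strengthen synapses from active inputs
# toward the correct class, with a small decay pulling weights to zero.
w_hebb = np.zeros(d)
eta = 0.01
for _ in range(5000):
    y = rng.integers(0, 2)
    x = (rng.random(d) < (p1 if y else p0)).astype(float)
    w_hebb += eta * ((1.0 if y else -1.0) * x - 0.1 * w_hebb)

# The two weight vectors are correlated but not proportional: the Hebbian
# weights track mean activity differences rather than log-odds.
cos = w_hebb @ w_bayes / (np.linalg.norm(w_hebb) * np.linalg.norm(w_bayes))
print(round(cos, 3))
```

In this toy run the cosine similarity between the two weight vectors is high but the vectors are not scalar multiples of one another, illustrating the abstract's point that trained weights typically converge to values differing from the Bayes-optimal ones.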

Alex's MS advisor:  Professor Yali Amit

A draft copy of Alex's MS paper is available in Ry 156.

----------------------------------------------------------

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Margaret Jaffey
margaret at cs.uchicago.edu
Department of Computer Science
Student Affairs Administrator
Ryerson 156
773-702-6011
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

_______________________________________________
cs mailing list  -  cs at mailman.cs.uchicago.edu
https://mailman.cs.uchicago.edu/mailman/listinfo/cs