[Colloquium] 1/29 Talks at TTIC: Travis Dick, Carnegie Mellon University

Mary Marre via Colloquium colloquium at mailman.cs.uchicago.edu
Thu Jan 24 17:26:45 CST 2019


When:     Tuesday, January 29th at *11:00 am*

Where:    TTIC, 6045 S Kenwood Avenue, 5th Floor, Room 526

Who:       Travis Dick, Carnegie Mellon University


*Title:* Machine Learning: Social Values, Data Efficiency, and Beyond
Prediction

*Abstract:* In this talk I will discuss two recent research projects, both
of which extend the theory and practice of machine learning to accommodate
modern requirements of learning systems. These projects focus on
requirements stemming from two sources: applying machine learning to
problems beyond standard prediction, and incorporating social values into
learning systems.


Beyond Standard Prediction Problems: While most machine learning focuses on
making predictions, there are learning problems where the output of the
learner is not a prediction rule. We focus on data-driven algorithm
configuration, where the goal is to find the best algorithm parameters for
a specific application domain. We consider this problem in two new learning
settings: the online setting, where problems are chosen by an adversary and
arrive one at a time, and the private setting, where problems encode
sensitive information. Algorithm configuration often reduces to maximizing
a collection of piecewise Lipschitz functions. In both online and private
settings, optimization is impossible in the worst case. Our main
contribution is a condition, called dispersion, that allows for meaningful
regret bounds and utility guarantees. We also show that dispersion is
satisfied for many problems under mild assumptions.
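The online setting described above can be sketched with a standard discretization approach: run an exponential-weights learner over a finite grid of candidate parameters, sampling a parameter each round and upweighting parameters whose (piecewise Lipschitz) reward was high. This is a simplification for illustration only; the grid, reward functions, and step size below are assumptions, not details from the talk, and the dispersion condition is what justifies a fine grid approximating the continuous optimum.

```python
import math
import random

def exponential_weights(param_grid, reward_fns, eta):
    """Online maximization over a discretized parameter grid.

    param_grid: candidate parameter values (illustrative discretization).
    reward_fns: reward functions arriving one at a time (adversarial order).
    eta:        learning rate for the multiplicative update.
    Returns the total reward accumulated by the sampled parameters.
    """
    weights = [1.0] * len(param_grid)
    total_reward = 0.0
    for f in reward_fns:
        # Sample a parameter with probability proportional to its weight.
        threshold = random.random() * sum(weights)
        acc, idx = 0.0, 0
        for i, w in enumerate(weights):
            acc += w
            if threshold <= acc:
                idx = i
                break
        total_reward += f(param_grid[idx])
        # Multiplicative update: upweight parameters that did well this round.
        weights = [w * math.exp(eta * f(p))
                   for w, p in zip(weights, param_grid)]
    return total_reward
```

Under dispersion, the discontinuities of the piecewise Lipschitz rewards do not concentrate, so a fine enough grid of this form can achieve meaningful regret; without such a condition, worst-case instances defeat any online learner.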


Social Values: Machine learning is becoming central to the infrastructure
of our society, and as a result it often learns from our personal data and
makes predictions about our behavior. When these predictions have
significant consequences, we may want to design our learning systems in
ways that make it easier to uphold social values such as fairness and
privacy. The second part of this talk will focus on a new notion of
individual fairness for machine learning called envy-freeness, which is
suited to learning problems with many possible outcomes over which
individuals have heterogeneous preferences. Roughly speaking, a classifier
is envy-free if no individual prefers the prediction made for another
individual over their own. In this work, we study the generalization
properties of envy-freeness, providing conditions under which a classifier
that appears envy-free on a sample is also envy-free on the underlying
distribution.
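On a finite sample, the envy-freeness condition reduces to a direct pairwise check: individual i must value their own assigned outcome at least as much as the outcome assigned to any other individual j. The `assigned`/`utility` representation below is an illustrative assumption, not notation from the talk.

```python
def is_envy_free(assigned, utility):
    """Check envy-freeness of a classifier's outcomes on a sample.

    assigned[i]   : index of the outcome the classifier gives individual i.
    utility[i][o] : individual i's utility for outcome o
                    (hypothetical representation, for illustration).
    Envy-free: no i prefers the outcome assigned to some j over their own.
    """
    n = len(assigned)
    return all(
        utility[i][assigned[i]] >= utility[i][assigned[j]]
        for i in range(n)
        for j in range(n)
    )
```

The generalization question in the talk is then: under what conditions does passing this check on a sample imply the same property (approximately) over the underlying population distribution?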




*Host:*  Avrim Blum <avrim at ttic.edu>




Mary C. Marre
Administrative Assistant
*Toyota Technological Institute*
*6045 S. Kenwood Avenue*
*Room 517*
*Chicago, IL  60637*
*p:(773) 834-1757*
*f: (773) 357-6970*
*mmarre at ttic.edu <mmarre at ttic.edu>*