[Colloquium] Harrison/MS Presentation/Nov 5, 2019

Margaret Jaffey margaret at cs.uchicago.edu
Tue Oct 22 14:37:49 CDT 2019


This is an announcement of Galen Harrison's MS Presentation.

------------------------------------------------------------------------------
Date:  Tuesday, November 5, 2019

Time:  11:00 AM

Place:  John Crerar Library 390

M.S. Candidate:  Galen Harrison

M.S. Paper Title: Towards Understanding Decisions about Fairness in
Machine Learning

Abstract:
There are many competing definitions of what statistical properties
make a machine learning model fair. Unfortunately, research has shown
that some key properties are mutually exclusive. Realistic
models are thus necessarily imperfect, reflecting a decision to favor
one side of a trade-off over the other. Current work in fair and
transparent machine learning has so far neglected these decisions.
In particular, current data science workflow tools are not well suited
to identifying and characterizing these decisions.
Furthermore, it is not currently clear what the best practices are for
visualizing these decisions in an unambiguous and non-manipulative
form. Understanding perceptions of fairness, namely whether
participants understood and had opinions about the trade-off and, if
so, what those opinions were, is a first step towards these goals. To this end, I
describe a study into perceptions of fairness in realistic, imperfect
models. In the study, my coauthors and I had participants compare two
models for deciding whether to grant bail to criminal defendants. The
first model equalized one potentially desirable model property (with
the other property varying across racial groups). The second model did
the opposite. We observed a preference among participants for
equalizing the false positive rate between groups over equalizing
accuracy. Nonetheless, no preferences were overwhelming, and both
sides of each trade-off we tested were strongly preferred by a
non-trivial fraction of participants. We observed nuanced
distinctions between participants considering a model "unbiased"
and considering it "fair." Furthermore, even when a
model within a trade-off pair was seen as fair and unbiased by a
majority of participants, we did not observe consensus that a machine
learning model was preferable to a human judge. Our findings suggest
that while there is a strong interest in these kinds of decisions,
future work in this area should try to surface contentious issues,
rather than produce consensus.
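
As a concrete illustration of the trade-off the study examines (this
sketch is not from the paper; the groups, labels, and numbers are
hypothetical), the following Python computes per-group false positive
rate and accuracy, showing a case where the former is equal across
groups while the latter is not:

# Minimal sketch (not from the paper): per-group false positive rate
# and accuracy for a binary classifier. The data are hypothetical,
# chosen so that FPR is equal across groups while accuracy is not,
# i.e., the kind of trade-off the study asked participants about.

def rates(y_true, y_pred):
    """Return (false positive rate, accuracy) for binary labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return fp / negatives, correct / len(y_true)

# Hypothetical outcomes (1 = predicted/actual reoffense) for two groups.
group_a_true = [0, 0, 0, 0, 1, 1, 1, 1]
group_a_pred = [1, 0, 0, 0, 1, 1, 1, 1]  # 1 false positive in 4 negatives
group_b_true = [0, 0, 0, 0, 1, 1, 1, 1]
group_b_pred = [1, 0, 0, 0, 0, 0, 1, 1]  # same FPR, but more missed positives

for name, t, p in [("A", group_a_true, group_a_pred),
                   ("B", group_b_true, group_b_pred)]:
    fpr, acc = rates(t, p)
    print(f"group {name}: FPR={fpr:.2f}, accuracy={acc:.2f}")

Running this prints equal FPRs (0.25 for both groups) but unequal
accuracies (0.88 vs. 0.62), so a model equalizing one property can
still differ on the other across groups.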

Galen's advisor is Prof. Blase Ur

Log in to the Computer Science Department website for details:
 https://newtraell.cs.uchicago.edu/phd/ms_announcements#harrisong

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Margaret P. Jaffey            margaret at cs.uchicago.edu
Department of Computer Science
Student Support Rep (Ry 156)               (773) 702-6011
The University of Chicago      http://www.cs.uchicago.edu
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

