[Theory] IDEAL 3/19 Quarterly Theory Workshop: Algorithms and Their Social Impact

Mary Marre mmarre at ttic.edu
Sun Mar 14 15:56:41 CDT 2021


Upcoming 3/19 Workshop:
Quarterly Theory Workshop: Algorithms and Their Social Impact
*About the Series*

The Quarterly Theory Workshop brings in theoretical computer science
experts to present their perspectives and research on a common theme.
Chicago area researchers with interest in theoretical computer science are
invited to attend.  The technical program is in the morning and includes
coffee and lunch (on your own).  The afternoon of the workshop will allow
for continued discussion between attendees and the speakers.

Part of the IDEAL Special Quarter on Data Science and Law
<https://northwestern.us20.list-manage.com/track/click?u=3fc8e0df393510ea0a5b018e1&id=0e03fd98d4&e=5a7c74841c>.

*Synopsis*

The focus of this workshop
<https://northwestern.us20.list-manage.com/track/click?u=3fc8e0df393510ea0a5b018e1&id=37455d3ddb&e=5a7c74841c>
will be on the societal impacts of algorithms. From designing self-driving
cars to selecting the order of news posts on Facebook to automating credit
checks, the use of algorithms for decision making is now commonplace. It is
therefore more important than ever to make fairness a central consideration
in the design of these algorithms, to prevent unwanted bias and prejudice.
The speakers for this workshop are Rakesh Vohra
<https://northwestern.us20.list-manage.com/track/click?u=3fc8e0df393510ea0a5b018e1&id=754c3a9156&e=5a7c74841c>,
Michael Kearns
<https://northwestern.us20.list-manage.com/track/click?u=3fc8e0df393510ea0a5b018e1&id=7d83ce1552&e=5a7c74841c>,
Samira Samadi
<https://northwestern.us20.list-manage.com/track/click?u=3fc8e0df393510ea0a5b018e1&id=02714b839a&e=5a7c74841c>,
Steven Wu
<https://northwestern.us20.list-manage.com/track/click?u=3fc8e0df393510ea0a5b018e1&id=eb95f86adb&e=5a7c74841c>,
and Suresh Venkatasubramanian
<https://northwestern.us20.list-manage.com/track/click?u=3fc8e0df393510ea0a5b018e1&id=ec4e12416c&e=5a7c74841c>.

*Logistics*

   - *Date: *Friday, March 19, 2021
   - *Location:* Virtual (on Gather.Town and Zoom). Further details to come.
   - *Registration:* Registration
   <https://northwestern.us20.list-manage.com/track/click?u=3fc8e0df393510ea0a5b018e1&id=0bb4a10463&e=5a7c74841c>
   is free but required. The login information will be sent to your email
   address.

*Schedule*

   - *10:30 – 11:00: *Michael Kearns
   - *11:00 – 11:30: *Samira Samadi
   - *11:30 – 11:40: *break
   - *11:40 – 12:10: *Steven Wu
   - *12:10 – 1:00: *lunch break
   - *1:00 – 1:30: *Suresh Venkatasubramanian
   - *1:30 – 2:00: *Rakesh Vohra
   - *2:00 – 2:30: *discussion


*Titles and Abstracts*

*Title: *Between Group and Individual Fairness for Machine Learning
*Speaker: *Michael Kearns (UPENN)
*Abstract:* We will overview recent research that interpolates between
group fairness definitions (which have nice algorithmic properties but only
blunt fairness guarantees) and individual fairness definitions (which have
strong individual semantics but poor algorithmic properties). We describe
algorithms enforcing fairness notions lying between these extremes, as well
as recent strengthenings of group fairness, such as minimax and
lexicographic fairness. A common theme is the use of connections between
game theory and machine learning as an algorithm design principle.
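
To make the contrast concrete, here is a minimal Python sketch (illustrative
only, not the speakers' algorithms): a demographic-parity gap as the group
fairness statistic, and a Lipschitz-style pairwise check as the individual
fairness criterion. The metric, threshold, and synthetic data are assumptions
made for the example.

    # Illustrative sketch only -- not the speakers' algorithms.
    import numpy as np

    def demographic_parity_gap(y_pred, group):
        """Group fairness: spread of positive-prediction rates across groups."""
        rates = [y_pred[group == g].mean() for g in np.unique(group)]
        return max(rates) - min(rates)

    def individual_fairness_violations(X, scores, metric, lipschitz=1.0):
        """Individual fairness: similar individuals should get similar scores.
        Counts pairs with |score_i - score_j| > L * d(x_i, x_j)."""
        violations = 0
        for i in range(len(X)):
            for j in range(i + 1, len(X)):
                if abs(scores[i] - scores[j]) > lipschitz * metric(X[i], X[j]):
                    violations += 1
        return violations

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))                 # synthetic individuals
    group = rng.integers(0, 2, size=50)          # synthetic group labels
    scores = rng.uniform(size=50)                # synthetic model scores
    print(demographic_parity_gap(scores > 0.5, group))
    print(individual_fairness_violations(X, scores,
                                         lambda a, b: np.linalg.norm(a - b)))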

*Title: *Socially Fair k-Means Clustering
*Speaker:* Samira Samadi (MPI)
*Abstract: *We show that the popular k-means clustering algorithm (Lloyd’s
heuristic) can result in outcomes that are unfavorable to subgroups of
data (e.g., demographic groups). Such biased clusterings can have
deleterious implications for human-centric applications such as resource
allocation. We present a fair 𝑘-means objective and algorithm to choose
cluster centers that provide equitable costs for different groups. The
algorithm, Fair-Lloyd, is a modification of Lloyd’s heuristic for 𝑘-means,
inheriting its simplicity, efficiency, and stability. In comparison with
standard Lloyd’s, we find that on benchmark datasets, Fair-Lloyd exhibits
unbiased performance by ensuring that all groups have equal costs in the
output 𝑘-clustering, while incurring a negligible increase in running
time, thus making it a viable fair option wherever 𝑘-means is currently
used.
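
For intuition, the sketch below is a simplified reconstruction from the
abstract (not the authors' code): it evaluates the socially fair objective,
i.e., the maximum average cost over groups, and in the update step blends the
per-group centroids of each cluster. The exact one-dimensional subproblem
that Fair-Lloyd solves is replaced here by a crude line search, and two
groups labeled 0 and 1 are assumed.

    # Simplified reconstruction from the abstract, not the authors' code.
    import numpy as np

    def group_costs(X, group, centers):
        """Average squared distance to the nearest center, per group."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(1)
        return np.array([d2[group == g].mean() for g in (0, 1)])

    def fair_lloyd_sketch(X, group, k, iters=15, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)].copy()
        for _ in range(iters):
            # Standard Lloyd assignment step.
            assign = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
            for c in range(k):
                pts, grp = X[assign == c], group[assign == c]
                if len(pts) == 0:
                    continue
                # Per-group centroids; the fair update blends them so that
                # no group bears a disproportionate share of the cost.
                mus = [pts[grp == g].mean(0) if (grp == g).any() else pts.mean(0)
                       for g in (0, 1)]

                def max_cost(a):
                    trial = centers.copy()
                    trial[c] = a * mus[0] + (1 - a) * mus[1]
                    return group_costs(X, group, trial).max()

                # Crude line search over the blend weight (Fair-Lloyd solves
                # this subproblem exactly).
                a = min(np.linspace(0, 1, 21), key=max_cost)
                centers[c] = a * mus[0] + (1 - a) * mus[1]
        return centers

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(4, 1, (20, 2))])
    group = np.array([0] * 60 + [1] * 20)
    print(group_costs(X, group, fair_lloyd_sketch(X, group, k=2)))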

*Title: *Involving Stakeholders in Building Fair ML Systems
*Speaker: *Steven Wu (CMU)
*Abstract: *Recent work in fair machine learning has proposed dozens of
technical definitions of algorithmic fairness and methods for enforcing
these definitions. However, we still lack a comprehensive understanding of
how to develop machine learning systems with fairness criteria that reflect
relevant stakeholders’ nuanced viewpoints in real-world contexts. This talk
will cover our recent work that aims to address this gap. We will first
discuss an algorithmic framework that enforces the individual fairness
criterion through interactions with a human auditor, who can identify
fairness violations without enunciating a fairness (similarity) measure. We
then discuss an empirical study on how to elicit stakeholders’ fairness
notions in the context of a child maltreatment predictive system.
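
As a toy illustration of that interaction (the simulated auditor and the
repair rule below are this example's assumptions, not the paper's protocol),
a learner can respond to pairwise violation flags without ever being given an
explicit similarity measure:

    # Hedged sketch of an auditor-in-the-loop; not the paper's protocol.
    import numpy as np

    def audit(X, scores, radius=0.3, min_gap=0.1):
        """Stand-in auditor: flags the worst pair of nearby points whose
        scores differ substantially, without stating a similarity metric."""
        worst, pair = min_gap, None
        for i in range(len(X)):
            for j in range(i + 1, len(X)):
                gap = abs(scores[i] - scores[j])
                if np.linalg.norm(X[i] - X[j]) < radius and gap > worst:
                    worst, pair = gap, (i, j)
        return pair

    rng = np.random.default_rng(3)
    X, scores = rng.normal(size=(40, 2)), rng.uniform(size=40)
    for _ in range(25):                          # interaction rounds
        pair = audit(X, scores)
        if pair is None:                         # no violation found
            break
        i, j = pair
        scores[i] = scores[j] = (scores[i] + scores[j]) / 2  # learner repairs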

*Title: *The Limits of Shapley Values as a Method for Explaining the
Predictions of an ML System
*Speaker: *Suresh Venkatasubramanian (Univ. of Utah)
*Abstract: *One of the more pressing concerns around the deployment of ML
systems is explainability: can we understand why an ML system made the
decision that it did? This question can be unpacked in a variety of ways,
and one approach that has become popular is the idea of feature influence:
that we can assign a score to features that represents their (relative)
influence on an outcome (either locally for a particular input, or globally).

One of the most influential of such approaches has been one based on
cooperative game theory, where features are modeled as “players” and
feature influence is captured as “player contribution” via the Shapley
value of a game. The argument is that the axiomatic framework provided by
Shapley values is well-aligned with the needs of an explanation system.

But is it? I’ll talk about two pieces of work that nail down mathematical
deficiencies of Shapley values as a way of estimating feature influence and
quantify the limits of Shapley values via a fascinating geometric
interpretation that comes with interesting algorithmic challenges.
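
For readers new to the setup, here is a self-contained illustration of
Shapley-value feature attribution (the baseline-substitution value function
and the toy linear model are assumptions for the example). Note that the
exact computation enumerates all feature coalitions, so it is exponential in
the number of features.

    # Shapley-value feature attribution: features are "players" and the
    # value of a coalition S is the model's output with the features
    # outside S held at a baseline. Illustrative example, not the talk's.
    from itertools import combinations
    from math import factorial
    import numpy as np

    def shapley_values(model, x, baseline):
        d = len(x)

        def value(S):
            z = baseline.copy()
            z[list(S)] = x[list(S)]       # coalition S plays; rest at baseline
            return model(z)

        phi = np.zeros(d)
        for i in range(d):
            others = [j for j in range(d) if j != i]
            for r in range(d):
                for S in combinations(others, r):
                    w = factorial(r) * factorial(d - r - 1) / factorial(d)
                    phi[i] += w * (value(S + (i,)) - value(S))
        return phi

    # Toy linear model: the Shapley value of feature i comes out to
    # w_i * (x_i - baseline_i), and the attributions sum to
    # model(x) - model(baseline), i.e., the efficiency axiom.
    w = np.array([2.0, -1.0, 0.5])
    model = lambda z: float(w @ z)
    x, base = np.array([1.0, 3.0, -2.0]), np.zeros(3)
    phi = shapley_values(model, x, base)
    print(phi, phi.sum(), model(x) - model(base))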

*Title: *Fair Prediction with Endogenous Behavior
*Speaker: *Rakesh Vohra (UPENN)
*Abstract: *There is increasing regulatory interest in whether machine
learning algorithms deployed in consequential domains (e.g. in criminal
justice) treat different demographic groups “fairly.” Several proposed
notions of fairness, typically mutually incompatible, have been examined in
settings where the behavior being predicted is treated as exogenous.

Using criminal justice as a setting where the behavior being predicted can
be endogenous, we study a model in which society chooses an incarceration
rule. Agents of different demographic groups differ in their outside
options (e.g. opportunity for legal employment) and decide whether to
commit crimes. We show that equalizing type I and type II errors across
groups is consistent with the goal of minimizing the overall crime rate;
other popular notions of fairness are not.
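
Concretely, "equalizing type I and type II errors across groups" asks for
equal false positive and false negative rates. The snippet below simply
computes those rates per group on synthetic data; the data and predictor are
illustrative, and the talk's model is game-theoretic rather than statistical.

    # Per-group type I (false positive) and type II (false negative) rates.
    import numpy as np

    def error_rates(y_true, y_pred, group):
        out = {}
        for g in np.unique(group):
            t, p = y_true[group == g], y_pred[group == g]
            fpr = ((p == 1) & (t == 0)).sum() / max((t == 0).sum(), 1)
            fnr = ((p == 0) & (t == 1)).sum() / max((t == 1).sum(), 1)
            out[g] = (fpr, fnr)
        return out

    rng = np.random.default_rng(2)
    y_true = rng.integers(0, 2, 200)             # synthetic outcomes
    group = rng.integers(0, 2, 200)              # synthetic group labels
    y_pred = (y_true ^ (rng.random(200) < 0.2)).astype(int)  # noisy predictor
    for g, (fpr, fnr) in error_rates(y_true, y_pred, group).items():
        print(f"group {g}: type I = {fpr:.2f}, type II = {fnr:.2f}")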

Hope to see you all there virtually!
Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue*
*Room 517*
*Chicago, IL  60637*
*p: (773) 834-1757*
*f: (773) 357-6970*
*mmarre at ttic.edu*