[Colloquium] [CDAC] January 25 - Olga Russakovsky (Princeton)

Rob Mitchum rmitchum at uchicago.edu
Mon Jan 18 10:55:15 CST 2021


*CDAC Distinguished Speaker Series*


*Olga Russakovsky*
*Assistant Professor, Computer Science*
*Princeton University*
*Fairness in Visual Recognition*

*Monday, January 25th*
*3:00 p.m. - 4:00 p.m.*
*Zoom (RSVP for login
<https://www.eventbrite.com/e/cdac-distinguished-speaker-series-olga-russakovsky-princeton-tickets-129939214689>)
or YouTube <https://youtu.be/82spfonv9DA> (no registration required)*

*Abstract*: Computer vision models trained on unparalleled amounts of data
hold promise for making impartial, well-informed decisions in a variety of
applications. However, more and more historical societal biases are making
their way into these seemingly innocuous systems. We focus our attention
on bias in the form of inappropriate correlations between visual protected
attributes (age, gender expression, skin color, …) and the predictions of
visual recognition models, as well as any unintended discrepancy in error
rates of vision systems across different social, demographic or cultural
groups. In this talk, we’ll dive deeper both into the technical reasons and
the potential solutions for bias in computer vision. I’ll highlight our
recent work addressing bias in visual datasets (FAT*2020
<http://image-net.org/filtering-and-balancing/>; ECCV 2020
<https://github.com/princetonvisualai/revise-tool>), in visual models (CVPR
2020 <https://arxiv.org/abs/1911.11834>; under review
<https://arxiv.org/abs/2012.01469>) as well as in the makeup of AI
leadership <http://ai-4-all.org>.

*Bio*: Dr. Olga Russakovsky is an Assistant Professor in the Computer
Science Department at Princeton University. Her research is in computer
vision, closely integrated with the fields of machine learning,
human-computer interaction and fairness, accountability and transparency.
She received AnitaB.org’s Emerging Leader Abie Award in honor
of Denice Denton in 2020, the CRA-WP Anita Borg Early Career Award in 2020,
the MIT Technology Review’s 35-under-35 Innovator award in 2017, the PAMI
Everingham Prize in 2016 and Foreign Policy Magazine’s 100 Leading Global
Thinkers award in 2015. In addition to her research, she co-founded and
continues to serve on the Board of Directors of the AI4ALL foundation
dedicated to increasing diversity and inclusion in Artificial Intelligence
(AI). She completed her PhD at Stanford University in 2015 and her
postdoctoral fellowship at Carnegie Mellon University in 2017.

*Part of the CDAC Winter 2021 Distinguished Speaker Series:
<https://cdac.uchicago.edu/news/announcing-the-cdac-winter-2021-distinguished-speaker-series/>*

*Bias Correction: Solutions for Socially Responsible Data Science*
Security, privacy and bias in the context of machine learning are often
treated as binary issues, where an algorithm is either biased or fair,
ethical or unjust. In reality, adopting these technologies involves
tradeoffs that can open up new privacy and security risks. Researchers are developing
innovative tools that navigate these tradeoffs by applying advances in
machine learning to societal issues without exacerbating bias or
endangering privacy and security. The CDAC Winter 2021 Distinguished
Speaker Series will host interdisciplinary researchers and thinkers
exploring methods and applications that protect user privacy, prevent
malicious use, and avoid deepening societal inequities — while diving into
the human values and decisions that underpin these approaches.


-- 
*Rob Mitchum*

*Associate Director of Communications for Data Science and Computing*
*University of Chicago*
*rmitchum at uchicago.edu*
*773-484-9890*