[Theory] TOMORROW: 7/29 Thesis Defense: Kevin Stangl, TTIC

Mary Marre via Theory theory at mailman.cs.uchicago.edu
Sun Jul 28 16:08:32 CDT 2024


*When*:   Monday, July 29th from 11:30am - 12:30pm CT

*Where*:  Talk will be given *live, in-person* at
              TTIC, 6045 S. Kenwood Avenue
              5th Floor, *Room 530*

*Virtually*: via *Zoom*
<https://us02web.zoom.us/j/82622819147?pwd=ubre2fCUgmj4kuOX20j45v6IwyioJU.1>


*Who*:    Kevin Stangl, TTIC



*Title*:  Fairness, Accuracy, and Unreliable Data
*Abstract:* A theme throughout my thesis is understanding how, and how to
respond when, a "plain" empirical risk minimization algorithm becomes
misleading or ineffective because of a train-test distribution mismatch
caused by biased data, strategic behavior, or adversarial data corruptions.
The overarching research goal for these related topics is to provide a
crisp mathematical model for each learning scenario that exposes different
failure modes and makes trade-offs explicit.

In my defense, I will survey all of my completed research and dive deeply
into two papers that study a fundamental question in fairness in machine
learning: how effectively, or ineffectively, a range of fairness constraints
recover from biased and adversarial corruptions in the training data.
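
To make the train-test mismatch in the abstract concrete, here is a small
illustrative sketch (this is not code from the thesis; the data model and
names such as make_data and flip_rate are hypothetical assumptions): a
"plain" ERM learner, here a logistic regression, is fit to training labels
that have been biased against one group and then evaluated against unbiased
test labels, so one can inspect how the bias shifts accuracy and the
per-group positive prediction rates.

# Illustrative sketch only: plain ERM trained on group-biased labels,
# evaluated on unbiased labels. Data model and names are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, flip_rate=0.0):
    """Synthetic data: a group attribute and a latent score determine the label."""
    group = rng.integers(0, 2, size=n)               # protected attribute (0 or 1)
    score = rng.normal(loc=group * 0.5, scale=1.0)   # latent qualification score
    y = (score > 0.25).astype(int)                   # ground-truth label
    # Bias: flip some of group 1's positive labels to negative in training data.
    flip = (group == 1) & (y == 1) & (rng.random(n) < flip_rate)
    y_observed = np.where(flip, 0, y)
    X = np.column_stack([score, group])
    return X, y, y_observed

X_train, _, y_train_biased = make_data(5000, flip_rate=0.4)
X_test, y_test_true, _ = make_data(5000, flip_rate=0.0)

erm = LogisticRegression().fit(X_train, y_train_biased)  # plain ERM on biased labels
print("accuracy vs. unbiased test labels:", erm.score(X_test, y_test_true))
g1 = X_test[:, 1] == 1
print("positive rate, group 1:", erm.predict(X_test[g1]).mean())
print("positive rate, group 0:", erm.predict(X_test[~g1]).mean())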

*Committee: *Avrim Blum (chair), Madhur Tulsiani, Ali Vakilian, and Juba
Ziani (Georgia Tech)



Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue, Rm 517*
*Chicago, IL  60637*
*773-834-1757*
*mmarre at ttic.edu*

