[Theory] [Talks at TTIC] 11/12 Young Researcher Seminar Series: Konstantinos Stavropoulos, University of Texas

Brandie Jones via Theory theory at mailman.cs.uchicago.edu
Tue Nov 4 09:00:00 CST 2025


*When:*        Wednesday, November 12th at *10:30am CT*

*Where:*       Talk will be given *live, in-person* at

                       TTIC, 6045 S. Kenwood Avenue

                       5th Floor, Room 530


*Virtually:*  via Panopto (livestream:
<https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=4ff56911-9597-4a72-95f4-b37701267e2d>)


*Who:*          Konstantinos Stavropoulos, University of Texas at Austin

*Title:*          Efficient Learning Algorithms under (Heavy) Contamination
*Abstract:* In this talk, I will present a series of new results in
supervised learning from contaminated datasets, based on a general outlier
removal algorithm inspired by recent work on learning with distribution
shift. Specifically:

- We will show that any function class that can be approximated by
low-degree polynomials with respect to a hypercontractive distribution can
be efficiently learned under bounded contamination (also known as nasty
noise). This resolves a longstanding gap between the complexity of agnostic
learning and learning with contamination, even though it was widely
believed that low-degree approximators implied tolerance only to label
noise.

- For any function class that admits the (stronger) notion of sandwiching
approximators, we obtain near-optimal learning guarantees even with respect
to heavy additive contamination, where far more than half of the training
set may be added adversarially. Prior related work held only for regression
and in a list-decodable setting.
These results significantly advance our understanding of efficient
supervised learning under contamination, a setting that has been much less
studied than its unsupervised counterpart. As a notable application, our
framework yields the first quasipolynomial-time algorithm for learning
constant-depth circuits (AC⁰) under bounded contamination, extending the
seminal result of Linial, Mansour, and Nisan on learning AC⁰ under
adversarial label noise.

*Bio*: Kostas Stavropoulos is a fifth-year Ph.D. student in Computer
Science at the University of Texas at Austin, advised by Prof. Adam
Klivans. He is a 2025 Apple Scholar in AI/ML. His research focuses on the
theory of machine learning, particularly on designing efficient algorithms
with provable guarantees under minimal—or at least verifiable—assumptions,
especially in challenging settings such as learning under distribution
shift and data contamination.

*Host: Nati Srebro <nati at ttic.edu>*

*Brandie Jones *
*Executive **Administrative Assistant*
*Outreach Administrator *
Toyota Technological Institute
6045 S. Kenwood Avenue
Chicago, IL  60637
www.ttic.edu

