[Colloquium] TOMORROW: 12/17 Talks at TTIC: Idan Attias, IDEAL/UIC/TTIC Postdoc
Mary Marre via Colloquium
colloquium at mailman.cs.uchicago.edu
Tue Dec 16 17:28:27 CST 2025
*When:* Wednesday, December 17, 2025, at *1:00 pm CT*
*Where:* Talk will be given *live, in person* at
TTIC, 6045 S. Kenwood Avenue
5th Floor, Room 530
*Virtually:* via Panopto
<https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=efa598a9-48cc-4171-af5d-b3af018640f5>
*Who:* Idan Attias, IDEAL Institute/UIC/TTIC Postdoc
*Title:*
Computational Learning Theory: Hardness of Learning Regular Expressions and
Tractable Learning under Distribution Shift
*Abstract:*
I will discuss two recent papers in computational learning theory.
In the first part, I will present new hardness results for learning regular
expressions, showing that PAC learning them is computationally intractable,
even under the uniform distribution on the hypercube, and that membership
queries do not appear to help. These results do not follow from existing
hardness results for learning DFAs or NFAs, since the descriptive
complexity of regular languages can differ exponentially between DFAs,
NFAs, and regular expressions.
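As a concrete illustration of these exponential gaps (a standard textbook example, not taken from the talk): the language of strings over {a, b} whose n-th symbol from the end is 'a' admits an (n+1)-state NFA, yet any DFA for it must track the last n symbols and so needs 2^n states. A minimal Python sketch of the subset construction confirming the blowup:

```python
def nfa_nth_from_end(n):
    """NFA for: the n-th symbol from the end is 'a' (alphabet {a, b}).

    State 0 loops on both symbols and nondeterministically guesses,
    on reading 'a', that this is the n-th-from-last symbol; states
    1..n then count the remaining symbols. Total: n + 1 states.
    """
    delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}}
    for i in range(1, n):
        for c in 'ab':
            delta[(i, c)] = {i + 1}
    return delta, {0}, {n}  # transitions, start states, accepting states

def count_dfa_states(delta, start, _accept):
    """Subset construction; returns the number of reachable DFA states.

    Accepting states are irrelevant for counting, so they are ignored.
    """
    seen = {frozenset(start)}
    stack = [frozenset(start)]
    while stack:
        subset = stack.pop()
        for c in 'ab':
            succ = frozenset(q for s in subset
                             for q in delta.get((s, c), ()))
            if succ not in seen:
                seen.add(succ)
                stack.append(succ)
    return len(seen)

# The (n+1)-state NFA determinizes to exactly 2^n reachable states.
for n in range(1, 9):
    assert count_dfa_states(*nfa_nth_from_end(n)) == 2 ** n
```

The reachable DFA states here are exactly the subsets recording which of the last n symbols were 'a', and all 2^n of them are pairwise distinguishable, so no smaller DFA exists. This is why hardness results for DFAs or NFAs do not automatically transfer to regular expressions, as the abstract notes.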
In the second part, I will present a new learning framework called Positive
Distribution Shift (PDS). Here, the goal is to learn a target function with
respect to a distribution D, but the training samples come from a different
distribution D'. While distribution shift is usually viewed as harmful, we
show that with a well-chosen D', the shift can be beneficial and make
learning computationally easier, even for standard gradient-based learners.
We formalize several variants of PDS, demonstrate how certain
computationally hard classes become tractable under an appropriate choice
of D', and draw connections to classical ideas from membership-query
learning.
Based on joint work with Lev Reyzin, Nati Srebro, Gal Vardi, Marko
Medvedev, Elisabetta Cornacchia, Theodor Misiakiewicz, and Nader Bshouty.
*Bio:*
Idan Attias is a postdoctoral researcher at the IDEAL Institute, working
with Lev Reyzin (UIC), Nati Srebro, and Avrim Blum (TTIC). His research
focuses on the foundations of machine learning theory and data-driven
sequential decision-making. His work has been recognized with a Best Paper
Award at ICML ’24, selection as a Rising Star in Data Science (UCSD ’24),
and several Oral and Spotlight presentations at top machine learning venues.
*Host:* *Avrim Blum* <avrim at ttic.edu>
Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue, Rm 517*
*Chicago, IL 60637*
*773-834-1757*
mmarre at ttic.edu