[Colloquium] REMINDER: 12/12 TTIC Colloquium: Shaddin Dughmi, USC

Mary Marre via Colloquium colloquium at mailman.cs.uchicago.edu
Thu Dec 11 23:46:15 CST 2025


*When:*        Friday, December 12, 2025 at *11:00 am CT*

*Where:*       Talk will be given *live, in-person* at

                   TTIC, 6045 S. Kenwood Avenue

                   5th Floor, Room 530


*Virtually:*   via Panopto
<https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=ae82a30b-c0d0-4557-9eb9-b3af017cbb81>



*Who:*         Shaddin Dughmi, USC



*Title:* Local Regularizers are Not Transductive Learners

*Abstract:* A central question in learning theory is the following: When a
problem is learnable, what do the associated learning algorithms look like?
The question is particularly salient for multiclass classification, where
"simple" algorithmic templates such as empirical risk minimization (ERM)
and regularization can provably fail even when more intricate learners
succeed. Are there modest generalizations of ERM, perhaps reminiscent of
approaches employed in practice, which are guaranteed to succeed whenever a
multiclass problem is learnable?
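
As a rough illustration of the two templates (notation assumed here, not
taken from the talk): given a hypothesis class H and a labeled sample
(x_1, y_1), ..., (x_n, y_n), ERM and (global) regularization select

    \hat{h} \in \operatorname{argmin}_{h \in H} \sum_{i=1}^{n} \mathbb{1}[h(x_i) \ne y_i]                  (ERM)

    \hat{h} \in \operatorname{argmin}_{h \in H} \sum_{i=1}^{n} \mathbb{1}[h(x_i) \ne y_i] + \psi(h)        (regularization)

where \psi : H \to \mathbb{R} is a complexity penalty fixed in advance,
independently of the data.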

We examine a particularly appealing candidate for such an algorithmic
template: local regularization, a generalization of regularization that
allows model complexity to depend on the location of the unlabeled test
data. We partly resolve an open question raised by Asilis et al. (COLT
2024): whether local regularization suffices to learn all learnable
multiclass problems. We provide a negative answer to this question in the
transductive model of learning. We exhibit a multiclass classification
problem which is learnable in both the transductive and PAC models, yet
cannot be learned transductively by any local regularizer. The
corresponding hypothesis class, and our proof, are based on principles from
cryptographic secret sharing. We outline challenges in extending our
negative result to the PAC model, leaving open the tantalizing possibility
of a PAC/transductive separation with respect to local regularization.
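
A minimal sketch of the object in question (paraphrased informally; the
precise model is in the paper): in the transductive setting the learner
also sees the unlabeled test point x alongside the labeled sample, and a
local regularizer may let its penalty depend on x:

    \hat{h} \in \operatorname{argmin}_{h \in H} \sum_{i=1}^{n} \mathbb{1}[h(x_i) \ne y_i] + \psi_x(h)

so the complexity measure \psi_x : H \to \mathbb{R} varies with the
unlabeled test data but never with its label. The negative result says
that for a suitably constructed class, no family {\psi_x} of such
penalties yields a transductive learner.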

This paper is joint work with Sky Jafar and Julian Asilis, and appeared in
COLT 2025.

*Host:* *Avrim Blum* <avrim at ttic.edu>



Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue, Rm 517*
*Chicago, IL  60637*
*773-834-1757*
*mmarre at ttic.edu <mmarre at ttic.edu>*

