[Theory] [Theory Lunch] Yuwei Cheng

Alec Sun via Theory theory at mailman.cs.uchicago.edu
Tue May 13 00:07:53 CDT 2025


Please join us for the eighth theory lunch of the quarter tomorrow!

*Time: *Wednesday, May 14, 2025, 12pm-1pm in JCL 390

*Speaker:* Yuwei Cheng

*Title: *Personalized Ad Impact with Contextual Markov Decision Processes:
Long-Term Poisson Rewards and Near-Optimal Bidding Algorithms

*Abstract: *Online advertising platforms use automated auctions to connect
advertisers with potential customers, requiring effective bidding
strategies to maximize profits. Accurate ad impact estimation requires
considering three key factors: delayed and long-term effects, cumulative ad
impacts such as reinforcement or fatigue, and customer heterogeneity.
However, these effects are often not jointly addressed in previous studies.
To capture these factors, we model ad bidding as a Contextual Markov
Decision Process (CMDP) with long-term Poisson rewards. For efficient
estimation, we propose a two-stage maximum likelihood estimator combined
with data-splitting strategies, ensuring controlled estimation error based
on the first-stage estimator's (in)accuracy. Building on this, we design a
reinforcement learning algorithm to derive efficient personalized bidding
strategies. This approach achieves a near-optimal regret bound of
$\tilde{\mathcal{O}}(dH^2\sqrt{T})$, where $d$ is the contextual dimension,
$H$ is the number of rounds, and $T$ is the number of customers.

On Tue, May 6, 2025 at 11:23 AM Alec Sun <alecsun at uchicago.edu> wrote:

> Please join us for the seventh theory lunch of the quarter tomorrow!
>
> *Time: *Wednesday, May 7, 2025, 12pm-1pm in JCL 390
>
> *Speaker:* Mohammad Reza Aminian
>
> *Title: *Stationary Online Contention Resolution Schemes
>
> *Abstract: *[See attachment]
>
> On Mon, Apr 28, 2025 at 4:36 PM Alec Sun <alecsun at uchicago.edu> wrote:
>
>> Please join us for the sixth theory lunch of the quarter tomorrow. Note
>> the change in day of the week, time, and location just for this week!
>>
>> *Time: *Tuesday, April 29, 2025, 12:30pm-1:30pm in JCL 298
>>
>> *Speaker: *Aditya Prasad
>>
>> *Title:* Supermodular Combinatorial Contracts
>>
>> *Abstract: *We present combinatorial contracts in the single-agent
>> combinatorial action model of [Duetting, Ezra, Feldman, Kesselheim, FOCS
>> 2021] and the multi-agent model of [Duetting, Ezra, Feldman, Kesselheim,
>> STOC 2023]. For the single-agent combinatorial action setting, we give a
>> poly-time algorithm for finding the optimal contract, provided that the
>> agent’s demand problem can be solved efficiently and there are poly-many
>> breakpoints in the agent’s demand. This implies an efficient algorithm for
>> supermodular rewards. For the multi-agent setting, our focus is on settings
>> with supermodular rewards. We give an additive PTAS for a natural class of
>> graph-based rewards when the agents have identical costs, and show that
>> even in this special case it’s NP-hard to obtain any finite multiplicative
>> approximation, or an additive FPTAS.
>>
>> *Note: *Theory lunch will return to its normal time on Wednesday 12pm
>> for the rest of the quarter.
>>
>> On Tue, Apr 22, 2025 at 12:51 PM Alec Sun <alecsun at uchicago.edu> wrote:
>>
>>> Please join us for the fifth theory lunch of the quarter tomorrow!
>>>
>>> *Time: *Wednesday, April 23, 2025, 12pm-1pm in JCL 390
>>>
>>> *Speaker:* Yifan Wu
>>>
>>> *Title: *Calibration Error for Decision Making
>>>
>>> *Abstract*: Calibration allows predictions to be reliably interpreted
>>> as probabilities by decision makers. We propose a decision-theoretic
>>> calibration error, the Calibration Decision Loss (CDL), defined as the
>>> maximum improvement in decision payoff obtained by calibrating the
>>> predictions, where the maximum is over all payoff-bounded decision tasks.
>>> Vanishing CDL guarantees the payoff loss from miscalibration vanishes
>>> simultaneously for all downstream decision tasks. We show separations
>>> between CDL and existing calibration error metrics, including the most
>>> well-studied metric Expected Calibration Error (ECE). Our main technical
>>> contribution is a new efficient algorithm for online calibration that
>>> achieves near-optimal $O(\log T / \sqrt{T})$ expected CDL, bypassing the
>>> Ω(T^{−0.472}) lower bound for ECE by Qiao and Valiant (2021).
>>>
>>> *Note: *Next week JCL 390 and 298 are used for other events on
>>> Wednesday, so theory lunch will happen on *Tuesday, April 29, 2025,
>>> 12pm-1pm in JCL 298*.
>>>
>>> On Wed, Apr 16, 2025 at 11:15 AM Alec Sun <alecsun at uchicago.edu> wrote:
>>>
>>>> Please join us for the fourth theory lunch of the quarter today!
>>>>
>>>> *Time: *Wednesday, April 16, 2025, 12pm-1pm in JCL 390
>>>>
>>>> *Speaker:* Andrzej Kaczmarczyk
>>>>
>>>> *Title: *Learning Real-Life Approval Elections
>>>>
>>>> *Abstract*: We study how to learn an approval election, i.e., an
>>>> election in which each voter selects which candidates they approve.
>>>> Specifically, we focus on the independent approval model (IAM), where each
>>>> candidate has its own approval probability and is approved independently of
>>>> the other ones. We propose algorithms for learning IAMs and their mixtures
>>>> from data, using either maximum likelihood estimation or Bayesian learning.
>>>> We then apply these algorithms to a large set of real-life elections. In
>>>> particular, we find that single-component models are rarely sufficient to
>>>> capture the complexity of real-life data, whereas their mixtures perform
>>>> well.
>>>>
>>>>> On Tue, Apr 8, 2025 at 1:05 PM Alec Sun <alecsun at uchicago.edu> wrote:
>>>>>
>>>>>> Please join us for the third theory lunch of the quarter tomorrow!
>>>>>>
>>>>>> *Time: *Wednesday, April 9, 2025, 12pm-1pm in JCL 390
>>>>>>
>>>>>> *Speaker: *Subhodh Kotekal
>>>>>>
>>>>>> *Title: *Variance estimation in compound decision theory under
>>>>>> boundedness
>>>>>>
>>>>>> *Abstract*: The normal means model is often studied under the
>>>>>> assumption of a known variance. However, ignorance of the variance is a
>>>>>> frequent issue in applications and basic theoretical questions still remain
>>>>>> open in this setting. This article establishes that the sharp minimax rate
>>>>>> of variance estimation in squared error is $(\log\log n/\log n)^2$ under
>>>>>> arguably the most mild assumption imposed for identifiability: bounded
>>>>>> means. The rate-optimal estimator proposed in this article achieves the
>>>>>> optimal rate by estimating $O\left(\log n/\log\log n\right)$ cumulants and
>>>>>> leveraging a variational representation of the noise variance in terms of
>>>>>> the cumulants of the data distribution. The minimax lower bound involves a
>>>>>> moment matching construction.
>>>>>>
>>>>>> On Tue, Apr 1, 2025 at 1:38 PM Alec Sun <alecsun at uchicago.edu> wrote:
>>>>>>
>>>>>>> Please join us for the second theory lunch of the quarter tomorrow!
>>>>>>>
>>>>>>> *Time: *Wednesday, April 2, 2025, 12pm-1pm in JCL 390
>>>>>>>
>>>>>>> *Speaker: *Dravy Sharma
>>>>>>>
>>>>>>> *Title: *Provable tuning of deep learning model hyperparameters
>>>>>>>
>>>>>>> *Abstract*: Modern machine learning algorithms, especially deep
>>>>>>> learning-based techniques, typically involve careful hyperparameter tuning
>>>>>>> to achieve the best performance. Despite the surge of intense interest in
>>>>>>> practical techniques like Bayesian optimization and random search-based
>>>>>>> approaches to automating this laborious and compute-intensive task, the
>>>>>>> fundamental learning-theoretic complexity of tuning hyperparameters for
>>>>>>> deep neural networks is poorly understood. Inspired by this glaring gap, we
>>>>>>> initiate the formal study of hyperparameter tuning complexity in deep
>>>>>>> learning under a powerful data-driven paradigm. A major difficulty is that
>>>>>>> the utility function as a function of the hyperparameter is very volatile
>>>>>>> and furthermore, it is given implicitly by an optimization problem over the
>>>>>>> model parameters. To tackle this challenge, we employ subtle concepts from
>>>>>>> differential/algebraic geometry and constrained optimization to show that
>>>>>>> the learning-theoretic complexity of the corresponding family of utility
>>>>>>> functions is bounded. We instantiate our results and provide sample
>>>>>>> complexity bounds for concrete applications—tuning a hyperparameter that
>>>>>>> interpolates neural activation functions and setting the kernel parameter
>>>>>>> in graph neural networks. The talk is based on joint work with Nina Balcan
>>>>>>> and Anh Nguyen.
>>>>>>>
>>>>>>> On Tue, Mar 25, 2025, 4:52 PM Alec Sun <alecsun at uchicago.edu> wrote:
>>>>>>>
>>>>>>>> Please join us for the first theory lunch of the quarter tomorrow!
>>>>>>>>
>>>>>>>> *Time: *Wednesday, March 26, 2025, 12pm-1pm in JCL 390
>>>>>>>>
>>>>>>>> *Speaker: *Olga Medrano
>>>>>>>>
>>>>>>>> *Title: *Short Visit to Regularity Lemmas
>>>>>>>>
>>>>>>>> *Abstract: *In this expository talk, we will overview Szemerédi’s
>>>>>>>> regularity lemma, a result from extremal graph theory with different
>>>>>>>> applications. We will then mention some considerations about this result,
>>>>>>>> including the existence of irregular pairs in the partitions obtained, as
>>>>>>>> well as the large size of those partitions. Time permitting, we very
>>>>>>>> briefly reflect on how the proof of Szemerédi's regularity lemma is not
>>>>>>>> algorithmic and mention a few lines of work that were focused on finding
>>>>>>>> algorithms to output regular partitions. In the second half of this talk,
>>>>>>>> we will describe versions of this lemma over certain classes of graphs. In
>>>>>>>> particular, we state both the Ultra-strong regularity lemma, which works
>>>>>>>> for the class of graphs of bounded VC dimension, and the Stable regularity
>>>>>>>> lemma, which works for the class of k-edge stable graphs (namely, those not
>>>>>>>> containing a bi-induced bipartite graph). We conclude by acknowledging that
>>>>>>>> algorithmic questions for both results remain open.
>>>>>>>>
>>>>>>>