Khashayar Gatmiry
Massachusetts Institute of Technology

[Embedded image: image001.png]

Thursday, November 7, 2024 at 3:30pm
Location: John Crerar Library 298

Title: Computing Optimal Regularizers for Online Linear Optimization

Abstract: Follow-the-Regularized-Leader (FTRL) algorithms are a popular class of learning algorithms for online linear optimization (OLO) that guarantee sub-linear regret, but the choice of regularizer can significantly impact dimension-dependent factors in the regret bound. We present an algorithm that takes as input convex and symmetric action sets and loss sets for a specific OLO instance, and outputs a regularizer such that running FTRL with this regularizer guarantees regret within a universal constant factor of the best possible regret bound. In particular, for any choice of (convex, symmetric) action set and loss set, we prove that there exists an instantiation of FTRL which achieves regret within a constant factor of the best possible learning algorithm, strengthening the universality result of Srebro et al., 2011.

Our algorithm requires preprocessing time and space exponential in the dimension d of the OLO instance, but can be run efficiently online assuming a membership and linear optimization oracle for the action and loss sets, respectively (and is fully polynomial time for the case of constant dimension d). We complement this with a lower bound showing that even deciding whether a given regularizer is α-strongly-convex with respect to a given norm is NP-hard.
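
For readers unfamiliar with FTRL, the sketch below illustrates the basic algorithm on a toy OLO instance; it is not the regularizer-computation procedure from the talk. It assumes the action set is the Euclidean unit ball and uses the standard squared-Euclidean regularizer; the function name ftrl_euclidean_ball, the step size eta, and the synthetic losses are illustrative choices, not from the abstract.

# A minimal, illustrative FTRL sketch for online linear optimization (OLO).
# Assumptions (not from the talk): the action set is the Euclidean unit ball,
# the regularizer is the squared Euclidean norm R(x) = ||x||^2 / (2*eta),
# and each loss vector satisfies ||l_t|| <= 1. The talk's algorithm instead
# *computes* a near-optimal regularizer for the given action and loss sets.
import numpy as np

def ftrl_euclidean_ball(losses, eta):
    """Run FTRL with the squared-Euclidean regularizer on the unit ball.

    At round t we play
        x_t = argmin_{||x|| <= 1}  sum_{s < t} <l_s, x> + ||x||^2 / (2*eta),
    which has the closed form: project -eta * (cumulative loss) onto the ball.
    Returns total loss incurred and regret against the best fixed action.
    """
    d = losses.shape[1]
    cum_loss = np.zeros(d)
    total = 0.0
    for l_t in losses:
        x_t = -eta * cum_loss
        norm = np.linalg.norm(x_t)
        if norm > 1.0:                      # project onto the unit ball
            x_t /= norm
        total += float(l_t @ x_t)           # incur the linear loss <l_t, x_t>
        cum_loss += l_t                     # l_t is revealed only after playing x_t
    # Best fixed action in hindsight: the unit vector opposite the cumulative loss.
    best_fixed = -np.linalg.norm(cum_loss)
    return total, total - best_fixed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, d = 1000, 5
    losses = rng.uniform(-1, 1, size=(T, d))
    losses /= np.maximum(1.0, np.linalg.norm(losses, axis=1, keepdims=True))
    total, regret = ftrl_euclidean_ball(losses, eta=1.0 / np.sqrt(T))
    print(f"total loss = {total:.2f}, regret = {regret:.2f}")

With eta on the order of 1/sqrt(T), this instantiation attains the familiar O(sqrt(T)) regret; the talk concerns choosing the regularizer itself so that the resulting (dimension-dependent) regret bound is within a universal constant factor of the best achievable for the given action and loss sets.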