Note non-standard day and location!

Departments of Mathematics & Computer Science
Combinatorics & Theory Seminar

Thursday, November 7, 3:30pm
Location: Crerar 298

SPEAKER: Khashayar Gatmiry (MIT)

TITLE: Computing Optimal Regularizers for Online Linear Optimization

ABSTRACT: Follow-the-Regularized-Leader (FTRL) algorithms are a popular class of learning algorithms for online linear optimization (OLO) that guarantee sub-linear regret, but the choice of regularizer can significantly impact dimension-dependent factors in the regret bound. We present an algorithm that takes as input convex and symmetric action and loss sets for a specific OLO instance and outputs a regularizer such that running FTRL with this regularizer guarantees regret within a universal constant factor of the best possible regret bound. In particular, for any choice of (convex, symmetric) action set and loss set, we prove that there exists an instantiation of FTRL that achieves regret within a constant factor of the best possible learning algorithm, strengthening the universality result of Srebro et al., 2011. Our algorithm requires preprocessing time and space exponential in the dimension d of the OLO instance, but it can be run efficiently online given a membership oracle for the action set and a linear optimization oracle for the loss set (and is fully polynomial time when the dimension d is constant). We complement this with a lower bound showing that even deciding whether a given regularizer is α-strongly-convex with respect to a given norm is NP-hard.
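
For attendees unfamiliar with FTRL, the following is a minimal sketch of the algorithm the abstract refers to. It is not the speaker's optimal-regularizer construction; the quadratic regularizer, Euclidean unit-ball action set, and step size are illustrative assumptions chosen only to show the FTRL update and the regret it is measured against.

import numpy as np

# Minimal FTRL sketch for online linear optimization (illustrative only;
# the regularizer R(x) = ||x||^2 / (2*eta) and the unit-ball action set
# are assumptions for this example, not the talk's construction).

def ftrl_unit_ball(losses, eta=0.1):
    """Play FTRL over the Euclidean unit ball against a sequence of
    linear losses; return the actions played and the total regret."""
    d = losses.shape[1]
    cum_loss = np.zeros(d)
    actions = []
    for g in losses:
        # FTRL step: x_t = argmin_{||x|| <= 1} <cum_loss, x> + ||x||^2/(2*eta),
        # i.e. the projection of -eta * cum_loss onto the unit ball.
        x = -eta * cum_loss
        norm = np.linalg.norm(x)
        if norm > 1:
            x /= norm
        actions.append(x)
        cum_loss += g  # losses are revealed only after the action is chosen
    # Regret against the best fixed action in hindsight, which for the unit
    # ball is the unit vector opposite the cumulative loss direction.
    total = sum(g @ x for g, x in zip(losses, actions))
    best_fixed = -np.linalg.norm(cum_loss)
    return np.array(actions), total - best_fixed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L = rng.normal(size=(1000, 5))        # random linear losses in R^5
    _, regret = ftrl_unit_ball(L, eta=0.05)
    print(f"regret over 1000 rounds: {regret:.2f}")

The talk concerns choosing the regularizer in this template: the dimension-dependent constant in the regret bound depends on how well the regularizer is matched to the action and loss sets.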