<div dir="ltr"><div dir="ltr"><br></div><div dir="ltr"><div dir="ltr"><br><div class="gmail_quote"><div dir="ltr"><span id="gmail-m_-840910721687754024gmail-m_-4770585944477743m_-947167383461397716gmail-m_9168911489282912842m_7082043948795332671gmail-m_3918681043343225718m_7776154040874388940gmail-docs-internal-guid-d34d626a-7fff-fa18-31b7-15dad8158c9a"><p style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;line-height:normal;margin:0px"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>When:</b>    </font></font><font style="vertical-align:inherit"><font style="vertical-align:inherit">  Monday, January 27th at 11:00 am</font></font></font></p></span><br><span id="gmail-m_-840910721687754024gmail-m_-4770585944477743m_-947167383461397716gmail-m_9168911489282912842m_7082043948795332671gmail-m_3918681043343225718m_7776154040874388940gmail-docs-internal-guid-d34d626a-7fff-fa18-31b7-15dad8158c9a"></span><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>Where:</b>     </font></font><font style="vertical-align:inherit"><font style="vertical-align:inherit">TTIC, 6045 S. Kenwood Avenue, 5th Floor, Room 526</font></font></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"> </font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;text-align:justify;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>Who: </b>        </font></font></font>Subhransu Maji, University Of Massachusetts Amherst<br></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><br></span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Title:</span><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> Task-specific Recognition by Modeling Visual Tasks and their Relations</span></p><span id="gmail-m_-840910721687754024gmail-m_-4770585944477743m_-947167383461397716gmail-m_9168911489282912842m_7082043948795332671gmail-m_3918681043343225718m_7776154040874388940gmail-docs-internal-guid-d34d626a-7fff-fa18-31b7-15dad8158c9a"><br><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span 
style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Abstract: </span><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">The AI revolution powered in part by advances in deep learning has led to many successes in the last decade. Among others, it has enabled us to study various ecological phenomena by analyzing data from weather RADAR networks at an unprecedented scale (e.g., [1, 2]). Yet the vast majority of important applications remain beyond the scope of current AI systems. One of the barriers is that existing algorithms lack the ability to learn from limited training data. This is a fundamental challenge because most real-world data is heavy-tailed and supervision is hard to acquire. I argue that a principled framework for reasoning about AI problems can enable modular and data-efficient solutions. Towards this end I will describe our framework for modeling computer vision tasks and relations between them (e.g., [3, 4]). Our approach called "task2vec" computes a vector representation of a task using the Fisher information of a generic “probe” network. We show that the distance between these vectors correlates with natural metrics between the domains and labels of tasks. It is also predictive of transfer, i.e., how much does training a deep network on one task benefit another, and can be used for model recommendation. On a portfolio of hundreds of vision tasks the recommended network outperforms the current gold standard of fine-tuning an ImageNet pre-trained network. I’ll conclude with some of the life-cycle challenges that we need to address to make AI systems widely applicable.</span></p><br><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">References:</span></p><ol style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="list-style-type:decimal;font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Kyle G. Horton, Frank A. La Sorte, Daniel Sheldon, Tsung-Yu Lin, Kevin Winner, Garrett Bernstein, Subhransu <span>Maji</span>, Wesley M. Hochachka, and Andrew Farnsworth. 
</span><a href="https://www.nature.com/articles/s41558-019-0648-9" style="text-decoration-line:none" target="_blank"><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">Phenology of nocturnal avian migration has shifted at the continental scale</span></a><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">, Nature Climate Change, Dec 2019</span></p></li><li dir="ltr" style="list-style-type:decimal;font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Tsung‐Yu Lin, Kevin Winner, Garrett Bernstein, Abhay Mittal, Adriaan M. Dokter, Kyle G. Horton, Cecilia Nilsson, Benjamin M. Van Doren, Andrew Farnsworth, Frank A. La Sorte, Subhransu <span>Maji</span> and Daniel Sheldon.</span><a href="https://besjournals.onlinelibrary.wiley.com/doi/abs/10.1111/2041-210X.13280" style="text-decoration-line:none" target="_blank"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap"> MistNet: Measuring historical bird migration in the US using archived weather radar data and convolutional neural networks</span></a><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">, Methods in Ecology and Evolution 10(11):1908-1922, Aug 2019</span></p></li><li dir="ltr" style="list-style-type:decimal;font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu <span>Maji</span>, Charless Fowlkes, Stefano Soatto, Pietro Perona,</span><a href="https://arxiv.org/abs/1902.03545" style="text-decoration-line:none" target="_blank"><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">Task2Vec: Task Embedding for Meta-Learning</span></a><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">, ICCV 2019</span></p></li><li dir="ltr" 
style="list-style-type:decimal;font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Jong-Chyi Su, Subhransu <span>Maji</span>, Bharath Hariharan,</span><a href="https://arxiv.org/abs/1910.03560" style="text-decoration-line:none" target="_blank"><span style="font-size:11pt;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">When Does Self-supervision Improve Few-shot Learning?</span></a><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> arXiv:1910.03560, Oct 2019</span></p></li></ol><br></span><div><span id="gmail-m_-840910721687754024gmail-m_-4770585944477743m_-947167383461397716gmail-m_9168911489282912842m_7082043948795332671gmail-m_3918681043343225718m_7776154040874388940gmail-docs-internal-guid-d34d626a-7fff-fa18-31b7-15dad8158c9a"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Bio:</span><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> I am an Assistant Professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst where I co-direct the Computer Vision Lab. I am also affiliated with the Center of Data Science and AWS AI. Prior to this I spent three years as a Research Assistant Professor at TTI Chicago, a philanthropically endowed academic institute in the University of Chicago campus. I obtained my Ph.D. in Computer Science from the University of California at Berkeley in 2011 and B.Tech. in Computer Science and Engineering from IIT Kanpur in 2006. For my work, I have received a Google graduate fellowship, NSF CAREER Award (2018), and a best paper honorable mention at CVPR 2018. I also serve on the editorial board of the International Journal of Computer Vision (IJCV). </span></span></div>

<div><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><font face="arial, helvetica, sans-serif"><b><br></b></font></div><div dir="ltr"><font face="arial, helvetica, sans-serif"><b><br></b></font></div><div dir="ltr"><font face="arial, helvetica, sans-serif"><b>Jerome Allen</b><br></font><div><font face="arial, helvetica, sans-serif">Executive Assistant<br></font></div><div><font face="arial, helvetica, sans-serif" color="#3d85c6"><b>Toyota Technological Institute</b></font></div><div><font face="arial, helvetica, sans-serif" color="#3d85c6">6045 S. Kenwood Avenue</font></div><div><font face="arial, helvetica, sans-serif" color="#3d85c6">Room 518</font></div><div><font face="arial, helvetica, sans-serif" color="#3d85c6">Chicago, IL  60637</font></div><div><font face="arial, helvetica, sans-serif">p:(773) 702-2311<br></font></div><div><i><b><a href="mailto:jallen@ttic.edu" target="_blank"><font face="arial, helvetica, sans-serif">jallen@ttic.edu</font></a></b></i></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div>