[Theory] NOW: 11/11 Thesis Defense: Naren Sarayu Manoj, TTIC
Mary Marre via Theory
theory at mailman.cs.uchicago.edu
Mon Nov 11 09:25:00 CST 2024
*When*: Monday, November 11th from *9:30 - 10:30 am CT*
*Where*: Talk will be given *live, in-person* at
TTIC, 6045 S. Kenwood Avenue
5th Floor, *Room 530*
*Virtually*: via *Zoom*
<https://uchicago.zoom.us/j/98071793141?pwd=EZ9xJtPzE8kkxUbXRZ0sqqRfzaCh7y.1>
*Who:* Naren Sarayu Manoj, TTIC
*Title:* High-dimensional phenomena in graph clustering and linear
regression
*Abstract:* Graph clustering and linear regression are foundational machine
learning primitives. In addition to being ubiquitously used in practice,
these problems are useful testbeds with which to evaluate new statistical
and algorithmic ideas. In this talk, we will look at variants of these
classical problems that demand new technical tools.
In the first part, we will analyze the behavior of popular graph clustering
algorithms in "helpfully" misspecified stochastic block models. This input
model will help us understand whether algorithms used in practice have
overfit to distributional assumptions on their input.
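For readers unfamiliar with the setup, the sketch below illustrates the standard (well-specified) two-community stochastic block model and the classical spectral-clustering baseline; it is a toy illustration only, not the misspecified model or the algorithms analyzed in the talk.

```python
import numpy as np

def sample_sbm(n, p, q, rng):
    """Sample a two-community stochastic block model on 2n vertices.

    Vertices 0..n-1 form community A and n..2n-1 form community B.
    Within-community edges appear with probability p, cross edges with q.
    """
    labels = np.repeat([0, 1], n)
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p, q)
    upper = np.triu(rng.random((2 * n, 2 * n)) < probs, k=1)
    return (upper | upper.T).astype(int), labels

rng = np.random.default_rng(0)
A, labels = sample_sbm(100, 0.5, 0.05, rng)

# Spectral clustering baseline: the sign pattern of the eigenvector for the
# second-largest adjacency eigenvalue separates the two communities when
# p - q is large relative to the noise.
eigvals, eigvecs = np.linalg.eigh(A)
guess = (eigvecs[:, -2] > 0).astype(int)
accuracy = max(np.mean(guess == labels), np.mean(guess != labels))
```

The talk's "helpfully" misspecified variants perturb this generative model in ways that should only make recovery easier, and ask whether practical algorithms remain robust.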
In the second part, we will describe a family of geometric constructions
that enables both improved sparsification results and a new algorithm for
socially fair least squares regression. In the sparsification problem, we
are interested in finding a small weighted subset of the terms of an
objective function that approximates the full objective well on all inputs.
In the socially fair least squares regression problem, our goal is to find
a parameter vector that achieves uniformly small squared error across
several subproblems, each corresponding to data from a predefined
subpopulation.
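To make the min-max objective concrete, here is a toy sketch of socially fair least squares via plain subgradient descent on the worst group's mean squared error. The synthetic groups and the subgradient method are illustrative assumptions; the thesis's geometric constructions and algorithm are not reproduced here.

```python
import numpy as np

def fair_least_squares(groups, steps=3000, lr=0.02):
    """Subgradient descent on  min_w  max_g  ||A_g w - b_g||^2 / n_g,
    i.e. make the worst-off group's mean squared error as small as possible.
    """
    d = groups[0][0].shape[1]
    w = np.zeros(d)
    for _ in range(steps):
        losses = [np.mean((A @ w - b) ** 2) for A, b in groups]
        g = int(np.argmax(losses))              # worst-off group
        A, b = groups[g]
        grad = 2 * A.T @ (A @ w - b) / len(b)   # subgradient of the max
        w -= lr * grad
    return w

# Two synthetic subpopulations with conflicting optimal parameters:
# pooled OLS caters to the large group and neglects the small one.
rng = np.random.default_rng(1)
A1 = rng.normal(size=(200, 3)); b1 = A1 @ np.array([1.0, 0.0, 0.0])
A2 = rng.normal(size=(20, 3));  b2 = A2 @ np.array([0.0, 1.0, 0.0])
groups = [(A1, b1), (A2, b2)]

w_fair = fair_least_squares(groups)
w_ols, *_ = np.linalg.lstsq(np.vstack([A1, A2]),
                            np.concatenate([b1, b2]), rcond=None)
worst = lambda w: max(np.mean((A @ w - b) ** 2) for A, b in groups)
```

In this toy example the fair solution lands between the two groups' optima, so its worst-group error is much smaller than that of pooled least squares, which is dominated by the larger group.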
More generally, my thesis explores applications of high-dimensional
probability and geometry to problems in statistical and computational
learning.
This thesis is based on joint work with Aditya Bhaskara, Avrim Blum, Meghal
Gupta, Agastya Vibhuti Jha, Gene Li, Michael Kapralov, Yury Makarychev,
Davide Mazzali, Max Ovsiankin, Kumar Kshitij Patel, Aadirupa Saha, Weronika
Wrosz-Kaminska, and Chloe Yang. The presentation is based on joint work
with Aditya Bhaskara, Agastya Vibhuti Jha, Michael Kapralov, Davide
Mazzali, Max Ovsiankin, Kumar Kshitij Patel, and Weronika Wrosz-Kaminska.
*Thesis Committee:* Avrim Blum (co-chair), Michael Kapralov, Sepideh
Mahabadi, Yury Makarychev (co-chair)
Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue, Rm 517*
*Chicago, IL 60637*
*773-834-1757*
*mmarre at ttic.edu*