[Colloquium] Reminder - Zhengxu Xia MS Presentation/Apr 15, 2022

Megan Woodward meganwoodward at uchicago.edu
Fri Apr 15 08:04:13 CDT 2022


This is an announcement of Zhengxu Xia's MS Presentation
===============================================
Candidate: Zhengxu Xia

Date: Friday, April 15, 2022

Time:  2 pm CDT

Remote Location:  https://uchicago.zoom.us/j/4302315616?pwd=ekoya3U3cFhNdFMxZTdLOCtRY1gyZz09

M.S. Paper Title: Automatic Curriculum Generation for Learning Adaptation in Networking

Abstract: As deep reinforcement learning (RL) showcases its strengths in networking and systems, its pitfalls also come to the public’s attention—when trained to handle a wide range of network workloads and previously unseen deployment environments, RL policies often manifest suboptimal performance and poor generalizability.
To tackle these problems, we present Genet, a new training framework for learning better RL-based network adaptation algorithms. Genet is built on the concept of curriculum learning, which has proved effective against similar issues in other domains where RL is extensively employed. At a high level, curriculum learning gradually presents more difficult environments to the training, rather than choosing them randomly, so that the current RL model can make meaningful progress in training. However, applying curriculum learning in networking is challenging because it remains unknown how to measure the “difficulty” of a network environment.
Instead of relying on handcrafted heuristics to determine the environment’s difficulty level, our insight is to utilize traditional rule-based (non-RL) baselines: If the current RL model performs significantly worse in a network environment than the baselines, then the model’s potential to improve when further trained in this environment is substantial. Therefore, Genet automatically searches for the environments where the current model falls significantly behind a traditional baseline scheme and iteratively promotes these environments as the training progresses. By evaluating Genet on three use cases—adaptive video streaming, congestion control, and load balancing—we show that Genet produces RL policies that outperform both regularly trained RL policies and traditional baselines in each context, not only under synthetic workloads but also in real environments.
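The selection rule described above—promote the environments where the RL policy trails the rule-based baseline the most—can be sketched as follows. This is a toy illustration, not Genet's actual implementation: the environment representation, reward functions, and function names are all hypothetical stand-ins.

```python
def curriculum_select(envs, rl_reward, baseline_reward, top_k=2):
    """Rank candidate environments by the reward gap
    (baseline reward minus RL reward) and return the top_k
    environments where the RL policy falls furthest behind."""
    gaps = [(baseline_reward(e) - rl_reward(e), e) for e in envs]
    gaps.sort(key=lambda pair: pair[0], reverse=True)
    # Only promote environments where the baseline actually wins.
    return [e for gap, e in gaps[:top_k] if gap > 0]

# Toy stand-ins: an "environment" is a single difficulty parameter,
# and both reward functions are arbitrary illustrative formulas.
def baseline_reward(env):
    return 10.0            # rule-based scheme: flat performance

def rl_reward(env):
    return 12.0 - env      # RL policy degrades as difficulty grows

envs = [0.5, 1.0, 3.0, 5.0]
promoted = curriculum_select(envs, rl_reward, baseline_reward, top_k=2)
print(promoted)  # → [5.0, 3.0]: the two largest baseline-vs-RL gaps
```

In the actual training loop, this selection step would run iteratively: train on the promoted environments, re-measure the gaps, and promote a new batch as the policy improves.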

Advisor: Junchen Jiang

Committee Members: Junchen Jiang, Heather Zheng, and Francis Y. Yan


-------------- next part --------------
A non-text attachment was scrubbed...
Name: genet.pdf
Type: application/pdf
Size: 2331734 bytes
Desc: genet.pdf
URL: <http://mailman.cs.uchicago.edu/pipermail/colloquium/attachments/20220415/16677ff3/attachment-0001.pdf>

