[Colloquium] Reminder - Yuhan Liu MS Presentation/May 29, 2024
Megan Woodward via Colloquium
colloquium at mailman.cs.uchicago.edu
Wed May 29 08:00:00 CDT 2024
This is an announcement of Yuhan Liu's MS Presentation
===============================================
Candidate: Yuhan Liu
Date: Wednesday, May 29, 2024
Time: 2:30 pm CT
Remote Location: https://uchicago.zoom.us/j/6603596916?pwd=Z1E5MDRWUSt2am5XbEt4dTFkNGx6QT09
Location: JCL 298
Title: KV Cache Compression and Streaming for Fast Language Model Serving
Abstract: As large language models (LLMs) take on complex tasks, their inputs are supplemented with longer contexts that incorporate domain knowledge or user-specific information. Yet using long contexts poses a challenge for responsive LLM systems, as nothing can be generated until the whole context is processed by the LLM. While the context-processing delay can be reduced by reusing the KV cache of a context across different inputs, fetching the KV cache, which contains large tensors, over the network can cause extra network delays.
CacheGen is a fast context-loading module for LLM systems. First, CacheGen uses a custom tensor encoder, which exploits the KV cache’s distributional properties, to encode a KV cache into more compact bitstream representations with negligible encoding/decoding overhead. This reduces the bandwidth needed to fetch the KV cache. Second, to maintain low context-loading delay and high generation quality, CacheGen adapts its streaming strategy to cope with changes in available bandwidth. When available bandwidth drops, CacheGen may raise the compression level for a part of the context or choose to recompute its KV cache on the fly. We test CacheGen on four popular LLMs of various sizes and four datasets (662 contexts in total). Compared to recent systems that reuse the KV cache, CacheGen reduces the KV cache size by 3.5-4.3x and the total delay in fetching and processing contexts by 3.2-3.7x, with negligible impact on LLM response quality as measured by accuracy or perplexity.
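To make the second idea concrete, here is a minimal sketch (not CacheGen's actual implementation; the function and parameter names are hypothetical) of the kind of bandwidth-adaptive decision the abstract describes: for each part of the context, pick the lowest compression level whose transfer still meets the delay budget, and fall back to recomputing the KV cache on the fly when even heavy compression is too slow.

```python
# Hypothetical sketch of bandwidth-adaptive KV-cache streaming,
# in the spirit of the strategy described in the abstract.
# All names here are illustrative, not CacheGen's API.

def choose_action(bandwidth_mbps, chunk_bits, deadline_s, recompute_s, levels):
    """Decide how to load one context chunk.

    levels: list of (level_id, size_ratio) pairs, sorted from least to
            most compression (size_ratio = encoded size / raw size).
    Returns ("stream", level_id) or ("recompute", None).
    """
    for level_id, size_ratio in levels:
        # Estimated time to fetch this chunk at the given compression level.
        transfer_s = chunk_bits * size_ratio / (bandwidth_mbps * 1e6)
        if transfer_s <= deadline_s:
            return ("stream", level_id)
    # No encoding fits the budget: recompute locally if that is faster,
    # otherwise stream at the highest compression level available.
    if recompute_s <= deadline_s:
        return ("recompute", None)
    return ("stream", levels[-1][0])
```

For example, an 800-megabit chunk over a 1 Gbps link fits a 1-second budget uncompressed, so the loader streams at the lowest compression level; over a 100 Mbps link no level fits, so it falls back to recomputation. The real system operates on per-layer tensor groups and measured bandwidth rather than a single scalar estimate.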
Advisors: Junchen Jiang and Shan Lu
Committee Members: Shan Lu, Junchen Jiang, and Arvind Krishnamurthy