[Colloquium] Maksim Levental Dissertation Defense/Apr 5, 2024

meganwoodward at uchicago.edu
Fri Mar 22 09:31:10 CDT 2024


This is an announcement of Maksim Levental's Dissertation Defense.
===============================================
Candidate: Maksim Levental

Date: Friday, April 05, 2024

Time: 10 am CT

Remote Location:  https://meet.google.com/qft-uvua-nur

Location: JCL 298

Title: An End-to-End Programming Model for AI Engine Architectures

Abstract: Coarse-Grained Reconfigurable Architectures (CGRAs) are emerging as a promising alternative to conventional computing architectures such as CPUs, GPUs, and FPGAs when energy efficiency and high performance are required. Like CPUs and GPUs, CGRAs have processing elements (PEs) that can perform complex operations, such as vectorized arithmetic, and like FPGAs they support a reconfigurable topology of components. By virtue of their coarser-grained reconfigurability, they are less challenging to program than FPGAs but nonetheless more challenging than CPUs and GPUs. This dissertation presents an end-to-end programming model for AMD AI Engine CGRAs that enables programming simultaneously at a high, end-user-focused level and at a very low, implementation-specific level, all in the same language and all in the same “flow”. Our programming model allows users to specify, implement, and test on-device, enabling, for example, productive design of dataflow programs for streaming applications. The programming model is entirely open source and includes a language frontend (a Python eDSL), an MLIR-based compiler, export paths to target codegen compilers, and runtime infrastructure. We show that our approach to language and compiler design enables users to program with much less friction and ceremony while preserving access to all features and device APIs necessary to achieve performance competitive with existing AI Engine programming models.

Advisors: Ian Foster and Kyle Chard

Committee Members: Ian Foster, Kyle Chard, and Stephen Neuendorffer
