[Colloquium] Shaw/Dissertation Defense/Jul 22, 2011

Margaret Jaffey margaret at cs.uchicago.edu
Fri Jul 8 09:45:36 CDT 2011



       Department of Computer Science/The University of Chicago

                     *** Dissertation Defense ***


Candidate:  Adam Shaw

Date:  Friday, July 22, 2011

Time:  10:00 AM

Place:  Ryerson 277

Title: Implementation Techniques for Nested Data-Parallel Languages

Abstract:
Nested data-parallel languages allow computation in parallel over
irregular nested data structures. The classic approach to compiling
nested data parallelism in high-level languages is to apply flattening
to nested structures. Flattening separates nested data and its shape
into distinct values: a flat data vector, and a representation of the
nesting information. In a parallel context, flattening is beneficial
because computation on flat data vectors maps easily onto parallel
hardware, and it is easier to partition work across processing
elements in flattened code.
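The separation of data and shape can be sketched concretely: a nested sequence becomes a flat data vector paired with a segment descriptor recording the nesting. The encoding below (per-segment lengths) is one common choice in nested data-parallel systems, used here purely for illustration; the function names are hypothetical and this is not necessarily the representation used in the dissertation.

```python
def flatten(nested):
    """Split a nested sequence into a flat data vector plus a
    segment descriptor recording each inner sequence's length."""
    data = [x for seg in nested for x in seg]
    segdes = [len(seg) for seg in nested]
    return data, segdes

def unflatten(data, segdes):
    """Recover the nested structure from the flat representation."""
    nested, i = [], 0
    for n in segdes:
        nested.append(data[i:i + n])
        i += n
    return nested

nested = [[1, 2], [3], [4, 5, 6]]
data, segdes = flatten(nested)
# data   == [1, 2, 3, 4, 5, 6]  (flat vector: easy to partition)
# segdes == [2, 1, 3]           (the nesting information)
assert unflatten(data, segdes) == nested
```

Because the flat vector is a single contiguous array, work can be split evenly across processing elements regardless of how irregular the original nesting was.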

Traditionally, flattening is a wholesale transformation that unravels
all nested data structures and correspondingly transforms the
operations on them. Such total flattening may not always yield the
best performance: sometimes we might want to flatten only partway, or
not at all. To accommodate such possibilities, we present hybrid flattening.
In hybrid flattening transformations, only certain structures are
flattened, and to varying degrees. This dissertation presents a formal
framework for defining hybrid flattening transformations.

We use our framework to define a novel flattening transformation on a
model programming language. Guided by our model, we implemented our
transformation in the compiler for Parallel ML, a nested data-parallel
language with implicitly-threaded features. Our implementation
demonstrates the utility of the transformation. Across various
benchmarks, transformed programs perform better than untransformed
ones, scale better, and compete favorably against efficient sequential
programs in C and SML. With our system, running PML programs on a
48-core machine yields as much as a thirtyfold improvement over their
sequential counterparts.

Adam's advisor is Prof. John Reppy

Log in to the Computer Science Department website for details,
including a draft copy of the dissertation:

 https://www.cs.uchicago.edu/phd/phd_announcements#adamshaw

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Margaret P. Jaffey            margaret at cs.uchicago.edu
Department of Computer Science
Student Support Rep (Ry 156)               (773) 702-6011
The University of Chicago      http://www.cs.uchicago.edu
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

