[Colloquium] Rainey/Dissertation Defense/Jul 14, 2010

Margaret Jaffey margaret at cs.uchicago.edu
Wed Jun 30 09:09:31 CDT 2010



       Department of Computer Science/The University of Chicago

                     *** Dissertation Defense ***


Candidate:  Michael Rainey

Date:  Wednesday, July 14, 2010

Time:  11:00 AM

Place:  Ryerson 277

Title: Effective scheduling techniques for high-level parallel
languages

Abstract:
In the not-so-distant past, parallel programming was mostly the
concern of programmers specializing in high-performance computing.
Today, however, many desktop and laptop computers come equipped
with a species of shared-memory multiprocessor called a multicore
processor, making parallel programming a concern for a much broader
range of programmers. Declarative languages, such
as Parallel ML (PML) and Haskell, seek to reduce the complexity of
programming multicore processors by providing programmers with
abstract execution models, such as implicit threading, where
programmers annotate their programs to suggest the parallel
decomposition. Implicitly-threaded programs, however, do not specify
the actual decomposition of computations or mapping from computations
to cores. The annotations act simply as hints that the language
implementation can ignore. The parallel decomposition itself is the
responsibility of the runtime system and, more specifically, of the
scheduling system.

Threads can take arbitrarily different amounts of time to execute, and
these times are difficult to predict in advance. Implicit threading
urges the programmer to divide the program into threads that are as
small as possible, because doing so increases the flexibility of the
scheduler in its task of distributing work evenly across processors. The
downside of such fine-grain parallelism is that the total scheduling
cost can be significant. It is quite possible that, for a given
thread, the scheduling costs outweigh the benefits of parallelism.
This problem is the focus of this dissertation.
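The granularity tradeoff described above can be illustrated with a
toy cost model (all numbers and the cost function are hypothetical
illustrations, not a model of Manticore's actual scheduler):

```python
# Toy cost model: each thread pays a fixed scheduling overhead, so
# very fine-grained decompositions can lose to sequential execution.
# All quantities are in abstract "work units" (hypothetical numbers).

def parallel_time(total_work, num_threads, num_cores, overhead_per_thread):
    """Idealized execution time: work divided evenly across cores,
    plus scheduling overhead paid once per thread."""
    compute = total_work / num_cores
    scheduling = num_threads * overhead_per_thread
    return compute + scheduling

# 1,000,000 units of work on 16 cores, 5 units of overhead per thread.
sequential = 1_000_000
coarse = parallel_time(1_000_000, num_threads=64,
                       num_cores=16, overhead_per_thread=5)      # 62820.0
fine = parallel_time(1_000_000, num_threads=500_000,
                     num_cores=16, overhead_per_thread=5)        # 2562500.0

# The coarse decomposition beats sequential execution, but the very
# fine one is dominated by scheduling cost and runs slower than
# sequential -- the problem this dissertation addresses.
```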

An effective runtime system is both scalable and robust. A runtime
system is scalable if performance, in terms of execution time,
improves in proportion to the number of processing elements. A runtime
system is robust when performance is consistently good under changing
conditions, e.g., a different input data set, number of processors,
or program structure. Robust runtime systems are predictable
across programs generally, not just those tuned for a particular set
of conditions.

In this dissertation, I present two new techniques, Lazy Promotion and
Lazy Tree Splitting, for increasing the effectiveness of runtime
systems. Lazy Promotion is a strategy that improves the performance of
a work-stealing scheduler by reducing the amount of load the scheduler
a work-stealing scheduler by reducing the amount of load the scheduler
places on the garbage collector. Lazy Tree Splitting is a technique
for automatically scheduling the execution of parallel operations over
trees to yield scalable performance and eliminate the need for
per-application tuning. I use Manticore, PML's compiler and runtime
system, and a 16-core NUMA machine as a testbed for these techniques.
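The demand-driven idea behind lazy splitting can be sketched as
follows. This is a toy illustration in Python, not Manticore's actual
algorithm or API; `other_idle` is a hypothetical stand-in for the
runtime's check that some other processor is hungry for work:

```python
from collections import deque

def process_lazy(items, other_idle):
    """Toy demand-driven (lazy) splitting: a worker walks its own work
    queue and gives away half of the remaining work only when another
    processor signals that it is idle. With eager splitting, by
    contrast, the work would be pre-divided into fixed chunks up front."""
    work = deque(items)
    done, donated = [], []
    while work:
        if other_idle() and len(work) > 1:
            half = len(work) // 2                  # split only on demand
            donated.extend(work.pop() for _ in range(half))
            continue
        done.append(work.popleft() * 2)            # stand-in for real per-item work
    return done, donated

# Simulate one idle signal arriving after two items are processed.
signals = iter([False, False, True])
done, donated = process_lazy(range(8), lambda: next(signals, False))
```

Because splitting happens only in response to demand, a run with no
idle processors pays no splitting cost at all, which is the property
that removes the need for per-application chunk-size tuning.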

In addition, I present two empirical studies. In the first study, I
evaluate Lazy Promotion over three representative PML benchmarks. The
results demonstrate that Lazy Promotion either outperforms or performs
the same as an alternative scheme based on Eager Promotion. This study
also evaluates the design of the Manticore runtime system, in
particular, the split-heap memory manager, by comparing the system to
an alternative system based on a unified-heap memory manager, and
showing that the unified version has limited scalability due to poor
locality. In the second study, I evaluate Lazy Tree Splitting over
seven PML benchmarks by comparing Lazy Tree Splitting to its
alternative, Eager Tree Splitting. The results show that, although the
two techniques offer similar scalability, only Lazy Tree Splitting is
suitable as a component of a robust system.

Michael's advisor is Prof. John Reppy

Login to the Computer Science Department website for details,
including a draft copy of the dissertation:

 https://www.cs.uchicago.edu/phd/phd_announcements#mrainey

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Margaret P. Jaffey            margaret at cs.uchicago.edu
Department of Computer Science
Student Support Rep (Ry 156)               (773) 702-6011
The University of Chicago      http://www.cs.uchicago.edu
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

