[Colloquium] Tomorrow: Rainey/MS Presentation/Dec. 7, 2006

Margaret Jaffey margaret at cs.uchicago.edu
Wed Dec 6 10:59:01 CST 2006


This is a reminder about Mike Rainey's MS Presentation tomorrow.  The  
abstract has been revised.

------------------
Date:  Thursday, December 7, 2006

Time:  1:30 p.m.

Place:  Ryerson 276

M.S. Candidate:  Mike Rainey

M.S. Paper Title:  The Manticore Runtime Model

Abstract:

Manticore is a new programming language that supports parallelism at  
multiple levels and at different granularities. It has a novel  
combination of data-parallel constructs to support fine-grained  
computations over sequences, and explicit threading to support  
concurrent systems programming and coarse-grained parallelism. The  
language also serves as a testbed for adding other parallel  
constructs such as futures with work stealing. Supporting this  
heterogeneous parallelism in an evolving language poses new problems  
for the runtime system.

Different parallel constructs have disparate demands for scheduling.  
Data-parallel computations, for instance, need a mechanism both to  
keep processors active and to throttle parallelism when it is  
overabundant, e.g., workcrews. On the other hand, threads need load  
balancing to encourage parallelism, e.g., work stealing. Threads also  
need timed preemption to simulate extra parallelism for GUI and  
network applications. When threads and data-parallel constructs  
coexist, the language must provide mechanisms for their scheduling  
policies to interact. For instance, suppose a thread launches a
data-parallel computation across several processors. Some of these
data-parallel jobs might be subject to timed preemption if they share
a processor with other threads. Since a heterogeneous parallel language
might incorporate several different parallel constructs, its compiler  
needs a general infrastructure that can both encode different  
scheduling policies, and let them coordinate.
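
To make the load-balancing mechanism concrete, here is a minimal
OCaml sketch of a work-stealing deque (the sequential setting and all
names are illustrative only, not Manticore's implementation): the
owning processor pushes and pops its most recently spawned tasks at
one end, while an idle processor steals the oldest task from the
other end.

(* A toy work-stealing deque, sketched sequentially for clarity.
   Real implementations use concurrent, often lock-free, deques. *)

type 'a deque = { mutable items : 'a list }

let create () = { items = [] }

(* Owner operations: the LIFO end keeps recently spawned tasks local. *)
let push d x = d.items <- x :: d.items

let pop d =
  match d.items with
  | [] -> None
  | x :: rest -> d.items <- rest; Some x

(* Thief operation: steal the oldest (typically largest) task. *)
let steal d =
  match List.rev d.items with
  | [] -> None
  | x :: rest -> d.items <- List.rev rest; Some x

let () =
  let d = create () in
  push d "task1"; push d "task2"; push d "task3";
  (match steal d with Some t -> Printf.printf "thief stole %s\n" t | None -> ());
  (match pop d with Some t -> Printf.printf "owner popped %s\n" t | None -> ())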

The main contribution of this research is a runtime model that serves  
as a foundation for heterogeneous parallel languages. The model is an  
interface between code produced by the compiler and the parallel  
hardware. It hooks into the compiler's intermediate language at a  
midpoint between the surface syntax and machine code. At this stage,  
parallel constructs are expanded into explicit operations that can  
map onto the multiple levels of parallel hardware, and coordinate  
under a unified framework.
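
The abstract does not spell out these explicit operations, so purely
as an illustration, the following OCaml sketch shows one shape such
an interface could take: a fiber is a suspended computation, and a
scheduler is a function from processor signals (a timer preemption or
the completion of the current fiber) to scheduling decisions. All
names here are hypothetical.

(* A hypothetical scheduling interface of the kind the model might
   expose to compiled code; names and types are illustrative. *)

type fiber = unit -> unit                (* a suspended computation *)

type signal =
  | Preempt of fiber   (* the timer fired while this fiber was running *)
  | Stop               (* the current fiber finished *)

type action = signal -> unit             (* a scheduler's policy *)

(* A toy "processor": a queue of ready fibers managed round-robin. *)
let ready : fiber Queue.t = Queue.create ()

let rec round_robin : action = function
  | Preempt k -> Queue.add k ready; dispatch ()
  | Stop -> dispatch ()

and dispatch () =
  match Queue.take_opt ready with
  | None -> ()                           (* nothing left to run *)
  | Some k -> k (); round_robin Stop     (* run a fiber, then reschedule *)

let () =
  Queue.add (fun () -> print_endline "fiber A") ready;
  Queue.add (fun () -> print_endline "fiber B") ready;
  (* pretend a timer interrupt just suspended a running fiber C *)
  round_robin (Preempt (fun () -> print_endline "fiber C"))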

The initial design of this model includes a set of hardware  
abstractions and basic operations that compilers can target. The  
hardware abstractions include both computational elements and signals for
timed preemption. Operations are built atop these abstractions, and  
form a scheduling infrastructure for both nested and heterogeneous  
schedulers. This allows different scheduling policies to coordinate  
and share hardware resources. Several schedulers, including variants  
for data-parallel arrays and threads, are developed to show the  
feasibility of the model.
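
As a rough picture of how nested policies might share a single
processor, the sketch below (again with hypothetical names, not the
paper's actual design) time-slices between an ordinary thread and a
child workcrew-style scheduler that processes a data-parallel job one
chunk per slice.

(* A hypothetical sketch of nested scheduling: a parent scheduler
   alternates between clients, standing in for timed preemption on a
   shared processor. *)

(* A client runs one slice and reports whether it has more work. *)
type client = { name : string; step : unit -> bool }

let make_thread name work =
  { name; step = (fun () -> print_endline work; false) }

(* The child "workcrew": each slice doubles one chunk of the array. *)
let make_workcrew name data chunk =
  let pos = ref 0 in
  let step () =
    let stop = min (Array.length data) (!pos + chunk) in
    for i = !pos to stop - 1 do
      data.(i) <- data.(i) * 2
    done;
    pos := stop;
    !pos < Array.length data            (* true = more chunks remain *)
  in
  { name; step }

(* The parent scheduler: round-robin, one slice per client per turn. *)
let rec parent = function
  | [] -> ()
  | c :: rest ->
    Printf.printf "slice for %s\n" c.name;
    if c.step () then parent (rest @ [c]) else parent rest

let () =
  let data = Array.init 6 (fun i -> i) in
  parent [ make_workcrew "crew" data 2; make_thread "gui" "handle event" ];
  Array.iter (Printf.printf "%d ") data;
  print_newline ()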

Advisor:  Prof. John Reppy

A draft copy of Mike Rainey's MS Paper is available in Ry 161A.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Margaret P. Jaffey                             margaret at cs.uchicago.edu
Department of Computer Science
Student Support Rep (Ry 161A)        (773) 702-6011
The University of Chicago                  http://www.cs.uchicago.edu
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



