[Colloquia] Dissertation Defense/Ernesto Gomez/July 11
Margaret Jaffey
margaret at cs.uchicago.edu
Wed Jun 27 10:24:48 CDT 2001
Department of Computer Science/The University of Chicago
1100 E. 58th Street, Ryerson Hall
DISSERTATION DEFENSE ANNOUNCEMENT
Candidate: Ernesto Gomez
Defense Date: Wednesday, July 11, 2001
Time: 9:00 a.m.
Location: Ry 251
Dissertation Title: Single Program Task Parallelism
Candidate's Advisor: Prof. L. Ridgway Scott
Abstract:
We are concerned here with procedural parallelism in parallel execution;
specifically, with the parallel programming and execution paradigm in which
multiple copies of a single program execute concurrently, but in which each
copy of the program may execute a different sequence of statements and
routines than the other copies. This parallel computing mode supports a very
general form of parallelism in which different code can act on different
data in each process.
The restriction to a single program text provides a shared context for all
processes, which can be exploited to ease the programming task, enhance
communication efficiency, and provide guarantees of determinism and freedom
from deadlock.
We introduce the abstraction of barrier communications to define the
semantics of interprocess communication, and show that it is sufficiently
general to include all likely forms of interprocess data transfer, and that
it is both deterministic and free from deadlock under the restriction that
the group of communicating processes is known.
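As an illustration only (this is a minimal sketch of the general idea, not
the dissertation's implementation), a barrier communication among a known
group can be modeled with threads: each member publishes its datum, all
synchronize at a barrier, and afterwards every member reads the complete
group data. The group size and the squared values are invented for the
example.

```python
import threading

# Sketch of a "barrier communication" among a known group, modeled with
# threads.  Every member deposits its datum, all synchronize at the
# barrier, then every member reads the whole group's data.
GROUP = 4
slots = [None] * GROUP            # one communication slot per member
barrier = threading.Barrier(GROUP)
results = [None] * GROUP

def member(rank):
    slots[rank] = rank * rank     # "send": publish this member's datum
    barrier.wait()                # logical synchronization point
    results[rank] = sum(slots)    # "receive": group data is now complete

threads = [threading.Thread(target=member, args=(r,)) for r in range(GROUP)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)   # every member saw the same complete data: [14, 14, 14, 14]
```

Because the group membership is fixed and known, every member reaches the
barrier exactly once, which is what rules out deadlock in this sketch.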
Previous work on communicating process groups during program execution has
generally been limited to providing explicit ways for programmers to declare
process groups. The theory of Pstreams is developed to identify process
groups that form implicitly as a result of program logic. It provides a
static analysis of the control flow graph to find the places in the code
where a process group splits into multiple groups of processes, and
identifies unique points in the program code where multiple process groups
can be said to merge into a single group. In addition, the theory of
Pstreams specifies the requirements for code at split and merge points to
support the runtime identification of the actual process groups during
execution.
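The split/merge idea can be sketched on a toy control flow graph (the graph,
its node names, and the simple predecessor/successor criterion below are
invented for illustration; the actual Pstream analysis is more involved):

```python
# Sketch: in a control flow graph of a single program, a conditional
# branch is a point where the process group may split (different
# processes take different arms), and a node with several predecessors
# is a point where the split groups merge again.
cfg = {
    "entry":  ["branch"],
    "branch": ["then", "else"],   # group may split here
    "then":   ["merge"],
    "else":   ["merge"],          # both arms rejoin
    "merge":  ["exit"],
    "exit":   [],
}

# Split points: nodes with more than one successor.
splits = [n for n, succs in cfg.items() if len(succs) > 1]

# Merge points: nodes with more than one predecessor.
preds = {n: [] for n in cfg}
for n, succs in cfg.items():
    for s in succs:
        preds[s].append(n)
merges = [n for n, ps in preds.items() if len(ps) > 1]

print(splits, merges)   # ['branch'] ['merge']
```

At runtime, code placed at "branch" and "merge" would then record which
processes actually took which arm, identifying the concrete groups.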
Barrier communications and Pstream splits and merges are all logically
synchronizing operations. We propose a novel variation on the concept of
overlapping to provide an efficient mechanism for minimizing the
synchronization costs of such an execution, by communicating
opportunistically in the intervals between the definition and use of
communicated variables. This requires asynchronous, out-of-order
communication.
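The definition-to-use interval can be sketched as follows (a toy model
using a thread-safe queue, not the dissertation's mechanism; the variable,
the sleep times, and the channel are all invented):

```python
import threading
import queue
import time

# Sketch: overlap communication with computation by sending a variable
# as soon as it is defined and receiving it only just before it is used.
channel = queue.Queue()

def producer():
    x = 42                     # definition of the communicated variable
    channel.put(x)             # send immediately after definition ...
    time.sleep(0.01)           # ... then keep computing (overlapped work)

def consumer(out):
    time.sleep(0.01)           # unrelated local work proceeds meanwhile
    out.append(channel.get())  # receive only at the point of use

out = []
p = threading.Thread(target=producer)
c = threading.Thread(target=consumer, args=(out,))
p.start(); c.start()
p.join(); c.join()
print(out)   # [42]
```

The send and the receive are decoupled in time, so communication latency
is hidden behind whatever work fills the interval.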
To support such communications efficiently and guarantee their correctness,
we define finite state machines that use semantic information from the
program and state information from the processes in a communicating group
to control each communication.
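A state machine of this kind might look like the following toy sketch
(the states, events, and transition table are invented for illustration;
the dissertation's machines draw on actual program semantics and remote
process state):

```python
# Sketch: a tiny finite state machine controlling one asynchronous
# communication, driven by events from the program and the group.
TRANSITIONS = {
    ("idle",      "defined"):  "ready",      # variable has been assigned
    ("ready",     "sent"):     "in_flight",  # opportunistic send issued
    ("in_flight", "received"): "complete",   # receiver consumed the value
}

def step(state, event):
    # Invalid events leave the machine where it is.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["defined", "sent", "received"]:
    state = step(state, event)
print(state)   # complete
```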
Asynchronous execution further provides opportunities for super-scalar
speedup by exploiting information available in parallel to control the
execution. Such information would not be available at the same time in a
serial execution; therefore, in some cases a serial execution performs
extra work that would later be known to be unneeded. We investigate
short-cutting as a way to use information in this way during a parallel
execution.
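One way to picture short-cutting (a toy model with invented data and
search target, not the dissertation's technique): several workers search
in parallel, and as soon as one finds a hit the others skip work that is
now known to be unnecessary.

```python
import threading

# Sketch of short-cutting: a flag set by one process lets the others
# abandon work that the parallel result shows to be unneeded.
found = threading.Event()
hits = []

def search(chunk):
    for x in chunk:
        if found.is_set():   # information from another process:
            return           # short-cut the remaining work
        if x == 7:           # the (invented) search target
            hits.append(x)
            found.set()

chunks = [range(0, 5), range(5, 10), range(10, 15)]
threads = [threading.Thread(target=search, args=(c,)) for c in chunks]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(hits)   # [7]
```

A serial execution would scan the chunks one after another, doing work a
parallel execution can discover to be unnecessary.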
Finally, we describe the SOS function library, which provides runtime
support for overlapping, Pstreams, and short-cutting, and give experimental
results from the application of these techniques to an example scientific
program.
-----
A paper copy of Mr. Gomez's dissertation is available for viewing
purposes in Ry 161A.
Everyone is welcome to attend Mr. Gomez's defense.
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Margaret P. Jaffey margaret at cs.uchicago.edu
Department of Computer Science
Student Support Rep (Ry 161A) (773) 702-6011
The University of Chicago http://www.cs.uchicago.edu
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=