TUTORIAL PROPOSAL



How to Think Algorithmically in Parallel?
Full-day Tutorial in Conjunction with the 21st ACM International Conference on
Supercomputing (ICS), Seattle, WA, Sunday, June 17, 2007
Background The abstract model used in standard undergraduate courses on
algorithms and data structures for serial computing is known as the RAM, for
random-access machine, or model. What should be the proper counterpart for
parallel computing?
Following a fierce “battle of ideas” during most of the 1980s regarding this
very question, a clear winner emerged: the parallel random-access machine, or
model, widely known as the PRAM (pronounced pee-RAM). The PRAM has been the
model of choice for the major theory-related communities. During 1988-90, a
substantial PRAM chapter was included in at least three then-popular standard
algorithms textbooks: Baase-88, Cormen-Leiserson-Rivest-90 (1st Edition), and
Manber-89. In fact, in terms of magnitude and breadth the PRAM has no serious
contender: it emerged as the clear winner of a natural selection process that
took place during the 1980s.
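To make the model concrete, consider the textbook balanced-tree summation, the
canonical introductory PRAM algorithm. The sketch below is ours, not tutorial
material: it is plain C that simulates each synchronous PRAM round serially, so
the round structure that the unit-cost memory assumption makes possible is
visible.

```c
#include <assert.h>

/* Balanced-tree summation. On a PRAM, every iteration of the inner loop
 * is one processor's unit-cost step, and each pass of the outer loop is
 * one synchronous round, so the depth is O(log n). Here the rounds and
 * the steps within each round simply run serially. */
static int pram_style_sum(int *a, int n) {
    for (int stride = 1; stride < n; stride *= 2)       /* O(log n) rounds */
        for (int i = 0; i + stride < n; i += 2 * stride)
            a[i] += a[i + stride];   /* all pairs at once on a PRAM */
    return a[0];
}
```

On actual parallel hardware the inner loop's iterations would be distributed
across processors; the point of the model is that the programmer may reason as
if they all complete in a single step.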
The PRAM did not fare as well with the architecture communities in the early
1990s, when chip multiprocessing was not yet an option. Its key assumption,
that the latency of an arbitrary number of concurrent memory accesses is the
same as that of a single access, drew much criticism. The most relevant
criticism argued, correctly, that there was no way to get sufficient bandwidth
from multi-chip multiprocessors to allow the programmer to view the machine as
a PRAM.
The (relatively short) architecture part of the tutorial will
make the case that an on-chip parallel processor can
provide sufficient bandwidth between processors and
memories, and discuss the recent commitment to silicon of
an explicit multi-threaded (XMT) architecture guided by a
“PRAM-On-Chip” vision.
Target audience and what to expect In order to function in the new world of
multi-core computer systems, some programming for parallelism cannot be
avoided. Any conference participant who recognizes this should find it helpful
to understand the thinking process behind such programming. In fact, the short
history of parallel computing provides many examples of parallel architectures
that were built on the expectation that the thought process behind programming
them would emerge after the machines were deployed. Unfortunately, build-first,
figure-out-how-to-think/program-later has its limits.
A one-day tutorial is a bit short: it typically takes the better part of a
semester to bring graduate students to the point where they successfully think
in parallel. From that point on, producing one more parallel algorithm, or
program, becomes a matter of skill, much as with serial algorithms and
programming. In addition to the general goal of teaching how to think in
parallel, the tutorial seeks to bring participants to a point where they can:
(i) advance a non-trivial parallel algorithm into a program and run it on PRAM
silicon, and (ii) reason about the analytic run time of a PRAM algorithm, as
well as the practical implications of such analysis. The opportunity that the
tutorial will provide for hands-on experience could have a special appeal for
the general audience of the conference.
The approach emphasizes, to the extent possible, identification of parallelism
in existing “serial” algorithms, where the main issue is fitting new parallel
data structures, as well as a quest for new parallel algorithms where needed.
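Prefix sums are a standard illustration of this identification step: the
obvious serial loop carries a dependence from one iteration to the next, and
extracting parallelism means restructuring the computation into synchronous
rounds. The sketch below is our own hedged example in plain C, with the rounds
simulated serially; the function name is illustrative, not the course's.

```c
#include <assert.h>
#include <string.h>

/* Serial prefix sums (s[i] = s[i-1] + a[i]) carry a loop dependence.
 * The doubling rewrite removes it: in the round with distance d, every
 * position i >= d adds the value at i - d, all positions at once.
 * After O(log n) rounds, a[i] holds the sum of the first i+1 inputs. */
static void prefix_sums_by_doubling(int *a, int n) {
    int tmp[n];                            /* C99 VLA: the round's snapshot */
    for (int d = 1; d < n; d *= 2) {
        memcpy(tmp, a, n * sizeof(int));   /* synchronous PRAM read phase */
        for (int i = d; i < n; i++)        /* write phase: one concurrent round */
            a[i] = tmp[i] + tmp[i - d];
    }
}
```

Note that this version performs O(n log n) work against the serial loop's O(n);
weighing such trade-offs is exactly the kind of analytic reasoning about run
time that the tutorial targets.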
Another unique feature of the approach is the very small fraction of the
instruction time devoted to programming, as opposed to reasoning about
algorithms. Programming is to be picked up from documents (e.g., a programming
tutorial and manual). This is a strength: (i) algorithm design is the idea part
of a program, where the real thinking takes place; (ii) programming (including
parallel programming), on the other hand, should be a skill that can be
acquired; in serial computing, for example, programming is typically not taught
beyond the freshman year, while algorithms and data structures are taught all
the way to graduate school.
Participants will be given the opportunity to develop code for a non-trivial
application and run it on the UMD FPGA-based PRAM-On-Chip 64-processor, 75MHz
computer as part of the tutorial.
In other words, the short time devoted to programming reflects the fact that
the approach provides an easy methodology for parallel programming. It should
be clear that easy parallel programming is the overriding objective of the
overall approach.
Where is this tutorial coming from? Material for this tutorial is taken from a
graduate course on Parallel Algorithms at the University of Maryland that is
cross-listed between computer science and electrical and computer engineering.
The course is somewhat unique in that it has its own computer, its own
compiler, and its own programming language. The tutorial will provide coherent
snapshots from this “UMD course planet”.
The core material will be an updated, shorter version of the course class
notes. Documents featuring a tutorial and a manual for a modest extension of C
(called XMT-C) used for programming will be provided.
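To give a flavor of what such programs look like, a data-parallel step in
XMT-C spawns one virtual thread per index. The XMT-C syntax in the comment
below is recalled from publicly available XMT materials and should be treated
as an assumption, not as the tutorial's definitive interface; the executable
part is a plain-C statement of what the step means.

```c
#include <assert.h>

/* An XMT-C data-parallel step looks roughly like (assumed syntax):
 *
 *     spawn(0, n - 1) {       // one virtual thread per value of $
 *         b[$] = a[$] + 1;    // $ is the virtual thread's index
 *     }
 *
 * Its meaning is a loop whose iterations are mutually independent;
 * the serial C equivalent below computes what each virtual thread would. */
static void increment_all(const int *a, int *b, int n) {
    for (int i = 0; i < n; i++)   /* each i: one virtual thread's work */
        b[i] = a[i] + 1;
}
```

Because the iterations are independent, the hardware is free to schedule the
virtual threads across however many processors are available, which is what
makes this style easy to program against.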
Organizer and presenter Uzi Vishkin
E-Mail: last name at umiacs.umd.edu. Home page:
http://www.umiacs.umd.edu/~vishkin/
Bio: Uzi Vishkin has been a permanent member of the University of Maryland
Institute for Advanced Computer Studies (UMIACS) and Professor of Electrical
and Computer Engineering since 1988. He received his DSc in Computer Science
from the Technion - Israel Institute of Technology and his MSc and BSc in
Mathematics from the Hebrew University, Jerusalem, Israel. He was Professor of
Computer Science at Tel Aviv University and the Technion, research faculty at
the Courant Institute, NYU, and a post-doc at IBM T.J. Watson. He is a Fellow
of the ACM and an ISI-Thomson Highly-Cited Researcher.