Computational Thinking, Carnegie Mellon
Sponsored by Microsoft Research

PROBEs/2008 Parallel Thinking Seminar Series

Schedule Fall 08

Wed 10-29-08 in
NSH 3305 at 10:30 AM

Andrew Chien
Vice President-Intel Research
Parallelism for the Masses: Opportunities and Challenges
PDF Presentation

Abstract

Parallel programming is a difficult challenge that has been the subject of research for decades. The evolution of hardware technology dictates that parallelism must be a critical element of nearly every computer program if applications are to scale up in performance with the hardware improvements driven by Moore's law. Quad-core and six-core systems are already available in mainstream volume client and server platforms, and higher core counts, increasing at Moore's-law cadence (2x every 2 years), are planned. It is critical that we make parallelism dramatically easier for all applications - on all platforms, from server to laptop to handheld mobile.
We will describe a selection of Intel's efforts in these areas (including Intel's Research Terascale chip - Polaris - with 80 cores), and outline major opportunities for research impact. The most important requirements for parallel software are forward scalability, higher-level programmability, and robustness. Of these, forward scalability is perhaps the most critical in order to fuel the software-hardware virtuous cycle that has so benefited technology and society, by providing applications that get faster on succeeding generations of hardware platforms.
Finally, the advent of pervasive parallelism suggests that major changes in computer science curricula are required - with parallelism as the central foundation, not at the edge. We will close with some speculation on the rate of progress of parallel programming into the mainstream software community and the implications of such proliferation.

Bio

Andrew Chien is vice president of the Corporate Technology Group and director of Research for Intel Corporation. Chien previously served as the Science Applications International Corporation Endowed Chair Professor in the department of computer science and engineering, and the founding director of the Center for Networked Systems at the University of California at San Diego. CNS is a university-industry alliance focused on developing technologies for robust, secure, and open networked systems.
For more than 20 years, Chien has been a global leader in research and development of high-performance computing systems. His expertise includes networking, Grids, high-performance clusters, distributed systems, computer architecture, high-speed routing networks, compilers, and object-oriented programming languages. He is a Fellow of the American Association for the Advancement of Science (AAAS), a Fellow of the Association for Computing Machinery (ACM), and a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), and has published over 130 technical papers. Chien serves on the Board of Directors of the Computing Research Association (CRA), the Advisory Board of the National Science Foundation's Computing and Information Science and Engineering (CISE) Directorate, and the Editorial Board of Communications of the ACM (CACM).

Thurs 11-6-08 in
NSH 3305, 3:00-4:00 PM,
with 30 min Q&A to follow

Michael Scott
University of Rochester-Computer Science Department
Transactional Memory: The Surprising Complexity of a Simple Idea
PDF Presentation


Abstract

With the proliferation of multi-core processors, there is an increasingly urgent need to simplify the creation of parallel programs. Many recent hopes have been pinned on the promise of transactional memory (TM). In a TM-capable language, the programmer labels sections of code that need to execute atomically, and the underlying implementation attempts to run the transactions of different threads in parallel whenever possible, presumably by means of speculation and rollback.
TM is intended to replace traditional lock-based synchronization. To a first approximation, it should combine the simplicity of a single global lock on atomic actions with the concurrency of many fine-grain locks. Unfortunately, recent work has revealed a host of semantic complications. Some of these arise from unanticipated programming idioms, some from attempts to improve the performance of candidate implementations, and others from combinations of the two. This talk will survey the state of the art and explore the prospects for a two-layered semantics, in which simple programs are easy to explain and more complicated issues arise only in program components written by concurrency experts.
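The speculate-validate-rollback cycle the abstract describes can be sketched in a few dozen lines. This is a minimal toy STM, not any real TM implementation or library; the names (TVar, atomically) and the commit-time validation scheme are illustrative assumptions chosen for brevity.

```python
# A toy software transactional memory (STM) sketch illustrating speculation
# and rollback. TVar and atomically are invented names, not a real API.
import threading

_commit_lock = threading.Lock()  # serializes commits only, not whole transactions

class TVar:
    """A transactional variable: a value plus a version number."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class _Tx:
    """Per-transaction state: what was read (and at which version) and
    what was written (buffered, invisible to other threads until commit)."""
    def __init__(self):
        self.reads = {}   # TVar -> version observed
        self.writes = {}  # TVar -> speculative new value

    def read(self, tvar):
        if tvar in self.writes:          # read-your-own-writes
            return self.writes[tvar]
        self.reads[tvar] = tvar.version  # record version for validation
        return tvar.value

    def write(self, tvar, value):
        self.writes[tvar] = value        # speculate: buffer the update

def atomically(fn):
    """Run fn(tx) as a transaction; roll back and retry on conflict."""
    while True:
        tx = _Tx()
        result = fn(tx)
        with _commit_lock:
            # Validate: every TVar we read is still at the version we saw.
            if all(tv.version == v for tv, v in tx.reads.items()):
                for tv, val in tx.writes.items():
                    tv.value = val
                    tv.version += 1
                return result
        # Validation failed: discard buffered writes and re-execute fn.

account = TVar(100)

def deposit(amount):
    atomically(lambda tx: tx.write(account, tx.read(account) + amount))

threads = [threading.Thread(target=deposit, args=(1,)) for _ in range(50)]
for t in threads: t.start()
for t in threads: t.join()
print(account.value)  # 150: no deposits lost despite concurrent updates
```

Note how the programmer only labels the atomic action; conflict detection, rollback, and retry happen in the runtime. The semantic complications the talk addresses (e.g., transactions that observe inconsistent state before their doomed validation) lurk even in a sketch this small.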

Bio

Michael L. Scott is a Professor and past Chair of the Department of Computer Science at the University of Rochester.  He received his Ph.D. from the University of Wisconsin-Madison in 1985.  His research interests span operating systems, languages, architecture, and tools, with a particular emphasis on parallel and distributed systems.  He is best known for work in synchronization algorithms and concurrent data structures, in recognition of which he shared the 2006 SIGACT/SIGOPS Edsger W. Dijkstra Prize.  Other widely cited work has addressed parallel operating systems and file systems, software distributed shared memory, and energy-conscious operating systems and microarchitecture.
His textbook on programming language design and implementation (Programming Language Pragmatics, second edition, Morgan Kaufmann, Nov. 2005) has become a standard in the field.  In 2003 he served as General Chair for SOSP; more recently he has been Program Chair for TRANSACT'07 and PPoPP'08.  In 2001 he received the University of Rochester's Robert and Pamela Goergen Award for Distinguished Achievement and Artistry in Undergraduate Teaching.

Wed 11-19-08 in
NSH 3305 at 3:00 PM

Guy Steele
Sun Microsystems
The Future Is Parallel: What's a Programmer to Do? Breaking Sequential Habits of Thought


Abstract

Parallelism is here, now, and in our faces. It used to be just the supercomputers and servers, but now multicore chips are in desktops and laptops, and general practitioners, not just specialists, need to get used to parallel programming.

The sequential algorithms and programming tricks that have served us so well for 50 years are the wrong way to think going forward.  In this talk we illustrate the divide-and-conquer strategy with some small, cute programs that represent the necessary future approach to program structure.
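The contrast the talk draws can be sketched in a few lines: a left-to-right accumulator loop versus a divide-and-conquer tree reduction. This is a hypothetical illustration of that program shape, not an example from the talk; the function names and the use of Python's concurrent.futures are assumptions made for the sketch.

```python
# Sequential accumulator vs. divide-and-conquer tree reduction.
from concurrent.futures import ThreadPoolExecutor

def sum_sequential(xs):
    # The sequential habit: every step depends on the previous one,
    # so the loop is one long dependence chain with no parallelism.
    total = 0
    for x in xs:
        total += x
    return total

def sum_tree(xs, pool, depth=2):
    # The divide-and-conquer shape: split in half, reduce each half,
    # combine. Because + is associative, the two halves are independent
    # and can execute concurrently. `depth` caps how many levels of the
    # tree actually fork, to avoid flooding the thread pool.
    if depth == 0 or len(xs) < 2:
        return sum(xs)
    mid = len(xs) // 2
    left = pool.submit(sum_tree, xs[:mid], pool, depth - 1)  # fork
    right = sum_tree(xs[mid:], pool, depth - 1)              # this thread
    return left.result() + right                             # join/combine

data = list(range(100_000))
with ThreadPoolExecutor() as pool:
    print(sum_tree(data, pool) == sum_sequential(data))  # True
```

The structural point is in the shape, not the speedup: the accumulator bakes in an order of evaluation, while the tree exposes independence that a runtime is free to exploit on however many cores it has.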

Bio

Guy L. Steele Jr. (Ph.D., MIT, 1980) is a Sun Fellow and heads the Programming Language Research group within Sun Microsystems Laboratories in Burlington, MA.  Before coming to Sun in 1994, he held positions at Carnegie-Mellon University, Tartan Laboratories, and Thinking Machines Corporation.  He is the author or co-author of several books on programming languages (Common Lisp, C, High Performance Fortran, the Java Language Specification). He has served on accredited standards committees for the programming languages Common Lisp, C, Fortran, Scheme, and ECMAScript. He designed the original EMACS command set and was the first person to port TeX.

He is a Fellow of the Association for Computing Machinery (1994) and has received the ACM Grace Murray Hopper Award (1988), a Gordon Bell Prize (1990), and the ACM SIGPLAN Programming Languages Achievement Award (1996).  He has been elected to the National Academy of Engineering of the United States of America (2001) and to the American Academy of Arts and Sciences (2002).

Steering Committee

Phil Gibbons (Intel), Uzi Vishkin (University of Maryland), Charles Leiserson (MIT)

Archive: Spring 2008 Seminar Series