Computational Thinking, Carnegie Mellon
Sponsored by
Microsoft Research

PROBEs/2008 Parallel Thinking Seminar Series
 

Schedule Spring 2008

8 April
CIC, 4th floor, Panther Hollow Room

3:00-4:00 PM,
with 30 min Q&A to follow

James Larus
Microsoft Research Labs
Spending Moore’s Dividend
 

Abstract

Thanks in large measure to Moore’s Law, CPU performance has increased 40-50% per year over the past three decades. The advent of multicore processors marks an end to sequential performance improvement and a radical shift to parallel programming. To understand the consequences of this change, it is worth looking back at where the previous, thousands-fold increase in computer performance was used, and looking forward to how software might accommodate this abrupt shift in the underlying computing platform.

Bio

James Larus is a Research Area Manager for programming languages and tools in Microsoft Research, where he manages the Human Interaction in Programming, Runtime Analysis and Design, Software Reliability Research, and Concurrency Research groups and co-leads the Singularity research project. Larus joined Microsoft Research as a Senior Researcher in 1998 to start and, for five years, lead the Software Productivity Tools (SPT) group, which developed and applied a variety of innovative techniques in static program analysis and constructed tools that found defects (bugs) in software. The group's research has had considerable impact on the research community and has shipped in Microsoft products such as the Static Driver Verifier and FxCop, as well as in other widely used internal software development tools. Before joining Microsoft, Larus was an Assistant and Associate Professor of Computer Science at the University of Wisconsin-Madison, where he published approximately 60 research papers and co-led the Wisconsin Wind Tunnel (WWT) research project with Professors Mark Hill and David Wood. WWT was a DARPA- and NSF-funded project that investigated new approaches to simulating, building, and programming parallel shared-memory computers. Larus’s research spanned a number of areas, including new and efficient techniques for measuring and recording executing programs’ behavior, tools for analyzing and manipulating compiled and linked programs, programming languages for parallel computing, tools for verifying program correctness, and techniques for compiler analysis and optimization. Larus received his MS and PhD in Computer Science from the University of California, Berkeley in 1989, and an AB in Applied Mathematics from Harvard in 1980. At Berkeley, Larus developed one of the first systems to analyze Lisp programs and determine how best to execute them on a parallel computer.
Larus has been an active contributor to the programming languages, compiler, and computer architecture communities. He has published many papers and served on numerous program committees and NSF and NRC panels. Larus became an ACM Fellow in 2006.

15 April
Newell-Simon Hall 3305

3:00-4:00 PM,
with 30 min Q&A to follow

Maurice Herlihy,
Brown University, Computer Science Dept.
Making Transactional Memory More Scalable

 

Abstract

While transactional memory promises to ease the task of programming emerging multicore architectures, questions remain concerning how well it scales to long transactions and many cores. In this talk, we identify two substantial limitations in the way current proposals handle synchronization and recovery. Synchronization is typically based on read/write conflicts: two transactions conflict if they access the same object (or location) and one access is a write. Recovery is (with some exceptions) typically all-or-nothing: a transaction either commits, and installs its changes, or aborts, and discards its changes. We argue that read-write synchronization and all-or-nothing recovery are not well suited to environments with long-lived transactions, substantial contention, or both.
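The read/write conflict rule the abstract describes can be made concrete over per-transaction read and write sets. The sketch below is an illustration of the rule, not code from the talk; the function name and set-based representation are assumptions made here:

```python
def conflicts(t1_reads, t1_writes, t2_reads, t2_writes):
    """Two transactions conflict if they access the same object
    (or location) and at least one of those accesses is a write."""
    return bool(t1_writes & (t2_reads | t2_writes)) or \
           bool(t2_writes & t1_reads)

# A read paired with a write on the same object conflicts...
print(conflicts({"x"}, set(), set(), {"x"}))  # True
# ...but two reads of the same object do not.
print(conflicts({"x"}, set(), {"x"}, set()))  # False
```

This is exactly the coarse rule the abstract argues against: any write to a shared object forces a conflict, regardless of whether the operations could semantically commute.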

We describe ongoing research on how transactional memory can be extended to alleviate these obstacles to scalability: how semantic knowledge can be exploited to enhance concurrency, how a checkpoint/continuation style of programming can support fine-grained recovery, and how a novel application of Bloom filters can detect and avoid deadlocks.
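The appeal of Bloom filters for this purpose is their one-sided error: a filter can report false positives but never false negatives, so a cheap "might these resource sets intersect?" test safely over-approximates true contention. The sketch below illustrates that generic property in Python; it is not the construction from the talk, and the class and method names are invented here:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over a fixed-size bit vector."""

    def __init__(self, size=256, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, item):
        # Derive `hashes` bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # May return a false positive, never a false negative.
        return all(self.bits >> p & 1 for p in self._positions(item))

    def might_intersect(self, other):
        # If any element was added to both filters, they share all of
        # that element's bit positions, so a zero bitwise AND proves
        # the underlying sets are disjoint -- a conservative test.
        return bool(self.bits & other.bits)
```

A deadlock-avoidance scheme can exploit the conservative direction: when two transactions' filters do not intersect, they provably touch disjoint resources and cannot deadlock with each other; only when the filters do intersect must a more expensive check (or an abort) be taken.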

Joint work with Eric Koskinen.

Bio

Maurice Herlihy received an A.B. degree in Mathematics from Harvard University and a Ph.D. degree in Computer Science from MIT. He has been an Assistant Professor in the Computer Science Department at Carnegie Mellon University, and a member of the research staff at Digital Equipment Corporation's Cambridge (MA) Research Lab. He is now a Professor of Computer Science at Brown University. Prof. Herlihy's research centers on practical and theoretical aspects of multiprocessor synchronization, with a focus on wait-free and lock-free synchronization. His 1991 paper "Wait-Free Synchronization" won the 2003 Dijkstra Prize in Distributed Computing, and he shared the 2004 Goedel Prize for his 1999 paper "The Topological Structure of Asynchronous Computation." He is a Fellow of the ACM.

22 April
Newell-Simon Hall 3305

3:00-4:00 PM,
with 30 min Q&A to follow

Charles Leiserson
MIT, Computer Science and Artificial Intelligence Laboratory
Cilk++: Multicore-Enabling Legacy C++ Code

 

Abstract

In September 2008, Cilk Arts, Inc., was founded to address the growing need for software to make it easy to program multicore computers. Based on the Cilk multithreading technology developed at MIT, the company's first product, Cilk++, which will be released for general use in Q4 of 2008, allows legacy C++ applications to be multicore-enabled by embedding a handful of keywords in the program source. The Cilk++ compiler and runtime platform work together to offer outstanding performance. In addition, the Cilkscreen race detector guarantees to find race bugs in ostensibly deterministic executions, thereby ensuring software reliability. To cope with legacy codes containing global variables, Cilk++ supports "hyperobjects", which allow races on nonlocal variables to be mitigated without lock contention or restructuring of code. This talk will overview the Cilk++ technology and contrast it with MIT-Cilk. Cilk, Cilk++, and Cilkscreen are registered trademarks of Cilk Arts, Inc.
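The "hyperobject" idea the abstract mentions can be illustrated with a toy analogue: each worker accumulates into a private view of a shared variable, and the views are merged only at the end, so no lock is needed on the hot path. The sketch below uses Python threads rather than Cilk++ syntax, and the class name and structure are inventions of this sketch, not the Cilk++ API:

```python
import threading

class SumReducer:
    """Toy analogue of a reducer hyperobject: each thread updates a
    private view, and the views are combined only when the final
    value is read, so concurrent add() calls never race on a shared
    total or contend for a lock."""

    def __init__(self):
        self._local = threading.local()
        self._views = []               # one mutable cell per thread
        self._lock = threading.Lock()  # guards registration only

    def add(self, x):
        view = getattr(self._local, "view", None)
        if view is None:               # first add() from this thread
            view = self._local.view = [0]
            with self._lock:
                self._views.append(view)
        view[0] += x                   # thread-private, uncontended

    def value(self):
        # Combine all per-thread views (call after joining workers).
        return sum(v[0] for v in self._views)

# Four workers update "the same" reducer without per-add locking.
r = SumReducer()
workers = [threading.Thread(target=lambda: [r.add(1) for _ in range(1000)])
           for _ in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(r.value())  # 4000
```

In Cilk++ itself, reducers are integrated with the runtime's spawn/sync structure and merge views with a user-supplied associative operation; the version above only mimics the "private view per worker, merge at the join" principle.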

Bio

Charles E. Leiserson received the B.S. degree in computer science and mathematics from Yale University, New Haven, Connecticut, in 1975 and the Ph.D. degree in computer science from Carnegie Mellon University, Pittsburgh, Pennsylvania, in 1981. In 1981, he joined the faculty of the Massachusetts Institute of Technology, Cambridge, Massachusetts. He is now Professor of Computer Science and Engineering in the MIT Department of Electrical Engineering and Computer Science, where he leads the Supercomputing Technologies (SuperTech) research group and is a member of the Theory of Computation research group in the MIT Computer Science and Artificial Intelligence Laboratory. He is a former Director of the Computer Science Program of the Singapore-MIT Alliance, a distance-education initiative in which students in Singapore took MIT classes.

Steering Committee

Phil Gibbons (Intel), Uzi Vishkin (University of Maryland), Charles Leiserson (MIT)