Computer Architecture is Back: The Berkeley View of the Parallel Computing Research Landscape
David Patterson
 

ABSTRACT

The sequential processor era is now officially over, as the IT industry has bet its future on multiple processors per chip. The new trend is doubling the number of cores per chip every two years instead of the regular doubling of uniprocessor performance. This shift toward increasing parallelism is not a triumphant stride forward based on breakthroughs in novel software and architectures for parallelism; instead, this plunge into parallelism is actually a retreat from even greater challenges that thwart efficient silicon implementation of traditional uniprocessor architectures.

A diverse group of University of California at Berkeley researchers from many backgrounds -- circuit design, computer architecture, massively parallel computing, computer-aided design, embedded hardware and software, programming languages, compilers, scientific programming, and numerical analysis -- met for nearly two years to discuss parallelism from these many angles. This talk and a technical report are the result. (See view.eecs.berkeley.edu)

We concluded that sneaking up on the problem of parallelism the way industry plans is likely to fail, and that we desperately need a new solution for parallel hardware and software. Here are some of our recommendations:

  • The overarching goal should be to make it easy to write programs that execute efficiently on highly parallel computing systems.
  • The target should be 100s to 1000s of cores per chip, as these chips are built from the processing elements that are the most efficient in MIPS (Million Instructions per Second) per watt, MIPS per area of silicon, and MIPS per development dollar.
  • Instead of traditional benchmarks, use 13 Dwarfs to design and evaluate parallel programming models and architectures. (A dwarf is an algorithmic method that captures a pattern of computation and communication; sparse matrix-vector multiplication, the kernel at the heart of the sparse linear algebra dwarf, is sketched after this list.)
  • Autotuners should play a larger role than conventional compilers in translating parallel programs. (A sketch of the autotuning loop also follows this list.)
  • To maximize programmer productivity, future programming models must be more human-centric than the conventional focus on hardware or applications or formalisms.
  • Traditional operating systems will be deconstructed and operating system functionality will be orchestrated using libraries and virtual machines.
  • To explore the design space rapidly, use highly scalable, low-cost, and flexible system emulators based on Field Programmable Gate Arrays (FPGAs). (See ramp.eecs.berkeley.edu)
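
To make the dwarf idea concrete, here is a minimal sketch (not from the talk) of sparse matrix-vector multiplication in C. The compressed sparse row (CSR) layout and the name spmv_csr are illustrative choices; the point is that each row's dot product is independent of the others, which is exactly the kind of computation-and-communication pattern a dwarf is meant to capture.

    /* Sparse matrix-vector multiply, y = A*x, with A in compressed
       sparse row (CSR) form: row_ptr[i]..row_ptr[i+1]-1 index the
       nonzeros of row i; col_idx[k] and val[k] give the column and
       value of the k-th nonzero. (Illustrative sketch only.) */
    #include <stdio.h>

    void spmv_csr(int n, const int *row_ptr, const int *col_idx,
                  const double *val, const double *x, double *y)
    {
        for (int i = 0; i < n; i++) {   /* rows are independent, so     */
            double sum = 0.0;           /* each can run on its own core */
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
                sum += val[k] * x[col_idx[k]];
            y[i] = sum;
        }
    }

    int main(void)
    {
        /* 2x2 example: A = [[4, 1], [0, 3]], x = [1, 2] */
        int row_ptr[] = {0, 2, 3};
        int col_idx[] = {0, 1, 1};
        double val[]  = {4.0, 1.0, 3.0};
        double x[]    = {1.0, 2.0}, y[2];

        spmv_csr(2, row_ptr, col_idx, val, x, y);
        printf("y = [%g, %g]\n", y[0], y[1]);   /* prints y = [6, 6] */
        return 0;
    }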

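And to make the autotuner recommendation concrete, here is a minimal sketch of the autotuning loop in the spirit of systems such as ATLAS, FFTW, and OSKI: generate parameterized variants of a kernel, time each on the target machine, and keep the fastest. The kernel (a cache-blocked transpose), the candidate tile sizes, and the timing method are illustrative assumptions, not details from the talk.

    /* Autotuning sketch: time several tile sizes for a cache-blocked
       matrix transpose and keep the fastest on this machine.
       (Illustrative sketch only.) */
    #include <stdio.h>
    #include <time.h>

    #define N 1024
    static double a[N][N], b[N][N];

    /* Transpose a into b using square tiles of the given size. */
    static void transpose_tiled(int tile)
    {
        for (int ii = 0; ii < N; ii += tile)
            for (int jj = 0; jj < N; jj += tile)
                for (int i = ii; i < ii + tile; i++)
                    for (int j = jj; j < jj + tile; j++)
                        b[j][i] = a[i][j];
    }

    int main(void)
    {
        int candidates[] = {8, 16, 32, 64, 128};   /* all divide N evenly */
        int ncand = sizeof candidates / sizeof candidates[0];
        int best_tile = candidates[0];
        double best_time = 1e30;

        for (int c = 0; c < ncand; c++) {
            clock_t t0 = clock();
            transpose_tiled(candidates[c]);
            double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
            printf("tile %4d: %.4f s\n", candidates[c], secs);
            if (secs < best_time) {
                best_time = secs;
                best_tile = candidates[c];
            }
        }
        printf("selected tile size: %d (b[0][0] = %g)\n", best_tile, b[0][0]);
        return 0;
    }

A production autotuner would search a richer space (loop order, unrolling, data layout) and emit the winning variant as the kernel to ship, but the select-by-measurement loop is the essential idea.
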
Now that the IT industry is urgently facing perhaps its greatest challenge in 50 years, and computer architecture is a necessary but not sufficient component of any solution, this talk declares that computer architecture is interesting once again.

 

Bio

David Patterson is the Pardee Professor of Computer Science and the Director of the RAD Lab at the University of California at Berkeley. Past chair of the Computer Science Department and the Computing Research Association, he was elected President of the Association for Computing Machinery (ACM) and served on the U.S. President's Information Technology Advisory Committee.

He is one of the pioneers of both reduced instruction set computers (RISC) and redundant arrays of inexpensive disks (RAID) and is co-author of two popular textbooks on computer architecture. He has won several awards for research, teaching, and service, including the 2004 C&C Prize. He was also elected to the American Academy of Arts and Sciences, the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame.

Visit http://www.eecs.berkeley.edu/Faculty/Homepages/patterson.html for more info.