15-745 Spring '03
In-Class Discussions

Here is some advice on how to lead a discussion.

Here is the list of papers that we will be discussing (PDF).

Day 1:
Wednesday, February 26th

Partial Redundancy Elimination:
What is the latest and greatest approach to eliminating redundant computation? How can it be implemented within an SSA framework? (A small code sketch of the idea appears after this list of topics.)
Pointer Analysis (Parts 1 and 2):
A major stumbling block to many optimizations in real programs is the ambiguity regarding what pointers point to. If one has to assume the worst case (i.e., that a pointer could point to anything), then it is difficult to aggressively optimize the code. We will look at some of the latest work on how to analyze pointers. Because this is a challenging topic and there is much research in this area, we will cover it in two parts. (A second sketch, on aliasing, appears after this list of topics as well.)
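
To make the redundancy-elimination discussion concrete, here is a small before/after sketch of the kind of expression PRE targets (the function and variable names are made up for illustration; this is not code from the assigned paper):

    /* Before PRE: along the path through the "then" branch, a + b is
       computed twice; along the "else" path it is computed once.  The
       second computation is therefore only partially redundant. */
    int before(int a, int b, int c)
    {
        int x, y;
        if (c > 0)
            x = a + b;
        else
            x = c;
        y = a + b;      /* redundant whenever the "then" branch ran */
        return x + y;
    }

    /* After PRE: inserting the computation on the path where it was
       missing makes the later occurrence fully redundant, so it can be
       replaced by a reuse of the temporary t. */
    int after(int a, int b, int c)
    {
        int x, y, t;
        if (c > 0) {
            t = a + b;
            x = t;
        } else {
            t = a + b;  /* inserted on the formerly "missing" path */
            x = c;
        }
        y = t;          /* recomputation of a + b eliminated */
        return x + y;
    }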
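
Similarly, here is a small sketch of why unresolved pointers block optimization (again with made-up names, not code from the papers):

    int g;

    /* Without points-to information, the compiler must assume that the
       store through p may overwrite g, so it cannot fold g + 1 to 6 and
       must reload g from memory after the store. */
    int update(int *p)
    {
        g = 5;
        *p = 7;          /* may or may not write to g             */
        return g + 1;    /* 6 only if analysis proves p != &g     */
    }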

 
Table 1: Wednesday, February 26th
Topic | Time Slot | Discussion Leader
Partial Redundancy Elimination | 1:35-2:00 | Tom Murphy & Aleksey Kliger
Pointer Analysis (Part 1) [PS.GZ] | 2:00-2:25 | Kaustuv Chaudhuri & Stephen Magill
Pointer Analysis (Part 2) | 2:25-2:50 | Jason Reed & Lea Kissner

Day 2:
Monday, March 3rd

Eliminating Memory References:
Memory references are expensive. What optimizations can we perform to eliminate them? (A small code sketch appears after this list of topics.)
Profiling Techniques:
As we have described data flow analysis so far, the compiler assumes that all theoretically possible paths are equally likely. What if the compiler could understand the actual likelihood of paths being taken? We will look at some techniques for collecting this type of profiling information on modern machines. (A second sketch, showing edge-count instrumentation, appears after this list of topics as well.)
Exploiting Profiling Information in Data Flow Analysis:
Given that we have profiling information, how can we redesign data flow analysis to take advantage of it?
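
Here is a small hand-written sketch of one way to eliminate memory references, promoting a memory-resident accumulator into a register (the names are invented; the legality condition noted in the comment is the key point):

    /* Before: the accumulator sum[0] is loaded and stored on every
       iteration of the loop. */
    void accumulate(int *a, int *sum, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            sum[0] += a[i];
    }

    /* After register promotion / scalar replacement (legal only if the
       compiler can prove that a and sum do not alias): the accumulator
       lives in a register for the whole loop, leaving one load of a[i]
       per iteration and a single store at the end. */
    void accumulate_promoted(int *a, int *sum, int n)
    {
        int i;
        int s = sum[0];
        for (i = 0; i < n; i++)
            s += a[i];
        sum[0] = s;
    }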
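
And here is a small sketch of the simplest, instrumentation-based flavor of profiling: a counter inserted on each edge of a branch (the counter array and function are hypothetical; profilers may instead rely on hardware counters or sampling):

    /* Counters inserted by an instrumentation pass, one per branch edge. */
    unsigned long edge_count[2];

    int clamp_negative(int x)
    {
        if (x < 0) {
            edge_count[0]++;    /* how often the branch was taken   */
            x = 0;
        } else {
            edge_count[1]++;    /* how often it fell through        */
        }
        return x;
    }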

 
Table 2: Monday, March 3rd
Topic | Time Slot | Discussion Leader
Eliminating Memory References [PPT, PDF-1, PDF-2] | 1:35-2:00 | Alina Oprea & Joshua Dunfield
Profiling Techniques | 2:00-2:25 | Ryan Williams & Charlie Garrod
Exploiting Profiling Info in Data Flow Analysis [PPT] | 2:25-2:50 | Smarahara Misra & Trevor Carlson

Day 3:
Wednesday, March 5th

Dynamic Optimizations (Parts 1 and 2):
Taking the idea of using dynamic behavior in the optimization process a step further, what if we performed optimizations as the code was running? There is much research on this topic now, so we will cover it in two parts.
Code Layout Optimizations:
Instruction cache misses are expensive. In addition, the code layout can also affect branch performance. How can the compiler enhance the code layout to improve performance? (A small code sketch appears after this list of topics.)
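
Here is a small sketch of the effect code layout optimization aims for, using GCC's __builtin_expect hint to stand in for profile information (the function and the placement described in the comments are illustrative assumptions, not the output of any particular compiler):

    /* Given profile data (approximated here by GCC's __builtin_expect
       hint), the compiler can lay out the rarely executed clamping code
       away from the loop body, so the hot path occupies consecutive
       instruction-cache lines and the common case falls through rather
       than taking a branch. */
    void scale(int *a, int n, int limit)
    {
        int i;
        for (i = 0; i < n; i++) {
            if (__builtin_expect(a[i] > limit, 0))
                a[i] = limit;       /* cold: candidate for out-of-line placement */
            else
                a[i] = a[i] * 2;    /* hot: laid out as the fall-through path    */
        }
    }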

 
Table 3: Wednesday, March 5th
Topic | Time Slot | Discussion Leader
Dynamic Optimizations (Part 1) | 1:00-1:20 | Girish Venkataramani & Douglas Yung
Dynamic Optimizations (Part 2) [PPT] | 2:00-2:25 | Daniel Spoonhower & Brendan McMahan
Code Layout Optimizations [PPT-1, PPT-2] | 2:25-2:50 | Chi Chen & Jared Smolens

Day 4:
Monday, March 10th

Improving Data Cache Performance:
Cache misses can be extremely expensive. What are some optimizations for improving the hit rate in the data cache? (A small code sketch appears after this list of topics.)
IA-64 from a Compiler Optimization Perspective:
IA-64 is the new Intel instruction set. It was designed to support a number of aggressive compiler optimizations that are not normally performed on other commercial machines. We will examine the rationale for why these features were included in IA-64.
Debugging Optimized Code:
Since the output of an optimizing compiler can bear little resemblance to the original source code, getting useful information in a debugger can be challenging. A conservative approach is to simply disable optimization if you want to run a debugger, but we will look at techniques that make it possible to debug optimized code.
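
For the data cache discussion, here is the classic loop-interchange sketch (the array size and function names are made up for illustration):

    #define N 1024

    /* Column-order traversal of a row-major C array: consecutive accesses
       are N ints apart, so almost every access touches a new cache line. */
    long sum_by_columns(int a[N][N])
    {
        int i, j;
        long s = 0;
        for (j = 0; j < N; j++)
            for (i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    /* After loop interchange the traversal is in row order, so each cache
       line brought in is fully used before it is evicted. */
    long sum_by_rows(int a[N][N])
    {
        int i, j;
        long s = 0;
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }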

 
Table 4: Monday, March 10th
Topic | Time Slot | Discussion Leader
Improving Data Cache Performance [PPT-1, PPT-2] | 1:35-2:00 | Nikos Hardavellas & Shelley Chen
IA-64 from a Compiler Optimization Perspective [PPT] | 2:00-2:25 | Alex Bobrek & Jonathan Bradbury
Debugging Optimized Code [PPT] | 2:25-2:50 | Steven Osman & Alok Ladsariya

Back to CS745 home page.