15-745 Fall '03
In-Class Discussions

Here is some advice on how to lead a discussion.

Day 1:
Friday, October 10th

Partial Redundancy Elimination:
What is the latest and greatest approach to eliminating redundant computation? How can it be implemented within an SSA framework? (A small example of the kind of redundancy PRE removes appears after this topic list.)
Eliminating Memory References:
Memory references are expensive. What optimizations can we perform to eliminate them? (A register-promotion sketch appears after this topic list.)
Pointer Analysis (Parts 1 and 2):
A major stumbling block to many optimizations in real programs is the ambiguity regarding what pointers point to. If one has to assume the worst case (i.e., that a pointer could point to anything), then it is difficult to aggressively optimize the code. We will look at some of the latest work on how to analyze pointers. Because this is a challenging topic and there is much research in this area, we will cover it in two parts. (An aliasing example appears after this topic list.)
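
For the partial redundancy elimination topic, here is a rough, hand-written C sketch (not drawn from the assigned readings; the function names are made up) of the kind of expression PRE targets: a+b is available only along one path into the join, so the later computation is redundant only part of the time.

    /* Hand-worked sketch: a+b is available only on the "then" path,
       so the computation of y is partially redundant. */
    int original(int a, int b, int flag) {
        int x = 0;
        if (flag)
            x = a + b;          /* a+b computed on this path only */
        int y = a + b;          /* recomputed on every path */
        return x + y;
    }

    /* After PRE: an evaluation is inserted on the path where a+b was
       missing, making it fully redundant at the join, so the later
       computation can simply reuse the temporary t. */
    int after_pre(int a, int b, int flag) {
        int x = 0, t;
        if (flag) {
            t = a + b;
            x = t;
        } else {
            t = a + b;          /* inserted computation */
        }
        int y = t;              /* redundant evaluation eliminated */
        return x + y;
    }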
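
For the memory reference topic, a minimal sketch of one representative optimization, register promotion (scalar replacement). It assumes the compiler can prove that *result is not aliased by a[i]; the function names are invented for illustration.

    /* Original: *result is loaded and stored on every iteration. */
    void sum_original(int *a, int n, int *result) {
        *result = 0;
        for (int i = 0; i < n; i++)
            *result += a[i];    /* load + store of *result each iteration */
    }

    /* After promotion: the accumulator lives in a register-resident
       local, and memory is touched only once at loop exit. */
    void sum_promoted(int *a, int n, int *result) {
        int sum = 0;            /* *result promoted to a local */
        for (int i = 0; i < n; i++)
            sum += a[i];        /* no memory traffic for the accumulator */
        *result = sum;          /* single store at loop exit */
    }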
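
For the pointer analysis topic, a small made-up example of why alias ambiguity gets in the way of optimization.

    /* Without pointer analysis, the compiler must assume p and q may
       point to the same location, so *p has to be reloaded after the
       store through q. */
    int conservative(int *p, int *q) {
        int x = *p;
        *q = 42;                /* might overwrite *p if p and q alias */
        int y = *p;             /* forced reload from memory */
        return x + y;
    }

    /* If analysis proves that p and q never alias, the second load can
       be replaced by the value already in a register. */
    int with_no_alias_info(int *p, int *q) {
        int x = *p;
        *q = 42;
        return x + x;           /* reuse is legal only when no-alias is proven */
    }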

 
Table 1: Friday, October 10th

Topic                            Time Slot     Discussion Leader
Partial Redundancy Elimination   10:30-10:50   Shobha Venkataraman
Eliminating Memory References    10:50-11:10   Mike Bigrigg
Pointer Analysis (Part 1)        11:10-11:30   Benoit Hudson
Pointer Analysis (Part 2)        11:30-11:50   Manfred Lau

Day 2:
Monday, October 13th

Profiling Techniques:
As we have described data flow analysis so far, the compiler assumes that all theoretically possible paths are equally likely. What if the compiler could understand the actual likelihood of paths being taken? We will look at some techniques for collecting this type of profiling information on modern machines. (A sketch of simple edge-count instrumentation appears after this topic list.)
Exploiting Profiling Information in Data Flow Analysis:
Given that we have profiling information, how can we redesign data flow analysis to take advantage of it?
Dynamic Optimizations (Parts 1 and 2):
Taking the idea of using dynamic behavior in the optimization process a step further, what if we performed optimizations as the code was running? There is much research on this topic now, so we will cover it in two parts.
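
For the profiling topic, a minimal sketch of edge-count instrumentation, one simple way this kind of information can be gathered. The counter names and dump routine are hypothetical, not taken from any particular profiling tool.

    #include <stdio.h>

    /* Hypothetical counters that a profiling pass might insert; real
       systems may instead use sampling or hardware counters. */
    static long count_neg = 0, count_nonneg = 0;

    int abs_value(int x) {
        if (x < 0) {
            count_neg++;        /* instrumentation added by the compiler */
            return -x;
        }
        count_nonneg++;
        return x;
    }

    void dump_profile(void) {
        /* Dumped at program exit and fed back into a later compile. */
        printf("neg=%ld nonneg=%ld\n", count_neg, count_nonneg);
    }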

 
Table 2: Monday, October 13th

Topic                                            Time Slot   Discussion Leader
Profiling Techniques                             3:00-3:20   Flavio Lerda
Exploiting Profiling Info in Data Flow Analysis  3:20-3:40   Stephen Somogyi
Dynamic Optimizations (Parts 1 & 2)              3:40-4:00   Indrayana Rustandi, Yevgen Voronenko & Joseph Slember

Day 3:
Wednesday, October 15th

Code Layout Optimizations:
Instruction cache misses are expensive. In addition, code layout can also affect branch performance. How can the compiler arrange the code layout to improve performance? (A hot/cold splitting sketch appears after this topic list.)
Improving Data Cache Performance:
Cache misses can be extremely expensive. What are some optimizations for improving the hit rate in the data cache? (A loop-interchange sketch appears after this topic list.)
IA-64 from a Compiler Optimization Perspective:
IA-64 is Intel's new instruction set architecture. It was designed to support a number of aggressive compiler optimizations that are not normally performed on other commercial machines. We will examine why these features were included in IA-64.
Debugging Optimized Code:
Since the output of an optimizing compiler can bear little resemblance to the original source code, getting useful information from a debugger can be challenging. A conservative approach is simply to disable optimization when you want to run a debugger, but we will look at techniques for debugging optimized code directly.
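
For the code layout topic, a rough source-level sketch of one layout idea, hot/cold splitting. Real compilers typically reorder basic blocks rather than outlining functions, and the names here are made up for illustration.

    #include <stdio.h>
    #include <stdlib.h>

    /* Cold code: the rarely executed error path is outlined so the hot
       path below stays compact in the instruction cache. */
    static void divide_error(void) {
        fprintf(stderr, "divide by zero\n");
        exit(1);
    }

    int checked_divide(int a, int b) {
        if (b == 0)             /* rarely taken branch */
            divide_error();
        return a / b;           /* hot, fall-through path */
    }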
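
For the data cache topic, a sketch of one classic locality transformation, loop interchange, assuming a row-major C array; the function names are invented for the example.

    #define N 1024

    /* Column-major traversal of a row-major array: consecutive accesses
       land on different cache lines, so the miss rate is high. */
    void zero_by_column(double a[N][N]) {
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                a[i][j] = 0.0;  /* stride-N accesses */
    }

    /* After loop interchange the accesses are unit-stride, so each cache
       line is fully used before it is evicted. */
    void zero_by_row(double a[N][N]) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = 0.0;  /* sequential accesses */
    }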

 
Table 3: Wednesday, October 15th

Topic                                           Time Slot   Discussion Leader
Code Layout Optimizations                       3:00-3:20   Jon Derryberry
Improving Data Cache Performance                3:20-3:40   Alla Safonova
IA-64 from a Compiler Optimization Perspective  3:40-4:00   Jeffrey Stylos
Debugging Optimized Code                        4:00-4:20   Maxim Likhachev
