problem and objective

The objective of my research is to develop software systems for building complex interactive graphics environments. Consider these `power-user' applications:

QuickDraw
    windows, framebuffers, video I/O, scan conversion, character generation.
Photoshop
    defines new brushes and filters on the fly.
RenderMan
    3D geometry, motion, and the shading language.
Doom-TV
    over a LAN, with live video streams mapped onto walls and characters.
Simsian CAs
    mutating cellular automata and video streams for user selection.

Current software practice is inadequate for such systems. The essence of the problem is that graphics involves many pixels. On one hand, you want the per-pixel routines (i.e. the bits of code that actually draw each pixel on the screen) to be flexible enough to support user-specified operations. On the other hand, making the per-pixel code flexible and general kills performance by putting conditionals inside the inner loop. This is the fundamental performance-vs-generality trade-off at work.
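The trade-off can be seen in a minimal Python sketch (the operation names and 8-bit pixel model here are hypothetical, chosen only for illustration): the general routine re-dispatches on the operation for every single pixel, while a routine with the operation fixed has a branch-free loop body.

```python
def apply_general(pixels, op, arg):
    """Flexible: supports any named op, but branches once per pixel."""
    out = []
    for p in pixels:                       # one conditional test per pixel
        if op == "add":
            out.append((p + arg) & 0xFF)   # clamp to 8-bit range by masking
        elif op == "mul":
            out.append((p * arg) & 0xFF)
        elif op == "threshold":
            out.append(255 if p > arg else 0)
        else:
            raise ValueError(op)
    return out

def apply_add(pixels, arg):
    """Fast but rigid: the op is fixed, so the loop body has no dispatch."""
    return [(p + arg) & 0xFF for p in pixels]
```

Supporting a new user-specified operation means either adding another branch to `apply_general` (slower still) or hand-writing another specialized loop (inflexible).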

This problem is exacerbated during prototyping, exploratory programming, evolutionary programming, toolkit building, and `builder' building, because these are exactly the situations in which software must be at its most flexible.

The standard technique in this situation is batching (aka buffering or loop inversion), which uses temporary memory to move the conditionals out of the inner loop and increase bandwidth. Unfortunately, batching is hard to program and it increases latency, and latency is critical to interactivity. (see proposal)
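Continuing the hypothetical sketch above, batching inverts the loops: the dispatch runs once per pipeline stage rather than once per pixel, but each stage materializes a full temporary buffer, so latency grows with the length of the pipeline.

```python
def run_pipeline_batched(pixels, ops):
    """Loop inversion: op dispatch moves outside the pixel loop.
    Each stage writes a complete temporary buffer before the next
    stage starts, trading memory and latency for bandwidth."""
    buf = pixels
    for op, arg in ops:                        # dispatch once per stage
        if op == "add":
            buf = [(p + arg) & 0xFF for p in buf]   # temporary buffer
        elif op == "mul":
            buf = [(p * arg) & 0xFF for p in buf]
        else:
            raise ValueError(op)
    return buf
```

No pixel is fully processed until every earlier stage has swept the whole buffer, which is exactly the latency cost the text describes.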

The only solution is run-time code generation (RTCG). Here, each time the user requests a new operation, a loop is generated just for that operation. Thus the conditionals are eliminated, no temporary memory is required, and latency is unaffected.
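A minimal sketch of the idea, again with the same hypothetical ops, using Python's `compile`/`exec` as a stand-in for a real code generator: the dispatch happens once, at generation time, and the emitted loop is straight-line code over the pixels with no per-pixel conditionals and no temporary buffers.

```python
def generate_loop(ops):
    """Generate, at run time, a loop specialized to one op sequence.
    The op sequence is folded into a single expression, so the emitted
    per-pixel code contains no dispatch at all."""
    expr = "p"
    for op, arg in ops:                 # dispatch once, at generation time
        if op == "add":
            expr = f"({expr} + {arg})"
        elif op == "mul":
            expr = f"({expr} * {arg})"
        else:
            raise ValueError(op)
    src = (f"def specialized(pixels):\n"
           f"    return [({expr}) & 0xFF for p in pixels]\n")
    ns = {}
    exec(compile(src, "<rtcg>", "exec"), ns)   # "compile" the generated loop
    return ns["specialized"]
```

A real system would emit machine code rather than Python source, but the structure is the same: one generation step per user request, then a specialized loop that runs at full speed for as long as that operation stays selected.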

The problem-axis of my thesis then becomes: 1) how can RTCG be supported in a programming system? and 2) how can RTCG be used in interactive graphics?