preliminary results

readers who like graphics are probably wondering if they've been tricked: what does PE have to do with DOOM? so far i've only addressed one end of my problem axis: how to support RTCG, but not how to use it in interactive graphics. this section makes the connection; the proposal explains how the code for some of these examples might work.

quickdraw

consider quickdraw. video enters and leaves the machine through framebuffers, each with its own format. some images are grayscale, some are bitmaps, and some have indexed color. each pixop or scan-converter is implemented for each format, burdening the programmer and wasting memory.

instead, a language describing operations over generalized images could create scan-converters specialized to particular formats. the same line-drawing or bitblit code could work with RGB, RGBA, sequential, grayscale, indexed, and maybe even single-bit formats.
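
to make this concrete, here is a minimal sketch in C (not in any proposed language; the format descriptor and all names are mine) of a single generic pixel store. given a fixed format, a specializer could fold the switch away, leaving just the store a hand-written blitter for that format would contain.

    /* hypothetical generic pixel store, parameterized by a format descriptor */
    #include <stdint.h>

    typedef enum { FMT_GRAY8, FMT_INDEX8, FMT_RGBA32 } format_t;

    typedef struct {
        format_t  fmt;      /* pixel layout */
        int       stride;   /* bytes per scanline */
        uint8_t  *base;     /* first pixel of the image */
    } image_t;

    static void put_pixel(image_t *im, int x, int y, uint32_t value)
    {
        uint8_t *row = im->base + y * im->stride;
        switch (im->fmt) {              /* static once the format is known */
        case FMT_GRAY8:
        case FMT_INDEX8:  row[x] = (uint8_t)value;        break;
        case FMT_RGBA32:  ((uint32_t *)row)[x] = value;   break;
        }
    }

a line-drawer or blitter written once against put_pixel would then pick up a fast path for every format it is specialized to.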

a simpler example is the 8-way symmetry of Bresenham's algorithm. ordinary lisp macros (or exotic cpp hackery) suffice for the implementation, since (i'm guessing) there's little advantage to not compiling all the cases. however, partial evaluation may simplify the implementation by allowing the programmer to convert a variable into a constant (here, the variable is which of the eight cases applies) with just a single annotation.
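
as a sketch (again plain C rather than the eventual implementation), here is one Bresenham inner loop covering all eight octants through its (sx, sy, steep) arguments; those are exactly the values one would annotate as static, so eight branch-free loops fall out of a single definition. put_pixel and image_t are from the previous sketch.

    /* one loop for all octants: the caller folds the line so dx >= dy >= 0,
       and records the sign of each axis and whether x/y were swapped */
    static void bres_octant(image_t *im, int x0, int y0, int dx, int dy,
                            int sx, int sy, int steep)
    {
        int err = 2 * dy - dx;
        for (int i = 0, x = x0, y = y0; i <= dx; i++) {
            if (steep) put_pixel(im, y, x, 1);   /* swapped octants */
            else       put_pixel(im, x, y, 1);
            if (err > 0) { y += sy; err -= 2 * dx; }
            err += 2 * dy;
            x += sx;
        }
    }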

the previous example represents an interesting alternative use of cogen: rather than writing highly general procedures, write very small procedures. after a routine is debugged and identified as a candidate for optimization, the annotation is added. since the transformation is semantics-preserving, there is less that can go wrong, and it is easier to debug.

another application for specialization is rectilinear ops, that is, the case where one component of a delta vector is zero. a line or rectangle routine can run much faster if its arguments are known to be so aligned; generating code for this case requires somehow making this logical relation static [note vs-on-line] [note binding-time-improvement].
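
to suggest what is at stake, here is a hypothetical general step next to the residue one would hope for once dy is known to be zero (8-bit pixels assumed, names invented):

    /* general case: a full address computation per pixel */
    static void span_general(image_t *im, int x0, int y0,
                             int dx, int dy, int n, uint32_t v)
    {
        for (int i = 0; i < n; i++)
            put_pixel(im, x0 + i * dx, y0 + i * dy, v);
    }

    /* horizontal residue: dy == 0 folded away, leaving a pointer walk */
    static void span_horizontal(image_t *im, int x0, int y0, int n, uint32_t v)
    {
        uint8_t *p = im->base + y0 * im->stride + x0;
        for (int i = 0; i < n; i++)
            *p++ = (uint8_t)v;
    }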

how general can the image representation be? how general a space of image operations can be supported? a union image hands off pixel operations to one of its child images, with a translation, depending on the location. this is the building block of the macintosh feature where the desktop spans several monitors and windows may cross framebuffer boundaries. in theory, once window placement is known, alignment and overlap tests may be eliminated, but it remains to be seen if nitrous is actually the right tool.
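
a sketch of the union image, with invented names and a fixed pair of children:

    /* two child images, each placed at an offset on the virtual desktop;
       pixel writes are handed to whichever child contains the point */
    typedef struct {
        image_t *child[2];
        int      ox[2], oy[2];   /* child origins */
        int      w[2],  h[2];    /* child extents */
    } union_image_t;

    static void union_put_pixel(union_image_t *u, int x, int y, uint32_t v)
    {
        for (int i = 0; i < 2; i++) {
            int cx = x - u->ox[i], cy = y - u->oy[i];
            if (cx >= 0 && cy >= 0 && cx < u->w[i] && cy < u->h[i]) {
                put_pixel(u->child[i], cx, cy, v);   /* translate and hand off */
                return;
            }
        }
        /* the point falls on neither framebuffer: drop it */
    }

once the offsets and extents are static, the containment tests above are the ones that could in principle be eliminated.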

photoshop

consider photoshop (or even more so, fractal paint). it comes with a large collection of brushes and effects, many of which are complex enough to have their own control panels and user interfaces. you can write your own plug-in modules, but it's too hard.

instead, consider a brush language where user controls, brush motion, and image arithmetic are highly integrated. now when the user feels the need for something new, she copies the closest existing brush, pops it open and edits the program, clicks ok, and tries it out. the experiment is safe because the brush language is safe.

consider the efficient implementation of filters, or more generally any image function where each output pixel is a function of several nearby input pixels---a neighborhood, perhaps 3x3 or 5x5. say your target machine has registers and a cache. a sliding window increases register reuse, and tiling the path of the window increases cache use [Wolf]. the problem is that this loop is much more complex than the obvious 4-nested-loop implementation.
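
for reference, here is the obvious implementation of a 3x3 filter (a C sketch; grayscale, borders ignored, kernel and scaling left abstract):

    /* four nested loops, one output pixel at a time, rereading every
       input pixel of the window */
    static void filter3x3_naive(const uint8_t *src, uint8_t *dst,
                                int w, int h, const int k[3][3], int shift)
    {
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++) {
                int acc = 0;
                for (int j = -1; j <= 1; j++)
                    for (int i = -1; i <= 1; i++)
                        acc += k[j + 1][i + 1] * src[(y + j) * w + (x + i)];
                dst[y * w + x] = (uint8_t)(acc >> shift);
            }
    }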

the fortran approach hides the complexity in an optimizing compiler: the transformation happens automatically. this introduces brittleness and requires the compiler to search.

another path is to use a macro to abstract the window/tile pattern. my objective is to provide easy ways of writing such macros.
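
one shape such a macro might expand into is sketched below: the 3x3 window held in locals and slid one column per step, so each input pixel is loaded roughly once per output row rather than three times; tiling the outer loop into strips would layer on in the same way (not shown).

    /* sliding-window variant of filter3x3_naive above */
    static void filter3x3_window(const uint8_t *src, uint8_t *dst,
                                 int w, int h, const int k[3][3], int shift)
    {
        for (int y = 1; y < h - 1; y++) {
            const uint8_t *r0 = src + (y - 1) * w;
            const uint8_t *r1 = src + y * w;
            const uint8_t *r2 = src + (y + 1) * w;
            int a0 = r0[0], a1 = r0[1];      /* leftmost two columns of the */
            int b0 = r1[0], b1 = r1[1];      /* window, kept in registers   */
            int c0 = r2[0], c1 = r2[1];
            for (int x = 1; x < w - 1; x++) {
                int a2 = r0[x + 1], b2 = r1[x + 1], c2 = r2[x + 1];
                int acc = k[0][0]*a0 + k[0][1]*a1 + k[0][2]*a2
                        + k[1][0]*b0 + k[1][1]*b1 + k[1][2]*b2
                        + k[2][0]*c0 + k[2][1]*c1 + k[2][2]*c2;
                dst[y * w + x] = (uint8_t)(acc >> shift);
                a0 = a1; a1 = a2;            /* slide the window right */
                b0 = b1; b1 = b2;
                c0 = c1; c1 = c2;
            }
        }
    }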

renderman

consider a renderman modeler. the user opens a dialog and edits a shader program. the program is compiled for better interactive performance as the geometry is dragged about. the user decides to adjust some parameters of the shader. when she drags a slider, the shader is specialized so that only values dependent on that slider are recomputed---the static sliders are `program' [usoft].
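
a toy version of the idea, with made-up parameter names: a shader with two sliders, and the residue left when one of them is pinned at its current value.

    #include <math.h>

    /* generic shader: both `gain' and `bias' are slider parameters */
    static float shade(float n_dot_l, float gain, float bias)
    {
        float ambient = 0.1f + 0.05f * bias;              /* depends only on bias */
        float diffuse = gain * powf(n_dot_l, 1.0f + bias);
        return ambient + diffuse;
    }

    /* residue while the user drags `gain' with bias pinned at 0.5:
       the ambient term and the exponent have become constants */
    static float shade_bias_0_5(float n_dot_l, float gain)
    {
        return 0.125f + gain * powf(n_dot_l, 1.5f);
    }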

on another front, the `display list' or `modeling hierarchy' can be converted from an interpreted data structure into executable code as needed.
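
the starting point might look like the interpreter sketched below (tags and node layout invented); compiling the list means emitting the same sequence of draw calls as straight-line code, paying the dispatch and pointer chasing once instead of every frame.

    typedef enum { OP_TRANSLATE, OP_LINE, OP_END } op_t;

    typedef struct { op_t op; int a, b, c, d; } dl_node_t;

    /* some line routine, e.g. built from the quickdraw sketches above */
    void draw_line(image_t *im, int x0, int y0, int x1, int y1);

    static void run_display_list(image_t *im, const dl_node_t *dl)
    {
        int tx = 0, ty = 0;
        for (; dl->op != OP_END; dl++) {
            switch (dl->op) {                /* per-frame dispatch overhead */
            case OP_TRANSLATE: tx += dl->a; ty += dl->b; break;
            case OP_LINE:      draw_line(im, dl->a + tx, dl->b + ty,
                                         dl->c + tx, dl->d + ty); break;
            default: break;
            }
        }
    }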

Simsian CA

consider Simsian evolution of cellular automata on a PC. a genetic algorithm produces local pixel programs. the neighborhood functions combine with buffers to produce video streams. the user selects pleasing programs for reproduction by the genetic algorithm. the task of the programmer is then to create a language which mutates well.
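
a skeleton of the loop the evolved pixel programs would plug into (again a C sketch, names invented): interpreted, the rule sits behind an indirect call per pixel, which is exactly what one would want cogen to inline into a specialized inner loop.

    /* an evolved local rule maps a 3x3 neighborhood to a new pixel value */
    typedef uint8_t (*rule_fn)(uint8_t nbhd[3][3]);

    static void ca_step(const uint8_t *src, uint8_t *dst,
                        int w, int h, rule_fn rule)
    {
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++) {
                uint8_t nbhd[3][3];
                for (int j = 0; j < 3; j++)
                    for (int i = 0; i < 3; i++)
                        nbhd[j][i] = src[(y + j - 1) * w + (x + i - 1)];
                dst[y * w + x] = rule(nbhd);   /* indirect call per pixel */
            }
    }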

dsp

Max is a data-flow graphical programming system for MIDI streams (named after Max Matthews, the father of computer music; the last time i saw it was in 1990, when Andy Schloss was using it with `digital drumsticks'). do that, but for audio samples and video pixels. this might be a system like Dannenberg's Arctic music language, but suitable for realtime interactive performance.