Research Interests

Eli Brandt, eli@cs.cmu.edu

My earlier work at Carnegie Mellon was on Aura, a distributed real-time system for interactive event and audio processing. Aura is a C++ framework providing lock-free messaging and audio streaming among objects distributed over a LAN and running at different real-time priorities. The research goal here was to implement this programming model efficiently and to explore its usefulness for writing interactive music software. I was involved with Aura's initial design and implementation, co-authoring papers on this work and on a comparison of popular OSes' real-time performance. In maintaining Aura, I ported it from Win32 to Irix and wrote several audio drivers for it. When the group extended Aura to be a distributed system, I worked on the problem of distributed time, applying control-theory techniques.

Other work led to two collaborations: one, as an undergraduate, on wormhole routing in multicomputers; the other on how to measure MIDI transport performance with readily available hardware and two alligator clips, with measurements that show weaknesses of the standard USB MIDI transport. My side interest in signal processing resulted in a paper on artifact-free digital synthesis of a common analog waveform.

My thesis work is about a particular style of programming, "applicative programming with temporal type constructors". This style represents musical DSP elegantly: at a high level, and from a viewpoint outside of time. To prototype a way to program in this style, I implemented an embedded language within the functional language O'Caml. This implementation serves its purpose, but is far from being a usable product: it's slow, it's limited in certain ways by being an embedded language, and normal people don't use O'Caml. The limitations are the most pressing problem from a research point of view; removing them requires code generation, and might best be solved by working with someone who brings expertise in compilers to the problem. The speed and language-popularity problems might be solved by an implementation embedded in C++ and making heavy use of templates.

Temporal type constructors introduce time structure into types. For example, the "alpha event" constructor attaches a timestamp to the type alpha, whatever it may be (a small sketch appears in the postscript at the end of this statement). The idea is simple, but I think it is still under-appreciated and under-used. File formats have tags to select particular time-structured types, when they should have a small but general set of constructors. Audio "plugins" and software synthesizers (DirectX(i) and VST(i)) communicate in a particular format, which manages to be both restrictive and absurdly complex, when they ought to construct it as necessary. Music software gives the user a painfully limiting selection of types -- the event list, for a few different kinds of events -- when, again, it should offer constructors.

Concrete abstraction (http://www.grame.fr/Elody/Elody.html) is a way of letting the user manipulate data, and then abstract the difference between two states into an operator that can be applied. The lambda calculus is a powerful underlying framework -- for example, the Church numerals provide iteration. The research question is how to do abstraction. If you abstract C from [C E G], do you get (\x. [x E G]), or (\x. [x (transpose-up-major-third x) (transpose-up-fifth x)]), or one of a variety of other possibilities? It depends. The user might want any of these.
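To make the alternatives concrete, here is a minimal O'Caml sketch; the names (pitch, up_major_third, up_fifth) are mine, purely for illustration. Both abstractions reproduce [C E G] when applied back to C, which is exactly why the choice is ambiguous:

    (* hypothetical pitch encoding: semitones above middle C *)
    type pitch = int
    let c, e, g = 0, 4, 7
    let up_major_third (p : pitch) = p + 4
    let up_fifth (p : pitch) = p + 7

    (* abstraction 1: E and G stay fixed -- \x.[x E G] *)
    let fixed x = [x; e; g]

    (* abstraction 2: E and G move with x, as chord intervals *)
    let chordal x = [x; up_major_third x; up_fifth x]

    (* both agree on the original data: each yields [0; 4; 7] *)
    let () = assert (fixed c = chordal c)

Applied to D instead of C, fixed yields [D E G] while chordal yields [D F# A]: the two abstractions diverge as soon as the argument changes, so the system must somehow discover which one the user meant.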
I think a promising direction is to fold extra information into the user's manipulation of the data -- here, the notes might be marked as fixed, or as related through a major scale.

Not much work has been done with concrete abstraction for non-music software, but I believe it should be. I think it may provide a powerful kind of programming facility that won't scare the users -- more useful than macros. You perform direct manipulation of the data just as usual, and then abstract away the data to leave the manipulation. Higher-order functions can be natural and powerful: "take the saturation plane of the image, apply the given function to it, and recombine with the hue and value planes."

I've talked about what I see as near-at-hand research paths. Let me also toss out a few ideas further afield:

* I think ubiquitous computing ought to make heavier use of audio: we have such a diversity of user interactions, and the sense of hearing spans the necessary range of salience, from subliminal background on up to alerting the inattentive user. Audio I/O also has practical advantages for mobile computing, such as the precedent set by cellphones and headsets.

* C++ templates provide a Turing-complete programming language at compile time -- rather by accident, but people have used it very effectively. Wouldn't an intentional design of this kind of staged computation fix the problems of clunky syntax and inscrutable error reporting?

* See to what extent vocal accents can be changed from one to another by warping in formant-tuple space according to a learned mapping.

* Think about the Compose group's work on domain-specific languages for drivers to handle device interfaces, and how to extend it to the OS-interface side.

* How about a domain-specific language for debugging?

* Follow up on Latanya Sweeney's eye-opening work on "computational disclosure control": how to maintain privacy and control inference while preserving the usefulness of large databases.

* Work on rendering impossible figures, such as the Penrose staircase, and nonstandard perspectives, such as in Escher's work.

What I do will of course depend on where I am and who my colleagues are, but here you have some sense of how I think about research. I'd be glad to chat about any of these ideas.
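Postscript: the temporal type constructors mentioned earlier, in a minimal O'Caml sketch. The type and field names are mine, for illustration only; the point is that a small set of constructors composes into the structures that file formats and plugin APIs currently hard-code one by one:

    (* "alpha event": any type whatsoever, tagged with a timestamp *)
    type 'a event = { time : float; value : 'a }

    (* "alpha events": an event list over any payload *)
    type 'a events = 'a event list

    (* the constructors compose; no new format per time-structured type *)
    type note = { pitch : int; velocity : int }
    type score = note events         (* timestamped notes *)
    type automation = float events   (* timestamped parameter changes *)

    (* operations then work at the level the constructors provide *)
    let transpose (n : int) (s : score) : score =
      let bump ev = { ev with value = { ev.value with pitch = ev.value.pitch + n } } in
      List.map bump s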