
 SSS Abstracts 
Spring 1999


Rapid Development of Custom Software Architecture Design Environments

Friday, January 15th, 1999 from 12-1 pm in WeH 4623.

Presented by Bob Monroe.

Software architecture provides a powerful way to manage the complexity of large software systems. It has emerged as a distinct form of abstraction for software systems with its own set of design issues, vocabulary, and goals. Like designers in other disciplines, software architects can gain significant leverage by using powerful and appropriate design environments and tools.

The cost and difficulty of creating these powerful design tools, however, prohibit their use for many software development projects. One of the primary reasons for the difficulty and cost of building these tools is that tool developers generally need to build a significant amount of supporting infrastructure before they can make use of the important architectural design expertise that the tools encapsulate. This infrastructure includes both the concepts underlying the tools' functionality and the implementation of the tools themselves.

In this talk I will present a new approach to capturing and using architectural design expertise in software architecture design environments. This approach combines a language and tools for capturing and encapsulating design expertise with a flexible software architecture design environment that can be rapidly and incrementally customized to create design environments for specific architectural styles.


Learning Melodic Pitch Preference in Real-time Interactive Jazz Improvisation

Friday, January 29th, 1999 from 12-1 pm in WeH 4601.

Presented by Belinda Thom.

My ultimate goal is to build an automated computer system that "jams" with a musician in the jazz/blues setting, providing a useful practice and performance tool for improvisers. To this end, I am developing a model of interactive solo improvisation that supports musician-specific customization, adaptation with respect to what has just been played, and reasonable real-time musical response. My research therefore mainly consists of developing techniques for representing music data -- a temporal and stochastic process -- in such a way that learning from a specific musician's improvisation is possible.

Meaningful solo generation requires hierarchical temporal response; an agent that merely learns the surface structure contained within a "string of notes" will not necessarily be able to produce a solo that contains higher-level structure, be it harmonic, tension vs. relaxation, etc. In this talk, I will discuss a new learning algorithm that I am developing that integrates two ML techniques -- unsupervised clustering and probabilistic finite automata (PFA) learning -- in order to enable learning of both the surface structure and a higher level of temporal structure.

In particular, clustering is used to quantize sequential bars of a solo. Given this transformation from strings-of-notes into an ordered sequence of classifications, a PFA is learned that captures the trajectories through cluster-bar space that have been seen. Most importantly, the integration of these two algorithms allows me to use temporal predictive performance to address the very difficult problem of assessing the clustering's validity.
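
To make the two-stage idea concrete, here is a minimal sketch, not Thom's actual system: the per-bar features and toy solo below are hypothetical placeholders, k-means stands in for the clustering step, and a first-order Markov model over cluster labels stands in for full PFA learning.

    # Minimal sketch of the two-stage idea (hypothetical features and data):
    # 1) quantize each bar of a solo into a cluster label, 2) learn transition
    # probabilities over the resulting label sequence.
    from collections import Counter, defaultdict
    import random

    random.seed(0)

    def bar_features(bar):
        # Hypothetical per-bar features: mean pitch and note count.
        pitches = [pitch for (pitch, dur) in bar]
        return (sum(pitches) / len(pitches), len(bar))

    def nearest(p, centers):
        return min(range(len(centers)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))

    def kmeans(points, k, iters=20):
        centers = random.sample(points, k)
        for _ in range(iters):
            groups = defaultdict(list)
            for p in points:
                groups[nearest(p, centers)].append(p)
            centers = [tuple(sum(xs) / len(xs) for xs in zip(*groups[i]))
                       if groups[i] else centers[i] for i in range(k)]
        return centers

    def learn_transitions(labels):
        # First-order Markov model over cluster labels -- a simple stand-in
        # for general PFA learning.
        counts = defaultdict(Counter)
        for a, b in zip(labels, labels[1:]):
            counts[a][b] += 1
        return {a: {b: n / sum(c.values()) for b, n in c.items()}
                for a, c in counts.items()}

    # Toy usage: each bar is a list of (midi_pitch, duration) pairs.
    solo = [[(60, 1), (62, 1)], [(72, 0.5)] * 4, [(61, 1), (63, 1)], [(74, 0.5)] * 4]
    centers = kmeans([bar_features(b) for b in solo], k=2)
    labels = [nearest(bar_features(b), centers) for b in solo]
    print(labels, learn_transitions(labels))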


Refinement for Parallel Programs

Friday, February 12th, 1999 from 12-1 pm in WeH 4601.

Presented by Jürgen Dingel.

The stepwise development of a sequential program from a specification can be formalized through a notion of refinement. The resulting theory is elegant and powerful.

The stepwise development of a parallel program from a specification, however, turns out to be significantly harder to formalize. Fair parallelism and compositionality have to be reconciled, deadlock freedom and liveness properties need to be proved, and different kinds of parallelism (e.g., shared-variable, message-passing) have to be dealt with.

In this talk, we will present a refinement calculus for parallel programs. It is based on a trace-theoretic semantic model that supports shared-variable and message-passing concurrency, fair parallel composition, and the introduction of local variables and channels. Compositionality is achieved through assumption-commitment reasoning. The calculus is applicable to terminating and non-terminating programs and supports reasoning about liveness properties like termination and eventual entry. It allows the refinement of a shared-variable implementation into a distributed, message-passing implementation. Finally, it can be used to explore the "degrees of implementation freedom" of an algorithm and to uncover new and sometimes more efficient implementations.
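
As a toy illustration of the underlying notion (far simpler than the calculus presented, and ignoring fairness, liveness, and channels), refinement over finite traces can be read as trace inclusion: every observable behavior of the implementation must already be a behavior of the specification. The transition systems below are hypothetical.

    # Toy illustration of refinement as trace inclusion (finite, terminating
    # behaviors only).  A system is a pair (initial_state, step), where
    # step(state) yields (action, next_state) pairs.
    def traces(system, depth):
        init, step = system
        result = {()}
        frontier = [((), init)]
        for _ in range(depth):
            nxt = []
            for trace, state in frontier:
                for action, succ in step(state):
                    extended = trace + (action,)
                    result.add(extended)
                    nxt.append((extended, succ))
            frontier = nxt
        return result

    def refines(impl, spec, depth=4):
        # impl refines spec if every observable impl trace is a spec trace.
        return traces(impl, depth) <= traces(spec, depth)

    # Hypothetical example: the spec allows 'a' and 'b' in either order,
    # while the implementation commits to 'a' before 'b'.
    spec = (frozenset({'a', 'b'}), lambda todo: [(x, todo - {x}) for x in todo])
    impl = (('a', 'b'), lambda todo: [(todo[0], todo[1:])] if todo else [])
    print(refines(impl, spec))   # True: fixing the order is a valid refinement
    print(refines(spec, impl))   # False: the spec allows behavior impl forbids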


Machine translation: past, present and future

Friday, February 19th, 1999 from 12-1 pm in WeH 8220.

Presented by Adam Berger.

The problem of machine translation has been characterized as "AI-complete"---at least as difficult as any other problem in artificial intelligence. And for good cause: despite four decades of research, MT researchers still haven't eliminated the need for human translators. However, they have come up with some powerful techniques in statistics and machine learning, with applications beyond translation. This talk begins with an overview of the state of the art in automatic translation, and focuses on some recent work applying methods of statistical translation to information retrieval.
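
One illustrative reading of the translation/retrieval connection, not necessarily the work being presented, is to rank a document by the probability that it "translates" into the query, mixing a word-translation table with a background collection model. The tiny translation table and documents below are hypothetical.

    # Toy sketch of translation-based retrieval: rank documents by the
    # probability of generating the query words.
    import math

    t_prob = {('car', 'automobile'): 0.4, ('car', 'car'): 0.6,
              ('fast', 'quick'): 0.5, ('fast', 'fast'): 0.5}

    def score(query, doc, collection, lam=0.7):
        all_words = [w for d in collection for w in d]
        total = 0.0
        for q in query:
            # Probability that some document word "translates" into q.
            p_doc = sum(t_prob.get((q, w), 1.0 if q == w else 0.0) for w in doc) / len(doc)
            p_bg = all_words.count(q) / len(all_words)   # background smoothing
            total += math.log(lam * p_doc + (1 - lam) * p_bg + 1e-12)
        return total

    docs = [['the', 'automobile', 'was', 'quick'], ['the', 'dog', 'was', 'slow']]
    query = ['fast', 'car']
    best = max(docs, key=lambda d: score(query, d, docs))
    print(best)   # the automobile/quick document, despite sharing no query words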


Preliminary Results in Multi-Pass Parsing of Natural Language

Friday, February 26th, 1999 from 12-1 pm in WeH 8220.

Presented by Paul Placeway.

Parsing natural language is an attempt to discover some structure in a text (or textual representation) generated by a person.

Early NL systems used a strictly context-free model of grammar. Syntactic grammars using this formalism were difficult to maintain, and early parsers could suffer from exponential blow-up problems in the face of grammatically ambiguous input. The blow-up problem was solved by Tomita's local ambiguity packing (as applied in GLR, Carroll's chart parser, etc.). The maintenance problem was solved by adopting a unification-based grammar formalism; sadly, this formalism can not only break straightforward packing techniques but can also make the (formal) parsing problem impossible to solve.
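
To make the packing idea concrete, a CKY-style chart parser can store one cell per (span, nonterminal) and let alternative derivations share that cell as backpointers, so the chart stays polynomial even when the number of distinct parse trees is exponential. The grammar and sentence in this sketch are hypothetical, and it illustrates packing in general rather than GLR or Carroll's parser specifically.

    # Minimal CKY chart parser with local ambiguity packing: each
    # (start, end, nonterminal) cell is stored once, and alternative
    # derivations share it as a list of backpointers.
    from collections import defaultdict

    grammar = {                      # hypothetical binary rules: children -> parents
        ('DT', 'N'): {'NP'}, ('V', 'NP'): {'VP'}, ('NP', 'VP'): {'S'},
        ('NP', 'PP'): {'NP'}, ('VP', 'PP'): {'VP'}, ('P', 'NP'): {'PP'},
    }
    lexicon = {'I': {'NP'}, 'saw': {'V'}, 'the': {'DT'}, 'man': {'N'},
               'with': {'P'}, 'a': {'DT'}, 'telescope': {'N'}}

    def cky(words):
        n = len(words)
        chart = defaultdict(list)    # (start, end, nonterminal) -> backpointers
        for i, w in enumerate(words):
            for a in lexicon.get(w, ()):
                chart[(i, i + 1, a)].append(w)
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):
                    for (b, c), parents in grammar.items():
                        if (i, k, b) in chart and (k, j, c) in chart:
                            for a in parents:
                                # packing: one cell per nonterminal, however
                                # many derivations feed into it
                                chart[(i, j, a)].append(((i, k, b), (k, j, c)))
        return chart

    chart = cky('I saw the man with a telescope'.split())
    print((0, 7, 'S') in chart)         # True: the sentence parses
    print(len(chart[(1, 7, 'VP')]))     # 2: both PP attachments packed in one cell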

This talk begins with a brief overview of past work in syntactic parsing, then reviews unification parsing and why it is a hard problem. It concludes with a presentation of some preliminary work I've done to address these problems.


Low-Latency Music Software Using Off-The-Shelf Operating Systems

Friday, March 5th, 1999 from 12-1 pm in WeH 4601.

Presented by Eli Brandt.

Real-time music applications require operating system support for memory management, process scheduling, media I/O, and general development, including debugging. After giving some background on how such applications operate, I'll discuss the expectations that they have of the OS, and how these are and are not met. We have performance data for some current operating systems, including Windows NT 4, Windows 95, and IRIX 6.4.

Surprise Bonus Talk - Programming with Temporal Data: Half-Baked Pre-Proposal Maunderings (To be tacked on at the end, since the main talk is only 15 or 20 minutes.)

Just a few type constructors can build up complex data types which correspond to the structure of sophisticated processing algorithms. With the right primitives, these algorithms become an order of magnitude easier to express than they are at present. Efficiency and real-time behavior should be acceptable if some care is taken.
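
As a highly speculative sketch of that idea (my own reading, not the design to be proposed), temporal data can be modeled as functions from time to values, with a handful of combinators building richer signals from simpler ones.

    # Speculative sketch: temporal values as functions from time to values,
    # combined with a few type constructors / combinators.
    import math

    def constant(x):
        return lambda t: x

    def lift2(f, a, b):
        # Pointwise combination of two time-varying values.
        return lambda t: f(a(t), b(t))

    def delay(d, sig):
        return lambda t: sig(t - d)

    def integrate(sig, dt=0.01):
        # Running integral on a fixed sampling grid -- a toy stand-in for the
        # kind of stateful temporal operator a real system would need.
        return lambda t: sum(sig(i * dt) for i in range(int(t / dt))) * dt

    # A vibrato tone: a sine wave whose frequency is itself time-varying.
    freq = lift2(lambda base, wobble: base + wobble,
                 constant(440.0),
                 lambda t: 5.0 * math.sin(2 * math.pi * 6.0 * t))
    tone = lambda t: math.sin(2 * math.pi * integrate(freq)(t))
    echo = lift2(lambda x, y: x + 0.5 * y, tone, delay(0.1, tone))
    print(tone(0.25), echo(0.25))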


Verification of Timed Systems using Partial Order Reduction

Friday, March 12th, 1999 from 12-1 pm in WeH 4601.

Presented by Marius Minea.

The demand for tools that check the correctness of timed systems is increasing as these systems become more widespread, from safety-critical applications to mass consumer products. However, the state space of concurrent systems grows exponentially with the number of components, a fact known as the state space explosion problem. Timing conditions significantly increase this complexity, often exceeding the current capabilities of verification tools.

We show how to alleviate this problem by using partial order reduction, a technique which has been successfully applied to the verification of untimed asynchronous systems. This method explores only a subset of representative trajectories in the state space, while preserving the verified property. We apply this technique to timed automata, a formalism based on state-transition graphs augmented with continuous-time clocks. Specifications are given in an extension of linear temporal logic, which can express timing relationships between events by specifying bounds on clock differences. We show how to extend a local time semantics for timed automata in order to preserve properties in the selected logic. The resulting model checking algorithm achieves reduction in both the control state space and the time space of the model.
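
As a rough illustration of the untimed idea behind partial order reduction (the talk's contribution is its extension to timed automata), a search can expand only a small "ample" subset of the enabled transitions when those transitions are independent of all others, and still reach a representative interleaving of every behavior. The toy model below, with fully independent counting processes, is hypothetical.

    # Rough sketch of the untimed idea: when a process's enabled transitions
    # are independent of everything else, it suffices to explore that process's
    # moves and postpone the rest.  Here N processes each just count to K, so
    # all transitions commute and one representative interleaving suffices.
    N, K = 3, 4

    def enabled(state):
        return [i for i in range(N) if state[i] < K]

    def fire(state, i):
        successor = list(state)
        successor[i] += 1
        return tuple(successor)

    def explore(reduced):
        seen, stack = set(), [tuple([0] * N)]
        while stack:
            s = stack.pop()
            if s in seen:
                continue
            seen.add(s)
            moves = enabled(s)
            if reduced and moves:
                moves = moves[:1]         # "ample" set: one independent move
            stack.extend(fire(s, i) for i in moves)
        return len(seen)

    print(explore(reduced=False))   # (K + 1) ** N = 125 interleaved states
    print(explore(reduced=True))    # N * K + 1 = 13 states on one representative path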


Neural substrates of stimulus memory: A connectionist model of effects of lesions in perirhinal cortex

Friday, April 2nd, 1999 from 12-1 pm in WeH 4601.

Presented by Lisa Saksida.

I will begin this talk with an outline of issues relating to the neural substrates of memory, focusing particularly on processes of object recognition and identification. This will include a discussion of recent studies which have found that "rhinal" cortex makes a pivotal contribution to memory, and that the role of certain medial temporal lobe structures has been overemphasized. Finally, I will discuss a neural network model of the contribution of rhinal cortex to stimulus memory which seems to account for a puzzling pattern of effects observed following lesions of rhinal cortex.


The CMU Reconfigurable Computing Project

Friday, April 9th, 1999 from 12-1 pm in WeH 4601.

Presented by Mihai Budiu.

Reconfigurable hardware can change its functionality dynamically. This gives the user tremendous flexibility in implementing algorithms directly in hardware, at performance close to that achievable by custom integrated circuits, and often surpassing that of CPUs (or DSPs).

In this talk I will give an introduction to the area of reconfigurable circuits, indicating the trade-offs available to the designer. I will briefly survey the current research and commercial activity in this domain. I will focus on the CMU reconfigurable computing project, which is developing a custom hardware device (PipeRench) and a suite of compilation tools that can dramatically boost the performance of certain classes of applications, including stream-processing applications (e.g., media processing), making PipeRench a promising architecture for future desktop systems.


A Tool for Systematic Program Evolution and its Object-Oriented Effects System

Friday, April 16th, 1999 from 12-1 pm in WeH 4623.

Presented by Aaron Greenhouse.

To reorganize the source code of a program (for example, by abstracting a superclass for two similar classes), a programmer must understand how the code will be affected by the change. Even a simple reordering of code or movement of a declaration can have unintended effects. Programmers struggle to assure that these bureaucratic changes, often necessary for real functional change, are correct. But as a program ages and becomes more interconnected and harder to understand, it also becomes more difficult for the programmer to be sure of the soundness of a structural change, and therefore programs become brittle and hard to evolve. The Fluid/ACT project is developing a tool for semantics-based manipulation of Java programs, supported by analysis, annotations, and design exploration. The tool allows programmers to attempt a change, understand the impact of the change on the behavior of the program, and record the details of the change in an auditable design record.

The tool performs analyses to assure the soundness of changes. In contrast with a compiler, which does analyses to identify possible manipulations that can be done opportunistically, our tool is goal-directed: the programmer dictates what changes should be attempted, and ultimately these changes should be successful. The analyses used by our tool, therefore, may work harder to return useful information, for example by exploiting program annotations and the results of other analyses. Our approach also differs from whole-program analyses in software engineering tools, in that we can use annotations as surrogates for missing components. To reorder code (which is a consequence of many manipulations), we rely on an effects analysis. Such an analysis determines how program state may be accessed during the execution of the program, and its results can be used to determine data dependencies between expressions.
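
A tiny sketch of the role effects play in reordering, illustrative only and far simpler than the Java analysis described: each statement is summarized by the abstract regions it reads and writes, and two statements may be swapped only when neither one's writes overlap the other's reads or writes. The region names below are hypothetical.

    # Toy effects check for reordering (region names are hypothetical).
    class Effects:
        def __init__(self, reads=(), writes=()):
            self.reads, self.writes = set(reads), set(writes)

    def can_reorder(a, b):
        # Safe to swap only if neither statement writes state that the other
        # reads or writes.
        return not (a.writes & (b.reads | b.writes) or
                    b.writes & (a.reads | a.writes))

    init_field  = Effects(writes={'this.name'})
    log_message = Effects(reads={'Logger.out'}, writes={'Logger.out'})
    read_field  = Effects(reads={'this.name'})

    print(can_reorder(init_field, log_message))   # True: disjoint regions
    print(can_reorder(init_field, read_field))    # False: write/read dependence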

We have developed a novel effects system for object-oriented languages, which expresses effects in a way that preserves the information hiding provided by Java.


Athena: a New Efficient Automatic Checker for Security Protocol Analysis

Friday, April 30th, 1999 from 12-1 pm in WeH 4623.

Presented by Dawn Song.

We propose an efficient automatic checking algorithm, Athena, for analyzing security protocols. Athena incorporates a logic that can express security properties including authentication, secrecy, and properties related to electronic commerce. We have developed an automatic procedure for evaluating well-formed formulae in this logic. For a well-formed formula, if the evaluation procedure terminates, it will generate a counterexample if the formula is false, or provide a proof if the formula is true. Even when the procedure does not terminate for arbitrary configurations of the protocol execution (for example, any number of initiators and responders), termination can be forced by bounding the number of concurrent protocol runs and the length of messages, as is done in most existing model checkers.

Athena also exploits several state space reduction techniques. It is based on an extension of the recently proposed Strand Space Model, which captures exact causal relation information. Together with backward search and other techniques, Athena naturally avoids the state space explosion problem commonly caused by asynchronous composition and symmetry redundancy. Athena also has the advantage that it can easily incorporate results from theorem proving through unreachability theorems. By using the unreachability theorems, it can prune the state space at an early stage, thus reducing the state space explored and increasing the likelihood of termination. As shown in our experiments, these techniques dramatically reduce the state space that needs to be explored.
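
As a generic illustration of backward search with pruning (not the strand-space algorithm itself), one can start from a "bad" goal state, search backwards toward an initial state, and discard any state that a side theorem shows to be unreachable. The toy transition system below is hypothetical.

    # Generic backward search with pruning (not the strand-space algorithm):
    # start at a "bad" goal state, walk predecessor states, and drop any state
    # that a side theorem marks as unreachable.
    def backward_search(goal, predecessors, is_initial, provably_unreachable):
        frontier, seen = [goal], {goal}
        while frontier:
            s = frontier.pop()
            if is_initial(s):
                return s                  # the bad state is reachable from an initial state
            for p in predecessors(s):
                if p in seen or provably_unreachable(p):
                    continue              # pruning cuts the search space early
                seen.add(p)
                frontier.append(p)
        return None                       # no path back to an initial state

    # Hypothetical toy system: states are integers, a forward step either
    # doubles the state or adds 3, and we ask whether 1 can ever reach 22.
    preds = lambda s: ([s // 2] if s % 2 == 0 else []) + ([s - 3] if s > 3 else [])
    print(backward_search(22, preds, lambda s: s == 1, lambda s: s <= 0))   # 1: reachable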


Web contact: sss+www@cs