SSS Abstracts
Fall 1998

The Quest for the Holy Grail: Scalable, Deep, Useful Program Analysis

Friday, September 4th, 1998 from 12-1 pm in WeH 4601.

Presented by Rob O'Callahan.

One of the key difficulties encountered in building large software systems is that people just can't keep enough information in their heads. One way to address this problem is to delegate detail-oriented tasks to computers. Unfortunately, this approach is crippled by the fact that machines generally have a very shallow understanding of programs. In this talk I will explain precisely what this means and why it is so --- and why it remains so in spite of decades of research. I will describe some of the mistakes of the past, and how those lessons are reflected in the design and implementation of the Ajax program analysis system, my thesis work.


Simplifying Triangulated Surfaces in 1 Minute or Less

Friday, September 18th, 1998 from 12-1 pm in WeH 4601.

Presented by Michael Garland.

Numerous applications in computer graphics and related fields require complex, highly detailed geometric surface models. However, the level of detail actually necessary may vary considerably at run time depending on how the surface is being viewed. For instance, objects which are far away, and consequently take up fewer pixels on the screen, require less detail than objects which are nearby. To control processing time, it is often desirable to substitute approximate surfaces which have only the amount of detail necessary and no more.

In this talk, I'll describe our surface simplification algorithm which can rapidly produce high quality approximations of polygonal surface models. I'll also outline some of the potential applications.

More details, the relevant papers, and my experimental software can be found at http://www.cs.cmu.edu/~garland/quadrics/.
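
For readers who want a feel for the method, the core of the quadric-based approach can be sketched in a few lines of Python. This is a minimal illustration under my own naming, not the experimental software itself: each vertex accumulates a 4x4 quadric summarizing the planes of its incident triangles, and the cost of collapsing an edge is the quadric error at the merged vertex.

    import numpy as np

    def plane_quadric(a, b, c):
        """Fundamental quadric K = p p^T for the plane through triangle (a, b, c)."""
        n = np.cross(b - a, c - a)
        n = n / np.linalg.norm(n)          # assumes a non-degenerate triangle
        p = np.append(n, -np.dot(n, a))    # plane as [nx, ny, nz, d]
        return np.outer(p, p)              # 4x4 quadric

    def vertex_quadrics(verts, faces):
        """Each vertex accumulates the quadrics of its incident triangles."""
        Q = [np.zeros((4, 4)) for _ in verts]
        for i, j, k in faces:
            K = plane_quadric(verts[i], verts[j], verts[k])
            Q[i] += K; Q[j] += K; Q[k] += K
        return Q

    def collapse_cost(Qi, Qj, v):
        """Squared-distance error of merging an edge's endpoints at position v."""
        h = np.append(v, 1.0)
        return h @ (Qi + Qj) @ h

A greedy simplifier then repeatedly collapses the cheapest edge, keeping candidates in a priority queue; the full algorithm also solves a small linear system for the optimal merged position rather than just testing the endpoints and the midpoint.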


CM-Trio Training Camp: The inside story...

Friday, October 2nd, 1998 from 12-1 pm in WeH 4601.

Presented by Will Uther.

This year the CMU RoboSoccer project entered 3 of the 4 RoboCup'98 competitions. We won all three. This talk will describe the Legged Robot team. I'll give a brief description of the hardware platform, the Sony OPEN-R pet robot, and then a more technical description of the software architecture. There were two research areas that were particularly important for our victory. The first was machine learning for colour segmentation, which gave us significantly more reliable vision than other teams. The second was a Bayesian localization system that allowed our robots to know the direction to the goal at all times. There will also be a brief demonstration of one of the robots chasing the ball.
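
The abstract gives no implementation details, so the following is only a generic sketch of the idea, not the team's code: a discrete Bayes filter that tracks a belief over the robot's heading, shifting it on each commanded turn and reweighting it whenever a known landmark (such as the goal) is sighted.

    import numpy as np

    N = 36                           # heading discretized into 10-degree bins
    belief = np.full(N, 1.0 / N)     # start out completely uncertain

    def predict(belief, turn_bins, noise=0.1):
        """Motion update: shift the belief by the commanded turn, then blur."""
        b = np.roll(belief, turn_bins)
        b = (1 - 2 * noise) * b + noise * (np.roll(b, 1) + np.roll(b, -1))
        return b / b.sum()

    def update(belief, likelihood):
        """Sensor update: reweight by P(observation | heading), bin by bin."""
        b = belief * likelihood
        return b / b.sum()

After each update, the most probable bin gives the estimated heading, from which the direction to the goal follows.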


A Taxonomy of Modifications to Connectors

Friday, October 16th, 1998 from 12-1 pm in WeH 4601.

Presented by Bridget Spitznagel.

Increasingly, complex software systems are being constructed as compositions of reusable software components. One critical issue for such constructions is the design and implementation of the interaction mechanisms, or connectors, that permit the various software components to work together properly. Complex systems also have a number of non-functional requirements, which the system's software architecture, including its connectors, must take into account.

Writing these connectors "from scratch" is time-consuming and difficult, and it is not always possible to find an existing connector that satisfies the requirements of the system. Sometimes a connector is instead augmented or enhanced by hand, so that the resulting modified connector meets the system's functional and non-functional needs. We would like to be able to do this in a principled, compositional way, so that complicated connectors can be generated cheaply to suit the needs of a system.

We begin by viewing existing connectors in these terms and organizing the space of possible modifications. In this talk I will describe a preliminary taxonomy of modifications, including examples from the area of distributed systems. Audience participation is encouraged.
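
As a toy illustration of what "principled, compositional" modification might look like (my own example, not from the talk), one can picture each modification as a wrapper stacked around a basic call connector:

    import time

    def handle(request):
        """Stub for the component servicing the request; a real system
        would dispatch over a network or through an ORB."""
        return ("ok", request)

    def basic_call(request):
        """A minimal procedure-call connector."""
        return handle(request)

    def with_retry(connector, attempts=3, delay=0.5):
        """Modification: add fault tolerance by retrying failed interactions."""
        def wrapped(request):
            last_error = None
            for _ in range(attempts):
                try:
                    return connector(request)
                except IOError as e:
                    last_error = e
                    time.sleep(delay)
            raise last_error
        return wrapped

    def with_logging(connector, log):
        """Modification: record each interaction for auditing."""
        def wrapped(request):
            log.append(request)
            return connector(request)
        return wrapped

    # Modifications compose: a logged, retrying call connector.
    log = []
    connector = with_logging(with_retry(basic_call), log)
    print(connector("lookup user 42"))   # ('ok', 'lookup user 42')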


Learning to Extract Information from Messy Text

Friday, October 23rd, 1998 from 12-1 pm in WeH 4601.

Presented by Dayne Freitag.

Suppose we want a system that monitors the newsgroup cmu.misc.market.computers waiting for a computer that matches our specifications to be posted for sale. To do this our system needs to map each document it will encounter there to a structured representation of its essential contents, something like {Type="Pentium", Clockspeed="200 MHz.", RAM="64 MB", ...}. The contents of each field in this record can be a fragment of text taken verbatim from the document. This is the problem of information extraction (IE).
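
As a concrete (if naive) illustration of such a mapping, a hand-written extractor might look like the sketch below; the field names and patterns are my own. As the rest of the abstract argues, the goal is to learn such extractors from sample documents rather than write them by hand.

    import re

    PATTERNS = {
        "Type":       re.compile(r"\b(Pentium(?:\s*II)?|486)\b", re.I),
        "Clockspeed": re.compile(r"\b(\d+\s*MHz)\b", re.I),
        "RAM":        re.compile(r"\b(\d+\s*MB)\b(?=\s*(?:of\s*)?RAM)", re.I),
    }

    def extract(document):
        """Map a posting to a field -> verbatim-text-fragment record."""
        record = {}
        for field, pattern in PATTERNS.items():
            m = pattern.search(document)
            if m:
                record[field] = m.group(1)
        return record

    post = "FS: Pentium II, 200 MHz, 64 MB RAM, asking $600 obo."
    print(extract(post))
    # {'Type': 'Pentium II', 'Clockspeed': '200 MHz', 'RAM': '64 MB'}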

IE has been studied for over a decade now, but the primary emphasis has been on document collections characterized by well-formed prose (e.g., newswire articles) and hand-tuned natural language processing solutions. Such solutions may not work for the newsgroup-monitoring problem in which documents as often as not consist of "messy," syntactically unparsable text with lots of out-of-lexicon terms. And full-blown natural language processing may not even be necessary. What's more, rather than cook up a program for this problem over weeks or months, I would like a system I can train on some sample documents and expect to do a reasonable job of extracting on new ones, a system that will require a minimum of manual tuning as we move from problem to problem.

My thesis asks: What sorts of machine learning algorithms are suitable? What kinds of information might a learner exploit in such a domain? Is there a way to combine heterogeneous learners to get even better extraction performance? My talk will present some answers I've dug up in the course of this work.


Connectionist Modeling of Sentence Comprehension and Production

Friday, October 30th, 1998 from 12-1 pm in WeH 4601.

Presented by Doug Rohde.

Linguists in the "generative-grammar" tradition have long accepted that language could not be learned by a general mechanism and that humans' almost-universal ability to acquire language in childhood must be due to an elaborate, innate body of language-specific information or constraints. However, there is a growing movement in psycholinguistics, and particularly among connectionists, to return to a view that language could, in fact, be learned by a rather general system when exposed to the proper inputs. But such claims require support from actual demonstrations of systems that clearly are able to learn naturalistic languages, and behave similarly to humans, without relying on undue constraints. While a number of successful connectionist models of word-level recognition, reading, and pronunciation have been constructed, there has been relatively little work on the two "fundamental language tasks", sentence comprehension and production. In this talk I will present some initial results in an effort to construct an integrated sentence comprehension and production neural network and discuss some of the questions we hope to address with this model.
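
The abstract does not name an architecture, but a common starting point in this line of work is an Elman-style simple recurrent network, whose forward pass is easy to sketch (my own notation, not the model from the talk): the hidden layer receives both the current word and a copy of its own previous state, letting the network carry sentence context across time.

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def srn_step(x, h_prev, Wxh, Whh, Why, bh, by):
        """One time step of a simple recurrent network.
        x: current input word (one-hot); h_prev: hidden state from last step."""
        h = np.tanh(Wxh @ x + Whh @ h_prev + bh)   # context-carrying hidden layer
        y = softmax(Why @ h + by)                  # e.g., predicted next word,
        return h, y                                # or a semantic output layer

For comprehension the output layer would encode meaning given a word sequence; for production, the mapping runs the other way, from an intended message to a word sequence.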


Monte Carlo Simulations of Proteins

Friday, November 13th, 1998 from 12-1 pm in WeH 4601.

Presented by Marc Fasnacht.

During the past two decades, computer simulations have become a very important tool in biochemistry. The complex structure and tiny size of proteins make them very difficult to analyze and understand using experimental techniques such as X-ray diffraction or nuclear magnetic resonance. Simulations, on the other hand, allow us to address such questions at the atomic level and can provide detailed information about the structure and function of the molecules.

Molecular dynamics has traditionally been the method of choice for the simulation of biological molecules. However, Monte Carlo methods offer significant advantages for certain types of calculations. In this talk I will discuss how Monte Carlo methods can be used and optimized for the simulation of proteins.
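
For context, the engine of such a simulation is the standard Metropolis acceptance rule: perturb the conformation, always accept energy-lowering moves, and accept uphill moves with probability exp(-dE/kT). The sketch below is generic; the talk concerns protein-specific move sets and optimizations.

    import math, random

    def metropolis_step(conf, energy, propose, kT):
        """One Metropolis Monte Carlo step.
        conf: current conformation; energy(conf) -> float;
        propose(conf) -> trial conformation, e.g. a torsion-angle move."""
        trial = propose(conf)
        dE = energy(trial) - energy(conf)
        if dE <= 0 or random.random() < math.exp(-dE / kT):
            return trial    # accept: downhill always, uphill sometimes
        return conf         # reject: keep the current conformation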


Web contact: sss+www@cs