SSS Abstracts
Fall 2000

Improving Trigram Language Modeling with the World Wide Web

Friday, September 29th, 2000 from 12-1 pm in WeH 4601.

Presented by Jerry Zhu.

Statistical language modeling requires a large amount of training text. We propose a novel method for acquiring trigram estimates from the World Wide Web. We submit an N-gram as a phrase query to web search engines; the engines return the number of web pages containing the phrase, from which the N-gram count is estimated. From these counts we form web trigram estimates. We discuss the properties of such web estimates and methods for interpolating them with traditional corpus-based trigram estimates. We show that the interpolated models significantly improve perplexity and speech recognition word error rate on a small test set.
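
As a concrete illustration of the count-based estimate and the interpolation step, here is a small Python sketch. The hit counts, the interpolation weight, and the fallback to 0.0 are invented for the example; real search-engine hit counts would replace the `page_counts` dictionary:

```python
# Sketch: estimate a trigram probability from phrase-query hit counts and
# interpolate it with a corpus-based estimate. All numbers are made up.

def web_trigram_prob(w1, w2, w3, page_counts):
    """Estimate P(w3 | w1, w2) as count("w1 w2 w3") / count("w1 w2")."""
    tri = page_counts.get(f"{w1} {w2} {w3}", 0)
    bi = page_counts.get(f"{w1} {w2}", 0)
    return tri / bi if bi else 0.0

def interpolate(p_corpus, p_web, lam=0.7):
    """Linear interpolation of corpus and web estimates."""
    return lam * p_corpus + (1 - lam) * p_web

# Toy hit counts, standing in for what a search engine would report:
page_counts = {"new york city": 120_000, "new york": 2_400_000}
p_web = web_trigram_prob("new", "york", "city", page_counts)  # 0.05
p = interpolate(p_corpus=0.03, p_web=p_web)
```

The interpolation weight would in practice be tuned on held-out data rather than fixed.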

Multiagent Reinforcement Learning using a Variable Learning Rate

Friday, October 20th, 2000 from 12-1 pm in NSH 1507.

Presented by Michael Bowling.

Multiagent learning is a difficult problem because the usual definition of an optimal policy goes "out the window": the optimal policy at any moment depends on the policies of the other agents, creating a moving target. Previous learning algorithms suffer one of two shortcomings, depending on their approach: either they converge to a policy that may not be optimal against the specific opponents' policies, or they may not converge at all. I'll present a reinforcement learning technique that uses a variable learning rate to overcome these shortcomings. I'll present empirical results showing convergence to the game's equilibrium, and a new theoretical result proving convergence of this technique when applied to gradient descent in a restricted class of games. This talk will also, for the first time, publicly unveil (with much fanfare) the new name of this technique.
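
The variable-learning-rate idea can be sketched with a "win or learn fast" rule: take small gradient steps while winning and larger ones while losing. Below is a minimal, illustrative Python sketch on matching pennies, not necessarily the talk's exact algorithm; the learning rates, the payoff matrix, and the use of the equilibrium payoff as the winning test are all assumptions made for the example:

```python
# "Win or learn fast" gradient ascent on matching pennies (illustrative only).
# x, y: probabilities that players 1 and 2 play Heads. Player 1 wins on a
# match, player 2 on a mismatch; the unique equilibrium is x = y = 0.5.

def wolf_matching_pennies(x=0.9, y=0.2, lr_win=0.0025, lr_lose=0.01,
                          steps=50_000):
    for _ in range(steps):
        v1 = 4 * x * y - 2 * x - 2 * y + 1   # player 1's expected payoff
        g1 = 4 * y - 2                       # dV1/dx
        g2 = 2 - 4 * x                       # dV2/dy, since V2 = -V1
        # The equilibrium strategy (0.5) earns exactly 0 against any opponent,
        # so "winning" here means a positive expected payoff.
        lr1 = lr_win if v1 > 0 else lr_lose
        lr2 = lr_win if -v1 > 0 else lr_lose
        x = min(1.0, max(0.0, x + lr1 * g1))
        y = min(1.0, max(0.0, y + lr2 * g2))
    return x, y

x, y = wolf_matching_pennies()
```

With the slow/fast ratio of 4 used here, the joint policy spirals in toward the mixed equilibrium (0.5, 0.5) instead of cycling, which is the failure mode a fixed learning rate exhibits on this game.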

Reducing Web Latency with Hierarchical, Cache-based Prefetching

Friday, October 27th, 2000 from 12-1 pm in NSH 1507.

Presented by Dennis Strelow.

Proxy caches have become a central mechanism for reducing the latency of web document retrieval. While caching alone reduces latency for previously requested documents, web document prefetching could mask latency for previously unseen, but correctly predicted requests. We have designed and prototyped a prefetching algorithm suitable for use in a network of hierarchical web caches such as the National Laboratory for Applied Network Research's Global Caching Hierarchy of Squid caches. This algorithm observes requests to a cache and its ancestors, and initiates prefetching for predicted future requests if prefetching is likely to reduce the overall latency seen by the cache's clients.
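
As a generic illustration of request-driven prediction (not the talk's actual algorithm), a first-order predictor can watch the request stream, count which document tends to follow which, and suggest a prefetch only when one successor clearly dominates. The URLs and the 0.5 confidence threshold below are invented for the example:

```python
# Toy first-order request predictor for prefetching (illustrative sketch).
from collections import defaultdict

class Prefetcher:
    def __init__(self, threshold=0.5):
        # successors[a][b] = number of times request b followed request a
        self.successors = defaultdict(lambda: defaultdict(int))
        self.last = None
        self.threshold = threshold

    def observe(self, url):
        """Record a request seen by the cache."""
        if self.last is not None:
            self.successors[self.last][url] += 1
        self.last = url

    def predict(self, url):
        """Suggest a prefetch after `url`, or None if no successor dominates."""
        counts = self.successors[url]
        total = sum(counts.values())
        if not total:
            return None
        best, n = max(counts.items(), key=lambda kv: kv[1])
        return best if n / total >= self.threshold else None

p = Prefetcher()
for url in ["/index", "/news", "/index", "/news", "/index", "/sports"]:
    p.observe(url)
# "/news" followed "/index" 2 of 3 times, so it would be prefetched.
```

A real deployment would also weigh the predicted latency savings against the bandwidth cost of wrong prefetches, which is the trade-off the abstract's "likely to reduce the overall latency" condition captures.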

This talk is in partial fulfillment of the speaking requirement.

Control of Rigid Body Simulations in Computer Animation

Friday, November 3rd, 2000 from 12-1 pm in NSH 1507.

Presented by Jovan Popovic.

In this talk, I will propose a system that uses a new set of techniques to make computer animation a more accessible and intuitive form of expression. I will focus on animating the motion of rigid objects. Physical simulation has become the preferred method for solving this problem because it produces highly realistic animations. It allows the animator to avoid the skillful but tedious hand drawing required by traditional animation techniques. Instead, she provides a few physical parameters, such as the objects' initial positions and velocities, and the simulator automatically generates realistic motions. The drawback is that the resulting motion is difficult to control because even a small adjustment of the input parameters can drastically affect the look of the subsequent motion as well as its outcome. For example, small differences in the way a die is thrown can make the motion of the die less interesting as well as change the final outcome of the roll.

I will describe two approaches to removing this drawback. The first is an interactive technique for intuitive editing of multi-body simulations. Our system allows the animator to select bodies at any time and simply drag them to desired locations. In response, the system interactively computes the required physical parameters and simulates the resulting motion. The second is a new off-line technique that constructs a physically realistic motion that matches the animator's rough sketch.
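
The first approach is, at heart, an inverse problem: find simulation inputs that produce a desired output. A drastically simplified Python sketch solves a one-dimensional version, using bisection on the launch speed of a point mass so that its simulated landing point matches a target position; the 45-degree launch angle and all numbers are invented for the example:

```python
# Toy inverse problem: choose an initial speed so that a simulated point mass
# lands at a desired spot (illustrative sketch, far simpler than multi-body
# rigid dynamics).
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_x(v0):
    """Horizontal landing distance of a point mass launched at 45 degrees,
    simulated with small Euler steps to mimic a physical simulator."""
    vx = vy = v0 / math.sqrt(2)
    x = y = 0.0
    dt = 1e-4
    while True:
        x += vx * dt
        vy -= G * dt
        y += vy * dt
        if y < 0:
            return x

def solve_speed(target_x, lo=0.0, hi=100.0, iters=40):
    """Bisection: find the launch speed whose simulated landing hits target_x."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if landing_x(mid) < target_x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

v = solve_speed(10.0)  # speed that lands the mass about 10 m away
```

Bisection works here because landing distance grows monotonically with launch speed; multi-body simulations lack such simple monotone structure, which is part of what makes the interactive version of this problem hard.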

Evolutionary Game Theory: A Brief Tutorial and Some Results

Friday, November 17th, 2000 from 12-1 pm in NSH 1507.

Presented by Karin Sigurd.

In this talk, we will give a tutorial on evolutionary game theory and give examples of how it can be applied to different types of non-zero-sum games. We will show examples of convergence to the particular refinement of the Nash equilibrium concept known as the ESS (evolutionarily stable strategy) and will show how to cope with game players that have particular preferences or limitations. We will discuss the statistics of the equilibrium distributions of individual players and show how these distributions can be influenced by manipulating the learning parameters. Finally, we will show an example of a signaling game in which cheap talk plays a role in the behavior to which the system converges.
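
Convergence to an ESS can be illustrated with the replicator dynamics on the classic Hawk-Dove game; the payoff values V = 2 and C = 4 below are invented for the example and need not match the games in the talk:

```python
# Replicator dynamics for Hawk-Dove with resource value V=2, fight cost C=4.
# Payoffs: Hawk vs Hawk = (V-C)/2 = -1, Hawk vs Dove = V = 2,
#          Dove vs Hawk = 0,            Dove vs Dove = V/2 = 1.
# The ESS plays Hawk with probability V/C = 0.5.

def replicator_hawk_share(x=0.1, dt=0.1, steps=2000):
    """Evolve the population share x of Hawks by Euler steps of the
    replicator equation dx/dt = x * (f_hawk - f_avg)."""
    for _ in range(steps):
        f_hawk = -1 * x + 2 * (1 - x)    # expected fitness of a Hawk
        f_dove = 0 * x + 1 * (1 - x)     # expected fitness of a Dove
        f_avg = x * f_hawk + (1 - x) * f_dove
        x += dt * x * (f_hawk - f_avg)
    return x

x = replicator_hawk_share()
```

Starting from a population of 10% Hawks, the Hawk share climbs monotonically to the ESS mixture V/C = 0.5 and stays there.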

Using Thumbnails to Search the Web

Friday, December 1st, 2000 from 12-1 pm in NSH 1507.

Presented by Andrew Faulring.

Internet users spend a significant amount of time examining search engine results. The user must page through lists of Web documents, briefly evaluating each for possible relevance to a particular information need. Improving the efficiency of this tedious process directly benefits the end-user.

I will introduce a technique for creating novel, textually-enhanced thumbnails of web pages. These thumbnails combine the advantages of image thumbnails and text summaries to provide consistent performance on a variety of tasks.

In collaboration with others, I have performed a quantitative comparative study of textual and graphical summarization mechanisms applied to search engine results. Participants used three different types of summaries (enhanced thumbnails, plain thumbnails, and text summaries) to search web pages to find several different types of information. Participants took an average of 83 seconds to find the answer to a question. They were approximately 30 seconds faster with enhanced thumbnails than with text summaries, and 19 seconds faster with enhanced thumbnails than with plain thumbnails.

Based upon this data, I argue that graphical summaries of documents - thumbnail images - can greatly increase the efficiency with which end-users process search engine result sets. For example, thumbnails allow users to classify a Web page's genre very rapidly. Most interestingly, our empirical results suggest that, if properly designed, textually-enhanced thumbnails deliver the efficiency benefits of both textual summaries and unenhanced thumbnails.

This work was performed at Xerox PARC during the previous summer.

The Data Locality of Work Stealing

Friday, December 8th, 2000 from 12-1 pm in NSH 1507.

Presented by Umut Acar.

As the gap between processor and memory speeds widens, data locality is becoming more and more critical in obtaining good performance from modern multiprocessors.

In this talk, I present lower and upper bounds on the data locality of multithreaded programs whose threads are scheduled with the work-stealing algorithm. Work stealing is a simple, randomized, and widely used algorithm for thread scheduling. In our analysis, we measure data locality by the number of cache misses: the fewer the cache misses, the better the locality. We consider various cache organizations and replacement policies, such as direct-mapped caches and the LRU replacement policy.

For the lower bound, I show a computation that incurs only a constant number of cache misses when executed on a uniprocessor, yet at least 1/8 of all its instructions cause a cache miss when the same computation is executed on a multiprocessor, even with just 2 processors. I generalize the computation to show an asymptotic lower bound. For the upper bound, I show that work stealing delivers good data locality for a large class of applications: nested-parallel applications. Based on the upper bound, I give strong execution-time bounds for nested-parallel computations. Finally, I present experimental results showing that the data locality of work stealing can be further improved by associating affinities with threads. An affinity is a hint as to which processor should execute a thread; this idea is employed in most modern operating systems for process scheduling, where it is known as affinity scheduling.
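
The affinity idea can be illustrated with a toy scheduling simulation; the task and block layout, cache size, and processor count below are all invented for the example, and the talk's model and bounds are far more precise. Tasks sharing a data block either run wherever random placement puts them or on the processor their affinity names, and each processor's small LRU cache counts the misses:

```python
# Toy comparison: random task placement vs. affinity-based placement.
# 100 tasks each touch one of 10 data blocks; two processors each keep a
# 2-entry LRU cache of blocks. (Illustrative only, not the talk's model.)
import random
from collections import OrderedDict

def run(schedule, num_procs=2, cache_size=2):
    """Count cache misses for a list of (processor, block) task placements."""
    caches = [OrderedDict() for _ in range(num_procs)]
    misses = 0
    for proc, block in schedule:
        cache = caches[proc]
        if block in cache:
            cache.move_to_end(block)     # hit: refresh LRU order
        else:
            misses += 1                  # miss: load the block
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)
    return misses

tasks = [i // 10 for i in range(100)]    # task i touches block i // 10

# Affinity: every task touching a block runs on that block's home processor.
affinity = [(block % 2, block) for block in tasks]

# Random: each task runs wherever a steal happens to place it.
rng = random.Random(0)
rand = [(rng.randrange(2), block) for block in tasks]

a, r = run(affinity), run(rand)
```

With affinity, each block is loaded exactly once into a single processor's cache (10 misses in total here); random placement spreads each block's tasks over both caches and roughly doubles the miss count.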

This is joint work with Guy Blelloch and Robert Blumofe.

This talk is in partial fulfillment of the speaking requirement.

What is Category Theory?

Friday, December 15th, 2000 from 12-1 pm in NSH 1507.

Presented by Andrej Bauer.

No prior knowledge of formal logic or category theory will be required to understand this talk--only willingness to think about things that are not related to Java and startup possibilities.

Category theory is an abstract branch of mathematics that generalizes and relates ideas from algebra, geometry, topology, and logic. I will explain the main concepts and ideas of category theory from the logical and computational perspective.

Categories may be understood as collections of generalized sets and functions. I will motivate this view by analyzing how the usual concept of sets as static collections of elements can be an unsuitable mathematical framework for dealing with problems and concepts from everyday life and computer science. Concrete examples and category-theoretic solutions will be suggested.

Web contact: sss+www@cs