From: IN%"peter@Nexus.YorkU.CA" "Peter Roosen-Runge" 13-MAY-1990 21:10:17.47
To: cs100006@YUSol
CC:
Subj:

Return-path: peter@Nexus.YorkU.CA
Received: from JNET-DAEMON by YUSol; Sun, 13 May 90 21:10 EDT
Received: From YORKVM1(MAILER) by YUSOL with Jnet id 9730 for CS100006@YUSOL; Sun, 13 May 90 21:10 EDT
Received: from YUOrion by VM1.YORKU.CA (Mailer R2.05) with BSMTP id 2847; Sun, 13 May 90 21:07:06 EDT
Received: from nexus.yorku.ca by Orion.YorkU.CA; Sun, 13 May 90 12:36 EDT
Received: by nexus.yorku.ca id 6345; Sun, 13 May 90 12:21:57 EDT
Date: Sun, 13 May 90 12:21:47 EDT
From: Peter Roosen-Runge
To: cs100006@YUSol
Message-Id: <90May13.122157edt.6345@nexus.yorku.ca>

Path: yunexus!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!clyde.concordia.ca!uunet!tut.cis.ohio-state.edu!ucbvax!DEPT.CSCI.UNT.EDU!leff
From: leff@DEPT.CSCI.UNT.EDU ("Dr. Laurence L. Leff")
Newsgroups: comp.doc.techreports
Subject: tr-input/rochester
Message-ID: <9005122105.AA28928@ucbvax.Berkeley.EDU>
Date: 12 May 90 17:03:51 GMT
Article-I.D.: ucbvax.9005122105.AA28928
Posted: Sat May 12 13:03:51 1990
Sender: daemon@ucbvax.BERKELEY.EDU
Organization: University of North Texas in Denton
Lines: 587
Approved: trlist@smu.edu

The University of Rochester
Computer Science Department
January 1990 Technical Report Abstract List

Below are the abstracts of our recent technical reports. There is a nominal charge for each report, with prepayment required. The fee is waived for libraries in non-profit, educational institutions and for institutions with which we have exchange agreements. To order, send electronic mail to tr@cs.rochester.edu or write to the TR Librarian, Computer Science Dept., Univ. of Rochester, Rochester, NY, 14627, USA. Checks should be made payable to the University of Rochester.

%A B.W. Miller
%T The RHET Plan Recognition System (Version 1.0)
%R TR 298
%I Univ. of Rochester Computer Science Dept.
%D Jan.
1990
%Z 50 pages, $2.00
%X RPRS is a hierarchical plan recognition system built within the RHET knowledge representation system. It provides a powerful system for plan recognition, combining the algorithms of Kautz with the general reasoning capabilities of RHET. RPRS takes special advantage of Rhet's type relations, constraints, equality and contextual reasoning abilities. It is also intended as a demonstration of the Rhet programming and knowledge representation system's hybrid reasoning capabilities. Utilizing the Lisp interface to Rhet, RPRS allows the user to use the Rhet structured type system to build plan types and, given some observation or set of observations, have Rhet derive the set of plans that are consistent with these observations. Since RPRS includes the TEMPOS specialized reasoner for Rhet, steps and observations can refer to time-intervals and/or be temporally constrained with respect to one another.
%A C.M. Brown
%T Some computational properties of rotation representations
%R TR 303
%I Univ. of Rochester Computer Science Dept.
%D Aug. 1989
%Z 26 pages, $1.50
%X There are four main parameterizations of the rotation group SO(3). Two of them (rotation angle and axis, and the closely related quaternion components), as well as the matrix form of rotation representation, are of particular interest in computer vision, graphics, and robotics. Some of their computational properties are explored here. Operation counts are given for the primitive operations of normalization, conversion, and application of rotations, as well as for sequences of rotations. The numerical accuracy of vector rotation calculations is investigated for some common tasks like iterated application of the same or different rotations. The measure of accuracy is taken to be the length and direction of the resulting rotated vector. Some analytical results appear, but most conclusions are based on empirical tests at artificially reduced numerical precision.
%A S.D.
Whitehead
%T Thesis proposal: Scaling reinforcement learning systems
%R TR 304
%I Univ. of Rochester Computer Science Dept.
%D Aug. 1989
%Z 60 pages, $2.25
%X Reinforcement learning systems are interesting because they meet three major criteria for animate control, namely competence, responsiveness, and autonomous adaptability. Unfortunately, these systems have not been scaled to complex task domains. For my thesis I propose to study three separate problems that arise when scaling reinforcement learning systems to larger task domains: the propagation problem, the transfer problem, and the attention problem. The propagation problem arises when the number of states in the problem domain is scaled up and the distance the system must go for reinforcement is increased. The transfer problem occurs when reinforcement learning systems are applied to problem solving tasks where it is desirable to transfer knowledge useful for solving one problem to another. The attention problem arises when a system with a fixed-length input vector is applied to a task domain containing an arbitrary number of objects. Each of these problems is discussed along with possible approaches to its solution. A schedule for performing the research is also given.
%A T.J. Olson
%T An architectural model of visual motion understanding
%R TR 305 and Ph.D. Thesis
%I Univ. of Rochester Computer Science Dept.
%D Aug. 1989
%Z 145 pages, $6.75
%X The past few years have seen an explosion of interest in the recovery and use of visual motion information by biological and machine vision systems. In the area of computer vision, a variety of algorithms have been developed for extracting various types of motion information from images. Neuroscientists have made great strides in understanding the flow of motion information from the retina to striate and extrastriate cortex. The psychophysics community has gone a long way toward characterizing the limits and structure of human motion processing.
The central claim of this thesis is that many puzzling aspects of motion perception can be understood by assuming a particular architecture for the human motion processing system. The architecture consists of three functional units or subsystems. The first or low-level subsystem computes simple mathematical properties of the visual signal. It is entirely bottom-up, and prone to error when its implicit assumptions are violated. The intermediate-level subsystem combines the low-level system's output with world knowledge, segmentation information and other inputs to construct a representation of the world in terms of primitive forms and their trajectories. It is claimed to be the substrate for long-range apparent motion. The highest level of the motion system assembles intermediate-level form and motion primitives into scenarios that can be used for prediction and for matching against stored models. This architecture is the result of joint work with Jerome Feldman and Nigel Goddard. The description of the low-level system is in accord with the standard view of early motion processing, and the details of the high-level system are being worked out by Goddard. The secondary contribution of this thesis is a detailed connectionist model of the intermediate level of the architecture. In order to compute the trajectories of primitive shapes it is necessary to design mechanisms for handling time and Gestalt grouping effects in connectionist networks. Solutions to these problems are developed and used to construct a network that interprets continuous and apparent motion stimuli in a limited domain. Simulation results show that its interpretations are in qualitative agreement with human perception.
%A L.K. Schubert
%T Monotonic solution of the frame problem in the situation calculus: An efficient method for worlds with fully specified actions
%R TR 306
%I Univ. of Rochester Computer Science Dept.
%D Aug.
1989
%Z 48 pages, $2.00
%X This paper is concerned with the succinct axiomatization and efficient deduction of non-change, within McCarthy and Hayes' Situation Calculus. The idea behind the proposed approach is this: suppose that in a room containing a man, a robot and a cat as the only potential agents, the only action taken by the man within a certain time interval is to walk from one place to another, while the robot's only actions are to pick up a box containing the (inactive) cat and carry it from its initial place to another. We wish to prove that a certain object (such as the cat, or the doormat) did not change color. We reason that the only way it could have changed color is for the man or the robot to have painted or dyed it. But since these are not among the actions which actually occurred, the color of the object is unchanged. Thus we need no frame axioms to the effect that walking and carrying leave colors unchanged (which is in general false in multi-agent worlds), and no default schema that properties change only when we can prove they do (which is in general false in incompletely known worlds). Instead we use explanation-closure axioms specifying all primitive actions which can produce a given type of change within the setting of interest. A method similar to this has been proposed by Andrew Haas for single-agent, serial worlds. The contribution of the present paper lies in showing (1) that such methods do indeed encode non-change succinctly, (2) that they are independently motivated, (3) that they can be used to justify highly efficient methods of inferring non-change, specifically the "sleeping dog" strategy of STRIPS, and (4) that they can be extended to simple multiagent worlds with concurrent actions. An ultimate limitation may lie in the lack of a uniform strategy for deciding what fluents can be affected by what agents in a given domain. In this respect probabilistic methods appear promising.
%A J.A.G.M. Koomen
%T Reasoning about recurrence
%R TR 307 and Ph.D.
Thesis
%I Univ. of Rochester Computer Science Dept.
%D July 1989
%Z 122 pages, $5.50
%X This dissertation examines the issue of representing knowledge about recurring states and events, and develops mechanisms for reasoning about recurrence. A first-order axiomatization is developed based on the interval calculus that allows entities (events, properties, processes, etc.) to be associated, incidentally or repeatedly, with temporal intervals. We describe the implementation of a self-organizing interval database manager and inference engine, as well as a system that integrates the interval reasoner with a general purpose theorem prover and provides a restricted version of the recurrence logic. The recurrence formalism is applied to the domain of planning in a world with traffic control lights. An extensive example is developed, showing how the use of recurrence can enable a planner to determine that a plan to travel from a source to a destination is feasible, even if an intervening traffic light is red at the time it is encountered. In order to develop this example, a novel application of the interval logic to the spatial domain is presented, which allows us to reason about trajectories and their traversal over time. The combination of paths and temporal intervals, together with the recurrence axioms and actions such as traverse, allows us to prove several theorems about the use of traffic lights, and to show how an agent can validate plans to get past them.
%A M.L. Scott
%T An overview of Lynx
%R TR 308
%I Univ. of Rochester Computer Science Dept.
%D Aug. 1989
%Z 27 pages, $1.25
%X A programming language can provide much better support for interprocess communication than a library package can. Most message-passing languages limit this support to communication between the pieces of a single program, but this need not be the case.
Lynx facilitates convenient, typesafe message-passing not only within applications, but also between applications, and among distributed collections of servers. Specifically, it addresses issues of compiler statelessness, late binding, and protection that allow run-time interaction between processes that were developed independently and that do not trust each other. Implementation experience with Lynx has yielded important insights into the relationship between distributed operating systems and language run-time support packages, and into the inherent costs of high-level message-passing semantics.
%A M.L. Scott
%A T.J. LeBlanc
%A B.D. Marsh
%T Evolution of an operating system for large-scale shared-memory multiprocessors
%R TR 309
%I Univ. of Rochester Computer Science Dept.
%D March 1989
%Z 24 pages, $1.25
%X Scalable shared-memory multiprocessors (those with non-uniform memory access times) are among the most flexible architectures for high-performance parallel computing, admitting efficient implementations of a wide range of process models, communication mechanisms, and granularities of parallelism. Such machines present opportunities for general-purpose parallel computing that cannot be exploited by existing operating systems, because the traditional approach to operating system design presents a virtual machine in which the definition of processes, communication, and grain size are outside the control of the user. Psyche is an operating system designed to enable the most effective use possible of large-scale shared-memory multiprocessors.
The Psyche project is characterized by (1) a design that permits the implementation of multiple models of parallelism, both within and among applications, (2) the ability to trade protection for performance, with information sharing as the default, rather than the exception, (3) explicit, user-level control of process structure and scheduling, and (4) a kernel implementation that uses shared memory itself, and that provides users with the illusion of uniform memory access times.
%A P.F. Dietz
%T Elimination of amortization from Breu's algorithm
%R TR 310
%I Univ. of Rochester Computer Science Dept.
%D Sept. 1989
%Z 5 pages, $1.00
%X Breu has given an algorithm for finding the maximum value between two pointers moving unidirectionally through an array. His algorithm achieves O(1) amortized time per operation. This paper describes how to eliminate the amortization from his algorithm, achieving O(1) worst-case time per operation.
%A T.J. LeBlanc
%A B.D. Marsh
%A M.L. Scott
%T Memory management for large-scale NUMA multiprocessors
%R TR 311
%I Univ. of Rochester Computer Science Dept.
%D March 1989
%Z 20 pages, $1.25
%X Large-scale shared-memory multiprocessors such as the BBN Butterfly and IBM RP3 introduce a new level in the memory hierarchy: multiple physical memories with different memory access times. An operating system for these NUMA (NonUniform Memory Access) multiprocessors should provide the traditional virtual memory management, facilitate dynamic and widespread memory sharing, and minimize the apparent disparity between local and nonlocal memory. In addition, the implementation must be scalable to configurations with hundreds or thousands of processors. This paper describes memory management in the Psyche multiprocessor operating system, under development at the University of Rochester. The Psyche kernel manages a multi-level memory hierarchy consisting of local memory, nonlocal memory, and backing store.
Local memory stores private data and serves as a cache for shared data; nonlocal memory stores shared data and serves as a disk cache. The system structure isolates the policies and mechanisms that manage different layers in the memory hierarchy, so that customized data structures and policies can be constructed for each layer. Local memory management policies are implemented using mechanisms that are independent of the architectural configuration; global policies are implemented using multiple processes that increase in number as the architecture scales. Psyche currently runs on the BBN Butterfly Plus multiprocessor.
%A J.M. Mellor-Crummey
%T Debugging and analysis of large-scale parallel programs
%R TR 312 and Ph.D. Thesis
%I Univ. of Rochester Computer Science Dept.
%D Sept. 1989
%Z 139 pages, $5.00
%X One of the most serious problems in the development cycle of large-scale parallel programs is the lack of tools for debugging and performance analysis. Parallel programs are more difficult to analyze than their sequential counterparts for several reasons. First, race conditions in parallel programs can cause non-deterministic behavior, which reduces the effectiveness of traditional cyclic debugging techniques. Second, invasive, interactive analysis can distort a parallel program's execution beyond recognition. Finally, comprehensive analysis of a parallel program's execution requires collection, management, and presentation of an enormous amount of information. This dissertation addresses the problem of debugging and analysis of large-scale parallel programs executing on shared-memory multiprocessors. It proposes a methodology for top-down analysis of parallel program executions that replaces previous ad-hoc approaches. To support this methodology, a formal model for shared-memory communication among processes in a parallel program is developed.
It is shown how synchronization traces based on this abstract model can be used to create indistinguishable executions that form the basis for debugging. This result is used to develop a practical technique for tracing parallel program executions on shared-memory parallel processors so that their executions can be repeated deterministically on demand. Next, it is shown how these traces can be augmented with additional information that increases their utility for debugging and performance analysis. The design of an integrated, extensible toolkit based on these traces is proposed. This toolkit uses execution traces to support interactive, graphics-based, top-down analysis of parallel program executions. A prototype implementation of the toolkit is described, explaining how it exploits our execution tracing model to facilitate debugging and analysis. Case studies of the behavior of several versions of two parallel programs are presented to demonstrate both the utility of our execution tracing model and the leverage it provides for debugging and performance analysis.
%A M.A. Fulk
%A S. Jain
%T Approximate inference and scientific method
%R TR 313
%I Univ. of Rochester Computer Science Dept.
%D Oct. 1989
%Z 11 pages, $1.00
%X A new identification criterion, motivated by notions of successively improving approximations in the philosophy of science, is defined. It is shown that the class of recursive functions is identifiable under this criterion. This result is extended to permit somewhat more realistic types of data than usual. This criterion is then modified to consider restrictions on the quality of approximations.
%A E. Allender
%A L.A. Hemachandra
%T Lower bounds for the low hierarchy
%R TR 314
%I Univ. of Rochester Computer Science Dept.
%D Oct. 1989
%Z 24 pages, $1.25
%X The low hierarchy in NP and the extended low hierarchy have been useful in characterizing the complexity of certain interesting classes of sets.
However, until now, there have been no results establishing whether a given lowness result is the best possible. We prove absolute lower bounds on the location of classes in the extended low hierarchy, and relativized lower bounds on the location of classes in the low hierarchy in NP. In some cases, we are able to show that the classes are lower in the hierarchies than was known previously. In almost all cases, we are able to prove that our results are essentially optimal. We also examine the interrelationships among the levels of the low hierarchies and the classes of sets reducible to or equivalent to sparse and tally sets under different notions of reducibility. We feel that these results clarify the structure underlying the low hierarchies.
%A D.G. Tilley
%T Zebra for MaxVideo: An application of object-oriented microprogramming to register level devices
%R TR 315
%I Univ. of Rochester Computer Science Dept.
%D Feb. 1990 (revised)
%Z 136 pages, $5.00
%X This report describes an object-oriented programming interface to a set of image processing boards from Datacube's MaxVideo image processing board family. These boards are register-programmable devices that perform operations on full-frame video in real time. They contain no CPU and are controlled by a host processor. Typically, these devices are controlled by a function library supplied by the vendor, with one or two functions supplied for each bit or field of bits in each register in the board. Zebra takes a microprogramming-like approach. Rather than provide a function for each bit field in each register, the boards are programmed by filling the entire register set with an "instruction word" that completely describes the register space. These instruction words can then be considered as data items to be stored in files, edited by "instruction word editors," and loaded into programs at run time. Data encapsulation and abstraction are facilitated through the use of Object Oriented Programming (OOP) techniques.
Zebra is written in C++ and provides object classes for each device. Through the use of inheritance, application objects can be created, allowing programmers to customize their programming interface. The techniques used in Zebra may be applicable to a wide range of register-programmable devices.
%A E. Kranakis
%A D. Krizanc
%A J. van den Berg
%T Computing boolean functions on anonymous networks
%R TR 316
%I Univ. of Rochester Computer Science Dept.
%D Nov. 1989
%Z 19 pages, $1.00
%X We study the bit-complexity of computing boolean functions on anonymous networks. Let #N# be the number of nodes, #d# the diameter, and #delta# the maximal node degree of the network. For arbitrary, unlabeled networks we give a general algorithm of polynomial bit complexity #O(N sup 4 cdot delta cdot d sup 2 cdot log N)# for computing any boolean function which is computable on this network. This improves upon the previous best known algorithm, which was of exponential bit complexity #O(d sup {N sup 2})#. For symmetric functions on arbitrary networks we give an algorithm with bit complexity #O(N sup 2 cdot delta cdot d sup 2 cdot log sup 2 N)#. This same algorithm is shown to have even lower bit complexity for a number of specific networks. We consider the class of distance regular unlabeled networks and show that on such networks symmetric functions can be computed efficiently in #O(N cdot delta cdot d cdot log N)# bits. We study in detail the case of the #n#-dimensional hypercube, with #N = 2 sup n# nodes. We show that on the oriented hypercube an arbitrary boolean function #f# is computable if and only if it is kept invariant under all the flipping automorphisms of the hypercube, in which case it can be computed in #O(N sup 2)# bits. Further we show that every symmetric function can be computed in #O(N cdot log sup 2 N)# bits on the oriented and in #O(N cdot log sup 3 N)# bits on the unlabeled hypercube.
%A J.F. Allen
%A S. Guez
%A L.J. Hoebel
%A E.A. Hinkelman
%A K.J. Jackson
%A A.I.
Kyburg
%A D.R. Traum
%T The discourse system project
%R TR 317
%I Univ. of Rochester Computer Science Dept.
%D Nov. 1989
%Z 82 pages, $3.25
%X There has been significant work in the last decade on the processing of discourse for modeling two-person extended dialogues, story and text understanding, and extended question answering systems. While there has been important progress in the use of world knowledge in language interpretation, the use of plan-based models of language (e.g., speech act planning), models of reference and focus, and models of discourse structure itself, work in each area has not been related to work in the other areas. No one has determined how individual processing techniques can be combined to form a fully functional discourse system. Some suggestions on the organization of discourse have arisen recently, and while showing promise, have not been explored in enough detail for an actual application. This report describes an architecture for discourse systems that allows the integration of many different processing modules. It also describes an initial set of modules that have been implemented and tested using the architecture.
%A L.A. Hemachandra
%A A. Hoene
%T On sets with efficient implicit membership tests
%R TR 318
%I Univ. of Rochester Computer Science Dept.
%D Nov. 1989
%Z 13 pages, $1.00
%X This paper completely characterizes the complexity of implicit membership testing in terms of the well-known complexity class OptP, optimization polynomial time, and concludes that many complex sets have polynomial-time implicit membership tests.
%A C. Chronaki
%A D.L. Baldwin
%T A front end for CONSUL
%R TR 319
%I Univ. of Rochester Computer Science Dept.
%D Jan. 1990
%Z 25 pages, $1.25
%X CONSUL is a prototype constraint language for programming multiprocessors. In this paper we present a front end for CONSUL that makes the language easier to use than it originally was.
The most important features of the front end are compact representation of constraints, type definitions, functional use of relations, and the ability to split programs into multiple files.
%A G.J. Hauser, Jr.
%T Communication complexity
%R TR 320 and Ph.D. Thesis
%I Univ. of Rochester Computer Science Dept.
%D June 1989
%Z 180 pages, $7.00
%X A complete and formal model of computation for a network of two communicating processes is presented using an extension of the Turing Machine called a Communicating Turing Machine (CTM). The resources of number of symbols exchanged and maximum amount of local storage used between messages are identified and referred to as the communication time and communication space, respectively. As a pair of processors, each with its own input, a fortiori accepts a set of pairs of strings, some consideration must be given to the mapping of problems to CTM inputs. A model parameter is this input mapping function. In addition to consideration of the usual partition mapping, we introduce a distribution mapping which bounds only the number of fragments into which an input is divided, and all partitions with this fragmentation are allowed. Complexity classes for each input mapping function are identified. A full, dense hierarchy is shown to exist in communication space and time from constant up to linear. For distribution input mappings, it is shown that the constant complexity classes are exactly the regular languages and that there is a gap between constant and log log n in the space hierarchy. For fair partition input mappings, most of the structure of standard TM complexity obtains for communication complexity. An upper bound is given for the communication space complexity of context-free languages, and it is shown that the bound is met for bounded-fragmentation partitions. Example languages demonstrate that most complexity class relationships under fair partitions cannot be improved.
An algorithm is presented that optimizes the communication time complexity of an existing communication protocol for a finite language. A new lower bound for communication complexity is presented that uses the number of equivalence classes of strings induced by the language and the input mapping function independently. It is shown that this lower bound is within a constant factor of the required minimum for communication time on fixed-cut partition input mappings.
%A R. Raman
%A P.F. Dietz
%T A constant update time finger search tree
%R TR 321
%I Univ. of Rochester Computer Science Dept.
%D Dec. 1989
%Z 14 pages, $1.00
%X Levcopoulos and Overmars described a search tree in which the time to insert or delete a key was O(1) once the position of the key to be inserted or deleted was known. Their data structure did not support fingers: pointers to points of high access or update activity in the set, such that access and update operations in the vicinity of a finger are particularly efficient. They left open the question of whether a data structure could be designed that allowed updates in constant time and supported fingers. We answer the question in the affirmative by giving an algorithm in the RAM with logarithmic word size.
%A P.F. Dietz
%A D.D. Sleator
%T Amortized analysis of a pebble game
%R TR 322
%I Univ. of Rochester Computer Science Dept.
%D Jan. 1990
%Z 3 pages, $1.00
%X We consider the following one-person game, motivated by the African games of Owari and Kalah. The board consists of n pits containing a total of n pebbles. To perform a move we pick some pit, remove its k pebbles, and place them back into k distinct pits (possibly including the pit just emptied). The score of the move is k, the number of pebbles moved. The goal is to make a sequence of moves that obtains as large a score as possible. We prove that the amortized score of a move is at most #sqrt {2n + 1/4} - 1/2#.
Thus in the long run, starting from any initial configuration, one cannot obtain an average score per move that exceeds this. We also exhibit a strategy that achieves this amortized score for infinitely many values of n. The proof of the upper bound is by the method of potential functions.
%A W.I. Gasarch
%A L.A. Hemachandra
%A A. Hoene
%T On checking versus evaluation of multiple queries
%R TR 323
%I Univ. of Rochester Computer Science Dept.
%D Jan. 1990
%Z 19 pages, $1.00
%X The distinction between computing answers and checking answers is fundamental to computational complexity theory, and is reflected in the relationship of NP to P. The plausibility of computing the answers to many membership queries to a hard set with few queries is the subject of the theory of terseness. In this paper, we develop companion theories--both complexity-theoretic and recursion-theoretic--of characteristic vector terseness, which ask whether the answers to many membership queries to a hard set can be checked with fewer queries.
%A N.G. Martin
%A J.F. Allen
%A C.M. Brown
%T ARMTRAK: A domain for the unified study of natural language, planning, and active vision
%R TR 324
%I Univ. of Rochester Computer Science Dept.
%D Jan. 1990
%Z 43 pages, $2.00
%X ARMTRAK is a micro-world, based on the control of model trains, designed to integrate work in natural language, planning, vision and robotics. The primary advantage of the domain is that it provides examples that involve few objects but require sophisticated analysis. Because they involve few objects, the complex reasoning required is not intractable. On the other hand, more objects can be introduced to study techniques for tractable reasoning. Simple and complex examples in the same domain allow work at different levels to take place simultaneously. As a planning domain, ARMTRAK allows exercising planners in a real-time domain about which the planner has only imperfect knowledge.
As a domain for natural language research, it allows research into the grounding of language in real situations and the problem of coordinating the behavior of agents through language. As a domain for active vision research, it is challenging because it requires extracting information whose parameters cannot be completely specified beforehand. Two implementations of ARMTRAK have been developed: a simulation and a version using the Rochester Robot. The simulation allows work on real-time planning, and the robot version shows the feasibility of a real working system based on model trains.
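The pebble game of Dietz and Sleator (TR 322 above) is concrete enough to try out directly. The sketch below is not from the report: the greedy strategy (empty the fullest pit, refill the fullest remaining pits) and the parameters n and moves are illustrative assumptions; it simply simulates play and checks the empirical average score per move against the sqrt(2n + 1/4) - 1/2 amortized bound stated in the abstract.

```python
import math

def greedy_pebble_game(n, moves):
    """Play the one-person pebble game of TR 322: n pits, n pebbles total.
    Hypothetical greedy strategy: empty the fullest pit (k pebbles), then
    drop one pebble into each of the k fullest pits (the emptied pit may
    receive a pebble again, as the rules allow)."""
    pits = [1] * n                      # any start works; one pebble per pit
    total = 0
    for _ in range(moves):
        src = max(range(n), key=pits.__getitem__)
        k = pits[src]
        pits[src] = 0
        # k distinct destination pits, chosen as the currently fullest ones
        for dst in sorted(range(n), key=pits.__getitem__, reverse=True)[:k]:
            pits[dst] += 1
        total += k                      # the score of a move is k
    return total / moves                # average score per move

n, moves = 50, 2000
bound = math.sqrt(2 * n + 0.25) - 0.5   # amortized bound from the abstract
avg = greedy_pebble_game(n, moves)
print("average score per move: %.2f (bound %.2f)" % (avg, bound))
assert avg <= bound + 0.5               # small slack for the potential term
```

The slack term in the final assertion reflects that the theorem bounds the amortized, not the per-run average, score: over a finite run the average can exceed the bound by (initial potential)/(number of moves), which vanishes as the run lengthens.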