Journal of Artificial Intelligence Research: Complete Table of Contents (with abstracts)

Volume 1

Wellman, M.P. (1993) "A Market-Oriented Programming Environment and its Application to Distributed Multicommodity Flow Problems", Volume 1, pages 1-23.
PostScript: volume1/wellman93a.ps (348K)
Abstract: Market price systems constitute a well-understood class of mechanisms that under certain conditions provide effective decentralization of decision making with minimal communication overhead. In a market-oriented programming approach to distributed problem solving, we derive the activities and resource allocations for a set of computational agents by computing the competitive equilibrium of an artificial economy. WALRAS provides basic constructs for defining computational market structures, and protocols for deriving their corresponding price equilibria. In a particular realization of this approach for a form of multicommodity flow problem, we see that careful construction of the decision process according to economic principles can lead to efficient distributed resource allocation, and that the behavior of the system can be meaningfully analyzed in economic terms.

Ginsberg, M.L. (1993) "Dynamic Backtracking", Volume 1, pages 25-46.
PostScript: volume1/ginsberg93a.ps (211K)
Online Appendix: volume1/ginsberg93a-appendix.lisp (7K), crossword data
Abstract: Because of their occasional need to return to shallow points in a search tree, existing backtracking methods can sometimes erase meaningful progress toward solving a search problem. In this paper, we present a method by which backtrack points can be moved deeper in the search space, thereby avoiding this difficulty. The technique developed is a variant of dependency-directed backtracking that uses only polynomial space while still providing useful control information and retaining the completeness guarantees provided by earlier approaches.

Gent, I.P. and Walsh, T. (1993) "An Empirical Analysis of Search in GSAT", Volume 1, pages 47-59.
PostScript: volume1/gent93a.ps (500K)
Abstract: We describe an extensive study of search in GSAT, an approximation procedure for propositional satisfiability. GSAT performs greedy hill-climbing on the number of satisfied clauses in a truth assignment. Our experiments provide a more complete picture of GSAT's search than previous accounts. We describe in detail the two phases of search: rapid hill-climbing followed by a long plateau search. We demonstrate that when applied to randomly generated 3SAT problems, there is a very simple scaling with problem size for both the mean number of satisfied clauses and the mean branching rate. Our results allow us to make detailed numerical conjectures about the length of the hill-climbing phase and the average gradient of this phase, and to conjecture that both the average score and the average branching rate decay exponentially during plateau search. We end by showing how these results can be used to direct future theoretical analysis. This work provides a case study of how computer experiments can be used to improve understanding of the theoretical properties of algorithms.
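Since the paper's analysis presumes familiarity with the procedure, a compact sketch may help. The following Python fragment implements GSAT as it is usually described (random initial truth assignment, greedy flips scored by the number of satisfied clauses, random restarts); the parameter defaults, the tie-breaking rule, and the tiny example formula are illustrative choices, not taken from the paper.

    import random

    def num_satisfied(clauses, assignment):
        # A clause is a list of signed integers; literal l is true when
        # assignment[abs(l)] == (l > 0).
        return sum(any(assignment[abs(l)] == (l > 0) for l in clause)
                   for clause in clauses)

    def gsat(clauses, n_vars, max_tries=10, max_flips=1000):
        for _ in range(max_tries):
            assignment = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
            for _ in range(max_flips):
                if num_satisfied(clauses, assignment) == len(clauses):
                    return assignment  # all clauses satisfied
                # Greedy step: score every possible flip, then pick one of the
                # best at random. Sideways moves (no score change) are allowed,
                # which is what produces the long plateau phase studied above.
                scores = []
                for v in assignment:
                    assignment[v] = not assignment[v]
                    scores.append((num_satisfied(clauses, assignment), v))
                    assignment[v] = not assignment[v]
                best = max(score for score, _ in scores)
                _, v = random.choice([s for s in scores if s[0] == best])
                assignment[v] = not assignment[v]
        return None  # give up after max_tries random restarts

    # Example: (x1 or not x2) and (x2 or x3), in the usual signed-integer encoding.
    print(gsat([[1, -2], [2, 3]], n_vars=3))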
Schlimmer, J.C. and Hermens, L.A. (1993) "Software Agents: Completing Patterns and Constructing User Interfaces", Volume 1, pages 61-89.
PostScript: volume1/schlimmer93a.ps (1.2M) compressed, volume1/schlimmer93a.ps.Z (278K)
Online Appendix: volume1/schlimmer93a-appendix.hqx (1.3M), QuickTime demo
Abstract: To support the goal of allowing users to record and retrieve information, this paper describes an interactive note-taking system for pen-based computers with two distinctive features. First, it actively predicts what the user is going to write. Second, it automatically constructs a custom, button-box user interface on request. The system is an example of a learning-apprentice software agent. A machine learning component characterizes the syntax and semantics of the user's information. A performance system uses this learned information to generate completion strings and construct a user interface.
Description of Online Appendix: People like to record information. Doing this on paper is initially efficient, but lacks flexibility. Recording information on a computer is less efficient but more powerful. In our new note-taking software, the user records information directly on a computer. Behind the interface, an agent acts for the user. To help, it provides defaults and constructs a custom user interface. The demonstration is a QuickTime movie of the note-taking agent in action. The file is a binhexed self-extracting archive. Macintosh utilities for binhex are available from mac.archive.umich.edu. QuickTime is available from ftp.apple.com in the dts/mac/sys.soft/quicktime directory.
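The predictive half of such an agent can be approximated crudely with a frequency table over previously recorded strings. The Python sketch below is only a toy stand-in for the paper's machine learning component, which models the syntax and semantics of the user's notes; the class name, method names, and sample notes are invented for illustration.

    class CompletionPredictor:
        """Toy predictor: propose the most frequent previously recorded
        note that extends the current prefix."""

        def __init__(self):
            self.counts = {}  # note text -> number of times recorded

        def record(self, text):
            self.counts[text] = self.counts.get(text, 0) + 1

        def complete(self, prefix):
            candidates = [(n, t) for t, n in self.counts.items()
                          if t.startswith(prefix) and t != prefix]
            if not candidates:
                return None
            return max(candidates)[1]  # highest count wins

    agent = CompletionPredictor()
    for note in ["meeting with Ann", "meeting with Bob", "meeting with Ann"]:
        agent.record(note)
    print(agent.complete("meeting "))  # -> "meeting with Ann"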
Bergadano, F., Gunetti, D. and Trinchero, U. (1993) "The Difficulties of Learning Logic Programs with Cut", Volume 1, pages 91-107.
PostScript: volume1/bergadano93a.ps (185K)
Abstract: As real logic programmers normally use cut (!), an effective learning procedure for logic programs should be able to deal with it. Because the cut predicate has only a procedural meaning, clauses containing cut cannot be learned using an extensional evaluation method, as is done in most learning systems. On the other hand, searching a space of possible programs (instead of a space of independent clauses) is infeasible. An alternative solution is to first generate a candidate base program which covers the positive examples, and then make it consistent by inserting cut where appropriate. The problem of learning programs with cut has not been investigated before, and this seems to be a natural and reasonable approach. We generalize this scheme and investigate the difficulties that arise. Some of the major shortcomings are actually caused, in general, by the need for intensional evaluation. In conclusion, the analysis of this paper suggests, on precise and technical grounds, that learning cut is difficult, and that current induction techniques should probably be restricted to purely declarative logic languages.

Buchheit, M., Donini, F.M. and Schaerf, A. (1993) "Decidable Reasoning in Terminological Knowledge Representation Systems", Volume 1, pages 109-138.
PostScript: volume1/buchheit93a.ps (311K)
Abstract: Terminological knowledge representation systems (TKRSs) are tools for designing and using knowledge bases that make use of terminological languages (or concept languages). We analyze from a theoretical point of view a TKRS whose capabilities go beyond those of presently available TKRSs. The new features studied, often required in practical applications, can be summarized in three main points. First, we consider a highly expressive terminological language, called ALCNR, including general complements of concepts, number restrictions and role conjunction. Second, we allow the expression of inclusion statements between general concepts, with terminological cycles as a particular case. Third, we prove the decidability of a number of desirable TKRS deduction services (such as satisfiability, subsumption and instance checking) through a sound, complete and terminating calculus for reasoning in ALCNR knowledge bases. Our calculus extends the general technique of constraint systems. As a byproduct of the proof, we also obtain the result that inclusion statements in ALCNR can be simulated by terminological cycles, if descriptive semantics is adopted.

Nilsson, N. (1994) "Teleo-Reactive Programs for Agent Control", Volume 1, pages 139-158.
PostScript: volume1/nilsson94a.ps (477K) compressed, volume1/nilsson94a.ps.Z (155K)
Abstract: A formalism is presented for computing and organizing actions for autonomous agents in dynamic environments. We introduce the notion of teleo-reactive (T-R) programs, whose execution entails the construction of circuitry for the continuous computation of the parameters and conditions on which agent action is based. In addition to continuous feedback, T-R programs support parameter binding and recursion. A primary difference between T-R programs and many other circuit-based systems is that the circuitry of T-R programs is more compact; it is constructed at run time and thus does not have to anticipate all the contingencies that might arise over all possible runs. In addition, T-R programs are intuitive and easy to write, and are written in a form that is compatible with automatic planning and learning methods. We briefly describe some experimental applications of T-R programs in the control of simulated and actual mobile robots.
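In Nilsson's formalism, a T-R program is an ordered list of condition-action rules that is rescanned continually, with the first rule whose condition holds supplying the current action. The Python sketch below gives a polling-loop reading of that execution cycle; note that the paper's actual model compiles the rules into continuously evaluated circuitry rather than a loop, and the toy world, rule set, and cycle bound here are invented for illustration.

    # A T-R program: rules ordered from the goal condition down to the most
    # general fallback; the first rule whose condition holds fires each cycle.
    def run_tr_program(rules, max_cycles=100):
        for _ in range(max_cycles):
            for condition, action in rules:
                if condition():
                    if action is None:   # the nil action: goal achieved
                        return True
                    action()             # one instant of a durative action
                    break
        return False

    # Toy one-dimensional world, invented for this sketch.
    pos, target = 0, 5

    def at_target():
        return pos == target

    def move_toward_target():
        global pos
        pos += 1 if pos < target else -1

    rules = [
        (at_target, None),                   # goal condition -> nil action
        (lambda: True, move_toward_target),  # fallback: reduce the distance
    ]
    print(run_tr_program(rules), pos)  # -> True 5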
Koppel, M., Feldman, R. and Segre, A.M. (1994) "Bias-Driven Revision of Logical Domain Theories", Volume 1, pages 159-208.
PostScript: volume1/koppel94a.ps (465K) compressed, volume1/koppel94a.ps.Z (203K)
Abstract: The theory revision problem is the problem of how best to revise a deficient domain theory using information contained in examples that expose inaccuracies. In this paper we present our approach to the theory revision problem for propositional domain theories. The approach described here, called PTR, uses probabilities associated with domain theory elements to numerically track the "flow" of proof through the theory. This allows us to measure the precise role of a clause or literal in allowing or preventing a (desired or undesired) derivation for a given example. This information is used to efficiently locate and repair flawed elements of the theory. PTR is proved to converge to a theory which correctly classifies all examples, and is shown experimentally to be fast and accurate even for deep theories.

Ling, C.X. (1994) "Learning the Past Tense of English Verbs: The Symbolic Pattern Associator vs. Connectionist Models", Volume 1, pages 209-229.
PostScript: volume1/ling94a.ps (247K)
Online Appendix: volume1/ling-appendix.Z (109K), data file, compressed
Abstract: Learning the past tense of English verbs - a seemingly minor aspect of language acquisition - has generated heated debates since 1986, and has become a landmark task for testing the adequacy of cognitive modeling. Several artificial neural networks (ANNs) have been implemented, and a challenge for better symbolic models has been posed. In this paper, we present a general-purpose Symbolic Pattern Associator (SPA) based upon the decision-tree learning algorithm ID3. We conduct extensive head-to-head comparisons of the generalization ability of ANN models and the SPA under different representations. We conclude that the SPA generalizes the past tense of unseen verbs better than ANN models by a wide margin, and we offer insights as to why this should be the case. We also discuss a new default strategy for decision-tree learning algorithms.

Cook, D.J. and Holder, L.B. (1994) "Substructure Discovery Using Minimum Description Length and Background Knowledge", Volume 1, pages 231-255.
PostScript: volume1/cook94a.ps (750K) compressed, volume1/cook94a.ps.Z (266K)
Online Appendix: volume1/cook94a-appendix.tar.Z (36K), SUBDUE source code
Abstract: The ability to identify interesting and repetitive substructures is an essential component of discovering knowledge in structural data. We describe a new version of our SUBDUE substructure discovery system based on the minimum description length principle. The SUBDUE system discovers substructures that compress the original data and represent structural concepts in the data. By replacing previously discovered substructures in the data, multiple passes of SUBDUE produce a hierarchical description of the structural regularities in the data. SUBDUE uses a computationally bounded inexact graph match that identifies similar, but not identical, instances of a substructure and finds an approximate measure of closeness of two substructures when under computational constraints. In addition to the minimum description length principle, other background knowledge can be used by SUBDUE to guide the search towards more appropriate substructures. Experiments in a variety of domains demonstrate SUBDUE's ability to find substructures capable of compressing the original data and to discover structural concepts important to the domain.
Description of Online Appendix: This is a compressed tar file containing the SUBDUE discovery system, written in C. The program accepts as input databases represented in graph form, and will output discovered substructures with their corresponding value.

Murphy, P.M. and Pazzani, M.J. (1994) "Exploring the Decision Forest: An Empirical Investigation of Occam's Razor in Decision Tree Induction", Volume 1, pages 257-275.
PostScript: volume1/murphy94a.ps (868K) compressed, volume1/murphy94a.ps.Z (215K)
Abstract: We report on a series of experiments in which all decision trees consistent with the training data are constructed. These experiments were run to gain an understanding of the properties of the set of consistent decision trees and the factors that affect the accuracy of individual trees. In particular, we investigated the relationship between the size of a decision tree consistent with some training data and the accuracy of the tree on test data. The experiments were performed on a massively parallel MasPar computer. The results of the experiments on several artificial and two real-world problems indicate that, for many of the problems investigated, smaller consistent decision trees are, on average, less accurate than slightly larger trees.
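The experiment is easy to re-create in miniature. The Python sketch below enumerates every decision tree up to a small depth over three binary attributes, keeps those consistent with a toy training set, and reports mean test accuracy grouped by tree size; the XOR dataset and the depth bound are invented for illustration and are far smaller than the paper's MasPar-scale runs.

    from itertools import product

    # A tree is a leaf label (0 or 1) or a triple (attribute, left, right),
    # where left is taken when the attribute is 0 and right when it is 1.
    def all_trees(attrs, depth):
        yield 0
        yield 1
        if depth > 0:
            for a in attrs:
                rest = [b for b in attrs if b != a]
                for left, right in product(all_trees(rest, depth - 1), repeat=2):
                    yield (a, left, right)

    def classify(tree, x):
        while isinstance(tree, tuple):
            attr, left, right = tree
            tree = right if x[attr] else left
        return tree

    def size(tree):
        if not isinstance(tree, tuple):
            return 1
        return 1 + size(tree[1]) + size(tree[2])

    # Toy target concept: y = x0 XOR x1, with x2 irrelevant.
    train = [((0, 0, 0), 0), ((0, 1, 1), 1), ((1, 0, 0), 1), ((1, 1, 1), 0)]
    test = [((0, 0, 1), 0), ((0, 1, 0), 1), ((1, 0, 1), 1), ((1, 1, 0), 0)]

    by_size = {}
    for t in all_trees([0, 1, 2], depth=3):
        if all(classify(t, x) == y for x, y in train):
            acc = sum(classify(t, x) == y for x, y in test) / len(test)
            by_size.setdefault(size(t), []).append(acc)

    for s in sorted(by_size):
        accs = by_size[s]
        print(f"size {s}: {len(accs)} consistent trees, "
              f"mean test accuracy {sum(accs) / len(accs):.2f}")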