A new inductive learning system, LAB (Learning for ABduction), is presented which acquires abductive rules from a set of training examples. The goal is to find a small knowledge base which, when used abductively, diagnoses the training examples correctly and generalizes well to unseen examples. This contrasts with past systems that inductively learn rules that are used deductively. Each training example is associated with potentially multiple categories (disorders), instead of one as with typical learning systems. LAB uses a simple hill-climbing algorithm to efficiently build a rule base for a set-covering abductive system. LAB has been experimentally evaluated and compared to other learning systems and an expert knowledge base in the domain of diagnosing brain damage due to stroke.
Cynthia Thompson
M.A. Thesis, Department of Computer Sciences, University of Texas at Austin, 1993.
A new inductive learning system, LAB (Learning for ABduction), is
presented. LAB learns abductive rules from a set of training examples.
Our goal is to find a small knowledge base which, when used abductively,
diagnoses the training examples correctly and also generalizes well to
unseen examples. This is in contrast to past systems, which inductively
learn rules that are then used deductively. Abduction is particularly
well suited to diagnosis, in which we are given a set of symptoms
(manifestations) and want our output to be a set of disorders that
explains why the manifestations are present. Each training example is
associated with potentially multiple categories, rather than the single
category assumed by typical learning systems. Building the knowledge
base requires choosing among multiple possibilities, and the number of
possibilities grows exponentially with the number of training examples.
One method of choosing the best knowledge base is described and
implemented. The final system is experimentally evaluated using data
from the domain of diagnosing brain damage due to stroke, and is
compared to other learning systems and a knowledge base produced by an
expert. The results are promising: the learned rule base is simpler
than both the expert knowledge base and the rules learned by one of the
other systems, and its accuracy in predicting which areas are damaged
is better than that of all the other systems as well as the expert
knowledge base.
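The set-covering and hill-climbing ideas described above can be
pictured with a short sketch. The code below is not LAB itself; the
rule representation, the greedy cover, and all names (diagnose,
hill_climb, and so on) are simplifying assumptions made for
illustration: a rule base maps each disorder to the symptoms it can
explain, diagnosis greedily covers the observed symptoms, and hill
climbing adds one (disorder, symptom) link at a time as long as
training accuracy improves.

```python
# Illustrative sketch (not LAB): set-covering abductive diagnosis with
# greedy hill-climbing rule construction. All names are hypothetical.

# A rule base maps each disorder to the set of symptoms it can explain.
RuleBase = dict[str, frozenset[str]]

def diagnose(rules: RuleBase, symptoms: frozenset[str]) -> set[str]:
    """Greedy set cover: pick disorders until every symptom is explained."""
    uncovered, diagnosis = set(symptoms), set()
    while uncovered:
        # Choose the disorder whose rule explains the most remaining symptoms.
        best = max(rules, key=lambda d: len(rules[d] & uncovered))
        if not rules[best] & uncovered:
            break  # some symptoms cannot be explained by any rule
        diagnosis.add(best)
        uncovered -= rules[best]
    return diagnosis

def accuracy(rules: RuleBase, examples) -> float:
    """Fraction of training cases whose diagnosis matches the label set."""
    hits = sum(diagnose(rules, syms) == labels for syms, labels in examples)
    return hits / len(examples)

def hill_climb(examples, disorders, symptoms) -> RuleBase:
    """Grow rules one (disorder, symptom) link at a time, keeping any
    addition that improves training accuracy; stop at a local optimum."""
    rules: RuleBase = {d: frozenset() for d in disorders}
    best = accuracy(rules, examples)
    improved = True
    while improved:
        improved = False
        for d in disorders:
            for s in symptoms:
                if s in rules[d]:
                    continue
                trial = dict(rules)
                trial[d] = rules[d] | {s}
                score = accuracy(trial, examples)
                if score > best:
                    rules, best, improved = trial, score, True
    return rules
```

A greedy search like this settles for a local optimum, which matches
the abstract's point that exhaustively comparing the exponentially many
candidate knowledge bases is infeasible.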
Siddarth Subramanian
Technical Report AI92-179, Artificial Intelligence Lab,
University of Texas at Austin, March 1991.
This proposal presents an approach to explanation that
incorporates the paradigms of belief revision and abduction. We
present an algorithm that combines these techniques and a system
called BRACE that is a preliminary implementation of this
algorithm. We show the applicability of the BRACE approach to a wide
range of domains including scientific discovery, device diagnosis and
plan recognition. Finally, we describe our proposals for a new
implementation, new application domains for our system, and extensions
to this approach.
Hwee Tou Ng and Raymond J. Mooney
Submitted for journal publication.
A diverse set of intelligent activities, including natural language
understanding and diagnosis, requires the ability to construct
explanations for observed phenomena. In this paper, we view
explanation as abduction, where an abductive explanation is a
consistent set of assumptions which, together with background
knowledge, logically entails a set of observations. We have
successfully built a domain-independent system, ACCEL, in which
knowledge about a variety of domains is uniformly encoded in
first-order Horn-clause axioms. A general-purpose abduction
algorithm, AAA, efficiently constructs explanations in the
various domains by caching partial explanations to avoid redundant
work. Empirical results show that caching of partial explanations can
achieve more than an order of magnitude speedup in run time. We have
applied our abductive system to two general tasks: plan recognition in
text understanding, and diagnosis of medical diseases, logic circuits,
and dynamic systems. The results indicate that ACCEL is a
general-purpose system capable of plan recognition and diagnosis, yet
efficient enough to be of practical utility.
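The definition used above, a consistent set of assumptions that
together with background knowledge entails the observations, and the
caching of partial explanations can both be made concrete in a small
sketch. This is not ACCEL or AAA; the propositional encoding, the
RULES and ASSUMABLE names, and the example atoms are assumptions made
purely for illustration, with caching approximated by memoization so
each atom's explanations are computed only once.

```python
# Illustrative sketch (not ACCEL/AAA): propositional Horn-clause
# abduction via backward chaining, caching explanations per atom so
# subgoals shared by several observations are not recomputed.

from functools import lru_cache

# Background knowledge: head <- body (propositional Horn clauses).
RULES = {
    "fever": [["infection"], ["heatstroke"]],
    "cough": [["infection"], ["allergy"]],
}
ASSUMABLE = {"infection", "heatstroke", "allergy"}  # abducible atoms

@lru_cache(maxsize=None)  # cache of partial explanations per atom
def explanations(atom: str) -> frozenset[frozenset[str]]:
    """All assumption sets that entail `atom` given RULES."""
    results = set()
    if atom in ASSUMABLE:
        results.add(frozenset([atom]))
    for body in RULES.get(atom, []):
        # Combine explanations of the body literals (cross product).
        combined = {frozenset()}
        for lit in body:
            combined = {e | s for e in combined for s in explanations(lit)}
        results |= combined
    return frozenset(results)

def explain(observations) -> list[set[str]]:
    """Explanations covering all observations, simplest first."""
    combined = {frozenset()}
    for obs in observations:
        combined = {e | s for e in combined for s in explanations(obs)}
    return sorted((set(e) for e in combined), key=len)

print(explain(["fever", "cough"]))
# e.g. [{'infection'}, {'heatstroke', 'allergy'}, ...], simplest first
```

Note that this toy version neither prunes non-minimal assumption sets
nor checks their consistency, both of which the definition in the
abstract requires of a real abductive explanation.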
Hwee Tou Ng and Raymond J. Mooney
Proceedings of the Third International Conference on Principles
of Knowledge Representation and Reasoning, pp. 499-508,
Cambridge, MA, October 1992.
While it has been realized for quite some time within AI that abduction
is a general model of explanation for a variety of tasks, there have
been no empirical investigations into the practical feasibility of a
general, logic-based abductive approach to explanation. In this paper
we present extensive empirical results on applying a general abductive
system, ACCEL, to moderately complex problems in plan recognition
and diagnosis. In plan recognition, ACCEL has been tested on 50
short narrative texts, inferring characters' plans from actions
described in a text. In medical diagnosis, ACCEL has diagnosed 50
real-world patient cases involving brain damage due to stroke
(previously addressed by set-covering methods). ACCEL also uses
abduction to accomplish model-based diagnosis of logic circuits (a full
adder) and continuous dynamic systems (a temperature controller and the
water balance system of the human kidney). The results indicate that
general-purpose abduction is an effective and efficient mechanism for
solving problems in plan recognition and diagnosis.
Bradley L. Richards, Ina Kraan, and Benjamin J. Kuipers
Proceedings of the Tenth National Conference on
Artificial Intelligence, San Jose, CA, July 1992.
We describe a method of automatically abducing qualitative models from
descriptions of behaviors. We generate, from either quantitative or
qualitative data, models in the form of qualitative differential equations
suitable for use by QSIM. Constraints are generated and filtered both by
comparison with the input behaviors and by dimensional analysis. If the
user provides complete information on the input behaviors and the
dimensions of the input variables, the resulting model is unique,
maximally constrained, and guaranteed to reproduce the input behaviors.
If the user provides incomplete information, our method will still
generate a model which reproduces the input behaviors, but the model
may no longer be unique. Incompleteness can take several forms: missing
dimensions, values of variables, or entire variables.
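The two filters mentioned above, agreement with the input behaviors and
dimensional analysis, can be sketched for a single constraint type.
The code below is not the authors' generator; restricting candidates to
derivative constraints, the dimension encoding, and the sample behavior
are all hypothetical simplifications.

```python
# Illustrative sketch: generate candidate QDE-style constraints, then
# filter them by dimensional analysis and by consistency with observed
# behaviors. All names and data are hypothetical.

from itertools import permutations

# Dimensions as exponent maps, e.g. velocity has units m * s^-1.
DIMS = {"x": {"m": 1}, "v": {"m": 1, "s": -1}, "a": {"m": 1, "s": -2}}

def per_second(d):
    """Dimension of a variable's time derivative (divide by seconds)."""
    out = dict(d)
    out["s"] = out.get("s", 0) - 1
    return {k: v for k, v in out.items() if v}

def candidate_derivs(variables):
    """Propose DERIV(u, v) constraints: v is the time derivative of u."""
    return [("DERIV", u, v) for u, v in permutations(variables, 2)]

def dimension_ok(c):
    _, u, v = c
    return per_second(DIMS[u]) == DIMS[v]

def behavior_ok(c, behavior):
    """Sign agreement: v > 0 iff u is increasing, v < 0 iff decreasing."""
    sign = {"inc": 1, "std": 0, "dec": -1}
    _, u, v = c
    return all(
        (state[v] > 0) - (state[v] < 0) == sign[direction[u]]
        for state, direction in behavior
    )

# One hypothetical behavior: values plus qualitative directions of change.
behavior = [({"x": 1.0, "v": 2.0, "a": 0.0},
             {"x": "inc", "v": "std", "a": "std"})]

kept = [c for c in candidate_derivs(DIMS)
        if dimension_ok(c) and behavior_ok(c, behavior)]
print(kept)  # [('DERIV', 'x', 'v'), ('DERIV', 'v', 'a')] here
```

With complete dimensions and behaviors, filters like these cut the
candidate set down sharply; with incomplete information some filters
become vacuous, which is why the resulting model may not be unique.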
Hwee Tou Ng and Raymond J. Mooney
Proceedings of the Ninth National Conference on
Artificial Intelligence, pages 494-499, Anaheim, CA, July 1991.
This paper presents an algorithm for first-order Horn-clause abduction
that uses an ATMS to avoid redundant computation. This algorithm is
either more efficient or more general than any previous
abduction algorithm. Since computing all minimal abductive
explanations is intractable, we also present a heuristic version of
the algorithm that uses beam search to compute a subset of the
simplest explanations. We present empirical results on a broad range
of abduction problems from text understanding, plan recognition, and
device diagnosis which demonstrate that our algorithm is at least an
order of magnitude faster than an alternative abduction algorithm that
does not use an ATMS.
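Since computing all minimal explanations is intractable, the heuristic
variant can be pictured as beam search over partial explanations. The
sketch below is not the authors' ATMS-based algorithm; the encoding and
names (local_explanations, beam_explain, the rain/sprinkler atoms) are
hypothetical, and it ranks explanations purely by assumption-set size.

```python
# Illustrative sketch: beam search that explains observations one at a
# time, pruning to the `beam_width` simplest partial explanations at
# each step. Names and example data are hypothetical.

def local_explanations(atom, rules, assumable):
    """Ways to explain one atom: assume it, or explain a rule body."""
    out = [frozenset([atom])] if atom in assumable else []
    for body in rules.get(atom, []):
        partial = [frozenset()]
        for lit in body:
            partial = [e | s for e in partial
                       for s in local_explanations(lit, rules, assumable)]
        out.extend(partial)
    return out

def beam_explain(observations, rules, assumable, beam_width=3):
    beam = [frozenset()]  # partial explanations of observations so far
    for obs in observations:
        expanded = [e | s for e in beam
                    for s in local_explanations(obs, rules, assumable)]
        # Prune: keep only the simplest (fewest-assumption) candidates.
        beam = sorted(set(expanded), key=len)[:beam_width]
    return beam

rules = {"wet_grass": [["rain"], ["sprinkler"]],
         "wet_road":  [["rain"]]}
assumable = {"rain", "sprinkler"}
print(beam_explain(["wet_grass", "wet_road"], rules, assumable,
                   beam_width=2))
# e.g. [frozenset({'rain'}), frozenset({'rain', 'sprinkler'})]
```

Because pruning discards candidates, a heuristic version like this can
miss some minimal explanations; that incompleteness is the price paid
for tractability.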
Hwee Tou Ng and Raymond J. Mooney
Proceedings of the Eighth National Conference on
Artificial Intelligence, pages 337-342, Boston, MA, 1990.
Abduction is an important inference process underlying many intelligent
human activities, including text understanding, plan
recognition, disease diagnosis, and physical device diagnosis. In
this paper, we describe some problems encountered using abduction to
understand text, and present some solutions to overcome these
problems. The solutions we propose center on the use of a
different criterion, called explanatory coherence, as the
primary measure to evaluate the quality of an explanation. In
addition, explanatory coherence plays an important role in the
construction of explanations, both in determining the appropriate
level of specificity of a preferred explanation, and in guiding the
heuristic search to efficiently compute explanations of sufficiently
high quality.
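The abstract does not define its coherence metric here, but the
underlying intuition, preferring explanations that tie the observations
together rather than explaining each one in isolation, can be shown
with a deliberately simple proxy. The function below is an assumption
made for illustration only, not the authors' explanatory-coherence
measure: it scores an explanation by the fraction of observation pairs
that share at least one assumption.

```python
# Illustrative proxy for explanatory coherence (not the authors'
# metric): connected explanations score higher than fragmented ones.

def coherence(explanation_of: dict[str, set[str]]) -> float:
    """Fraction of observation pairs sharing at least one assumption.
    `explanation_of` maps each observation to its assumptions."""
    obs = list(explanation_of)
    pairs = [(a, b) for i, a in enumerate(obs) for b in obs[i + 1:]]
    if not pairs:
        return 1.0
    shared = sum(bool(explanation_of[a] & explanation_of[b])
                 for a, b in pairs)
    return shared / len(pairs)

# One shared cause beats two unrelated causes under this proxy.
print(coherence({"fever": {"flu"}, "cough": {"flu"}}))       # 1.0
print(coherence({"fever": {"heat"}, "cough": {"allergy"}}))  # 0.0
```

A measure of this general flavor can also guide search, since partial
explanations that already connect several observations are more
promising to extend.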