From: timm@cse.unsw.edu.au (Tim Menzies)
Newsgroups: comp.ai.shells
Subject: Re: Explanation Facility in Expert Systems
Date: Fri, 6 May 1994 06:05:33 GMT
Organization: none

i asked my bibliography for "explanations". this is what it returned
(highly recommend the stuff by Leake & Paris. i believe that "abduction"
will evolve as the underlying computational process of "explanation").

%0 Journal Article
%A Clancey, W.
%D 1983
%T The epistemology of rule-based systems: a framework for explanation.
%B Artificial Intelligence
%V 27
%P 289-350
%O Explanation requires higher-level constructs than just domain rules.

%0 Conference Proceedings
%A Gautier, P.O.
%A Gruber, T.R.
%D 1993
%T Generating Explanations of Device Behaviour Using Compositional Modelling and Causal Ordering
%B AAAI '93
%C Washington, USA
%P 264-270
%O Combines explanation with dialogue generation using causal ordering and
compositional modelling (a Forbus technique). Nice literature review on
causality; curious omission: the Pearl stuff.

%0 Journal Article
%A Leake, D.B.
%D 1991
%T Goal-Based Explanation Evaluation
%B Cognitive Science
%V 15
%P 509-545
%O Rather than assessing an explanation in terms of context-independent
syntactic criteria, Leake argues that its "goodness" is a goal-dependent
measure. Reads like a more general version of Paris's thesis.
"Explanation" in ES does not mean "rule traces". Like Paris, Leake argues
that "an explainer should tailor explanations towards serving the goal
for which it was intended". Gives long lists of different computational
approaches to explanation (e.g. CONSTRUCTIVE: explanation in terms of
knowledge structures such as scripts and plans; CONTRASTIVE: ...).
Note: as near as I can tell, Levesque '89 argues for minimal explanations
(syntactically shortest) while Simmons argues for specificity as an
explanation criterion (syntactically longest). Interestingly, in the
internals of Poole '93 (#235) abductive reasoner, we also see the use of
prior explanations (though these are not persistent between runs of the
program). Note also: i feel rule traces are more a debugging aid than an
explanation facility. Boasts about the merits of symbolic AI on
explanation grounds seem to confuse the need to debug a program in
mid-development with the need to create constructs that assist the user's
comprehension of the system, i.e. selecting a subset of the data
structures generated at runtime and filtering them in some way.

%0 Conference Proceedings
%A Leake, D.B.
%D 1993
%T Focusing Construction and Selection of Abductive Hypotheses
%B IJCAI '93
%P 24-29
%O Construction and selection of abductive hypotheses focused by specific
explanations of prior episodes (as well as goal-based criteria reflecting
current information needs). See also Spohrer for "focusing via prior
explanations" and Leake '91 for lots more on "goal-based criteria
reflecting current information needs".

%0 Conference Proceedings
%A Mittal, V.O.
%A Paris, C.L.
%D 1993
%T Generating Natural Language Descriptions with Examples: Difference between Introductory and Advanced Texts
%B AAAI '93
%C Washington, USA
%P 271-276
%O More on how explanation differs between experts and novices. This time,
a set of descriptors is proposed for examples, and rules are defined for
what style of example should be shown to whom.

%0 Conference Proceedings
%A Ng, H.T.
%A Mooney, R.J.
%D 1990
%T The Role of Coherence in Constructing and Evaluating Abductive Explanations
%B Working Notes of the 1990 Spring Symposium on Automated Abduction
%I UC Irvine
%V TR 90-32
%P 13-17
%O Death to context-independent criteria for assessing explanations.

%0 Book Section
%A Paris, C.L.
%D 1989
%T The Use of Explicit User Models in a Generation System for Tailoring Answers to the User's Level of Expertise
%B User Models in Dialogue Systems
%E A. Kobsa and W. Wahlster
%I Springer-Verlag
%P 200-232
%O Explanations can be characterised in terms of a constituency trace or a
process trace. Novices need process traces (descriptions of this doing
that to the other, etc.). Experts don't need the process information
(it's already internalised), so their explanations can be offered in
shorter terms (i.e. merely mentioning the parts used). But more than
that, explanations can be tailored to leap from process to constituency
traces (and back again) as the explanation algorithm moves over the KB
and accesses the user's profile of what bits they do and don't know.
That is, users are not just expert/novice but are treated on a continuum
of knowledge. Sounds a lot like a general formalism for explaining X to Y
when Y's level of knowledge is known. Great stuff.

%0 Journal Article
%A Poole, D.
%D 1988
%T A Logical Framework for Default Reasoning
%B Artificial Intelligence
%V 36
%P 27-47
%O Much more approachable than Etherington & Reiter. Lots of nice examples.
The user gives true facts and a pool of possible hypotheses they are
prepared to accept as part of an explanation. An explanation predicts the
expected behaviour (i.e. together with the facts it implies the
observations) and is consistent with the facts (i.e. does not predict
anything known to be false). The explanation should be viewed as a
"scientific theory" based on a restricted set of possible hypotheses. It
is also useful to view the explanation as a scenario in which some goal
is true; the user provides what is acceptable in such a scenario.
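to make the Poole framework concrete, here is a rough propositional
sketch (my own toy code, not Poole's Theorist; the function names and the
rain/sprinkler rules are invented for illustration). an explanation of a
goal is a set of accepted hypotheses which, together with the facts,
implies the goal without implying anything known to be false:

  # Toy Poole-style abduction over propositional Horn rules.
  # Hypothetical illustration only; not Poole's actual Theorist code.
  from itertools import combinations

  def closure(atoms, rules):
      """Forward-chain Horn rules (body, head) to a fixed point."""
      derived = set(atoms)
      changed = True
      while changed:
          changed = False
          for body, head in rules:
              if head not in derived and set(body) <= derived:
                  derived.add(head)
                  changed = True
      return derived

  def explanations(goal, facts, hypotheses, rules, known_false):
      """Minimal hypothesis sets that imply the goal and nothing known false."""
      found = []
      for k in range(len(hypotheses) + 1):
          for d in combinations(hypotheses, k):
              if any(e <= set(d) for e in found):
                  continue  # a smaller subset already explains the goal
              derived = closure(set(facts) | set(d), rules)
              if goal in derived and not (derived & set(known_false)):
                  found.append(set(d))
      return found

  rules = [(("rain",), "wet_grass"), (("sprinkler",), "wet_grass")]
  print(explanations("wet_grass", {"cloudy"}, ["rain", "sprinkler"],
                     rules, {"drought"}))
  # -> [{'rain'}, {'sprinkler'}]

obviously a real abductive reasoner (e.g. Poole '93) works over
first-order clauses with far smarter search; the point is just the two
conditions an explanation must satisfy (predicts the observations, stays
consistent with the facts).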
%0 Journal Article
%A Swartout, W.R.
%D 1983
%T XPLAIN: A system for creating and explaining expert consulting systems.
%B Artificial Intelligence
%V 21
%N 3
%P 285-325
%O Computer programs are implementations of a design. Not everything in a
design exists in the implementation, so explaining a program may require
access to information not available within the program. XPLAIN includes a
compiler that retains the intermediary constructs it uses when turning
high-level requirements into a knowledge-based system. These
intermediaries are useful (indispensable) during explanation.

-- 
Tim Menzies  =timm@cse.unsw.edu.au     | I don't know why I did it, I
 ,-_|\   AI Lab, Computer Science,     | don't know why I enjoyed it,
/     \  University of NSW, P.O. Box 1,| and I don't know why I'll do
\_,-._*  <- Kensington, Australia, 2033| it again.
     v   ph: +61-49-676-096            |              -- Bart Simpson