Newsgroups: comp.ai,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.edu!news2.near.net!MathWorks.Com!europa.eng.gtefsd.com!howland.reston.ans.net!pipex!warwick!uknet!festival!castle.ed.ac.uk!cam
From: cam@castle.ed.ac.uk (Chris Malcolm)
Subject: Re: Is Common Sense Explicit or Implicit?
References: <1994Aug22.042736.25458@news.media.mit.edu> <CvFMAI.3qG@festival.ed.ac.uk> <CvKC9v.Lr2@aisb.ed.ac.uk>
Message-ID: <CvpFB3.Iot@festival.ed.ac.uk>
Sender: news@festival.ed.ac.uk (remote news read deamon)
Organization: University of Edinburgh
Date: Tue, 6 Sep 1994 10:47:24 GMT
Lines: 104
Xref: glinda.oz.cs.cmu.edu comp.ai:24108 comp.ai.philosophy:20274

In article <CvKC9v.Lr2@aisb.ed.ac.uk> alasdt@aisb.ed.ac.uk (Alasdair Turner) writes:
>In article <CvFMAI.3qG@festival.ed.ac.uk>, cam@castle.ed.ac.uk (Chris Malcolm) writes:
>> In article <1994Aug22.042736.25458@news.media.mit.edu> minsky@media.mit.edu (Marvin Minsky) writes:

>> >What I'd like to see is an attempt to discuss
>> >the relations between **three or more** categories of knowledge types
>> >because I consider the "explicit-vs.-not" to be evidently unproductive.

>> EXPLICIT, IMPLICIT, and TACIT.

>> Explicit knowledge: you could (in principle) find the representation
>> of the knowledge in the creature, and it is used by the creature in
>> generating the observed knowledgeable behaviour ...

>> Implicit knowledge: not explicitly represented, but capable of being
>> made explicit by the available reasoning machinery operating on the
>> explicit knowledge.

>> Tacit knowledge: designed into the structure of the creature
>> (algorithmically, physically, etc.), and so correctly governing its
>> behaviour, but not available as meaningful knowledge to the creature

>Surely this still does not remove the `unproductive' Implicit vs
>Explicit?  What you have done is tacked a third `environmentally
>determined knowledge' group onto a PSS format, leaving the old
>implicit / explicit distinction in place.  Effectively this is simply
>adding the constraints of the hardware of the machine to
>the equation.

The unproductive nature of the explicit/implicit debate was in part
due to two problems:-

	1. Implicit was conflated with the very different category of
tacit, which I have clearly distinguished.

	2. The categorisation of knowledge is relative to the knower
and the level of description of the knower. I have pointed this out
and made clear the criteria.

So, rather than "tacking on" a third category, I have distinguished
two categories from within (some people's version of) implicit. Note
too that "tacit" is more than just hardware and environment: it could
be (and in our robots often is) software.

>You may argue that this is all that is required to force a reasonable
>interpretation of implicit and explicit knowledge.  Not true.
>Although it is clearly true that the knowledge within the being
>includes its physical structure, I think that your explicit and
>implicit knowledge assume an internal program within that being, a
>program which claims no relation to the tacit knowledge.

I don't assume an internal program. Since these two categories as I
define them involve knowledge which can be reasoned about by the
knower, then some kind of machinery capable of doing this reasoning is
an essential part of the knowledge/knower package under discussion. It
can be (and in AI almost always is) a program, but that is an
implementation choice, not a requirement or a presumption.

You seem to presume that my "tacit" knowledge has nothing to do with
any internal knowledge representation and reasoning program in the
machine. Not so. In most reasoning programs the facts the program
reasons about come under my category of explicit knowledge, whereas
the rules of reasoning it uses come under my category of tacit. This
derives straightforwardly from the definitions I gave (quoted by you
above).

>However,
>tacit knowledge (as defined) must play some part in the determination
>of implicit and explicit knowledge (is your knowledge not affected by
>the way your brain is structured?).

I think I have answered this already. Note that from the point of view
of one fragment of software, another fragment (which may operate by
means of its own explicit knowledge) may constitute the environment of
the first. This is very obvious in the case of artificial life and
robot simulations, but is a very general and widely applicable kind of
relationship.

>In this case, [I'm afraid I've
>missed a few of the steps to this conclusion --- see sig] implicit and
>explicit knowledge become part of the tacit knowledge of the machine
>and our three separate categories are destroyed.

There are inter-relationships between them, and I suspect that it is
impossible to build a reasoning system which does not involve all
three, but the three categories are not destroyed by this interaction.
I constructed this system in order to be able to discuss the
experimental reasoning systems I build, since the "explicit/implicit"
distinction, although widely used, is a morass. I find that in most
cases the distinctions are clear and obvious, with sometimes one or
two awkward cases which require a little debate.

If you find the categories evaporating and merging as you try to use
them, this is usually a sign that you are confusing the viewpoints of
two different knowers. Don't forget that complex knowers (such as
ourselves) include sub-knowers.

>Paf Turner (Alasdair Turner) alasdt@aisb.ed.ac.uk
>MSc student, Dept AI, University of Edinburgh
-- 
Chris Malcolm    cam@uk.ac.ed.aifh          +44 (0)31 650 3085
Department of Artificial Intelligence,    Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205
"The mind reigns, but does not govern" -- Paul Valery
