From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!christo Tue Feb 11 15:26:07 EST 1992
Article 3621 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.psychology:2062 comp.ai.philosophy:3621
Newsgroups: sci.psychology,comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!christo
From: christo@psych.toronto.edu (Christopher Green)
Subject: KANT AND AI 
Message-ID: <1992Feb10.220014.2561@psych.toronto.edu>
Organization: Dept. of Psychology, University of Toronto
Date: Mon, 10 Feb 1992 22:00:14 GMT


It seems pretty clear to me that not many people are understanding the
distinction I've been trying to push.  Perhaps I was not as clear as I
might have been because I thought that all these distinctions were pretty
standard in epistemology. As a source for the chart below, I credit Norm
Swartz at Simon Fraser University (of whose opinions about AI I know
nothing, but who taught me most of the epistemology I know).  He is
co-author of _Possible worlds: An introduction to logic and its
philosophy_ with Ray Bradley (Hackett, 1979). It's worth a look if the
chart below is new to you.

                          EPISTEMIC STATUS
                        analytic/  synthetic/
                        necessary  contingent
                       ----------------------
                       | logic & | Kant's   | 
      M       a priori |  math   |  big     |
      E       (reason) |         | question |   
      T                ----------------------
      H                |         | empirical|
      O   a posteriori |   AI?   |  science |
          (observation)|         |          |
                       ----------------------

Although cognitive science asks empirical questions such as "Is the mind
a digital computer?", AI, per se, does not. Much of AI isn't the least bit
interested in the cognitive scientist's question. The questions of AI go
something like, "Will this sort of program be able to compute function
such-and-such?"  Although AI-ists may answer these sorts of questions
a posteriori, the questions are not contingent, and therefore not
empirical.
If it turns out that every mental process is computable,
then AI will have answered Cognitive Science's empirical question, but
without having done any empirical work. Andy Kukla of the University of
Toronto likens the work of AI, I think correctly, to the work of a
mathematician who is trying to see if Fermat's last theorem is provable.
If it is, then s/he has also proven the contingent proposition, "All of
Fermat's mathematical intuitions were correct," but without having done
any empirical work.
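A toy sketch of the kind of question I mean (the two little programs here
are invented purely for illustration): whether a given program computes a
given function is a necessary matter, even when we settle it by running
the program and observing what it does.

```python
# Does 'candidate' compute the parity function?  We answer a
# posteriori -- by running it and observing outputs -- yet the
# answer is a necessary truth about the program, not a contingent
# fact about the world.
from itertools import product

def candidate(bits):
    # hypothetical program whose behaviour is under study
    total = 0
    for b in bits:
        total += b
    return total % 2 == 1

def parity(bits):
    # the function it is claimed to compute
    return sum(bits) % 2 == 1

# "Observe" the program on every input up to length 5.
agrees = all(candidate(list(p)) == parity(list(p))
             for n in range(6)
             for p in product([0, 1], repeat=n))
print(agrees)  # True
```

The observation tells us which necessary proposition is true; it does not
make the proposition contingent.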

"Empirical" and "a posteriori" are often confused because of the
empiricists' attempts to dismiss Kant's efforts to find examples
of synthetic a priori knowledge.  If none exists, as most philosophers now
believe, then all synthetic knowledge must be discovered a posteriori and,
thus, all synthetic knowledge is empirical.  This does not imply, however,
that all knowledge gained via a posteriori means is contingent. That is a
simple fallacy, and one that I find AI falls into constantly.
As far as I can see, this is air-tight, and shows that Newell and Simon
were simply confusing the terms "empirical" and "a posteriori" when they
wrote that AI is an empirical science.
-- 
Christopher D. Green                christo@psych.toronto.edu
Psychology Department               cgreen@lake.scar.utoronto.ca
University of Toronto
---------------------


