Newsgroups: comp.lang.prolog,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!delmarva.com!imci4!newsfeed.internetmci.com!news.mathworks.com!uhog.mit.edu!news!ml.media.mit.edu!minsky
From: minsky@ml.media.mit.edu (Marvin Minsky)
Subject: Re: please help me out with this!!
Message-ID: <1996Mar19.195052.15319@media.mit.edu>
Sender: news@media.mit.edu (USENET News System)
Organization: MIT Media Lab
References: <4i79o3$9a4@holly.cc.uleth.ca> <1996Mar14.163730.4374@media.mit.edu> <826928550snz@longley.demon.co.uk>
Date: Tue, 19 Mar 1996 19:50:52 GMT
Lines: 45
Xref: glinda.oz.cs.cmu.edu comp.lang.prolog:15004 comp.ai.philosophy:39108

In article <826928550snz@longley.demon.co.uk> David@longley.demon.co.uk writes:
>In article <1996Mar14.163730.4374@media.mit.edu>
>           minsky@ml.media.mit.edu "Marvin Minsky" writes:

>> Generally, my impression is that Prolog has been adequate for some,
>> but not good for most other, AI research, because it (a) made it
>> hard to apply knowledge to the search process itself and (b) made it
>> hard to use representations other than predicate calculus.

>My question here is less directly to do with the merits of PROLOG but
>rather the objectives of AI itself. If one is trying to model human
>'reasoning' (as heuristics and all their biases) I can readily see 
>that any models based on the Predicate Calculus will be deemed
>inadequate. But if one wants to build systems which are intelligent
>by "normative" extensional standards, the picture is very different.
>
>This is, I believe, a theme which has been spluttering along (sometimes
>quite passionately <g>) in this newsgroup for some time now.
 [...]
>Surely it is one thing to model human "natural assessments" using
>technology such as Artificial Neural Nets (cf Gluck and Bower 1988; 1990)
>and another to build rational models along the lines of expert systems
>based on empirical data drawn from extensional analysis.

I don't agree.  

1. To solve a hard problem, you cannot search exhaustively.  Instead,
you need to engage knowledge about what to try, and how to modify the
search by using the results of previous failed attempts.  This means
that you need valuable knowledge about which knowledge to engage.
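To make (1) concrete, here is a toy sketch (in Python rather than
Prolog, purely for brevity) comparing blind breadth-first search with a
best-first search steered by a distance-to-goal estimate.  The state
space, move set, and heuristic are all invented for illustration:

```python
import heapq
from collections import deque

# Toy state space (invented for illustration): states are positive
# integers; from state s you may move to s+1, s+3, or 2*s.
def neighbors(s, limit=10_000):
    return [t for t in (s + 1, s + 3, 2 * s) if 0 < t <= limit]

def exhaustive_search(goal):
    """Breadth-first: no knowledge about the problem, tries everything."""
    seen, frontier, expanded = {0}, deque([0]), 0
    while frontier:
        s = frontier.popleft()
        expanded += 1
        if s == goal:
            return expanded          # states examined before success
        for t in neighbors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)

def heuristic_search(goal):
    """Best-first: a distance-to-goal estimate steers the search."""
    seen, frontier, expanded = {0}, [(goal, 0)], 0
    while frontier:
        _, s = heapq.heappop(frontier)
        expanded += 1
        if s == goal:
            return expanded
        for t in neighbors(s):
            if t not in seen:
                seen.add(t)
                # The "knowledge": prefer states that look closer to goal.
                heapq.heappush(frontier, (abs(goal - t), t))
```

On this toy problem the best-first version examines far fewer states
than breadth-first does.  The point, of course, is that the heuristic
itself is knowledge about the domain -- and choosing it is the hard part.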

2. So the question is how the system can "know" which empirical data
to apply to what.  This could in turn be regarded as "empirical data",
but the fact remains that you need to apply that knowledge to the
other knowledge.  I don't see any important difference between
"rational" and "human" models in that area.

3. Finally, I don't think that "rational" means anything important,
when it comes to searching through unknown realms.  You need good
heuristic search knowledge, and that will be needed for *any* system
that faces new problems.


