Newsgroups: comp.ai
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!nagle
From: nagle@netcom.com (John Nagle)
Subject: Re: Is There A TRUE Industry LEADER in AI?
Message-ID: <nagleD2xFBx.10v@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <thomas-1801951023400001@obc.is.net> <nagleD2o6Lp.JC5@netcom.com> <1995Jan24.005510.5628@driftwood.cray.com>
Date: Tue, 24 Jan 1995 20:23:57 GMT
Lines: 39

chuckm@willow129.cray.com (Charles Matthews) writes:
>In article <nagleD2o6Lp.JC5@netcom.com>, nagle@netcom.com (John Nagle) writes:
>>       As it turned out, "expert systems" aren't "intelligent"; they're
>> a useful way to store certain types of reference-book data and not much more.
>> 

>Hmmm, I don't agree. Most of the debates regarding the level of intelligence
>exhibited by expert systems tend to end up in philosophical discussions 
>trying to define what intelligence actually is. If you take a more pragmatic
>view and evaluate expert systems according to the following simplistic 
>criteria: 
>1) Does the ES solve a useful problem?
>2) Does the ES perform at a level of expertise similar to that of a 
>   human expert in the problem domain?

       Those are evaluation criteria for application programs generally.
A good payroll or personal-finance program meets them too.

       The basic problem with expert systems was stated years ago by
one of the developers of Mycin (an antibiotic-advice program), who wrote,
"Mycin doesn't know about bacteria".  That's still the problem; there's 
an inadequate underlying model of what's really going on in almost all 
expert systems.  This is related to the "common-sense problem".
Most expert systems don't know what their predicates mean.  (But check
out NQTHM, the Computational Logic theorem prover, which does know
what its predicates mean, because the underlying theory is solid.)
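
       The point about predicates can be made concrete with a toy
rule engine (a hypothetical sketch for illustration, not Mycin's
actual rules or architecture): the inference is purely syntactic,
so the system derives exactly the same conclusions no matter what
the symbols "mean".

```python
# Minimal forward-chaining rule engine (hypothetical sketch).
# Predicates are opaque strings; the engine manipulates them
# syntactically, with no model of what they denote.

RULES = [
    ({"gram_positive", "coccus"}, "staphylococcus_suspected"),
    ({"staphylococcus_suspected"}, "recommend_penicillinase_resistant"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Derives staphylococcus_suspected and recommend_penicillinase_resistant.
# Rename every predicate consistently -- "gram_positive" could be "p1" --
# and nothing in the inference changes.  In that sense the program
# doesn't "know about bacteria".
print(forward_chain({"gram_positive", "coccus"}, RULES))
```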

       Worse, there isn't much feedback.  Most expert systems don't 
learn from their mistakes.  Many attempts to do something about this
have been made, and in specific domains there are some successes.  But
the state of the art in general "learning" is still weak.  
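
       The missing feedback loop can be sketched as well (again a
hypothetical illustration, not any production system): even the
crudest form of learning from mistakes -- penalizing a rule whose
conclusion turns out wrong -- takes machinery most expert systems
never had, and it still dodges the hard part, credit assignment
over chains of rules.

```python
# Hypothetical sketch: certainty-weighted rules plus a naive
# feedback step.  Typical expert systems had the first half
# (weighted rules) but not the second (updating weights from
# observed outcomes).

rules = {
    "staph_rule": {"premises": {"gram_positive", "coccus"},
                   "conclusion": "staphylococcus",
                   "certainty": 0.7},
}

def conclude(facts, rules, threshold=0.5):
    """Return conclusions of rules that match and clear the threshold."""
    return {r["conclusion"] for r in rules.values()
            if r["premises"] <= facts and r["certainty"] >= threshold}

def feedback(rule_name, was_correct, rules, step=0.1):
    """Naive update: nudge a rule's certainty toward observed outcomes."""
    r = rules[rule_name]
    r["certainty"] += step if was_correct else -step
    r["certainty"] = max(0.0, min(1.0, r["certainty"]))

# After enough wrong outcomes the rule drops below threshold and
# stops firing.  Note how crude this is: when a chain of rules
# produces a wrong answer, which rule do you punish?
```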

       The practical effect is that expert systems are useful for
giving advice but can't be relied on very far.

       It's not an issue of what "intelligence" is; expert systems are still
so dumb they aren't even in the range where that's worth arguing about.

					John Nagle
