Newsgroups: comp.ai
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!cam-news-feed3.bbnplanet.com!news.bbnplanet.com!cam-news-hub1.bbnplanet.com!howland.erols.net!ix.netcom.com!nagle
From: nagle@netcom.com (John Nagle)
Subject: Re: Does CYC really understand natural language? How?
Message-ID: <nagleE4rEn6.EwC@netcom.com>
Organization: Netcom On-Line Services
X-Newsreader: NN version 6.5.0 CURRENT #9
References: <5cmjh2$8jp@atlantis.utmb.edu>
Date: Wed, 29 Jan 1997 07:40:18 GMT
Lines: 27
Sender: nagle@netcom6.netcom.com

rshen@marlin.utmb.edu (Rong Shen) writes:
>	I was just re-reading Lenat's article in the Sept. 1995 
>Scientific American. On page 82, he wrote that "Similarly, CYC could 
>parse the request 'Show me happy people' and deliver a picture whose 
>caption reads 'A man watching his daughter learning to walk.'" 

>	It sounds like the CYC could understand the request as we humans 
>do and could generate an appropriate response. I checked the web site at 
>www.cyc.com (CYC-NL System), and what I found was a simple description of 
>how the lexicon, the syntactic parser, and the semantic interpreter 
>worked. But what happened between the semantic interpreter and the final 
>response to the request was vague.
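
     For what it's worth, here's a toy sketch of the sort of pipeline that
web page describes: a lexicon, a (here trivial) parse, semantic
interpretation into concepts, and then the retrieval step the page leaves
vague.  None of this is Cyc's actual code; every name and structure below
is invented for illustration.

```python
# Toy sketch (invented, not Cyc's code) of the CYC-NL-style pipeline:
# lexicon -> parse -> semantic interpretation -> retrieval.

# Lexicon: maps surface words to a part of speech and a semantic concept.
LEXICON = {
    "show":   ("Verb",      "Retrieve"),
    "me":     ("Pronoun",   "Speaker"),
    "happy":  ("Adjective", "FeelingHappiness"),
    "people": ("Noun",      "Person"),
}

# Captioned pictures, each annotated with the concepts its caption implies.
CAPTIONS = [
    ("A man watching his daughter learning to walk",
     {"Person", "FeelingHappiness"}),
    ("An empty parking lot", {"ParkingLot"}),
]

def interpret(request):
    """Semantic interpretation: reduce a request to a set of target concepts."""
    words = request.lower().rstrip("?.").split()
    concepts = set()
    for w in words:
        pos, concept = LEXICON.get(w, (None, None))
        if pos in ("Adjective", "Noun"):   # content words carry the query
            concepts.add(concept)
    return concepts

def answer(request):
    """Retrieval: return captions whose concept sets cover the request."""
    target = interpret(request)
    return [cap for cap, concepts in CAPTIONS if target <= concepts]

print(answer("Show me happy people"))
# -> ['A man watching his daughter learning to walk']
```

The hard part, of course, is the step this sketch waves away: getting from
"happy people" to a caption that never uses either word.  That is exactly
the gap the original poster noticed.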

     Yeah.  A lot about CYC is vague.  Prof. Vaughn Pratt from Stanford
tried it about two years ago, and he wasn't very impressed.  It's
not at all clear that CYC is much smarter than the Julia bot found in
some MUDs, although CYC has a much bigger database and budget.

     The basic problem with CYC is that it's based on the premise that
if you put in enough rules about properties of the real world, some
sort of common sense will emerge.  That's an interesting conjecture,
but there's little evidence yet that it's true.  A decade ago I remarked
to one of the Cyc people that I didn't think it would work, but that it
was worth trying the approach to find out exactly why it wouldn't work.
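
     To make the premise concrete, here's a minimal sketch (again invented,
nothing like Cyc's machinery) of what "rules about properties of the real
world" plus an inference engine buys you: a naive forward chainer deriving
a "common sense" conclusion from a few hand-entered facts and rules.

```python
# Minimal forward chainer (invented for illustration, not Cyc's engine).
# Hand-entered facts and rules; inference runs rules to a fixed point.

FACTS = {("isa", "Fido", "Dog")}

# Each rule: if a fact matches the left pattern, assert the right one,
# with "?x" standing for whatever individual matched.
RULES = [
    (("isa", "?x", "Dog"),    ("isa", "?x", "Mammal")),
    (("isa", "?x", "Mammal"), ("isa", "?x", "Animal")),
    (("isa", "?x", "Animal"), ("can", "?x", "Move")),
]

def forward_chain(facts, rules):
    """Apply every rule to every fact until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (rel, _var, cls), (rel2, _var2, val2) in rules:
            for f in list(facts):
                if f[0] == rel and f[2] == cls:
                    new = (rel2, f[1], val2)
                    if new not in facts:
                        facts.add(new)
                        changed = True
    return facts

derived = forward_chain(FACTS, RULES)
print(("can", "Fido", "Move") in derived)   # -> True
```

The conjecture is that with enough of these rules, behavior we'd call
common sense falls out.  The open question is whether any feasible number
of hand-entered rules gets you there, which is why trying it was worth
doing either way.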

						John Nagle
