Newsgroups: rec.arts.books,comp.ai,comp.ai.philosophy,sci.cognitive,sci.psychology.theory
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!oitnews.harvard.edu!news.dfci.harvard.edu!camelot.ccs.neu.edu!chaos.dac.neu.edu!usenet.eel.ufl.edu!news.mathworks.com!uunet!in2.uu.net!allegra!alice!rhh
From: rhh@research.att.com (Ron Hardin <9289-11216> 0112110)
Subject: Re: Does AI make philosophy obsolete?
Message-ID: <DFwEEE.J85@research.att.com>
Organization: AT&T Bell Labs, Murray Hill, NJ
References: <DFnG0u.1Gu@research.att.com> <44h0ga$dqh@scotsman.ed.ac.uk> <DFp1px.IHE@research.att.com> <44jp46$9p8@scotsman.ed.ac.uk> <DFqyp6.9oD@research.att.com> <JMC.95Oct1094721@Steam.stanford.edu> <DFsBDo.1x9@research.att.com> <JMC.95Oct1163339@Steam.stanford.edu> <DFstCz.MJ3@research.att.com> <JMC.95Oct1195337@Steam.stanford.edu> <DFtMqy.9tD@research.att.com> <JMC.95Oct2092236@Steam.stanford.edu> <DFu7D2.EFz@research.att.com> <JMC.95Oct2190847@Steam.stanford.edu> <DFvHFv.1Hp@research.att.com> <JMC.95Oct3084423@Steam.stanford.edu>
Date: Wed, 4 Oct 1995 00:23:01 GMT
Lines: 27
Xref: glinda.oz.cs.cmu.edu comp.ai:33834 comp.ai.philosophy:33332 sci.cognitive:9849 sci.psychology.theory:924

John McCarthy writes:
>     It seems AI is attackable from two sides, a literary one and
>     a mechanical one.  The literary one demonstrates what AI
>     cannot imagine, and the mechanical one makes AI
>     unimaginable.
>
>This is too deep for me - or rather too cute.

I think that if a way of doing a problem is mechanical enough,
you can't rhapsodize about its displaying intelligence (the BDDs),
but exactly the same problem in another form (logical sentences)
seems to call for intelligence - you can imagine its working.
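To make the mechanical side concrete (this sketch is mine, not from the thread; the function names are hypothetical): a reduced ordered BDD falls out of a blind recursion - Shannon expansion plus one reduction rule - with nothing in it to rhapsodize about.

```python
# Hypothetical sketch: building a reduced ordered BDD by Shannon
# expansion.  Dropping a node whose two branches agree is the whole
# "reduction"; equal subtrees compare equal as tuples, so sharing
# falls out of == for free.

def build(f, names, env=None):
    """f maps a {name: bool} assignment to a bool; names fixes the
    variable order.  Returns True/False or a node (name, low, high)."""
    if env is None:
        env = {}
    if not names:
        return f(env)                       # all variables decided
    v, rest = names[0], names[1:]
    lo = build(f, rest, {**env, v: False})  # Shannon cofactor v=0
    hi = build(f, rest, {**env, v: True})   # Shannon cofactor v=1
    if lo == hi:                            # branch doesn't matter:
        return lo                           # drop the node entirely
    return (v, lo, hi)
```

Two syntactically different but equivalent formulas collapse to the identical diagram - say (a and b) or c versus its De Morgan rewrite - so the "intelligence" of recognizing equivalence reduces to structural comparison.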

It's a literary experiment: change the context to a mechanical
one, and see if that call for intelligence disappears.  I think it does.

Supposing you wanted to specify the missionary problem with BDDs
as input instead of logical sentences, and wanted to reuse an old
result, you'd be looking for homomorphisms between BDDs or something,
decide it's NP-complete and the hell with it.  You could maybe do the
missionary problem itself, but it wouldn't scale.

The very same specification in sentence form seems to produce a problem
requiring intelligence.
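And yet in that sentence/state form the whole puzzle is blind search. A hypothetical Python sketch (mine, not anyone's AI system - names and representation are assumptions), breadth-first over (missionaries, cannibals, boat) states:

```python
from collections import deque

def missionaries(n=3, boat=2):
    """Shortest solution to the missionaries-and-cannibals puzzle by
    breadth-first search.  State = (missionaries on start bank,
    cannibals on start bank, boat on start bank?)."""
    def safe(m, c):
        # cannibals may not outnumber missionaries on either bank
        # (unless that bank has no missionaries at all)
        return (m == 0 or m >= c) and (n - m == 0 or n - m >= n - c)
    start, goal = (n, n, 1), (0, 0, 0)
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        (m, c, b), path = frontier.popleft()
        if (m, c, b) == goal:
            return path
        for dm in range(boat + 1):              # missionaries in boat
            for dc in range(boat + 1 - dm):     # cannibals in boat
                if dm + dc == 0:
                    continue                    # boat can't cross empty
                s = (m - dm, c - dc, 0) if b else (m + dm, c + dc, 1)
                m2, c2, _ = s
                if 0 <= m2 <= n and 0 <= c2 <= n and safe(m2, c2) \
                        and s not in seen:
                    seen.add(s)
                    frontier.append((s, path + [s]))
    return None                                 # no solution
```

Nothing here looks like intelligence either - it's the same exhaustive grind as the BDD case, just over states instead of boolean nodes, which is rather the point of the experiment.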

I'm not familiar with how the story goes in AI, but how does it
do on NP problems?
