Newsgroups: comp.ai
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!gatech!news.mathworks.com!zombie.ncsc.mil!simtel!lll-winken.llnl.gov!fnnews.fnal.gov!gw1.att.com!nntpa!news
From: shaw@lotus.lc.att.com (Andrew M. Shaw)
Subject: Re: Does AI make philosophy obsolete?
Message-ID: <DG1C9z.3HL@nntpa.cb.att.com>
Sender: news@nntpa.cb.att.com (Netnews Administration)
Nntp-Posting-Host: lotus.lc.att.com
Reply-To: shaw@lotus.lc.att.com
Organization: AT&T Bell Laboratories
References: <DG00Kn.L47@research.att.com>
Date: Fri, 6 Oct 1995 16:25:11 GMT
Lines: 43

In article <DG00Kn.L47@research.att.com>, rhh@research.att.com (Ron Hardin <9289-11216> 0112110) writes:
>
> [NP-complete defeats AI, so why is it different than any other method]
>
> I was thinking that blind dumb computation is exponential in problem size
> and so the problem is undoable.
> 
>  >Are thought by whom?  As I mentioned above, humans can't solve NP-complete
>  >problems well either.  I've never encountered (outside of this discussion)
>  >a claim that NP-completeness has anything to do with intelligence.
> 
> I think we agree, though I didn't want to speculate on whether humans
> can solve NP-complete problems.  All I wanted was that for these problems,
> the AI people - and also their detractors! - see no possibility of
> help from AI.   So for these problems, therefore, the domains of
> blind dumb computation and artificially intelligent computation are identical.
> AI people are not inclined to see their techniques as special here.
> I believe this is a rigorous step of this novel argument?

Most likely I'm missing the point again, but is it not true that people
*do* solve NP-complete problems in the sense that they are satisfied with
a good-enough result?

That is, neither AI nor people can "solve" NP-complete problems through
exhaustive computation, but people are smart enough to recognize this.
Therefore, if your AI were smart enough to realize that a minimum solution
to the TSP was out of reach, but offered one that was fairly good, wouldn't
that validate the technique?
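To make the "fairly good" point concrete, here is one minimal sketch of the
idea (my own illustration, not anything Ron proposed): the classic
nearest-neighbor heuristic for the TSP gives up on the minimum tour and
settles for a cheap approximation in O(n^2) steps instead of exponential
blind search.

```python
import math

def nearest_neighbor_tour(points):
    """Greedy TSP heuristic: from the current city, always visit the
    closest unvisited city.  Runs in O(n^2) time instead of the
    exponential cost of exhaustive search, at the price of a tour that
    is merely fairly good, not guaranteed minimal."""
    unvisited = set(range(1, len(points)))
    tour = [0]  # start arbitrarily at city 0
    while unvisited:
        here = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour, returning to the start."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

The heuristic can be arbitrarily far from optimal on adversarial inputs,
which is exactly the trade the paragraph above is pointing at: recognizing
that the exact answer is out of reach, and being satisfied anyway.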

> Therefore, I wanted to continue, the emergent intelligence and other
> such things were thought not to occur when such a problem was being
> computed;  whereas on the other hand the very same AI techniques
> were thought to give rise to magical powers elsewhere.  I wanted this to
> lead to wonderment about the mechanism that might govern the shift
> in thinking in AI people.

I'm not sure what the magical powers are supposed to be ... is it the
shift from thinking about the problem (i.e., trying to solve it) to thinking
about solving the problem (i.e., trying to generate a solution method)?
