From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!psinntp!norton!brian Mon Mar  9 18:33:08 EST 1992
Article 4067 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!psinntp!norton!brian
From: brian@norton.com (Brian Yoder)
Subject: Re: Strong AI and panpsychism
Message-ID: <1992Feb27.023234.49@norton.com>
Organization: Symantec / Peter Norton
References: <1992Feb25.202744.27815@organpipe.uug.arizona.edu>
Date: Thu, 27 Feb 1992 02:32:34 GMT
Lines: 100

bill@NSMA.AriZonA.EdU (Bill Skaggs) writes:
> In article <1992Feb25.105322.24546@norton.com> 
> brian@norton.com (Brian Yoder) writes:
>>This is nonsense. Arbitrary positions such as the one that rocks are intelligent
>>should not be considered "possible"...they should be tossed out as meaningless.
>>Of course I can't prove that there are not intelligent processes going on inside
>>rocks, but then you can't expect me to prove negatives like that anyway. Where's
>>your evidence that rocks have any intelligence?  Until you can come up with some,
>>you have no business claiming that they might have some.
 
>   This is a misunderstanding.  Nobody is claiming that rocks are
> intelligent.  The argument is that a certain definition of "intelligence"
> that seems reasonable is actually not reasonable because it implies
> that rocks (and everything else) are intelligent.
 
>   I will briefly repeat the argument.  
 
>   Proposed definition:  An object is "intelligent" if it implements
> some sufficiently sophisticated set of programs.

On what basis would anyone define "intelligent" this way?  Is this related
to Searle's arguments about AI?
 
>   This raises at least two questions:  1) What is a program; 2) What
> does it mean to "implement"?
 
As well as "What is 'sophisticated'?".  Actually I wouldn't even let someone
get past this much.  Does anyone seriously claim that this is what intelligence
is?  I would define it more along the lines of "An entity is intelligent if
it is conscious of reality and can retain and operate on the information it
collects," with the "more intelligent" vs. "less intelligent" distinction
being made with regard to a combination of how much information the entity
retains, how quickly it operates, and how well it can derive new (true)
knowledge from previous knowledge.

One of my criticisms of Searle is that he spends a lot of time talking about
"programs" and "instructions" rather than talking about information, perception,
consciousness, concepts, and the like.  He could just as easily say, "Humans
have brains.  Brains are made of atoms.  Atoms cannot think.  Therefore
brains cannot think."  The mechanisms used to produce an effect are
unimportant when discussing how (or whether) the process can be generated
in principle.

>   If you accept the (modified) Church-Turing thesis (which most people do),
> a program can be identified with a finite state automaton, so the first
> question is no problem.

I agree that one aspect of a program can be characterized as an FSA, but I
disagree with Searle's definition of what a "program" is, and I don't agree
that, just because an FSA can be used as a description of what a program does,
it is a useful way of understanding intelligence or programs.  To analogize,
I agree that music can be completely represented as a series of digital samples,
but to attempt to describe and discuss concepts like "melody" or "passion" or
"skill" in those terms is worthless.  This fact doesn't mean that "melody", 
"skill", and "passion" are invalid concepts, it just means that we are looking
in the wrong place for them and in the wrong way.  Searle makes the same kind
of mistake.
 
>   The obvious-seeming answer to the second question is that an
> object implements a program (= FSA) if there is a mapping
> from states of the object to states of the FSA such that
> the state-transition rules of the Turing machine are respected by
> the mapping.

Just because any process CAN BE thought of as an FSA doesn't mean that it IS
an FSA.
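The trick behind Putnam's argument (quoted just below) is easy to see in a
toy sketch.  Here is a minimal Python illustration, with a made-up "rock"
modeled as a list of distinct readings: pair the i-th physical state with
the i-th state of an arbitrary FSA, and the resulting mapping respects the
transitions by construction.  That is precisely the sense in which the
overly broad definition lets a rock "implement" almost any program.

```python
# Toy sketch of Putnam-style "trivial implementation".  The FSA and
# the rock's states are invented for the example.

def fsa_run(transition, start, steps):
    """Generate the state sequence of an input-free FSA."""
    state = start
    seq = [state]
    for _ in range(steps):
        state = transition[state]
        seq.append(state)
    return seq

# A toy 3-state FSA: A -> B -> C -> A -> ...
transition = {"A": "B", "B": "C", "C": "A"}
run = fsa_run(transition, "A", 5)      # ['A','B','C','A','B','C']

# The rock's physical states: any six distinct values will do.
rock_states = [17.01, 17.02, 17.03, 17.04, 17.05, 17.06]

# The "implementation" mapping: i-th rock state -> i-th FSA state.
# It is a genuine mathematical function, since the rock states are
# distinct, and it carries each rock-state transition onto the
# corresponding FSA transition by construction.
mapping = dict(zip(rock_states, run))
assert [mapping[s] for s in rock_states] == run
```

By the same move, a different pairing would make the same rock "implement"
a different FSA, which is the punch line of the argument.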
 
>   What is a "mapping"?  There is no ambiguity here:  "mapping" just
> means a function, in the mathematical sense.

That certainly is not obvious to me!  It takes a huge jump in context from
one area to another that has nothing to do with it, but there are more
important problems than that...
 
>   Now Putnam's argument, which I will not repeat, is that this
> seemingly natural definition is bad, because with such a broad
> notion of implementation it can be shown that every physical object
> (such as a rock) implements every program, or at least an enormous
> set of programs.  The conclusion is that some more restrictive
> notion of implementation is needed.   

I believe I have heard that argument (though it was some time ago).
As I remember, it was another case of horrible context jumping.  To say that
intelligence is a program, that programs are FSAs, and that everything is an
FSA, and therefore that everything is intelligent, is just ludicrous.  It
reminds me
of an old proof that all horses have infinite numbers of legs:

1) Horses have an even number of legs.
2) They have two legs behind and forelegs in the front.
3) Two legs plus four legs is six legs.
4) Six legs is an odd number of legs for a horse to have.
5) The only number that is both even and odd is infinity.
6) Therefore all horses have infinitely many legs. 
QED.


-- 
-- Brian K. Yoder (brian@norton.com) - Q: What do you get when you cross     --
-- Peter Norton Computing Group      -    Apple & IBM?                       --
-- Symantec Corporation              - A: IBM.                               --