From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Mar  9 18:33:24 EST 1992
Article 4093 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and panpsychism
Message-ID: <1992Feb27.221933.1168@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Feb25.202744.27815@organpipe.uug.arizona.edu> <1992Feb27.023234.49@norton.com>
Date: Thu, 27 Feb 1992 22:19:33 GMT

In article <1992Feb27.023234.49@norton.com> brian@norton.com (Brian Yoder) writes:
>bill@NSMA.AriZonA.EdU (Bill Skaggs) writes:
>>   This is a misunderstanding.  Nobody is claiming that rocks are
>> intelligent.  The argument is that a certain definition of "intelligence"
>> that seems reasonable is actually not reasonable because it implies
>> that rocks (and everything else) are intelligent.
> 
>>   I will briefly repeat the argument.  
> 
>>   Proposed definition:  An object is "intelligent" if it implements
>> some sufficiently sophisticated set of programs.
>
>On what basis would anyone define "intelligent" this way?  Is this related
>to Searle's arguments about AI?

This is the way in which *AI*, not Searle, defines intelligence.  If you
don't like it, then you don't like AI...

[stuff deleted]


>One of my criticisms of Searle is that he spends a lot of time talking about
>"programs" and "instructions" rather than talking about information, perception,
>consciousness, concepts, and the like. 

Searle is discussing AI.  AI is composed of programs carrying out instructions.


> He could just as easily say that 
>"Humans have brains.  Brains are made of atoms.  Atoms cannot think.  Therefore
>brains cannot think.".  

But we *know* subjectively that brains *do* think.  We don't know the same for
computers.

> What mechanisms are used to produce an effect are 
>unimportant when discussing how (or if) the process can be generated in principle.
>
>>   If you accept the (modified) Church-Turing thesis (which most people do),
>> a program can be identified with a finite state automaton, so the first
>> question is no problem.
>
>I agree that one aspect of a program can be characterized as an FSA, but I
>disagree with Searle's definition of what a "program" is, and I don't agree
>that just because an FSA can be used as a description of what a program does
>that it is a useful way of understanding intelligence or programs. 

Again, your complaint is with AI, and not Searle.
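
For concreteness, the identification Bill is leaning on looks roughly like
this (a toy sketch of my own, in Python; nothing here is from his post).
The same parity-checking behaviour, written once as an ordinary program and
once as an explicit finite state automaton:

    def parity_program(bits):
        # Ordinary program: count the 1s and report odd or even parity.
        count = 0
        for b in bits:
            count += b
        return 'odd' if count % 2 else 'even'

    # The same behaviour as an explicit FSA: two states, one transition table.
    TRANSITION = {('even', 0): 'even', ('even', 1): 'odd',
                  ('odd', 0): 'odd',   ('odd', 1): 'even'}

    def parity_fsa(bits, state='even'):
        # Drive the automaton over the input, one symbol at a time.
        for b in bits:
            state = TRANSITION[(state, b)]
        return state

    bits = [1, 0, 1, 1]
    assert parity_program(bits) == parity_fsa(bits)    # both say 'odd'

The claim is only that the two descriptions pick out the same behaviour;
that is the sense of "identified with" being assumed here, as I read it.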

> To analogize,
>I agree that music can be completely represented as a series of digital samples,
>but to attempt to describe and discuss concepts like "melody" or "passion" or
>"skill" in those terms is worthless.  This fact doesn't mean that "melody", 
>"skill", and "passion" are invalid concepts, it just means that we are looking
>in the wrong place for them and in the wrong way.  Searle makes the same kind
>of mistake.
> 
>>   The obvious-seeming answer to the second question is that an
>> object implements a program (= FSA) if there is a mapping
>> from states of the object to states of the FSA such that
>> the state-transition rules of the Turing machine are respected by
>> the mapping.
>
>Just because any process CAN BE thought of as an FSA doesn't mean that it IS
>an FSA.
> 
>>   What is a "mapping"?  There is no ambiguity here:  "mapping" just
>> means a function, in the mathematical sense.
>
>That certainly is not obvious to me!  It takes a huge jump in context from
>one area to another that has nothing to do with it, but there are more 
>important problems than that...
> 
>>   Now Putnam's argument, which I will not repeat, is that this
>> seemingly natural definition is bad, because with such a broad
>> notion of implementation it can be shown that every physical object
>> (such as a rock) implements every program, or at least an enormous
>> set of programs.  The conclusion is that some more restrictive
>> notion of implementation is needed.   
>
>I believe I have heard that argument (though it was some time ago).
>As I remember, it was another case of horrible context jumping.  To say that
>intelligence is a program, that programs are FSAs, and that everything 
>is an FSA, therefore everything is intelligent, is just ludicrous. 

That's a reasonable response.  It's the one I have.  Now, which premise
do you want to discard?
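
For what it's worth, here is a toy version of the trivializing move as I
understand it (my own sketch, in Python; the "rock" and all the names are
hypothetical, nothing here is from Putnam's text).  Given the unrestricted
"any mapping counts" premise, any object that passes through distinct states
over time can be paired off with the states of any FSA run:

    def fsa_run(transition, start, inputs):
        # The sequence of states an FSA passes through on a given input.
        states = [start]
        for symbol in inputs:
            states.append(transition[(states[-1], symbol)])
        return states

    # Pick any FSA at all; here, a two-state parity checker.
    TRANSITION = {('even', 0): 'even', ('even', 1): 'odd',
                  ('odd', 0): 'odd',   ('odd', 1): 'even'}
    fsa_states = fsa_run(TRANSITION, 'even', [1, 1, 0, 1])

    # The "rock": all that matters is that it is in a different physical
    # state at each instant; each microstate is labelled here by its time.
    rock_states = [('rock-microstate', t) for t in range(len(fsa_states))]

    # The trivializing mapping: pair the t-th rock state with the t-th
    # FSA state.  Nothing constrains the choice of mapping.
    mapping = dict(zip(rock_states, fsa_states))

    # By construction the mapping tracks the FSA's transitions perfectly,
    # so under the unrestricted definition the rock "implements" this run.
    assert [mapping[r] for r in rock_states] == fsa_states

Nothing in the construction constrains the mapping, which is exactly why
Bill says some more restrictive notion of implementation is needed.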

- michael



