From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Mar  9 18:35:06 EST 1992
Article 4252 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and panpsychism
Organization: Department of Psychology, University of Toronto
References: <1992Feb27.221933.1168@psych.toronto.edu> <1992Mar03.061159.16651@norton.com>
Message-ID: <1992Mar4.203455.23960@psych.toronto.edu>
Date: Wed, 4 Mar 1992 20:34:55 GMT

In article <1992Mar03.061159.16651@norton.com> brian@norton.com (Brian Yoder) writes:
>michael@psych.toronto.edu (Michael Gemar) writes:
>> In article <1992Feb27.023234.49@norton.com> brian@norton.com (Brian Yoder) writes:
>> >bill@NSMA.AriZonA.EdU (Bill Skaggs) writes:
>> >>   Proposed definition:  An object is "intelligent" if it implements
>> >> some sufficiently sophisticated set of programs.
>
>> >On what basis would anyone define "intelligent" this way?  Is this related
>> >to Searle's arguments about AI?
> 
>> This is the way in which *AI*, not Searle, defines intelligence.  If you
>> don't like it, then you don't like AI...
> 
>Now that's a bold claim!  Why can't I define AI as "The study of man-made 
>systems that perceive reality and react on the basis of that knowledge in
>a self-generated goal-directed manner."?  Such a description would encompass
>the systems that people want to build and does not suffer from the myriad
>of problems Searle arrives at.  I should comment that I think Searle DOES reach
>the proper conclusions given his starting point, but I don't buy his definition
>of AI.  If I am not talking about AI then what AM I talking about?

I don't know.  Ask the AI types. The only point I was making is that
the definition provided is in accord with that given by most AI types
I know of.  If you have a way of producing the kinds of systems you
describe that *don't* involve programs, or at least do not have their
essential functions *describable* computationally, I (and others) would
like to see them.

>> >One of my criticisms of Searle is that he spends a lot of time talking about
>> >"programs" and "instructions" rather than talking about information, perception,
>> >consciousness, concepts, and the like. 
> 
>> Searle is discussing AI.  AI is composed of programs carrying out instructions.
> 
>Could you justify that definition?  It seems horribly tied to ONE WAY OF 
>BUILDING an intelligent system.  What Searle proves is that that method cannot
>work.  I buy that much.  How can you justify that his definitions are the 
>correct ones?

What Searle attempts to show is that the method by which AI chooses to attack
the problem won't work.  Again, if you have a method that is *not*
like that of AI, then great.  But, I take it as given that you then
have to surrender functionalism.  This is a *big* step.

>> > He could just as easily say that 
>> >"Humans have brains.  Brains are made of atoms.  Atoms cannot think.  Therefore
>> >brains cannot think.".  
> 
>> But we *know* subjectively that brains *do* think.  We don't know the same for
>> computers.
>
>So what you are saying is that humans can think and we know that for a fact
>(which I heartily agree with).  So tell me.  If I could build a machine that
>does what the brain does and it didn't follow Searle's paradigm, would such
>a machine fall under what you would call AI?  This seems to be a very simple
>distinction to me.

Look, Searle explicitly states that it may be possible to construct
devices that have understanding.  However, his position is that such
devices will not have understanding *solely* in virtue of their
functional relations.  Yes, if you are able to clone a brain, Searle would
be happy to say that such a thing could have understanding (or qualia, or
whatever).  But it would be in virtue (according to Searle) of reproducing
the *non-computational* aspects of the brain. 

>> >I believe I have heard that argument (though it was some time ago).
>> >As I remember it was another case of horrible context jumping.  To say that
>> >intelligence is a program and that programs are FSAs and that everything 
>> >is an FSA therefore everything is intelligent is just ludicrous. 
> 
>> That's a reasonable response.  It's the one I have.  Now, which premise
>> do you want to discard?
> 
>The premise that AI is to be achieved through "instructions and states".  
>Perception (which Searle completely ignores if memory serves) and induction
>and goal-orientation are the primary things we need to discuss in terms of
>intelligent systems.  Talking about intelligence in terms of instructions and
>states is like talking about "melody" in terms of air pressure changes.  It
>gets you nowhere even though the two subjects are interrelated.

And how do "perception...induction and goal-orientation" arise?  
At least AI offers an answer (through functional organization).  You
don't seem here to offer any alternative account.
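[For readers unfamiliar with the formalism the reductio above trades on: a
finite-state automaton is nothing more than a set of states, an input
alphabet, and a transition table.  A minimal sketch in Python — the states
and inputs here are hypothetical placeholders, chosen only to show how
little structure the bare definition demands:]

```python
def make_fsa(transitions, start):
    """Build a finite-state automaton from a transition table.

    `transitions` maps (state, input) pairs to next states;
    `start` is the initial state.  Returns a function that runs
    the FSA over a sequence of inputs and reports the final state.
    """
    def run(inputs):
        state = start
        for symbol in inputs:
            state = transitions[(state, symbol)]
        return state
    return run

# Even a two-state light switch satisfies the full formal definition.
toggle = make_fsa(
    transitions={
        ("off", "press"): "on",
        ("on", "press"): "off",
    },
    start="off",
)

print(toggle(["press", "press", "press"]))  # -> "on"
```

[The point of the reductio is exactly this: if "implements an FSA" were
sufficient for intelligence, the light switch would qualify — hence the
definition needs the "sufficiently sophisticated" qualifier to do real work.]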

- michael
