Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!udel!rochester!kodak!ispd-newsserver!psinntp!norton!brian
From: brian@norton.com (Brian Yoder)
Newsgroups: comp.ai.philosophy
Subject: Re: Strong AI and panpsychism
Message-ID: <1992Mar06.011031.8634@norton.com>
Date: 6 Mar 92 01:10:31 GMT
References: <1992Mar4.203455.23960@psych.toronto.edu>
Organization: Symantec / Peter Norton
Lines: 120

michael@psych.toronto.edu (Michael Gemar) writes:
> In article <1992Mar03.061159.16651@norton.com> brian@norton.com (Brian Yoder) writes:
>>michael@psych.toronto.edu (Michael Gemar) writes:
>>> In article <1992Feb27.023234.49@norton.com> brian@norton.com (Brian Yoder) writes:
>>>>bill@NSMA.AriZonA.EdU (Bill Skaggs) writes:
>>>>>   Proposed definition:  An object is "intelligent" if it implements
>>>>> some sufficiently sophisticated set of programs.

>>> This is the way in which *AI*, not Searle, defines intelligence.  If you
>>> don't like it, then you don't like AI...
 
>>Now that's a bold claim!  Why can't I define AI as "the study of man-made
>>systems that perceive reality and react on the basis of that knowledge in
>>a self-generated, goal-directed manner"?  Such a description would encompass
>>the systems that people want to build, and it does not suffer from the myriad
>>of problems Searle arrives at.  I should comment that I think Searle DOES reach
>>the proper conclusions given his starting point, but I don't buy his
>>definition of AI.  If I am not talking about AI, then what AM I talking about?
 
> I don't know.  Ask the AI types. The only point I was making is that
> the definition provided is in accord with that given by most AI types
> I know of.  If you have a way of producing the kinds of systems you
> describe that *don't* involve programs, or at least do not have their
> essential functions *describable* computationally, I (and others) would
> like to see them.

My claim is not that programs would not be used in constructing such a machine,
but that looking at the workings of the machine in terms of instructions and
states is a fruitless direction (as Searle points out), much as describing
human intelligence in terms of neurons, neurotransmitters, and brain waves
would be useless.  Certainly they are involved, but not in a way that maps
intelligibly from one domain to the other.  Another consideration Searle
leaves out is interaction with the outside world.  How can a thing be said to
be conscious if it isn't conscious OF anything?
 
>>>>One of my criticisms of Searle is that he spends a lot of time talking about
>>>>"programs" and "instructions" rather than talking about information, 
>>>>perception, consciousness, concepts, and the like. 
 
>>>Searle is discussing AI. AI is composed of programs carrying out instructions.
 
>>Could you justify that definition?  It seems horribly tied to ONE WAY OF
>>BUILDING an intelligent system.  What Searle proves is that that method cannot
>>work.  I buy that much.  How can you justify that his definitions are the
>>correct ones?
 
> What Searle attempts to show is that the method by which AI chooses to attack
> the problem won't work.  Again, if you have a method that is *not*
> like that of AI, then great.  But, I take it as given that you then
> have to surrender functionalism.  This is a *big* step.

But hold on.  I am AGREEING with Searle that IF you try to build a system of
the type he describes, it can't work.  That means that any intelligent
machine would have to be built differently from his "intelligent program".
To put it in philosophical terms, Searle's machine is rationalistic and
suffers from all the same problems a (consistently) rationalist human being
would suffer from.  Spinning your wheels inside your head gets you nowhere,
whether you are a man or a machine.
 
>>>> He could just as easily say that 
>>>>"Humans have brains.  Brains are made of atoms.  Atoms cannot think.  Therefore
>>>>brains cannot think."

>>> But we *know* subjectively that brains *do* think.  We don't know the same 
>>> for computers.

>>So what you are saying is that humans can think and we know that for a fact
>>(which I heartily agree with).  So tell me: if I could build a machine that
>>does what the brain does and it didn't follow Searle's paradigm, would such
>>a machine fall under what you would call AI?  This seems to be a very simple
>>distinction to me.
 
> Look, Searle explicitly states that it may be possible to construct
> devices that have understanding.  However, his position is that such
> devices will not have understanding *solely* in virtue of their
> functional relations.  Yes, if you are able to clone a brain, Searle would
> be happy to say that such a thing could have understanding (or qualia, or
> whatever).  But it would be so in virtue (according to Searle) of reproducing
> the *non-computational* aspects of the brain.

Sure, such as sensory apparatus, for example?  That is exactly what I have been
saying all along.  To put this in philosophical terms again, consider the
various theories of truth: intrinsic, subjective, objective, and skeptical.
The last, of course, is out, since who would want an intelligent machine if it
couldn't ever know anything?  The intrinsic theory is the one Searle attacks
(actually, a rationalist appraisal of intrinsic truth), and his conclusions are
right...it's hopeless.  The subjective theory is almost as bad as the skeptical
one, since the machine could just make up anything it wanted and it would be
(somehow) "true".  What is left is the objective theory, which says that
knowledge derives from the interaction between the knower and the world, and
that the "knowledge" is not in one or the other, but in the union of the two.
 
>>> That's a reasonable response.  It's the one I have.  Now, which premise
>>> do you want to discard?
 
>>The premise that AI is to be achieved through "instructions and states".
>>Perception (which Searle completely ignores, if memory serves), induction,
>>and goal-orientation are the primary things we need to discuss in terms of
>>intelligent systems.  Talking about intelligence in terms of instructions and
>>states is like talking about "melody" in terms of air pressure changes.  It
>>gets you nowhere, even though the two subjects are interrelated.
 
> And how does "perception...induction and goal-orientation" arise?
> At least AI offers an answer (through functional organization).  You
> don't seem to offer any alternative account here.
 
I have not fully fleshed this out yet (which I guess is why I'm not obscenely
wealthy yet ;-), but these sub-systems could be constructed out of mechanical
parts, processors, programs, and the like, just as humans are composed of cells
and organs.  What would be intelligent, though, is not "the program", as Searle
points out, but the whole system.  Remember too that "a program" without a
hardware platform and all the rest can't do ANYTHING, much less be intelligent.

--Brian 
 
-- 
-- Brian K. Yoder (brian@norton.com) - Maier's Law:                          --
-- Peter Norton Computing Group      - If the facts do not fit the theory,   --
-- Symantec Corporation              - they must be disposed of.             --
--                                   -                                       --


