From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!batcomputer!munnari.oz.au!uunet!tdatirv!sarima Mon Dec  9 10:47:49 EST 1991
Article 1841 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!batcomputer!munnari.oz.au!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle (was .....)
Message-ID: <292@tdatirv.UUCP>
Date: 3 Dec 91 20:13:30 GMT
References: <EdBeY9i00WBME1JoNV@andrew.cmu.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 109

In article <EdBeY9i00WBME1JoNV@andrew.cmu.edu> fb0m+@andrew.cmu.edu (Franklin Boyle) writes:
|Stanley Friesen writes:
|>Why should this be that important?  Unless you accept the 'tabula rasa'
|>model of human development, then even we are born with already existing
|>routines that 'know' what to 'look' for. ... *We* are preprogrammed
|>to find meaning, so why should it matter if the computer is also?
|
|In what sense are *we* preprogrammed to "find meaning"? ... On what basis 
|do "most current neurologists and psychologists accept" that we are pre-
|programmed throughout the cortex, especially in those areas where "higher
|level" cognitive processing presumably takes place?

Let's see if I can clarify this.  We have an innate tendency to find patterns,
a tendency so strong that we often see patterns where there are none.  [The
constellations in the night sky are a good example: individually resolvable
stars are *randomly* distributed across the sky; the constellations are
purely human constructs].
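
As a toy illustration (entirely my own, in Python, with arbitrary numbers),
here is a short simulation showing that even purely random points throw up
apparent 'constellations':

import math
import random

random.seed(42)

N_STARS = 500        # random 'stars' scattered over a unit-square 'sky'
LINK_DIST = 0.03     # two stars this close look 'connected' to the eye

stars = [(random.random(), random.random()) for _ in range(N_STARS)]

# Union-find: group stars that chain together within LINK_DIST.
parent = list(range(N_STARS))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]    # path compression
        i = parent[i]
    return i

for i in range(N_STARS):
    for j in range(i + 1, N_STARS):
        if math.dist(stars[i], stars[j]) < LINK_DIST:
            parent[find(i)] = find(j)

sizes = {}
for i in range(N_STARS):
    root = find(i)
    sizes[root] = sizes.get(root, 0) + 1

groups = [s for s in sizes.values() if s >= 3]
print(len(groups), "apparent 'constellations' of 3+ stars in random data")

Run it with any seed you like; the clumps never go away, because clumps are
what uniform randomness actually looks like.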

One of the best examples is language learning.  Most current linguists believe
that there is some sort of 'prototype' for language built into a human infant.
This is based on the fact that they (we) learn language from seemingly
insufficient data.  (I suspect that this 'prototype' will prove to be identical
with some general pattern-matching algorithm used by the human brain for other
purposes, but this does not change the fact that it exists).

An infant's *instinctive* (that is, built-in) exploratory behavior is designed
to produce correlations between the various sensory modalities (for instance,
associating certain visual cues with certain tactile and motor responses).
This association seems to be the basis of internalized mental models.
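
A crude sketch of what I mean, with all the names and numbers invented by
me: even bare co-occurrence counting gives you this kind of cross-modal
association.

import random
from collections import Counter

random.seed(0)

# Hypothetical toy world: each object pairs a visual cue with a tactile one.
objects = [("red-round", "smooth"),
           ("green-spiky", "prickly"),
           ("brown-flat", "rough")]

co_counts = Counter()               # (visual, tactile) -> times experienced

for _ in range(200):                # 'exploratory' encounters
    visual, tactile = random.choice(objects)
    co_counts[(visual, tactile)] += 1

def expected_touch(visual_cue):
    """The tactile sensation most often paired with this visual cue."""
    pairs = [(t, n) for (v, t), n in co_counts.items() if v == visual_cue]
    return max(pairs, key=lambda p: p[1])[0] if pairs else None

print(expected_touch("red-round"))  # -> smooth, a learned association

A real infant presumably does something far richer, but the point is that
the association falls out of the statistics of exploration, not out of any
built-in table of meanings.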

This barely scratches the surface of the findings on innate responses in
humans.  It is in *this* sense that I mean we are pre-programmed: we have
built-in, innate response patterns that lead us to produce certain types
of mental models.

|And what does it mean for *us* to be preprogrammed?  For a computer, it 
|means building a set of pattern matchers that enable the system to function 
|according to the meanings of its inputs, meanings which *we* project onto 
|them  ...

How about a set of algorithms for building pattern matchers?  Perhaps along
with suggestions about which sensory modalities each one is best suited to
dealing with.

In addition there would be a meta-program to drive these generators.
This would itself be a pattern-matching program, one that attempts to apply
the appropriate generator, or set of generators, to each unrecognized
environmental stimulus.

And of course each level is programmed to use prior successes and failures
in deriving heuristics for selecting pattern matching mechanisms.

The above is only a rough model, based on my wide reading in linguistics,
neurology, evolutionary theory, and general psychology.  How to convert it
into a practical program is left to AI experts.
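
Still, to make the rough model concrete, here is a minimal sketch in
Python.  All the class names, the scoring rule, and the coin-flip
'feedback' are my own placeholders, not a worked-out design:

import random

random.seed(1)

class Generator:
    """An algorithm for *building* pattern matchers for one modality."""
    def __init__(self, modality):
        self.modality = modality
        self.successes = 0
        self.failures = 0

    def build_matcher(self, stimulus):
        # Trivial stand-in: a matcher that recognizes exact repeats.
        return lambda s: s == stimulus

    def score(self):
        # Heuristic derived from prior successes and failures.
        return (self.successes + 1.0) / (self.successes + self.failures + 2.0)

class MetaProgram:
    """Applies the most promising generator to each unrecognized stimulus."""
    def __init__(self, generators):
        self.generators = generators
        self.matchers = []          # pattern matchers built so far

    def perceive(self, stimulus, modality):
        if any(match(stimulus) for match in self.matchers):
            return "recognized"
        # Choose among the generators suited to this modality, by track record.
        candidates = [g for g in self.generators if g.modality == modality]
        generator = max(candidates, key=lambda g: g.score())
        self.matchers.append(generator.build_matcher(stimulus))
        # Real feedback would be whether the new matcher proves useful;
        # a coin flip stands in for that here.
        if random.random() < 0.5:
            generator.successes += 1
        else:
            generator.failures += 1
        return "learned"

meta = MetaProgram([Generator("visual"), Generator("tactile")])
print(meta.perceive("red-round", "visual"))    # -> learned
print(meta.perceive("red-round", "visual"))    # -> recognized

The essential shape is the three levels: matchers, generators that build
matchers, and a meta-program that picks generators using its own history.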

|...  Now, is the physical process of pattern 
|matching as takes place in digital computers the same as the physical 
|processes which store and manipulate input to the brain (and I don't mean the 
|same medium or physical objects which are obviously different -- I mean 
|causally the same in the sense of "how" the input brings about a physical 
|change)?

Again, I am not entirely sure why this is even relevant!  The preprogrammed
part is the *meta* level.  Any given pattern-matching process in a 'mature'
human, or AI system, would be constructed internally by the system to meet
some established need.  (Oops, oh yes, *basic* *needs* are also preprogrammed).
[Of course the process of generating pattern matchers and mental models can
easily lead to the development of derived needs].

Note, I really am implying that a 'real' AI will probably have to be 'grown',
not simply programmed.

|>Intrinsic meaning?????  What is that?  I know of no examples of any such
|>thing even in humans. All human symbol systems (aka languages) are purely
|>arbitrary, and *learned*.  There is nothing *intrinsic* about them.
|
|Insofar as there are "mental symbols" (call them mental states if you like) 
|in the head, they are about things in the external world by virtue of their 
|physical forms and the physical processes which manipulate them, only.  

But are they?  As I have pointed out above, all 'mental symbols' seem to be
*learned* *associations*.  They are in no way a priori; they are derived.
Thus any system that is capable of deriving its own internal symbology in
the course of interacting with its environment (whatever that environment
may be) is, at least potentially, intelligent.

You seem to believe that the mental symbols in your mind are somehow primary.
This is understandable since, unless you are utterly unique, your conscious
memories date back only to the age at which you started to form them.  Your
life prior to the formation of mental models of the world is inaccessible
to your memory.  (At least that is true of every human I have ever heard of,
but maybe you are different and can remember things prior to age 1.5 years).

|The meanings of symbols in computers, on the other hand, are projected onto 
|them by us, that is, they are not intrinsic and so do not arise solely by 
|virtue of *their* forms and the physical processes that manipulate them.  

In all existing programs this is true.  I doubt it is *intrinsically* true,
at least once the above sentence is given a reasonable operational meaning,
since I doubt that even human symbols can properly be said to be 'intrinsic',
only self-generated.

[That is, I am suggesting that a program is *possible* that can generate
its own symbols, and that this would be the basis of intelligence].
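
To show the claim is at least coherent, here is a tiny sketch, again with
every name and threshold invented by me: raw inputs get grouped by
similarity, and each group is assigned an arbitrary internal token whose
only 'meaning' is the cluster of experiences it stands for.

import itertools

SIMILARITY = 1.0     # max distance for two inputs to share a symbol

clusters = []        # list of (token, members) pairs
token_ids = itertools.count()

def symbol_for(x):
    """This system's own symbol for input x, minting a new one if needed."""
    for token, members in clusters:
        if all(abs(x - m) <= SIMILARITY for m in members):
            members.append(x)
            return token
    token = "sym%d" % next(token_ids)    # arbitrary, self-generated name
    clusters.append((token, [x]))
    return token

for reading in [0.1, 0.3, 5.0, 5.2, 0.2, 9.9]:
    print(reading, "->", symbol_for(reading))
# 0.1, 0.3, 0.2 share one symbol; 5.0, 5.2 another; 9.9 gets a third.

The tokens are as arbitrary as any human word; their content is entirely
derived from the system's own history of inputs.
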
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)
