Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!rutgers!rochester!cantaloupe.srv.cs.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+
From: fb0m+@andrew.cmu.edu (Franklin Boyle)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of Understanding
Message-ID: <AdjWsY600UzxM1dYIJ@andrew.cmu.edu>
Date: 11 Mar 92 15:58:28 GMT
Organization: Cntr for Design of Educational Computing, Carnegie Mellon, Pittsburgh, PA
Lines: 60

Andrzej Pindor writes:

>Very good! Then how do humans recognize patterns? Are you saying that the
>actual _physical_ process is crucial for the functions of the mind? How do
>you know that? Neural nets are presumably closer to the way the brain
>functions, would you accept that a neural net computer might duplicate mind?
>If I understand it, anti-AI crusaders wouldn't have this either.

Yes, where the physical process is the causal mechanism for *how* change is
brought about.  I don't "know" that this is true; it is only a hypothesis.
I don't see why an artificial neural net would not in principle be able
to have an intrinsic capacity for reference, but current neural networks
(say, of the layered, feed-forward type) are physical pattern-matching
systems, just as digital computers are.  The only difference is that
matchers for the latter are deliberately programmed whereas those of the
former are trained, meaning that connection strengths (sets of which
constitute matchers) are adjusted according to input and output values.
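
To make "adjusted according to input and output values" concrete, here is a
toy sketch (in Python, with made-up data and a made-up learning rate -- an
illustration of the sense I mean, not a claim about real nets) of a single
matcher being trained:

    # A one-layer "matcher": connection strengths (weights) are adjusted
    # from input/output examples until the matcher reproduces the OR pattern.
    examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    weights = [0.0, 0.0]
    bias = 0.0
    rate = 0.1
    for _ in range(20):                       # a few passes over the examples
        for inputs, target in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output           # desired minus actual output
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    print(weights, bias)                      # the trained connection strengths

Training here is nothing more than nudging the strengths until the
input/output behavior comes out right; the result is still just a physical
pattern matcher.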

>I am not sure if I understand you correctly. Why is it important that
>structures for the pattern/matcher pair have something to do with the I/O
>_behaviour_? And of course you exaggerate saying 'any structures'. Only the
>structures which are rich (or flexible) enough to do a sufficiently
>discriminatory job. I agree that it presumably still leaves us a lot of
>alternatives.

What I mean is that in computers, the behavior, or function, is independent
of form.  Presumably in brains this is not the case.  "Any structures" means
that the actual combinations of high and low voltages do not make any
difference.  Sure, there has to be a rich enough set, but any set,
sufficiently rich, will do.
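
A toy way to see the "function independent of form" point (Python again,
purely illustrative): two implementations whose internal forms are completely
different but whose input/output behavior is identical.

    # One form uses numbers standing in for high/low voltages; the other
    # uses arbitrary symbolic tokens.  The I/O behavior is the same.
    def xor_voltages(a, b):
        return a ^ b

    SYMBOLIC = {("LO", "LO"): "LO", ("LO", "HI"): "HI",
                ("HI", "LO"): "HI", ("HI", "HI"): "LO"}

    def xor_symbols(a, b):
        names = {0: "LO", 1: "HI"}
        values = {"LO": 0, "HI": 1}
        return values[SYMBOLIC[(names[a], names[b])]]

    for a in (0, 1):
        for b in (0, 1):
            assert xor_voltages(a, b) == xor_symbols(a, b)  # same behavior

Any set of forms rich enough to keep the cases distinct would have done just
as well.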

>I do not understand your point at all. We can express the same things using
>English, Chinese or a sign language, right? Does it mean there is no
>information exchanged using these languages?

Not for computers.  To the computer they're just arbitrary symbol strings 
that happen to make sense to us.  All that's required is functional
consistency.  The computer could be shuffling around bitmaps in some
manner and still produce the same behavior.
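
For instance (a toy Python sketch, names made up): systematically re-encode
every symbol string a program uses and it goes on producing the "same"
behavior, even though the new strings mean nothing to us.

    rules = {"socrates is a man": "socrates is mortal"}

    def respond(query, table):
        # the "matcher" is just a lookup keyed on the form of the query
        return table.get(query, "no match")

    symbols = ["socrates is a man", "socrates is mortal", "no match"]
    code = {s: format(i, "08b") for i, s in enumerate(symbols)}  # arbitrary bit strings
    coded_rules = {code[k]: code[v] for k, v in rules.items()}

    print(respond("socrates is a man", rules))              # "socrates is mortal"
    print(respond(code["socrates is a man"], coded_rules))  # same answer, re-encoded

Functionally nothing has changed; it has merely stopped making sense to us.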
 
>See a comment above about different languages. We can encode the same info
>in arbitrary sequence of sounds and for the info to be exchanged we just need
>a matcher suitable for the encoding used, do you agree? So what is your
>argument about?

The key here is "encode".  Is it the same info?  Not unless you have a
matcher that can decode it.  *In a computer* decoding is just getting
the encoded forms to cause certain behaviors.  However, the encoded
forms can be *any* forms because the matchers physically look like those
forms.  Insofar as content is considered to be encoded in structure,
for a computer, different languages would have different content.  On
the other hand, if you believe that content is determined by behavior,
then that involves interpretation (we interpret the language), thus 
begging the question of mind, which is where all this started and which is 
Searle's complaint.
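
One last toy sketch of the "encode"/decode point (Python, with standard
encodings standing in for "different languages"): the encoded form gives back
the same info only relative to a matcher built for that encoding.

    import codecs, base64

    message = "the cat is on the mat"
    enc_a = codecs.encode(message, "rot13")               # one encoding
    enc_b = base64.b64encode(message.encode()).decode()   # another encoding

    decoders = {"rot13": lambda s: codecs.decode(s, "rot13"),
                "base64": lambda s: base64.b64decode(s).decode()}

    print(decoders["rot13"](enc_a))    # the message comes back
    print(decoders["base64"](enc_b))   # the message comes back
    print(decoders["rot13"](enc_b))    # wrong matcher for the form: gibberish

Whether the gibberish still "contains" the message is exactly the question of
where content lives -- in the structure or in the behavior.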

-Frank


