From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor Tue Mar 24 09:55:24 EST 1992
Article 4431 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Definition of Understanding
Message-ID: <1992Mar12.155333.26748@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <AdjWsY600UzxM1dYIJ@andrew.cmu.edu>
Date: Thu, 12 Mar 1992 15:53:33 GMT

In article <AdjWsY600UzxM1dYIJ@andrew.cmu.edu> fb0m+@andrew.cmu.edu (Franklin Boyle) writes:
>Andrzej Pindor writes:
>
>I don't see why an artificial neural net would not in principle be able
>to have an intrinsic capacity for reference, but current neural networks
>(say, of the layered, feed-forward type) are physical pattern matching 
>systems, just as digital computers are.  The only difference is that
>matchers for the latter are deliberately programmed whereas the former
>are trained, meaning that connection strengths (sets of which constitute
>matchers) are adjusted according to input and output values.
>
You still have not answered my question: how do you imagine humans recognize
patterns without matching them to something? You repeatedly point out that
computers (digital or neural networks) are 'physical pattern matching systems',
but how is that different from what human brains do?
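The contrast being discussed — matchers that are deliberately programmed versus matchers whose connection strengths are adjusted by training — can be sketched. This is only an illustration (all names and the toy patterns are invented): a hand-coded matcher and a single trained perceptron end up performing the same job, matching an input pattern against stored structure.

```python
# Sketch: two kinds of "physical pattern matchers" (names hypothetical).
# Both come to recognize the pattern [1, 0, 1]; one is programmed, one trained.

def programmed_matcher(pattern):
    # Matcher written deliberately, as in a conventional program.
    return pattern == [1, 0, 1]

def train_matcher(examples, epochs=20, rate=0.1):
    # Perceptron-style training: connection strengths (weights) are
    # adjusted according to input and output values.
    weights = [0.0, 0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            s = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if s > 0 else 0
            error = target - output
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

def trained_matcher(pattern, weights, bias):
    s = sum(w * x for w, x in zip(weights, pattern)) + bias
    return s > 0

# Train on the target pattern versus a few non-matching patterns.
examples = [([1, 0, 1], 1), ([0, 1, 0], 0), ([1, 1, 1], 0), ([0, 0, 1], 0)]
weights, bias = train_matcher(examples)

print(programmed_matcher([1, 0, 1]))             # True
print(trained_matcher([1, 0, 1], weights, bias)) # True
```

Seen from outside, both are physical pattern matching systems; only the route by which the matcher came to exist differs.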

>
>What I mean is that in computers, the behavior, or function, is independent
>of form.  Presumably in brains this is not the case.  "Any structures" means

Below I point out that the same may happen in brains. If you know two languages,
the same behaviour may be evoked by two different sound trains (and hence by two
different sets of electrical signals entering the brain from the ear).

>that the actual combinations of high and low voltages do not make any 
>difference. Sure, there have to be a rich enough set, but any set,
>sufficiently rich, will do.
>
>>I do not understand your point at all. We can express the same things using
>>English, Chinese or a sign language, right? Does it mean there is no informa-
>>tion exchanged using these languages?
>
>Not for computers.  To the computer they're just arbitrary symbol strings 
>that happen to make sense to us.  All that's required is functional
>consistency.  The computer could be shuffling around bitmaps in some
>manner and still produce the same behavior.
> 
See above. The brain may be shuffling around different sets of firing patterns
and still produce the same behaviour.
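The point being argued — that behaviour can be invariant under a change of internal encoding — can be sketched as follows (both encodings and all names are invented for illustration):

```python
# Sketch: the same input-output behaviour realized over two different
# internal encodings (both encodings invented for illustration).

# Encoding A: English-like tokens; Encoding B: arbitrary bit strings.
ENCODE_A = {"greet": "hello", "part": "goodbye"}
ENCODE_B = {"greet": "10110", "part": "00111"}

def make_responder(encoding):
    # Build a responder that shuffles around this encoding's symbols only.
    table = {encoding["greet"]: "wave", encoding["part"]: "nod"}
    def respond(signal):
        return table.get(signal, "ignore")
    return respond

respond_a = make_responder(ENCODE_A)
respond_b = make_responder(ENCODE_B)

# Different symbol trains go in; the same behaviour comes out.
print(respond_a("hello"))    # wave
print(respond_b("10110"))    # wave
print(respond_a("goodbye"))  # nod
print(respond_b("00111"))    # nod
```

Nothing in the behaviour distinguishes which encoding is in use — which is exactly why behaviour alone underdetermines the internal forms, whether voltages or firing patterns.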

>>in arbitrary sequence of sounds and for the info to be exchanged we just need
>>a matcher suitable for the encoding used, do you agree? So what is your
>>argument about?
>
>The key here is "encode".  Is it the same info?  Not unless you have a
>matcher that can decode it.  *In a computer* decoding is just getting
>the encoded forms to cause certain behaviors.  However, the encoded

Isn't that what happens in humans too, to a large extent?

>forms can be *any* forms because the matchers physically look like those
>forms.  Insofar as content is considered to be encoded in structure,
>for a computer, different languages would have different content.  On
>the other hand, if you believe that content is determined by behavior,
>then that involves interpretation (we interpret the language), thus 
>begging the question of mind, which is where all this started and which is 
>Searle's complaint.

Sorry, I fail to see your point. Wasn't it Searle's point that behaviour is
not enough?
>
>-Frank


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca
