From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!tdatirv!sarima Wed Dec 18 16:02:27 EST 1991
Article 2217 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Scaled up slug brains
Message-ID: <329@tdatirv.UUCP>
Date: 17 Dec 91 19:23:55 GMT
References: <12709@pitt.UUCP> <40677@dime.cs.umass.edu> <12723@pitt.UUCP> <40705@dime.cs.umass.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 94

In article <40705@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
|In article <12723@pitt.UUCP> geb@dsl.pitt.edu (gordon e. banks) writes:
|>Perhaps we disagree on the meaning of the word "computation".  I consider
|>that neural networks (real ones) being activated by internal and
|>external stimuli are likely the basis for our "minds".  If this is
|>a computation, then I guess I agree with Friesen.  The fact that
|
|Generally, "computation" has to do with Turing machines or some variant. ...
|
|Thus, if you argue that what neurons do is "computation", I will take it
|that you believe that neurons act like digital computers or Turing machines.
|Sometimes you seem to be defining "computation" as equivalent to 
|"physical process", and this is quite a weaker statement.

I would tend to define "computation" as something like "data transformation".

However, since all existing neural network models can be implemented on
digital computers, those models are demonstrably Turing computable.  The
issue then becomes: do biological neurons have operationally relevant
properties that are not captured in the digital models?

If not, then neurons are performing Turing computation.  If so, then, and
only then, might the operation of the brain be non-computable in principle.
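To make the first point concrete, here is a minimal sketch of a standard
artificial-neuron update rule (a McCulloch-Pitts style threshold unit; the
weights and threshold below are illustrative choices, not biological data).
Since the whole model is an ordinary program, it is trivially Turing
computable:

```python
def neuron_output(inputs, weights, threshold):
    """Weighted sum of inputs followed by a hard threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# A two-input unit wired (by choice of weights) to compute logical AND:
print(neuron_output([1, 1], [0.6, 0.6], 1.0))  # fires: 1
print(neuron_output([1, 0], [0.6, 0.6], 1.0))  # does not fire: 0
```

Whether real neurons have relevant properties beyond this kind of rule is
precisely the open question.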

The current synergy between neurology and neural-network research suggests
that the digital models are close enough for practical purposes.
[That is, all of the *relevant* properties are sufficiently similar for
the two to produce congruent results.]

|I'm not
|convinced that "mind" is purely a product of neuronal activity, although
|it seems possible, ...

Fine, but I will require evidence of some other influence on cognition before
I will even seriously entertain the idea that "mind" is *not* the result of
neuronal activity.

| but to suggest that facile analogies between minds and
|computers should not be taken too seriously and that premature  conclusions
|about the nature of mental functioning be avoided.

I would not characterize my comparisons as facile.
I do *not* assume that current standard computing methods are congruent with
mental operations.  I just do not accept the sort of ill-defined hand-waving
that Searle uses as significant.

Until either "causal properties" or "intentionality" have been characterized
sufficiently that they can be recognized (by *observation*) when they are
present, the terms are unscientific.  They have no *determinable* *referents*,
and are therefore outside the scope of observational/experimental science.

And when they are refined to the point that they are operationally
recognizable, then it will, finally, be meaningful to ask whether a computer
may possess them.  Right now there is *no* *way* of objectively determining
whether a computer may instantiate these phenomena, because they are
ill-defined.

They *may* refer to something meaningful, but I do not even have an
*objective* way of telling if *I* show these features!  And given how
pervasive self-delusion is, I do not trust my "intuition" on this issue.

|There is no a priori reason to
|believe that thoughts can arise from transistors, or that collections of
|devices which mimic some of the electrical behavior of a neuron will be
|sufficient to produce consciousness or even complex problem solving. We
|can flap our hands up and down all day, but we still won't fly.

But we *can* add up numbers all day and get a total.  The analogy with a
physical process is only valid if cognition is *primarily* a physical process.
If it is primarily an informational process, then informational processes
can duplicate it in reality.

And I believe that the "mind" shows all of the properties of an informational
process.  So your analogy is irrelevant.

|>another.  If we use phonon pumping in our neurons, it is a good bet
|>that the worms do too.
|>
|
|We have abilities which are not present at all in worms: as mentioned
|before, language seems to represent a radical step.

But chimpanzees share many of our linguistic capabilities, and the existence
of stone tool technologies strongly suggests that Homo habilis shared even
more with us in this regard, though not everything.

Yes, there is a big gap between slugs and humans, but various other animals
fill in almost the entire intervening span.  There are very few, if any,
human mental traits that are not at least foreshadowed in the apes.  We just
carry them further, and use them in strange, unheard-of synergies.  The
individual components remain similar to our ancestors'.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)
