From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cs.utexas.edu!wupost!darwin.sura.net!Sirius.dfn.de!math.fu-berlin.de!news.netmbx.de!Germany.EU.net!mcsun!uknet!edcastle!edcogsci!sharder Tue Jun  9 10:06:35 EDT 1992
Article 6052 of comp.ai.philosophy:
From: sharder@cogsci.ed.ac.uk (Soren Harder)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Virtual vs. Real
Message-ID: <9640@scott.ed.ac.uk>
Date: 2 Jun 92 19:11:02 GMT
References: <9597@scott.ed.ac.uk> <1992Jun1.014731.28528@mp.cs.niu.edu> <9614@kesson.ed.ac.uk> <1992Jun2.014828.2768@mp.cs.niu.edu>
Organization: Centre for Cognitive Science, Edinburgh, UK
Lines: 85

rickert@mp.cs.niu.edu (Neil Rickert) writes:

>In article <9614@kesson.ed.ac.uk> sharder@cogsci.ed.ac.uk (Soren Harder) writes:
>>rickert@mp.cs.niu.edu (Neil Rickert) writes:
>>
>>>  You should ask Harnad, not me.  Harnad assumed that there was intelligence
>>>in his TTT.

>>It ain't necessarily so. The intelligence can be in the combination of
>>the transducers and the rest.

> Given that Harnad has broadened the idea of transducer to include all of
>the internal components, transducers then become important.  After all,
>as I said in a response to Harnad, all of the components of a computer
>are transducers too.

I don't think Harnad has changed his ideas; you just didn't pick them
up right away because you think about the field differently than he
does.

I believe there may be a confusion of levels when you say that
'computers are transducers'. They do not work the way they do
(input-output-wise) _because_ they are made of transducers; you might
actually say they work _in spite of_ being made of transducers.

> The point remains, however, that the major importance must lie within the
>information and the way it is processed.  Whether the particular information
>receptors happen to be analog or digital is completely irrelevant.  Whether
>the processing (or transformation, or whatever) of the information is
>purely digital or purely analog, or a combination, is also irrelevant.

I believe you have a very strong argument there (that is, in the
second bit). With the first bit I disagree. Every system has a limited
set of forms it needs its information in. If the information is not in
one of those forms, there has to be some form of transduction. If you
remove the transduction (e.g. cut the wires from keyboards to
computers and replace all the network wires with water hoses), there
is no capacity for information processing. You might say this is
trivial, but it is still true.

Also, for there to be intelligence, there has to be *some kind of*
representation. (I put more faith in the representation you 'conjure
up' when using Dennett's 'intentional stance' than in Fodor's
'language of thought'-type representation.) One thing that
characterizes a representation is that it is different from the thing
itself. (The cup on the table in front of me is not a
_representation_ of itself; it *is* itself.) To get this difference
you need some kind of transduction (unless you are a parallelist
dualist). You might also say this is trivial.

Thirdly, as I have mentioned before, we might want to think about this
not in terms of what is theoretically possible (proved by logic to be
possible), but of what is practically possible (what might actually be
done in the future). Building a brain from old beer cans is easy if
you have three universes with the same mass as the one we have now,
and plenty of time for trial and error; but it is not likely to be
done. Constructing a mind in the way you suggest is (or might be)
possible under less extravagant circumstances, but I think it would be
the wrong way of doing it, *if* the processes in the brain are analog
*and* of such a type that it would take huge amounts of computational
power to model them. I believe I gave an example of this earlier: it
is (I believe) not impossible to construct working analog models of
the cochlea, but (I believe) not yet possible to model it on a
computer.
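To make the cost concrete, here is a toy sketch (my own illustration,
not a real cochlear model from the auditory-modelling literature) of
what even the crudest digital analogue of the basilar membrane
involves: a bank of resonant filters, one per frequency channel, each
updated once per audio sample. All names here are invented for the
example; a serious model would need thousands of channels, nonlinear
coupling, and much higher rates, which is where the computational
bill comes in.

```python
import math

def resonator_bank(signal, fs, freqs, q=10.0):
    """Filter `signal` through a bank of two-pole resonators, one per
    centre frequency: a crude stand-in for the frequency decomposition
    the basilar membrane performs."""
    outputs = []
    for f in freqs:
        w = 2.0 * math.pi * f / fs        # pole angle for this channel
        r = math.exp(-w / (2.0 * q))      # pole radius sets the bandwidth
        a1, a2 = -2.0 * r * math.cos(w), r * r
        b0 = 1.0 - r                      # rough gain normalisation
        y1 = y2 = 0.0
        out = []
        for x in signal:                  # one multiply-add pass per sample
            y = b0 * x - a1 * y1 - a2 * y2
            y2, y1 = y1, y
            out.append(y)
        outputs.append(out)
    return outputs

# A 1 kHz tone should excite the 1 kHz channel far more than its neighbours.
fs = 16000
tone = [math.sin(2.0 * math.pi * 1000.0 * n / fs) for n in range(fs // 10)]
channels = resonator_bank(tone, fs, [500.0, 1000.0, 2000.0])
rms = [math.sqrt(sum(y * y for y in ch) / len(ch)) for ch in channels]
```

Even this toy version costs a handful of multiplications per channel
per sample; scale the channel count and sampling rate up to anything
physiologically honest and the arithmetic grows accordingly.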

>>                              This is exactly the reason I ask you to
>>think a bit further, and consider how intelligence could be
>>implemented.

> You just want me to post a complete solution to the whole problem all
>at once?  Be serious.  However I will say that learning is a key
>ingredient.

I meant what I said. (I usually don't, though :-)). It might be a
good idea to have some basic theories about how the human mind is
constructed. You can't (with your insight) go much more wrong than
Descartes, and he is still (deservedly) thought to be one of the
great thinkers on the topic.

Soren

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Soren Harder, (MSc student)
Centre for Cognitive Science, 2 Buccleuch Place, Edinburgh
E-mail: sharder@cogsci.ed.ac.uk
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^