From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!apple!netcomsv!kmc Thu Dec 26 23:57:05 EST 1991
Article 2269 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!apple!netcomsv!kmc
From: kmc@netcom.COM (Kevin McCarty)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle's response to silicon brain?
Message-ID: <1991Dec19.072826.28385kmc@netcom.COM>
Date: 19 Dec 91 07:28:26 GMT
References: <40822@dime.cs.umass.edu> <40825@dime.cs.umass.edu> <356@idtg.UUCP> <40858@dime.cs.umass.edu>
Organization: Netcom - Online Communication Services  (408 241-9760 guest)
Lines: 18

yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>>using digital computers.  Where have you been?  

>The "counter-experiment" is of no value unless the performance of the
>silicon neurons is identical to that of real neurons. But, this is the
>point at issue.

So what if the performance of the silicon neurons is identical to that of
real neurons?  Why isn't this the naive behaviorism of Turing's Imitation
Game applied not only at the level of human behavior, but also down to the
level of neural behavior?

How are we to discern at what level of detail a re-enactment
of behavior stops being a simulation and becomes an emulation?
If ever?  In other words, what does "identical" mean?
-- 
Kevin McCarty                   kmc@netcom.COM
                                {amdahl,claris}!netcom!kmc