Newsgroups: comp.ai.alife,comp.ai.philosophy,comp.ai,alt.consciousness
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!news.sprintlink.net!siemens!princeton!flagstaff.princeton.edu!schechtr
From: schechtr@flagstaff.princeton.edu (Joshua B. Schechter)
Subject: Re: Thought Question
Message-ID: <1995Jan19.214105.19878@Princeton.EDU>
Originator: news@hedgehog.Princeton.EDU
Sender: news@Princeton.EDU (USENET News System)
Nntp-Posting-Host: flagstaff.princeton.edu
Organization: Princeton University
References: <sa209.104@utb.shv.hb.se> <1995Jan12.022935.26572@Princeton.EDU> <3fmba3$nmf@vixen.cso.uiuc.edu>
Date: Thu, 19 Jan 1995 21:41:05 GMT
Lines: 64
Xref: glinda.oz.cs.cmu.edu comp.ai.alife:1873 comp.ai.philosophy:24822 comp.ai:26651

In article <3fmba3$nmf@vixen.cso.uiuc.edu> smithjj@cat.com (Jeff
Smith) writes:
>In article <1995Jan12.022935.26572@Princeton.EDU>,
>schechtr@flagstaff.princeton.edu (Joshua B. Schechter) writes:
>|> I believe the issue is not whether or not a computer can simulate a
>|> brain. It seems that a majority of people here seem to agree (whether
>|> or not they are correct) that a computer can simulate a brain. The
>--maybe -- we haven't yet simulated a brain.
>|> hardware of a brain seems to be accepted to be a type of universal
>|> Turing machine and as such, can be simulated (as soon as it is
>|> understood) by any other Turing machine.

>Why do some people accept the hardware of the brain to be a turing machine?
>Please define what you mean by a turing machine and express how the human
>brain fits this definition.  It's hard for me to see how anyone can have
>definitive answers on this question when we don't fully understand how the
>brain works, at the sub-neuron, neuron, or higher levels. 


Granted. I mean, of course, that a brain is Turing-machine
equivalent. That is, it has the same (theoretical) problem-solving
capabilities as a Turing machine, an IBM mainframe, a CM5, etc.; it is
just perhaps quicker, runs a more complex program, and so on. I guess
what I really meant is that a brain can be viewed as running a program
(though there is very little distinction between its hardware and
software). It is deterministic (or probabilistic?) and obeys certain
laws in some sort of causal sequence. I hope this is somewhat clear.
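For concreteness, here is a toy sketch of what "Turing machine" means
in this claim: a table of (state, symbol) -> (new state, symbol to
write, head move) transitions and a loop that applies them. The
machine below (a binary incrementer) and all its states and symbols
are my own invented illustration, nothing more:

```python
# Minimal Turing machine simulator. Transitions map
# (state, symbol) -> (new_state, symbol_to_write, head_move).
def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: increment a binary number. Scan right to the end
# of the input, then carry leftward.
inc = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "L"),
    ("carry", "_"): ("halt",  "1", "L"),
}

print(run_tm(inc, "1011"))  # 1011 + 1 = 1100
```

The point of the equivalence claim is only that anything computable by
such a table-and-tape device is computable by any other
general-purpose machine, speed aside.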

To answer your objection on the grounds that we don't know enough
about the brain's functionality at various levels, I would ask: how
much information is enough? We know roughly how a neuron works;
(roughly) when it fires, etc. It seems to follow standard causality.
Non-deterministic effects don't seem to play much of a role at that
high a level. Even higher levels (symbolic?) would seem to be even
less nondeterministic, even though we cannot yet begin to interpret
them. Based on that information, and Occam's razor, we have enough
information to make working assumptions. Unless quantum mechanical
effects make a drastic difference, or high-level effects behave
strangely in a way I cannot even conceive of, the brain can be assumed
to work as some sort of machine. Anyway, do we truly know how
anything works? Even the mechanical wonder in front of you: how does
a transistor work, in all its detail?
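The "(roughly) when it fires" picture can be caricatured as a leaky
integrate-and-fire model: weighted input accumulates, old charge
leaks away, and the cell spikes when a threshold is crossed. The
threshold, leak, and weights below are invented illustrative numbers,
not measured values; the point is only that the rule is mechanical:

```python
# Crude leaky integrate-and-fire neuron (illustrative parameters).
def simulate_neuron(inputs, weights, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for t, xs in enumerate(inputs):
        # Leak some of the old potential, add the new weighted input.
        potential = leak * potential + sum(w * x for w, x in zip(weights, xs))
        if potential >= threshold:
            spikes.append(t)   # the neuron "fires"
            potential = 0.0    # and resets
    return spikes

# Two input lines, steady drive on the first one.
print(simulate_neuron([(1, 0)] * 6, weights=(0.4, 0.8)))  # -> [2, 5]
```

Everything here is ordinary causal bookkeeping, which is all the
argument needs at this level of description.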

>|> The issue seems more to be "Can a simulation of a brain think?"
>|> And, of course, this brings up the question of what we mean by thinking...
>Turing test roughly defines thinking as conversing in a manner indistinguishable
>from a human being.  Thinking is what humans do.

I always interpreted the Turing test to be a test which gives evidence
for thinking, and neither proves nor defines what thinking is. In
other words, if some machine passes the Turing test, that gives us a
good indication that it may be carrying out processes which we define
to be "thinking." This evidence is about as strong as the evidence we
get when interacting with another person (minus reasoning by analogy,
which is fairly weak anyway). The strength of the Turing test, IMHO,
is that it DOES NOT define what thinking is.



		--Josh
