Newsgroups: comp.ai,comp.ai.philosophy,alt.consciousness,comp.ai.alife
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!usenet.eel.ufl.edu!spool.mu.edu!howland.reston.ans.net!vixen.cso.uiuc.edu!uchinews!kimbark!gal2
From: gal2@kimbark.uchicago.edu (Jacob Galley)
Subject: Re: Computers--Next stage in evolution? Hmmmmmm.....
X-Nntp-Posting-Host: midway.uchicago.edu
Message-ID: <D648yp.LDt@midway.uchicago.edu>
Sender: news@midway.uchicago.edu (News Administrator)
Reply-To: gal2@midway.uchicago.edu
Organization: The University of Chicago
References: <3jgqon$gke@usenet.INS.CWRU.Edu> <mws.4.00A71980@pond.com> <mws.5.00280780@pond.com> <1995Mar26.184723.13417@galileo.cc.rochester.edu>
Date: Mon, 27 Mar 1995 20:30:25 GMT
Lines: 58
Xref: glinda.oz.cs.cmu.edu comp.ai:28543 comp.ai.philosophy:26306 comp.ai.alife:2855

stevens@prodigal.psych.rochester.edu (Greg Stevens) writes:
>
>In <mws.5.00280780@pond.com> mws@pond.com (Fred Mitchell) writes:
>>
>>stevens@prodigal.psych.rochester.edu (Greg Stevens) writes:
>>>
>>>Part of the reason we get so much meaning out of symbols is because of purely
>>>internal reconstruction.  We construct meaning based on our associations
quite independent of the intent or associations of the speaker.  This is
>>>not information being "transmitted."  
>
>>Sure it is. The whole ideal of "encoding" is to use a set of symbols that we 
>>both mutually agree as to their meaning. We substitute entire blocks of 
thought for a sequence of symbols. In essence, we are "encoding" our thoughts.
>
>But our "encoding" mechanism is based purely on our experience, and other
>people's "decoding" mechanism on theirs  -- why is there any reason to
>assume that the thoughts prompted in the listener are the same as those
initiated in the speaker?  The only requirement for FUNCTIONALLY testing
>language effectiveness is behavioral -- there is no functional way of
>testing how "accurately" the listener's mental states match the speaker's.

I'm going to try to explain why the concepts of information and coding
are not very useful in this context.

You can think of the information-coding/transmission explanation of
language comprehension as a "behavioral" explanation.  You can explain
what B's brain does when A says P to B by saying that B is encoding
the information that A is communicating, via natural language.  There
is nothing inherently wrong with this type of explanation (so long as
B understands what A's utterance of P meant).  But we must recognize
that it only explains roughly what B's brain does; it explains nothing
about how B's brain works.  And besides, as Greg indicates, it cannot
be scientifically tested.
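As a toy sketch of this point (invented here, not from anyone's model):
suppose A and B share a symbol inventory, but each maps symbols to
private associations built from different experience.  Only the bare
symbol crosses the channel, so a behavioral test can succeed even
though the internal "meanings" never match:

```python
# A's private associations, built from A's experience (made up for the sketch)
a_associations = {"dog": {"pet", "loyal", "childhood collie"}}
# B's private associations, built from B's different experience
b_associations = {"dog": {"pet", "guard animal", "neighbor's terrier"}}

def utter(thought_symbol):
    """'Encoding': A substitutes a symbol for a block of thought."""
    return thought_symbol  # only the bare symbol crosses the channel

def comprehend(listener_assoc, symbol):
    """'Decoding': B reconstructs meaning from B's own associations."""
    return listener_assoc.get(symbol, set())

symbol = utter("dog")
a_meaning = a_associations["dog"]
b_meaning = comprehend(b_associations, symbol)

# Behavioral test passes: both take the symbol to denote a pet-type thing...
assert "pet" in a_meaning and "pet" in b_meaning
# ...yet the mental states prompted differ, and nothing that crossed the
# channel lets us measure how closely they match.
assert a_meaning != b_meaning
```

The "information transmitted" here is just the symbol; everything we
would call its meaning is reconstructed on B's side.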

A behavioral explanation of language in the brain might be adequate
for the higher-level purposes of sociolinguistics, but it is
definitely not adequate for neurolinguistics.  The latter requires an
explanation of the dynamics of interacting populations of neurons in
B's brain, i.e. an "operational" explanation which is derivable from a
body of explicitly defined laws (e.g. the laws of thermodynamics).

A truly operational explanation of language in the brain would
probably amount to an explanation along the lines of what Greg Stevens
suggested, where upon hearing, B constructs for himself the relevant
aspects of the utterance and the utterer's meaning.  The alternative
idea that language codes and transmits information simply has no place
in an operational explanation of language, because it is impossible to
rigorously describe the semantic content of information in the way
that an operational explanation demands.

Jake.

-- 
The artificial sundering of res cogitans and res extensa in the heritage of
dualism, with the extrusion between them of "life"---this double-faced ontology
of death creates problems which it has rendered unsolvable from the start.
								<-- Hans Jonas
