From newshub.ccs.yorku.ca!torn!utcsri!rpi!uwm.edu!caen!kuhub.cc.ukans.edu!spssig.spss.com!markrose Thu Oct  8 10:10:23 EDT 1992
Article 7039 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!uwm.edu!caen!kuhub.cc.ukans.edu!spssig.spss.com!markrose
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding
Message-ID: <1992Sep25.160149.26882@spss.com>
From: markrose@spss.com (Mark Rosenfelder)
Date: Fri, 25 Sep 1992 16:01:49 GMT
Sender: news@spss.com (Net News Admin)
References: <19qphvINN7au@darkstar.UCSC.EDU>
Organization: SPSS Inc.
Lines: 51

In article <19qphvINN7au@darkstar.UCSC.EDU> wolfgang@cats.ucsc.edu 
(Robert F Dougherty) writes (quoting me):
>>[Grounding] has to do with the basis for meaning.  The word 'cat' doesn't mean
>>anything in itself.  It does mean something for humans, because we can
>>associate it with our real-world experience with cats.  You can think of
>>grounding as a formalized version of the folk notion that you don't
>>really know something until you've experienced it yourself.
>
>What do you mean by experiencing it yourself?  We humans are detached
>from the physical world.  All we can "know" or "experience" is via
>neural impulses [...]

Why do you identify "we" with the brain, not with the entire organism?
You are *not* detached from the physical world!  
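
To restate the grounding point concretely, here's a toy Python sketch
(my own illustration, with hypothetical percept-record names - not
anyone's actual proposal).  An ungrounded system can only send you from
one symbol to another; a grounded one eventually bottoms out in records
of experience.

  # Ungrounded: every symbol is defined only by more symbols.
  ungrounded = {
      "cat":    ["feline", "pet"],
      "feline": ["cat"],            # ...and around we go
  }

  # Grounded: the symbol also points at (hypothetical) stored
  # records of real-world experience.
  grounded = {
      "cat": {
          "symbols":  ["feline", "pet"],
          "percepts": [("vision", "furry-quadruped-0042"),
                       ("sound",  "meow-0007"),
                       ("touch",  "fur-texture-0003")],
      },
  }

  def meaning(system, symbol):
      entry = system[symbol]
      if isinstance(entry, list):
          return entry              # just more symbols: no exit
      return entry["percepts"]      # a way out, into experience

The dictionary definitions just circle back on themselves; the
perceptual records are the only exit.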

>Our brains merely manipulate codes.  Granted, these codes are nothing
>like the "symbols" you refer to (if I got it right)- they are more complex.
>The brain uses complex coding schemes with both spatial parameters
>(which specialized area a neuron sits in, and where on its dendritic
>structure the stimulation from other neurons arrives) and temporal
>parameters (the rate at which neurons fire).  This makes for some
>rather complex "symbols" that are meaningless when removed from context.

You can't assume that the symbols in a computer system are any less complex
(or aren't meaningless out of context).
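
To make the out-of-context point concrete (a toy Python fragment of my
own): the same four bytes decode to entirely different things depending
on which convention you bring to them.

  import struct

  raw = b"\x43\x41\x54\x00"               # four arbitrary bytes

  as_text  = raw[:3].decode("ascii")      # 'CAT'
  as_int   = struct.unpack(">I", raw)[0]  # 1128354816
  as_float = struct.unpack(">f", raw)[0]  # about 193.328

  # Same bit pattern, three "meanings".  None of them is the real one;
  # the bytes mean nothing until some context decodes them.
  print(as_text, as_int, as_float)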

>(My theory gets tenuous...)
>Meaning, understanding, consciousness, etc. are merely emergent properties
>from the incredible complexity of our machine, the brain.  Not any old
>complex machine will demonstrate these properties- our brain has been
>working at it for millions of years.  Even then, with some relatively
>minor changes (genetic aberrations, trauma...), a brain may fail to fully
>achieve some or all of these emergent properties (think of severely learning
>disabled people).  (I am making some heavy assumptions about the mental 
>lives of others- but I'm in psychology, so I have license! ;-)

Here at SPSS we have a software system containing a million lines or so
of code, and I can attest that it's pretty complex.  Is it self-aware yet?

I just don't see how mere complexity produces meaning and consciousness.

>As for our ability to build a very complex machine from which these
>properties will emerge- I have faith!  I think complex interaction is
>key.  I don't think it matters if input comes from transducers or
>less directly (from a database, for ex.).  I do think lots-o-info is
>needed, especially if you want the thing to learn about something
>as complex as the world.  

About lots-o-info I completely agree, as I suppose most AI researchers
would: does anyone really believe any more that simply simulating human
reasoning will be sufficient?
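
For what it's worth, here is the shape of the transducers-vs-database
claim itself (a toy sketch of the claim, with a hypothetical
read_sensor() standing in for real hardware): the learner sees only a
stream of values either way.

  import random

  def read_sensor():
      # Stand-in for a hardware transducer (hypothetical).
      return random.random()

  def from_transducer():
      while True:
          yield read_sensor()

  def from_database(path):
      # The same kind of readings, replayed from a stored file.
      with open(path) as f:
          for line in f:
              yield float(line)

  def learner(stream, n=1000):
      # Consumes numbers; it has no way of telling a live sensor
      # from a replayed database.
      total = 0.0
      for i, x in enumerate(stream):
          if i >= n:
              break
          total += x
      return total / n

  print(learner(from_transducer()))

Whether that interface-level sameness settles the grounding question
is, of course, exactly what's in dispute.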


