From newshub.ccs.yorku.ca!torn!utzoo!helios.physics.utoronto.ca!utcsri!rpi!uwm.edu!spool.mu.edu!uunet!trwacs!erwin Thu Oct  8 10:10:36 EDT 1992
Article 7058 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utzoo!helios.physics.utoronto.ca!utcsri!rpi!uwm.edu!spool.mu.edu!uunet!trwacs!erwin
From: erwin@trwacs.fp.trw.com (Harry Erwin)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding
Message-ID: <735@trwacs.fp.trw.com>
Date: 29 Sep 92 14:16:49 GMT
References: <1992Sep25.160149.26882@spss.com> <717645108@sheol.UUCP>
Organization: TRW Systems Division, Fairfax VA
Lines: 47


Have you looked at what Walter Freeman did? It appears that the neocortex
downloads instructions (perhaps even a "program") to the nucleus of the
olfactory bulb identifying objects to smell for and the corresponding
patterns. There are no invariants in the nucleus; rather, there are what
appear to be programs or patterns. With each breath, the nucleus resets to
an initial state and then rapidly (and chaotically) amplifies the patterns
seen and deamplifies the patterns not seen. This then results in one of
the following:
1. A report on specific semantic objects detected,
2. A report of non-detection, or
3. A report of something non-recognized being detected.
A category 3 detection results in an orientation reaction.
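
As a caricature, the breath cycle above might be sketched as a matched-filter
pass over the downloaded patterns. Everything here is invented for
illustration (the pattern names, the vectors, the `floor` threshold), and the
chaotic amplify/deamplify dynamics are abstracted into a single comparison
against each downloaded pattern:

```python
import numpy as np

# Hypothetical "downloaded" target patterns; names and values are
# illustrative only, not anything from Freeman's data.
PATTERNS = {
    "banana": np.array([1., 0., 1., 0., 1., 0.]),
    "smoke":  np.array([0., 1., 0., 1., 0., 1.]),
}

def sniff(sensors, patterns, floor=0.8):
    """One 'breath': reset to an initial state, compare the input
    against each downloaded pattern, and report one of the three
    categories (detected / non-detection / unrecognized)."""
    x = np.asarray(sensors, dtype=float)
    if np.linalg.norm(x) < 1e-9:
        return ("non-detection", None)              # category 2
    x = x / np.linalg.norm(x)                       # reset for this breath
    # overlap of the input with each pattern (stands in for the
    # chaotic amplification of seen patterns)
    overlaps = {name: float(np.dot(x, p / np.linalg.norm(p)))
                for name, p in patterns.items()}
    name, score = max(overlaps.items(), key=lambda kv: kv[1])
    if score >= floor:
        return ("detected", name)                   # category 1
    return ("unrecognized", None)                   # category 3: orient!
```

A novel odor that overlaps no downloaded pattern strongly enough falls
through to category 3, the orientation reaction.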

Apparently the only syntactic grounding is the exchange between the
individual olfactory sense cells and the nucleus of the bulb, which is
about as semantically grounded as you can be. Pribram has evidence that
the same thing happens in the ear and eye. We may well find that the
spinal cord is semantically grounded, too, for touch and body position.
These downloaded "programs" have to be sufficiently sophisticated that
they can control how they interpret the sensory data they monitor. I
speculate that they include a (simple) dynamic model of the object they
are monitoring.
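
A minimal sketch of that speculation, assuming nothing more than a
constant-velocity model of the monitored object (the class name, the
tolerance, and the 0.5 correction gain are all invented for illustration):

```python
class DownloadedMonitor:
    """Hypothetical 'downloaded program' that carries a simple dynamic
    model (constant velocity) of the object it monitors, and uses the
    model to interpret each new sensory reading."""

    def __init__(self, position=0.0, velocity=0.0, tolerance=1.0):
        self.position = position
        self.velocity = velocity
        self.tolerance = tolerance

    def step(self, observed):
        # the model's expectation for this reading
        predicted = self.position + self.velocity
        error = observed - predicted
        # a reading the model cannot explain would trigger something
        # like the orientation reaction
        status = "surprise" if abs(error) > self.tolerance else "tracking"
        # nudge the internal model toward the observation
        self.velocity += 0.5 * error
        self.position = predicted + 0.5 * error
        return status
```

So long as the object behaves as modeled, the program quietly tracks it;
a jump outside the model's tolerance gets flagged upward instead.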


-- 
Harry Erwin
Internet: erwin@trwacs.fp.trw.com