From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima Sun May 31 19:04:20 EDT 1992
Article 5928 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Virtual vs. Real
Keywords: transduction, analog
Message-ID: <32@tdatirv.UUCP>
Date: 26 May 92 19:38:54 GMT
References: <1992May20.034459.8223@Princeton.EDU> <6906@pkmab.se> <18039@plains.NoDak.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 27

In article <18039@plains.NoDak.edu> vender@plains.NoDak.edu (Brad Vender) writes:
|I have a possible solution to the grounding problem:
|  semantics are grounded because they refer to stimuli sets.
|  The origin of these stimuli can be the system itself (human or
|  AI) or the outside world processed through the senses.

Excellent.  I think this covers most problems with 'grounding'.
(At least it seems so to me).

This seems to imply that the core of semantics is:
	pattern recognition,
and	associative memory.

This seems to fit with what psychologists and neurologists have found out
about biological minds.
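To make the "pattern recognition plus associative memory" idea concrete, here is a toy sketch using a Hopfield-style associative memory.  This is purely my illustration -- nothing in the thread specifies an implementation -- but it shows the two pieces working together: stored "stimulus sets" are patterns, and recognition is recall of the nearest stored pattern from a noisy cue.

```python
# Toy illustration (my own, not from the thread): a Hopfield-style
# associative memory over bipolar (+1/-1) patterns.  "Stimulus sets"
# are stored patterns; recognition = settling a noisy cue onto one.

def train(patterns):
    """Build Hebbian weights from a list of bipolar patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, steps=10):
    """Iteratively update each unit until the state settles."""
    s = list(cue)
    for _ in range(steps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

# Two stored "stimuli"; a corrupted cue is restored to the nearest one.
A = [1, 1, 1, -1, -1, -1]
B = [-1, -1, -1, 1, 1, 1]
w = train([A, B])
noisy = [1, -1, 1, -1, -1, -1]   # pattern A with one flipped bit
print(recall(w, noisy))          # -> [1, 1, 1, -1, -1, -1]
```

The point is only that "refers to a stimulus set" can be cashed out mechanically: the memory maps a partial or corrupted stimulus onto a stored category, which is roughly what the psychological picture above suggests.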

|All of this is still new to me, but I hope someone out there
|  understands it and will respond.  Is the idea good or not?
|  Let me know.

Hey, I like it.

-- 
---------------
sarima@teradata.com				(Stanley Friesen)
or
uunet!tdatirv!sarima
