From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!usc!wupost!uunet!wang!lee Thu Feb 20 15:22:14 EST 1992
Article 3876 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3876 sci.philosophy.tech:2162
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!usc!wupost!uunet!wang!lee
From: lee@wang.com (Lee Story)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Table-lookup Chinese speaker
Message-ID: <LEE.92Feb19150606@meercat.wang.com>
Date: 19 Feb 92 20:06:06 GMT
References: <1992Jan27.023623.8118@husc3.harvard.edu> <6522@pkmab.se>
	<1992Jan28.103015.8159@husc3.harvard.edu> <6555@pkmab.se>
Sender: news@wang.com
Organization: Wang Laboratories, Inc.
Lines: 50
In-Reply-To: ske@pkmab.se's message of 8 Feb 92 11:29:39 GMT


Sorry if this tends to pull the thread a bit away from the specific
issues that Messrs Eriksson and Zeleny were discussing, but the excellent
summary of possibilities for storage and lookup presented by Eriksson
reminds me how close this subject is to the boundary of present
knowledge in physiology, information theory, and complexity theory.

In article <1992Jan27.023623.8118@husc3.harvard.edu> (Mikhail Zeleny) writes:

  Compare two programs generating the same
  kind of output for the same kinds of input (e.g. two sort algorithms).  Why
  would the intensional difference in program structure be relevant to our
  issue?

As Mikhail seems to suggest, it is probably not necessary to model the
physical processes occurring in a human mind precisely in order to produce
equivalent "thought".  But we don't know enough (either factually or
mathematically) to assert this with confidence.
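To make the point concrete (a toy example of my own, not anything from MZ's
article): two sort routines that differ "intensionally" -- in internal
structure and running time -- yet are extensionally identical, agreeing on
every input:

```python
def insertion_sort(xs):
    """O(n^2): build the result by inserting each element into place."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    """O(n log n): recursively split the list, then merge sorted halves."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

sample = [5, 3, 8, 1, 3]
# Different internal processes, indistinguishable from the outside:
assert insertion_sort(sample) == merge_sort(sample) == sorted(sample)
```

Whether the analogous internal differences matter for minds is, of course,
exactly what is in dispute.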

In article <6555@pkmab.se> ske@pkmab.se (Kristoffer Eriksson) writes:
   Next, as was recently suggested, one can take all past input together with
   the current input, as the address into the table. Now we start getting
   somewhere. This is quite powerful, as it definitely allows the system to
   modify its behaviour based on past experience to any extent desired.
   However, it still requires every possible combination of events, in any
   sequence, to be worked out at table construction time, and the table
   quickly takes on an absolutely unmanageable size. At the same time, the
   table will probably contain vast amounts of redundant data. The table size
   explodes with the desired run time, without bound. (3)
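Kristoffer's explosion is easy to put numbers on. A back-of-the-envelope
sketch (mine, assuming a vocabulary of V distinct possible inputs per step):
a table addressed by the entire input history up to step t needs an entry
for every sequence of length 1 through t, i.e. V + V^2 + ... + V^t entries.

```python
def table_entries(vocab_size, steps):
    """Entries needed when the address is the whole input history:
    one entry per input sequence of each length from 1 to `steps`."""
    return sum(vocab_size ** k for k in range(1, steps + 1))

# Even a toy vocabulary of 100 symbols is hopeless after a short exchange:
for t in (1, 2, 5, 10):
    print(t, table_entries(100, t))
```

The count grows without bound in t, which is precisely the "(3)" complaint.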

But there is no evidence at all that humans retain "all past input".  Rather,
we seem to approximate a "memory" by some elaborate spatial and temporal
pattern matching, and that's good enough.  The "table-driven" speaker
needn't give the exact same answers as the human, because another human
wouldn't.  And MZ's comments on the use of the external world as a memory
extension seem relevant.
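For what it's worth, here is a toy sketch of what I mean by "approximating a
memory" -- the class, the fixed-window summary, and the canned-reply table
are entirely my invention, just to show that forgetting keeps the table
bounded where the full-history scheme above does not:

```python
from collections import deque

class ApproximateSpeaker:
    """Responds from a table keyed on a bounded summary of recent input,
    rather than on the unbounded full input history."""

    def __init__(self, window=3):
        self.recent = deque(maxlen=window)  # everything older is forgotten
        self.table = {}                     # keys are fixed-size summaries

    def summarize(self):
        # Hypothetical "pattern": just the last few inputs, order kept.
        return tuple(self.recent)

    def respond(self, utterance):
        self.recent.append(utterance)
        # Look up a canned reply for this pattern; fall back to a default.
        return self.table.get(self.summarize(), "hmm")

    def teach(self, reply):
        self.table[self.summarize()] = reply
```

Such a speaker will not give exactly the same answers a human would -- but,
as noted above, neither would another human.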

My own intuition is that "in vacuo" philosophizing will do little to
help us understand consciousness, or even language use.  We shall probably
have to attack understanding from at least two directions:  (1) finding
out what acquisition, storage, and retrieval mechanisms are even possible
(the PDP Research Group and other high-level approaches), and (2) matching
the possible against the physical evidence (the approach of the
physiologist).
--

------------------------------------------------------------------------
  Lee Story (lee@wang.com) Wang Laboratories, Inc.
     (Boston and New Hampshire AMC, and Merrimack Valley Paddlers)
------------------------------------------------------------------------