From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sdd.hp.com!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!mcdermott-drew Tue Jan 28 12:15:57 EST 1992
Article 3021 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3021 sci.philosophy.tech:1944
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sdd.hp.com!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Subject: Re: Table-lookup Chinese speaker
Message-ID: <1992Jan22.204734.20123@cs.yale.edu>
Sender: news@cs.yale.edu (Usenet News)
Nntp-Posting-Host: atlantis.ai.cs.yale.edu
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
References: <1992Jan21.182524.7880@husc3.harvard.edu> <1992Jan22.161342.17781@cs.yale.edu> <1992Jan22.200714.20798@bronze.ucs.indiana.edu>
Date: Wed, 22 Jan 1992 20:47:34 GMT
Lines: 51

  In article <1992Jan22.200714.20798@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

  >I thought we agreed last time this came up that references to
  >immediate spatiotemporal context, and to events since a given time
  >(e.g., 1990), should be outlawed.  

I must have forgotten that agreement; I don't think it's reasonable;
see below.

  >We assume e.g. that the system has
  >a good knowledge of events up to 1990, and has been locked away in a
  >nuclear bunker since then.  (Of course the person it's being compared
  >to has been locked away too...)

There's the problem.  The Turing Test is now to tell the difference
between a person locked in a bunker and a computer.  The problem is to
actually find such a person.  Note that he'll have to be locked in a
bunker a *long* time while we build this gigantic table.  Also, he'll
have to stay locked up while we run the test.  He may be panicky and
distracted by the time we run it, so we'll have to be sure to mimic
those behaviors in the table ....
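For concreteness, the Humongous Table scheme amounts to a lookup keyed
on the entire conversation so far.  A toy sketch (every name and table
entry below is my own illustration, not part of the thought
experiment; the real table would enumerate *all* possible histories):

```python
# Toy sketch of a "Humongous Table" system: the real table would have
# an entry for every possible conversation prefix; we store two.
# (All entries and names here are hypothetical illustrations.)

table = {
    # key: the whole dialogue so far; value: the canned reply
    ("Hello.",): "Hi there.",
    ("Hello.", "How are you?"):
        "A bit stir-crazy, frankly -- I've been in this bunker a while.",
}

def humongous_table_reply(history):
    """Answer by pure lookup on the full conversation history."""
    return table.get(tuple(history),
                     "(no entry -- the real table would never miss)")

print(humongous_table_reply(["Hello."]))  # Hi there.
```

Note that nothing here computes an answer: the keys grow with the
conversation, which is why the table (and the bunker stay of the
person it mimics) has to be so enormous.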

But why go to all this trouble?  What you wanted was a case that would
behave as if it understood without actually understanding.  Here's a
simple way to achieve that: Connect up some monkeys, or, better yet,
Geiger counters, to a typewriter.  The typewriter prints out
completely random strings of characters.  Now suppose that through an
incredible series of coincidences, it just happens that as I sit down to
test the system, it always answers with strings of characters that
make perfect sense in context.

This requires only *slightly* more miracles than the Humongous Table
(and a lot less funding), and I think everyone would agree that it (a)
behaves as if it understands, and (b) doesn't really.
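To put a rough number on the miracle (keyboard size, reply length, and
conversation length below are my own illustrative assumptions): if each
keystroke is uniformly random over ~95 printable characters, the odds
against chance producing even one short sensible conversation are
astronomical.

```python
import math

# Illustrative assumptions, not figures from the thought experiment:
KEYS = 95        # printable-ASCII-sized keyboard
REPLY_LEN = 60   # characters in one sensible reply
EXCHANGES = 20   # replies in one test conversation

# Each character is a 1-in-KEYS event, so the odds against typing the
# whole conversation by chance are KEYS ** (REPLY_LEN * EXCHANGES).
# We report the base-10 exponent rather than the (underflowing) value.
digits = REPLY_LEN * EXCHANGES * math.log10(KEYS)
print(f"odds against: about 1 in 10^{digits:.0f}")
```

Under these assumptions the coincidence is roughly a 1-in-10^2373
event per conversation, which is the sense in which it takes only
*slightly* more miracles than building the table.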

Now suppose you find a typewriter behaving this way.  Four hypotheses
occur to you (in increasing order of probability):

   1. It's a miraculous-coincidence system
   2. It's a humongous-table system
   3. AI has succeeded
   4. There's a person on the other end

Surely 1 and 2 are not serious contenders.  Hence although they are
technically counterexamples to Behavioral Strong AI ("if it behaves as
if it understands, then it does"), they are not really hypotheses that
anyone would entertain.

                                             -- Drew McDermott
