Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!spool.mu.edu!sgiblab!sgigate.sgi.com!olivea!news.hal.COM!decwrl!netcomsv!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <jqbD0qwBw.BF3@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <3bu0gs$fff@sun4.bham.ac.uk> <jqbD0DG73.4uu@netcom.com> <D0GFxv.5zL@gpu.utcc.utoronto.ca> <D0K5EA.CEv@cogsci.ed.ac.uk>
Date: Tue, 13 Dec 1994 10:40:44 GMT
Lines: 64

In article <D0K5EA.CEv@cogsci.ed.ac.uk>,
Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>In article <D0GFxv.5zL@gpu.utcc.utoronto.ca> pindor@gpu.utcc.utoronto.ca (Andrzej Pindor) writes:
>
>>Another problem, most often ignored by people pulling out of their sleeves
>>the HLT example (see "merely" above), is the complexity of a search algorithm
>>in this case, considering the size and dimensionality of the database. Note
>>that it also has to include the past history of the conversation and a decision
>>process for which branch to take (this decision process would be a reflection of
>>a "personality"). Personally I do not see any guarantee that a program
>>utilizing HLT would be any simpler than a program generating the conversation.
>>Regardless, I do agree that the stress on "how" is a mistake. Hans Moravec
>>argued this very convincingly in terms of optimization.
>
>So far as I can tell, the HLT program could be very simple.
>
>There's a very large tree.  The program has a pointer to a node
>in the tree.  That represents where it is in the conversation.
>At a given node, there's a branch for each input that might arrive.  
>The program finds the right branch and follows it.  If the case
>is conversation-by-teletype, there could be a branch for each 
>character in the character set.  Finding the right branch is
>then trivial.  Some nodes in addition to their branches say
>"output this: ...".
>
>The complexity is all in the data.

Yep.
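Dalton's tree-walker really can be spelled out in a few lines. A minimal
sketch in Python (the node representation and names are my own illustration,
not anything from the thread):

```python
# Sketch of the Humongous Look-up Table (HLT) program.  All the
# complexity lives in the tree *data*; the walker itself is trivial.

def make_node(output=None):
    """A tree node: optional text to emit, plus one branch per input character."""
    return {"output": output, "branches": {}}

# A toy tree covering a single two-character exchange ("hi" -> "hello").
root = make_node()
h = make_node()
root["branches"]["h"] = h
h["branches"]["i"] = make_node(output="hello")

def converse(root, chars):
    """Follow one branch per input character, collecting any outputs."""
    node = root
    replies = []
    for c in chars:
        node = node["branches"][c]   # finding the right branch is trivial
        if node["output"] is not None:
            replies.append(node["output"])
    return replies

print(converse(root, "hi"))   # -> ['hello']
```

The program is a dozen lines no matter how big the tree gets; a tree
covering arbitrary teletype conversation would just have vastly more nodes.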

>And there it appears as size.

Well, complex data requires a large tree, but the converse doesn't hold.
The complexity appears as the relationships within the data which prevent
it from being reduced.
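To see why sheer size isn't complexity: a tree can be astronomically large
in extension yet be generated by a tiny program, i.e. fully reducible. A
hypothetical Python sketch (nothing here is from the thread):

```python
# A "virtual" HLT whose explicit tree would have |charset|**n nodes at
# depth n, yet the whole thing reduces to a few lines: every branch at a
# given depth leads to the same kind of node, so the data has almost no
# internal structure to preserve.

def lazy_node(depth):
    """Materialize the node at a given depth on demand."""
    return {"output": "I see." if depth > 0 and depth % 5 == 0 else None}

def respond(chars):
    """Walk the virtual tree, one level per input character."""
    replies = []
    for depth, _c in enumerate(chars, start=1):
        node = lazy_node(depth)
        if node["output"]:
            replies.append(node["output"])
    return replies

print(respond("hello"))   # -> ['I see.']
```

Genuinely complex conversational data would resist this kind of
generation: the relationships among its entries are what keep the tree
from collapsing into a small program.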

>In any case, the emphasis on how is no mistake, although Aaron
>Sloman and I may be the only people here who believe this.
>Hans Moravec's arguments are entertaining speculation (e.g.
>fictional characters as platonic entities that feel pain,
>interpretation in which rocks are interpreting themselves
>as poets) but do not amount to what I would call a convincing
>case.  (The same is true of his optimization argument, though
>it's less entertaining.)
>
>Now, the case for "how" may not be convincing either.  But in
>that case we say "we don't know", not "`how' is a mistake".
>
>The TT defense is shaping up nicely.  How programs work is a
>mistake (only their I/O matters?), and let's see...

Well, I wrote something longer, but some combination of bad software and loss
of carrier ate it, and Mark Rosenfelder said much of what needed to be said.
I will simply point out that "How programs work is a mistake" is incoherent,
just as "defenders of the TT" or "defenders of AI" are incoherent.  I care
about how programs work; everyone here cares about how programs work.  Totally
independently, I debate the claim that you need more than textual exchange,
possibly including knowing how the program works, in order to decide whether
an entity understands.  It is, though, no wonder that someone who thinks that
"how programs work" can be a "mistake" cannot distinguish between different
statements that simply happen to contain the phrase "how programs work" or "the
Turing Test".  Context, models, and meaning matter.

-- 
<J Q B>
