Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!cs.utexas.edu!news.unt.edu!hermes.oc.com!internet.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <D0Cry4.CCv@spss.com>
Sender: news@spss.com
Organization: SPSS Inc
References: <CzFr3J.990@cogsci.ed.ac.uk> <1994Nov24.135351.25743@unix.brighton.ac.uk> <D00167.91w@spss.com> <3bu0gs$fff@sun4.bham.ac.uk>
Date: Mon, 5 Dec 1994 19:39:39 GMT
Lines: 52

In article <3bu0gs$fff@sun4.bham.ac.uk>,
Aaron Sloman <axs@cs.bham.ac.uk> wrote:
>markrose@spss.com (Mark Rosenfelder) writes:
>> -- It's easy to fool.  Turing seemed to think that people will not on
>> the whole accept "intelligence" in machines.  On the contrary, many
>> people accept it all too readily, or even figure it's already been done.
>
>Bob French wrote an article arguing the opposite: he claimed that
>the Turing test is unreasonably difficult, as one can ask questions
>that only someone with a similar lifestyle and physiology could be
>expected to answer. He concludes that the full TT is really only a
>test of the ability to "think exactly like human beings", and
>therefore of no interest as a general test.

Turing himself expressed similar qualms.  The intuition here, I think,
is that details of human biology aren't relevant to intelligence.
That's reasonable as far as it goes, but IMHO its implications are not
fully grasped.  These intuitions show that we do have some specific
notions about what is or is not part of intelligence.  Why not make these
notions explicit, instead of maintaining that the question of what
intelligence is cannot be answered?

Making these intuitions explicit would also allow them to be analyzed and
criticized.  For instance, it evidently seemed obvious to Turing that the
ability to enjoy strawberries and cream was irrelevant to the problem of
intelligence.  But this is open to question; Lakoff for instance maintains
that meaning is based on direct sensory experience, which raises questions
about the intelligence of a system that doesn't have any.

>> -- Focussing on external behavior as it does, the TT encourages the notion
>> that only algorithmic structure, rather than any physical fact about
>> human brains, produces intelligence.  That may be, but it should be a
>> matter for investigation, not an initial assumption.
>
>I don't see how it even focuses on algorithmic structure. For any
>collection of external behaviour there will generally be infinitely
>many different algorithms capable of producing the same behaviour.
>Behaviour is just behaviour. You have to make very strong
>assumptions to infer anything from it.

I didn't say the TT *focusses* on algorithmic structure, only that it
does not encourage attention to any physical fact about human brains.

This is not to say that an AI needs to be built the way a brain is, any
more than an airplane needs to flap its wings.  On the other hand,
one can learn a lot about how to build a flying machine from closely
investigating birds.
Thanks for the Shannon/McCarthy quote, whose point I appreciate.  
Now how would you (or they) respond to Daryl McCullough's contentions
about a scale of optimized AIs ending in the HLT?
