Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!gatech!psuvax1!rutgers!argos.montclair.edu!hubey
From: hubey@pegasus.montclair.edu (H. M. Hubey)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <hubey.786764192@pegasus.montclair.edu>
Sender: root@argos.montclair.edu (Operator)
Organization: SCInet @ Montclair State
References: <CzFr3J.990@cogsci.ed.ac.uk> <CzH78F.4Eq@gpu.utcc.utoronto.ca> <CzqHIB.1nA@cogsci.ed.ac.uk> <1994Nov24.135351.25743@unix.brighton.ac.uk> <D00167.91w@spss.com> <3bu0gs$fff@sun4.bham.ac.uk>
Date: Wed, 7 Dec 1994 01:36:32 GMT
Lines: 46

axs@cs.bham.ac.uk (Aaron Sloman) writes:

>markrose@spss.com (Mark Rosenfelder) writes:

>> -- It's easy to fool.  Turing seemed to think that people will not on
>> the whole accept "intelligence" in machines.  On the contrary, many
>> people accept it all too readily, or even figure it's already been done.

>Bob French wrote an article arguing the opposite: he claimed that
>the Turing test is unreasonably difficult, as one can ask questions
>that only someone with a similar lifestyle and physiology could be
>expected to answer. He concludes that the full TT is really only a
>test of the ability to "think exactly like human beings", and
>therefore of no interest as a general test.

This is just a general comment:

Suppose we try another thought experiment. Suppose we have constructed
a machine that knows everything knowable in every formalizable field:
all branches of mathematics, physics, engineering, chemistry, biology,
and so on, and then even the social sciences like history,
archaeology, psychology, and more besides.

So the machine passes every IQ test, SAT, GRE, and anything else
thrown at it with flying colors.

Then when humans give it the TT, they can easily tell that it's
a machine, because it "knows too much" and "remembers too much",
and no human could possibly solve sets of equations so fast, etc.
So in order to pass the TT, we cripple the machine: we make its
knowledge and calculation faulty and make it guess (we might
say it uses fuzzy reasoning), and then it starts passing the TT.
After a while the machine is finely tuned (by fiddling with the
parameters) so that it makes just enough mistakes to pass for
human.
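The tuning step above can be sketched as a single knob: an error
rate that we raise until the machine's miss rate looks human. This
is purely a hypothetical illustration (the names `answer` and
`error_rate` are mine, not anything from the argument):

```python
import random

def answer(true_value, error_rate, rng=random.random):
    """Toy oracle for the thought experiment.

    With probability `error_rate` it returns a deliberately wrong
    guess; otherwise it returns the exact answer. error_rate = 0 is
    the all-knowing machine; raising it "cripples" the machine.
    """
    if rng() < error_rate:
        return true_value + 1  # an injected "human" mistake
    return true_value

# The perfect machine (error_rate = 0) fails the TT by being too
# good; tuning error_rate upward trades correctness for humanness.
```

The paradox is visible in the knob itself: the quantity we adjust
to pass the test moves in the opposite direction from competence.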

Now we are in a paradoxical situation: the machine has to be made
less "intelligent" in order to pass the TT.




--
						-- Mark---
....we must realize that the infinite in the sense of an infinite totality, 
where we still find it used in deductive methods, is an illusion. Hilbert, 1925
