Newsgroups: alt.philosophy.objectivism,alt.sci.physics.new-theories,sci.physics,sci.physics.particle,comp.ai,comp.ai.philosophy,sci.philosophy.meta,alt.memetics,alt.extropians
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!swrinde!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: A New Theory of Free Will -- continuation of an Open Letter to Professor Penrose
Message-ID: <jqbDM0IHz.G1F@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <jqbDLr4tF.AF@netcom.com> <4eand1$q6n@news.cc.ucf.edu> <jqbDLstq2.8M7@netcom.com> <4edfdp$10o@news.cc.ucf.edu>
Distribution: inet
Date: Tue, 30 Jan 1996 21:03:34 GMT
Lines: 119
Sender: jqb@netcom19.netcom.com
Xref: glinda.oz.cs.cmu.edu sci.physics:168104 sci.physics.particle:7804 comp.ai:36564 comp.ai.philosophy:37251 sci.philosophy.meta:23960

In article <4edfdp$10o@news.cc.ucf.edu>,
Thomas Clarke <clarke@acme.ucf.edu> wrote:
>In article <jqbDLstq2.8M7@netcom.com> jqb@netcom.com (Jim Balter) writes:
>>In article <4eand1$q6n@news.cc.ucf.edu>,
>>Thomas Clarke <clarke@acme.ist.ucf.edu> wrote:
>
>>>The explanation that I would like would be one that would enable
>>>one to actually build a conscious machine; like Hal in 2001, say.
>
>>I didn't ask you what the explanation would achieve, I asked you what type you
>>would accept.  I suspect that, like many, there is no explanation you would
>>accept until you grew out of an incoherent notion of qualia.  It is rather
>>hubristic to think that, just because a particular explanation doesn't satisfy
>>you, it doesn't suffice to build something.
>
>Hubristic?  I can be convinced of the sufficiency of a theory by my
>criteria quite easily.  Show me an intelligent machine that passes
>a Turing test, in that in extended conversation it appears to have qualia
>and is indistinguishable from a human, which presumably has qualia.

You have just shifted ground.  You are no longer asking for an explanation,
rather a demonstration.  "Quite easily", eh?  What a joke.  Meanwhile, while
you sit on your butt complaining of unsatisfactory explanations, people are
working theirs off using such explanations to move forward, and perhaps their
efforts will some day yield such a demonstration.  But not "quite easily".
Feh.

>>  BTW, what explanation do you
>>suppose Stanley Kubrick used to create his conscious machine?  That people can
>>ascribe consciousness to fictional movie constructs but not to hypothetical
>>Chinese Rooms and Humongous Jukeboxes indicates just how confused they are
>>about the issue.
>
>What are you talking about?

Perhaps you failed to read your own note.  You ascribed consciousness to Hal.
Why?  Kubrick and his engineers built Hal.  What explanation did they use to
do so?

>I have no idea if Kubrick had a theory
>of qualia in mind when he made his film.  The film did include glimpses
>of Hal's innards - crystalline rectangles - and they were not recognizable
>as belonging to any current technology.  In that sense he was noncommittal
>about how Hal might work.

But you are quite committed that certain sorts of explanations are insufficient
for building such a machine.  This is strange, given that your actual criteria
(appearance during a Turing test) are completely behavioral.

>Does the Humongous Jukebox = Homongous Look Up Table that has been
>discussed on comp.ai.philosophy ?

Ned Block, who conceived the thought experiment, framed it as a jukebox that
could pluck out the appropriate selection (response).  Note that such a
mechanism passes your "appears to have qualia" test despite lack of any sort
of explanation of qualia.  Here's a particularly humorous comment on this
mechanism from several months back:

[Bill Skaggs]
There is a difference between behavior as a goal and behaviorism as a
method.  I take it as unarguable that it would be very useful to have
machines that are capable of holding a conversation in fluent English.

[Michael Zeleny]
I propose to argue the unarguable.  In view of the theoretical
possibility of a Humongous Lookup Table implementation of the Turing
Test beater, the utility of your proposed device reduces to nothing.


Here a supposedly sensible fellow argues that machines that are
indistinguishable from human beings are *useless* *because* the HLT is
theoretically possible.  Another case of hubris that grants efficacy to one's
satisfaction with an explanation.
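For readers who haven't seen the construction: the HLT is nothing but response
retrieval keyed on the entire conversation so far.  A toy sketch (the function
name and table entries below are invented for illustration; Block's point is
only that such a table is finite in principle for any bounded conversation):

```python
# Toy sketch of Ned Block's "Humongous Lookup Table" thought experiment.
# Hypothetical illustration: the real table would be combinatorially vast,
# but finite, since a bounded-length conversation has finitely many histories.

def hlt_reply(table, history):
    """Pure retrieval: no inference or understanding, just a lookup
    keyed on the whole conversation so far."""
    return table.get(tuple(history), "I'd rather not say.")

# A microscopic stand-in for the astronomically large real table.
TABLE = {
    ("Hello.",): "Hi there.",
    ("Hello.", "Do you have qualia?"): "Of course -- don't you?",
    ("Do you have qualia?",): "What an odd way to open a conversation.",
}

print(hlt_reply(TABLE, ["Hello.", "Do you have qualia?"]))
```

The behavior is purely a function of the stored table, which is why the device
passes a behavioral test while embodying no explanation of qualia at all.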

>>>My suspicion is that we will never be able to build such a machine 
>>>until we incorporate some quantum hardware into our computers.
[...]
>> Perhaps you
>>think this is the wrong sort of device.  What is the right sort of device, and
>>why?
>
>If I knew I'd be wealthier than Bill Gates and wouldn't be wasting
>my time posting on news net :-)

And posting worthless suspicions.

>When someone builds an intelligent, conscious machine, the devices
>used will obviously be the correct devices.

Block's HLT is theoretically conscious according to your Turing test
criterion, yet requires no quantum effects.

>They could be transistors.  I'm not so arrogant as to think that
>I can't be wrong, but so far transistors haven't gotten there.

The "so far" argument seems to me the weakest one of all, being tautological.

>>  This suspicion of yours seems to fit someone's (Putnam's?) comment about
>>Penrose, that the brain is mysterious, and quantum gravity is mysterious, so
>>there must be a connection.  What is the justification for the suspicion?
>
>I kind of like the Putnam argument, tongue in cheek as it was.
>For a long time I used to look at books on the foundations of quantum
>mechanics, scanning the indices for references to Gödel's theorem,
>since I thought there might be a relation between Gödel and
>quantum phenomena.  A barber shaving Schrödinger's cat, perhaps.
>It was kind of neat when Penrose came out with a book with whole
>sections on these issues.

Von Däniken's books were kinda neat too.

>>Surely not Penrose's fallacious arguments in SoTM.
>
>I wish I had written Penrose's books :-)

I wish I owned Microsoft.
-- 
<J Q B>

