From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!uunet!psinntp!scylla!daryl Thu Feb 20 15:21:27 EST 1992
Article 3803 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Feb17.174042.15804@oracorp.com>
Organization: ORA Corporation
Date: Mon, 17 Feb 1992 17:40:42 GMT

Jeff Dalton writes: (in response to Michael Gemar)

>>And I'm still waiting to find out *why* a lookup table *doesn't*
>>have beliefs under a functionalist view (assuming that a lookup table
>>can reproduce "belief-behaviour", which was the original assumption
>>offered by Chalmers).

> And the answer is that functionalists don't have to think that any
> functional organization whatsoever that manages to produce the right
> behavior has to count as understanding.

> Moreover, they don't have to be able to say exactly what functional
> organization is right in order to be able to rule out extremes.
> There's nothing in this to rule out *only* table lookup.  The idea is:
> table lookup is wrong; not: only table lookup is wrong.

Even though Michael and I disagree about quite a lot, I agree with him
that no one has given a principled reason to believe that the table
lookup program is any less capable of beliefs than any other program.
I agree that table lookup is not the way human brains work, but so
what? Are functionalists interested in intelligence in general, or
only in human intelligence?
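To pin down the program at issue, here is a minimal sketch of a "table
lookup" conversationalist (the names and the toy table are mine, purely
illustrative): every possible conversation history is mapped directly to
a canned reply, with no intermediate state that obviously corresponds to
a belief or an inference.

```python
# A hypothetical sketch of the table-lookup program under discussion:
# the entire "mind" is one finite mapping from input histories to replies.

LOOKUP_TABLE = {
    ("Is the sky blue?",): "Yes.",
    ("Is the sky blue?", "Are you sure?"): "Quite sure.",
}

def table_lookup_reply(history):
    """Return the canned response stored for the exact conversation so far."""
    return LOOKUP_TABLE[tuple(history)]

print(table_lookup_reply(["Is the sky blue?"]))  # -> Yes.
```

The question in dispute is whether the entries of such a table can count
as beliefs in the functional sense, given that (by assumption) the table
reproduces the right behavior.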

>>>Now, if you can accept that there can be a relevant difference
>>>in functional organization, don't you think it's at least unlikely
>>>that simple table lookup would do the trick?
>>
>>Again, why not?  Until we have a clear unpacking of what a potential
>>relevant difference *is*, I have no reason to think that a table
>>lookup *couldn't* have "beliefs" in the functional sense.  

> There isn't anything in simple table lookup that corresponds in a
> reasonable way to beliefs.

I disagree! Why do you say this? I agree with Michael that it makes as
much sense to attribute beliefs to the states of the table lookup as
it does to attribute beliefs to the states of any other machine.

>> If someone could provide an account of the critical difference between
>> non-believing table lookup and some other believing system, then we
>> can discuss the criteria.  As it stands, all I see is empty
>> assertion...

> The critical difference would be that it works in a different way
> (just as a chess program that uses brute force search works in a
> different way from one that has clever heuristics for choosing moves
> and narrowing its focus). But if you think no possible way for a
> program to work could possibly produce beliefs, then none of this
> matters.
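Jeff's chess analogy can be made concrete. Here is a toy sketch (mine,
not a real engine, and the move scores are invented) of two programs
that pick the same move by working in different ways: one exhaustively
scores every legal move, the other first narrows its focus with a
heuristic and searches only the candidates that survive.

```python
# A hypothetical illustration of "same behavior, different functional
# organization": brute-force search versus heuristic narrowing.

def brute_force_move(moves, score):
    """Exhaustively evaluate every legal move and take the best one."""
    return max(moves, key=score)

def heuristic_move(moves, score, is_promising):
    """Filter to promising candidates first, then search only those."""
    candidates = [m for m in moves if is_promising(m)] or moves
    return max(candidates, key=score)

moves = ["e4", "d4", "a3", "h3"]
score = {"e4": 5, "d4": 4, "a3": 1, "h3": 1}.get
central = {"e4", "d4"}.__contains__

print(brute_force_move(moves, score))         # -> e4
print(heuristic_move(moves, score, central))  # -> e4
```

Both programs output the same move, so behavior alone does not
distinguish them; my question below is why that internal difference
should matter for *beliefs*.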

There is no denying that a table lookup program would probably not
work the way human brains do, but what is the principled reason for
saying that the difference is important for the question of having
beliefs?

Daryl McCullough
ORA Corp.
Ithaca, NY


