From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Thu Feb 20 15:20:37 EST 1992
Article 3716 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Feb13.234630.1092@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <6171@skye.ed.ac.uk> <1992Feb13.014116.9941@psych.toronto.edu> <6182@skye.ed.ac.uk>
Date: Thu, 13 Feb 1992 23:46:30 GMT

In article <6182@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <1992Feb13.014116.9941@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:

>>*Why* do you think that beliefs involve some sort of "special"
>>functional organization?  Is it *only* to rule out lookup
>>tables having them?  If so, this is simply ad hoc...
>
>I think it prejudices the issue to use words like "special".
>If the same behavior (more or less -- we're talking about
>machines that are "like humans", not machines that are like
>one particular person) can be brought about in different ways,
>it can make sense to ask what particular means are used.
>
>What you seem to be saying now is that you don't see how any
>*functional* difference can be relevant.  So instead of asking
>*why* I thought functional organization mattered, you should
>have asked why I thought *functional organization* mattered.
>
>Is that more or less right?  That's what you're asking?

I think that is it, essentially.

>And the answer is that functionalists don't have to think that
>any functional organization whatsoever that manages to produce
>the right behavior has to count as understanding.

OK so far.

>
>Moreover, they don't have to be able to say exactly what 
>functional organization is right in order to be able to
>rule out extremes.  There's nothing in this to rule out
>*only* table lookup.  The idea is: table lookup is wrong;
>not: only table lookup is wrong.

But *why* is table lookup an "extreme"?  This is the problem I have.
There seems to be some sort of implicit assumption about what kind
of functional organization is required to generate belief.  What I
am trying to uncover is what that assumption is.  If there *isn't* one
any deeper than "well, it's just *obvious* that table lookup can't
generate beliefs," then you are merely ruling it out ad hoc.

I recognize that functionalism may not have an exact answer as to what
beliefs are.  But it certainly must have some idea of what they *aren't*.
What I want to know is what that idea is...
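
(To pin down what I mean by "simple table lookup" throughout: something
like the toy sketch below, in which every possible input history is
paired off with a canned output, and nothing is ever computed or
derived.  The table contents here are made up, obviously; a table
covering human-like conversation would be astronomically large, but
nothing in the argument turns on its size.

    # Simple table lookup: the behavior is exhausted by the table.
    TABLE = {
        ("Hello.",): "Hi there.",
        ("Hello.", "Is snow white?"): "Yes, snow is white.",
        ("Hello.", "Is snow green?"): "No, snow is white.",
    }

    def respond(history):
        """Return the canned response for this exact input history."""
        return TABLE.get(tuple(history), "I don't understand.")

    print(respond(["Hello.", "Is snow white?"]))  # -> "Yes, snow is white."

That, and nothing more, is the architecture I keep asking about.)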
 
>>>Now, if you can accept that there can be a relevant difference
>>>in functional organization, don't you think it's at least unlikely
>>>that simple table lookup would do the trick?
>>
>>Again, why not?  Until we have a clear unpacking of what a potential
>>relevant difference *is*, I have no reason to think that a table
>>lookup *couldn't* have "beliefs" in the functional sense.  
>
>There isn't anything in simple table lookup that corresponds
>in a reasonable way to beliefs.  

The same could be said for other architectures, depending on how you
define "reasonable" (where is a belief in a neural net?).  In what
way do other architectures have structures that correspond to beliefs more
"reasonably"? 

>There are a couple of positions you might have.  One is "no machine
>can have beliefs".  And if so, then no difference in the functional
>organization of the machine will help.  If this is the case, there
>isn't much more to say, unless we want to rerun the entire "can
>machines understand" debate.

I am not ruling out the possibility of machine belief a priori.  I am
quite happy to consider it.  All I am asking is why you think that *some*
machines have them and some identically-behaving machines don't. 

>Another is to say "well, there might be something that corresponds
>to beliefs".  If you think so, can you tell me what it is, or might
>be?  Then at least we'd know what we're disagreeing about.

Well, I don't intend to be coy, but I'd like to keep this discussion
about the *functional* definition of beliefs, since my interest is in
determining whether such a definition is a coherent concept.
 
>>To lay out my cards more openly, I agree that simple table    
>>lookup *doesn't* have beliefs.  This seems to be agreed upon by
>>most of the functionalists on the net as well.  However, *I* see
>>no *important*, *principled* difference between table lookup
>>and other functionalist approaches.  Ergo, I see no reason to
>>think that *other* approaches could generate beliefs. 
>
>This seems pretty near to "machines can't have beliefs", as above.
>That is, you think table lookup doesn't have beliefs, and neither
>does any other way of organizing a machine/program.

The above paragraph was merely laying out my reasoning so far.  As I
indicate above, I would be happy to consider any distinction you could
provide WRT table lookup and "real belief" architectures.  If you could
provide me with a clear distinction, then I would re-evaluate my
thoughts on machine belief.  But unless someone can say where the above
line of reasoning goes wrong, I have to conclude it's right.

>>If someone
>>could provide an account of the critical difference between non-believing
>>table lookup and some other believing system, then we can discuss the
>>criteria.  As it stands, all I see is empty assertion...
>
>The critical difference would be that it works in a different
>way (just as a Chess program that uses brute force search works
>in a different way from one that has clever heuristics for
>choosing moves and narrowing its focus).  But if you think no
>possible way for a program to work could possibly produce
>beliefs, then none of this matters.


Again, if you can give a plausible explanation as to *why* a certain
architecture causes real beliefs and another doesn't, I'd be happy to
listen.  But saying that table lookup "works in a different way" merely
re-phrases your original claim.  It adds no new information.  *What* is
the principled difference in the way that it works?  And *why* would
this difference matter for belief generation?
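
(So that we're arguing about the same thing, here is the minimal case I
have in mind: two systems with identical input/output behavior, one a
pure table, one doing a derivation over stored premises.  Everything
here -- the questions, facts, and parsing -- is made up for
illustration:

    # System A: pure lookup -- every answer compiled in ahead of time.
    ANSWERS = {
        "Is Socrates a man?":  "yes",
        "Is Socrates mortal?": "yes",
    }

    def system_a(question):
        return ANSWERS[question]

    # System B: stores general premises; derives each answer on demand.
    FACTS = {"Socrates is a man"}
    RULES = [("X is a man", "X is mortal")]  # i.e. "all men are mortal"

    def system_b(question):
        # Crude parse: "Is Socrates mortal?" -> "Socrates is mortal".
        words = question[:-1].split()             # drop "?", tokenize
        claim = words[1] + " is " + " ".join(words[2:])
        if claim in FACTS:
            return "yes"
        for premise, conclusion in RULES:
            suffix = premise[1:]                  # pattern minus the "X"
            for fact in FACTS:
                if fact.endswith(suffix):         # fact fits the premise
                    x = fact[:-len(suffix)]
                    if conclusion.replace("X", x) == claim:
                        return "yes"
        return "unknown"

    # Identical behavior, different functional organization:
    for q in ANSWERS:
        assert system_a(q) == system_b(q)

If the claim is that B's derivation step is what makes the difference
for belief, fine -- but then the principle I'm after is *why* performing
that step matters, given that it changes nothing in the behavior.)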

- michael
 


