From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!psinntp!scylla!daryl Tue Feb 11 15:26:13 EST 1992
Article 3630 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Feb10.145934.8912@oracorp.com>
Organization: ORA Corporation
Date: Mon, 10 Feb 1992 14:59:34 GMT

David Chalmers writes:

> Of course I don't have a full account of the required functional
> organization [for beliefs], but it requires that the system possess
> internal states that don't only lead to the right behaviour, but also
> interact with each other in appropriate ways; e.g. a desire that P,
> and a belief that if Q then P, and a belief that Q is easily
> attainable and doesn't have other bad side-effects, should cause a
> desire that Q, other things being equal.

I'm not sure to what extent it affects your argument, but I don't
think that beliefs in this sense exist at all for humans--not as a
collection of statements, anyway. As argued in a recent book, _The
Improbable Machine_, by Jeremy Campbell, humans seem not to reason
from a set of beliefs using rules of inference, but instead to use
some kind of fuzzy model-based reasoning. When asked a question of the
form "Do you believe X is true?", humans (other than philosophers,
logicians, and mathematicians) tend not to look for more primitive
beliefs from which one could derive either "X" or "not X", but instead
try to construct some kind of model of "X" or of "not X" in their
heads, and then ask themselves (in effect, not literally) whether this
model could plausibly be the real world.

So, for instance, if asked if David Chalmers might really be some
famous psychopathic ax murderer, I don't really reason from some set
of beliefs from which I can prove that he is not. I try to imagine
David Chalmers being an ax murderer, and I find that I can't conjure
up a plausible image of such a thing. Because our powers of
concentration are limited, and because these "models" we create are
somewhat fuzzy in their details, we can come up with what seems to be
a model of something that is not actually possible.
Our reasoning is especially fuzzy on the subject of quantifiers. When
I say "Such and such always holds", I usually mean "Such and such
holds in all the cases I can think of". For this reason, we can
believe things that are logically inconsistent; we can believe that
"For all x, Q(x) implies P(x)", and believe "Q(a)" and yet not believe
"P(a)".

In my opinion, human belief is more or less a matter of extremely
sophisticated model building and plausibility checking.

Daryl McCullough
