From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!spool.mu.edu!sol.ctr.columbia.edu!bronze!chalmers Tue Mar 24 09:55:09 EST 1992
Article 4413 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!spool.mu.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <1992Mar11.215901.5592@bronze.ucs.indiana.edu>
Date: 11 Mar 92 21:59:01 GMT
References: <1992Mar5.141610.20612@oracorp.com>
Organization: Indiana University
Lines: 56

In article <1992Mar5.141610.20612@oracorp.com> daryl@oracorp.com writes:

>Suppose that there are two situations a human being can find him or
>herself in: A and B. They are different situations, but, by an amazing
>coincidence (1) all sensory clues to the human are identical for A and
>B, (2) all "sensible" behavior on the part of the human in situations
>A and B are identical. (By (1), I mean more than simply that the
>immediate sensory clues are the same. The immediate sensory clues for
>(a) it being night-time, and (b) being in a dark, sealed room are the
>same, but there are very different sensory clues in the past and
>future. By saying that the sensory clues for situation A and B are
>identical, I mean that all sensory clues leading up to A are
>identical with those for B, and also that all sensory clues following
>A are identical with those for B.) If there could possibly be
>situations A and B that are *exactly* isomorphic in this sense, then
>would it make sense to say that a human "understands" that he or she
>is in situation A? Would it make sense to say that the human
>"understands" what to do?

This is more or less exactly what's going on in Putnam's well-known
"Twin Earth" thought experiments.  Two people are in identical
environments, but one of them is surrounded by water (i.e. H2O),
and the other is surrounded by the superficially indistinguishable
liquid twater (with different chemical makeup XYZ).  The two people
might even be in identical brain states.  But when they both think
the thought that they express by "gloop is wet" (where "gloop"
happens to be the word that both their languages use for the
respective liquids", the first is thinking that water is wet,
while the second is thinking that twater is wet.

Obviously, it requires a reasonable amount of adhockery to
produce this kind of situation.  But if anyone could produce a
situation in which a computer program was interpretable, from
its internal processes, in two different ways, that wouldn't
show any great gulf between computers and humans, because humans
have just the same problems.
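The dual-interpretability point can be sketched in code.  This is a
purely illustrative toy (the class, the `wide_content` helper, and the
environment tables are all hypothetical): two agents run with
bit-identical internal states, yet what their shared symbol "gloop"
refers to is fixed by the environment each is embedded in, not by
anything inside the agent.

```python
class Agent:
    """An agent whose internal state is just a list of belief strings."""
    def __init__(self):
        # Both agents carry exactly the same internal ("narrow") content.
        self.beliefs = ["gloop is wet"]

# The environment, not the agent, determines what "gloop" picks out.
earth = {"gloop": "H2O"}       # water
twin_earth = {"gloop": "XYZ"}  # twater

def wide_content(agent, environment):
    # "Wide" content: beliefs with symbols resolved via the external
    # causal environment the agent happens to inhabit.
    return [b.replace("gloop", environment["gloop"]) for b in agent.beliefs]

a, b = Agent(), Agent()
assert a.beliefs == b.beliefs                                  # narrow content: identical
assert wide_content(a, earth) != wide_content(b, twin_earth)   # wide content: differs
```

Nothing in the agents' internal processes distinguishes them; only the
environment-relative resolution step does, which is the sense in which
reference is "pinned down" outside the system.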

>In my opinion, the problem of pinning down reference is real, and it
>*doesn't* get solved, either by humans or by programs; the best we can
>ever hope for is some kind of "understanding modulo isomorphism".

I agree with this.  Or more accurately, reference gets pinned down,
but not by processes internal to the human; the work is done by
causal chains between the human and parts of its environment.  The
same, presumably, goes for computers.  The real moral, I take it,
is that reference isn't all that important for understanding
cognition.  What counts is the kind of meaning that is determined
internally (your "understanding modulo isomorphism").  A cottage
industry has recently sprung up among philosophers, trying to make
sense of a notion like this, generally under the rubric "narrow
content" (as opposed to "wide content", which is reference).

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
