Newsgroups: comp.ai.philosophy
From: lupton@luptonpj.demon.co.uk (Peter Lupton)
Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.edu!news2.near.net!MathWorks.Com!europa.eng.gtefsd.com!howland.reston.ans.net!news.sprintlink.net!demon!luptonpj.demon.co.uk!lupton
Subject: Re: Is Common Sense Explicit or Implicit?
References: <35vqp8$oqu@mp.cs.niu.edu> <35ak50$d3v@mp.cs.niu.edu> <227798242wnr@luptonpj.demon.co.uk> <35qk9k$6tb@mp.cs.niu.edu> <48020715wnr@luptonpj.demon.co.uk>
Distribution: world
Organization: No Organisation
Reply-To: lupton@luptonpj.demon.co.uk
X-Newsreader: Newswin Alpha 0.4
Lines:  236
Date: Sun, 25 Sep 1994 23:12:33 +0000
Message-ID: <477874535wnr@luptonpj.demon.co.uk>
Sender: usenet@demon.co.uk

Before replying to Neil's latest article, I would just like to say
how much I am enjoying the experience of agreeing with Neil! Not as
*exciting* as an all-out ding-dong but with its own rewards.  

In article: <35vqp8$oqu@mp.cs.niu.edu>  rickert@cs.niu.edu (Neil Rickert) writes:

> Yes, I agree with all of this.  Indeed, that was close to my point.
> That is, I was arguing against the position that language abilities
> are made out of whole cloth.  Some people tend to discuss cognition
> in such a way as to suggest that language is a completely new
> facility, and that if one understands that, one understands
> everything important about human cognition.  For example, the CYC
> project, which is where this debate began, is oriented toward an
> a propositional development, relatively independent of other
> abilities.  I am quite doubtful that such an approach can ever work.
> I agree with your comments that language abilities have grown from
> more basic abilities, and heavily depend upon those abilities.  Thus
> in order to understand human cognition one must understand
> prelinguistic cognitive abilities.

For very similar reasons, I suspect, I would also expect the CYC project
to fail to produce the sort of abilities humans take for granted.
I have not managed to get a clear picture of what CYC does and does
not have - but from what I do know it seems difficult to believe that
such a device really could make the judgements it would have to make
in order to exhibit common sense. I just don't see how the basic inputs
(textual reports) could be evaluated by such a machine. One wants to 
know how the CYC device could stop itself from degenerating into 
silliness - and I have not read an account that seemed to think this a
problem.

> While I am not attempting to pressure you into something you wish to
> avoid, I don't see how you can avoid considerations of subjectivity.

The debate I don't want to get into is the subjectiv-*ism*/objectiv-*ism*
debate. This would not rule out some discussion involving objectivity and
subjectivity - I would just wish to terminate the discussion as soon as
it became a matter of which "ism" has been adopted.  
 
> >> >Although animals do not have language
> >> >and propositions, they do learn and this involves trial and error. 

> >                          Even so, there is still a notion of error
> >lurking there which, I think, can be a matter of degree and gives rise
> >to the notion that the animal might undergo revision and correction, say.
> 
> I would prefer to say that utility and efficacy play a role in the
> evaluation of knowledge.  The way we think about "error" seems to be
> too constraining.  Error implies a judgement, and is a subjective
> term.

Certainly I agree that utility and efficacy play a role (a very
important role) in the evaluation of knowledge. I will also agree
that I tend to down-play the importance of utility in general
in preference to the specific utility of the projection and
detection of regularities. The reason for this bias is that I
have been arguing that use depends upon (and this, I think, is
an internal or conceptual dependency) the ability to project
regularities. So what seems like a special use actually turns
out to be universal.

Now. In the context of the detection and projection of sensory
data, I don't see why there isn't a clear notion of error which
is as objective as anything else we might class as objective.
This restricted notion of error, restricted to the detection and
projection of sensory data, could be applied to organisms as a whole
and also to the detailed machinations of neural structures, I 
would have thought.
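To make this concrete, here is a toy sketch (in Python, with an entirely
invented alternating-signal example) of what I mean by a restricted,
objective notion of error: a predictor projects the next sensory input
from a detected regularity, and "error" is simply a count of
mispredictions - no judgement required.

```python
# Toy sketch: error restricted to the detection and projection of
# sensory data. The predictor projects the simplest regularity (assume
# a period-2 pattern repeats); error is a plain, countable quantity.
# The signal and the anomalous sample (9) are invented for illustration.

def predict_next(history):
    """Project the next value assuming the period-2 pattern repeats."""
    if len(history) < 2:
        return history[-1] if history else 0
    return history[-2]

signal = [0, 1, 0, 1, 0, 1, 0, 9, 0, 1]  # one anomalous sample (9)
errors = 0
for t in range(2, len(signal)):
    if predict_next(signal[:t]) != signal[t]:
        errors += 1
print(errors)  # prints 2: the anomaly itself, and the projection made from it
```

The point of the sketch is only that nothing subjective enters: the
mismatch between projection and input is as objective as the inputs
themselves.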
 
I asked Neil for an example of the sort of knowledge that does not
involve revision, correction and error. This was Neil's example:
> 
> Take the example of a physical skill, say swimming, or putting in
> golf.  It seems to me that there are often periods in which, with
> practice, the skill improves.  Some of the improvement may be muscle
> tone, but some of it is coordination, which presumably involves
> abilities of the brain which should interest us.  I don't have any
> difficulty with describing this steady improvement as a process of
> revision.  But it seems to me that normal usage would not refer to it
> as error and correction.  If the golfer is holding his putter the
> wrong way, that may be an error and require correction.  But we tend
> to restrict the terms "error" and "correction" to gross changes,
> rather than to a process of steady revision and enhancement.

I agree with what you write, but draw a different conclusion. This 
is the sort of example I expected from Neil, and it does seem to
be an interesting case. Let me first say that I don't think that
a person learning *anything* necessarily has to make errors. We could
imagine every neuron making connections *just so* all the time - it is
just that such a possibility is wildly unlikely. The question is, 
really, whether the *possibility of error* (and correction) is what 
is necessary for us to call it learning. The golfer may well go 
through a phase of gradual incremental improvement. But, we can easily
(and just as plausibly) imagine the golfer failing to improve and,
indeed, regressing more or less badly. 

We started this discussion with me saying that there must be trial
and error - I have now revised that to the weaker assertion that
there must be the possibility of error, correction and revision -
and this may occur, quite validly, at the level of the organism
or at the level of the simplificational circuitry of the
organism.
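Here is a toy sketch (in Python; the target value and update rule are
invented for illustration) of the weaker assertion: what makes a process
"learning" is that its state can be in error, that the error is
measurable, and that revision and correction can occur - even if, in a
lucky run, no gross error ever does.

```python
# Toy sketch: learning as a process admitting error, correction and
# revision. The error is measured and the state revised at each step;
# the target (3.0) and the learning rate are arbitrary choices.

def learn(target, guess=0.0, rate=0.5, steps=20):
    history = []
    for _ in range(steps):
        error = target - guess       # error: a measurable discrepancy
        guess += rate * error        # correction/revision step
        history.append(abs(error))
    return guess, history

final, errs = learn(target=3.0)
print(round(final, 3))     # converges toward 3.0
print(errs[0] > errs[-1])  # error shrank: revision occurred
```

The moon, by contrast, admits no such loop: there is no state that
could be in error, so nothing to revise.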

I would not, for example, say that the moon has learned to rotate
about its axis at the same rate that it revolves around the earth.
It just happened - the moon did not *learn* anything, for there
was never any question of error, in this case.

> >Rocks don't have sensory data and they don't simplify. Apart from
> >an idiosyncratic use of these terms ("sensory data" and "simplify")
> >I don't see what you could mean.
> 
> If one takes an objective view, rocks do not have sensory data, and
> do not simplify.  But if one takes an objective view, humans equally
> do not have sensory data and do not simplify.  Instead, there are
> simple laws of physics involved in both cases.  

These last two statements strike me as far from being truisms. They
seem to be a substantive thesis of a kind that I don't recognize.
Surely we both accept that high-level structures can be just as
objective as lower level ones? If rocks are made of atoms (objectively)
then I don't see why they cannot also consist of crystalline fragments
(objectively). For brains, I don't see why we cannot say that there
are neurons (objectively) and higher level structures (objectively)
which simplify sensory data. That is, I understand that such objective
accounts are all promissory but I don't understand that such objective 
accounts are excluded before we start. That is what I don't understand.

Now I have said I don't want to open up the objectivism/subjectivism
debate and, so it would seem, I am dancing perilously close to the
edge. The way I would like to proceed would be to explore the conditional:

   "If it makes sense to talk objectively about rocks and humans,
    then, given that, where does such objective talk finish? How
    far can we go without calling upon entirely new principles
    which would count as subjectivity?"

That is the sort of question that seems worth exploring and, I think,
does not necessarily open up the question of objectivism/subjectivism.

> In order to talk
> about humans having sensory data, we must be subjective.  We normally
> refuse to consider the possibility that the rock have subjectivity,
> so we do not grant it sensory data.

Not with you here. In order to talk about some-one having sensory data
it seems that we are using terms which *involve* subjectivity. This
is not to say that we must *be* subjective in any way we would not
be if we were discussing atoms, say. Why can't we just note that
"here be subjectivity" and carry on regardless?

The last sentence strikes me as odd because we really want to know
on what grounds we do not grant the rock sensory data. It is not,
as it were, an act of capriciousness in denying rocks subjectivity. 

> As far as I can tell, humans and rocks both react to energy inputs on
> the basis of the laws of physics.  The most obvious difference that I
> can see is that the human is enormously more complex than the rock.

The rock can be, I suspect, exhaustively described using physics and
chemistry, say. This is plainly not true for humans. We must go
on in our descriptions to include cells, neurons, and vast 
interconnected systems of neurons. There is, for example, very good
evidence that the hippocampus is a large associative memory, connecting
every part of the cerebral cortex with read/write connections. I don't
see that this is subjective, and I don't see any such structure in
rocks. 
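To illustrate that "associative memory" names an objective, describable
structure rather than anything subjective, here is a minimal
Hopfield-style associative memory in Python. This is a toy sketch of
the general idea, not a model of the hippocampus; the patterns are
invented.

```python
# Minimal Hopfield-style associative memory: patterns are stored by a
# Hebbian outer-product rule and recalled from a corrupted cue. Nothing
# here is subjective - it is just weights and thresholds.

def train(patterns):
    """Hebbian rule: w[i][j] = sum over patterns of p[i]*p[j], no self-weights."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, steps=5):
    """Repeatedly threshold the weighted sums until the state settles."""
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

stored = [1, -1, 1, -1, 1, -1]
w = train([stored])
noisy = [1, -1, 1, -1, -1, -1]   # one element flipped
print(recall(w, noisy))           # prints [1, -1, 1, -1, 1, -1]: pattern recovered
```

The recovery of the stored pattern from a degraded cue is exactly the
sort of high-level (yet perfectly objective) behaviour that rocks show
no sign of.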

> If one wants to use supposedly objective language, with words such as
> simplify, I think it is a mistake to insist on applying that language
> only to objects to which we grant subjectivity.  If we are to ever
> understand the principles of human cognition, we must be prepared to
> analyze the human in the same objective manner that we would analyze
> the rock.

(I don't know whether what follows is what Neil is saying, but it 
sounded a bit like it, so I thought I would give Neil the opportunity 
to accept or reject what follows.)

I agree with this, but that does not mean that we must stop analysing
humans once we have identified the sort of structures rocks have.
That is, just because rocks stop at chemistry, say, that is no
reason for us to stop at chemistry for humans.

> Now I certainly grant that rocks are not humans, and do not display
> any significant amount of intelligence.  But we cannot distinguish
> between rocks and humans by arbitrarily declaring that rocks do not
> have sensory data to simplify, while animals do.

I don't think we do that, at all. I think we try to make sense
of rocks and humans as best we may and, so far as we can tell,
rocks don't have sensory data and don't simplify. This is not
arbitrary - we didn't toss a coin - I believe one uses the same 
sort of techniques and judgements that one uses in making other 
judgements for which we claim objectivity.

> >It is this ability to *manipulate, combine, modify* in a rational
> >way which is so characteristic of humans. It is a quite unique
> >ability and permits humans to *invest* in these abilities independently
> >of actual use. (There will, of course, be instances of use, but the
> >investment in simplification - reasoning, thinking - can be quite
> >distinct).
> 
> We really do not know that other animals are incapable of doing the
> same.  It seems evident from their behavior that they cannot do this
> with nearly the proficiency and flexibility of humans.  But we know
> too little about animal cognition to rule out these abilities
> altogether.

Point taken. It is an ability which can occur in any degree. However, 
the discussion was about the *value* of such abilities and the 
purpose of the discussion seems to have gone missing. Now it does 
seem that these abilities can easily be shown to be of great value -
taking a child along to climb the tree; going out of one's way to
pick up a large leaf when gathering berries; ditto for rocks and 
sticks when hunting... the ability to do such things in a flexible 
way (not evolved behaviour!) depends upon the ability to plan and 
to call upon a wide variety of possibly relevant knowledge which
involves a degree of abstraction, a facility in symbolic manipulation.
When I think of a cat on a mat, I don't need to (although I might)
give the mat a position or the cat a position on the mat: nothing
so specific need be involved at all. In contrast, a perception
of a cat on a mat will fill all these specifics one way or the other.
Being able to instrument our simplificational abilities opens up
the possibility that the pattern of activation might be incomplete
in a way that would not occur otherwise. What is lost in specificity
to the situation is gained in specificity to the nature of the 
problem - we can call this, if you will, a difference in context, 
but we must also acknowledge that this is not just one context
among many but the ability to generate a whole new class of contexts,
contexts specialised to the problem in contrast to the circumstance
in which the problem arose.

-------------------
Peter Lupton
