Newsgroups: comp.ai.philosophy
From: lupton@luptonpj.demon.co.uk (Peter Lupton)
Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.edu!news2.near.net!MathWorks.Com!europa.eng.gtefsd.com!howland.reston.ans.net!pipex!demon!luptonpj.demon.co.uk!lupton
Subject: Re: Is Common Sense Explicit or Implicit?
References: <35g8hl$pkt@newsbf01.news.aol.com> <pautler-170994124712@pautler.ils.nwu.edu>
Distribution: world
Organization: No Organisation
Reply-To: lupton@luptonpj.demon.co.uk
X-Newsreader: Newswin Alpha 0.4
Lines:  48
Date: Mon, 19 Sep 1994 11:31:16 +0000
Message-ID: <467904443wnr@luptonpj.demon.co.uk>
Sender: usenet@demon.co.uk

In article: <35g8hl$pkt@newsbf01.news.aol.com>  epfaith@aol.com (EPFaith) writes:

> >> Why is it so hard to explain how you
> >> see a real tiger, and yet so easy to explain how you use a mental
> >> representation to see a tiger?
> 
> >I haven't avoided discussing how we recognize real tigers; you've
> >been misreading me.  Recognizing real tigers involves mental
> >representations.
> 
> I did not say you avoided discussing this.  I did not misread you.  You
> misread me.  What I said is that you say that recognizing real tigers
> involves mental representations, and I questioned the point of this
> explanation.  So you see, I read you well.  I claimed that it does not
> solve the problem, because now you have a new problem.  And it's exactly
> the same sort of problem as the one you proposed to solve.  That is what I
> said.  Surely you can agree in general that if you use A to explain B, but
> A needs the same sort of explanation as B, then A is not much of an
> explanation.  All you said is "we use mental representations".  That's
> like saying "we use sketches".  That doesn't get us anywhere, because we
> have to match those sketches against things, and that requires a capacity
> to see what's on those sketches, and to match those sketches to what is
> out there.

Mental representation solves *some* problems and does not solve others.
Can we, at least, agree on that? For example, if I stumbled across
a dangerous animal I would, surely, prefer to plan my actions using
a mental representation of that animal rather than the animal itself.
Isn't manipulating mental representations just much *safer* than manipulating
the real things?

Again, one encounters the thing itself in a variety of quite different
sensory modes. Surely it would be better (for computational purposes, say)
for the interactions to be more uniform. Is this not something the brain
achieves by encoding everything in a computationally uniform manner?

Perhaps the problem you are alluding to is the problem of interpretation,
as laid out in Wittgenstein's Philosophical Investigations. I suggest
that the problem of comparison can be discharged through the processes
of simplification, and that what makes a representation a
representation is related to the role that structure plays in
simplification. Without the structure of simplification, there certainly
is a problem of interpretation: one that would, indeed, be fatal for a
theory of mental representation, in my view.

-------------------
Peter Lupton
