Newsgroups: comp.ai.alife,comp.ai.philosophy,comp.ai,alt.consciousness
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Thought Question: A kinder and gentler net??
Message-ID: <D3A6It.7nH@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <3geeno$ila@prime.mdata.fi> <3gh2i9$8l6@ixnews3.ix.netcom.com> <1995Jan29.223330.864@news.media.mit.edu>
Date: Tue, 31 Jan 1995 17:42:29 GMT
Lines: 67
Xref: glinda.oz.cs.cmu.edu comp.ai.alife:2089 comp.ai.philosophy:25103 comp.ai:26934

In article <1995Jan29.223330.864@news.media.mit.edu> minsky@media.mit.edu (Marvin Minsky) writes:

>| What you're *not* aware of is knowing any significant detail about
>| yourself, that is, any more than you know about other people.  (In
>| other words, I'm agreeing with the Gilbert Ryle stance.)  You have
>| virtually no ideas at all--and those that you have are probably
>| wrong--about how you get ideas, what ideas are, where the words come
>| from when you speak, how you move a finger, and all that sort of
>| thing.  We share the notion that we have something we call
>| consciousness that reveals to us a great deal about ourselves, about
>| our mind, about our feelings, and so forth--but considering that we
>| evidently do not have much such ability, one must conclude that
>| no such thing actually corresponds to that myth.
>
>What I meant is that none of us seem to have much idea of what happens
>in our minds to produce what our minds do.  Otherwise we wouldn't need
>the slow, tedious scientific investigations called psychology and
>cognitive science.  Introspection in particular, and consciousness in
>general, does not seem to reveal much about ourselves, as Ryle
>observed.
>
>Of course, as Jeff Dalton has observed, (1) we do have a certain
>amount of 'privileged access' to our own thoughts, but what I'm
>complaining about is that they are of very low quality, and scarcely
>better for practical matters than what we observe about our friends'
>thinking.  (That's what I meant by the Ryle stance.)  

I don't know if it's a *practical* matter, but in net discussions
people often have a lot of trouble understanding other people.
I suspect that they're significantly better at understanding
themselves.

Access to one's own thought seems to be useful in a number of other
cases as well, for instance when trying out (to oneself) several
different ways of saying something or when trying to work out a
plan for accomplishing some goal.

OTOH, some people have regarded introspection as infallible in
ways that it isn't and have sometimes been wrong about certain
aspects of experience (e.g. how the blind spot is dealt with).
I agree with Marvin Minsky that psychology and cognitive science
are often more effective.  Unfortunately, it's been over 15
years since I read Ryle, so I can't say much about his stance.

> Also I agree
>that Jeff Dalton's complaint is justified: that not everyone conflates
>'conscious' with 'self-conscious' the way I do.  However, I can't see
>how merely "being aware" without knowing that you're being aware could
>be much of a mystery.  Especially because you could never then assert
>that you are aware.  Therefore, that certainly can't be what we're
>talking about!

I would say that some animals are probably aware w/o being able to
say so and that we don't know all that much about what causes 
conscious awareness of any sort to occur.  If some animals have
visual experience pretty much like ours (for example), then
we're not much (if any) closer to explaining how they get such
subjective experiences than we are to explaining how we do.

However, some aspects of consciousness are not so difficult.
Programs can have access to some of their own processing, and
in many ways better access than we have to ours.  So if, say,
a program were trying out internally different versions of
a sentence, it might well have a better memory of the different
versions than we have.  (Will robots need post-it notes?)
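To make that concrete, here's a hypothetical little sketch (mine, not anything Minsky proposed; the class and its scoring are invented for illustration): a program that drafts alternative phrasings can trivially retain a verbatim record of every version it tried, which is just the sort of access to one's own processing that human introspection handles poorly.

```python
# Hypothetical sketch: a "drafting" program that tries out several
# versions of a sentence and, unlike a human composing in their head,
# keeps a perfect record of every alternative it considered.

class DraftingProgram:
    def __init__(self):
        self.history = []  # every candidate version, in the order tried

    def try_version(self, sentence, score):
        # Record the candidate along with a (stipulated) quality score.
        self.history.append((sentence, score))

    def best_version(self):
        # Return the highest-scoring candidate tried so far.
        return max(self.history, key=lambda pair: pair[1])[0]

    def recall_all(self):
        # Unlike a human author, the program can reproduce every
        # draft verbatim -- no post-it notes required.
        return [sentence for sentence, _ in self.history]

writer = DraftingProgram()
writer.try_version("Robots may need post-it notes.", 0.4)
writer.try_version("Will robots need post-it notes?", 0.9)
writer.try_version("Do robots require adhesive memoranda?", 0.2)

print(writer.best_version())     # -> Will robots need post-it notes?
print(len(writer.recall_all()))  # -> 3
```

Of course the scores here are stipulated rather than computed; the point is only that the record-keeping itself is effortless for the program.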

-- jd
