Newsgroups: rec.arts.books,comp.ai,comp.ai.philosophy,sci.cognitive,sci.psychology.theory
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!travelers.mail.cornell.edu!news.kei.com!news.mathworks.com!tank.news.pipex.net!pipex!howland.reston.ans.net!ix.netcom.com!netcom.com!shankar
From: shankar@netcom.com (Shankar Ramakrishnan)
Subject: Re: Does AI make philosophy obsolete?
Message-ID: <shankarDFu2FM.8JD@netcom.com>
Reply-To: shankar@vlibs.com
Organization: VLSI Libraries Incorporated
References: <44efmb$jdm@scotsman.ed.ac.uk> <DFnG0u.1Gu@research.att.com> <44h0ga$dqh@scotsman.ed.ac.uk>
Date: Mon, 2 Oct 1995 18:09:21 GMT
Lines: 85
Sender: vlsi_lib@netcom7.netcom.com
Xref: glinda.oz.cs.cmu.edu comp.ai:33796 comp.ai.philosophy:33270 sci.cognitive:9809 sci.psychology.theory:891

In article <44h0ga$dqh@scotsman.ed.ac.uk> cam@castle.ed.ac.uk (Chris Malcolm) writes:
>[I have added comp.ai.philosophy to the newsgroups]
>
>In article <DFnG0u.1Gu@research.att.com> rhh@research.att.com (Ron Hardin <9289-11216> 0112110) writes:
>>Chris Malcolm writes:
>
>>>However, computer science showed us the extraordinary utility of
>>>recursion which was not infinite, but which bottomed out in a terminal
>>>case, and computer vision showed how this could be applied to computer
>>>image processing.  Thus by experimental demonstration Hume's problem
>>>was shown to be a misconception: what Hume thought was a damning
>>>property of a proposed explanation of vision -- recursion -- turned
>>>out to be the key to how it could be made to work.
>
>>I don't follow this argument.  Who's watching the screen whatever
>>depth it terminates at?  (Whatever provoked the question still
>>provokes it.)

This question has often been asked in the literature. A common version is
whether "grandmother cells" exist, and if they do, whom they report to.
My answer is that the question is ill-framed. Grandmother cells are not
theoretically impossible, but their existence would not mean that they
report to nobody (or to some hypothetical "soul" or whatever). When a
person is asked to identify a picture of his grandmother, words come out
of his mouth to that effect. So the output of a "grandmother" cell is fed
into other areas of the brain, ultimately causing his speech areas to
respond. In other words, there is no "dead end": a nerve cell that has no
outputs is as good as dead.
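To make the point concrete, here is a toy sketch (entirely my own
illustration, not a model of real cortex -- the function names and the
trivial matching rule are invented): the "grandmother" unit is just
another node whose output feeds further processing, here a trivial
"speech" stage, so nothing watches it from a dead end.

```python
def grandmother_cell(image_features):
    """Fires (returns 1.0) when the input features match 'grandmother'."""
    return 1.0 if image_features.get("grandmother") else 0.0

def speech_area(activation):
    """A downstream consumer of the cell's output: it produces words,
    so the cell reports to further machinery, not to an observer."""
    return "That's my grandmother!" if activation > 0.5 else "I don't know."

# The cell's output is consumed, not watched:
features = {"grandmother": True}
print(speech_area(grandmother_cell(features)))  # -> That's my grandmother!
```

The only point of the sketch is structural: every unit's output is an
input to something else, so the regress of "who watches?" never arises.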
>
>A good question. The answer is that at each stage in the
>recursion of homunculi, the homunculus required to watch the screen is
>a simpler creature than his predecessor. When, after however many
>recursions, we reach the final homunculus, it turns out that he is so
>simple that he isn't a homunculus at all, he is a trivially simple and
>easily comprehensible device.

A good idea, but this is not the way human vision works. The visual
cortex is organized into several areas such as V1, V2, etc., but the
organization is not strictly hierarchical. There are also feedback paths
from the higher areas to the lower ones. It is presumed that in dreams
these feedback paths play an important role in recreating vivid scenes
(though this is neither their only nor their most important role).
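A minimal sketch of that architecture (the unit counts and random
weights are invented for illustration; this is not a model of actual
cortical connectivity): alongside the feedforward V1-to-V2 path there is
a feedback path, and in a "dream" mode the feedback alone recreates V1
activity with no external stimulus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical connection weights: feedforward V1 -> V2 and
# feedback V2 -> V1, so the hierarchy runs in both directions.
W_ff = rng.standard_normal((4, 8))   # V1 (8 units) -> V2 (4 units)
W_fb = rng.standard_normal((8, 4))   # V2 (4 units) -> V1 (8 units)

def perceive(stimulus):
    """Waking mode: an external stimulus drives V1, which drives V2."""
    v1 = np.tanh(stimulus)
    v2 = np.tanh(W_ff @ v1)
    return v1, v2

def dream(v2_activity):
    """Dream mode: feedback from V2 recreates V1 activity
    with no external stimulus at all."""
    return np.tanh(W_fb @ v2_activity)

_, v2 = perceive(rng.standard_normal(8))
dreamed_v1 = dream(v2)       # 'seen' scenery generated top-down
```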
>
>
>I do realise, however, that this has not answered one important aspect
>of the problem: the qualitative feel of sensory experience, awareness,
>experience, consciousness. Whether this mysterious ingredient of the
>mind pudding comes along superveniently with the right behaviour
>implemented by the right kind of machinery, whether further specific
>kinds of elaborations of virtual machinery are required, or whether
>some new and as yet unknown something is required is currently a very
>hot debate, with a relevant book being published every few months.
>Another interesting question is whether consciousness is
>epiphenomenal, an ingredient in the human implementation of mind due
>to an evolutionary accident, but not (considering all possible types
>of mind) an essential ingredient in producing cognitive behaviour of
>any degree of sophistication. Finally, there are some who suggest that
>the very scale and diversity of the debate should be taken as a strong
>hint that we are all barking up the wrong tree: consciousness,
>whatever it is, is very far from what any of us currently suppose it
>to be.

Could very well be. I also have a slightly different question: instead
of asking whether consciousness (and qualia as such) is an inevitable
by-product of human evolution, can we ask whether the *illusion* of
consciousness is a by-product of human evolution? Can people claim to be
perplexed by consciousness when there is none? I can imagine a gedanken
experiment that could answer this question. Given a sufficiently powerful
computer, we should be able to simulate an evolving society of living
beings as a purely deterministic program. When that society finally
begets philosophers who ponder the mystery of consciousness and qualia,
we know that the whole thing is hogwash: we can explain formally what led
them to ask such questions, without having to bring in their illusory
consciousness. Are we in exactly the same position? It is quite possible
that the others around us (whose consciousness we take for granted)
are not actually conscious, but nevertheless behave not only as though
they are conscious (whatever that means), but also as though they are
perplexed by their own consciousness. Explaining the consciousness of
the self is trickier; it is like dividing zero by itself, for if we are
not conscious, there is no agency left to *disprove* the claim that we
are conscious. It is a no-win situation.
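The gedanken experiment can be caricatured in a few lines (the update
rule and the "philosopher" threshold are of course invented; a real
simulation would be vastly more complex, but the logical point is the
same): every utterance of puzzlement is fully accounted for by the
deterministic update rule, with no conscious ingredient anywhere.

```python
# A purely deterministic 'society': the state evolves by a fixed rule,
# and past a certain stage its agents emit puzzled utterances about
# consciousness.  The utterance is explained entirely by the rule.

def step(state):
    """Deterministic update rule: the society slowly grows reflective."""
    return state + 1

def utterance(state):
    """What the agents say at a given stage of the simulation."""
    if state >= 10:
        return "Why do I have qualia?"   # the 'philosopher' stage
    return "..."

state = 0
for t in range(12):
    state = step(state)
print(utterance(state))  # -> Why do I have qualia?
```

Since the trace of such a program explains the question "Why do I have
qualia?" without positing any qualia, the question alone proves nothing.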

Shankar
