From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!jvnc.net!nuscc!jit!smoliar Tue Apr  7 23:23:11 EDT 1992
Article 4815 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!jvnc.net!nuscc!jit!smoliar
From: smoliar@jit.iss.nus.sg (stephen smoliar)
Subject: Re: A rock implements every FSA
Message-ID: <1992Mar30.090841.15015@nuscc.nus.sg>
Sender: usenet@nuscc.nus.sg
Reply-To: smoliar@iss.nus.sg (stephen smoliar)
Organization: Institute of Systems Science, NUS, Singapore
References: <1992Mar27.145107.12415@oracorp.com>
Date: Mon, 30 Mar 1992 09:08:41 GMT

In article <1992Mar27.145107.12415@oracorp.com> daryl@oracorp.com (Daryl
McCullough) writes:
> It is common to think of the brain as
>composed of conscious and unconscious parts. When sensory information
>(such as visual signals) enter the brain, there is a lot of
>unconscious processing to get the information in a shape that our
>conscious mind can use. Then we do our conscious thinking about that
>information, and decide on a response. Then, once again there is
>unconscious processing to translate that response into the particular
>output signals to our muscles to make our bodies do the right thing.
>However, as this example shows, if you claim that all the pre- and
>post-processing of information is unconscious, then that might not
>leave anything at all for the *conscious* mind to do. Maybe the split
>between conscious and unconscious is less clear than one might at
>first think.
>
I think this has a lot to do with my own efforts to develop a better connection
between what Minsky had to say in THE SOCIETY OF MIND and what Tinbergen had to
say in THE STUDY OF INSTINCT.  It does not seem particularly far-fetched to
view the various forms of behavior cataloged in Tinbergen's book as all being
representative examples of unconscious processing.  (Indeed, one may wish to
question whether any of the animals he examines could be said to have the sort
of consciousness we attribute to ourselves.)  It seems to me that one of the
things THE SOCIETY OF MIND is trying to explore is the idea that what we call
"consciousness" may ultimately just be a matter of trying to coordinate more
and more of the sorts of "instinct mechanisms" (agents?) which Tinbergen
explored.  If this is the case, then there IS no clear split;  there is
simply a continuum in which more and more control decisions have to be
exercised (and may ultimately be exercised by new classes of "instinct
mechanisms," invoking the same sort of "bootstrapping" which Edelman pushes
in his own account of consciousness in THE REMEMBERED PRESENT).

Does any of this have anything to do with either functionalism or behaviorism?
We might begin by turning to Putnam's position in REPRESENTATION AND REALITY:

	Many years ago, I published a series of papers in which I
	proposed a model of the mind which became widely known under
	the name "functionalism".  According to this model, psychological
	states ("believing that P", "desiring that P", "considering
	whether P", etc.) are simply "computational states" of the
	brain.  The proper way to think of the brain is as a digital
	computer.  Our psychology is to be described as the software
	of this computer--its "functional organization".

	According to the version of functionalism that I originally
	proposed, mental states can be defined in terms of Turing
	machine states and loadings of the memory (the paper tape
	of the Turing machine).  I later rejected this account . . .
	on the ground that such a literal Turing machine-ism would
	not give a perspicuous representation of the psychology of
	human beings and animals.  That argument was only an argument
	against one particular type of computational model, but the
	arguments of the preceding chapters constitute a more general
	reason why computational models of the brain/mind will not
	suffice for cognitive psychology.  We cannot individuate concepts
	and beliefs without reference to the ENVIRONMENT.  Meanings
	aren't "in the head".
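
The machine-state functionalism Putnam describes can be made concrete with a
toy sketch (my own illustration, not Putnam's formalism; the state names and
stimuli below are invented).  The point is that on this view a "mental state"
is identified entirely by its row in a transition table -- its functional role:

```python
# Illustrative sketch of machine-state functionalism (invented example).
# A "psychological state" is just a state in a transition table; what the
# state IS is exhausted by how it maps stimuli to successor states and outputs.

TRANSITIONS = {
    # (current state, stimulus) -> (next state, output)
    ("believing-P", "evidence-against-P"): ("considering-P", "re-examine"),
    ("considering-P", "evidence-for-P"):   ("believing-P", "affirm"),
    ("considering-P", "evidence-against-P"): ("doubting-P", "withhold"),
}

def step(state, stimulus):
    """One transition of the machine; unrecognized stimuli leave the
    state unchanged and produce no output."""
    return TRANSITIONS.get((state, stimulus), (state, None))

state = "believing-P"
state, output = step(state, "evidence-against-P")
# state is now "considering-P", output is "re-examine"
```

Putnam's objection, of course, is precisely that nothing in such a table
refers to anything outside the machine, which is what the next paragraph
takes up.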

Unfortunately, this argument seems to rest on the assumption that you cannot
have machines which reference their environment;  yet Minsky's approach
seems to be based on the assumption that Tinbergen's "instinct mechanisms" are,
indeed, such machines.  Thus, the Tinbergen/Minsky position ultimately entails
an expansion of the scope of functionalism beyond the abstract Turing model
which Putnam chose as his foundation.  THIS form of functionalism probably
CAN be reduced to behaviorism, since ultimately it rests on a view that what
instinct mechanisms are is what they do.  It also emphasizes Minsky's position
that we should be expanding our scope of what machines ARE, rather than
dismissing machines under the theological assumption that we cannot be
them.
-- 
Stephen W. Smoliar; Institute of Systems Science
National University of Singapore; Heng Mui Keng Terrace
Kent Ridge, SINGAPORE 0511
Internet:  smoliar@iss.nus.sg
