Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!news.mathworks.com!news.alpha.net!uwm.edu!math.ohio-state.edu!howland.reston.ans.net!ix.netcom.com!netcom.com!departed
From: departed@netcom.com (just passing through)
Subject: Re: What makes up consciousness?
Message-ID: <departedD3zp0E.Es9@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <departedD3vKy5.M3B@netcom.com> <792697161snz@chatham.demon.co.uk>
Date: Tue, 14 Feb 1995 12:21:49 GMT
Lines: 168
Sender: departed@netcom12.netcom.com

In article <792697161snz@chatham.demon.co.uk>,
Oliver Sparrow <ohgs@chatham.demon.co.uk> wrote:
>One makes a mistake if one assumes that all of the things that one can label 
>in one's experience are attributably made of other things, as an Airfix 
>fighter is made up of little bits of plastic. There is no reason to suggest 
>that consciousness is somehow dissectable into the "molecules of awareness" of 
>which it is made up. We can say how the cogs and weights in a clock make the 
>hands go round; and we can talk of gravity and pendula, spacetime continua and 
>entropy. None of these tell us about "telling the time" or the reasons why 
>clocks get assembled. 

Mmm, this is nice.  However, it seems to me that in your clock example
you're jumping levels from the mechanical nature of the clock to the
subjective uses of it, and arguing that the levels don't connect.  True,
they don't connect for a clock, but the subjective side and the nature of
consciousness are going to be tied together more closely.  Why not have
little particles of, um, subjectivity?
(What those might look like is another question!)
But I agree with you that it's all too easy to jump into reductionism and
claim that, because you have a grasp on some functionality of some part
of consciousness, you actually have a claim to 'explaining' what's going
on -- only to be disappointed later (as with chess-playing computers)
when you've mimicked that layer of functionality and find that what you
have is not interestingly conscious at all.
[more exploration of 'understanding a clock is not understanding timekeeping'
 deleted ...]

>Can we have a Grand Unified Theory of Awareness? One would have to think of 
>what would it be made and how might these components be unified. A crude 
>approach - that, like Frankenstein, one stitches a chunk of this activity to a 
>lump of that - might well give rise to a Thing reporting all of the features 
>which you report when we think about being aware. Would such a recipe be a 
>GUT of A? I think not. 

I have to agree with you there, although the problems you run into may shed
some light.

>To get down in the GUTA, one would need to grapple with a dynamic which 
>described how self-organising patterns of information came to be evolving from 
>data streams on a substrate which had predispositions and attractors (yes, Dr 
>Rickert, we know, we know). In such a description, one would be looking for 
>two things, I think:

Hmm, you're placing a heavy emphasis on change here ("dynamic",
"self-organising", "evolving").  I agree: we tend to reify consciousness
into a sort of thing, and I think a better picture might be a sort of
standing wave or self-maintaining process.
One of the obvious features of consciousness is indeed that it won't
stand still without tremendous effort and attention.
Anyhow, I like your description.

>1: As self-organising filters and detectors came to be in this system, so the 
>   interplay of these elements would give rise to ever-more complex processes 
>   of data discrimination and prediction. If we followed what happened very 
>   closely, we could probably track all such nodes in the evolving tapestry of 
>   interaction. Photons go in, (Red Square AND Blue Spot) detection comes out.
>
>2: At some stage, the rope of nodes that arises from one area - say, vision - 
>   would begin to interact with chains of structures which had arisen from 
>   another - say, from taste.(*note, below) The complex pattern of output from 
>   the hierarchy that we will calll Sight might well be represented in an N 
>   space with complete fidelity: a bundle of vectors. Equally, the M space of 
>   the outcome of the Taste hierarchy would be describable. Suppose that 
>   Taste's output begins to modify Vision's learned hierarchy - altering its 
>   "rope of nodes" and, vice versa, Vision does the job on Taste's integrity. 
>   This is what we call associative learning; and what comes out of it is an 
>   association between two independent streams of DP. A New Thing has emerged: 
>   the organismish thingoid now can salivate when it sees a Choc Biscuit. It 
>   could apply for a job in the civil service.

I like your #2 here; my own definition would involve remapping of information
into a differently behaved information space.  Such a definition _could_
include #1, but #1 is obviously much less of a remapping (considering that
you can actually pick up an image from the visual cortex).
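Your #2 -- Taste's output modifying Vision's hierarchy and vice versa --
can be caricatured as a Hebbian outer-product association between two
vector spaces.  A toy sketch (the sizes and the "biscuit" vectors are
invented for illustration, not a claim about real nervous systems):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical output vectors of two independent hierarchies:
# an N-space for Vision and an M-space for Taste.
N, M = 8, 5
sight_of_biscuit = rng.normal(size=N)   # "Choc Biscuit" as seen
taste_of_biscuit = rng.normal(size=M)   # "Choc Biscuit" as tasted

# Hebbian outer-product association: pairing the two streams once
# builds a cross-modal map W from Vision-space into Taste-space.
W = np.outer(taste_of_biscuit, sight_of_biscuit)

# Later, sight alone evokes a taste-space response ("salivation"):
recalled = W @ sight_of_biscuit
similarity = np.dot(recalled, taste_of_biscuit) / (
    np.linalg.norm(recalled) * np.linalg.norm(taste_of_biscuit))
print(similarity)   # ~1.0: the recalled pattern lies along the stored taste
```

The point of the sketch is only that the association is a mapping
between two spaces, not a new space of its own -- the New Thing emerges
from the interaction of the two streams.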

>One can see how (1) can arise from clockwork tickings down among the neurons, 
>weights or however you go about this sort of thing. Mechanical birds do it, 
>bees do it, even fleas do it. Given the nature of the process, one can 
>see more or less what is likely to come out of a given architecture. 
>
>This is not true of (2), however, because one needs to know the nature of the 
>outcome before one can begin to find the explanation that one would need to 
>describe this outcome. If one began without knowing the outcome - 
>unprompted by experience with similar systems, for example - then this 
>would be somewhat like formally criticising a movie that you have not yet seen 
>(which is not of itself enough to stop some people, but there you go.) 

Hmm, okay, so what does the outcome look like?  Are you saying that
association is fundamentally different from abstraction?

>A GUTA - a theory of consciousness - has to grapple with this property of 
>emergence, whereby something describable after the event is created from the 
>interaction of components to which it has only an oblique connection. The 
>emergent is not "made of" the structures from which it is woven, but rather 
>lies over them, in a circumstance in which they are necessary conditions for 
>its activities but do not constitute the whole of what it does. Consciousness 
>- with all of the caveats with which I began - could only be captured in a  
>GUTA which took these self-referential and volatile processes of interaction 
>and evolution as the bedrock. This is a task as difficult as inventing quantum 
>mechanics without either the mathematics, the repeatability or the ability to 
>observe. But then, that is exactly what they achieved in building QM from 
>nothing, so there's hope for us all yet.

Well, we don't really even have a bedrock yet, because we don't know
what those self-referential interacting evolving processes look like.
You could try to extrapolate backwards from the results and capabilities
of awareness (have you seen Gelernter's book, _The Muse in the Machine_?)
at some risk of falling into the mimicking-functionality trap.

Even if consciousness is an emergent process as you've delineated, we might
still be able to simplify the form of it quite a bit -- it seems unlikely
to me that you have these processes and then, boom! awareness emerges.
Even a much simplified form of these processes should show something
like awareness, or something you can call awareness, even if it falls
somewhere _between_ your #1 and #2 examples.

We already have some clue as to these processes when you consider that one
main activity of awareness seems to consist of altering information
into a form that reveals what we consider essential and important about
that information.  I.e., your visual cortex has an image; it's remapped
into lines, then shapes, then things, and if those things seem to have
something important about them, we refine them further.

So there's one possible angle to break down these processes -- as a
series of stagings for information flow -- each stage attempts to achieve
its own level of coherence, and whatever's different or remarkable about
the information at one stage gets kicked upstairs for further processing.
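That staging idea can be caricatured as a toy pipeline -- each stage
absorbs what it can reconcile against its own maps and kicks the
remainder upstairs.  (The stages and the "known patterns" below are
invented purely for illustration.)

```python
# Toy sketch of staged information flow: each stage matches its input
# against the "maps" (expected patterns) it holds, and whatever it
# cannot reconcile gets passed up to the next stage.

def make_stage(known_patterns):
    """A stage absorbs tokens it recognizes and passes the rest upward."""
    def stage(tokens):
        absorbed = [t for t in tokens if t in known_patterns]
        residual = [t for t in tokens if t not in known_patterns]
        return absorbed, residual
    return stage

pipeline = [
    make_stage({"edge", "line"}),        # low level: lines
    make_stage({"circle", "square"}),    # mid level: shapes
    make_stage({"face", "biscuit"}),     # high level: things
]

tokens = ["edge", "circle", "biscuit", "flibbertigibbet"]
for level, stage in enumerate(pipeline):
    absorbed, tokens = stage(tokens)
    print(level, absorbed)

# Whatever survives every stage is the novel, unreconciled remainder --
# the part that would demand further attention.
print("unreconciled:", tokens)
```

The interesting part is the residual: what no map can absorb is exactly
what gets "kicked upstairs".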

So maybe that's a definite primary task for awareness -- attempting to
reconcile the information flow by matching it against the maps we have.

The associative thing (like your choccie-loving bureaucrat) may be
quite similar -- only a mapping-across from one information space to
another, instead of a remapping upwards.  These information spaces
will have common features that could aid this kind of rethreading --
both will be affected by attention and desire, for example.  Some other
element may include both of them; there's not necessarily a problem
of regress here, since we may be able to create information spaces ad hoc
by threading other information spaces together, so there really is
no top information space threatening to become a 'homunculus' on us.

I would suspect that the emergent quality of consciousness may not be
an external attribute at all, even to the extent that we wouldn't call
anything that wasn't emergent 'consciousness'.  Perhaps emergence is inherent
to consciousness, and thus it doesn't represent a barrier to understanding
consciousness at all -- consciousness may be emergent all along a continuum
of parts that exhibit emergent phenomena to a lesser degree, making up
a whole (perhaps not a unity, but a mass) that exhibits emergence to a much 
higher degree.

Perhaps a viable foundation, a substrate for your self-organizing data
streams, would be neural nets that had hooks inside each other -- nets
that could extract information from each other's behavior (and modify
it), not just receive each other's output.
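A minimal caricature of such hooked-together nets, where one net reads
and nudges the other's *hidden* activations rather than only seeing its
final output.  (All the sizes, weights, and the "hook" matrix here are
made up; this is a sketch of the wiring idea, nothing more.)

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(w, x):
    """One dense layer with a tanh squashing nonlinearity."""
    return np.tanh(w @ x)

# Two tiny one-hidden-layer nets, A and B.
Wa_in, Wa_out = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
Wb_in, Wb_out = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
hook = rng.normal(size=(4, 4)) * 0.1   # B's read/write line into A's hidden layer

xa, xb = rng.normal(size=3), rng.normal(size=3)

ha = layer(Wa_in, xa)          # A's private hidden state
hb = layer(Wb_in, xb)          # B's private hidden state

ha_mod = ha + hook @ hb        # B modifies A's hidden state directly...
hb_mod = hb + hook.T @ ha      # ...and reads from it in return

out_a = Wa_out @ ha_mod        # each net's output now reflects the other's
out_b = Wb_out @ hb_mod        # internal behavior, not just its output
print(out_a, out_b)
```

The contrast with ordinary layering is that neither net sees only the
other's finished output; each one's internal state is partly woven from
the other's.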

>(*NOTE nb this is an example. A real biological being (RBB) would have these 
>chains integrated from neonatehood, of course. This is a thought experiment 
>about RBBs)
>
>_________________________________________________
>
>  Oliver Sparrow
>  ohgs@chatham.demon.co.uk

Thanks for your input, it's been most enlightening.

-- Richard Wesson
(departed@netcom.com)

