From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!emory!gwinnett!depsych!rc Fri Jan 31 10:27:33 EST 1992
Article 3320 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!emory!gwinnett!depsych!rc
From: rc@depsych.Gwinnett.COM (Richard Carlson)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <y82DFB3w164w@depsych.Gwinnett.COM>
Date: 31 Jan 92 00:36:45 GMT
References: <391@tdatirv.UUCP>
Lines: 128

sarima@tdatirv.UUCP (Stanley Friesen) writes:

RC:
> |It is most likely some melange of "text" (semantic elements),
> |graphics (imagery) and behavioral intentions ("attitudes").

SF:
> O.K., now given that these things must be encoded in some manner to be
> instantiated in any processing system, be it brain or computer, how are
> "text", and "graphics" and "attitudes" different than symbols?

Whew!  I could write a book on each one of these and how they may
differ from the others.  For a long time psychologists were
enamored of the notion that ideas were stored in the mind as
images.  They meant pictures.  Actually the notion isn't so
far-fetched.  Animals don't have language, so presumably they
construct some kind of virtual reality in their foggy animal
minds which reproduces their prey, natural enemies, etc.  About
80 or 90 years ago there was even a question as to whether there
could be "imageless thought."  It was thought that general images
of things like, say, "man" (it was a sexist time!) were "built
up" from particular experiences.  They seemed to assume some kind
of overlay process: the psychologists of that day debated why the
general image of "man" didn't include a beard, since even a
single exposure to a bearded man should have overlaid a beard
onto all the images of clean-shaven men.  The idea of statistical
sampling and abstraction was apparently alien to them.
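To make the contrast concrete, here is a toy sketch (the feature
vectors, and the choice of max vs. mean, are my own illustration,
not anything from the period):

```python
# "Overlay" vs statistical abstraction of a general image.
# Invented feature vectors: [has_beard, has_hat, wears_glasses]
exemplars = [
    [1.0, 0.0, 0.0],  # one bearded man, seen once
    [0.0, 0.0, 1.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0],
]

# Overlay theory: any feature ever seen is stamped onto the composite,
# so a single bearded exemplar makes the general "man" bearded.
overlay = [max(col) for col in zip(*exemplars)]

# Statistical abstraction: the prototype is the mean over exposures,
# so a beard seen once in four exposures stays faint, not dominant.
prototype = [sum(col) / len(exemplars) for col in zip(*exemplars)]

print(overlay)    # beard fully present
print(prototype)  # beard only at 0.25
```

The puzzle that stumped them dissolves once the composite is an
average rather than a union of everything ever seen.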

Yes, yes, I know that the image has to be reduced to some kind of
"machine language" of a presumably binary "symbol system,"
although the thought a hundred years ago was that it wasn't
changed all that much -- that the image was a "pattern" of
neurons that looked like the thing it imaged (i.e., an iconic
symbol rather than an arbitrary one).  In psychology texts of 40
years ago the posterior "general association" area of the human
brain was drawn to look like a movie screen on which images were
literally projected.  It felt right to them then.

Then attitudes!  I spent years studying those things.  My Ph.D.
thesis is in attitude measurement.  Let me just mention that from
about 1915 or so on -- after they had lost interest in images as
conveyors of memory and mediators of behavior -- psychologists got
interested in muscle twitches.  Not as silly as it sounds since
they theorized that little, minuscule movements (which they called
"responses") could be "conditioned" to "stimuli" to produce
"behavior."  Behaviorist psychologists from the 1920s through the
1950s seemed to get the same feeling of "solidity" or whatever
from the physicalist notion of observable stimuli and real,
physical (even if miniaturized) responses that some contemporary
AI theorists get from an algorithmic or syntactic proof procedure
-- something to hold onto, I think, almost literally to
_visualize_, which takes us back to those images!  And these
notions felt as solid and as common-sensical to them as your
notion that there is a symbolic representation for everything does
to you.

> Once encoded, they are now reduced to *representations*, they are no longer
> the things themselves.  And mental manipulation of these representations seem
> to me to be little different than symbol manipulation, even in my own brain.

Reread your own words and see if they are not just an updated
version of the preoccupations of psychologists of yesteryear,
only now symbolic processes are the privileged mechanism.

> This is where I have problems with the Searle approach.  I can see no way
> of instantiating things like this, in any hardware, that do not reduce to
> symbolization.

But the human brain may not be as much like the hardware of our
computers as we sometimes think.  Back in the 1940s the brain was
depicted in undergraduate biology and psychology texts as like a
telephone exchange -- a 1940s telephone exchange with polite,
sorry-wrong-number-like operators, complete with microphones on
their chests, as the homunculus of the day.

> As far as I can see we have come full circle, right back to symbol
> manipulation.  We are just at a different *level* now.
> 
> |> [Prior experience *must* be encoded in some way, since the experience itself
> |> is no longer available, and much evidence suggests that all memories are
> |> *reconstructions* not direct recall, and prior experience can generally be
> |> expressed as real sentences].
> |
> |A lot of forensic research done on the use of hypnosis and other
> |techniques to help eye witnesses recall details indeed suggests
> |that memories are reconstructions (not complete recordings as
> |Freud, among others, speculated) but the focus on verbal encoding
> |into sentences hasn't emerged from the research.  (You can alter a
> |person's recall with verbal suggestions.  Any competent hypnotist
> |can convince a witness he did or did not see something.)
> 
> The last phrase simply refers to the fact that one can generally talk
> about ones memories.   And to the fact that most people seem to think
> in language (whether written or spoken).  Thus there does not seem to be
> any functional difference between linguistic memory and non-linguistic
> memory.  At most they simply come from different loci in the brain.
> 
> Also, neurobiology has, so far, only found one mechanism for memory, and
> only one encoding scheme for internalized data in the brain.  Thus, again,
> there seems to be only one memory system, and only one internal encoding.

I must have missed it.  I'm not being sarcastic.  Perhaps there
has been some new finding about memory that I missed.  (Somehow I
had never heard of formal semantics, although the damn stuff had
been around for sixty years!)  I thought the "engram" or "memory
trace" was still highly speculative and that there was no known
mechanism.

> Thus it seems silly to give special status to non-linguistic data traces,
> and treat them as 'causal' or 'non-symbolic', or 'semantic'.  I think this
> kind of dichotomy is artificial.

If memory is largely linguistic, how do animals -- especially
closely related species that still lack language, like
orangutans -- remember?

The "linguistic turn" which hit philosophy in the 1930s and
psychology in the 1960s seems dominant in cognitive science now,
but one advantage of having lived through several paradigm shifts
is that the currently privileged paradigm doesn't feel quite as
convincing as it might.

--
Richard Carlson        |    rc@depsych.gwinnett.COM
Midtown Medical Center |    {rutgers,ogicse,gatech}!emory!gwinnett!depsych!rc
Atlanta, Georgia       |
(404) 881-6877         |


