From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!think.com!samsung!emory!gwinnett!depsych!rc Fri Jan 31 10:26:34 EST 1992
Article 3221 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!think.com!samsung!emory!gwinnett!depsych!rc
From: rc@depsych.Gwinnett.COM (Richard Carlson)
Newsgroups: comp.ai.philosophy
Subject: Narrativity
Message-ID: <yNm0eB1w164w@depsych.Gwinnett.COM>
Date: 28 Jan 92 15:10:21 GMT
Lines: 104

With all this talk of "semantics" and "understanding" and
"morality" as determinants of "humanness" or "sentience," let me
suggest a (stronger?) criterion which seems to subsume all of the
above: namely narrativity.  Human beings are, to revert to an
Aristotelian-like definition, the storytelling animal, the being
who "has a story," the _narrative_ being.  "Semantics" is just a
scientific-sounding synonym for the old standby "meaning," as in
saying, "I'm looking for the _meaning_ of life," or "I'm trying to
put some _meaning_ into my life," and meaning, especially when it
takes on serious significance and you want to spell it with a
capital M, is generally narrative meaning, i.e., the meaning of
something "inside" a story.

I think it has become clear, as the discussion of intelligence has
gone on, that the "intelligence" in AI is, for many of the
participants in this Newsgroup, just that: disembodied
_intelligence_ without the rest of the "mind," as if AI were
really limited to expert systems.  When some other AI process is
considered (which is seldom in this Newsgroup) -- pattern
recognition, learning (training), virtual reality -- with
stochastic or probabilistic or fuzzy (rather than "logical" or
"syntactic") algorithms, the tendency is to treat these
emergent-like processes as "in principle" "no different" from the
familiar logical expert-systems model.  Even the discussion of what
makes an algorithm an algorithm suggests that many of the
participants think of "algorithmicity" as primarily a
deterministic-like phenomenon (although they will apparently
somewhat reluctantly accept training algorithms which lead to
different outcomes depending on the training).
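
To put the "different outcomes depending on the training" point
concretely (this little Python sketch is my own illustration, not
anything anyone in this thread has proposed), here is a one-neuron
perceptron whose final weights depend on the order in which it
sees the very same examples:

    # My own toy illustration: the same perceptron learning rule,
    # run over the same examples in two different presentation
    # orders, settles on different weight vectors.  The procedure
    # is perfectly algorithmic, yet the outcome depends on the
    # training history, not on the rules alone.

    def train(examples, epochs=10):
        """Classic perceptron update: w += y * x on each mistake."""
        w = [0.0, 0.0]
        b = 0.0
        for _ in range(epochs):
            for x, y in examples:              # y is +1 or -1
                if y * (w[0] * x[0] + w[1] * x[1] + b) <= 0:
                    w[0] += y * x[0]
                    w[1] += y * x[1]
                    b += y
        return w, b

    data = [((2.0, 1.0), +1), ((1.0, 3.0), +1),
            ((-1.0, -1.0), -1), ((-2.0, 1.0), -1)]

    print(train(data))          # one presentation order
    print(train(data[::-1]))    # same data, reversed order

Both runs end up classifying the training data correctly, but they
arrive at different weights -- a deterministic procedure with a
history-dependent outcome.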

To try to make clear what I am saying, let me talk in terms of a
paradigm which has an embodied expert system embedded in a
narrative "context": the doctor show on television -- whether the
doctor is Marcus Welby or Doogie Howser or Jim Kildare or whoever.
The typical doctor narrative has three "themes" or "movements"
(just what they are -- plots? subplots? ideas? -- isn't clear).  First
off there's the disease the patient is having.  Then there's the
personal problem the doctor is having  -- say Doogie has just found
himself lying to his parents about something.  Finally there's the
problem one of the minor characters is having, which happens to be
the "same" problem, only in a sillier and clearer manifestation --
Vinnie is lying to Janine or the head doctor is lying to a big
contributor to the hospital or something like that.  The embodied
expert system, say Doogie, "sees" the
solution to his own personal problem in an analogy to what he
advises the minor character to do.  And finally he either
diagnoses or treats the _patient's_ disease by means of a line of
thought which is suggested by his own and the minor character's
personal (human, existential) problem -- he either lies to the
patient or tells the patient a truth which no one else has told
her.

A disembodied expert system, manifest in a software program
developed in another "context" (which might even, as part of its
context, claim to be context-free), would take account of all the
"official" information available -- lab tests, history, and so on
-- and could even "in principle" do a table lookup of various
narrative patterns, such as lying; but until we think that such a
subroutine of narrative patterns might be helpful, we will
continue to focus on the standard official data.  Given the fact
that Doogie is a genius, some AI implementer might plausibly even
use his habitual style of diagnostic thinking to create the rules
in the expert system.  But in today's climate of opinion that
hypothetical AI implementer wouldn't be likely to probe into
Doogie's personal life to find analogies and metaphors he used to
make his diagnoses.  [NB:  I am well aware that I am giving this
treatment a sociological and historical, in terms of "climate of
opinion," twist instead of an "in principle" philosophical spin.
The reason I'm doing it is that too many people are willing to
admit something in principle, or exclude something in principle,
and then not touch it again, returning to familiar patterns of
thought, satisfied that if they've "covered" some possibility
"in principle" that that's sufficient.]

Most of the posters in this Newsgroup are thinking of the
"computer" in the Turing Test as an expert system (and not a
neural network system or a pattern recognition system or a virtual
reality system) designed to be an "expert" at appearing human.
Most of the discussion has proceeded on that unspoken assumption.  Then
some have gone on to question what it might "mean" (semantics
again!) for a computer to "pass" such a test: is it "alive"? does
it have rights? and so on.  My feeling is that the important parts
of the human mind -- even when functioning as an expert -- are not
the logical, algorithm-like processes, but the narrative
processes, the story in which the person sees herself.  If she's a
character in the wrong story -- maybe her parents or the mass
media _told_ her the wrong story -- then her behavior is going to
be wrong.  And the story is the meta-context of the single,
discrete-looking act.  It controls that phenomenally "discrete"
act the way the grammar of a sentence controls the appending of
this or that morpheme at the level of the individual utterance, or
the way the structure of the word shapes the phonetic form of this
or that phoneme at the level of the word.  To really be similar to
the interaction or conversation between two human beings, the
computer would have to see itself as a character in a story in
which the human other is just "another" character -- the way HAL
saw the human characters who accompanied him on his mission in
deep space to the moons of Jupiter.
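
A Python sketch of that meta-context claim (the stories and acts
here are invented for the illustration; HAL's actual design is of
course fiction):

    # My own toy illustration: the same discrete act is licensed or
    # blocked depending on the story the actor takes itself to be in,
    # much as a grammar licenses or forbids a given morpheme.

    STORIES = {
        # story -> the acts that "make sense" inside it
        "loyal crewmate":    {"report fault", "assist repair"},
        "mission above all": {"conceal fault", "remove obstacle"},
    }

    def perform(act, story):
        """An act goes through only if the enclosing story licenses it."""
        return act if act in STORIES[story] else None

    for story in STORIES:
        print(story, "->", perform("report fault", story))

The act "report fault" is grammatical, so to speak, in one story
and ungrammatical in the other.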

--
Richard Carlson        |    rc@depsych.gwinnett.COM
Midtown Medical Center |    {rutgers,ogicse,gatech}!emory!gwinnett!depsych!rc
Atlanta, Georgia       |
(404) 881-6877         |


