Newsgroups: comp.ai.neural-nets,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!udel!gatech!howland.reston.ans.net!agate!news.ucdavis.edu!library.ucla.edu!news.mic.ucla.edu!news.bc.net!newsserver.sfu.ca!fornax!hadley
From: hadley@cs.sfu.ca (Bob Hadley)
Subject: Connectionism, Systematicity and Representation.
Message-ID: <1994Nov10.234013.6701@cs.sfu.ca>
Keywords: systematicity, learning, representation, compositionality
Organization: Simon Fraser University
Date: Thu, 10 Nov 1994 23:40:13 GMT
Lines: 83
Xref: glinda.oz.cs.cmu.edu comp.ai.neural-nets:20040 comp.ai.philosophy:21909


                          (Extended Abstract)


                     Strong Semantic Systematicity
				 from
                   Unsupervised Connectionist Learning

                                  by

                     Robert F. Hadley and Michael Hayward
                         School of Computing Science
                            Simon Fraser University

                              CSS-IS TR94-02 


Fodor's and Pylyshyn's arguments (1988) to the effect that human
thought and language exhibit both compositionality and
systematicity are by now widely known.  Although connectionists
have questioned whether humans display these attributes in the
form that F&P describe (cf. van Gelder & Niklasson, 1994), most
now agree that in some important sense humans do employ a
combinatorial syntax and semantics and, as a result, exhibit some
form of linguistic systematicity.  

In 1989--90, a number of connectionists reported results which
established that connectionist networks (hereafter, c-nets) could
exhibit forms of linguistic generalization, which, 
prima facie, qualify as systematicity.  These results were
obtained without recourse to mere implementation of ``classical''
symbolic methods, and so, it appeared that one of F&P's major
conclusions was falsified.  However, in Hadley, 1992, 1994a, a
learning-based conception of systematicity was introduced, and
various degrees of systematicity were distinguished, ranging from
weak syntactic to strong semantic systematicity.  Hadley (1994a)
examined six different connectionist systems (Chalmers, 1990;
Elman, 1990; McClelland & Kawamoto, 1986; Pollack, 1990;
Smolensky, 1990; St. John & McClelland, 1990) and argued that,
in all probability, none of these systems displayed the strong
forms of systematicity that humans display.   As a consequence,
it appeared that a variant of F&P's original challenge stood
unscathed.  Recently, however, some researchers claim to have
satisfied Hadley's definition of strong systematicity, though not
his formulation of semantic systematicity.  In one instance
(Phillips, 1994), this claim  clearly requires qualification,
since  (as Phillips has acknowledged, personal communication) the
system involved cannot process embedded sentences as required by
Hadley's definition.  In another instance (Christiansen &
Chater, 1994),  a claim to strong generalization is restricted to
a single syntactic context (conjunctive noun phrases). 
Discussion of this claim, together with the claims of Niklasson &
van Gelder (1994), is given in Hadley, 1994b, where reservations
are explored.  In any event, none of the researchers just cited
address semantic aspects of systematicity and compositionality,
although F&P's (1988) presentation of these concepts did seem to 
involve semantic issues (such as the capacity to understand the
*meaning* of novel sentences and the need to banish semantic
equivocation in logical inference).  

          ................................................

A network exhibits semantic systematicity just in case, as
a result of training, it can assign  appropriate  meaning
representations to simple and embedded sentences which contain
words in syntactic positions they did not occupy during training.  
Herein we describe a network which displays strong
semantic systematicity in response to *unsupervised*
training.  In addition, the network generalizes to novel levels
of embedding.  Successful training requires a corpus of about
1000 sentences, and network training is quite rapid.  The
architecture and learning algorithms are purely connectionist,
but `classical' insights are discernible in one respect, viz.,
that complex semantic representations spatially contain their
semantic constituents.  However, in other important respects,
representations are distinctly non-classical.  
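The "spatial containment" property mentioned above can be sketched
with a toy example.  This is *not* the network or encoding described
in the report -- just a minimal illustration, assuming hypothetical
one-hot word vectors and fixed role slots, of how a complex semantic
representation can literally contain its constituents as sub-regions:

```python
# Toy illustration (hypothetical encoding, not the authors' architecture):
# meaning representations as lists in which fixed role slots spatially
# contain the constituents' representations.
VOCAB = ["john", "mary", "sees", "fears"]

def word_vec(w):
    """One-hot vector for a word -- a stand-in for a learned representation."""
    return [1.0 if w == v else 0.0 for v in VOCAB]

def compose(agent, action, patient):
    """Sentence meaning = concatenation of role slots; each slot literally
    holds the constituent's vector ('spatial containment')."""
    return word_vec(agent) + word_vec(action) + word_vec(patient)

s = compose("john", "sees", "mary")
n = len(VOCAB)

# Because the complex representation spatially contains its parts, a
# constituent is recoverable simply by slicing its role slot.
assert s[2 * n : 3 * n] == word_vec("mary")
```

In this sense the composition is "classical" (constituents are tokened
inside the whole), even though the slot contents in an actual network
would be learned, distributed patterns rather than one-hot vectors.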


        The above is available as a PS file by email.  
        Also available by FTP upon request.  Please contact
	hadley@cs.sfu.ca for FTP directions.


