Newsgroups: comp.ai.philosophy,sci.lang
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!zombie.ncsc.mil!news.duke.edu!agate!howland.reston.ans.net!swrinde!sdd.hp.com!hplabs!hplntx!curry
From: curry@hpl.hp.com (Bo Curry)
Subject: Re: What's innate? (Was Re: Artificial Neural Networks and Cognition)
Sender: news@hpl.hp.com (HPLabs Usenet Login)
Message-ID: <D3LF78.C8n@hpl.hp.com>
Date: Mon, 6 Feb 1995 19:23:32 GMT
References: <3esaig$6h5@mp.cs.niu.edu> <3f6ep0$r5f@oahu.cs.ucla.edu> <3f6il0$io6@mp.cs.niu.edu> <D2IDC2.12J@cogsci.ed.ac.uk> <3gshff$e5k@tardis.trl.OZ.AU>
Nntp-Posting-Host: saiph.hpl.hp.com
Organization: Hewlett-Packard Laboratories, Palo Alto, CA
X-Newsreader: TIN [version 1.2 PL2]
Followup-To: comp.ai.philosophy,sci.lang
Lines: 46
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:25241 sci.lang:35211

: >rickert@cs.niu.edu (Neil Rickert) writes:
: >>If, as I suggested, natural language has no grammar, then the
: >>learning task is simpler.  The child tries an approximate grammar,
: >>and with experience makes ad hoc modifications to that grammar to
: >>bring it closer to the way other people seem to talk.  This might
: >>mean that we all have empirically constructed approximate grammars,
: >>but we do not all have the same approximate grammar.  And we are all
: >>willing to violate the constraints of our approximate grammars when
: >>semantic necessity demands it.

Jacques Guy (jbm@newsserver.trl.oz.au) wrote:
: I have long come to share the same opinion. To me, grammar is the
: paths most trodden in infancy when deciphering all that 
: jabbering going on around you. Let me stretch the analogy.
: The corpus (the data) is the garden, grammar is the ruts you
: have made exploring the garden. It follows that we all have
: different grammars (and that alone would go a long way to explaining
: why language changes). On fine (and not-so-fine) points of grammar
: my wife and I often disagree. Yet we are both native French speakers.

: >Indeed, does anyone know what evidence Chomsky has for saying there's
: >a poverty of stimulus?  

: Nil, I guess. Just like this claim of his:

Now really, you guys. You may disagree with Chomsky, but to assert
that he has *no* evidence for his claims is simply ludicrous.

Why, Neil, do children's "approximations" follow such a constant
course? There are thousands of possible types of errors, and many
of them are made by computers using "higher-level approximations"
(by which I assume you mean frequency information on word triples,
quadruples, etc.; that is how Racter works, as I understand it, so
the experiment Jacques suggests has already been done). Those
errors are very different in kind from the errors made nearly
universally by children. "High-level" computer-generated text
(i.e. text generated by purely syntactic correlations) can read
hauntingly like a particular writer's prose, so much so that, at
high orders, it is possible to tell which author's work was used
to build the correlation tables. It reads not a bit like the
speech of children, however.
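
For concreteness, here is a minimal sketch of the word-triple
frequency scheme I have in mind (the code, its function names, and
the toy corpus are my own illustration; I make no claim that Racter
itself is implemented this way):

    import random
    from collections import defaultdict

    def build_trigram_table(words):
        # Map each adjacent word pair to the words seen to follow it.
        table = defaultdict(list)
        for a, b, c in zip(words, words[1:], words[2:]):
            table[(a, b)].append(c)
        return table

    def generate(table, length=15, seed=None):
        # Babble by repeatedly sampling a continuation of the last
        # two words emitted.
        rng = random.Random(seed)
        pair = rng.choice(list(table))
        out = list(pair)
        for _ in range(length - 2):
            followers = table.get(pair)
            if not followers:          # dead end: restart from a new pair
                pair = rng.choice(list(table))
                followers = table[pair]
            nxt = rng.choice(followers)
            out.append(nxt)
            pair = (pair[1], nxt)
        return " ".join(out)

    corpus = ("the child tries an approximate grammar and the child "
              "makes ad hoc modifications to that grammar").split()
    print(generate(build_trigram_table(corpus), seed=1))

Trained on a large sample of one author, a generator of this kind
reproduces that author's local turns of phrase, which is exactly
why its failures look nothing like a child's: it has no notion of
what it is saying, only of what tends to follow what.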

We might not all have the same "approximate grammar", but
the approximations we make are all of a kind.

Bo
