Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!gatech!udel!news.mathworks.com!newshost.marcam.com!charnel.ecst.csuchico.edu!olivea!wetware!spunky.RedBrick.COM!psinntp!psinntp!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Subject: Re: What's innate?
Message-ID: <1995Feb6.111749.27092@oracorp.com>
Organization: Odyssey Research Associates, Inc.
Date: Mon, 6 Feb 1995 11:17:49 GMT
Lines: 105

Neil Rickert <rickert@cs.niu.edu> wrote:
>>>And there is even another problem.  Just suppose that Chomsky is
>>>right, and that there is not sufficient stimulus to acquire English
>>>without an innate UG.  Let's imagine that, due to some escape of
>>>radiation, the UG genes are damaged in every human.  The children of
>>>the next generation grow up without UG.  Because of the poverty of
>>>stimulus, they cannot learn English grammar now that they have no
>>>UG.  They perhaps can learn some of the simpler parts of it.  But
>>>they still have an urge to talk, so they get by the best they can,
>>>albeit ungrammatically.
>
>>>By the time of the next generation, this ungrammatical speech has
>>>become the norm.  Therefore, by virtue of the way linguists define
>>>grammatical, it has become the correct grammar for the new
>>>generation.  In other words, without a UG, the language itself
>>>evolves so as to become learnable without a UG.  Who is to say that
>>>the English language has not already evolved so as to become
>>>learnable without a UG?

I don't understand this argument, either in what it is trying to
show or in how it is supposed to show it. In a later post, you add:

>I was presenting a thought experiment, not suggesting something that
>might have actually happened.  What I was trying to suggest was that,
>even without there being a UG, you can pretty well guarantee that any
>language will evolve to be learnable, simply because a language has
>to be learnable to survive into the next generation.

Agreed. Language has to be learnable. But the question is this: is it
possible (and likely) for a language as complex as human natural
languages to be learnable by creatures without some innate UG?

>In the
>circumstances, it is almost certain that there will be apparent
>evidence which supports a "poverty of stimulus" argument.

No, that isn't true. Take the extreme case of a language with only
(say) 10 correct utterances. Individuals would clearly receive enough
stimuli to be able to figure out what those utterances are. The
poverty of stimulus argument only applies to languages of sufficient
complexity, like English, where there is no possibility of hearing
every possible utterance. Once again, the question is whether it is
possible to have a language as complex as English that is learnable
without UG.

>In other words language will develop so as to support a "poverty of stimulus"
>argument, quite independent of whether or not there is a UG.

As I said, the "poverty of stimulus" argument only applies to
languages of sufficient complexity. I agree that if it is possible
for such languages to develop without UG, then that shows UG is
unnecessary. But is it possible?

>Therefore the poverty argument by itself is no more persuasive that
>there is a UG than it is persuasive that there is not a UG.

I don't see how your "therefore" follows from anything you have said.
*If* UG is not necessary for language to be learnable, then the fact
that language is learnable is not support for UG. But how is your
argument supposed to show that UG is unnecessary for learnability?
It *assumes* that it is unnecessary.

>>>It seems to me that the "poverty of stimulus" argument is based on
>>>fallacious reasoning.  It looks only at the role of the child in
>>>learning the language.  It ignores the processes by which language
>>>itself evolves so as to be learnable by the culture.  Both processes
>>>must exist, and must be in approximate equilibrium.

The claim that language evolves so as to be learnable isn't contrary to
UG. UG only constrains the meaning of "learnable".

>>>Jacques Guy, in article <3gshff$e5k@tardis.trl.OZ.AU>, pointed out
>>>a nice example due to Claude Shannon.  He came up with:
>>>
>>>	THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE
>>>	CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE
>>>	LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN
>>>	UNEXPECTED
>>>
>>>This sequence of words was constructed randomly by a stochastic
>>>process.  The process took into account word frequencies, and
>>>frequencies of pairs of adjacent words.  It is semantic hash, but it
>>>contains reasonably long strings which look realistic as parts of
>>>grammatical sentences.  Now suppose that just such a stochastic
>>>process were used, except that it were further constrained to only
>>>choose words appropriate for a meaning that was to be conveyed.  It
>>>seems quite likely that the resulting sentence would be pretty
>>>acceptable grammatically.  Yet it would have been constructed using
>>>virtually none of the grammatic principles which Chomsky thinks are
>>>important.

Yes, and it would likely be nonsense, just as the sample above is
nonsense. It seems to me that stochastic arguments actually support
Chomsky, because (1) sentences with the right word frequencies still
sound like nonsense, and (2) sentences that are clearly sensible can
use quite rare word combinations. It seems clear to me that the
stochastic approach you describe is *not* what humans use.
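For the record, the process Shannon used is easy to state precisely: tabulate which words follow which in a sample of English, then take a random walk through that table. A minimal sketch (the tiny corpus here is hypothetical, standing in for the large English sample Shannon actually used):

```python
import random
from collections import defaultdict

def build_bigram_model(words):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Random walk through the bigram table, as in Shannon's construction."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:          # dead end: no observed successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Toy corpus (hypothetical) -- repeated words give the walk real choices.
corpus = ("the head of the english writer told the problem of the "
          "character of the method for the letters of the time").split()

model = build_bigram_model(corpus)
print(generate(model, "the", 12))
```

Every adjacent pair in the output is a pair that actually occurred in the corpus, which is why short stretches look locally grammatical; but nothing in the table enforces any global sentence structure, which is why the whole rarely means anything.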

I would like to recommend a wonderful book (in my opinion) called
"Grammatical Man" by Jeremy Campbell, about the importance of grammar.

Daryl McCullough
ORA Corp.
Ithaca, NY
