Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!cs.utexas.edu!convex!convex!insosf1.infonet.net!internet.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: What's innate? (Was Re: Artificial Neural Networks and Cognition)
Message-ID: <D3Lt79.3A6@spss.com>
Sender: news@spss.com
Organization: SPSS Inc
References: <3g673d$7pl@mp.cs.niu.edu> <3gtu3i$rf3@mp.cs.niu.edu> <D3GBKw.F5D@spss.com> <3guola$sj5@agate.berkeley.edu>
Date: Tue, 7 Feb 1995 00:25:55 GMT
Lines: 46

In article <3guola$sj5@agate.berkeley.edu>,
 <jerrybro@uclink2.berkeley.edu> wrote:
>markrose@spss.com (Mark Rosenfelder) wrote:
>> >[. . .] In other words, without a UG, the language itself
>> >evolves so as to become learnable without a UG.  Who is to say that
>> >the English language has not already evolved so as to become
>> >learnable without a UG?
>> 
>> Boy, you just don't like UG (Universal Grammar), do you?  :)  This is an
>> interesting argument, but not a very convincing one, IMHO.
>
>I don't know if this is what Neil meant, but I find an important
>point here.  For, it is claimed that language is just too
>complicated for us to learn it without special help from a
>"language acquisition device" of some sort.  But what is the
>basis for this assertion?  Surely it is this:  language is
>"comparable" in complexity to things which we cannot learn so
>easily. (Obviously, "comparability" is not decided here by
>comparable difficulty for learning.)
>
>But I see no basis for supposing that a learning computer is
>equally able to learn all tasks of "comparable" complexity.
>In fact, I'm sure such unevenness is generally the case--consider Minsky's
>"perceptrons":  he showed that perceptrons could not learn
>particular tasks at all.  Surely it is also true that
>perceptrons can learn certain tasks more quickly than others
>which are "comparably" complex.  And this bias in favor of certain
>sorts of tasks would exist even without any specially designed
>"acquisition device" for those particular tasks.  One might
>suppose that languages are just such "easy" tasks.

I agree that different types of "learning machines" might find different
things easy or difficult.  Still, I would expect some explanation of *why*
certain tasks are easy or hard (such as Minsky and Papert provided with
perceptrons).  
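The perceptron case can even be demonstrated in a few lines.  A quick sketch
(mine, not from Minsky & Papert's book itself): the classic perceptron
learning rule masters AND in a couple of passes, but can never settle on
weights for XOR, since XOR is not linearly separable -- two tasks of
"comparable" complexity, wildly different learnability.

```python
# A single-layer perceptron (classic perceptron learning rule) trained on
# two Boolean functions of "comparable" complexity.  AND is linearly
# separable, so the rule converges quickly; XOR is not, so the rule never
# converges no matter how long it runs -- Minsky & Papert's point.

def train_perceptron(examples, epochs=100):
    """Return True if the perceptron rule converges on the examples
    (classifies every case correctly within the epoch limit)."""
    w = [0.0, 0.0]  # input weights
    b = 0.0         # bias
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            if out != target:
                delta = target - out
                w[0] += delta * x1   # perceptron update rule
                w[1] += delta * x2
                b += delta
                errors += 1
        if errors == 0:
            return True   # a separating line was found
    return False          # never converged

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(train_perceptron(AND))  # True: AND is linearly separable
print(train_perceptron(XOR))  # False: XOR is not
```

Nothing about the raw input/output tables predicts this gap; it falls out
of the structure of the learning machine.  That's the kind of explanation
I'd want for any claim that language is an "easy" task for our brains.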

You might be able to demonstrate that something about the learning machine
that is our brain makes language (rather than, say, playing the
piano, repairing bicycles, or running a government) easy to learn.
But this result might well be pretty much equivalent to UG.

To put it another way, Chomsky argues specifically for a "language acquisition 
device" rather than a "general learning facility".  But a general learning 
facility that works fantastically at language, and not so well at mathematics 
and other things computers are easily programmed to do, seems to fit the
spirit, if not the letter, of Chomsky's theory.
