Newsgroups: sci.lang
Path: cantaloupe.srv.cs.cmu.edu!europa.chnt.gtegsc.com!howland.reston.ans.net!tank.news.pipex.net!pipex!uknet!newsfeed.ed.ac.uk!edcogsci!davidt
From: davidt@cogsci.ed.ac.uk (David Tugwell)
Subject: Re: Chomksy, Significance, and Current Trends
Message-ID: <DDGrFn.Dvp@cogsci.ed.ac.uk>
Organization: Centre for Cognitive Science, Edinburgh, UK
References: <DD18rC.AK0@actrix.gen.nz> <DDB8GB.9L5@cogsci.ed.ac.uk> <40o450$4e0@senator-bedfellow.MIT.EDU>
Date: Thu, 17 Aug 1995 16:35:44 GMT
Lines: 98

In article <40o450$4e0@senator-bedfellow.MIT.EDU> David Pesetsky <pesetsk@mit.edu> writes:
>davidt@cogsci.ed.ac.uk (David Tugwell) wrote:
>[I won't quote the original message.]
>
>Once again, we are given essentially a parody of generative 
>grammar and asked whether it can be defended.
>
>To reply in a nutshell: the idea that restrictions on 
>center-embedding fall under a different explanatory rubric 
>than restrictions on, say, whether V comes first or last in 
>VP, is not a "tenet" of generative grammar, but a putative 
>discovery.
>
>For example, among the data to be accounted for in the 
>study of human language are the differing empirical 
>"footprints" of various phenomena.  In one of their joint 
>papers, Chomsky and Miller note that certain sorts of 
>deviance improve when memory is aided with pencil and paper 
>or opportunity for reflection, and others do not.  Certain 
>sorts of deviance appear to increase with greater sentence 
>length or complexity, and others do not.  This was the sort 
>of thing that Chomsky described with the dividing line 
>"performance" vs. "competence".  The hypothesis was that 
>certain phenomena are due specifically to aspects of a 
>syntactic knowledge base, while others are due to a 
>separate implementation system that is affected by memory 
>and other less obvious factors.
>
>Certainly, specific instances of this sort of distinction 
>can be rightly or wrongly described, and claimed 
>distinctions can turn out to be non-existent. It's not 
>always easy to nail the facts down. Even harder is the 
>interpretation of the facts.
>
>But we are not dealing with a tenet, a belief, or a virtue 
>made out of necessity.  We are dealing with a claimed 
>empirical discovery -- furthermore, one that is not 
>especially foundational, but one of many pieces of evidence 
>supporting a body of research.
>
>-David Pesetsky
>
>


In my original posting I was trying to set out the line of reasoning
behind the adoption of the competence/performance distinction in
Chomsky's ``Three models for the description of language'' and ``Syntactic
Structures''. These are works I hold in deep and genuine reverence for
their outstanding brilliance, originality and clarity. I hope it would
not enter my head to try to parody them; if my clumsy efforts at
exposition smack of parody, that was certainly not intentional.

Nor do I disagree that restrictions on centre-embedding and the like
are clearly quite a different thing from restrictions on, say, basic
element order, and that a linguistic theory should explain the two in
different ways. Indeed, in the structural/statistical language model
that I propose they would be differentiated: limits on the former
would vary according to something analogous to ``memory constraints'',
while the latter would not.
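To make that contrast concrete, here is a toy sketch of my own (it is
purely illustrative and not part of any published model): a recogniser
for centre-embedded strings of the shape a^n b^n, in which the
permissible embedding depth is a variable memory parameter, while the
basic ordering of elements is a categorical constraint that no amount
of extra memory can relax.

```python
# Toy illustration only: a recogniser for strings over {a, b} of the
# form a^n b^n.  The centre-embedding depth n is bounded by a variable
# "memory" parameter, while the basic order (all a's before all b's)
# is a hard, categorical rule.

def accepts(tokens, memory_limit):
    """Return True if tokens match a^n b^n with n <= memory_limit."""
    depth = 0
    max_depth = 0
    seen_b = False
    for t in tokens:
        if t == "a":
            if seen_b:           # order violation: categorical, never OK
                return False
            depth += 1
            max_depth = max(max_depth, depth)
        elif t == "b":
            seen_b = True
            depth -= 1
            if depth < 0:        # more b's than a's so far
                return False
        else:
            return False
    if depth != 0:               # unbalanced string
        return False
    # Graded, memory-dependent limit on embedding depth.
    return max_depth <= memory_limit

# With more "memory" (pencil and paper, say) the same deep string
# becomes acceptable; a scrambled order never does.
print(accepts(list("aaabbb"), memory_limit=2))   # False: depth 3 > 2
print(accepts(list("aaabbb"), memory_limit=5))   # True
print(accepts(list("abab"),   memory_limit=5))   # False: order violation
```

The point of the sketch is only that one and the same model can treat
one restriction as variable with resources and another as absolute.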

I think the crux of the matter is what set of data we wish to model.
The analogy with fluid mechanics in a previous posting is very
pertinent: the immensely turbulent, chaotic rush of water down a
twisting pipe seems an excellent analogy to language. Naturally, when
modelling such realities we must stick to simple models that capture
the essential truths, and generative grammars are excellent examples
of such powerful and general models. However, do not imagine that it
would occur to a fluid engineer (is that what they're called?), when
trying to test, improve and develop his flow model, to say that data
from flow in real pipes was actually not relevant to him, that it was
a misunderstanding to suggest he had ever been interested in modelling
flow in real pipes, and that he was in fact trying to model an
abstract flow in abstract pipes.

Back to linguistics: the data most commonly referred to in generative
grammar are linguists' introspective estimates of an abstract quality,
the grammaticality of strings of words. In the structural/statistical
approach I am advocating, this would be replaced by a language user's
interpretation of a text (interpretation in the sense of ``who's doing
what to whom, when, etc.''). Both approaches depend on intuitions, but
in my experience the latter are the more useful and dependable. Now,
this approach will not result in a perfect model, as perfect models do
not exist, but in an abstraction, just as generative grammar is one.
The reasons one might prefer one over the other concern the utility of
the models and their explanatory power.
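For concreteness, the two kinds of datum might be represented as
follows (again a sketch of my own; the field names are purely
illustrative, not anyone's published notation):

```python
# Illustrative contrast between the two kinds of linguistic datum.

sentence = "the dog chased the cat"

# Datum of the first kind: an introspective grammaticality judgement
# on a string of words.
grammaticality_datum = (sentence, True)

# Datum of the second kind: a language user's interpretation of the
# same text, as a predicate-argument structure recording "who's doing
# what to whom".
interpretation_datum = (sentence, {
    "predicate": "chase",
    "agent": "dog",
    "patient": "cat",
})

# A structural/statistical model would be evaluated on recovering
# structures of the second kind from text, rather than on reproducing
# judgements of the first kind.
print(interpretation_datum[1]["agent"])   # dog
```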

I have simply been trying to explain my dissatisfaction with
generative grammar (which I know from experience is shared by many
others) and to dispel the idea that there is no other way to do
things. I hope no-one takes any of it personally; we are all honest
labourers, brothers and sisters toiling in the dark towards
enlightenment, etc.

David Tugwell

