From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!samsung!crackers!m2c!garbo.ucc.umass.edu!dime!chelm.cs.umass.edu!yodaiken Thu Dec 26 23:58:00 EST 1991
Article 2355 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!samsung!crackers!m2c!garbo.ucc.umass.edu!dime!chelm.cs.umass.edu!yodaiken
From: yodaiken@chelm.cs.umass.edu (victor yodaiken)
Newsgroups: comp.ai.philosophy
Subject: Re: Cargo-cult science
Message-ID: <41022@dime.cs.umass.edu>
Date: 21 Dec 91 19:44:03 GMT
References: <1991Dec20.013025.13569@oracorp.com> <40968@dime.cs.umass.edu> <1991Dec20.163736.25213@cs.yale.edu>
Sender: news@dime.cs.umass.edu
Organization: University of Massachusetts, Amherst
Lines: 26

In article <1991Dec20.163736.25213@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
>
>  In article <40968@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
>  >
>  >It is precisely this "leaning over backwards" that is missing in the various
>  >pro-ai claims seen in this debate. There have been confident assertions that
>  >the brain will be completely understood by 2050 or so, gross overstatements
>  >of the current level of knowledge, dismissal of contrary points of view
>  >(e.g., Edelman's), use of terms without definition (e.g., "computation"
>  >has been defined as "something like data transformation"), wild
>  >extrapolations (e.g., human brains are just more complex versions of
>  >slug nervous systems), and repeated efforts to assume conclusions.
>  ...
>  >Instead, I object to a claimed "scientific" understanding of human thought
>  >which reduces to an unproven hypothesis, some contested data, and a whole
>  >lot of hype.
>
>I think it's important to go on record as agreeing with Mr. Yodaiken
>to the extent that I think he's right.  It is reasonable, in the
>context of this newsgroup, to assume that "strong AI" is correct and
>explore the consequences.   It is unreasonable to lose sight of the
>fact that we've made an assumption. 

Thank you, Drew McDermott. Just as a side note, let me say that I don't think
that AI is really much worse in this respect than some other areas of CS.
The whole field is prone to oversell.
