From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Thu Jan 16 17:22:06 EST 1992
Article 2753 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Cargo Cult Science
Message-ID: <1992Jan15.203848.10871@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <92Jan15.081805est.14473@neat.cs.toronto.edu>
Date: Wed, 15 Jan 92 20:38:48 GMT
Lines: 90

In article <92Jan15.081805est.14473@neat.cs.toronto.edu> mgreen@cs.toronto.edu (Marc Green) writes:

>It's clear from the discussion that advocates of Strong-AI, and
>computer scientists in general, don't have much understanding of
>empirical science. The essence of science is refutability. For any
>hypothesis to be taken seriously, it must be open to refutation. This
>means that the advocates must spell out exactly what evidence they
>would take as contradictory to the hypothesis. Well, what evidence
>would refute Strong-AI? 

What makes you think that AI is an empirical science?  Actually we have
to distinguish three senses of AI:

AI qua engineering: Getting computers to do smart things.
AI qua cognitive science: Getting computers to model human behaviour.
Strong AI: Getting computers to really think (not just behave).

The first is not, for the most part, an empirical science at all; rather,
in a sense it's a branch of the theory of algorithms, and is relatively
independent of the empirical structure of the world (empirical
considerations are occasionally relevant in an indirect way, through
e.g. hardware implementations and environmental interfaces).  So
Popperian falsification is not relevant here, as we're not talking
about contingent possibilities, but matters that are necessarily true
or necessarily false; we just don't know which yet (not unlike
e.g. Fermat's last theorem).  If you want to know how we could know
that computers could do smart thing X or Y (play Go, speak English well),
the answer is that at the start we don't know for sure, but there are
plausibility arguments in favour of the possibility.  The best way
of investigating the truth of the matter seems to be through
engineering ingenuity backed with theoretical considerations where
possible, which is just what AI is trying to do.

AI-qua-cogsci (sense 2) is empirical, but only insofar as it's a
subfield of cognitive science.  "AI can model human behaviour" may be an
empirical hypothesis, but it's far too broad to be the kind of thing that
can be confirmed or falsified in one go.  Rather, the object of
falsification will be specific hypotheses about human function couched
in computational terms -- e.g. "the visual system can be modeled by
algorithm A".  So far, we don't have any such theories that are good
enough even to be serious candidates for falsification, but that's just
because we don't know much about the mind -- it's not a comment on AI in
particular.

The third sense (Searle's strong AI) isn't the kind of thing that's
open to confirmation or falsification by observation, as observation
can only get you as far as behaviour and mechanisms.  Presumably, if
we've got as far as the behaviour, then the question of whether it's
"really thinking" is either (1) a matter to be settled by conceptual
analysis, or (2) something which is beyond the realm of observation,
as consciousness is not directly observable.  Either way, it looks
like philosophical argument is just about all we've got to go on.

>I've never gotten a straight answer to this one. Usually I get the
>"someday defense." You know, "someday we'll be able to do this" or
>"someday we'll understand that." Anybody looking at the statements by
>Feigenbaum, Simon, etc. over the last 30, 20 and 10 years knows how
>optimistic their predictions have been. Yet, Strong-AI types keep
>making claims about what will be accomplished. At this point, their
>credibility is a bit thin. If you don't believe me, just ask DARPA.

Now it appears as if you're talking about AI-qua-engineering
rather than Searle's "strong AI".  In any case, it's obviously
ridiculous to infer "it can't be done" from "it hasn't been done
yet".

>Belief in Strong-AI is like belief in God. The advocates simply can't
>understand how anybody could believe any differently or that there are
>no alternatives.

This is silly.  You should get out more.

>This mind set results in stupid arguments like the one over Searle and
>his simple-minded Chinese room. Everything ends up being someone's
>personal opinion. The real issues, like the one raised by Smith in
>"The Owl and the Electronic Encyclopedia" or by Lakoff in "Women, Fire
>and Dangerous Things" are never discussed. Lakoff actually uses
>empirical evidence to argue against strong AI. Some of his arguments
>are convoluted, but at least they're a start toward a real scientific
>discussion of AI.

Your ability to make universal generalizations is getting the better
of you.  "Not discussed recently on comp.ai.philosophy", maybe.
In any case, neither Lakoff nor Smith is arguing against AI in general;
only against certain approaches to it.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
