From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!tdatirv!sarima Tue Jan 21 09:26:52 EST 1992
Article 2856 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!tdatirv!sarima
>From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Cargo Cult Science
Message-ID: <378@tdatirv.UUCP>
Date: 17 Jan 92 18:35:58 GMT
References: <92Jan15.081805est.14473@neat.cs.toronto.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 70

In article <92Jan15.081805est.14473@neat.cs.toronto.edu> mgreen@cs.toronto.edu (Marc Green) writes:
|It's clear from the discussion that advocates of Strong-AI, and
|computer scientists in general, don't have much understanding of
|empirical science. The essence of science is refutability. For any
|hypothesis to be taken seriously, it must be open to refutation. This
|means that the advocates must spell out exactly what evidence they
|would take as contradictory to the hypothesis. Well, what evidence
|would refute Strong-AI? 

I have already stated (or implied) the answer to this.

Strong observational evidence for Searle's claim that the way the brain
produces mind is inherently physical, and thus not capable of duplication
in an information-processing system.

As long as the hypothesis (not theory) that mind is essentially an
informational entity remains reasonable, the Strong-AI position remains
tenable.

The continuation of AI research is one good way of approaching this
(and perhaps even achieving the refutation).

| Yet, Strong-AI types keep
|making claims about what will be accomplished. At this point, their
|credibility is a bit thin. If you don't believe me, just ask DARPA.

Well, I certainly am not making claims about what *will* be accomplished;
I am only saying I see no reason to deny they *can* be accomplished.
This is a subtle, but important distinction.

|Belief in Strong-AI is like belief in God. The advocates simply can't
|understand how anybody could believe any differently or that there are
|no alternatives.

Oh, I can understand the alternatives; I have just seen no reason to
accept them.

|Strong-AI advocates: they are so obviously right that there is no
|evidence which could contradict them. They don't understand that in
|science, it's up to you to disprove the null hypothesis. 

Wrong.  In science it is up to each researcher to provide evidence for or
against any relevant theories.

Also, why is the Anti-Strong-AI approach the 'null hypothesis'?  It seems
to me it is derived from different axioms, and so has equal status with
the Strong-AI approach.  From the point of view represented by my axiomatic
system, the anti-AI position is the more derived one, and the Strong-AI
position is the one closest to a null hypothesis.

But of course if you take Searle's, or Penrose's axiom system, then the
Strong-AI approach seems derived.  It is the underlying axiom systems
that must be checked against the evidence and falsified or not.  But since
all axiom systems are, in the absence of evidence, equal, there is no
way of assigning null status to either approach.

|This mind set results in stupid arguments like the one over Searle and
|his simple-minded Chinese room. Everything ends up being someone's
|personal opinion. The real issues, like the ones raised by Smith in
|"The Owl and the Electronic Encyclopedia" or by Lakoff in "Women, Fire
|and Dangerous Things" are never discussed. Lakoff actually uses
|empirical evidence to argue against strong AI. Some of his arguments
|are convoluted, but at least they're a start toward a real scientific
|discussion of AI.

Could you summarize this supposed evidence so we could evaluate it?
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)
