From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Apr 16 11:33:30 EDT 1992
Article 4994 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Ian Stewart on the philosophy of AI, hypothetically!
Message-ID: <6588@skye.ed.ac.uk>
Date: 8 Apr 92 19:33:59 GMT
References: <1992Apr06.025223.33630@yuma.acns.colostate.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 78

In article <1992Apr06.025223.33630@yuma.acns.colostate.edu>
ld231782@LANCE.ColoState.Edu (L. Detweiler) writes:

Another failed analogy (this time to complex numbers).  I won't
show why the analogy fails.  Instead I'll try to say why we should
reject the conclusion that this way of thinking of AI suggests.

>``All the while ...artificial intelligence programs [1] were useless, people
>thought about them as a problem in philosophy. That meant they had to be
>invested with some deep and transcendent *meaning*.  

There are many useful AI programs.  Programs that, say, pass the
Turing Test are not so much useless as nonexistent.  (And I don't
mean this in the sense of "AI programs don't exist because they're
not really intelligent".)

>Thus we find ...Searle
>and Penrose [2] making statements like those that decorate this chapter.
>For philosophers nothing is better than some obscure but mysterious idea
>that nobody really cares about and certainly can't test, because then you
>have plenty of room for clever arguments.  Angels on the head of a pin, and
>so forth.  But when something actually becomes useful, most people stop
>arguing about the philosophy and get on with the job instead. They don't care
>what the deep philosophical essence of the new gadget is; they just want to
>churn out as many results as they can in the shortest possible time by taking
>advantage of it.

So?  Most people aren't philosophers.  Most people don't wait for
these nonexistent programs to become useful before they decide not
to care what their deep philosophical essence is.  They already
don't care.  

>  If you can actually *see* the angels dancing on the head of
>the pin you stop trying to count them in favour of persuading them to dance
>on a microchip instead.  And that's exactly what happened to ...AI in the
>early 21st century [3].  The ...programmers discovered machine thought---
>how to *think* with software [4].  And that turned out to be so powerful that
>it would have been dreadfully embarrassing had some ingenious but unwary
>philosopher proved that ...artificial intelligence really can't exist [5].

The "*think*" begs the very question we're supposedly meant to
drop in favor of getting on with it.

>        As time passes, the cultural world view changes.  What one
>generation sees as a problem or a solution is not interpreted in the same
>way by another generation.

True enough, but what makes you so sure you can tell how future
generations will view AI?

Suppose at some future point we have robots that behave more or
less as if they were people and at the same time we've discovered
a great deal more about how humans work.  Sure, it might turn out
that we come to see the humans and robots as essentially the same.
But it might also turn out that, given what we know about how the
robots work and what we know about how humans work, it will be
clear that the robots aren't conscious (or whatever).

They'd still be _useful_.  But the philosophical issues might
still be of interest, at least for their ethical implications.
(E.g., would we be willing to treat robots as slaves, or should
they be free citizens?)

>  Today, when ... human thought is seen as no less
>abstract than other thinking systems, `artificial' ones included, [8] it's
>hard to grasp how different it all looked to our forebears. We would do well
>to bear this in mind when we think about the development of ...machine thought
>[9].  To view history solely from the viewpoint of the current generation is
>to court distortion and misinformation.''

We don't have much choice about viewing history from our own
viewpoint.  In any case, the idea that we should prefer the
viewpoint of some future generation is quite wrong.  Suppose
these future generations decide slavery is a pretty good idea
and democracy isn't.  Are we supposed to say "silly us!  how
could we be so narrow-minded?"

-- jd
