Newsgroups: comp.ai
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!newsstand.cit.cornell.edu!news.acsu.buffalo.edu!news.uoregon.edu!arclight.uoregon.edu!news.maxwell.syr.edu!news.mathworks.com!rill.news.pipex.net!pipex!uknet!usenet1.news.uk.psi.net!uknet!uknet!yama.mcc.ac.uk!loki.cf.ac.uk!news
From: moffat@cf.ac.uk (Dave Moffat)
Subject: Re: AI wants to grow up, but should it?
Sender: news@cf.ac.uk (Usenet News user)
Message-ID: <32ff4de5.4073114@netnews.cf.ac.uk>
Date: Mon, 10 Feb 1997 17:07:22 GMT
X-Nntp-Posting-Host: d056.psyc.cf.ac.uk
References: <Pine.SOL.3.95.970128180209.1384B-100000@butch.dai.ed.ac.uk>
Organization: Posted through the Joint Cardiff Computing Service, Wales, UK
X-Newsreader: Forte Free Agent 1.1/32.230
Lines: 120

On Tue, 28 Jan 1997 18:26:25 +0000, Ian Clarke <ianc@dai.ed.ac.uk>
wrote:

>As a student of Artificial Intelligence and Computer Science at the
>University of Edinburgh, and a keen observer of the field, I have noticed
>a worrying trend.

Don't worry, Ian....

>Researchers in the field are beginning to shun the unconventional - some
>would say 'messy' - methods of research from which the field was born, in
>favour of what they see as a more mature formal approach, involving
>careful repetitive experimentation, and tabulation of statistics.  In a
>way it appears that they want AI to become more like other scientific
>fields, with well defined methods of research, and where increased  
>specialisation is seen as progress.

Fair comment, but not a serious criticism.
Wanting to do as other sciences do, merely to keep up with them
in some vague, envious, copy-cat way, would clearly be silly.
But that's not why people like to formalise things in AI.
Not all of them, anyway.

It's also a good discipline for holding your thoughts together,
and for enabling the field to come to a solid consensus.
Other disciplines like the social sciences and
the humanities have a hell of a job trying to
reach consensus, so they hardly get past square one.
You can't build on previous generations' foundational knowledge
unless the foundations are secure and everyone trusts them.

Formality is one way of being rigorous, possibly even the best.
Formality helps less intelligent people to be smarter because it
trains their thoughts -- discipline and guidance, again.
Formality is a tough master, which is why so many people don't
like it.

>I believe that this trend will hold back progress in AI.  Virtually all of
>the various areas of research in the field at present were spawned through
>that 'messy' methodology, whether planning, robotics, connectionism,
>or A-Life - none of them began as a natural well-defined progression from
>previous work in the field (as in many cases, there was no 'field' at the
>time).

All new ideas come out of nowhere, by definition, so of course
they will be informal -- even messy as you put it.
Babies are born burping and pooping, too, and they are wonderful,
but we expect them to grow out of it.
Otherwise they'll never make a contribution.

>One of the areas in AI where careful experimental method is very
>important, is that of A-Life, and connectionism, but these areas still
>require the fertile womb of this 'messy' methodology to spawn insightful
>new ideas.

Aha, so you're onto the birth-canal metaphor too, eh?! :-)

>AI is not like other sciences.  In many ways it is completely unique, as
>where chemistry, physics, and biological sciences attempt to gather
>information about the world we live in, AI tries to replicate what we see 
>around us (and within ourselves).

AI should be like other sciences in its respect for rigour, for
mathematical formalisation where possible, computational modelling if
that's all we can do for the moment, and at least careful argument at
all other times.
Terms have to be defined as well as possible, and an honest effort has
to be made to communicate effectively with colleagues.

The problem with messy methodology is that it gives you messy Science.
Sooner or later, AI has to grow up.
The "worrying trend" to statistical analysis you refer to is something
I can only applaud -- it's been a long time coming!

------

And now, having disagreed with you totally on your first point,
it's my pleasure to agree with you totally on your second :-)

>Specialisation is another problem currently facing AI.  If AI is to ever
>reach it's ultimate goal (ie. to build an intelligent machine) all of it's
>sub-fields must work in unison, whether planning, vision, connectionism or
>linguistics.  Increased specialisation, and therefore fragmentation will
>only serve to hinder this.  I suspect that one of the final difficult
>tasks facing AI will to bring all of it's sub-disciplines back together. 

Yes, absolutely, dead right!
Nothing to add to this: you're just right.
Isn't it curious how few people have spotted that?

Still, it's a mistake that's being rectified I think.
The field of A-Life is, from an AI perspective (from mine),
just the branch of AI that is attempting such unification.

>In conclusion, I hope I have convinced you that while it is desirable from
>certain points of view for AI to become more like other science, this will
>have an overall negative effect upon progress in the field.

Hmm... well, considering the size of the field, I'd say that progress
to date has been disappointing. I'm encouraged by recent trends, and
think that AI as such has a brighter future now than ever.
(And that's not AI-hype :-)

As for the worry about stifling ideas with too much maths:
well, mathematicians manage to have lots of ideas, you know?

>Ian Clarke

Dave Moffat
School of Psychology              email:  moffat@cardiff.ac.uk
Cardiff University of Wales         tel:  +44.(0)1222.874000 (ext. 6285)
PO Box 901                          fax:  +44.(0)1222.874858
CARDIFF   CF1 3YG
Wales, U.K.
