From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!sdd.hp.com!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aisb!aisb!aiss Sun Dec  1 13:05:57 EST 1991
Article 1681 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!sdd.hp.com!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aisb!aisb!aiss
From: aiss@aisb.ed.ac.uk (Sven Suska)
Newsgroups: comp.ai.philosophy
Subject: Re: Defs (Intelligence)
Keywords: intelligence
Message-ID: <1991Nov27.193459.1670@aisb.ed.ac.uk>
Date: 27 Nov 91 19:34:59 GMT
References: <1991Nov21.153122.15464@cc.ic.ac.uk>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Reply-To: aiss@aifh.ed.ac.uk (Sven Suska)
Organization: Dept AI, Edinburgh University, Scotland
Lines: 16

(Regarding Adrian Redgers' definition)

Using neural nets to define intelligence seems to me a good way of
finding out more about intelligence, because it allows us to look
inside. The problem with many other definitions is that they try to
describe a hidden thing or structure by its outside behavior. Of
course we are not good at looking inside humans, but we can make
assumptions about what is there. In any case, intelligence seems to
presuppose a 'general ability' in the mind.
I would like it much better, though, if the direct model of neural
networks could be left out and replaced by more abstract concepts.
Mere counting (of neurons, connections or whatever) can (of course?)
not be enough; at most it can yield an upper bound for intelligence.
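The counting point can be made concrete with a toy sketch (my own
illustration, not from the discussion above): two threshold nets with
the identical wiring, neuron count and connection count can compute
entirely different functions, so the counts at best bound what a net
can do without determining it.

```python
def step(x):
    # Heaviside threshold unit: fires iff net input is positive
    return 1 if x > 0 else 0

def forward(net, x1, x2):
    # net = ((w11, w12, b1), (w21, w22, b2), (v1, v2, c)):
    # a 2-input, 2-hidden, 1-output threshold network
    (w11, w12, b1), (w21, w22, b2), (v1, v2, c) = net
    h1 = step(w11 * x1 + w12 * x2 + b1)
    h2 = step(w21 * x1 + w22 * x2 + b2)
    return step(v1 * h1 + v2 * h2 + c)

# Identical counts in both nets: 4 units, 6 connections, 3 biases.
xor_net   = ((1, 1, -0.5), (1, 1, -1.5), (1, -1, -0.5))  # computes XOR
blind_net = ((0, 0, -0.5), (0, 0, -0.5), (0, 0, -0.5))   # always outputs 0

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, forward(xor_net, *x), forward(blind_net, *x))
```

By any count-based measure the two nets are indistinguishable; only
the weights (the structure the counting ignores) separate them.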

Sven Suska


