From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!mips!sdd.hp.com!cs.utexas.edu!uunet!mcsun!uknet!icdoc!cc.ic.ac.uk!redgers Sun Dec  1 13:06:30 EST 1991
Article 1737 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!mips!sdd.hp.com!cs.utexas.edu!uunet!mcsun!uknet!icdoc!cc.ic.ac.uk!redgers
From: redgers@sig.ee.ic.ac.uk (Adrian Redgers)
Newsgroups: comp.ai.philosophy
Subject: Re: Defs (QDI)
Summary: Help us #define it quantitatively
Message-ID: <1991Nov28.170726.27890@cc.ic.ac.uk>
Date: 28 Nov 91 17:07:26 GMT
References: <1991Nov21.153122.15464@cc.ic.ac.uk> <1991Nov27.193459.1670@aisb.ed.ac.uk>
Lines: 82
Nntp-Posting-Host: elijah.ee

Sven Saska (aiss@aifh.ed.ac.uk) and I are designing (haha) a Quantitative 
Definition of Intelligence (QDI) that can be used to compare intelligent 
machines despite their being very different.  Now I'm a Neurotic Networker 
so it was no surprise when I wrote:

>I once saw rat, cat and human compared in terms of connections per neuron 
>but some Neural Network measures apply to conventional machines too.... 
>Intelligence could be measured in terms of the number of ... (neurons) ... 
>(weights), connections ... or else *system functionality* (number of poss. 
                            ~~~~~~~
>behaviours) vs. *generalization ability* (learning from few examples - 
>training algorithm comes into it).

Sven wrote:
[nice things about NN's, but then:]
>I would like it much more though, if the direct model of Neural 
>Networks could be kept out and replaced by more abstract concepts.

Good point: we want to compare machines having neural and non-neural
architectures.  The last two items on the list were in fact my attempt 
to abstract beyond NN's, but the words "training algorithm" slipped in.  


*Functionality* I use in a technical sense to mean the number of different
functions, mappings from say {0,1}^Nins to {0,1}^Nouts, that some
Black Box can implement - where Nins = number of (binary) input lines to
the box and Nouts = number of (binary) output lines.  

If the black box is a look-up table (c.f. David Gudeman's current posting
on 'Awareness') then the functionality is maximal = 2^(Nouts*(2^Nins))
in the case where input and output lines can each be set to either 0 or 1.

[ Generally, if there are Istates possible values for each input and Ostates
possible values for each output, the functionality, f, is (trivially):

	f = Ostates^(Nouts*(Istates^Nins))

and bad luck if you wanted real-valued I/O.]
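The counting above can be sketched in a few lines of Python (names my own);
the closed form is brute-force checked by explicitly enumerating every binary
look-up table for a tiny box:

```python
from itertools import product

def functionality(Ostates, Nouts, Istates, Nins):
    """Number of distinct black-box functions: Ostates^(Nouts * Istates^Nins)."""
    return Ostates ** (Nouts * Istates ** Nins)

def enumerate_tables(Nins, Nouts):
    """Brute-force count of all binary look-up tables with Nins inputs, Nouts outputs."""
    inputs = list(product([0, 1], repeat=Nins))    # every possible input pattern
    outputs = list(product([0, 1], repeat=Nouts))  # every possible output pattern
    # a table assigns one output pattern to each input pattern
    return len(list(product(outputs, repeat=len(inputs))))

print(functionality(2, 1, 2, 2))   # 16: the boolean functions of two variables
print(enumerate_tables(2, 1))      # 16 again, by explicit enumeration
```

With Python's arbitrary-precision integers the closed form stays exact even
where enumeration would be hopeless (e.g. Nins = 8, Nouts = 8 binary gives
2^2048 functions).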


Some questions and suggestions for the pot:

Q1: Is it right/useful that our QDI be highest for look-up tables?

S1: Entertain this idea seriously, remember this is a QDI, not an
    Actual Definition (ADI?).  I still think it lacks something.


Q2: How about *which* functions, not just how many?

S2: Perhaps using some info. theory definition of how *hard* a 
    function is. 


Q3: What about internal states, FSA's?

S3: Consider mega-inputs as finite sequences of possible black box 
    inputs, universfuls of them, yuk.
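     Taking S3 literally, under my own simplifying assumption of fixed-length
     sequences: if a mega-input is a length-T sequence of ordinary inputs,
     there are Istates^(Nins*T) of them, and the same counting applies - a
     sketch of just how fast the universfuls arrive:

```python
def seq_functionality(Ostates, Nouts, Istates, Nins, T):
    """Functions from length-T input sequences to a single output pattern."""
    mega_inputs = Istates ** (Nins * T)   # number of possible input sequences
    return Ostates ** (Nouts * mega_inputs)

print(seq_functionality(2, 1, 2, 1, 1))  # T=1 recovers f = 2^(1*2^1) = 4
print(seq_functionality(2, 1, 2, 1, 2))  # T=2: 2^4 = 16
print(seq_functionality(2, 1, 2, 1, 3))  # T=3: 2^8 = 256, and upward from there
```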


Q4: What about 'adaptiveness' and learning?  

S4a: I would like to see it included in a QDI, perhaps as an abstraction
     of the NN's idea *generalization*.  It could embody speed but also 
     'correctness' or appropriateness of adaptive behaviour.  

S4b: Pile the whole lot into look-up table format, 
     f = universfuls^(universfuls^...)


Q5: These Q's and S's run off quite easily and naturally so I guess someone 
    has already invented this wheel - anyone know who?

S5: This thread is rather 1-ply (or 2-ply): perhaps we ought to move to 
    e-mail, Sven.



/*       And, as with Gods and men, the sheep remain inside their pen, 
               Though many times they've seen the way to leave...     
Adrian Redgers : redgers@sig.ee.ic.ac.uk : Neural Systems Lab, Elec. Eng.,
Imperial College, Exhibition Road, London SW7 2BT, UK : (071) 589 5111 x5212 */ 


