From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sdd.hp.com!hp-cv!ogicse!pdxgate!dehn!erich Mon May 25 14:04:55 EDT 1992
Article 5599 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sdd.hp.com!hp-cv!ogicse!pdxgate!dehn!erich
From: erich@dehn.mth.pdx.edu (Erich Boleyn)
Newsgroups: comp.ai.philosophy
Subject: Re: Quantitative measure of Intelligence
Message-ID: <erich.705550812@dehn>
Date: 11 May 92 02:20:12 GMT
Article-I.D.: dehn.erich.705550812
References: <erich.704535714@dehn> <1992May4.012153.12979@ntuix.ntu.ac.sg>
	<1992May4.012813.13154@ntuix.ntu.ac.sg>
Sender: news@pdxgate.UUCP
Lines: 130


bill wrote:
: In article <erich.704535714@dehn> erich@dehn.mth.pdx.edu 
: (Erich Boleyn) writes:
: >
: >   There have been several attempts to use information density to
: >measure intelligence level.  
: >
:   Since the highest information density is achieved by total
: randomness, this does not seem like a very promising approach.

   Well, I didn't claim that it was a linear scale that was used...

   This is a little *too* off-the-cuff, but one could imagine a midpoint
between no information and infinite information (fixed and random) where
there must exist mappings of a certain complexity level.  The interesting
question here is where in these mappings (over time, of course)
"structural" or "computational" complexity is maximized.  I don't claim
that this has any actual relevance to anything that might be useful, just
a thought ;-).
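   (A rough sketch of that intuition, in Python and not from the original
thread: zlib's compressed size can serve as a crude stand-in for
information density.  A constant string and a random string sit at the two
extremes, and a structured-but-nontrivial string lands in between, which is
roughly where the interesting mappings would live.)

```python
# Crude proxy for information density: zlib compressed size.
# zlib exploits repeated structure, so a constant string compresses to
# almost nothing, a patterned string compresses partially, and a random
# string barely compresses at all.
import random
import zlib

random.seed(0)
n = 4096
fixed      = b"a" * n                                        # "no information" end
structured = bytes((i * i) % 251 for i in range(n))          # deterministic pattern
rand       = bytes(random.getrandbits(8) for _ in range(n))  # "random" end

for name, s in [("fixed", fixed), ("structured", structured), ("random", rand)]:
    print(name, len(zlib.compress(s)))
```

The compressed sizes come out ordered fixed < structured < random, which is
the one-dimensional "scale" gestured at above -- and also shows why raw
information density alone ranks pure noise highest.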

: Information theory contains no teleological element, so by
: itself it is inadequate.
: 
:   On the other hand, dynamical systems theory does contain
: a quasi-teleological element (e.g. the notions of stability
: and of attractors).  Maybe it would be possible to combine
: dynamical systems theory and information theory to come up
: with some useful ideas . . .
: 	-- Bill

   Is a teleological element all that useful, though?  From the point of
view of a system whose "intentional" (this term is perhaps used improperly
here) underpinnings/interpretation are very similar to those of
mammals/primates or even humans, this is a good point.  What about systems
which don't correspond to something that is very recognizable from a human
standpoint, however?  This is where the biggest problem with the Turing
Test comes in, of course.  It *assumes* the existence of a similar system
of interpretation on the part of both parties.  In essence, it is a test to
see if a "copy" functions correctly, with the template being the general
one of "human", perhaps with modifications thrown in for those SF readers
among us ;-).  Of course, the irony of this is that most of what we'd
really want in functional AI systems would not be precisely human copies,
but more a kind of semi-autonomous complement of skills/abilities.

   Generalizing the teleological elements to the structural (and/or
fundamental) levels of "operation" of a system could (?) clear away
some of those problems...  you have an interesting point.  The "object"-
metaphor may dissolve past certain levels of complexity as a fine instrument
of understanding, however.  Who says that all of the activity is taking
place in macroscopic patterns?  The presence of recognizable self-sustaining
behaviors may hint that they are present, though.  It certainly deserves a
shot.
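   (Bill's suggestion can at least be toy-tested.  Here's a sketch, in
Python and not from the thread: iterate the logistic map x -> r*x*(1-x), a
standard dynamical system, reduce the trajectory to a 0/1 symbol stream,
and measure the Shannon entropy of the symbols.  A parameter with a stable
fixed point yields essentially zero entropy, while the fully chaotic
r = 4.0 yields close to one bit per symbol.)

```python
# Toy combination of dynamical systems and information theory: binary
# symbol entropy of logistic-map trajectories at different parameters.
import math

def symbol_entropy(r, x=0.3, burn=100, n=5000):
    """Entropy (bits/symbol) of the stream s_k = [x_k > 1/2] under x -> r*x*(1-x)."""
    for _ in range(burn):          # discard the transient
        x = r * x * (1.0 - x)
    ones = 0
    for _ in range(n):
        x = r * x * (1.0 - x)
        if x > 0.5:
            ones += 1
    p = ones / n
    if p in (0.0, 1.0):            # a constant stream carries no information
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

print(symbol_entropy(2.8))  # stable fixed point: ~0 bits
print(symbol_entropy(4.0))  # chaotic regime: close to 1 bit
```

Sweeping r between those two values and watching where the entropy of the
symbol stream changes would be one naive way of asking the "where is
complexity maximized" question from earlier in this post.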


eoahmad@ntuix.ntu.ac.sg (Othman Ahmad) writes:

>In article <erich.704535714@dehn> erich@dehn.mth.pdx.edu (Erich Boleyn)
>wrote:
>:    The test proposed by Turing was vague for a specific reason.  The reason
>: is of course that people judge whether another person is "smart", "likable",
>: etc. by what they perceive about them on several "levels" (if I can get away
...[deleted]...
>:    How can you separate Intelligence from "throughput", so to speak?  Unless
>: we have a way of measuring the brain directly, say via a kind of real-time
>: pattern scanning (speaking of non-invasive methods, of course), that
>: "throughput" is in a sense all we have.  Sure, knowing a lot about the
...[deleted]...
>: but sometimes you just don't have any better tools to work with.

>You can use Information theory to determine the confidence level of our
>measurement.  This is called the sampling problem of measurement, and it is
>due to the lack of full access to the object under test.  There must be a
>machine, like a measuring tape, that allows us full access to the item under
>test.  For intelligence, we have a computer, where we can actually study the
>internal workings of its program and compare it against another computer for
>calibration, and against humans for actual measurement.
...[deleted]...
>: only slightly when working on those tasks, while the unskilled ones' brain
>: activity shot up to enormous levels, and they rarely even completed it
>: correctly.  There have been cognitive studies showing that, as far as
>: can be determined, chess masters, for example, use far *fewer* steps
>: when thinking about what moves to make than do amateurs.  It seems to be

>These data actually support my theory.  Please give me detailed sources for
>these experiments so that I could quote them.

   I am not sure of the references offhand.  I will look them up, however,
and get back to you on this.

>	These experiments verify my notion that knowledge is different from
>intelligence.  The physicists are knowledgeable people, so they need not
>consume much intelligence resource, whereas a less knowledgeable person
>would need to think more (consume more intelligence resource).
>	My "letter" actually predicts, without any proof, that there is a
>relationship between intelligence and energy usage, because the theory
>treats intelligence as a resource that needs to be utilised.  You may have
>a large capacity for intelligence, but if you do not utilise it then you
>are stupid.  The experiment failed to isolate knowledge from intelligence;
>instead it only measures the ability to solve problems.  The brain activity
>measurement is the best indicator of intelligence usage.
>	The chess masters are more knowledgeable than amateurs, so they need
>less intelligence to solve a problem, because intelligence is a function of
>the number of steps you think through, again as predicted by my theory.

   I guess what I wanted to say in the last message (and got sidetracked
by my own overzealousness ;-) is this: what is the theory?

   As mentioned in both this and the last message, vague ideas abound, some
very similar to the one you put forward, but as to being full-fledged
*theories* (I guess I mean to the level of them actually being useful for
something besides intuitive thought), that is another matter.

   Most of what you have said so far has little substance in terms of
being able to measure anything.  Granted, intuitive ideas using recent
notions of the adaptive growth of neural systems, of neural systems seeming
to minimize effort over time for repeated operations, of an
information-theory-like conclusion that the system could be a kind of
minimizer on the level of "circuit-paths", etc., could all be said to be
vaguely likely to be true (given certain constraints).  I mean, there are
rules of thumb in neurology, in neural-net research into dynamic nets, and
in biological systems (neurodevelopment being one of them, of course ;-)
which hint at things like this.

   Erich

--
             "I haven't lost my mind; I know exactly where it is."
    / --  Erich Stefan Boleyn  -- \       --=> *Mad Genius wanna-be* <=--
   { Honorary Grad. Student (Math) } Internet E-mail: <erich@dehn.mth.pdx.edu>
    \  Portland State University  /      WARNING: INTERESTED AND EXCITABLE


