From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!olivea!uunet!wupost!csus.edu!netcom.com!stas Wed Sep 23 16:54:20 EDT 1992
Article 6968 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!olivea!uunet!wupost!csus.edu!netcom.com!stas
From: stas@netcom.com (Stanislav Malyshev)
Newsgroups: comp.ai.philosophy
Subject: Re: Defining intelligence
Message-ID: <wd3ndxq.stas@netcom.com>
Date: 18 Sep 92 09:10:13 GMT
References: <exukjb.137.716066041@exu.ericsson.se> <1992Sep11.00857.16070@ms.uky.edu> <1992Sep13.174128.2843@zip.eecs.umich.edu>
Organization: Netcom - Online Communication Services (408 241-9760 guest)
Lines: 66


Hmm... it seems to me that looking at intelligence as the ability to solve
problems is fine, if you believe that a behavioral definition suffices.

It is conceivable that, with sufficient complexity, an AI program could behave
indistinguishably from a human, as far as problem solving goes.
Some cognitive scientists would argue that such a system is not conscious of
what it is doing, that it is merely following rules, however complex they
may be.

I personally do not agree with this view.  I think that world knowledge
databases, combined with plan segments, frame systems, and a way to
represent memories of sensory experience (sensory primitives to be
associated with other knowledge), all fed into a reasoning engine
(not a theorem prover, mind you), would constitute sufficient "awareness"
of the world to call the overall system "conscious" of its doings.
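
To make the shape of what I have in mind concrete, here is a rough sketch
in Python.  Everything in it (the names Frame, PlanSegment, SensoryRecord,
ReasoningEngine, and the toy is-a inference) is my own illustration, not a
description of any existing system:

# A toy rendition of the components above; everything is illustrative.

class Frame:
    # Declarative knowledge about a concept: slots and fillers.
    def __init__(self, name, slots):
        self.name = name
        self.slots = slots            # e.g. {"is-a": "bird", "can": "fly"}

class PlanSegment:
    # A remembered past experience: a situation, what was done, what happened.
    def __init__(self, situation, action, outcome):
        self.situation = situation
        self.action = action
        self.outcome = outcome

class SensoryRecord:
    # A sensory primitive (color, sound, movement) tied to a concept.
    def __init__(self, concept, modality, data):
        self.concept = concept
        self.modality = modality
        self.data = data

class ReasoningEngine:
    # Not a theorem prover: it draws plausible, defeasible conclusions
    # from whatever frames, plans, and sensory records it holds.
    def __init__(self):
        self.frames, self.plans, self.senses = {}, [], []

    def add_frame(self, frame):
        self.frames[frame.name] = frame

    def infer(self, concept, slot):
        # Toy inference: follow is-a links until some frame fills the slot.
        frame = self.frames.get(concept)
        while frame is not None:
            if slot in frame.slots:
                return frame.slots[slot]
            frame = self.frames.get(frame.slots.get("is-a"))
        return None

engine = ReasoningEngine()
engine.add_frame(Frame("bird", {"can": "fly"}))
engine.add_frame(Frame("penguin", {"is-a": "bird"}))
print(engine.infer("penguin", "can"))     # -> fly (wrongly, as it happens)

The penguin answer is wrong, of course, which is exactly where point 5 of
the list below comes in.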

I don't find anything wrong with the semantic content of what this
program would reason with, since objects and higher-level concepts would
be tied into a framework of real knowledge about the world, the same
kinds of knowledge that humans have.  There would be
1.  past experiences (plan segments) to draw upon
2.  procedural knowledge (recipes, frames, scripts)
3.  world knowledge bases (such as CYC) - declarative knowledge about 
	the world
4.  sensory records to associate with concepts such as movement, sound,
	color, etc.
5.  an ability to make sensible inferences from all this, and the ability
	to learn by changing the data.
If, for example, we learn that some inference doesn't seem to be valid,
we can record information to that effect and thus be less likely to draw
the same inference in a similar situation in the future (one way to do
this is sketched after this paragraph).
So, if the semantics that associate the information we have with the world
produce consciousness, then a system privy to our information and our
semantics should be conscious as well, at least if semantics is the sole
component of consciousness.
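
Here is how point 5, learning by changing the data, might look.  The
exception set and the confidence number are my own invention, just one
way to cash out "less likely to infer":

# One way to "learn by changing the data": when an inference turns out to
# be invalid, store that fact, and trust the rule less from then on.
# Purely illustrative.

class DefeasibleRule:
    def __init__(self, name, condition, conclusion):
        self.name = name
        self.condition = condition        # predicate over a situation
        self.conclusion = conclusion
        self.confidence = 1.0
        self.exceptions = set()           # situations where the rule failed

    def applies(self, situation):
        return self.condition(situation) and situation not in self.exceptions

    def record_failure(self, situation):
        # "Record information to that effect": keep the counterexample
        # around, and trust the rule a little less overall.
        self.exceptions.add(situation)
        self.confidence *= 0.5

birds_fly = DefeasibleRule("birds-fly",
                           condition=lambda s: "bird" in s,
                           conclusion="it can fly")

situation = frozenset({"bird", "penguin"})
if birds_fly.applies(situation):
    print(birds_fly.conclusion)       # -> it can fly (an invalid inference)

birds_fly.record_failure(situation)   # we observe that it cannot
print(birds_fly.applies(situation))   # -> False: we won't infer it again

Note that the corrected behavior lives entirely in the data the engine
reasons over; no new code was written to handle penguins.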

All in all, it seems to me that a sufficiently complex system can make use
(with whatever degree of success) of the same kinds of information that we
humans use.
If you accept the view that it is the interaction of an intelligent entity
with its universe that matters most, you should not care whether
the current (ever-shifting) definition of consciousness classifies the entity
as conscious or not, since that classification does not affect its behavior.
As long as the processes within the entity produce the behavior we associate
with intelligence, we are done.  AI is solved.

Consider, again, whether the goal of AI is to fit some arbitrary definition
of intelligence, consciousness, and what have you, or to produce behavior
that we, comparing it with our own, consider reasonable and intelligent.


Cheers,

Stan

p.s.
My views are biased, I must warn you, against the non-constructive, stifling
rhetoric in which cognitive science securely resides - that means you,
John Searle, and the like-minded.  If you'd like to hear some strong
arguments against your proclamations, feel free to mail me.
-- 

-------------------------------------------------------------------------------
Stan Malyshev		|    Open up the windows and let the fresh air out,
stas@soda.berkeley.edu	|    said the television to the shackled children..
stas@netcom.com		|		- King Missile
-------------------------------------------------------------------------------