Newsgroups: comp.ai.games
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!cs.utexas.edu!utnut!wave.scar!93funkst
From: 93funkst@wave.scar.utoronto.ca (FUNK  STEVEN LESLIE,,Student Account)
Subject: Re: Chess, Tictactoe, and Checkers, Oh My!
Message-ID: <D7313L.4tv@wave.scar.utoronto.ca>
Sender: usenet@wave.scar.utoronto.ca (news owner)
Nntp-Posting-Host: wave.scar.utoronto.ca
Reply-To: 93funkst@wave.scar.utoronto.ca
Organization: University of Toronto - Scarborough College
References: <SMISHRA.95Apr14162929@eagle.acns.nwu.edu>
Date: Sat, 15 Apr 1995 15:16:33 GMT
Lines: 45

In article <SMISHRA.95Apr14162929@eagle.acns.nwu.edu>, smishra@eagle.acns.nwu.edu (Sunil Mishra) writes:
> In article <D6zxIx.H02@wave.scar.utoronto.ca> 93funkst@wave.scar.utoronto.ca (FUNK  STEVEN LESLIE,,Student Account) writes:
> 
	[stuff I wrote deleted]
> 
> Before saying that one should try to represent consciousness of the mind,
> one has to consider what consciousness is, and if it is even useful for
> most purposes.
> 
> Consider most of the things you do. Is seeing conscious? Is reading
> conscious? I could go on and give a bunch of other examples. The point is
> that the conscious mind is just a window to the unconscious, where most of
> the real work is done.
> 
> Consciousness is great for some things, though, which require deliberate
> effort. Examples include logic and mathematics, two things that are best not
> done using cognitive models but by brute search or other computational
> models.
> 
> As for your problems with current research, you do have some valid
> complaints. The difficulty is precisely formulating the problem of
> cognition into a form amenable to a solution. Right now we are just trying
> to figure out what the problem is. Trying to reverse engineer millions of
> years of evolution is not easy.
> 
> Sunil

Hi,

	My point is that any model of cognition must eventually lead to a model of consciousness.  If you're going to loosen your criteria and say that it doesn't, then why not loosen them further and say that you don't really have to model cognition (or even a single component of cognition) very well at all?  My point might best be made by a quick critique of logic.

	I don't think many will argue with the claim that a major goal of logic is to produce a consistent formal system.  Goedel's incompleteness theorem says only that we cannot prove consistency, not that the system is inconsistent, so we are still left with a big push towards consistency.  Consciousness itself is inconsistent, so you immediately have a problem.  People like Penrose say that the brain follows the consistent laws of physics but the mind is inconsistent, so there must be some special component; Penrose advocates quantum mechanics.

	But consider the real world, in particular the twin paradox.  If we apply a cognitive-level description to the world, we might get a statement like "all people born at the same time are the same age".  But if we send one twin off at near light speed for some time, relativity demands the statement "not all people born at the same time are the same age".  So we're left with P and ~P: the consistent laws of physics produce an inconsistent cognitive-level description of the world.  What concept does logic need to capture this?  Locality.  Local consistency may allow global inconsistency, but the current framework of symbolic logic does not allow for that.
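	To make the twin arithmetic and the locality point concrete, here's a toy Python sketch (my own illustration, nobody else's; the 0.8c figure and the little consistency check are just assumptions of the example, not anything from the thread):

import math

def traveler_age(earth_years, v_fraction_of_c):
    # Proper time for the traveling twin (special relativity):
    # tau = t * sqrt(1 - v^2/c^2)
    return earth_years * math.sqrt(1.0 - v_fraction_of_c ** 2)

# Twins born at the same time; one travels at 0.8c for 10 Earth years.
stay_home = 10.0
away = traveler_age(stay_home, 0.8)   # 6.0 years
print("stay-home twin: %.1f yr, traveler: %.1f yr" % (stay_home, away))

# P = "all people born at the same time are the same age" holds inside
# each local frame, yet comparing frames yields ~P.
frame_earth  = {"P"}           # locally consistent
frame_rocket = {"P"}           # locally consistent
global_view  = {"P", "~P"}    # naive union across frames

def consistent(claims):
    # A set of claims is inconsistent if it contains both X and ~X.
    return not any(("~" + c) in claims
                   for c in claims if not c.startswith("~"))

print(consistent(frame_earth), consistent(frame_rocket),
      consistent(global_view))   # True True False

	Each frame's description is consistent on its own; only the naive global union blows up.  That's all I mean by locality.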

	I go on to bring in connectionism, dismiss it, and present my own alternative.  The point is that each extraction from the Platonic world (to use Penrose's terminology) brings its own baggage.  In the case of logic it becomes obvious that this is a serious problem for more complex models of cognition.  Applied to AI, it means that we can only do what our tools allow us to.

2 more cents worth
Steve

PS:  if anybody's interested, I can send them a first draft of my essay.  It's not pretty, but I think it makes some good points.
