Newsgroups: comp.ai.philosophy
From: ohgs@chatham.demon.co.uk (Oliver Sparrow)
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!demon!chatham.demon.co.uk!ohgs
Subject: Re: definitions of `Strong AI'
References: <39vk6e$2m3@toves.cs.city.ac.uk> <3a2pg3$7gh@percy.cs.bham.ac.uk> <HPM.94Nov13144926@cart.frc.ri.cmu.edu>
Organization: Royal Institute of International Affairs
Reply-To: ohgs@chatham.demon.co.uk
X-Newsreader: Demon Internet Simple News v1.27
Lines: 45
Date: Tue, 15 Nov 1994 14:09:38 +0000
Message-ID: <784908578snz@chatham.demon.co.uk>
Sender: usenet@demon.co.uk

All we can know is models of what is or what may be. Some models reflect
hard realities and these are pursued by reality miners, who scrape the     
mother lode from the ore of truth. Others - such as the social sciences -   
have no such hard and tangible exogenous benchmark: they are self-  
referential, made up from the interaction of things which are themselves    
made up of interactions. They are not reducible to their parts. They are    
real in that they influence what happens.

A lettuce grows in my garden. Tell me why it does so: what would an answer
look like? One could speak of how soil came to be, from cosmogony to clay,
and of biophysics and biochemistry. These terms at least partly map together,
although life is hard to fit into thermodynamics, and embodied information
systems are hard to fit into a world of 4 (12?) dimensions and 3 (4? 99?) forces.
How lettuces came to be is harder to discuss in these terms, although there
are perfectly good ways of talking about this. What are gardens? How is it
that "I" own this one? Why do people eat lettuces (or wear them on their
heads)? Why do I grow lettuces rather than buy them? Answers to these
pressing issues of life and death do not map into other terms of reference.

This matters for the issues which are addressed by this newsgroup. We are
discussing (supposedly) how systems can be enabled or engineered such that
machines arise that can usefully operate on concepts of cabbages, Kings (and
lettuces). That is to say, to use partial, incomplete but helpful models
*which they must derive for themselves*, for this is what learning is all
about.

I find that I drop unread threads which try to show how Godel can or cannot
"disallow the existence of a logically complete algorithm for awareness", and
so on and therefore blah blah and etcetera. Equally, discussions of whether
awareness is quantum based or the product of tiny invisible mice with punch
cards in their paws are not vastly entertaining. I do not think that
"awareness" has much directly to do with what most people want from AI, which
is a set of smart and interpretive structures which can follow a given goal
over a complex, noisy and information-rich surface. That such a system would
start to seek the vote, to save for its old age and the like would be
positively counter-productive. Aware machines are a distraction, both at the
level of discussion and in the practical issues which are at hand.

(On which note, this small voice is off to climb in the Himalayas for five 
weeks, so farewell for now).

_________________________________________________

  Oliver Sparrow
  ohgs@chatham.demon.co.uk
