Newsgroups: comp.ai.genetic
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!bloom-beacon.mit.edu!gatech!swrinde!pipex!lyra.csx.cam.ac.uk!sunsite.doc.ic.ac.uk!dcs.gla.ac.uk!unix.brighton.ac.uk!bashful!am88
From: am88@unix.bton.ac.uk ( AABS )
Subject: Re: [Q] Robotic 'ants' for Pest control...
Message-ID: <1994Nov14.111807.17522@unix.brighton.ac.uk>
Sender: news@unix.brighton.ac.uk
Reply-To: am88@unix.bton.ac.uk
Organization: University of Brighton
References: <Cz2F6G.AFL@undergrad.math.uwaterloo.ca>
Date: Mon, 14 Nov 1994 11:18:07 GMT
Lines: 47

In article AFL@undergrad.math.uwaterloo.ca, mwtilden@math.uwaterloo.ca (Mark W. Tilden) writes:
>Marvin Minsky <minsky@transit.ai.mit.edu> wrote:
>>
>>As for the COG project of Brooks et al., I disagree with them about
>>the value of building the real-time hardware machinery.  My personal
>>opinion is that more would be learned by making simulated robots
>>operate in relatively simplified simulated worlds.  They object that
>>if you do this, you might overlook serious real world problems.  I
>>don't agree: in my view, it is not important precisely which kinds of
>noise you encounter -- or otherwise unpredictable friction effects,
>etc.  You'll run into the same basic cognitive problems whatever you
>>do, so you might as well introduce cheap, easy to compute types of
>>variation.  I'm not winning that argument, though.
>
>But isn't there the problem that computer life would then only evolve
>for computer environments?  Isn't the goal to see how to pull
>intelligence out of the box so we can test it personally for validity?

Isn't this begging the question of whether we can look at human intelligence in the
purely objective way you seem to assume? That is, we are not out of our own box.
Perhaps it would be illuminating to look at our own intelligence in the light of how intelligence behaves in other cages?

>Granted it might be cool to have a simulated creature on the other side
>of the screen that we could talk to, but so far as has been seen there's no
>evidence of intelligence beyond biological means.  Alife environments
>are progressive and neat but they are also too shallow dimensionally to
>emerge competence.
>
>Computer worlds are fine for us because we can extend our belief to
>encompass them, but without a physical ability, computers have no
>influence to exercise change by themselves.  
>
>You've gotta build robots, otherwise how can you ever *know*.

What difference does that make?

I thought that the debate over what intelligence is had concluded that although movement and coordination are difficult tasks, they are not the part of intelligence that is involved in consciousness. I would have thought that the particular platform is irrelevant.

Perhaps if we look at the brain as an object with properties (without trying to be too behaviourist!) that determine its mental and physical reactions to stimuli, then there is an isomorphism (in a sense) between all conscious beings: each will have different properties, depending on which platform it is based upon, but those properties will determine its behaviour in similar ways.

For instance, if we were to map the structures of the brain into a phase or pattern space, the mapping would differ according to the properties of each system, yet widely differing systems could map into the same regions of the phase space because of the ways their "processing structures" behave. Or perhaps it is not the location of the mapping in pattern space that counts (in this isomorphism) but rather the shape of its trajectory (the way its processes change). Wouldn't there then be no "real" dissimilarity between intelligences?
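The trajectory-shape idea can be made concrete with a toy sketch. Here two quite different "platforms" (the logistic map and the sine map, two chaotic maps chosen purely for illustration, not anything from the discussion above) are delay-embedded into a shared two-dimensional pattern space and compared by trajectory shape after normalisation. The embedding and the distance measure are my own illustrative choices:

```python
import math

def trajectory(step, x0, n):
    """Iterate a 1-D map from x0, returning the orbit as a list."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(step(xs[-1]))
    return xs

def embed(xs):
    """Delay-embed a scalar orbit into 2-D 'pattern space' points (x_t, x_{t+1})."""
    return list(zip(xs[:-1], xs[1:]))

def normalize(pts):
    """Rescale points to zero mean and unit spread, so only shape remains."""
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    s = math.sqrt(sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in pts) / n) or 1.0
    return [((p[0] - mx) / s, (p[1] - my) / s) for p in pts]

def shape_distance(a, b):
    """Mean pointwise distance between two equal-length normalized trajectories."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

# Two different "processing structures" with similar dynamics:
logistic = lambda x: 4.0 * x * (1.0 - x)
sine_map = lambda x: math.sin(math.pi * x)

ta = normalize(embed(trajectory(logistic, 0.2, 200)))
tb = normalize(embed(trajectory(sine_map, 0.2, 200)))

print(shape_distance(ta, tb))
```

In this picture, a small shape distance between two systems' normalised trajectories would be the (very loose) analogue of the isomorphism suggested above: the two maps are implemented differently but trace out similar regions of the shared pattern space.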

Would the current debate over the quantum properties of brains invalidate these ideas?
Any comments?

andrew matthews
