From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!wupost!waikato.ac.nz!comp.vuw.ac.nz!canterbury.ac.nz!cosc.canterbury.ac.nz!chisnall Mon Jan  6 10:30:18 EST 1992
Article 2469 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!wupost!waikato.ac.nz!comp.vuw.ac.nz!canterbury.ac.nz!cosc.canterbury.ac.nz!chisnall
From: chisnall@cosc.canterbury.ac.nz (The Technicolour Throw-up)
Newsgroups: comp.ai.philosophy
Subject: Re: More replies to jbaez
Message-ID: <1992Jan2.143651.3317@csc.canterbury.ac.nz>
Date: 2 Jan 92 01:36:51 GMT
References: <61194@netnews.upenn.edu>
Reply-To: chisnall@cosc.canterbury.ac.nz
Organization: Computer Science,University of Canterbury,New Zealand
Lines: 10
Nntp-Posting-Host: cosc.canterbury.ac.nz

From article <61194@netnews.upenn.edu>, by weemba@libra.wistar.upenn.edu (Matthew P Wiener):
> Edelman, eg, took the outside world as the deciding factor against
> the computability of his models.

Could you elaborate on Edelman's reasoning?  I can understand using
interactions with an outside world as an argument against real-life
predictability, but how does he get non-computability from it?
--
Just my two rubber ningis worth.
Name: Michael Chisnall		email: chisnall@cosc.canterbury.ac.nz