From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!mucs!fs1.mcc.ac.uk!zlsiida Thu Apr 30 15:23:15 EDT 1992
Article 5318 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!mucs!fs1.mcc.ac.uk!zlsiida
From: zlsiida@fs1.mcc.ac.uk (dave budd)
Newsgroups: comp.ai.philosophy
Subject: re re penrose
Message-ID: <zlsiida.64@fs1.mcc.ac.uk>
Date: 28 Apr 92 09:15:37 GMT
Sender: news@cs.man.ac.uk
Organization: Manchester Computing Centre
Lines: 12
Originator: netnews@uts.mcc.ac.uk

You're calling neural net architectures heuristic as opposed to algorithmic?
I read him as saying: computers are algorithmic; the halting problem is
unsolvable algorithmically; therefore computers can't think.  My problems with
this are threefold.  First, his definition of "computer" is much tighter than
mine - I'll allow an entire system, including multiple networked machines of
varying architectures.  Second, he resorts to the human ability to 'stand
back' when something like the halting problem is encountered, saying the
algorithm can't do this, which I see as a false limitation on the algorithm
(he won't let it be self-modifying, and he ignores the multi-tasking ability
of operating-system algorithms).  Third, he is incredibly vague about what
thinking, awareness, consciousness etc. actually are.
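For reference, the halting-problem step the argument leans on is the classic
diagonal construction. A minimal sketch (my own illustration, not anything from
Penrose or the post; the function names are made up): given any claimed
halting-decider, build a program that does the opposite of whatever the decider
predicts about it, so the decider must be wrong somewhere.

```python
# Sketch of the diagonal argument behind the halting problem's
# algorithmic unsolvability. All names here are illustrative.

def make_contrary(halts):
    """Given a claimed decider halts(prog, arg) -> bool, build a
    program that does the opposite of whatever halts predicts."""
    def contrary(prog):
        if halts(prog, prog):
            while True:          # predicted to halt, so loop forever
                pass
        return "halted"          # predicted to loop, so halt at once
    return contrary

# Case 1: a decider that always answers "halts".
always_yes = lambda prog, arg: True
c1 = make_contrary(always_yes)
# always_yes claims c1(c1) halts, but by construction c1(c1) would
# loop forever, so this decider is wrong (we don't actually run it).

# Case 2: a decider that always answers "loops".
always_no = lambda prog, arg: False
c2 = make_contrary(always_no)
print(c2(c2))   # "halted" - contradicting always_no's prediction
```

The same trap closes on any decider, however clever: feed it its own contrary
program and it mispredicts. Whether a human 'standing back' escapes this is
exactly what the argument above disputes.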
I expect to see him proved wrong within 50 years.
