From newshub.ccs.yorku.ca!torn!utcsri!rpi!batcomputer!caen!spool.mu.edu!uwm.edu!ogicse!psgrain!percy!nosun!hilbert!max Thu Oct  8 10:10:21 EDT 1992
Article 7035 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.skeptic:20494 comp.ai.philosophy:7035
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!batcomputer!caen!spool.mu.edu!uwm.edu!ogicse!psgrain!percy!nosun!hilbert!max
From: max@hilbert.cyprs.rain.com (Max Webb)
Newsgroups: sci.skeptic,comp.ai.philosophy
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <1992Sep24.000850.6734@hilbert.cyprs.rain.com>
Date: 24 Sep 92 00:08:50 GMT
Article-I.D.: hilbert.1992Sep24.000850.6734
References: <1992Sep13.194856.21976@meteor.wisc.edu> <1992Sep17.181358.1828@Princeton.EDU> <1992Sep20.180454.4161@daffy.cs.wisc.edu>
Organization: Cypress Semiconductor Northwest, Beaverton Oregon
Lines: 34

In article <1992Sep20.180454.4161@daffy.cs.wisc.edu> tobis@xrap3.ssec.wisc.edu (Michael Tobis) writes:
>To apply reason to consciousness we must take consciousness
>to be axiomatic. This is a long way from taking it to be explained.

You have said this several times. Several times, both in email and
in public, I have asked you to explain what 'taking consciousness
to be axiomatic' means. What constraint does this place on models?
What predictions does it make? I know that I am conscious, and am
willing to take that as an axiom. I am not thereby transformed into
a substance dualist like yourself. 

Care to answer the question this time?

>|> In article <1992Sep13.194856.21976@meteor.wisc.edu> tobis@meteor.wisc.edu (Michael Tobis) writes:
>I am NOT arguing that rational thought should not be applied to phenomena
>of consciousness because there is no consciometer. I am arguing that
>the idea that consciousness can be explained in some objective way is
>at best profoundly premature (and I continue to suspect that it is
>undecideable). 

With what would you replace the assumption that consciousness can be
explained? The assumption that it _cannot_ be explained? Exactly what sort
of research program does that lead to? [Hint: none; you merely advise us
to give up and go home before we begin].

If you still think Searle's argument is convincing, please say so, and
we can air it out in public. I don't think it will stand up too well.

>I think we should move this discussion to comp.ai.philosophy where it
>belongs, btw.
>
>mt (<-- not an algorithm)

Max (<-- at least an algorithm)


