Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!galileo.cc.rochester.edu!ub!news.kei.com!bloom-beacon.mit.edu!gatech!howland.reston.ans.net!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Strong AI and consciousness
Message-ID: <Czu6r3.36D@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute-alter.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <3b0176$hu8@mp.cs.niu.edu> <CzsCCC.3DF@cogsci.ed.ac.uk> <3b35ln$h4s@mp.cs.niu.edu>
Date: Fri, 25 Nov 1994 18:45:02 GMT
Lines: 26

In article <3b35ln$h4s@mp.cs.niu.edu> rickert@cs.niu.edu (Neil Rickert) writes:
>In <CzsCCC.3DF@cogsci.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>>In article <3b0176$hu8@mp.cs.niu.edu> rickert@cs.niu.edu (Neil Rickert) writes:
>
>>>When people disagree about objective matters, there are objective
>>>tests that can be made to settle the issue.  What objective tests do
>>>you propose for establishing consciousness?  Be sure to specify tests
>>>that would work equally for robots as for humans.
>
>>What happens if someone does not accept the objective test?
>>How is this different from the subjective case where you say:
>
>Strictly speaking, all language meanings are subjective, so there is
>no difference.  But I don't feel like delving into a lengthy
>subjectivity debate right now.

I just want to know how you square that with saying "when people
disagree about objective matters, there are objective tests that
can be made to settle the issue".  I realize that you've already
done something in that direction by saying "strictly speaking".

-- jeff
