Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Strong AI and consciousness
Message-ID: <CzzpDK.C6A@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute-alter.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <3b35ln$h4s@mp.cs.niu.edu> <Czu6r3.36D@cogsci.ed.ac.uk> <3b5mj8$76v@mp.cs.niu.edu>
Date: Mon, 28 Nov 1994 18:15:20 GMT
Lines: 57

In article <3b5mj8$76v@mp.cs.niu.edu> rickert@cs.niu.edu (Neil Rickert) writes:
>In <Czu6r3.36D@cogsci.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In article <3b35ln$h4s@mp.cs.niu.edu> rickert@cs.niu.edu (Neil Rickert) writes:
>>>In <CzsCCC.3DF@cogsci.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>>>>In article <3b0176$hu8@mp.cs.niu.edu> rickert@cs.niu.edu (Neil Rickert) writes:
>
>>>>>When people disagree about objective matters, there are objective
>>>>>tests that can be made to settle the issue.  What objective tests do
>>>>>you propose for establishing consciousness?  Be sure to specify tests
>>>>>that would work equally for robots as for humans.
>
>>>>What happens if someone does not accept the objective test?
>>>>How is this different from the subjective case where you say:
>
>>>Strictly speaking, all language meanings are subjective, so there is
>>>no difference.  But I don't feel like delving into a lengthy
>>>subjectivity debate right now.
>
>>I just want to know how you square that with saying "when people
>>disagree about objective matters, there are objective tests that
>>can be made to settle the issue".  I realize that you've already
>>done something in that direction by saying "strictly speaking".
>
>I square the two statements by asserting that the meaning of
>"objective" is not what we sometimes presume.  We tend to presume that
>"objective" means, roughly speaking, a property of reality
>independent of human observers.  Let me refer to that as A-objective,
>with the A standing for absolute.  But in practice, when we make
>objective judgements, all we do is attempt to eliminate our personal
>subjectivity.  That leaves the likelihood that we are really talking
>about C-objective, where the C stands for cultural.  C-objectivity is
>simply the shared subjectivity of the culture.  The matter is made
>more complex because we may be members of several cultures.  For
>example, as well as being part of western culture, I am part of the
>culture of science, and the usenet culture.

Sounds reasonable enough to me.

>I claim that A-objectivity is impossible, and the best we can have is
>C-objectivity.  But I don't intend arguing the point right now.  The
>debate between epistemological relativists and absolutists has gone
>on for some time without being settled, and we don't need to bring it
>to c.a.p.

And I'm not planning to address it.

However, when you say

  When people disagree about objective matters, there are objective
  tests that can be made to settle the issue.  What objective tests do
  you propose for establishing consciousness?  Be sure to specify tests
  that would work equally for robots as for humans.

are you saying that there aren't any C-objective tests (right now)
or that there couldn't be?

-- jeff
