From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor Tue Mar 24 09:56:20 EST 1992
Article 4509 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: The Systems Reply I
Message-ID: <1992Mar17.173251.18462@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <BL1p0D.6II@world.std.com> <1992Mar14.182737.15329@psych.toronto.edu> <1992Mar14.213045.21776@mp.cs.niu.edu> <1992Mar16.224423.29809@psych.toronto.edu>
Date: Tue, 17 Mar 1992 17:32:51 GMT

In article <1992Mar16.224423.29809@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <1992Mar14.213045.21776@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:
>>
>> More to the point:
>>
>>	There can be no final convincing proof that strong AI is
>>	possible until there is an actual implementation.
>
>No.  This is wrong.  An implementation will *not* demonstrate that it has
>semantics (or understanding, or qualia, or whatever).  This is *not* a
>matter of empirical investigation, but of conceptual analysis. 
>
If I understand you correctly, you are saying that the property of 'having
semantics' is empirically unverifiable (in principle, not just at present), is
that right? If so, then the concept (semantics) does not belong in the realm
of science.

>- michael


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca


