From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Mar  9 18:33:25 EST 1992
Article 4096 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Definition of understanding
Message-ID: <1992Feb27.223345.2965@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Feb25.175012.8924@oracorp.com>
Date: Thu, 27 Feb 1992 22:33:45 GMT

In article <1992Feb25.175012.8924@oracorp.com> daryl@oracorp.com writes:

>christo@psych.toronto.edu writes:
>> Searle is *giving* his opponents that a human could accomplish this
>> astounding feat (just as he gives them the possibility that a language
>> could be reduced to a finite set of rules; a matter which leads to all
>> sorts of confusion). The point is that *even under these improbable
>> conditions* -- conditions which work to the advantage of the
>> strong-AIist -- you can still show that the system has no
>> understanding.
>
>I disagree with you that Searle is giving anything to the Strong AI
>position by making these concessions. As Searle describes it, Strong
>AI is the philosophical position that any machine that "implements the
>right program" must understand in the same sense a human does. That
>is, Strong AI is logically in the form of an implication: If machine A
>implements the right program, then machine A understands. It isn't
>making a concession to Strong AI to assume the antecedent in order to
>explore the consequences.

You may be technically correct, but it seems to elude many people that
Searle is merely *assuming* the antecedent for the sake of argument, and
that the truth of this assumption is by no means assured.

- michael
