From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Mar  9 18:35:25 EST 1992
Article 4282 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Definition of Understanding
Organization: Department of Psychology, University of Toronto
References: <1992Mar4.151700.15282@gpu.utcs.utoronto.ca> <1992Mar4.205642.26955@psych.toronto.edu> <1992Mar5.144815.11531@gpu.utcs.utoronto.ca>
Message-ID: <1992Mar5.200322.219@psych.toronto.edu>
Date: Thu, 5 Mar 1992 20:03:22 GMT

In article <1992Mar5.144815.11531@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>In article <1992Mar4.205642.26955@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>>In article <1992Mar4.151700.15282@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>
>>>Please note that to answer the questions in Chinese, the CR has to have a
>>>database of information about the objects in the story (as the SHRDLU
>>>program uses). If you want the CR to be able to draw a hamburger, you have
>>>to agree that the database should also include visual information (just as
>>>you have such information; otherwise you would not be able to draw a tree,
>>>agreed?). Then, when encountering the word 'hamburger' in the story, the CR
>>>would access all the info it had about hamburgers, including what it looks
>>>like. Do you see any problem here?
>>
>>Nope, except for the implicit assumption that the way in which a hamburger
>>looks would be represented in a fashion that the man in the CR could
>>understand (can *you* read a videodisc?).
>>
>>The CR may know what a hamburger looks like.  The man *still* doesn't
>>know what Chinese symbol refers to hamburger.
>>
>Why care?

Because the issue in question is whether or not the CR process generates
understanding in the *man*.

>The man is only a part of CR.

This is an odd usage of the term "part," given that the whole CR is contained
*inside* him.

> Do you insist that your cortex (or whatever other
>part of the brain) knows what a hamburger looks like? If the _CR_ knows
>what a hamburger looks like (you seem to allow such a possibility) that's good
>enough.

This is merely equivocating on the word "know".  My encyclopedia has a 
picture of a hamburger in it, but it seems ridiculous to say that the
encyclopedia "knows" what a hamburger looks like.  

>One of the problems of the CR construct is that it sweeps a lot of important
>things under the carpet. Searle's argument leaves an unsuspecting person with
>the impression that only two things count: the rule book (the program) and
>the man (the executing agent). Searle mentions the database of information
>which the program uses only at the very beginning. As the discussion above
>(and the postings which led to it) indicates, the content of the database is
>crucial to the issue of what degree of understanding we can expect from the
>CR. Additionally, there is the problem of what happens when the CR reads the
>story. Searle chooses to talk about 'slips of paper', creating by this choice
>of words the impression that this is totally unimportant. He probably thinks
>himself that it is unimportant, showing yet again his ignorance of the way
>computers work. When a computer (running the SHRDLU program, or the CR) reads
>the original story, it has to analyze it in such a way that when the question
>comes in it is able to answer it. It could not answer the question without
>reading the story first! In other words, reading the story puts the computer
>into a different state, one in which it is able to process incoming
>information (the question) in a way that is different from before. Those
>'slips of paper' cannot be thrown into the wastebasket; they are a crucial
>part of the system, just as the database is.
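[The state-change point quoted above can be sketched with a toy program.
This is a hypothetical illustration of my own with invented names; it is
not SHRDLU, and nothing in Searle's CR specifies such a mechanism:]

```python
# Toy illustration: "reading the story" deposits facts into a database,
# and only that changed state lets the program answer a later question.

def read_story(story, database):
    """Parse simple 'X is Y' sentences and store them as facts."""
    for sentence in story.split("."):
        parts = sentence.strip().split(" is ")
        if len(parts) == 2:
            subject, attribute = parts[0].strip(), parts[1].strip()
            database[subject] = attribute

def answer(question, database):
    """Answer 'What is X?' by looking up the stored fact."""
    topic = question.removeprefix("What is ").rstrip("?").strip()
    return database.get(topic, "I don't know")

db = {}
# Before reading the story, the question cannot be answered:
print(answer("What is a hamburger?", db))   # I don't know

# Reading the story changes the system's state...
read_story("a hamburger is a sandwich of ground beef.", db)

# ...and only then can the question be answered:
print(answer("What is a hamburger?", db))   # a sandwich of ground beef
```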

All of the above is *completely* irrelevant for the CR example.  The actual
architecture used, the addition of sensors, etc., makes no *principled*
difference to the argument.  See the original BBS article.

- michael
