From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!caen!garbo.ucc.umass.edu!m2c!wpi.WPI.EDU!cs!rdouglas Thu Jan 16 17:22:26 EST 1992
Article 2788 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1876 comp.ai.philosophy:2788
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!caen!garbo.ucc.umass.edu!m2c!wpi.WPI.EDU!cs!rdouglas
From: rdouglas@cs.wpi.edu (***** Rob Douglas ****)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Behavior in the Bart Room (repost)
Message-ID: <1992Jan16.180819.13756@wpi.WPI.EDU>
Date: 16 Jan 92 18:08:19 GMT
References: <X39JeB1w164w@depsych.Gwinnett.COM> <1992Jan15.190843.1636@wpi.WPI.EDU>
Sender: news@wpi.WPI.EDU (News)
Reply-To: rdouglas@cs.wpi.edu (***** Rob Douglas ****)
Organization: Worcester Polytechnic Institute
Lines: 94
Nntp-Posting-Host: maxine.wpi.edu

In my last post, it seems that much of what I said got eaten by some
editors, so I am reposting (without the problems, I hope): 

In article <1992Jan15.190843.1636@wpi.WPI.EDU>, rdouglas@cs.wpi.edu (***** Rob Douglas ****) writes:
|> In article <X39JeB1w164w@depsych.Gwinnett.COM>, rc@depsych.Gwinnett.COM (Richard Carlson) writes:
|> |> In article <5939@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
|> |> >>>Actually, it would _waste_ a lot of time arguing about definitions
|> |> >>>of understanding.
|> |> >
|> |> >I stand by the claim that it will be a waste of time.  A tremendous
|> |> >waste of time.  Virtually every net debate about definitions confirms
|> |> >this, in my opinion.
|> |> 
|> |> 
|> |> Even a human being can be "programmed" to appear to know more than
|> |> s/he does.  But sometimes if you spend enough time with that
|> |> person you suddenly have the insight, "Hey, this guy is dumb as a
|> |> post!  He doesn't understand anything!"
|> |> 
|> |> Let us suppose that Bart Simpson was actually twins.
|> |> 
|> 
|> (long story line deleted, summary: twin gets programmed to try to fool you into
|> believing he is a prodigy.)
|> 
|> |>  Could
|> |> you figure out in, say, a half an hour that little Bart's mind was
|> |> mediocre -- turbocharged with good instruction and support, but
|> |> fundamentally and essentially mediocre?  I think so.  Wouldn't the
|> |> same apply to the Chinese Room, the Mathematics Room, the Group
|> |> Theory Room, and all the other rooms that have been hypothesized?
|> |> Didn't the Turing test presuppose a fairly lengthy
|> |> cross-examination?  After all, even Eliza can fool you for a few
|> |> minutes.
|> |> 
|> 
|> On understanding:
|> 
|> 1)  I alone can decide whether or not I understand, and what it is that I do
|> understand.  (This has generally been agreed upon by everyone submitting to
|> this newsgroup, I believe (understand).)
|> 
|> 2)  I have no guaranteed way to determine if another person (thing,conscious
|> being, etc.) understands something I am trying to explain or discuss. 
|> However, I can be convinced that another understands if, in the course of our
|> discussion, my own understanding of the subject is (in my opinion)
|> reinforced or refuted.  In other words, if my understanding has increased.
|> 
|> It has already been assumed that I can tell whether or not I can understand.
|> 
|> It has been pointed out that one can discover that, in the past, he
|> believed he understood something which, in fact, he did not.  I
|> maintain that realizing you did not understand in the past does not mean you
|> did not understand something; you just understood it differently than you
|> do now.  People's understandings change.
|> 
|> My point is this:  if you meet someone with whom you hold a conversation, and
|> the conversation allows the person (thing, conscious being, etc.) to pass #2
|> above, then, at that time, you must credit it with understanding, until a
|> conversation which does not pass #2 is found.  (By the way, you will notice
|> that this is similar to the Turing test.  For some reason, people seem to
|> believe that the Turing test is not a complete enough test for
|> understanding.  I suggest you think about that.  To be able
|> to relate ideas to other ideas and rearrange them syntactically in order
|> to explain what they mean would really require a very complicated system. 
|> This is well beyond the scope of any Eliza-like program.)
|> 
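As an aside, the sort of purely syntactic shuffling Eliza does is easy to sketch. The following is a minimal modern reconstruction of the idea (my own illustration, not Weizenbaum's actual program, and the rules are invented): a surface pattern is matched, pronouns are swapped word-by-word, and the fragment is echoed back inside a canned question. No representation of meaning is involved anywhere.

```python
import re

# Eliza-style rewrite rules: a surface pattern and a response template.
# These two rules are hypothetical examples, not the historical script.
RULES = [
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "How long have you been {0}?"),
]

# Word-for-word pronoun reflection, the only "transformation" performed.
PRONOUNS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(text):
    # Swap first-person words for second-person ones; leave the rest alone.
    return " ".join(PRONOUNS.get(w.lower(), w) for w in text.split())

def respond(sentence):
    for pattern, template in RULES:
        m = pattern.search(sentence)
        if m:
            return template.format(reflect(m.group(1)))
    return "Tell me more."  # default when no pattern matches

print(respond("I feel that my twin understands me"))
# -> Why do you feel that your twin understands you?
```

The point of the sketch is how little machinery it takes: the program never relates one idea to another, it only substitutes strings, which is why it cannot pass criterion #2 above for long.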
|> Understanding cannot be universally determined.  It is a fleeting thing.  But
|> as long as someone can allow us to increase our own understanding, why would
|> you care whether he is credited with understanding?  It is a relative
|> judgment, which may be viewed differently from different points of view.
|> 
|> 
|> On a slightly different note: I seem to remember having seen an argument that
|> claimed to prove that the human mind was more powerful than a Turing machine.
|>  It essentially stated that a human being can solve the halting problem, and
|> we all know that a Turing machine cannot, so a human is more powerful. 
|> This is not true.  In order to solve the halting problem, the solver has to be
|> guaranteed to give an answer to every yes/no question.  I know of at least one
|> question which no person is yet guaranteed to answer correctly yes or
|> no:  Is the problem <Is P = NP?> solvable?  No person can
|> tell you the answer.  Therefore, humans cannot solve the halting problem.
|> 
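The classical argument behind the halting problem's undecidability can be made concrete in a few lines. This is my own sketch of the standard diagonalization, not something from the quoted posts: given *any* claimed halting decider, one can mechanically construct a program that the decider misjudges.

```python
# Diagonalization sketch.  `decider` is any function that takes a
# zero-argument callable and claims to return True iff it would halt.
def make_counterexample(decider):
    def g():
        if decider(g):
            while True:  # decider said "halts" -> loop forever
                pass
        return           # decider said "loops" -> halt immediately
    return g

# Any fixed decider is wrong on its own counterexample.  Try one that
# always answers "never halts":
always_no = lambda f: False
g = make_counterexample(always_no)
g()  # halts immediately, contradicting the decider's verdict
```

A decider that instead always answered "halts" would be refuted the opposite way, since its counterexample loops forever. The construction makes no assumption about what the decider is, mechanical or human, which is why the "humans can solve the halting problem" premise needs humans to be guaranteed an answer to every such question.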

To summarize, I don't believe there exists an _objective_ definition of
understanding.

-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~  Rob Douglas                         |  email:                       ~ 
~  AI Research Group                   |       rdouglas@cs.wpi.edu     ~
~  Worcester Polytechnic Institute     |  Fuller Labs Room 239         ~
~  Computer Science Department         |  (508) 831-5005               ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


