From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!tdatirv!sarima Mon Mar  9 18:35:16 EST 1992
Article 4268 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence and Understanding
Message-ID: <472@tdatirv.UUCP>
Date: 4 Mar 92 21:06:23 GMT
References: <1992Feb29.080019.9272@ccu.umanitoba.ca> <1992Mar1.072408.25643@a.cs.okstate.edu> <1992Mar2.031253.3229@ccu.umanitoba.ca> <1992Mar4.022416.11169@a.cs.okstate.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 30

In article <1992Mar4.022416.11169@a.cs.okstate.edu> onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR) writes:
|  No, it's not an unusual burden, because I am assuming that the computer
|has information about the language, as you would maintain, and that both
|humans and computers can be left alone without interlocutors or other
|agents present.  The critical difference is that a human can think about
|these things, indeed even invent problems to solve on their own, without
|other agents present.  I doubt this to be possible on a computer.

Gack, this is silly.

Let's see, we can install a background demon that seeks out problems to solve,
perhaps by scanning current inputs for unexpected patterns, or by tracking
other activities for failure to complete.  These problems can then be inserted
in a list of 'problems to solve', which are worked on when there is time
to do so.  And, if it is determined that some other entity, say a human,
may have part of the data needed for a solution, install a trigger to ask
said entity when the chance arises.
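The demon sketched above can be made concrete.  Here is a minimal sketch in
Python (obviously not what a 1992 implementation would look like); all the
names (ProblemDemon, scan_input, work_when_idle, and so on) are hypothetical,
invented for illustration, and the "solver" is just a stand-in callable:

```python
from collections import deque

class ProblemDemon:
    """Hypothetical background demon that seeks out problems to solve."""

    def __init__(self, expected_patterns):
        self.expected = set(expected_patterns)
        self.problems = deque()   # the list of 'problems to solve'
        self.triggers = {}        # entity -> questions to ask when it appears

    def scan_input(self, pattern):
        # An unexpected pattern in the current inputs becomes a problem.
        if pattern not in self.expected:
            self.problems.append(('explain', pattern))

    def track_activity(self, name, completed):
        # An activity that failed to complete is also a problem.
        if not completed:
            self.problems.append(('finish', name))

    def work_when_idle(self, solver):
        # Worked on when there is time to do so.  If the solver cannot
        # handle a problem, some other entity (say, a human) may have
        # part of the data needed, so install a trigger to ask it.
        while self.problems:
            kind, item = self.problems.popleft()
            if not solver(kind, item):
                self.triggers.setdefault('human', []).append((kind, item))

    def entity_appears(self, entity):
        # The chance has arisen: return the stored questions for this entity.
        return self.triggers.pop(entity, [])
```

For example, a demon expecting only greetings that sees an anomaly and a
failed compile will solve what it can at idle time and save the rest to ask
a human about later:

```python
demon = ProblemDemon({'greeting'})
demon.scan_input('greeting')                # expected, no problem
demon.scan_input('anomaly')                 # unexpected -> problem
demon.track_activity('compile', False)      # failed -> problem
demon.work_when_idle(lambda kind, item: kind == 'finish')
demon.entity_appears('human')               # -> [('explain', 'anomaly')]
```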

This seems to me to accomplish basically what you ask.

Now, this may not be easy to program, but how is it really any different
from how humans do it?  Or do you maintain that humans can 'make up problems'
out of thin air, with no relation to prior experience?  If so, prove it,
because it is contrary to current psychological research.

Or are you saying that this cannot actually be programmed?
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)