From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!munnari.oz.au!uunet!morrow.stanford.edu!CSD-NewsHost.Stanford.EDU!Neon.Stanford.EDU!vishal Mon Dec  9 10:48:21 EST 1991
Article 1896 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!munnari.oz.au!uunet!morrow.stanford.edu!CSD-NewsHost.Stanford.EDU!Neon.Stanford.EDU!vishal
From: vishal@Neon.Stanford.EDU (Vishal I. Sikka)
Newsgroups: comp.ai.philosophy
Subject: Re: Zeleny (was Re: Searle)
Message-ID: <1991Dec6.012318.6474@CSD-NewsHost.Stanford.EDU>
Date: 6 Dec 91 01:23:18 GMT
Article-I.D.: CSD-News.1991Dec6.012318.6474
References: <1991Nov24.195230.5843@husc3.harvard.edu> <1991Nov24.224724.2149@arizona.edu> <441@trwacs.UUCP>
Sender: news@CSD-NewsHost.Stanford.EDU
Organization: Computer Science Department, Stanford University, CA, USA
Lines: 23
Originator: vishal@Neon.Stanford.EDU

In article <441@trwacs.UUCP> erwin@trwacs.UUCP (Harry Erwin) writes:
>The assumption that humans have a mystic power to "induct" is interesting.
>This summer, my son tried to train back-prop nets on chaotic data to see
>how good they were at predicting chaotic time series _at arbitrary
>points_. He learned that the nets were no good unless they had experience
>in the subset of the state space that he was testing them on. No magical
>power exists for the nets to make predictions without experience. That's
>consistent with my experience teaching math. People "induct" because the
>brain is good at learning inductive patterns.
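
The experiment described above can be sketched in a few lines. This is a minimal illustration, not the original code: it assumes a simple one-dimensional chaotic system (the logistic map, x' = 4x(1-x)) in place of whatever series was actually used, and a tiny one-hidden-layer net trained by back-propagation in NumPy. The net sees only the half of the state space x in [0, 0.5]; its error on the unseen half [0.5, 1] comes out far larger, consistent with the point that nets have no power to predict without experience.

```python
# Hedged sketch (assumed system and net sizes, not the original experiment):
# train a back-prop net on the logistic map, but only on half its state space,
# then compare error inside vs. outside the region it experienced.
import numpy as np

rng = np.random.default_rng(0)

def logistic(x):
    # the chaotic map x' = 4x(1-x), stand-in for the original time series
    return 4.0 * x * (1.0 - x)

# training data drawn ONLY from the subset [0.0, 0.5] of the state space
x_train = rng.uniform(0.0, 0.5, size=(500, 1))
y_train = logistic(x_train)

# one hidden layer of sigmoid units, trained by plain back-propagation
H = 16
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(5000):
    h = 1.0 / (1.0 + np.exp(-(x_train @ W1 + b1)))   # forward pass
    pred = h @ W2 + b2
    err = pred - y_train
    # backward pass: mean-squared-error gradients
    gW2 = h.T @ err / len(x_train); gb2 = err.mean(0)
    dh = (err @ W2.T) * h * (1.0 - h)
    gW1 = x_train.T @ dh / len(x_train); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def net(x):
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    return h @ W2 + b2

x_in  = rng.uniform(0.0, 0.5, size=(200, 1))   # region the net experienced
x_out = rng.uniform(0.5, 1.0, size=(200, 1))   # region it never saw
mse_in  = float(np.mean((net(x_in)  - logistic(x_in))**2))
mse_out = float(np.mean((net(x_out) - logistic(x_out))**2))
print(mse_in, mse_out)  # out-of-experience error dwarfs in-region error
```

The net fits the seen half well, but outside it the map turns back down toward zero while the net keeps extrapolating its learned trend, so the unseen-region error blows up.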

Who is to say that there cannot be a class of nets OTHER than back-prop nets
that are as good at learning inductive patterns as back-prop nets are at
learning simple geometric patterns?

The principal point of contention in AI is not whether it has been done,
but whether it can be done.

>Cheers,
>-- 
>Harry Erwin


Vishal.


