From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!samsung!crackers!m2c!garbo.ucc.umass.edu!dime!orourke Mon Jan  6 10:30:14 EST 1992
Article 2462 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!samsung!crackers!m2c!garbo.ucc.umass.edu!dime!orourke
From: orourke@unix1.cs.umass.edu (Joseph O'Rourke)
Newsgroups: comp.ai.philosophy
Subject: Re: A learning automaton for temporal sequences
Message-ID: <41247@dime.cs.umass.edu>
Date: 1 Jan 92 01:32:51 GMT
References: <1991Dec31.220604.2892@uwm.edu>
Sender: news@dime.cs.umass.edu
Reply-To: orourke@sophia.smith.edu (Joseph O'Rourke)
Organization: Smith College, Northampton, MA, US
Lines: 19

In article <1991Dec31.220604.2892@uwm.edu> markh@csd4.csd.uwm.edu (Mark William Hopkins) writes:
>
>   Way back in 1986 I tried out a series of simple experiments to probe into
>the mind.  [...]

>[description of program]

>   Training it on about 150k of erotically written text resulted in the
>most ridiculously funny output...
>
>   One thing that is guaranteed: as N approaches infinity, the family of
>algorithms will converge to a 100% accurate look-up table of everything that
>was used as training input.  But long before then, you'll start seeing
>increasing cohesion, then increasing syntactic regularity, and then even
>semantic regularity.  [...]

The idea of simulating text this way has been in the air for a while.
For an early light-hearted example, see Hugh Kenner and J. O'Rourke, 
"A Travesty Generator for Micros," BYTE, Nov. 1984.


