From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!tarpit!cs.ucf.edu!news Thu Apr 30 15:23:22 EDT 1992
Article 5330 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!tarpit!cs.ucf.edu!news
From: clarke@acme.ucf.edu (Thomas Clarke)
Newsgroups: comp.ai.philosophy
Subject: Re: Games (was Re: Categories: bounded or graded?)
Message-ID: <1992Apr29.122350.20125@cs.ucf.edu>
Date: 29 Apr 92 12:23:50 GMT
References: <1992Apr28.230052.7394@spss.com>
Sender: news@cs.ucf.edu (News system)
Organization: University of Central Florida
Lines: 28

In article <1992Apr28.230052.7394@spss.com> markrose@spss.com (Mark  
Rosenfelder) writes:
> In article <1992Apr28.173231.11604@cs.ucf.edu> clarke@acme.ucf.edu 
> (Thomas Clarke) quotes Bernard Suits as writing:
> 
> >"I also offer the following simpler and, so to
> >speak, more portable version of the above:  playing a game is the voluntary
> >attempt to overcome unnecessary obstacles."
> 
> This one seems to apply pretty well
> to the stock market.  There's a specific goal (making money), set rules,
> more efficient means like insider trading are prohibited, and the rules are
> accepted to facilitate the trading.  Dancing a square dance fits, too.
> Running a research institution.  (Goal: increase of knowledge; rules:
> scientific method; prohibited efficiencies: plagiarism, espionage;
> rules accepted to facilitate goal.)
> 
> It's no reflection on Suits to say that his definition has holes in it.
> Defining a word like "games", for both Wittgenstein's reasons and 
> Prof. Minsky's, is tricky.

I do recommend the book; it's amusing and readable.  Suits takes great pains
to point out that his definition is not exclusive: something can be a game and
also serve another motive, such as making money in the stock market or advancing
knowledge in research.  For that matter, the legal system itself somewhat fits
the definition.  Or is it just that the apparent inefficiencies are introduced
to avoid the greater inefficiencies caused by a lack of rules?  Oh no, not
Mill's utility theory!  (I fear this is getting away from AI.)
