From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!sdd.hp.com!think.com!spool.mu.edu!uunet!mcsun!sunic2!sics.se!sics.se!torkel Mon May 25 14:06:53 EDT 1992
Article 5814 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!sdd.hp.com!think.com!spool.mu.edu!uunet!mcsun!sunic2!sics.se!sics.se!torkel
From: torkel@sics.se (Torkel Franzen)
Newsgroups: comp.ai.philosophy
Subject: Re: penrose
Message-ID: <1992May21.153500.17675@sics.se>
Date: 21 May 92 15:35:00 GMT
Article-I.D.: sics.1992May21.153500.17675
References: <1992May8.015202.10792@news.media.mit.edu>
	<1992May18.194416.27171@hellgate.utah.edu>
	<1992May19.025328.5332@news.media.mit.edu>
	<1992May20.010756.27980@news.media.mit.edu>
	<1992May20.074423.4405@sics.se> <593@trwacs.fp.trw.com>
Sender: news@sics.se
Organization: Swedish Institute of Computer Science, Kista
Lines: 14
In-Reply-To: erwin@trwacs.fp.trw.com's message of 21 May 92 11:59:14 GMT

In article <593@trwacs.fp.trw.com> erwin@trwacs.fp.trw.com (Harry Erwin) writes:

   >People working in risk management, analysis, and assessment have evidence
   >that humans are inconsistent in a very pragmatic sense--humans make
   >different decisions with regard to risk management depending on how
   >the decision is posed.

  Yes, this kind of thing makes good sense. But what is at issue in
such cases is not consistency but inconsistency, and we see from the
particular instances what inconsistency means here. What I wonder is,
what does it mean to assume that people are consistent? Since I, for
example, have no well-defined set of statements that I "believe", the
logical concept of consistency defined for formal theories is useless.
