From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!christo Thu Feb 20 15:21:47 EST 1992
Article 3836 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!christo
From: christo@psych.toronto.edu (Christopher Green)
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Feb18.200409.21596@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Feb18.044411.18663@psych.toronto.edu> <1992Feb18.180114.12414@watdragon.waterloo.edu>
Date: Tue, 18 Feb 1992 20:04:09 GMT

In article <1992Feb18.180114.12414@watdragon.waterloo.edu> cpshelle@logos.waterloo.edu (cameron shelley) writes:
>christo@psych.toronto.edu (Christopher Green) writes:
>> >> In article <1992Feb14.152243.6535@watdragon.waterloo.edu> cpshelle@logos.waterloo.edu (cameron shelley) writes:
>> >> >All I can add here is that the sort of work I referred to above takes
>> >> >belief to exist a priori, and generally models it by various
>> >> >truth-functional modal logics.
>> 
>> ??!! But modal logics aren't truth functional. They're intensional!
>> What might you mean?
>
>I guess I should have added Montague Semantics to my list of examples.
>
Please go on. I'm not sure how this helps. Montague's semantics is modal
and not truth-functional (except very early on in his career).
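
The point at issue -- that a modal operator's value at a world is not a
function of its argument's truth value at that world -- can be sketched
with a toy Kripke model. This is only an illustration I'm adding here;
the world names, valuation, and helper functions are all hypothetical,
not anything from the thread:

```python
# A minimal Kripke model: worlds, an accessibility relation, and a
# valuation saying which atomic propositions hold at which worlds.
worlds = {"w0", "w1"}
access = {"w0": {"w0", "w1"}, "w1": {"w1"}}  # w0 "sees" w0 and w1

# p and q are BOTH true at w0, but q fails at the accessible world w1.
val = {
    "p": {"w0", "w1"},
    "q": {"w0"},
}

def true_at(world, prop):
    """Atomic truth at a single world."""
    return world in val[prop]

def box(world, prop):
    """'Necessarily prop' holds at a world iff prop holds at every
    world accessible from it -- the standard Kripke truth condition."""
    return all(true_at(v, prop) for v in access[world])

# p and q agree in truth value at w0...
assert true_at("w0", "p") and true_at("w0", "q")
# ...yet box differs on them there, so box cannot be computed from the
# argument's truth value alone: it is not truth-functional.
print(box("w0", "p"))  # True
print(box("w0", "q"))  # False
```

Since p and q have the same truth value at w0 while box(p) and box(q)
do not, no truth table for box is possible -- which is just the
intensionality point made above.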

-- 
Christopher D. Green                christo@psych.toronto.edu
Psychology Department               cgreen@lake.scar.utoronto.ca
University of Toronto
---------------------


