Newsgroups: comp.ai,sci.cognitive,sci.logic,sci.psychology.research
Path: cantaloupe.srv.cs.cmu.edu!europa.chnt.gtegsc.com!howland.reston.ans.net!swrinde!cam.news.pipex.net!pipex!edi.news.pipex.net!pipex!bnr.co.uk!bcarh8ac.bnr.ca!bcarh189.bnr.ca!nott!dgbt!al
From: David Longley <David@longley.demon.co.uk>
Subject: Is a rDBMS Applied AI??[1/1]
X-Nntp-Posting-Host: longley.demon.co.uk
Message-ID: <804285336snz@longley.demon.co.uk>
Originator: al@debra.dgbt.doc.ca
Lines: 507
Sender: news@dispatch.demon.co.uk
Reply-To: David@longley.demon.co.uk
Organization: Relational Technology
X-Newsreader: Demon Internet Simple News v1.29
Date: Tue, 27 Jun 95 20:35:36 GMT
Approved: al@debra.dgbt.doc.ca
Xref: glinda.oz.cs.cmu.edu comp.ai:30936 sci.cognitive:8093 sci.logic:11585 sci.psychology.research:1699


I'm interested  to  know what those working  in this field think of 
the proposition that the following programme  comprises an exercise 
in applied AI. It certainly isn't psychology  per se, but could  be
described as behaviour science & technology.  However, given what I
have said elsewhere about actuarial vs. clinical judgement, I think
it fair to class it as an AI application. I'd be grateful  to  hear
what others think. I'm making it available in two parts (those  who 
do not like citation of primary sources are advised to skip this).

PROfiling and PROgramming BEhaviour: A Relational DBMS.

PROBE is designed to serve as a system of 'helpful extensional
strategies' for professionally trained Behaviour Scientists. Volume 1
gives extensive coverage of the empirical research on the heuristics
and biases which comprise the descriptive subject matter of
contemporary (methodologically solipsistic) psychology, from classical
and instrumental conditioning to decision making under uncertainty, ie
clinical (intensional), 'gap-filling' judgment as contrasted with
actuarial (extensional) analysis. Practical illustrations of an
alternative, extensional analysis can be found in Volume 2, and in
Sections 3 and 4 of this document. Some simple illustrative examples,
which indicate the import of the failure of substitution of identicals
in intensional contexts, will suffice for the present section:

1) One member of staff may describe an inmate as 'subversive', as a
characteristic or quality (intensional). Another member of staff may
describe the same inmate as 'disruptive'. However, one cannot freely
substitute 'subversive' for 'disruptive' in the contexts of their
statements and preserve the truth of those statements. For it would
not be true to say that the first staff member described the inmate as
'disruptive' and the second as 'subversive'. That substitutions are
often made in such contexts may be taken as an indication of the
vagueness, and therefore the predictive and diagnostic poverty, of our
folk psychological vernacular.

2) To use an example from antiquity, one could say that Oedipus wanted 
to  marry Jocasta, but one could not say that Oedipus wanted to  marry 
his  mother,  even  though,  extensionally  speaking,  "Jocasta"   and 
"Oedipus' mother" had the same reference (i.e. were coextensive). 

From another perspective: Harry says he loves Sally, but doubts that
Sally loves him. Sally protests that she does love Harry, and with all 
her  heart. The problem here is that loving is an  intensional  idiom. 
Sally may equate her 'loving' Harry with her feelings of affection for 
him,  which Harry may not see manifest in her behaviour  towards  him. 
One  could say that Sally is using 'loving' intensionally to  describe 
her feelings for Harry. Harry, on the other hand, basing his  judgment 
on  Sally's  behaviour, is using 'loving'  extensionally  to  describe 
Sally's  actual  behaviour  towards  him.  Cognitive  processes  being 
context-specific,   try   substituting  the   propositional   attitude 
'remorse'  for  'love', 'a life sentenced inmate' for 'Sally'  and  'a 
Discretionary  Lifer Panel interviewer' for 'Harry' to appreciate  the 
force of the problem, and why the PROBE project eschews usage of  such 
idioms.
 
Or  again:  a  psychologist  interviews a  disruptive  inmate  who  is 
considered paranoid. During the interview, the psychologist treats the 
inmate's  utterances  with  unconditional  positive  regard,  ie   the 
psychologist does not overtly ascribe truth values to what the  inmate 
says  or  does,  other  than  to record  that  such  events  occur  as 
behaviours.  An officer, hearing the inmate say that the  psychologist 
'understood' the inmate's point of view, considers the psychologist to 
have  colluded with the inmate. Conclusion: the officer has failed  to 
understand the nature of extensional analysis. 
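
The import of these examples can be sketched in a few lines of code.
This is purely illustrative (Python, with hypothetical names, not part
of the PROBE system): in an extensional context, coextensive terms can
be substituted salva veritate, whereas verbatim (de dicto) reports
resist such substitution.

```python
# Illustrative sketch only: hypothetical names, not PROBE code.

# Extensional context: a dated observation statement given a truth
# value. Two coextensive designations of the same recorded event.
observed = {("A4501", "incited_work_refusal", "1995-05-02"): True}
event1 = ("A4501", "incited_work_refusal", "1995-05-02")
event2 = ("A4501", "incited_work_refusal", "1995-05-02")

# Substituting one coextensive term for the other preserves truth.
assert observed[event1] == observed[event2]

# De dicto context: verbatim staff reports are quoted strings.
report_by_officer_1 = "inmate A4501 is subversive"
report_by_officer_2 = "inmate A4501 is disruptive"

# Even if both descriptions are intended of the same inmate and the
# same conduct, the quoted reports are distinct records: one cannot
# substitute 'subversive' for 'disruptive' and preserve the record.
assert report_by_officer_1 != report_by_officer_2
```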

All  staff  working  in prison establishments will be  aware  of  many 
familiar   examples  of  the  ubiquity  of  intensional   idioms   and 
indeterminacy  of translation or interpretation from their day to  day 
work. Folk Psychology ('common sense') is in large part the very
business of working with prisoners. Yet, whilst cognitive
psychologists model such judgmental processes (see Volume 1 for
extensive coverage), it must be appreciated that their approach is
descriptive: they do not recommend that such processes be accepted as
normative, ie as prescriptions for optimal management of behaviour.
For that we turn to the actuarial technology reviewed in Volume 1,
illustrated in Volume 2, and explained in Volume 3. As Nelson (1992)
remarked:

    'It  does strike me as odd that in most  cognitivist  thought 
    the truth of folk psychology is taken for granted in  setting 
    up  the discipline of cognitive science (e.g. Fodor 1987;  Ch 
    1).  This is true of Functionalism and RTM. But knowledge  of 
    mind  is  a product of cognitive science, one  hopes,  not  a 
    presupposition of it.'

    R. J. Nelson
    Naming and Reference p.276

Quine's  work revealing the failure of quantification within  contexts 
of  propositional  attitude (1943, 1956) shows that there  is  a  very 
important  limit  to  what  we  can  truth-functionally  infer  within 
psychological  and  other  intensional contexts. Yet  when  we  record 
observations  within  a data base, we give  those  observations  truth 
values, namely, that a particular class of behaviour was the case at a 
specific  time  and  date. Deductive (extensional)  analysis  of  such 
observation    statements,    their   conjunctions,    negation    and 
quantification, therefore represents the limit of the contribution  of 
the  Behaviour Analyst. It should also be clear that all of  the  work 
within  the  PROBE system is constrained to  extensional  analysis  of 
material  regimented within the language of predicate logic.  However, 
even  within such constraints we can manage  inmates,  establishments, 
and entire estates. Nevertheless, nothing we deductively infer allows
us to conclude that the subjects themselves are privy to the relations
we extensionally identify and manage by. Deductive inferences can be
made on the basis of behaviour profiles of which individuals
themselves are unaware; such inferences may nonetheless be valid and
true.

The routines which support PROBE impose constraints on what can, and
cannot, effectively (computationally) be done. For instance, all
analysis depends on decisive data entry. Whatever led to those truth
functions (data values) is largely independent of the Behaviour
Scientist's work, since the entered data (assuming it is entered
accurately) will be the consequence of the sentence of the court, the
assessments of workshop staff, and so on. If the system reveals useful
functional relations between classes of behaviour, it is quite
irrelevant whether inmates or others 'believe' or 'know' of such
relations. Behaviour can be, and generally is, managed on the basis of
such relations.
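
A minimal sketch of the point (hypothetical schema and data, not
PROBE's own tables) using an in-memory relational store: dated
behaviour records are truth-valued observation statements, and a
retrieval is a purely deductive conjunction over them.

```python
# Hypothetical schema and data, for illustration only.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE observation (
    inmate      TEXT,
    behaviour   TEXT,
    observed_on TEXT)""")
con.executemany(
    "INSERT INTO observation VALUES (?, ?, ?)",
    [("A4501", "work_refusal", "1995-05-02"),
     ("A4501", "cell_damage",  "1995-05-09"),
     ("B7730", "work_refusal", "1995-05-02")])

# An extensional retrieval: the conjunction of two observation
# statements, true of exactly those inmates satisfying both.
rows = con.execute("""
    SELECT DISTINCT a.inmate
    FROM observation a JOIN observation b ON a.inmate = b.inmate
    WHERE a.behaviour = 'work_refusal'
      AND b.behaviour = 'cell_damage'""").fetchall()
print(rows)  # only A4501 satisfies both conjuncts
```

Whether any inmate 'believes' or 'knows' of the relation the query
identifies is, as above, irrelevant to the deduction.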

Applied behaviour scientists eschew intensionalism and methodological
solipsism. They also abandon (Quine 1951) the notion of analyticity,
synonymy, and all of their other subtle cognates such as identity or
similarity (see preface). This is no more than to adopt a
thoroughgoing empiricism. The 'Five Milestones of Empiricism' (Quine
1981) represent a progressive shift from the 'idea' to 'the word', to
'the sentence', to 'systems of sentences', and finally to
'methodological monism' (the abandonment of the analytic-synthetic
distinction) and 'naturalism'.

    'Naturalism does not repudiate epistemology, but assimilates
    it to empirical psychology... The naturalistic philosopher
    begins his reasoning within the inherited world theory as a
    going concern. He tentatively believes all of it, but
    believes also that some unidentified portions are wrong. He
    tries to improve, clarify, and understand the system from
    within. He is the busy sailor adrift on Neurath's boat.'

    W. V. O. Quine (1975)
    Five Milestones of Empiricism
    Theories and Things (1981)
    
All effective reasoning (computing) must therefore be deductive
inferential reasoning, as Frege (1879) proposed in his
Begriffsschrift and as Church (1936) and Turing (1937) so forcefully
elaborated into computer programming. Reasoning, it appears, is not a
psychological process at all, but a physical, algorithmic process
like any other behaviour. Attempts at reasoning within intensional
(psychological) contexts are therefore only to be expected to be
fraught with difficulty, primarily because of the unreliability of
logically 'quantifying in' (Quine 1956). This conclusion immediately
raises problems for strategies designed to improve thinking beyond
teaching specific, context-relevant behaviours (see beginning of
Section 3 and Volume 1).

It may well be that the failure of Leibniz's Law within intensional
contexts leads us to 'believe' in the existence of the whole sorry
business of induction, which can never be rationally justified
(Popper 1963; see Carnap 1970, 1980 for the best attempts). It may
also account for 'the paradox of inference' (Cohen & Nagel 1962).

    'If  in an inference the conclusion is not contained  in  the 
    premises,  it cannot be valid; and if the conclusion  is  not 
    different   from  the  premises,  it  is  useless;  but   the 
    conclusion  cannot  be  contained in the  premises  and  also 
    possess  novelty; hence inferences cannot be both  valid  and 
    useful.'

    An Introduction to Logic
    M. R. Cohen & E. Nagel (1962) p.173

Psychological novelty, and other psychological processes, have
nothing to do with logical deduction per se, as the authors go on to
point out, but the phenomenon probably serves as a good
phenomenological marker that change is occurring. Yet this can be a
very difficult point to see clearly. It is akin to appreciating
Popper's realistic approach to knowledge as objective. Whilst the
processes whereby we acquire knowledge may well be constrained by the
limits of generalisation, modularity and memory processing capacity,
all of this only makes sense from the methodological solipsist stance
of the psychologist who is concerned to explicate the constraints of
human reasoning and behaviour. Unfamiliarity, possibly the triggering
condition for endogenous opiate inhibition and behavioural withdrawal
(Deakin and Longley 1981), is merely a cognate of uncertainty - a
condition within which all learners must operate, those with a heroin
habit very painfully if they are to return to a state like the rest
of us; they know better than anyone the meaning of 'hunger'.

However, it is not the case that concepts are true only if someone
understands, knows, or has first hand experience of them. In fact, it
looks, in the latter part of the 20th century, as though it is just
constraints on our central nervous system which account for why we
fail to spontaneously derive the conclusions which are implicit in
the premises of deductive arguments. To us, these inferences are
novel, and they are novel because of the failure of Leibniz's Law
within intensional contexts. Physiological research provides
impressive evidence in support of methodological solipsism, but it
must be appreciated that such evidence can only serve to explain why
there are constraints on human inference (ie because of the
architecture and physiology of the hippocampus). Professionals
achieve what they do, it is argued, because they limit what they do
to professional contexts.

    'Exp  1  assessed emergence neophobia in 20 male rats  in  an 
    apparatus  that provided a choice between novel and  familiar 
    alternatives.   Two  weeks  after  emergence   testing,   the 
    threshold  to induce perforant-path LTP and the magnitude  of 
    perforant-path  LTP in the dentate gyrus were assessed  under 
    pentobarbital anaesthesia. Neophobic Ss that spent relatively 
    little time in the novel alley during a 1-hr test had a lower 
    threshold  to  induce LTP and  exhibited  greater  asymptotic 
    excitatory  postsynaptic potential LTP than did neophilic  Ss 
    that readily entered and explored the novel alley. In Exp  2, 
    plasma  corticosterone  levels  in  13  rats  tested  in  the 
    emergence  task were also correlated with emergence  duration 
    and  were generally lower in neophobic Ss. Data suggest  that 
    neotic  behavior and LTP share a common  mechanism,  possibly 
    one mediated by an interaction of glucocorticoid hormones and 
    habituation.'

    S. Maren, K. Patel, R.F. Thompson, and D. Mitchell
    Individual   differences  in  emergence   neophobia   predict 
    magnitude of perforant-path long-term potentiation (LTP)  and 
    plasma corticosterone levels in rats.
    Psychobiology; 1993 Mar Vol 21(1) 2-10

All that physiological study will show us in the end is the actual
mechanical constraints which account for habit formation and its
context specificity. A major rationale for the PROBE system, qua
deductive system, is precisely this failure of Leibniz's Law
(substitutivity of identicals) within intensional contexts, and such
study may suggest the mechanism. No doubt much simpler organisms such
as Aplysia will suggest the basic model. Regardless, constraints on
our physiological architecture ensure that we do not spontaneously
derive all of the possible conclusions from the premises we know (see
Cherniak 1986 on 'Minimal Rationality', and our limited processing
capacity, constraints on consistency checking and restricted range of
material implication as a consequence - Volume 1).

It  is critical at this point that the reader appreciates that  it  is 
through  reliance  on the principles of Quantification  Theory  (First 
Order   Logic),   principles   which   mechanise   generalisation   as 
quantification,   that  PROBE  breaks  out  of  the   constraints   of 
intensionalism. Elsewhere, the work of Meehl 1967; 1978 and Gigerenzer 
(1987,  1993) has been cited to warn against the alternative  lure  of 
ritualistic inductive inferential statistics, which is endemic  within 
some areas of psychology. From what has been covered above, and in
earlier volumes, 'The Logic of Scientific Justification' suffices.
The PROBE project is premised on the fact that with the support of
Information Technology, we can deal with inmate behaviour
deductively, descriptively and extensionally, at least up to the
capacity of our present technology. From the evidence to date (Dawes,
Faust and Meehl 1993), current technology already far exceeds the
capacity of human problem solving in such situations.

What we are left with is the application of Information Technology  as 
computerised deductive logic and scientific method (the application of 
predicates  and  functions - recursive function  theory)  to  specific 
domains  of concern, ie classes of physical behaviour. In the case  of 
the  Prison Service, and the PROBE system, the domain comprises  those 
classes  of human behaviour which led inmates to offending  behaviour, 
conviction,  and subsequent behaviour whilst in custody. What  inmates 
'think',  'want',  'believe' etc. can be of no concern, except  as  de 
dicto, ie directly quoted, dated, verbal behaviours (Quine 1992). This 
is  simply  because,  as  far  as  the  PROBE  project  is  concerned, 
intensional material is not amenable to reliable quantitative analysis 
for  reasons outlined above (Kneale & Kneale 1962; Place 1987;  Nelson 
1992),  and  elsewhere  (Volume  1). What  is  directly  observed  and 
recorded  in  relation  to  other  observables  is  all  that  can  be 
systematically  analysed  truth functionally. This means  that  inmate 
reports,  and  any  discussion or analysis of  such  reports  in  case 
conferences  must  respect  the  basic  principle  of   extensionality 
outlined here and elsewhere (Volume 1).

The  rejection of analyticity (Quine 1951) has  profound  implications 
for  non-behavioural  psychology. Written reports on inmates  must  be 
verbatim   transcripts  of  what  is  said  and  observed,  with   the 
interviewer's  assessments  and comments  distinctly  identifiable  as 
such.  These constraints must also be respected when  discussing  such 
reports. That is, in the interest of truth and accuracy, all such
material must be analysed using effective, computationally sound,
extensional logic if falsehood and fiction are to be avoided. This
requires, at minimum, a commitment to verbatim recording of
behaviour, leaving deductive analysis of that data to the formal
systems which we have developed to perform that function.
Psychological processes, narrowly construed, in the terms outlined in
this section, are no more than that set of intrusive, intensional
heuristics which behaviour scientists have gone to such lengths to
document over the past 30 years. Such heuristics generally serve only
to distract from penetrative extensional analysis, and are often no
more than gap-filling rhetoric or sophistry. They are only
contextually effective, and have nothing to do with sound analysis
(Tversky & Kahneman 1973; Dawes, Faust and Meehl 1989). In the
absence of distributional data, behaviour scientists can do no more
than request that such systems be created as a sine qua non for them
to practice their profession effectively.

However,  and  this caveat is critical, behaviour science is  not  co-
extensive  with  Information  Technology,  since  all  scientists  use 
Information Technology to analyse their data drawn from their domain
of concern. For this reason, the technology discussed in this document 
must  remain  in the hands of those who specialise in  the  scientific 
recording and analysis of inmate behaviour if it is to continue to  be 
a  service  in  Behaviour  Science. It  is  a  category  mistake,  and 
therefore  an  administrative  mistake  to  dogmatically  conceive  of 
Information  Technology as an independent service, except as a  source 
of  hardware and software procurement. The application of  Information 
Technology to any domain is fundamentally no more than the application 
of algorithmic methods to render tasks effective, and it is  therefore 
impossible  to make specialists of such a service beyond that  of  the 
scientific  and  technical services provided by any  professional,  be 
they  accountants,  medical  practitioners,  architects  or  behaviour 
scientists.  This argument, whilst superficially simple, goes  to  the 
heart  of  the nature of the relationship between expert  systems  and 
professional services.

It  is  important for what follows that the  reader  appreciates  that 
descriptors  or declarative statements used to classify behaviour  are 
not  variables (even though they are labelled as such within  SIR  and 
SPSS). They are in fact propositional functions, predicates,
relations, or simply functions. Failure to appreciate this fact,
discovered  by Frege in 1879, can account for  considerable  confusion 
through a blurring of the distinction between deductive and  inductive 
inference.  The  issue can be explicated somewhat by  reference  to  a 
language closely related to that of the 4GL under discussion:

    'The query is written as one or more predicates (with
    arguments), separated by commas and terminated with a full
    stop. Unlike in predicate definitions, any variables are
    existentially quantified, and the interpreter will attempt to
    find values for them that satisfy the conjunction of the
    predicates... The Prolog interpreter starts by trying the
    first definition of the predicate, and substituting the
    values for the variables...'

    P. Gray (1984)
    Logic, Algebra and Databases p.74
    Representing programs by clauses: Prolog

Note that this is an entirely extensional procedure, substitutions are 
made  in terms of truth-conditions. What may seem a little strange  to 
the  novice  is that in working as above,  particularly  when  writing 
retrievals  to  generate  inmate reports (Volume 1,2)  he  or  she  is 
actually using 'proof theory', since:

    '..proof  theory  specifies how we can obtain  new  sentences 
    (theorems) from assumed ones (axioms) by means of pure symbol 
    manipulation (inference rules) .... answering questions is in 
    fact no different from proving theorems.'

    P. Flach (1994)
    Logic and Logic Programming p.17 
    Simply Logical: Intelligent Reasoning by Example
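
The mechanism Gray and Flach describe can be caricatured in a few
lines. The following toy resolver (Python, with hypothetical facts;
it stands in for the Prolog fragment under discussion, not for the
PQL implementation) treats answering a query as proving a theorem: it
searches for values of the existentially quantified variables that
satisfy a conjunction of predicates.

```python
# Toy query evaluator over ground facts; illustrative only.
facts = {
    ("observed", "A4501", "work_refusal"),
    ("observed", "A4501", "cell_damage"),
    ("observed", "B7730", "work_refusal"),
}

def unify(goal, fact, binding):
    """Match a goal against a ground fact, extending the binding."""
    if len(goal) != len(fact):
        return False
    for g, f in zip(goal, fact):
        if g.startswith("?"):                 # variable position
            if binding.setdefault(g, f) != f:
                return False
        elif g != f:                          # constant must match
            return False
    return True

def solve(goals, binding=None):
    """Yield every binding that satisfies the whole conjunction."""
    binding = binding or {}
    if not goals:
        yield binding
        return
    head, *rest = goals
    for fact in sorted(facts):
        trial = dict(binding)
        if unify(head, fact, trial):
            yield from solve(rest, trial)

# Query: for which ?X are both conjuncts provable from the facts?
query = [("observed", "?X", "work_refusal"),
         ("observed", "?X", "cell_damage")]
answers = [b["?X"] for b in solve(query)]
print(answers)  # A4501 is the only value proving both conjuncts
```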

Some of the difficulty for those new to the work lies in the fact that 
when  using a 4GL with a database such as PROBE, one's programming  is 
both declarative and procedural. 

    'Logic  programming  is the name of  a  programming  paradigm 
    which  was  developed  in  the 70s.  Rather  than  viewing  a 
    computer   program  as  a  step-by-step  description  of   an 
    algorithm, the program is conceived as a logical theory,  and 
    a  procedure call is viewed as a theorem of which  the  truth 
    needs  to  be  established. Thus executing  a  program  means 
    searching   for   a  proof.   In   traditional   (imperative) 
    programming   languages,   the  program   is   a   procedural 
    specification  of  how  a  problem needs  to  be  solved.  In 
    contrast,  a  logic  program concentrates  on  a  declarative 
    specification  of what the problem is. Readers familiar  with 
    imperative  programming  will  find  that  Logic  Programming 
    requires  a  quite different way of thinking.  Indeed,  their 
    knowledge   of  the  imperative  paradigm  will   be   partly 
    incompatible with the logic paradigm.

    This  is  certainly  true with regard to  the  concept  of  a 
    program  variable. In imperative languages, a variable  is  a 
    name  for a memory location which can store data  of  certain 
    types. While the contents of the location may vary over time, 
    the variable always points to the same location. In fact, the 
    term 'variable' is a bit of a misnomer here, since it  refers 
    to a value that is well-defined at every moment. In contrast, 
    a  variable  in  a  logic  program  is  a  variable  in   the 
    mathematical  sense, i.e. a placeholder that can take on  any 
    value.  In this respect, Logic Programming is therefore  much 
    closer to mathematical intuition than imperative programming.

    Imperative programming and Logic Programming also differ with 
    respect to the machine model they assume. A machine model  is 
    an  abstraction  of  the  computer  on  which  programs   are 
    executed.  The imperative paradigm assumes a dynamic,  state-
    based machine model, where the state of the computer is given 
    by  the  contents  of its memory. The  effect  of  a  program 
    statement  is a transition from one state to  another.  Logic 
    Programming  does  not assume such a dynamic  machine  model. 
    Computer plus program represent a certain amount of knowledge 
    about the world, which is used to answer queries.'

    P. Flach (1994)
    Logic and Logic Programming p.1-2 
    Simply Logical: Intelligent Reasoning by Example
    
The key elements here have been expressed by Kowalski's (1979) equation:

                     algorithm = logic + control

where  'logic'  refers to declarative programming  or  knowledge,  and 
'control'  refers  to procedural programming or knowledge.  To  use  a 
system such as PROBE effectively, one requires both classes of skills, 
although, within a 4GL, the emphasis is on the logic component:

    'For  inexperienced  database  users  it  is  desirable  that 
    queries  be  expressed  in a formalism as  close  to  natural 
    language  as  possible.  Since  logic  originates  from   the 
    analysis  of  natural  language, it is  not  surprising  that 
    database query languages express only the logic component  of 
    algorithms.   Restricting  query  languages  to   the   logic 
    component  has other advantages. It has the consequence  that 
    storage and retrieval schemes can be changed and improved  in 
    the  control component without affecting the user's  view  of 
    the data as defined by the logic component. In general, the
    higher  the  level of the programming language and  the  less 
    advanced  the  level of the programmer, the more  the  system 
    needs to assume responsibility for efficiency and to exercise 
    control over the use of the information given.

    The notion that: computation = controlled deduction

    was first proposed by Hayes (1973) and more recently by Bibel 
    (1978), Kowalski (1976), Pratt (1977) and Schwarz (1977). The 
    similar  thesis  that database systems be decomposed  into  a 
    relational component which defines the logic of the data, and 
    a control component which manages data storage and  retrieval 
    has been advocated by Codd (1970)...

    Natural Language = Logic + Control

    The procedural interpretation of Horn clauses reconciles  the 
    classical role of logic in the analysis of language with  the 
    interpretation  of  natural language statements  as  programs 
    (Winograd  1972). Like algorithms, natural language  combines 
    logic with control. The sentence:

    If  you want Mary to like you then give her presents  and  be 
    kind to animals

    combines the declarative information:

    Mary  likes  you  if you give her presents and  are  kind  to 
    animals.

    with the advice that it be used top-down to solve problems of 
    being liked by Mary to subproblems of giving her presents and 
    being kind to animals.'

    R. Kowalski (1979)
    The Procedural Interpretation of Horn Clauses
    Logic for Problem Solving
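
Kowalski's equation admits of a small, hedged illustration (Python,
hypothetical data): the logic component fixes what counts as an
answer, while two different control regimes - two orders of search -
compute the same answer set.

```python
# Illustration of algorithm = logic + control; hypothetical data.
pairs = [("A4501", "work_refusal"), ("A4501", "cell_damage"),
         ("B7730", "work_refusal")]

def qualifies(inmate):
    """Logic component: the declarative definition of an answer."""
    return ((inmate, "work_refusal") in pairs and
            (inmate, "cell_damage") in pairs)

inmates = {i for i, _ in pairs}

# Control component, regime 1: test candidates in ascending order.
answers_fwd = [i for i in sorted(inmates) if qualifies(i)]
# Control component, regime 2: test candidates in descending order.
answers_bwd = [i for i in sorted(inmates, reverse=True) if qualifies(i)]

# Different control, identical logic: the answer sets coincide.
assert set(answers_fwd) == set(answers_bwd) == {"A4501"}
```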

The language of science can, as Quine has clearly shown, get by
without the psychological idioms which cannot be regimented within
the language of the Predicate Calculus, or the subset of it known as
Horn Clauses. Once one accepts the inevitability of this conceptual
framework (reduction to Conjunctive Normal Form, and Clause Form) it
should be clear that there exists a technology for asking questions
explicitly and unambiguously, and that it is no longer practically
possible (as Hahn 1933 made clear) to practice psychology
intensionally. It is only possible to analyse behaviour
extensionally. At this stage, it should also be clear what is meant
by saying that this is so because 'variables' are what we seek to
have 'satisfied' (Tarski 1956; Barwise & Etchemendy 1992) by the
constraints of our well formed formulae (PQL retrievals). Variables
are argument positions into which we substitute values when passing
the database data 'through' the logical conditions which comprise our
retrievals. That is, variables are:

    '...place   holders  that  indicate   relationships   between 
    quantifiers   and   the   argument   positions   of   various 
    predicates.'

    J. Barwise & J. Etchemendy (1992)
    The Language of First-Order Logic p.115


If not as predicates, our descriptors might be called 'attributes',
but not 'variables'. With computerised quantification, the power of a
system such as PROBE should become apparent. 'Sentence_Length', for
example, is better conceived as a description of an individual, ie a
declarative statement, a predicate, or a function of a specific
arity. Our queries, whether in PROLOG, SQL or PQL, are satisfied by
values which meet the conditions of the quantifiers (existential or
universal) through instantiation.

    'To describe when quantified sentences are true, we need to
    introduce the auxiliary notion of satisfaction. The basic
    idea is simple, and can be illustrated with a few examples.
    We say that an object satisfies the atomic wff Cube(x) if and
    only if the object is a cube. Similarly, we say an object
    satisfies the complex wff Cube(x) & Small(x) if and only if
    it is both a cube and small. As a final example, an object
    satisfies the wff Cube(x) v ~Large(x) if and only if it is
    either a cube or not large (or both).'

    Ibid p.119-20
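
Barwise & Etchemendy's notion of satisfaction is easily mimicked
(Python, with a hypothetical two-object domain): a predicate is a
propositional function from objects to truth values, and an object
satisfies an open wff just when applying the formula to it yields a
truth.

```python
# Satisfaction of open formulae over a tiny hypothetical domain.
def cube(x):  return x["shape"] == "cube"
def small(x): return x["size"] == "small"
def large(x): return x["size"] == "large"

domain = [
    {"name": "a", "shape": "cube",   "size": "small"},
    {"name": "b", "shape": "sphere", "size": "large"},
]

# Objects satisfying the conjunction Cube(x) & Small(x).
sat_conj = [o["name"] for o in domain if cube(o) and small(o)]

# Objects satisfying the disjunction Cube(x) v ~Large(x).
sat_disj = [o["name"] for o in domain if cube(o) or not large(o)]

assert sat_conj == ["a"]  # only 'a' is both a cube and small
assert sat_disj == ["a"]  # 'b' is large and is not a cube
```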

Continued in next post.
-- 
David Longley

