Newsgroups: comp.ai,comp.ai.philosophy,sci.logic,sci.philosophy.tech,sci.psychology,sci.psychology.theory,sci.cognitive
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!oitnews.harvard.edu!purdue!haven.umd.edu!news.umbc.edu!eff!news.duke.edu!news.mathworks.com!newsfeed.internetmci.com!EU.net!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: On Going Beyond The Information Given & 'Cognition'
Message-ID: <jqbDD7qsx.4u5@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <400mb0$l5b@mp.cs.niu.edu> <808143622snz@longley.demon.co.uk> <jqbDD681v.3pw@netcom.com> <808214828snz@longley.demon.co.uk>
Date: Sat, 12 Aug 1995 19:43:45 GMT
Lines: 129
Sender: jqb@netcom22.netcom.com
Xref: glinda.oz.cs.cmu.edu comp.ai:32454 comp.ai.philosophy:31622 sci.logic:13974 sci.philosophy.tech:19392 sci.psychology.theory:317 sci.cognitive:9041

In article <808214828snz@longley.demon.co.uk>,
David Longley  <David@longley.demon.co.uk> wrote:
>In article <jqbDD681v.3pw@netcom.com> jqb@netcom.com "Jim Balter" writes:
>
>> In article <808143622snz@longley.demon.co.uk>,
>> David Longley  <David@longley.demon.co.uk> wrote:
>> >I find it very difficult to see how anyone working in AI or Cognitive
>> >Science can  read the  following extracts from  a range of  important 
>> >papers over the past 20 years and still have the termity to write the
>> >audacious remarks that have been made about the need for 'debate'. 
>> 
>> What, you mean that the tools of behavior science aren't sufficient for you to
>> resolve this quandary regarding behavior?  I have had serious disagreements
>> with people such as you, Neil Rickert, Jeff Dalton, Wayne Throop, Matthew
>> Wiener, Steven Harnad, Chris Malcolm, Clayton Gillespie, Mikhail Zeleny, Ken
>> Colby, Pete Lupton, Timothy Murphy, et al., in many cases concluding that
>> they are mistaken on some issue or concept or another.  Yet in each case I
>> have some fairly detailed model as to how it could be that they are so
>> gawl-darn stubborn and pig-headed :-) as to disagree with me, considering the
>> amount of evidence and the careful reasoning I have provided.  One aspect of
>> that model is the notion that, no matter how extensive my readings and
>> analysis of some subject, no matter how certain I am, *I may be wrong*.  Would
>> you call suggesting the possibility that you also might be wrong "termity"
>> (sic) or audacity?
>
>On the last sentence, not at all Jim. I'd like to think that  I'd  
>at  least learn something through finding something to be  wrong. 

Yes you would, but since your process (standing under Aaron's lamppost) keeps
you immune from finding such a thing, you will never obtain such learning.

>But with respect to the themes of research covered in  'Fragments 
>of Behaviour..' it  isn't  so much a matter of *me* wanting to be 
>*right*. That just isn't  the  way  to evaluate or appraise it  - 
>what  is  important is whether it looks useful   in  a  practical 
>sense.

Do you even remember what you wrote that I responded to above?  You repeatedly
equivocate over what is at issue here.  As for the practical value of PROBE
and related methods, that is distinctly off-topic for c.a.p, and I do wish you
would take your shameless huckstering elsewhere, somewhere that you wouldn't
feel compelled to make sweeping generalizations about "Cognitive Scientists"
and their "temerity".  As for *philosophical* issues and interpretations,
issues of intensionality, what makes good or bad science, whether there is
"need for debate", etc., your "wanting to be right" is very much an issue.

>What this comes down to is, are we likely to go astray IFF 
>we  use  a relational database system in the  way  proposed,   ie  
>limiting  what we record on the grounds proposed,  and  investing 
>our time and  efforts in work on automated report generation  and 
>the  generation of distributions of data as base rates  to  guide 
>behaviour management along actuarial lines?

Perhaps you could explain the relationship between these questions and
philosophical issues regarding artificial intelligence.  Maybe I've missed
something.  Others have asked what the purpose of these reports is and what
will be done with them.  Will your system make *decisions*, or is that left
in the hands of non-artificial systems (like prison administrators)?

And don't complain that I would know if I had read your megabytes of postings.
It is rather arrogant of you to assume that people will want to read this
stuff *before* they know what it says.  As ole extensional Spock might say,
"That's not logical, Jim", but it is how the relatively slow chemical
processes of the human mind are able to do real-time processing of an
incredibly rich input stream.  As Neil said, human judgement is better than
what we'd expect of a rational agent.

>That  is,  there  are  technical  issues, which others  may  have  
>empirical experience  with, which could be profitably  discussed. 
>There  has  been none of this to date. If one looks over  at  the 
>neural networks group, one  can see something like that...
>
>How about some of it here?

Try comp.ai for technical issues and empirical experience.  Here, we talk
*philosophy*.

>(IF  what I have said about the intensional idioms is  true,  one 
>will almost inevitably get into 'flame wars' once one slips  into 
>that  realm.  There  are no facts of the matter  there,  and  the 
>indeterminacy  is   worse   there   than  anywhere.  Recall  what 
>Skinner said about exactly the same matter  back  in 1984? 

Ah, no concern for you or Skinner "wanting to be right".

>      Why  have  I  not been more  readily  understood?

Poor misunderstood Skinner; if everyone weren't using all these inferior
techniques, they surely would agree with him.  After all, what alternative is
there?

This is not an uncommon complaint: you disagree with me, therefore you
misunderstand me (or haven't read what I've written).  One can be
misunderstood in individual cases; but when you think you are being
misunderstood *in general*, by a large class of intelligent people, it is time
to consider the possibility of paranoia and delusions of grandeur (as well as
of being *wrong* in some significant way).

>     Why  is discussion in the behavioral sciences so  often 
>    personal?  I  do not believe that Einstein,  finding  it 
>    necessary to challenge some basic assumptions of Newton, 
>    alluded to Newton's senility. I do not think that Mendel 
>    and the other early geneticists, discovering facts  that 
>    Darwin  so  badly needed, then accused him  of  "totally 
>    ignoring" the genetic basis of evolution. I do not think 
>    that  those  who propounded the gas laws  for  so-called 
>    ideal   or  perfect  gases  were  condemned  for   their 
>    prejudice against the individual gas molecule.

Or how about Velikovsky?  Or Lysenko?  Or von Däniken?  Or people who write
letters to the editor with methods for trisecting angles or proofs that pi =
3.1416?  *Those* are examples of people who also made these claims of not
being "understood", and compared themselves to proven giants.  Such an
attitude doesn't mean that one is wrong, but it's not a good sign.  Einstein
didn't fret about whether he would be understood, except perhaps in his
opposition to QM, where again being "misunderstood" was a matter of being
*wrong*.

>Let's  have  more  contributions which  might  help  to  progress 
>research programmes and less of the fruitless, rhetorical debate.

Hey, you go first.  *Stop* making sweeping generalizations about "cognitive
scientists", the value of "the cognitive approach", the value of intensional
idioms, what psychologists ought to be doing instead of providing "care and
understanding", etc., etc., or stop claiming that all you are interested in is
the practicality of some rDBMS.
-- 
<J Q B>