Newsgroups: comp.ai,comp.ai.neural-nets,comp.ai.fuzzy,sci.cognitive
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!oitnews.harvard.edu!purdue!lerc.nasa.gov!magnus.acs.ohio-state.edu!math.ohio-state.edu!usc!news.cerf.net!newsserver.sdsc.edu!nic-nac.CSU.net!charnel.ecst.csuchico.edu!csusac!csus.edu!netcom.com!kovsky
From: kovsky@netcom.com (Bob Kovsky)
Subject: Re: Minsky's Interacting Causes
Message-ID: <kovskyDC7Fyt.FHo@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <push-2207950118170001@mind.mit.edu> <kovskyDC4HuB.E55@netcom.com> <3uu4qq$5jk@cantaloupe.srv.cs.cmu.edu>
Date: Mon, 24 Jul 1995 05:16:04 GMT
Lines: 87
Sender: kovsky@netcom21.netcom.com
Xref: glinda.oz.cs.cmu.edu comp.ai:31800 comp.ai.neural-nets:25674 comp.ai.fuzzy:5263 sci.cognitive:8524

Scott Fahlman responded to an article of mine and wrote:
>
>In article <kovskyDC4HuB.E55@netcom.com> kovsky@netcom.com (Bob Kovsky) writes:
>
>	   There is a domain of human experience involving inherently complex
>   phenomena where millions of very smart people have been working for
>   hundreds of years developing analytic techniques.  In that domain,
>   practitioners apply very powerful formalisms; and it could serve as a
>   magnificent arena for study of how to reason about complex phenomena. 
>   Practitioners of "artificial intelligence," however, appear to disdain it. 
>   That domain of experience is "law." 
>
>There have in fact been some attempts to use symbolic AI techniques to
>simulate legal reasoning (in particular cases and small sub-domains)
>and to find precedents that MIGHT be relevant to a given case.  The
>latter is a bit more feasible, since it doesn't have to be right all
>the time, just pretty good.
>
>Unfortunately, legal reasoning is what some people call an "AI
>complete" problem.  To do it at all well, you need to be able to cope
>intelligently with a very large and not well-bounded universe of
>knowledge and experience.  A legal case might depend critically on the
>court's theory of personal responsibility in the presence of mental
>illness -- something that requires a usable model of thinking itself,
>as well as a broad knowledge of abnormal psychology, typical human
>behavior, community standards, and the philosophical arguments on
>which these standards may be based.  Or a case may turn on the
>difficulty of putting on a leather glove after it has been soaked in
>blood and dried, and the interaction of that process with a wearer who
>doesn't want the glove to fit.  If we can get our AI programs to
>understand such things and all the other things like them, then AI
>will be ready to do just about anything a human intelligence can do.
>We're a long way from that right now (though we can build systems that
>can think about any ONE of these things, or a few of them, if we work
>hard enough).
>
>And even within the areas where law seems to be a closed universe, it
>is not logical, clean, and consistent in the way that, say,
>qualitative physics is.  Reasoning formally about law is a bit like
>trying to draw a picture of an amoeba with compass and straightedge.
>The critter is too complex and curvy for that and it keeps changing.
>The law on any given day is a function of which judges have been
>appointed, what mood they are in, and which legislators have been
>bribed by which lobbyists.  In fact, it is in the interest of lawyers
>and legislators to have a system of laws that is much too complex for
>the average person to understand, not to mention the average AI
>program.
>
>So you are right in saying that law is complex enough to provide a
>real test for AI, but I think that it's a final exam.  We won't make
>too much progress on this until we first crack "common sense" and then
>go on to "hellaciously complicated systems that make no sense most of
>the time".
>
>Cheers,
>Scott

	You are, of course, correct about the way in which law draws upon 
the entire realm of human experience.  You missed the point of my 
article, however.   

	Law is a system of reasoning based on rules.  It is a system rich
in formalism.  It is a system that has been applied millions of times by
bright people and by people not so bright.  It is a system that works
pretty well most of the time.

	It is also a system that is not amenable to the techniques of 
AI.  Perhaps this says something about the limitations of AI.  It is AI 
that declares that anything can be reduced to a construction of compass and
straightedge.  Perhaps some different tools are needed.  I have suggested 
some (a structural approach to freedom), but AI dogma rejects such an 
approach without taking it seriously.

	Your cynicism about how legal decisions are made is common, but 
not well founded.  In fact, law is an imperfect instrument; and one 
method to overcome the imperfections is to increase the detail, i.e. 
complexity.  It is, in my opinion, not much subject to "moodiness" and 
only on the rarest occasions infected by bribery.  (I speak of the 
judicial system, not the legislative, where the nature of the influences 
is markedly different.)

	What I am suggesting is that a study of law as a system of formal
reasoning from and with rules might be fruitful.  AI researchers might
learn something new, instead of just announcing that a few more factors of
ten in computer power and program size will, somehow, create a true
general purpose machine. 

