Newsgroups: alt.christnet.philosophy,alt.philosophy.jarf,alt.philosophy.objectivism,alt.philosophy.zen,comp.ai.philosophy,sci.philosophy.meta,sci.philosophy.tech,talk.philosophy.humanism,talk.philosophy.misc
Path: cantaloupe.srv.cs.cmu.edu!rochester!rit!isc-newsserver!csh-newsserver.csh.rit.edu!pat
From: pat@csh.rit.edu (Pat (Jedi In Training))
Subject: Re: The Search For Truth
Message-ID: <L6#HID.mzA@csh-newsserver.csh.rit.edu>
Sender: news@ultb.isc.rit.edu (USENET News System)
Nntp-Posting-Host: mcp.csh.rit.edu
Organization: Computer Science House @ Rochester Institute of Technology
References: <mike.799620809@mik.uky.edu> <3ofo48$hsc@lyra.csx.cam.ac.uk> <Pine.SUN.3.91.950508090923.19120A-100000@ucsbuxa.ucsb.edu> <799953165snz@longley.demon.co.uk>
Distribution: inet
Date: Tue, 9 May 1995 19:48:51 GMT
Lines: 50
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:27855 sci.philosophy.meta:17943 sci.philosophy.tech:17939

In article <799953165snz@longley.demon.co.uk> David@longley.demon.co.uk writes:
>3) could someone  clarify for  me  the implications  of  Godel's  theorem for 
>   the potential  of intelligent  (ie  Prolog  like/deductive ) front ends to
>   relational databases? At  present, many of  us  spend a lot  of  time  and
>   effort programming in  4GLs  to retrieve  (deductively  derive)  relations
>   from data, and the addition of such technology (if it is in fact possible)
>   would be a great help.

    While I'm not fully aware (yet -- maybe next month) of the
    AIs employed in deductive derivation from relational databases,
    from what I do know:

    The AIs employed would each have their own sets of rules governing
    what sorts of relationships they'll be able to recognize and what
    information (and other relationships) they'll be able to deduce
    from there.  Godel's work implies that (at least) one of the following
    is true for each AI (or set of AIs) employed in the task:
	1) it (or they) can deduce information or relationships
	    that contradict other information or relationships it
	    (or they) deduced before.
	2) it (or they) will never deduce some information or
	    relationships that an encompassing system could.
	    (The crux of Godel Undecidability is that if you
	    ever manage to encompass the problem space fully,
	    you'll introduce problem #1).
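
    To make problem #2 concrete, here's a minimal Python sketch (my own
    toy, not any real 4GL or Prolog engine -- the relation names and
    facts are made up): a forward-chainer whose fixed rule set caps,
    once and for all, what it can ever derive.

```python
# Toy forward-chaining deducer.  Its single rule determines which
# relationships it can recognize -- problem #2 in action.
facts = {("parent", "ann", "bob"), ("parent", "bob", "cy")}

def deduce(facts):
    """Apply the one rule we gave it until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        # Rule: parent(X, Y) and parent(Y, Z)  ==>  grandparent(X, Z)
        for (r1, x, y) in list(derived):
            for (r2, y2, z) in list(derived):
                if r1 == "parent" and r2 == "parent" and y == y2:
                    new = ("grandparent", x, z)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

closed = deduce(facts)
print(("grandparent", "ann", "cy") in closed)   # True: the rule covers it
print(("ancestor", "ann", "cy") in closed)      # False: true, but no rule for it
```

    The "ancestor" relationship really does hold between ann and cy,
    but this system will never say so -- it accepts problem #2 rather
    than risk problem #1.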
    
    So... if the goal of your efforts is to find specific relationships
    in the data, then Godel doesn't particularly apply.  In a situation
    where I only care that 1+0 is one, I don't need a system that tells
    me what 1+1 is or 0+1 or 0+0.  I'm willingly accepting problem #2.  I
    concede that 1+1 might be a valid question, but my system doesn't answer
    it.
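
    That attitude fits in a few lines of Python (purely illustrative):

```python
# A "system" that answers exactly the one question it was built for,
# willingly accepting problem #2 for everything else.
table = {(1, 0): 1}              # the only sum we care about

def add(a, b):
    return table.get((a, b))     # None means "valid question, not answered"

print(add(1, 0))  # 1
print(add(1, 1))  # None -- conceded as unanswered, never contradicted
```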

    If the goal of your efforts is to find *all* relationships in the data,
    you were doomed from the start (unless you started before 1931 8^) ).

    If the database contains all of the ways a tic-tac-toe board could be
    set up, your AIs may never "think" to group them into a game-tree.
    But, for a human (post-1950 mathematician at least), it may seem
    an obvious relationship.
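
    A sketch of that tic-tac-toe point (hypothetical, in Python): the
    raw table of grids contains no tree at all -- the "successor"
    relationship is structure a human brings to the data.

```python
from itertools import product

# Every 3x3 fill of blank/X/O -- the flat "database", legal or not.
boards = list(product(" XO", repeat=9))
print(len(boards))  # 19683 = 3**9 rows, no game-tree in sight

def successors(board):
    """The game-tree edge a human imposes: the player to move fills
    one blank square.  Nothing in the raw table mentions this."""
    mover = "X" if board.count("X") == board.count("O") else "O"
    return [board[:i] + (mover,) + board[i + 1:]
            for i, c in enumerate(board) if c == " "]

empty = (" ",) * 9
print(len(successors(empty)))  # 9 opening moves for X
```

    An AI whose rules only ever compare rows cell-by-cell has no reason
    to invent successors(); the grouping is obvious only once you
    already think in game trees.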

    hope this didn't muddy the waters,
    pat

-- 
Hope y'all find it weird, useful, useless, giggleworthy (look ma, no 
grammar), or at least find it (if you lose it, there may be something 
wrong with your computer.)
		-- pali151@netcom.com

