Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!news.sprintlink.net!howland.reston.ans.net!math.ohio-state.edu!magnus.acs.ohio-state.edu!csn!stortek!chrisk
From: chrisk@gomez.stortek.com (Chris Kostanick)
Subject: Re: Computers do not exercise freedom
Message-ID: <chrisk.805740009@gomez>
Sender: news@stortek.com
Organization: Storage Technology Corporation
References: <3u03sc$780@newsbf02.news.aol.com> <kovskyDBnz61.GGE@netcom.com>
Date: Fri, 14 Jul 1995 16:40:09 GMT
Lines: 56

kovsky@netcom.com (Bob Kovsky) writes:

>Mickwest wrote:
>>
>>I would like to ask the opponents of Strong AI for an example of a quality
>>that a human mind has that a computer could not possibly have, even with a
>>very advanced technology.
[snip]
>     Human beings exercise freedom.  Freedom is the capacity to
>act in a situation where structure is only partially defined. 
>Computers (at least as presently available) require fully defined
>structures.  Hence no computer can exercise freedom.  Indeed,
>what is "mechanical" is not "free" and vice-versa.

Watching people, I rarely see humans exercise any large degree of
freedom. What I do see most of the time is people making choices
from some list. (Whether the list is explicit or not seems a minor
point to me.) Surely we could write a planning program that would
try to move toward the overall goal by choosing among some list
of possibilities. If we then add a module to try to create the list,
we have something that does a lot of what human activity is.
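In rough Python, such a choose-from-a-list planner might look like the
toy sketch below. The option generator, the scoring function, and the
numeric "states" are all made up for illustration; a real planner would
have far richer versions of both modules.

```python
def generate_options(state):
    # Hypothetical option-generating module: propose candidate
    # successor states for the current state.
    return [state + step for step in (1, 2, 3)]

def score(option, goal):
    # Prefer options that land closer to the goal.
    return -abs(goal - option)

def plan(start, goal, max_steps=20):
    # Repeatedly generate a list of possibilities and choose
    # the best-scoring one, moving toward the overall goal.
    state = start
    path = [state]
    for _ in range(max_steps):
        if state == goal:
            break
        options = generate_options(state)
        state = max(options, key=lambda opt: score(opt, goal))
        path.append(state)
    return path
```

Whether the list is generated on the fly or fixed in advance is, as
noted above, a minor point: the choosing machinery is the same.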

>     Because freedom is exercised in situations where structure
>is only partially defined, it is impossible to describe a
>fully-structured situation where freedom is exercised.  It is,
>however, possible to describe a class of situations where
>structures are partially defined and freedom is exercised.  Such
>a class is presented where the task is the integration of two
>structures, but where there does not exist a general structure
>comprehending the two structures.

Why would things need to be fully structured? The point of a heuristic
is to try to get a good answer, when the best answer is computationally
intractable. If you add the ability to make attempts at a solution, see that
they fail, and then back off to the start state and try again, you can do
a lot when the perfect answer is hard to find.
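The attempt-fail-back-off loop is just backtracking search. Here is a
small self-contained example on the N-queens puzzle (my own toy code,
not anything from the AI literature specifically): each recursive call
makes an attempt, and when every extension of a partial attempt fails,
it backs off and tries a different choice at the earlier level.

```python
def solve_queens(n, cols=()):
    # cols[r] is the column of the queen placed on row r so far.
    row = len(cols)
    if row == n:
        return cols                      # full solution found
    for col in range(n):
        # Attempt: does this column conflict with any earlier queen?
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            result = solve_queens(n, cols + (col,))
            if result is not None:
                return result            # the attempt worked out
    return None                          # every attempt failed: back off
```

No step here requires a fully defined structure ahead of time; the
program discovers which partial structures are dead ends by trying them.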

>     Here is a detailed example.  

[example snipped]

This just looks like planning under multiple constraints to me. Since
this is a task that humans do imperfectly, I'm not going to insist that
an AI perform it perfectly. 

As a counterexample, there is work going on in software for an
autonomous Mars rover. Since the light-time delay from Mars to Earth and
back can be half an hour or so, the rover must be able to move and take
care of itself without human intervention. It even needs a rudimentary
idea of what is "interesting" so as to figure out what to examine next.
Since the robot has conflicting goals (cover ground, protect itself, find
novelty), the planner module has to figure out how to resolve these so
as to achieve the overall goal, which is to explore Mars.
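One simple way such conflicts can be resolved is a weighted utility
score. To be clear, this is my own toy sketch of the idea, not the
actual rover software; the goal names, weights, and candidate actions
are all invented for illustration.

```python
# Made-up relative priorities for the rover's conflicting goals.
WEIGHTS = {"cover_ground": 1.0, "protect_self": 3.0, "find_novelty": 2.0}

def utility(action):
    # Collapse the per-goal scores of a candidate action into one number.
    return sum(WEIGHTS[goal] * value
               for goal, value in action["scores"].items())

def choose_action(candidates):
    # Resolve the conflict by taking the highest-utility candidate.
    return max(candidates, key=utility)

candidates = [
    {"name": "drive_north",
     "scores": {"cover_ground": 0.9, "protect_self": 0.2, "find_novelty": 0.1}},
    {"name": "examine_rock",
     "scores": {"cover_ground": 0.0, "protect_self": 0.8, "find_novelty": 0.9}},
    {"name": "recharge",
     "scores": {"cover_ground": 0.0, "protect_self": 1.0, "find_novelty": 0.0}},
]
```

Nothing in this scheme requires the situation to be fully structured in
advance; the rover just needs scores for the options it can see.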

Chris Kostanick

