Article 5372 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!att!linac!uwm.edu!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Physical Symbol Systems Hypothesis
Message-ID: <1992May2.190707.26464@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992May2.031108.7475@beaver.cs.washington.edu>
Date: Sat, 2 May 92 19:07:07 GMT

In article <1992May2.031108.7475@beaver.cs.washington.edu> rex@cs.washington.edu (Rex Jakobovits) writes:

>I am looking for arguments against Newell & Simon's Physical Symbol
>Systems Hypothesis (PSSH): "A physical symbol system has the necessary
>and sufficient means for general intelligent action."  Elaine Rich
>declares this to be the underlying assumption "at the heart of
>research in ai".

I find the PSSH much too vague and poorly-specified to be useful.
There are two major problems: (1) lack of clarity in what's
meant by "symbol" and "symbol system"; (2) problems with the
"necessary and sufficient means" clause.

Taking the first problem first, the word "symbol" is usually
ambiguous between two meanings: there's the "syntactic"
sense, in which it refers to primitive computational tokens
(e.g. a mark on a Turing-machine tape), and there's the "semantic"
sense, in which it's more or less synonymous with "representation".

So if one construes "symbol" in the syntactic sense, then the
PSSH is more or less equivalent to the hypothesis that computational
AI is possible, and it doesn't distinguish between different kinds
of computation, e.g. traditional "symbolic" AI and connectionist AI
(as even connectionist networks have primitive computational tokens).

"Symbol" is more often construed as the conjunction of the syntactic
and semantic senses: i.e. a symbol is a primitive token that
represents.  Under this construal, the PSSH actually has some
bite: it lets in traditional "symbolic" systems, while excluding
e.g. distributed connectionist systems (whose computational tokens
are not representations, and whose representations are not
computational tokens; so these systems have no "symbols" in the
strong sense).
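To make the contrast concrete, here's a toy sketch (my own
illustration; the data, the names, and the decomposability test are
all invented for the example, and nothing here comes from Newell and
Simon):

```python
# A "symbolic" system: the representation of a cat is a single atomic
# token, so the computational token and the representation coincide.
symbolic_memory = ["CAT", "MAT", "ON"]

# A distributed connectionist system: the representation of a cat is a
# pattern of activation spread across many units.  Each unit's value is
# a computational token, but no single unit represents "cat"; and the
# representation (the whole vector) is not itself a primitive token.
distributed_cat = [0.8, 0.1, 0.4, 0.9, 0.2, 0.7]

def is_atomic_representation(rep):
    # Hypothetical test: count a representation as "atomic" (a symbol
    # in the strong sense) if it is not decomposable into smaller
    # computational parts -- here, modeled crudely as "not a vector".
    return not isinstance(rep, list)

print(is_atomic_representation("CAT"))            # True:  a symbol
print(is_atomic_representation(distributed_cat))  # False: no symbol
```

The point of the toy is just that in the second system the level of
computational tokens and the level of representations come apart.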

Different people construe the PSSH in either of these two ways, and
it's unclear to me which one Newell and Simon meant.  Some evidence
for the second interpretation is given by their stipulation (in
"Computer Science as Empirical Inquiry") that a symbol must
(a) designate, and (b) be atomic.  This is the way in which I, along
with many connectionists, have usually construed it (so that
one can believe in strong AI but still disbelieve PSSH).  However,
in more recent writings Simon has taken pains to point out that
connectionist AI is compatible with PSSH, effectively by dropping
the requirement that symbols be atomic (so even a distributed
representation can be a "symbol").  This actually leads to a third
interpretation of PSSH, where there is no commitment to
computational tokens, only to representations.

I don't know where this leaves things, exactly.  On the strong
construal, PSSH at least had some bite, even though it's
probably false.  On the weaker construals, it doesn't seem much
different from an avowal of the possibility of AI; or maybe
it's even weaker.  Personally, I can't see how an intelligent
system could *fail* to be a PSS on the last construal, as all
that's required is that it (a) be physical, and (b) have
representations.  Not much of a constraint.

Maybe someone who knows more about Newell and Simon's intentions,
e.g. Minsky or McDermott, could comment on this.

The invocation of "necessary and sufficient means" is also
very unclear.  Obviously, not *every* PSS will be intelligent.
On closer examination, by "sufficient" they mean that every PSS
*can be extended* into a system capable of intelligent action.
As far as I can tell, that's true of rocks as well.  As for the
"necessity" clause, that seems to be the kind of claim that, if
true, is probably not empirical but conceptual.  Newell and
Simon want to construe PSSH as an empirical claim about
intelligence, but it's not clear to me that it's empirical in
this form.

Instead of talking about "necessary and sufficient means", it
might be more useful to formulate PSSH something like this:

(1) Humans are physical symbol systems.
(2) There exists a certain class of physical symbol systems,
such that anything which instantiates a symbol system in
that class will be intelligent.

This is probably not a million miles from what Newell and
Simon had in mind, and it's a lot clearer.

>What are your opinions about this?  Do non-symbolic systems such as
>connectionist nets and Brooksian creatures exhibit intelligence,
>thereby invalidating the PSSH?  Is there evidence that the human
>subconscious does not resort to symbolic processing?  Does this imply
>that the power of symbolic level processing is inherently limited?

This all depends on how one construes "symbol"; see above.  If
"symbol" = "computational token", then connectionist networks and
Brooks's robots are PSS's.  If "symbol" = "computational token
that represents", then they are not.  If "symbol" = "representation",
then connectionist networks are PSS's, and Brooks's systems are
or are not depending on whether one believes or disbelieves Brooks's
claim that his systems don't have representations.
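The taxonomy in the last paragraph can be summarized in a toy table
(my own gloss; the feature names are invented, and the entries for the
connectionist case assume the distributed-representation story above):

```python
# Classify a system as a PSS under each of the three construals of
# "symbol" discussed above.
def is_pss(system, construal):
    if construal == "token":      # "symbol" = computational token
        return system["has_tokens"]
    if construal == "token+rep":  # strong sense: token that represents
        return system["tokens_represent"]
    if construal == "rep":        # weakest sense: "symbol" = representation
        return system["has_representations"]

connectionist_net = {"has_tokens": True,
                     "tokens_represent": False,
                     "has_representations": True}

print(is_pss(connectionist_net, "token"))      # True
print(is_pss(connectionist_net, "token+rep"))  # False
print(is_pss(connectionist_net, "rep"))        # True
```

On this gloss, whether Brooks's systems count under the weakest
construal turns entirely on the "has_representations" entry, which is
exactly what's in dispute.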

On similar lines, human subconscious processing almost certainly
uses representations; it's much less clear that it uses atomic
representations of the kind that traditional AI posits.

>Can one be skeptical of the PSSH and still believe in AI?

On the strong construal, certainly -- e.g. by believing in the
possibility of connectionist AI, or more generally in AI where
the computational level falls below the representational level.
On the weak construal where "symbol" = "computational token",
maybe not.  On the weak construal where "symbol" =
"representation", then arguably yes, e.g. if one is Rod Brooks
or Stephen Stich, although most people find the notion of
intelligence without representation very implausible.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


