Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!news.mathworks.com!news.alpha.net!uwm.edu!vixen.cso.uiuc.edu!howland.reston.ans.net!ix.netcom.com!netcom.com!nagle
From: nagle@netcom.com (John Nagle)
Subject: Re: Dennett versus Searle
Message-ID: <nagleD5JMxt.46F@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <3k2agh$i87@mp.cs.niu.edu> <OZ.95Mar14152024@nexus.yorku.ca> <3k57hg$da@mp.cs.niu.edu> <1995Mar15.033421.18223@news.media.mit.edu>
Date: Thu, 16 Mar 1995 17:22:41 GMT
Lines: 22
Sender: nagle@netcom15.netcom.com

minsky@media.mit.edu (Marvin Minsky) writes:
>On a dimly related topic, there is a great theorem of Moore, Shannon,
>DeLeeuw and Shapiro (published in Automata Studies, around 1956) which
>shows that if we introduce a binary (coin-tossing) probabilistic
>variable into a Turing Machine's input then the resulting computation
>is, in a strong asymptotic sense, computable--if the coin's
>probability itself is a computable real number.  [It would take too
>long to explain precisely in what sense, but the paper explains it
>clearly.]

>Again, though, there's no way to know if a coin is "computable" so
>Rickert would be right about this one, too--but again I don't see much
>philosophical significance here.

      A practical implication of that result is that some algorithms
which have worst-case exponential time but better-than-exponential
average-case time can have their expected worst-case time reduced by
introducing a random variable.  Linear programming is an example.  If,
at a vertex where all the outgoing edges look equally good, you break
the tie at random rather than deterministically, performance on the
pathological cases improves.
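A small sketch of the same principle, using randomized quicksort rather than linear programming (a simplex sketch would run long, but the idea is identical: a deterministic choice rule can be forced into its worst case by an adversarial input, while a random choice makes that input no worse than any other in expectation).  All names here are illustrative, not from the original post:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def quicksort(a, pick_pivot):
    """Return (sorted list, partition cost) using the given pivot rule.

    Cost is counted as len(a) per partitioning step -- a proxy for
    the number of comparisons performed at that level.
    """
    if len(a) <= 1:
        return list(a), 0
    pivot = a[pick_pivot(len(a))]
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    sl, cl = quicksort(less, pick_pivot)
    sg, cg = quicksort(greater, pick_pivot)
    return sl + equal + sg, cl + cg + len(a)

first = lambda n: 0                    # deterministic rule: always take the first element
coin  = lambda n: random.randrange(n)  # randomized rule: choose uniformly at random

# Already-sorted input is pathological for the deterministic rule:
# every partition is maximally lopsided, so cost grows quadratically.
worst = list(range(300))

det_sorted, det_cost = quicksort(worst, first)
rnd_sorted, rnd_cost = quicksort(worst, coin)

print(det_cost, rnd_cost)  # deterministic cost is far larger on this input
```

The deterministic rule pays roughly n^2/2 on the sorted input, while the randomized rule pays about 2n ln n in expectation on every input, because no single input is pathological for it.  The caveat from the Moore/Shannon/DeLeeuw/Shapiro result still applies: the improvement is in expectation, over the coin tosses.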

					John Nagle
