Newsgroups: rec.games.corewar,sci.philosophy.meta,sci.philosophy.tech,comp.ai.alife,alt.philosophy.objectivism
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!newsstand.cit.cornell.edu!portc01.blue.aol.com!newsxfer2.itd.umich.edu!howland.erols.net!EU.net!usenet2.news.uk.psi.net!uknet!usenet1.news.uk.psi.net!uknet!uknet!bcc.ac.uk!news
From: Bjoern Guenzel <b.guenzel@ucl.ac.uk>
Subject: Re: intelligent robots ...
Sender: news@ucl.ac.uk (Usenet News System)
Message-ID: <328C841B.167E@ucl.ac.uk>
Date: Fri, 15 Nov 1996 14:54:19 GMT
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset=us-ascii
References: <847397595.3012.1@ogham.demon.co.uk>
Mime-Version: 1.0
X-Mailer: Mozilla 2.02 (X11; I; AIX 2)
Organization: University College London
Lines: 11
Xref: glinda.oz.cs.cmu.edu sci.philosophy.meta:36589 sci.philosophy.tech:22665 comp.ai.alife:6910

This thread reminds me of another problem. I am thinking, for example,
of an automated contest (like Corewar) where only machines are allowed
to submit entries (for example, the offspring of genetic algorithms).
Now, if it's all done via email, how can you be sure a submission really
comes from a machine? That requires the opposite of a Turing test... Of
course it is simple to think of tests that only computers can pass, like
some ultra-fast calculations, but how can you be sure it is not a human
using a computer just for this partial task?
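One way such a "reverse Turing test" could be sketched (a hypothetical
illustration only, not a real protocol): the contest server issues a
calculation challenge and only accepts the answer if it arrives within a
deadline far too short for an unaided human. All names and the deadline
here are made up for the example.

```python
import time

def make_challenge():
    # Hypothetical challenge: a large list of numbers whose sum of
    # squares must be returned. Trivial for a machine, impossible
    # for a human to compute and type within the deadline.
    return list(range(1, 100001))

def solve(challenge):
    # What a machine entrant would do: answer instantly.
    return sum(n * n for n in challenge)

def verify(answer_fn, deadline_seconds=0.5):
    # Issue the challenge, time the response, and check both the
    # answer and the elapsed time against the deadline.
    challenge = make_challenge()
    start = time.monotonic()
    answer = answer_fn(challenge)
    elapsed = time.monotonic() - start
    expected = sum(n * n for n in challenge)
    return answer == expected and elapsed < deadline_seconds

print(verify(solve))  # a machine answers well within the deadline: True
```

Of course, passing such a test only proves that a computer was involved
somewhere in producing the answer, which is exactly the loophole above:
a human could delegate just this step to a machine.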


Bjoern

