From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usenet.coe.montana.edu!news.u.washington.edu!milton.u.washington.edu!forbis Thu Feb 20 15:20:49 EST 1992
Article 3736 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usenet.coe.montana.edu!news.u.washington.edu!milton.u.washington.edu!forbis
From: forbis@milton.u.washington.edu (Gary Forbis)
Subject: Re: Where lies the hardware break even point ?
Message-ID: <1992Feb14.165029.5597@u.washington.edu>
Sender: news@u.washington.edu (USENET News System)
Organization: University of Washington, Seattle
References: <uh311ae.698027538@sunmanager>
Date: Fri, 14 Feb 1992 16:50:29 GMT

In article <uh311ae.698027538@sunmanager> uh311ae@sunmanager.LRZ-Muenchen.DE (Henrik Klagges) writes:
>Let's assume a kind of strong AI will be possible (I can hear you
>laugh 8-). Let's also assume that a kind of generalized neural 
>network architecture will be appropriate. 
>At which level of hardware complexity do you expect semi-intelligence
>to emerge ? Current technology allows 1k random-wired synapses, each 
>updated about 25 times/s per $, with reasonable scaling properties.
>So, a million $ would give you a billion connections that get updated
>in (biological) real time.
>
>Cheers, Henrik
>IBM Research
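
Taking Henrik's figures at face value, the arithmetic works out like
this (a quick sanity check in Python; only the per-dollar and
per-second numbers are his, the rest is multiplication):

# Back-of-the-envelope check of the quoted figures.
synapses_per_dollar = 1000   # "1k random-wired synapses ... per $"
updates_per_second = 25      # "updated about 25 times/s"
budget = 1_000_000           # "a million $"

synapses = synapses_per_dollar * budget
print(synapses)                       # 1000000000 -- a billion connections
print(synapses * updates_per_second)  # 2.5e10 connection-updates per second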

As an armchair hobbyist, I'll give my feeling here about what level of
complexity will set an upper limit.

Any specific neural net is a special case of a maximally connected net
with the same number of inputs, outputs, and other units: just set the
weights of the absent connections to zero.  To the extent that a net can
be subdivided into subnets with well-defined inputs and outputs, the
number of connections can be reduced.
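
To make that concrete, here is a small sketch (Python, with made-up
dimensions) of a specific topology recovered from a maximally connected
layer by pinning the absent weights at zero:

import numpy as np

n_in, n_out = 4, 3

# Maximally connected layer: every input feeds every output.
W_full = np.random.randn(n_out, n_in)

# A specific topology is just a 0/1 mask over the full weight matrix;
# the missing connections are weights held at zero.
mask = np.array([[1, 0, 0, 1],
                 [0, 1, 0, 0],
                 [1, 1, 1, 0]])
W_specific = W_full * mask

x = np.random.randn(n_in)
y = np.tanh(W_specific @ x)  # the sparse net is the full net with zeros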

Nothing less than a maximally connected net can be guaranteed to be
trainable to match any given less-than-maximally-connected trained net.
Without knowing in advance which connections are necessary, a
random-wired net may simply not do.
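
A toy version of the argument (the dimensions and the 50% wiring
probability are arbitrary, not anything from Henrik's hardware): if the
function to be matched depends on a connection the random wiring
happened to omit, the best achievable fit has an error floor that no
amount of training can remove.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_samples = 8, 200

# The trained net to be matched depends critically on input 0.
w_target = np.zeros(n_in)
w_target[0] = 1.0
X = rng.standard_normal((n_samples, n_in))
y = X @ w_target

# Random wiring: each connection kept with probability 1/2.
mask = rng.random(n_in) < 0.5

# Best least-squares fit using only the wired connections.
w_fit = np.zeros(n_in)
w_fit[mask] = np.linalg.lstsq(X[:, mask], y, rcond=None)[0]

err = np.mean((X @ w_fit - y) ** 2)
print(mask[0], err)  # if input 0 is unwired, err never reaches zero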

--gary forbis@u.washington.edu


