Article 3720 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!darwin.sura.net!jvnc.net!yale.edu!spool.mu.edu!sol.ctr.columbia.edu!ira.uka.de!fauern!LRZnews!sunmanager!uh311ae
From: uh311ae@sunmanager.LRZ-Muenchen.DE (Henrik Klagges)
Newsgroups: comp.ai.philosophy
Subject: Where lies the hardware break-even point?
Message-ID: <uh311ae.698027538@sunmanager>
Date: 14 Feb 92 00:32:18 GMT
Sender: news@news.lrz-muenchen.de (Mr. News)
Organization: Leibniz-Rechenzentrum, Muenchen (Germany)
Lines: 11

Let's assume some kind of strong AI will be possible (I can hear you
laugh 8-). Let's also assume that some kind of generalized neural
network architecture will be appropriate.
At which level of hardware complexity do you expect semi-intelligence
to emerge? Current technology buys about 1k random-wired synapses,
each updated about 25 times/s, per $, with reasonable scaling
properties. So a million $ would give you a billion connections that
get updated in (biological) real time.
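For the skeptical reader, a quick sanity check of the arithmetic above
(a sketch in Python; the per-dollar figures are the post's own
assumptions, not measurements):

```python
# Figures assumed in the post: 1k random-wired synapses per dollar,
# each updated about 25 times per second (roughly biological real time).
SYNAPSES_PER_DOLLAR = 1_000
UPDATES_PER_SECOND = 25
BUDGET_DOLLARS = 1_000_000

# Total connections the budget buys, and aggregate update throughput.
connections = SYNAPSES_PER_DOLLAR * BUDGET_DOLLARS
total_updates_per_second = connections * UPDATES_PER_SECOND

print(f"{connections:.0e} connections")              # 1e+09 -> a billion
print(f"{total_updates_per_second:.0e} updates/s")   # 2.5e+10
```

So the claim holds: at these assumed prices, $1M buys 10^9 connections,
all refreshed at a biologically plausible rate.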

Cheers, Henrik
IBM Research


