Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!news.sprintlink.net!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: More FAQs
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <D3Fy9C.Bpp@unx.sas.com>
Date: Fri, 3 Feb 1995 20:29:36 GMT
References:  <M6WWBYOS@math.fu-berlin.de>
Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.
Lines: 32


In article <M6WWBYOS@math.fu-berlin.de>, gc@pip.fpms.ac.be (Gustavo Calderon) writes:
|>      Why does a neuron need a bias input?

This question has been getting asked an _awful_ lot lately. I propose
adding Scott's answer to the FAQ:

One way of looking at this is that the inputs to each unit in the net
define an N-dimensional space, and the unit draws a hyperplane through
that space, producing an "on" output on one side and an "off" output
on the other.  (With sigmoid units the plane will not be sharp --
there will be some gray area of intermediate values near the
separating plane -- but ignore this for now.)
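To make this concrete, here is a minimal sketch (mine, not Scott's) of a
single sigmoid unit: the weights w define the plane w.x = 0, with outputs
near 1 on one side, near 0 on the other, and a gray area of intermediate
values close to the plane:

```python
import math

def unit(x, w):
    """A single sigmoid unit: output = sigmoid(w . x)."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-net))

# The weights define the hyperplane w . x = 0 in the input space.
w = [1.0, -1.0]
print(unit([3.0, 0.0], w))   # far on the "on" side  -> close to 1
print(unit([0.0, 3.0], w))   # far on the "off" side -> close to 0
print(unit([1.0, 1.0], w))   # exactly on the plane  -> 0.5 (gray area)
```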

The weights determine where this hyperplane is in the input space.
Without a bias input, this separating plane is constrained to pass
through the origin of the hyperspace defined by the inputs.  For some
problems that's OK, but in many problems the plane would be much more
useful somewhere else.  If you have many units in a layer, they share
the same input space and without bias would ALL be constrained to pass
through the origin.
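A one-input sketch of the difference (again my illustration, not from the
original post): suppose the unit should be "on" for x > 2 and "off" for
x < 2. Without a bias the separating point is stuck at the origin; a bias
weight lets the plane sit where the problem needs it:

```python
import math

def unit(x, w, b=0.0):
    # sigmoid(w*x + b); b is the weight on the bias input
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Without bias the unit can only separate at x = 0, so x = 1
# fires "on" even though it should be "off" for this task:
print(unit(1.0, w=5.0))           # near 1: wrong side

# With w = 5 and bias b = -10, the separating point moves to x = 2:
print(unit(1.0, w=5.0, b=-10.0))  # near 0: correctly "off"
print(unit(3.0, w=5.0, b=-10.0))  # near 1: correctly "on"
```

Note that the bias is usually implemented as an extra input clamped to 1,
so b is just another trainable weight.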

|>      Why do we need activation functions?

This one pops up every once in a while, too, and I remember that
somebody, maybe Scott again, posted a nice answer to it.

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
