Newsgroups: comp.ai,comp.ai.edu,comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!udel!gatech!howland.reston.ans.net!news.sprintlink.net!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: perceptrons
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <D6uGCo.Jns@unx.sas.com>
Date: Tue, 11 Apr 1995 00:07:36 GMT
Distribution: usa
X-Nntp-Posting-Host: hotellng.unx.sas.com
References:  <D6IM6C.48s@ssbunews.ih.att.com>
Organization: SAS Institute Inc.
Lines: 14
Xref: glinda.oz.cs.cmu.edu comp.ai:28957 comp.ai.edu:2438 comp.ai.neural-nets:23372


In article <D6IM6C.48s@ssbunews.ih.att.com>, flb@odutsa.nw.att.com (blackmond) writes:
|> If the activation function of all hidden units is linear, show that
|> a multilayer perceptron is equivalent to a single-layer perceptron.

It isn't necessarily equivalent. For extra credit on your homework, consider what
happens if the number of hidden units is less than both the number of
inputs and the number of outputs.
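
A minimal sketch of the idea, assuming NumPy and arbitrarily chosen layer
sizes (none of this is from the original post): with linear hidden units
the net computes y = W2 (W1 x) = (W2 W1) x, and the rank of the composite
weight matrix W2 W1 can be no larger than the number of hidden units, so a
bottlenecked linear net cannot reproduce an arbitrary single-layer
perceptron.

import numpy as np

n_in, n_hid, n_out = 5, 2, 4              # hidden layer smaller than inputs and outputs
rng = np.random.default_rng(0)

W1 = rng.standard_normal((n_hid, n_in))   # input -> hidden, linear (no activation)
W2 = rng.standard_normal((n_out, n_hid))  # hidden -> output, linear

W = W2 @ W1                               # equivalent single-layer weight matrix
print(np.linalg.matrix_rank(W))           # at most n_hid = 2, not min(n_in, n_out) = 4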

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
