Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!swrinde!howland.reston.ans.net!psinntp!psinntp!psinntp!psinntp!megatest!news
From: Dave Jones <djones>
Subject: Re: Q: NN in safety critical applications ?
Content-Type: text/plain; charset=us-ascii
Message-ID: <DMqM94.Er0@Megatest.COM>
Sender: news@Megatest.COM (News Admin)
Nntp-Posting-Host: pluto
Content-Transfer-Encoding: 7bit
Organization: Megatest Corporation
References: <4fnr16$pss@wmwap1.math.uni-wuppertal.de>
Mime-Version: 1.0
Date: Tue, 13 Feb 1996 23:22:16 GMT
X-Mailer: Mozilla 1.1N (X11; I; SunOS 5.4 sun4m)
X-Url: news:4fnr16$pss@wmwap1.math.uni-wuppertal.de
Lines: 22

Jens van Mahnen <mahnen> wrote:
>
>In my opinion, I'm not sure that neural networks should be implemented in safety
>devices (think of the control of an autopilot in an airplane). I recently
>read an article about the reliability of NNs. In fact, there is no 100%
>reliability.
>
>
>What do you think?
>

There is no 100% reliability in any program or machine man can
devise.  Whether or not to use a particular technology for a given
application, be it a "safety device" or anything else, is often a rather
involved question. Phrases like "design for test", "test coverage",
"failure cost", and "redundancy" come to mind. The problem of an undiscovered
"glitch" in the response function of a neural net is no different in kind from
the problem of a microprocessor that might have an undetected physical or
design flaw causing it to fail on very rare sequences of instructions.
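To make the "redundancy" idea concrete: one standard way to tolerate a rare, undiscovered glitch in any one component (a trained net, a buggy program, a flaky chip) is to run several independently built channels and take a majority vote. The sketch below is purely illustrative -- the controllers and the glitch are hypothetical, not from any real system.

```python
# A minimal sketch of majority-vote redundancy. Three independent
# "controllers" compute the same function; one has a rare glitch.
# The voter outvotes the faulty channel, and fails safe (raises)
# if no majority exists. All names here are made up for illustration.

from collections import Counter

def majority_vote(outputs):
    """Return the most common output among redundant channels.

    Raises RuntimeError if no strict majority agrees, so the caller
    can fall back to a safe state instead of trusting a bad value.
    """
    winner, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority -- fail safe")
    return winner

# Three redundant channels; controller_c glitches at one rare input.
def controller_a(x): return round(x * 2)
def controller_b(x): return round(x * 2)
def controller_c(x): return round(x * 2) + (1 if x == 7 else 0)

for x in range(10):
    result = majority_vote([controller_a(x), controller_b(x), controller_c(x)])
    assert result == round(x * 2)  # the glitch at x == 7 is outvoted
```

The same scheme works whether the channels are neural nets, conventional code, or hardware -- the point is that the voter's reliability no longer depends on any single channel being perfect.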

                Dave

