Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!gatech!rutgers!uwvax!sinetnews!news.u-tokyo.ac.jp!news.tisn.ad.jp!riksun!ankara!lbl
From: lbl@ankara.riken.go.jp ()
Subject: Re: Inverse, Reverse, ... ?
Message-ID: <1994Nov21.061118.18496@riksun.riken.go.jp>
Sender: news@riksun.riken.go.jp
Nntp-Posting-Host: ankara
Reply-To: lbl@ankara.riken.go.jp
Organization: Sun Microsystems
Date: Mon, 21 Nov 1994 06:11:18 GMT
Lines: 63

 saswss@hotellng.unx.sas.com (Warren Sarle) writes:

>To invert (everybody seems to be calling this inversion) the net,
>you use the same statements as before for computing the network
>outputs. But instead of using the training criterion, you tell NLP
>to fix the weights and to constrain each output to the desired value
>(these are nonlinear constraints). Then you invent some objective
>function and tell NLP to optimize it with respect to the inputs.

We have developed two algorithms for inverting feedforward neural nets 
using nonlinear programming (NLP) and linear programming (LP) 
techniques, and presented them in 

   Lu, B. L, Kita, H. and Nishikawa, Y.: "A New Method for Inverting
   Nonlinear Feedforward Networks", Proc. of IEEE International 
   Conference on Industrial Electronics, Control and Instrumentation
   (IECON'91), pp.1349-1354, Kobe, Japan, 1991.
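
To make the idea concrete, here is a minimal NumPy sketch of inversion in
the spirit of the formulation quoted above (weights held fixed, inputs
free), using plain gradient descent with a squared-error penalty in place
of the hard output constraints. The weights are random stand-ins rather
than a trained net, and all names are mine, not from the papers:

```python
# Minimal sketch of network inversion: hold the weights fixed and
# adjust the *inputs* until the output matches a desired value.
# A penalty term stands in for the hard output constraint; the
# weights are random stand-ins, not a trained net.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2), h

# Pick the target as the output of a known input, so that an
# exact inverse is guaranteed to exist.
y_target = forward(np.array([1.0, -0.5]))[0]

x = np.array([0.0, 0.0])                # initial guess for the input
for _ in range(5000):
    y, h = forward(x)
    dy = (y - y_target) * y * (1 - y)   # grad of 0.5||y - y*||^2 at output
    dh = (W2.T @ dy) * h * (1 - h)      # backprop through hidden layer
    x -= 0.5 * (W1.T @ dh)              # gradient step on the inputs only

print("recovered output:", forward(x)[0], "target:", y_target)
```

Note that this finds one inverse among the (generally many) inputs that
map to the target output; which one you get depends on the initial guess.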

>This is very simple to do, 

This may be true for inverting small nets, e.g., the XOR net. But it seems 
difficult to invert large-scale nets by NLP techniques, because doing so 
requires solving large-scale NLP problems. To overcome this drawback, we 
have formulated the inverse problem as a separable nonlinear programming 
(SNLP) problem. An SNLP problem is an NLP problem in which the objective 
function and the constraint functions can each be expressed as a sum of 
functions of a single variable. An important advantage of SNLP over 
general NLP is that an SNLP problem can be solved by a variant of the 
simplex method, a common and efficient technique for solving LP problems. 
More information can be found in 

   Lu, B. L., Kita, H. and Nishikawa, Y.: "Inversion of Feedforward
   Neural Networks by a Separable Programming", Proc. of World
   Congress on Neural Networks, vol. 4, pp.415-420, Portland, 1993.
or
   Lu, B. L.: "Architectures, Learning and Inversion Algorithms
   for Multilayer Neural Networks", Ph.D. Thesis, Dept. of
   Electrical Engineering, Kyoto University, Jan. 1994.

>but my problem is that I have little
>information on what objective functions on the inputs would be
>useful in real-life applications,

We have applied our inversion algorithms to examining and improving the
generalization capability of trained nets, and to solving inverse
kinematics problems for redundant manipulators. For example, by setting
different objective functions we obtain two kinds of network inversions:
IMSI (Inversion unilaterally Minimizing or Maximizing a Single Input
variable) and INSI (Inversion Nearest the Specified Input value). Hence,
for a desired end-effector position we can find multiple inverse
kinematic solutions for a redundant manipulator, instead of only one.
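
As a toy illustration of the INSI objective, take a "net" that is a
single sigmoid unit, y = sigmoid(x1 + x2), so the inverse set for a
target output is a known straight line and the INSI answer (the point on
that line nearest a specified input) can be checked by hand. The penalty
weight and step size are my choices, not values from the papers:

```python
# INSI sketch: among all inputs that invert the net (y = y_target),
# find the one nearest a specified input x_spec.  The hard output
# constraint is approximated by a quadratic penalty with weight mu.
# Toy net: one sigmoid unit y = sigmoid(x1 + x2), whose inverse set
# is the line x1 + x2 = logit(y_target).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.0, 1.0])
y_target = 0.7
x_spec = np.array([0.0, 0.0])       # stay as close to this input as possible
mu = 1000.0                         # penalty weight on the output constraint

x = x_spec.copy()
for _ in range(10000):
    y = sigmoid(w @ x)
    # gradient of 0.5||x - x_spec||^2 + (mu/2)(y - y_target)^2
    grad = (x - x_spec) + mu * (y - y_target) * y * (1 - y) * w
    x -= 0.005 * grad

# Exact INSI answer: nearest point on x1 + x2 = log(0.7/0.3),
# i.e. x1 = x2 = 0.5*log(7/3) ~ 0.4236
print(x, sigmoid(w @ x))
```

Swapping the objective for a single input coordinate (to be minimized or
maximized subject to the same output constraint) gives the IMSI variant;
different objectives pick out different points on the same inverse set,
which is exactly how multiple inverse kinematic solutions arise.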

----

Bao-Liang Lu                | Bio-Mimetic Control Research Center
Email:lbl@nagoya.riken.go.jp| The Institute of Physical and Chemical Research
TEL:+81-52-654-9137         | (RIKEN)
FAX:+81-52-654-9138         | 3-8-31, Atsuta-ku, Nagoya 456, JAPAN

