Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!news.sprintlink.net!crash!mkppp.cts.com!user
From: Dean_Abbott@partech.com (dean abbott)
Subject: Re: AIM Abduction package
Organization: pgsc
Date: Fri, 20 Jan 1995 17:13:59 GMT
Message-ID: <Dean_Abbott-2001950920230001@mkppp.cts.com>
References: <3fh12e$au9$2@mhade.production.compuserve.com> <790514391snz@ecowar.demon.co.uk>
Sender: news@crash.cts.com (news subsystem)
Nntp-Posting-Host: mkppp.cts.com
Lines: 62

In article <790514391snz@ecowar.demon.co.uk>, jimmy@ecowar.demon.co.uk wrote:

> In article <3fh12e$au9$2@mhade.production.compuserve.com>
100021.1236@CompuServe.COM writes:
> 
> >Has anybody used the AIM package of AbTech. Based on polynomial 
> >networks it is astonishingly fast in certain tasks. 
> 
> There are several questions related to the GMDH method implemented by
> AbTech:
> 
>         1. polynomials are *global* approximation basis functions,
>            hence robustness is increased by pruning the tree - a purely
>            heuristic procedure unfortunately (does anyone know
>            details about the pruning algorithm in AIM?)

Polynomials are only as *global* as the domain over which they are used (just
like piece-wise linear models are localized because only a "piece" of each
line is used over the specified data range).  But you are quite correct if you
mean that polynomials in general (that is, higher-order polynomials) do not
extrapolate well (they become unbounded), and even interpolation can be hairy
if the order of the polynomial causes too much wiggle in the fit.

The pruning algorithm uses an information-theoretic criterion (Predicted
Squared Error) to compare nodes and models that have been fit so far.  PSE
is basically the sum of the fitting squared error and a complexity penalty
which is the number of weights (coefficients) in the model divided by the
number of examples in the database.  This quotient is multiplied by the
predicted model error variance, which, of course, is not known a priori,
but doesn't have to be exact (a high guess is better).  Pruning operates on
individual coefficients and on nodes within a layer, and it also determines
the number of layers (basically, keep adding layers until the PSE can no
longer be lowered, with a max of 4 layers, I think).
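To make the criterion concrete, here is a minimal sketch of PSE exactly as
described above: fitting squared error plus a complexity penalty of (weights /
examples) times an a priori estimate of the model error variance.  The
function name and signature are illustrative, not AIM's actual API.

```python
def predicted_squared_error(y_true, y_pred, n_coeffs, sigma2_est):
    """PSE = fitting squared error + complexity penalty (as described above).

    y_true, y_pred -- observed and fitted values on the training examples
    n_coeffs       -- number of weights (coefficients) in the model
    sigma2_est     -- a priori estimate of the model error variance
                      (need not be exact; a high guess is better)
    """
    n = len(y_true)
    # fitting squared error: mean squared residual on the training data
    fse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    # complexity penalty: weights per example, scaled by the variance estimate
    penalty = (n_coeffs / n) * sigma2_est
    return fse + penalty
```

Comparing candidate nodes then just means picking the one with the lowest PSE:
a bigger model must lower the fitting error by more than the added penalty to
survive pruning.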

> 
>         2. It might not be useful in exploratory modelling.
> 

I think it is particularly good for exploratory modelling because it is
inductive, that is, it finds the model structure from the data in addition to
just the coefficients.  That also means it will select which features
produce the best model, which is exactly what you want out of exploratory
modelling (of course, this is not to say that data visualization and other
modelling methods should not be used in conjunction with something like
AIM).



>         BTW what is the largest model implemented in the AIM?
> 

Sorry, I don't know.  If by largest you mean the greatest number of weights,
the biggest one I've seen had 256 inputs, but was only three layers (maybe a
total of about 400 weights).


Dean Abbott

-- 
PAR Government Systems Corp.     |
1010 Prospect St., Suite 200     |
La Jolla, CA 92037               |
