                 MATLAB implementation of CMAC
                 =============================

The following are notes on using the files

                   ADDRCMAC.MEX
                   OPCMAC.MEX
                   MODCMAC.MEX
                   CMAC1D.M
                   CMAC2D.M

The source files ADDRCMAC.C, OPCMAC.C and MODCMAC.C, and the help files
ADDRCMAC.M, OPCMAC.M and MODCMAC.M, are also supplied.

The accompanying files implement the cerebellar model articulation
controller (CMAC), a neural network architecture proposed by J.S. Albus.
This network is applied to two of the example problems that accompany
the MATLAB Neural Network Toolbox, namely BCKPROP4 and CSTRAIN. These
simple function approximation examples illustrate some of the features
of CMAC.

Essentially, CMAC is a form of lookup table in which the entries are
distributed among a number of memory locations. This leads to local
generalisation properties that are particularly suitable for function
approximation.

CMAC may be viewed in a number of different ways. For example, it is
similar, in many respects, to B-spline networks and to radial basis
function networks. Also, it may be shown to be equivalent to some forms
of fuzzy system.

Because it is linear in the parameters (weights) adapted, CMAC may be
trained using an equivalent to Widrow's LMS algorithm.

For the purposes of implementation, however, it is simplest to view
CMAC as a distributed lookup table, the output of which is formed by
summing the contents of a number of memory locations. The addresses of the
memory locations used are a function of the input to the network. Hence,
the constituent parts of a CMAC network are

- an address generation function (the supplied function ADDRCMAC)
- some memory (a MATLAB matrix)
- an accumulator (a line or two of MATLAB code)
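For illustration only, the address generation and accumulator steps can be
sketched in Python for a one-dimensional input with hashing disabled. The
function names, the +2 cells-per-layer margin and the exact layer-offset
scheme below are hypothetical; the supplied MEX files are the actual
implementation and may differ in detail.

```python
# Illustrative sketch of a minimal one-dimensional CMAC (no hashing).
# Integer inputs are assumed for simplicity.

def cmac_addresses(ip, iprange, c, width):
    """Addresses of the c weights activated by a scalar input ip
    in the range 0 to iprange.

    Layer k is shifted by k*width/c relative to layer 0, so nearby
    inputs activate mostly the same addresses: this overlap is the
    source of CMAC's local generalisation."""
    cells = iprange // width + 2        # cells per layer (+2 margin for offsets)
    addrs = []
    for k in range(c):
        offset = (k * width) // c       # stagger layer k
        cell = (ip + offset) // width   # which cell of layer k is hit
        addrs.append(k * cells + cell)  # flatten (layer, cell) to one index
    return addrs

def cmac_output(wts, addrs):
    """The accumulator: the network output is the sum of the
    addressed weights."""
    return sum(wts[a] for a in addrs)
```

With wts a vector of c*(iprange//width + 2) weights, nearby inputs return
overlapping address lists, so training at one input also moves the network
output at its neighbours.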

In addition to the function ADDRCMAC, functions OPCMAC and MODCMAC are
supplied. These implement the computation of a network output (OPCMAC)
and the computation of a network output after a training step (MODCMAC).

ADDRCMAC was written as a MEX file for speed. I have been using it in
control system simulations in SIMULINK in which it is called frequently.
I would welcome suggestions on how to code it to run faster still. There
is no particularly good reason why OPCMAC and MODCMAC are MEX, rather than
M files. As can be seen from the source files, OPCMAC and MODCMAC are
substantially similar to ADDRCMAC. The only differences are the addition
of an accumulator in the case of OPCMAC and, in the case of MODCMAC, of
the LMS equivalent training algorithm. These additions might just as
easily be implemented in M files for all the complexity involved.

The form of the function ADDRCMAC is

ADDRS = ADDRCMAC(IP,IPRANGE,C,WIDTH,MEMSIZE)

where

ADDRS   is a C-dimensional vector of addresses returned by the function
IP      is an input vector
IPRANGE is the range in which each element of IP is expected to fall, i.e.
        you must scale IP(i) to fall in the range 0 to IPRANGE for all i
C       is the number of addresses returned
WIDTH   is related to the extent of local generalisation
MEMSIZE is the total number of weights used by the network. It is the
        range in which the returned addresses will fall and therefore the
        size of matrix required to store the network weights. Setting
        MEMSIZE to 0 disables the hash coding of addresses that otherwise
        happens within the function ADDRCMAC. With hashing
        disabled, the range of addresses returned is given by
        C*(IPRANGE/WIDTH)^IPDIM where IPDIM is the number of elements in
        the input IP. For all but low dimensional inputs, this range
        becomes unmanageably large - that is why hash coding is the
        default. However, hash coding introduces extra noise into the
        network output and, for low dimensional problems, it may be
        useful to disable it.
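As a rough illustration of why hashing is the default, the following Python
sketch evaluates the unhashed address range from the formula above and folds
an address into a finite memory with a plain modulo. The names are
hypothetical, and the hash coding actually used inside ADDRCMAC may well
differ from this simple modulo.

```python
def address_range(iprange, c, width, ipdim):
    """Number of distinct addresses with hashing disabled:
    C * (IPRANGE/WIDTH)^IPDIM, which grows exponentially with the
    input dimension IPDIM."""
    return c * (iprange // width) ** ipdim

def hash_address(addr, memsize):
    """Fold a virtual address into a physical memory of MEMSIZE
    weights.  Distinct virtual addresses can collide, which is the
    source of the extra noise in the network output."""
    return addr % memsize
```

For example, with IPRANGE = 100, WIDTH = 20 and C = 20, a one-dimensional
input needs only 100 weights, but a six-dimensional input would need
20 * 5^6 = 312500, hence the hashing into a much smaller MEMSIZE.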

For more information on CMAC, the original papers by Albus are a good starting
point. There is much literature on the subject.

Albus, J.S., A New Approach to Manipulator Control: The Cerebellar Model
Articulation Controller (CMAC), Trans. ASME J. DSMC vol.97 no.3, Sept 1975,
pp 220-227

Albus, J.S., Data Storage in the Cerebellar Model Articulation Controller,
Trans. ASME J. DSMC vol.97 no.3, Sept 1975, pp 228-233

In the CMAC implemented here, the different layers of hypercubic receptive
field functions are offset relative to each other along hyperdiagonals in
input space. It has been shown that an improvement in performance can be
gained by using a different pattern of offsets. These offsets would need to
be calculated only once, when the network architecture is defined, and then
stored. This would add no extra computation within ADDRCMAC; the offsets
would simply be read from memory rather than computed on-line as in the
present implementation. Details of an algorithm to compute the offsets
can be found in

Edgar An, P.C., Miller, W.T., Parks, P.C., Design Improvements in Associative
Memories for Cerebellar Model Articulation Controllers (CMAC), Proc. ICANN,
1991, pp 1207-1210

The form of the function OPCMAC is

OP = OPCMAC(WTS,IP,IPRANGE,C,WIDTH,MEMSIZE)

where the parameters have the same significance as for ADDRCMAC and

OP      is the output of the network for the input IP
WTS     is the matrix of weights from which the output is formed

The form of the function MODCMAC is

OP = MODCMAC(WTS,IP,TARGET,BETA,IPRANGE,C,WIDTH,MEMSIZE)

where the parameters have the same significance as for OPCMAC and

TARGET  is the desired network output
BETA    is the learning rate for the LMS algorithm and should be in the
        range 0 to 2.

MODCMAC returns the output of the network for the input IP after modifying
the weights using the LMS equivalent training algorithm.
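The LMS equivalent update can be sketched in Python as follows (hypothetical
names, for illustration only). Because the output is the sum of the C
addressed weights, the output error is shared equally among them.

```python
def lms_update(wts, addrs, target, beta):
    """One step of the LMS equivalent training used by CMAC.

    The network output is the sum of the addressed weights, so the
    output error is divided equally among the len(addrs) active
    weights.  beta should lie between 0 and 2 for stability; with
    beta = 1 the new output matches target exactly.  Returns the
    network output after the update, as MODCMAC does."""
    c = len(addrs)
    output = sum(wts[a] for a in addrs)    # output before training
    delta = beta * (target - output) / c   # equal share of the correction
    for a in addrs:
        wts[a] += delta                    # adjust only the active weights
    return output + beta * (target - output)
```

Note that only the C active weights move, which is why training at one
input affects the output only in a local neighbourhood of that input.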


The use of ADDRCMAC, OPCMAC and MODCMAC is illustrated by the scripts
CMAC1D and CMAC2D. These are based on the MATLAB Neural Network Toolbox
examples BCKPROP4 and CSTRAIN. The Neural Network Toolbox is not required
in order to run CMAC1D and CMAC2D. However, it is interesting to
have the Neural Network Toolbox examples for comparison.

The following are suggestions for running the examples. It is instructive
also to use parameter values other than those suggested here.

CMAC1D
======

This example is based on the Neural Network Toolbox example BCKPROP4 in
which a one dimensional function is learned by a multilayer perceptron.
Here, the multilayer perceptron is replaced by a CMAC network.

Suggested parameter values are as follows.

WIDTH = 20     C = 20     BETA = 0.6    MEMSIZE = 0

Choose the 'damped sinusoid' as the function to be approximated.

The default version of BCKPROP4 trained in approximately 7000 epochs. Adding
momentum, adaptive learning rate and starting from carefully chosen initial
conditions, the example BCKPROP8 trains in only 120 epochs. You should find
that the CMAC network trains in 3 epochs.

The following parameter values introduce noise, due to hash coding, into
the network output. The level of noise increases as MEMSIZE decreases.

WIDTH = 20     C = 20     BETA = 0.6    MEMSIZE = 50

The following parameter values use a small number of weights at the
expense of a low resolution network output. Here the piecewise constant
nature of the network output is particularly evident.

WIDTH = 20     C = 5     BETA = 0.6    MEMSIZE = 0


CMAC2D
======

This example is based on the Neural Network Toolbox example CSTRAIN in
which a two dimensional function is learned by a multilayer perceptron.
Again, the multilayer perceptron has been replaced by a CMAC network.

Suggested parameter values are as follows.

WIDTH = 30     C = 30     BETA = 0.6    MEMSIZE = 0

Once the network has been trained, i.e. after running CMAC2D, try plotting
the output of the network for a number of different inputs, using the
following commands

for x=1:64
  for y=1:64
    netop(x,y) = opcmac(wts,[x*2+64;y*2+64],iprange,c,width,memsize);
  end
end
surfl(netop)

Note that the sum squared error computed by the script CMAC2D is for the
training data only. Try the following parameter values

WIDTH = 2    C = 1    BETA = 1.0   MEMSIZE = 0

and then plot the output surface again!


These files were developed using the Windows version of MATLAB 4.0 running
on a 33MHz 486 with 8Mb RAM. The MEX files were produced using MetaWare
High C/C++ 3.1 and PharLap 386LINK 5.1. I have used six dimensional inputs
to ADDRCMAC in some control system simulations and it worked fine. However,
I am sure that it would be quite simple to make the function fall over.
I do not, therefore, guarantee the performance of the accompanying files
in any way.

Donald Reay

Power Electronics Group
Department of Electrical Engineering
Heriot-Watt University
Edinburgh EH14 4AS
United Kingdom

e-mail dsr@cee.hw.ac.uk
