LMS Network

The LMS Network is a two layer feed-forward network that implements the Least Mean Squares or Delta rule for learning.

The LMS rule is a form of supervised learning, which means that the user must supply desired output values for each of a list of input values.

The LMS rule works as follows. The change in the weight from a source neuron i to a target neuron j equals the product of a learning rate ε, the pre-synaptic (source) activation ai, and the error tj - aj, where aj is the actual post-synaptic activation and tj is the desired activation. That is, the error is the difference between the desired and actual activation of the target neuron.
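A single weight update under this rule can be sketched as follows. All variable names and numeric values here are illustrative, not part of the network's interface.

```python
# One LMS / delta-rule weight update (illustrative values).
epsilon = 0.1   # learning rate
a_i = 0.5       # pre-synaptic (source) activation
a_j = 0.2       # actual post-synaptic (target) activation
t_j = 1.0       # desired activation for the target neuron

error = t_j - a_j                # error at the target neuron
delta_w = epsilon * a_i * error  # change in the weight from i to j
print(delta_w)
```

The update pushes the weight in whichever direction reduces the error: if the target neuron's activation is too low, the weight from an active source neuron grows; if too high, it shrinks.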

Repeated application of this rule minimizes mean squared error on a set of training data.

This rule is also known as the "Widrow-Hoff" rule, and the "Delta Rule." Networks that use this rule are sometimes called "adalines" or "madalines" (for the multilayer case, which these networks do not currently implement). They are descendants of an early form of network studied by Rosenblatt called a "perceptron."

Initialization

Since these are two layer networks, they are initialized with a set number of input and output units. The resulting network consists of two layers with the specified numbers of neurons, connected by feed-forward connections.

Output / Input Layer:

Number of Neurons: Sets the number of neurons in this layer.

Neuron Type: Sets the neuron type.

Tanh: The neuron's activation is the hyperbolic tangent of its weighted inputs.

Logistic: The neuron's activation is the logistic (sigmoid) function of its weighted inputs.

Linear: The neuron's activation is a linear function of its weighted inputs.

Right Click Menu

Edit/Train LMS: Opens edit dialog to train LMS network.

Rename: Renames the network.

Remove Network: Deletes network.

View/Edit Data: View and edit training set data (input data and target data).

Training

Training a network involves specifying input data and target data, and then running the algorithm. This process is covered here.

Parameters

Learning Rate: The learning rate, denoted by ε. This parameter determines how quickly synapses change in response to error.

Iterations Before Stopping: The number of training iterations to run before stopping, when no other stopping condition intervenes.

Stopping Condition: Determines when training stops. Options include: threshold error, threshold error in a validation set, number of epochs, and none (keep going until manually stopped).

Error Threshold: The error value below which training stops, when a threshold-based stopping condition is selected.
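The parameters above fit together as in the following training-loop sketch. This is not the program's actual implementation; the data, weight layout, and variable names are assumptions chosen to illustrate how the learning rate, iteration limit, and error threshold interact.

```python
import random

# Illustrative LMS training loop for a 2-input, 1-output linear network.
epsilon = 0.05           # learning rate
max_iterations = 1000    # iterations before stopping
error_threshold = 0.01   # stop once mean squared error falls below this

# Tiny training set: input rows and matching target rows.
inputs  = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
targets = [[1.0], [0.5], [1.5]]

random.seed(0)
# weights[i][j] connects input neuron i to output neuron j.
weights = [[random.uniform(-0.1, 0.1)] for _ in range(2)]

mse = float("inf")
for iteration in range(max_iterations):
    mse = 0.0
    for x, t in zip(inputs, targets):
        for j in range(len(t)):
            # Linear output neuron: weighted sum of inputs.
            a_j = sum(x[i] * weights[i][j] for i in range(len(x)))
            err = t[j] - a_j
            mse += err * err
            # Delta rule: adjust each incoming weight.
            for i in range(len(x)):
                weights[i][j] += epsilon * x[i] * err
    mse /= len(inputs) * len(targets[0])
    if mse < error_threshold:  # error-threshold stopping condition
        break
```

With a small, consistent training set like this one, repeated application of the update drives the mean squared error under the threshold well before the iteration limit; a learning rate that is too large would instead make the error oscillate or diverge.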