LMS Network

The LMS Network is a two-layer feed-forward network that implements the standard Least Mean Squares (LMS) learning rule.

The LMS rule is a form of supervised learning, which means that the user must supply a desired output value for each input value in a training set.

The LMS rule works as follows. The change in a weight is the product of a learning rate ε, the pre-synaptic (source) activation, and the error at the post-synaptic (target) neuron, where the error is the difference between the desired activation tj and the actual activation aj.

Repeated application of this rule minimizes mean squared error on a set of training data.
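As a sketch of the rule described above (an illustrative Python example, not this application's own implementation), each weight update multiplies the learning rate, the source activation, and the error; repeating the update over a small training set drives the mean squared error down:

```python
def lms_step(weights, inputs, target, eps):
    """Apply one LMS update: delta_w = eps * source * (target - output)."""
    # The output neuron here is a simple weighted sum (linear activation).
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output
    # Each weight moves in proportion to its source activation and the error.
    return [w + eps * x * error for w, x in zip(weights, inputs)]

# Tiny hypothetical training set: targets follow t = 2*x1 + 1*x2.
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], 1.0), ([1.0, 1.0], 3.0)]

def mse(weights):
    """Mean squared error of the current weights over the training set."""
    return sum((t - sum(w * x for w, x in zip(weights, xs))) ** 2
               for xs, t in data) / len(data)

w = [0.0, 0.0]
before = mse(w)
for _ in range(100):              # repeated application of the rule...
    for xs, t in data:
        w = lms_step(w, xs, t, eps=0.1)
after = mse(w)                    # ...reduces the mean squared error
```

After training, the weights approach the values (2 and 1) used to generate the targets.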

This rule is also known as the "Widrow-Hoff rule" and the "Delta Rule." Networks that use it are sometimes called "adalines," or "madalines" in the multilayer case (which these networks do not currently implement). They are descendants of an early form of network studied by Rosenblatt called a "perceptron."

Initialization

Since these are two-layer networks, they are initialized with a set number of input and output units. The resulting network consists of two layers with the specified numbers of neurons, joined by feed-forward connections.

Output Layer:

Number of Neurons

Neuron Type

Input Layer:

Number of Neurons

Neuron Type
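The two-layer structure configured by the settings above can be sketched as follows (a hypothetical Python sketch with illustrative function names, not this application's code): an input layer and an output layer joined by an all-to-all feed-forward weight matrix.

```python
def build_lms_network(n_inputs, n_outputs):
    """Create the feed-forward weights joining the two layers.

    weights[i][j] connects input neuron i to output neuron j;
    all weights start at zero in this sketch.
    """
    return [[0.0] * n_outputs for _ in range(n_inputs)]

def feed_forward(weights, inputs):
    """Each output activation is a weighted sum of the input activations."""
    n_outputs = len(weights[0])
    return [sum(inputs[i] * weights[i][j] for i in range(len(inputs)))
            for j in range(n_outputs)]

# A 3-input, 2-output network: 3 x 2 = 6 feed-forward connections.
w = build_lms_network(3, 2)
out = feed_forward(w, [1.0, 1.0, 1.0])
```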

Right Click Menu

Edit/Train LMS

Rename

Remove Network

View/Edit Data

Training

Training a network involves specifying input data and target data, and then running the algorithm. This process is covered here.

Parameters

Learning Rate: The learning rate, denoted ε above, scales each weight update and so determines how quickly synapses change during training.
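The tradeoff in choosing ε can be seen in a toy one-weight example (an illustrative sketch, not taken from this software): a small ε learns slowly, a moderate ε converges quickly, and a very large ε overshoots the target on every step and diverges.

```python
def train(eps, steps=50, target=1.0):
    """One weight, one input fixed at 1: w moves by eps * (target - w) each step."""
    w = 0.0
    for _ in range(steps):
        w += eps * (target - w)
    return abs(target - w)        # remaining error after training

small = train(eps=0.01)   # slow: still far from the target after 50 steps
good = train(eps=0.5)     # converges quickly
big = train(eps=2.5)      # too large: each update overshoots and error grows
```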