Backprop Network

This is a three-layer feed-forward network that uses a well-known variant of the Least Mean Squares rule for learning (the Least Mean Squares rule is used directly by the LMS Network). When iterated normally, the backprop network behaves like a standard network. Unlike a standard network, however, the backprop network can be trained using the backpropagation algorithm by right-clicking on the subnetwork tab and selecting "Train Backprop Network."

Currently Simbrain only uses three-layer networks. By default, the hidden layer and the output layer are made up of sigmoidal neurons, and the input layer is made up entirely of clamped neurons.
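A forward pass through such a network can be sketched as follows. This is an illustrative Python sketch, not Simbrain's actual code; the weight and bias names are assumptions. The clamped input layer simply holds the values it is given, while the hidden and output layers apply the sigmoid.

```python
import numpy as np

def sigmoid(x):
    """Standard logistic sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, b1, W2, b2):
    """Forward pass through a three-layer feed-forward network.
    x is the clamped input layer; the hidden and output layers
    use sigmoidal activations (illustrative, not Simbrain's code)."""
    h = sigmoid(W1 @ x + b1)   # hidden layer activations
    y = sigmoid(W2 @ h + b2)   # output layer activations
    return h, y
```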

Backprop is a form of supervised learning, which means that the user must supply a desired output value for each input pattern in a list. See training files.

Since the backprop algorithm has been described in detail elsewhere, details of the algorithm are not given here. The engine that runs backprop in Simbrain is SNARLI, a software package created by Simon Levy.

Initialization

Since these are three-layer networks, they are initialized with a specified number of input and output units. The resulting network comprises three layers of the specified numbers of neurons with feed-forward connections between successive layers.
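Initialization might look something like the following sketch. This is illustrative Python, not Simbrain's code; the hidden-layer size parameter and the uniform randomization range are assumptions (the Randomize option described below fills weights and biases with random values in some such range).

```python
import numpy as np

def init_backprop_net(n_input, n_hidden, n_output, rng=None):
    """Illustrative initialization of a three-layer feed-forward net
    with the given numbers of input, hidden, and output units.
    Weights and biases are drawn uniformly from [-1, 1] (an assumed
    range; Simbrain's randomizer may differ)."""
    rng = np.random.default_rng() if rng is None else rng
    W1 = rng.uniform(-1, 1, size=(n_hidden, n_input))   # input -> hidden
    b1 = rng.uniform(-1, 1, size=n_hidden)              # hidden biases
    W2 = rng.uniform(-1, 1, size=(n_output, n_hidden))  # hidden -> output
    b2 = rng.uniform(-1, 1, size=n_output)              # output biases
    return W1, b1, W2, b2
```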

Parameters

Learning Rate: A standard learning rate, which determines how quickly synapses change.

Momentum: This scales the current weight change by the amount the weight changed on the previous time step. Momentum can speed up learning and help prevent oscillations. It should be between 0 and 1; 0.9 is a common value.
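The momentum rule just described can be sketched for a single weight as follows (an illustrative Python sketch under assumed parameter values, not Simbrain's internal code):

```python
def momentum_update(w, grad, prev_delta, lr=0.25, momentum=0.9):
    """One backprop weight update with momentum (illustrative).
    The new weight change is the gradient-descent step plus a
    fraction (momentum) of the previous step's change."""
    delta = -lr * grad + momentum * prev_delta
    return w + delta, delta
```

With momentum 0.9, a weight that keeps receiving the same gradient accumulates progressively larger steps in that direction, which is what accelerates learning along consistent slopes.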

Randomize

The randomize option in the subnetwork tab's context menu randomizes not only all synapses but also the biases of all neurons.

Training

To train the backprop network, select "Train" in the subnetwork tab's context menu (right-click on the subnetwork tab and select "Train"). You must set an input file and an output file.

Input File: Use this button to select an input file for training (See training files).

Output File: Use this button to select an output file for training.

Randomize Network: Randomize weights and biases of the entire network.

User / Play: This repeatedly applies the backpropagation learning algorithm.

User / Step: This applies the backpropagation learning algorithm once.

Batch / Epochs: In batch mode you specify the number of epochs for the algorithm to perform, where an epoch is one complete pass through the entire training dataset.

Batch / Error Interval: How often the error should be displayed. For example, if this is set to 100, the current error is displayed every 100 epochs.

Batch / Train: Begin iterating for the number of epochs specified in the Epochs field.
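The interaction between the Epochs and Error Interval fields can be sketched as a simple training driver. This is an illustrative Python sketch, not Simbrain's code; `step_fn` and `rms_error_fn` are hypothetical stand-ins for one epoch of backprop and for the current-error query.

```python
def train_batch(step_fn, rms_error_fn, epochs, error_interval):
    """Illustrative batch-training driver: run `epochs` complete
    passes through the dataset, reporting the RMS error every
    `error_interval` epochs."""
    for epoch in range(1, epochs + 1):
        step_fn()  # one epoch of backprop over the whole dataset
        if epoch % error_interval == 0:
            print(f"epoch {epoch}: RMS error = {rms_error_fn():.4f}")
```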

Props: Set parameters for this network: momentum and learning rate, described above.

RMS Error: Displays the current root mean squared error.
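One common way to compute such an error is sketched below (illustrative; Simbrain's exact normalization over patterns and output units may differ):

```python
import numpy as np

def rms_error(targets, outputs):
    """Root mean squared error between desired outputs and actual
    outputs, averaged over all output units and training patterns
    (a common definition; assumed, not taken from Simbrain)."""
    diff = np.asarray(targets) - np.asarray(outputs)
    return float(np.sqrt(np.mean(diff ** 2)))
```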