Training Dialog

Some version of a training dialog is used in many places, including: backprop, least mean squares, simple recurrent nets, echo state nets, Hopfield nets, competitive nets, and self-organizing maps. This page describes the main elements common to these training dialogs. For items specific to a particular network type, see the page for that network.

Generally there is a main training tab, one or two data tabs (input data, plus target data for supervised learning), and a validation tab.

An example that steps through the process of creating and training a network using supervised learning (backprop) is on the examples page.

Training Tab

This tab is used to train a network after a training set has been specified. You use it to manage a supervised learning algorithm and track its error, which should decrease as training proceeds. You can also randomize the network from this dialog to try different initial conditions for the algorithm and, hopefully, get better results.

Play: Repeatedly iterate the algorithm until stop is pressed.

Iterate: Iterate the algorithm once.

Show Updates: If this is selected, you will see the weights change in Simbrain while the algorithm runs. This slows things down but is pedagogically useful.

Preferences: Show the preferences dialog for the current training method.

Randomize: Randomize the weights of the network being trained. Usually this randomizes the weights and biases, but the details of randomization depend on the network type. Useful to "restart" training, for example if you suspect you are stuck in a local minimum in weight space (see the sketch after this list).
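
The sketch below is plain Python, not Simbrain code; the network object and its train_step() and randomize_weights() methods are hypothetical stand-ins. It only illustrates the idea behind these controls: repeatedly iterate the algorithm while tracking error, and re-randomize the weights if the error stops decreasing.

    # Illustrative sketch only -- not Simbrain's API. "network" and its
    # train_step() and randomize_weights() methods are hypothetical.
    def train(network, data, max_iterations=1000, patience=50):
        best_error = float("inf")
        stalled = 0
        for _ in range(max_iterations):
            error = network.train_step(data)    # one press of "Iterate"
            if error < best_error:
                best_error = error
                stalled = 0
            else:
                stalled += 1
            if stalled > patience:              # error has stopped decreasing
                network.randomize_weights()     # the "Randomize" button
                best_error = float("inf")
                stalled = 0
        return best_error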

Input Data / Target Data

Training a network generally involves specifying one or more datasets. These datasets appear as tables, and have functions that are described in the table documentation.

Unsupervised learning uses one data table (SOM, competitive, Hopfield), while supervised learning (backprop, LMS, simple recurrent, echo state) uses two tables (input and target).

In the input data table, columns correspond to input neurons. In the target data table, columns correspond to output neurons.

For networks that use supervised learning, each row of the input data table is an input vector, and the corresponding row of the target data table is the output vector that the network should produce for that input if training is successful. The input and target data tables should have the same number of rows.

For example, suppose we want to train a network to compute these associations:

(0,0) > (0)
(1,0) > (1)
(0,1) > (1)
(1,1) > (0)

To do this we need to set the input and target datasets as follows (a code sketch of these tables as arrays appears after them):

Input Dataset    Target Dataset
0,0              0
1,0              1
0,1              1
1,1              0
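
As a rough illustration of how these tables are used, the sketch below (plain Python with NumPy, not Simbrain code) stores the same input and target data as arrays, with rows as input/target vectors and columns as neurons, and trains a minimal backprop network on them. The layer sizes, learning rate, and iteration count are arbitrary choices for this example.

    import numpy as np

    # The tables above as arrays: each row of "inputs" is one input vector
    # (columns = input neurons); the matching row of "targets" is the
    # desired output for that input.
    inputs = np.array([[0, 0],
                       [1, 0],
                       [0, 1],
                       [1, 1]], dtype=float)
    targets = np.array([[0],
                        [1],
                        [1],
                        [0]], dtype=float)

    # A minimal 2-2-1 sigmoid network trained with backprop.
    rng = np.random.default_rng(0)
    w1, b1 = rng.uniform(-1, 1, (2, 2)), np.zeros(2)
    w2, b2 = rng.uniform(-1, 1, (2, 1)), np.zeros(1)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    learning_rate = 0.5
    for _ in range(20000):
        hidden = sigmoid(inputs @ w1 + b1)      # forward pass, all rows at once
        outputs = sigmoid(hidden @ w2 + b2)
        error = outputs - targets               # difference from the target rows

        # backward pass: gradients of the squared error
        d_out = error * outputs * (1 - outputs)
        d_hidden = (d_out @ w2.T) * hidden * (1 - hidden)
        w2 -= learning_rate * hidden.T @ d_out
        b2 -= learning_rate * d_out.sum(axis=0)
        w1 -= learning_rate * inputs.T @ d_hidden
        b1 -= learning_rate * d_hidden.sum(axis=0)

    # outputs typically approach [[0], [1], [1], [0]]; if training stalls,
    # re-randomize the weights (compare the Randomize button above).
    print(np.round(outputs, 2))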

Validation Tab

This is a table of data that is not used in training, but rather for testing and validating a network after it has been trained. In addition to the standard table functions, this panel has tools for iterating through the rows of the table and sending each row's values to the network.
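
Conceptually, validation amounts to something like the following sketch (plain Python, not Simbrain code; the network object and its forward() method are hypothetical): step through each row of a held-out table, send its values to the trained network, and inspect the outputs without changing any weights.

    # Hypothetical example: "network" and its forward() method stand in for
    # a trained network; no weights are changed here.
    validation_rows = [[0, 0], [1, 1]]      # rows held out from training
    for row in validation_rows:
        output = network.forward(row)       # send the row's values to the network
        print(row, "->", output)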