Competitive Network
A competitive network is a collection of neurons that compete with each other to represent clusters of inputs. As training proceeds, each unit should come to represent a particular cluster of inputs.
The competitive network combines elements of winner-take-all networks and Hebbian learning. It is also related to self-organizing networks, which will be included in future releases.
After creating a competitive network, input neurons must be connected to its units. The resulting weights are then trained using the competitive learning algorithm, which updates weights according to the following steps:
1) Compute the weighted input to every unit.
2) Determine the winner: the unit with the greatest weighted input. Ties are broken arbitrarily.
3) Update only the weights attaching to the winning neuron. Each such weight is changed by the learning rate times the difference between the normalized source activation and the current strength of the weight:
Δw_i = ε (a_i / S_inputs − w_i)
Here ε is a learning rate, S_inputs is the sum of all the inputs to the unit (the total activation of the source layer), and a_i is the activation of input neuron i. As a result, the winning unit's weights come over time to resemble the input vector that led that unit to win, and the learning rate controls how quickly this happens.
The division by the sum of inputs maintains the normalization of the weight vectors. Thus, if more strength is added to one weight, it is taken away from another.
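This update can be sketched in a few lines of code. The sketch below is only an illustration, not the software's actual implementation; the array names W and a, the function competitive_update, and the default learning rate are invented for the example, with one weight row per competitive unit:

    import numpy as np

    def competitive_update(W, a, epsilon=0.1):
        # W: (units, inputs) weight matrix; a: input activations; epsilon: learning rate
        net = W @ a                    # 1) weighted input to every unit
        winner = int(np.argmax(net))   # 2) unit with the greatest weighted input (first index wins a tie)
        s = a.sum()                    # S_inputs: total activation of the source layer
        if s > 0:
            # 3) move the winner's weights toward the normalized input pattern
            W[winner] += epsilon * (a / s - W[winner])
        return winner

    # Three competitive units, four inputs; each row of W starts normalized to sum to 1
    rng = np.random.default_rng(0)
    W = rng.random((3, 4))
    W /= W.sum(axis=1, keepdims=True)
    winner = competitive_update(W, np.array([1.0, 1.0, 0.0, 0.0]))

Because a / s sums to 1, the updated row is a weighted average of the old row and a / s, so a row whose weights sum to 1 keeps summing to 1.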
For more on competitive learning, see the PDP volumes, volume 1, chapter 5.
Initialization
Competitive networks are initialized with some number of units, laid out as a line by default. No connections are created initially; connections must be made leading into the network, and they should be constrained to take values only between 0 and 1.
A variant of the competitive learning algorithm called "leaky learning" requires all weights to learn on each time step, rather than just the weights attaching to the winning unit. The weight change for the losing units has the same form as above, but uses a separate learning rate, which will typically be smaller than the winning unit's learning rate, so that weights attaching to losing neurons learn more slowly.
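Continuing the illustrative sketch above (again, not the software's actual code), leaky learning only changes step 3: every unit's weights move toward the normalized input, with the winner using epsilon and the losers a smaller leaky rate:

    import numpy as np

    def leaky_competitive_update(W, a, epsilon=0.1, leaky_epsilon=0.01):
        net = W @ a
        winner = int(np.argmax(net))
        s = a.sum()
        if s > 0:
            target = a / s
            for i in range(W.shape[0]):
                rate = epsilon if i == winner else leaky_epsilon   # losers learn too, just more slowly
                W[i] += rate * (target - W[i])
        return winner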
Parameters
Number of Competitive Neurons: the number of neurons in the competitive group.
Number of Input Neurons: the number of neurons in the input layer that feeds the competitive group.
Update Method: the learning rule used to train the network; one of the two methods below.
Rumelhart-Zipser: the standard competitive learning rule described above.
Alvarez-Squire: an alternative update rule based on the model of Alvarez and Squire; it makes use of the Synapse Decay Percent parameter below.
Epsilon: a standard learning rate, which determines how quickly synapses change.
Winner Value: the activation value assigned to the winning neuron.
Loser Value: the activation value assigned to all losing neurons.
Use Leaky Learning: whether to use the leaky learning rule. Leaky learning requires all weights to learn, not just the weights attaching to the winning unit.
Leaky Epsilon: the learning rate for losing neurons when leaky learning is used.
Normalize Inputs: if selected, inputs are normalized before being used to set weights (see the sketch after this list).
Synapse Decay Percent: the percentage by which synapses decay when the Alvarez-Squire update method is used.
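As a rough sketch of how the activation-side parameters fit together (the functions below are hypothetical, and the normalization is assumed to be division by the input sum, consistent with the learning rule above):

    import numpy as np

    def winner_take_all_activations(net, winner_value=1.0, loser_value=0.0):
        # Set the winning unit to Winner Value and every other unit to Loser Value
        acts = np.full(net.shape, loser_value)
        acts[int(np.argmax(net))] = winner_value
        return acts

    def prepare_inputs(a, normalize_inputs=True):
        # Optionally normalize inputs before they are used to set weights
        s = a.sum()
        return a / s if (normalize_inputs and s > 0) else a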
Right Click Menu
Edit/Train Competitive: opens a dialog for editing the network's properties and training it.
Add Current Pattern To Input Data: adds the current pattern of input activations to the network's input data.
Train On Current Pattern: applies one iteration of the learning algorithm to the current input pattern.
Randomize Synapses: randomizes the weights attaching to the network.