Neurons

Neurons or "nodes" are represented by circles. The numbers inside the neurons correspond to their "activation level." (What this level represents varies with neuron type: in many cases it can be thought of as the firing rate of a neuron.) The displayed numbers are rounded; functionally, however, the exact value is used. The exact value can be seen by hovering over a neuron or by double clicking on it. The color of a neuron represents its activation: red when it is greater than 0, blue when it is less than 0, and white when it equals 0. These color conventions can be customized in the preferences dialog. The intensity of the color reflects how close the neuron's activation level is to the neuron's upper or lower bound.
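
As a rough illustration of this color scheme, the Java sketch below maps an activation value and its bounds to a display color. It is only an example of the convention described above; the class and method names are invented for this sketch and do not correspond to Simbrain's actual rendering code.

    // Illustrative sketch of the color convention described above; not Simbrain's code.
    import java.awt.Color;

    public class ActivationColor {

        // Red above zero, blue below zero, white at zero, with intensity
        // scaled by how close the activation is to the relevant bound.
        static Color colorFor(double activation, double lowerBound, double upperBound) {
            if (activation == 0) {
                return Color.WHITE;
            } else if (activation > 0) {
                float saturation = (float) Math.min(1.0, activation / upperBound);
                return new Color(1f, 1f - saturation, 1f - saturation); // shades of red
            } else {
                float saturation = (float) Math.min(1.0, activation / lowerBound);
                return new Color(1f - saturation, 1f - saturation, 1f); // shades of blue
            }
        }

        public static void main(String[] args) {
            System.out.println(colorFor(0.5, -1, 1));   // a partial red
            System.out.println(colorFor(-1.0, -1, 1));  // fully saturated blue
        }
    }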

There are a variety of different neuron types in Simbrain, each of which has numerous parameters that can be set. This page addresses properties common to all of the different neuron types. Details on particular neuron types can be found on their respective pages.

Setting Neuron Properties

Neuron parameters can be adjusted as follows:

Setting single neurons: Double click on a neuron or right click on it and select set properties.

Setting multiple neurons: Select more than one neuron, then either double click on one of the selected neurons or choose set properties from the popup or network menu. A multiple-neuron dialog box will appear. If the selected neurons are of different types, only their common properties appear; otherwise all properties appear. Properties whose values are consistent across the selection show that value; inconsistent properties are marked by a null string: ". . ." Changes you make apply to all selected neurons. (A sketch of this behavior follows.)
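
The sketch below illustrates how such a dialog might decide what to display for a shared property: the common value if the selection agrees, or the null string otherwise. It is a hypothetical illustration, not Simbrain's actual dialog code.

    // Illustrative sketch of displaying a property for several selected neurons;
    // not Simbrain's actual code.
    import java.util.List;

    public class ConsistencyCheck {

        // If every selected neuron has the same value for a property, show that value;
        // otherwise show the null string ". . ." described above.
        static String displayValue(List<Double> values) {
            double first = values.get(0);
            for (double v : values) {
                if (v != first) {
                    return ". . .";
                }
            }
            return Double.toString(first);
        }

        public static void main(String[] args) {
            System.out.println(displayValue(List.of(0.5, 0.5, 0.5))); // prints "0.5"
            System.out.println(displayValue(List.of(0.5, 0.2, 0.5))); // prints ". . ."
        }
    }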

Common Neuron Properties

Activation

The current level of activity of this neuron. How this should be interpreted depends on the neuron type and the application (e.g. firing rate, membrane potential). The activation is represented by the neuron's color, and its exact value can be seen by hovering over the neuron or by observing the value in the neuron's properties.

Label

Assigns a name to the neuron, displayed as a text label.

Clamped

See the discussion of clamping below.

Clipping

If a neuron uses clipping and its activation exceeds its upper or lower bound, the activation is reset to the bound it exceeded. The same applies to synapses and their strengths.
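
In pseudocode-like Java, clipping amounts to the following. This is an illustration of the rule, not Simbrain's implementation, and the names are invented for the example.

    // Illustrative sketch of clipping; not Simbrain's actual implementation.
    public class ClippingExample {

        // Keep an activation (or a synapse strength) within its bounds.
        static double clip(double value, double lowerBound, double upperBound) {
            return Math.max(lowerBound, Math.min(upperBound, value));
        }

        public static void main(String[] args) {
            System.out.println(clip(1.7, -1, 1));   // 1.0  (exceeded the upper bound)
            System.out.println(clip(-3.2, -1, 1));  // -1.0 (exceeded the lower bound)
            System.out.println(clip(0.4, -1, 1));   // 0.4  (within bounds, unchanged)
        }
    }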

Upper bound

This field plays different roles depending on the neuron type (see the documentation for each type), but in general it determines the maximum level of activity of a node. It also determines the range of colors which a neuron can take on. Together, the upper and lower bounds determine the range of randomization.

Lower bound

This field plays different roles depending on the neuron type, but in general it determines the minimum level of activity of a node. It also determines the range of colors which a neuron can take on.
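
As a sketch of the randomization range mentioned above, assuming for illustration that values are drawn uniformly between the two bounds (the actual distribution may differ), randomization could look like this; the names are invented for the example.

    // Illustrative sketch of randomizing an activation within its bounds; not Simbrain's code.
    import java.util.Random;

    public class RandomizeExample {

        // Draw a random activation between the lower and upper bound.
        static double randomize(double lowerBound, double upperBound, Random rng) {
            return lowerBound + rng.nextDouble() * (upperBound - lowerBound);
        }

        public static void main(String[] args) {
            Random rng = new Random();
            System.out.println(randomize(-1, 1, rng)); // some value between -1 and 1
        }
    }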

Increment

This sets the amount by which a neuron's activation changes when it is manually adjusted. For example, if increment is set to .1, then each time the up arrow is pressed the neuron's activation increases by .1 (and each press of the down arrow decreases it by .1). This feature does not affect the neuron while the network is being iterated.
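
A small sketch of the idea, for illustration only:

    // Illustrative sketch of manual adjustment using the increment field; not Simbrain's code.
    public class IncrementExample {

        public static void main(String[] args) {
            double activation = 0.0;
            double increment = 0.1;

            // Each press of the up arrow adds the increment; the down arrow subtracts it.
            activation += increment;  // up arrow
            activation += increment;  // up arrow
            activation -= increment;  // down arrow

            System.out.println(activation); // about 0.1
        }
    }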

Priority

This field sets the neuron's update priority. The lower the value, the higher the priority, so neurons with lower priority values are updated earlier when priority-based updating is used.
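
The following Java sketch shows what priority-based ordering amounts to: neurons sorted by their priority values, with lower values updated first. The classes and labels are invented for this example and are not Simbrain's API.

    // Illustrative sketch of priority-ordered updating; not Simbrain's actual update loop.
    import java.util.Comparator;
    import java.util.List;

    public class PriorityExample {

        record Neuron(String label, int priority) { }

        public static void main(String[] args) {
            List<Neuron> neurons = List.of(
                    new Neuron("output", 2),
                    new Neuron("input", 0),
                    new Neuron("hidden", 1));

            // Lower priority values are updated first.
            neurons.stream()
                    .sorted(Comparator.comparingInt(Neuron::priority))
                    .forEach(n -> System.out.println("update " + n.label()));
            // Prints: update input, update hidden, update output
        }
    }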

Input Type

This field sets the neuron's input type: weighted (see "Weighted Input" below) or synaptic.



Clamping

In general, a clamped synapse or node will not change over time; it is "clamped" to its current value. There are two ways to clamp neurons and synapses in Simbrain.

(1) Set the type of a synapse to clamped synapse or the type of a neuron to clamped neuron. This allows specific subsets of synapses or neurons to be permanently clamped.

(2) In the Toolbar or Edit menu, use the "Clamp weights" or "Clamp neurons" buttons. These cause all weights or all neurons in a simulation to be clamped. This is useful as a temporary override and can be used to train certain types of networks (e.g. recurrent auto-associators using Hebbian synapses).

Note that some subnetworks override clamping, in particular when they train their weights directly. Also note that the activation of a clamped neuron can still be changed using the up and down arrow keys, and similarly for the strength of a clamped weight.
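
The effect of clamping on updating can be sketched as follows. This is an illustration of the behavior described above, not Simbrain's actual update code, and the names are invented for the example.

    // Illustrative sketch of how a clamped neuron behaves during iteration; not Simbrain's code.
    public class ClampingExample {

        static double activation = 0.25;
        static boolean clamped = true;

        // A clamped neuron keeps its current activation when the network is iterated.
        static void update(double newActivation) {
            if (!clamped) {
                activation = newActivation;
            }
        }

        public static void main(String[] args) {
            update(0.9);
            System.out.println(activation); // still 0.25, because the neuron is clamped

            clamped = false;
            update(0.9);
            System.out.println(activation); // now 0.9
        }
    }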

Weighted Input

Each input to a node can have a different amount of influence on that node. The amount of influence that an input has is called the "weight" of that input. We represent the weight of the ith input by wi and the activation level of the ith source node by ai. The weighted input is then the sum, over all inputs, of each weight times the corresponding activation:

    weighted input = w1*a1 + w2*a2 + ... + wn*an

Note that a sensory input term I is also added to the weighted input if the node has a sensory coupling attached to it.

When a source neuron is a spiking neuron, the weight-times-activation term for that neuron is replaced by something more complex; see spiking networks.

"Weighted input" is also referred to as "net input" in much of the connectionist literature. To make a neuron whose acitvation value equals its weighted input, use a linear neuron with slope = 1 and bias = 0.