Neurons

Neurons or "nodes" are represented by circles. The number inside each neuron corresponds to its "activation level." (What this level represents varies with neuron type: in many cases it can be thought of as the firing rate of the neuron.) The displayed numbers are rounded; internally, however, double-precision floating-point values are used. The precise value can be seen by hovering over a neuron or double-clicking on it.
The color of a neuron represents its overall state, depending on its update rule. Loosely, neuron models can be broken into two classes, spiking and non-spiking, and this distinction accounts for the bulk of any differences in the meaning of colors. In general, blue corresponds to a quiescent or inhibitory state, white corresponds to a zero or neutral state, and red corresponds to an active or excitatory state. For neuron models which have explicit action potentials (spiking neurons), red indicates the neuron is near its threshold, while a yellow flash indicates an action potential. A comparison of the two can be seen pictured right.
These color conventions can be customized, as described in the preference dialog. The closeness of the neuron's activation level to its upper or lower bound is visualized by the intensity of the color.
There are a variety of different neuron types in Simbrain, each of which has numerous parameters that can be set. This page addresses properties common to all of the different neuron types. Details on particular neuron types can be found on their respective pages.
Neuron Update Rules (a.k.a. "Types")
The neuron object itself is not what determines the neuron's behavior. Neurons contain a generic set of parameters and utilities common to virtually any model neuron (activation, boundaries, lists of incoming/outgoing synapses, methods to sum synapse strengths, etc.), but themselves have no "type" beyond what is delegated to an update rule. When we speak of a neuron of a given type we are really referring to a neuron which possesses an update rule of that type, since the update rule is what ultimately determines the behavior of the neuron. To the right is a menu of all the different neuron models (neuron update rules) that Simbrain currently possesses out of the box. Editing a neuron allows the user to also edit the properties of its update rule or to change the update rule entirely (thereby changing the neuron model).
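The delegation described above can be sketched as a simple strategy pattern. This is an illustrative sketch only, not Simbrain's actual (Java) API; the class and method names here are hypothetical:

```python
class LinearRule:
    """Hypothetical update rule: activation = slope * (net input) + bias."""
    def __init__(self, slope=1.0, bias=0.0):
        self.slope = slope
        self.bias = bias

    def apply(self, net_input):
        return self.slope * net_input + self.bias


class Neuron:
    """Generic container: holds an activation, delegates behavior to a rule."""
    def __init__(self, rule):
        self.rule = rule
        self.activation = 0.0

    def update(self, net_input):
        self.activation = self.rule.apply(net_input)


n = Neuron(LinearRule(slope=2.0, bias=0.5))
n.update(1.0)   # activation becomes 2.0 * 1.0 + 0.5 = 2.5
# "Changing the neuron's type" amounts to swapping its update rule:
n.rule = LinearRule(slope=1.0, bias=0.0)
n.update(1.0)   # activation becomes 1.0
```

The neuron object stays the same; only the rule it delegates to changes.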
Setting Neuron Properties
Neuron parameters can be adjusted as follows:
Setting single neurons: Double click on a neuron or right click on it and select set properties.
Setting multiple neurons: Select more than one neuron, and either double click on one of the neurons, or select set properties from the popup or network menu. A multiple neuron dialog box will appear. If the selected neurons are of different types, only the common properties appear. Otherwise all properties appear. Consistent properties are shown, inconsistent properties are marked by a null string: ". . ." Changes you make will apply to all selected neurons.
Activation: The current level of activity of this neuron. How this should be interpreted depends on the neuron type and application (e.g. firing rate, voltage potential, etc.). It is represented by the neuron's color, and its exact value can be seen by hovering over the neuron or observing the value in the neuron's properties.
Label: Label neuron with a name.
Clamped: See below.
Clipping: If a neuron uses clipping, then whenever its activation exceeds its upper or lower bound, the activation is set to the bound it exceeded. The same applies to weights and their strengths.
Upper bound: This field plays different roles depending on the type of the neuron (documented accordingly) but in general it determines the maximum level of activity of a node. It also determines the range of colors which a neuron can take on. The upper and lower bound determine the bounds of randomization.
Lower bound: This field plays different roles depending on the type of the neuron but in general it determines the minimum level of activity of a node. It also determines the range of colors which a neuron can take on.
Increment: This sets the amount that a neuron is incremented when it is manually adjusted. For example, if increment is set to .1, then each time the up arrow is pressed the neuron will increase its activation by .1. This feature does not affect the neuron while the network is being iterated.
Priority: This field sets the neuron's update priority. The lower the value, the higher the priority.
Input Type: This field sets the input as weighted or synaptic.
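The clipping behavior described above can be sketched as follows (the function name is illustrative, not Simbrain's API):

```python
def clip(value, lower_bound, upper_bound):
    """Set a value that exceeds a bound back to that bound."""
    return max(lower_bound, min(upper_bound, value))


clip(1.7, -1.0, 1.0)   # exceeds the upper bound, so returns 1.0
clip(-3.2, -1.0, 1.0)  # exceeds the lower bound, so returns -1.0
clip(0.4, -1.0, 1.0)   # within bounds, so returned unchanged: 0.4
```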
In general, a clamped synapse or node will not change over time; it is "clamped" to its current value. There are two ways to clamp neurons and synapses in Simbrain.
(1) Set the type of a synapse to clamped synapse or the type of a neuron to clamped neuron. This allows specific subsets of synapses or neurons to be permanently clamped.
(2) In the Edit menu, use the "Clamp weights" or "Clamp neurons" buttons. These cause all weights or neurons in a simulation to be clamped. This is useful as a temporary override and can be used to train certain types of networks (e.g. recurrent auto-associators using Hebbian synapses).
Note that some subnetworks override clamping, in particular when they manually train their weights. Also note that the activation of a clamped neuron can be changed by using the up and down keys, and similarly with clamped weights.
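The effect of clamping on an update step can be sketched as follows (a minimal illustration, assuming a hypothetical pass-through rule; not Simbrain's API):

```python
def update_activation(activation, net_input, clamped):
    """Sketch of clamping: a clamped neuron keeps its current activation;
    otherwise a (hypothetical) pass-through rule adopts the net input."""
    return activation if clamped else net_input


a = update_activation(0.5, 2.0, clamped=True)   # stays 0.5 despite input
b = update_activation(0.5, 2.0, clamped=False)  # becomes 2.0
```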
Input Type
A major component of a neuron's behavior is how it goes about interpreting its inputs. In Simbrain there are two different ways a neuron can sum up its inputs: a Weighted Sum or a Synaptic Sum. Loosely these correspond to non-spiking and spiking neurons respectively, but there are some cases where it is useful for a non-spiking neuron to interpret its inputs as synaptic input and vice versa with spiking neurons.
Weighted
This is the simplest and most common way of determining input for most ANNs. As its name suggests, it is a weighted sum over the neuron's incoming synapses. The influence of a pre-synaptic neuron on a post-synaptic neuron depends on two values: 1) the activation value of the pre-synaptic neuron and 2) the weight or "strength" of the synapse connecting the pre-synaptic neuron to the post-synaptic neuron. The total influence of a given pre-synaptic neuron is the product of its activation and the strength of the connecting synapse.
We represent the weight of the jth input, where j ∈ {1, 2, ..., N} for N inputs to neuron i, by $w_{ij}$, and the steady activation level of the jth node by $a_j$. The weighted input is then:

$$net_i = \sum_{j=1}^{N} w_{ij} a_j, \qquad a_i = f(net_i)$$

where $f$ represents the function neuron i's update rule applies to its inputs, and $net_i$ is the weighted or "net" input to neuron i. The connectionist literature often refers to the weighted input to a neuron as its net input.
Note that a sensory input term I is also added to the weighted input if the node has a sensory coupling attached to it.
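As a sketch (the function name is illustrative, not Simbrain's API), the weighted input with an optional sensory term can be computed as:

```python
def weighted_input(weights, activations, sensory_input=0.0):
    """net_i = sum over j of w_ij * a_j, plus an optional sensory term I."""
    return sum(w * a for w, a in zip(weights, activations)) + sensory_input


# Three pre-synaptic neurons with activations 1.0, 0.5, 0.25:
net = weighted_input([0.5, -1.0, 2.0], [1.0, 0.5, 0.25])
# 0.5*1.0 + (-1.0)*0.5 + 2.0*0.25 = 0.5
```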
Tip: To make a neuron whose activation value equals its weighted input, use a linear neuron with slope = 1 and bias = 0. Linear neurons are great for quickly displaying unfiltered values.
Synaptic
Synaptic inputs are a bit more complicated than weighted inputs, as they attempt to capture some of the dynamics of real synapses in the brain. This type of input is meant specifically for translating the action potentials (spikes) created when spiking neurons fire into a continuous value which can be interpreted by other neurons. It should therefore be noted that this input type is nonsensical if the pre-synaptic neuron does not produce action potentials. Essentially, the synaptic input type performs a weighted sum over the post-synaptic responses of the incoming synapses, which are themselves governed by spike responders. More details can be found on the spike responder and spiking neuron documentation pages, but the quick version is that spikes are modeled as instantaneous in time and spike responders generate a continuous value from this instantaneous event.
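The idea can be sketched as follows. The exponentially decaying responder here is one common choice of spike response, used purely for illustration; it is not necessarily Simbrain's default, and the names are hypothetical:

```python
import math

def decay_response(time_since_spike, tau=0.1):
    """Hypothetical spike responder: an instantaneous spike produces a
    continuous value that decays exponentially with time constant tau."""
    if time_since_spike < 0:
        return 0.0  # the pre-synaptic spike has not occurred yet
    return math.exp(-time_since_spike / tau)


def synaptic_input(weights, times_since_spike, tau=0.1):
    """Weighted sum of post-synaptic responses, one per incoming synapse."""
    return sum(w * decay_response(t, tau)
               for w, t in zip(weights, times_since_spike))


# Two incoming synapses: one spiked just now, one spiked 0.05 s ago.
# The response is strongest immediately after each spike and fades with time.
x = synaptic_input([1.0, 0.5], [0.0, 0.05])
```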