Neurodynamics – complete guide
Dynamical Systems
Nullcline
What is the idea behind the concept of a „nullcline“?
A nullcline is defined relative to one dimension of the phase space. It is the set of points (set of states) in which the system does not change in the corresponding direction (e.g. the x-nullcline is the set where x' = 0). Graphically, these points compose a manifold/curve through the phase plane.
 Nullclines usually partition the phase space into 4 different areas (in a 2D system), in each of which the signs of x' and y' are fixed.
How to analytically derive nullclines of an ordinary differential equation?
Solve the ODE corresponding to your nullcline’s variable (say, x) for x’=0 (that’s just the definition of a nullcline applied to the equation).
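As a sketch of this recipe, here is how the nullclines of a small made-up system could be derived symbolically (the system x' = x(1 − x) − x·y, y' = y(x − 1/2) is invented for illustration; sympy is one convenient tool):

```python
import sympy as sp

x, y = sp.symbols('x y')

# a made-up 2D system, for illustration only
xdot = x * (1 - x) - x * y           # x' = x(1 - x) - x*y
ydot = y * (x - sp.Rational(1, 2))   # y' = y(x - 1/2)

# x-nullcline: solve x' = 0 for y (the set of states where x does not change)
x_nullclines = sp.solve(sp.Eq(xdot, 0), y)   # y = 1 - x (plus the line x = 0)
# y-nullcline: solve y' = 0 for x
y_nullclines = sp.solve(sp.Eq(ydot, 0), x)   # x = 1/2 (plus the line y = 0)
```

Note that solve() only returns the branches expressible in the requested variable; the degenerate branches x = 0 and y = 0 are nullclines too.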
How do fixed points and nullclines relate to each other?
 fixed points are intersections of nullclines
 in other words: the points where neither of the variables changes, so the system stays at these points
How do I calculate the arrows (see picture below), i.e. the vector field that trajectories follow in a 2D system?
 evaluate the differential equations at exemplary points of the system
 in other words: just insert x and y and voilà
 plot the result as a vector
How many nullclines does a linear system have in principle?
example: a linear 2D system x1' = a·x1 + b·x2, x2' = c·x1 + d·x2
this equals x' = A·x in matrix form
The fixed points are where both x1' and x2' vanish, so to get the nullclines we simply set x1' = 0 and x2' = 0.
From this we see that both nullcline equations are linear in x1 and x2. From this we can conclude that both must be lines (what a linear system usually gives you :D). And what can 2 lines do?
 cross each other → one fixpoint at intersection of nullclines
 parallel to each other → no fixed points (they never cross)
 identical → infinitely many fixed points
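A quick numerical illustration of “fixed points = nullcline intersections”, using a hypothetical nonlinear system invented for this example:

```python
import numpy as np

# hypothetical system, invented for illustration:
#   x' = y - x**2   ->  x-nullcline: y = x**2
#   y' = x - y      ->  y-nullcline: y = x
# intersections satisfy x**2 = x, i.e. x**2 - x = 0
xs = np.roots([1.0, -1.0, 0.0])
# on the y-nullcline y = x, so each intersection is the point (x, x)
fixed_points = sorted((float(np.real(r)), float(np.real(r))) for r in xs)
```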
Fixed point
conceptual
What is the idea behind the concept of a „phase space“?
The phase space contains all possible states of a (dynamical) system. Each individual point in the space corresponds to a possible state the system can be in. Depending on the system it may be discrete or continuous and may have a finite number of dimensions.
What is the idea behind the concept of a „fixed point“?
Once the system arrives at a state corresponding to a fixed point, it will not change further in any direction (it stays there as t → ∞). Geometrically, in a 2D phase plane a fixed point is at the intersection of two nullclines. Each nullcline must correspond to a different dimension! (e.g. an x- and a y-nullcline).
Which types of stability exist for fixed points in a two-dimensional dynamical system?
stable node: attracts x from all directions (“black hole”)
unstable node: x moves away from it in all directions.
saddle: attracts x in one direction, repels it in another. (see lecture for illustration.)
Additional info: A saddle has a stable manifold and an unstable manifold. It attracts x as long as x is exactly on the stable manifold, and it repels x along the unstable manifold. If x is slightly off the stable manifold, its trajectory will converge toward the unstable manifold while getting pushed away from the saddle.
How to graphically determine the stability type of a fixed point in a two-dimensional dynamical system?
If you have the arrows in the phase plane, or if you have example trajectories, just check whether x moves towards the fixed point or away from it (or saddle).
If you don’t have this information, evaluate the derivatives at several sample points around the fixed point and draw the resulting arrows yourself (see the phase-space construction recipe below); their directions reveal the stability type.
What is the idea behind an attractor?
An attractor is a stable fixed point, i.e. it asymptotically attracts the system if the system is near it. The trajectory towards the fixed point can be (more or less) straight (eigenvalues real, for an attractor both negative), or spiral in like a boat in a maelstrom (complex eigenvalues with negative real part). (see lecture for illustration)
 non-oscillatory: eigenvalues real and both negative
 oscillatory: eigenvalues complex with negative real part
What is the idea behind a limit cycle?
A limit cycle is an orbit around a fixed point. If the system is on the limit cycle, it will rotate on it periodically. There are stable/”attractor” limit cycles, and unstable ones.
What is the difference between a homo and a heteroclinic orbit?
A homoclinic orbit starts and ends at the same fixed point (it leaves a saddle along its unstable manifold and returns along its stable manifold). A heteroclinic orbit connects two different fixed points: it ends somewhere (close, but) distinct from where it started.
What is the idea behind a „separatrix“?
A separatrix is a manifold within a (2D) phase space. It divides the space into domains corresponding to single attractors. Graphically, it might be a curve dividing your 2D phase space into a left side and a right side. If the system starts left of the separatrix, it will converge to one attractor; if it starts right, to the other. The sides are the domains.
Also, in a 2D system the separatrix is exactly the stable manifold of a saddle as described above. If you start right on the separatrix (and there are no perturbations), you will converge to neither stable node, but to the saddle. In real life, this is basically impossible.
The separatrix can be interpreted as the threshold (subthreshold → go back to the resting state, superthreshold → go to the firing state). The take-home message is that the threshold of a 2D system is not a single value but a manifold, and thus depends on both variables.
procedural
How to analytically derive all fixed points of a dynamical system?
Determine each variable’s nullclines by solving its differential equation for 0 (x’ = 0, y’ = 0, …).
Then calculate the intersections of each variable’s nullclines with all other variables’ nullclines. Attention: do not calculate the intersection of e.g. two x-nullclines!
How to analytically determine the stability type of a fixed point of a one-dimensional dynamical system?
You will notice that at a stable fixed point, f (the right-hand side of x' = f(x)) changes its sign from + to −; at an unstable one the other way around. So calculate the derivative of f at the fixed point: f'(x*) < 0 → stable, f'(x*) > 0 → unstable.
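A minimal numerical sketch of this criterion, using the made-up 1D system x' = x − x³ (fixed points at −1, 0, 1):

```python
# hypothetical 1D system x' = f(x) = x - x**3, invented for illustration
def f(x):
    return x - x**3

def stability_1d(x_star, eps=1e-6):
    """Classify a fixed point x* by the sign of f'(x*) (central difference)."""
    slope = (f(x_star + eps) - f(x_star - eps)) / (2 * eps)
    return 'stable' if slope < 0 else 'unstable'

# fixed points of x - x**3 = x(1 - x**2): x = -1, 0, 1
results = {x0: stability_1d(x0) for x0 in (-1.0, 0.0, 1.0)}
```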
How to analytically determine the stability type of a fixed point of a twodimensional dynamical system?
Linearize the system’s differential equations (usually there is a separate DE given for each variable) around the fixed point. Check the lecture or Mathe für Anwender II script or elsewhere for “Jacobian matrix”.
Finding Eigenvalues & Eigenvectors with Jacobian matrix in 2×2 system
Example: a = 7, b = 3, c = 3, d = −1.
Trace: tau = a + d = 7 + (−1) = 6
Determinant: delta = a·d − b·c = 7·(−1) − 3·3 = −7 − 9 = −16
Eigenvalues (formula of the next question): lambda1 = (6 + sqrt(6^2 − 4·(−16)))/2 = (6 + 10)/2 = 8, lambda2 = (6 − 10)/2 = −2
Because both eigenvalues have different signs, it is a saddle.
Cross-check via trace and determinant: determinant < 0 → saddle. (For det > 0: trace < 0 → stable, trace > 0 → unstable; tau^2 − 4·det < 0 ↔ complex eigenvalues ↔ rotating trajectory.) Here 6^2 − 4·(−16) = 100, not smaller than 0 → real eigenvalues → non-rotating trajectory. Checks out!
Determine its eigenvalues (should be 2 for 2D system fixed point).
 both negative → stable
 both positive → unstable
 different signs → saddle (attracts along the eigenvector corresponding to the negative eigenvalue, repels along the other).
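This classification is easy to check numerically; the sketch below reuses the Jacobian from the worked example above (a = 7, b = 3, c = 3, d = −1):

```python
import numpy as np

# Jacobian from the worked example above (a = 7, b = 3, c = 3, d = -1)
J = np.array([[7.0, 3.0],
              [3.0, -1.0]])

eigvals = np.linalg.eigvals(J)

def classify(eigvals):
    """Classify a 2D fixed point from the real parts of the eigenvalues."""
    re = np.real(eigvals)
    if np.all(re < 0):
        return 'stable'
    if np.all(re > 0):
        return 'unstable'
    return 'saddle'

kind = classify(eigvals)   # different signs -> saddle
```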
How to infer the stability of fixed points of a 2D dynamical system using trace and determinant of the Jacobian matrix of that system?
The diagram in the lecture (with tau and delta as axes) can be used to determine the stability (or at least illustrates the idea behind it). For this you need the eigenvalues of the Jacobian matrix at a specific fixed point (obtained by linearization). The following formula will yield the 2 eigenvalues for the corresponding fixed point: lambda_{1,2} = (tau ± sqrt(tau^2 − 4·delta)) / 2
You calculate the Jacobian matrix of the system for the general case. Usually, you will have your variables in the matrix. To check at a certain fixed point (whose location you already know), you just replace the variables in the Jacobian accordingly. Then you calculate:
For 2×2 matrix (Jacobians are always square)
Trace: tau = a + d
Determinant: delta = a·d − b·c
connection to eigenvalues:
Determinant < 0 → saddle
det > 0:
 trace < 0 → stable
 trace > 0 → unstable
tau^2 − 4·det < 0 ↔ complex eigenvalues ↔ rotating trajectory
To understand this, try to understand the illustration in the lecture and the lambda 1,2 formula.
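The rules above can be wrapped into a small helper; the saddle matrix is the example from this section, while the “stable spiral” matrix is made up for illustration:

```python
import numpy as np

def classify_tau_delta(J):
    """Classify a 2D fixed point from trace and determinant of the Jacobian."""
    tau = np.trace(J)
    delta = np.linalg.det(J)
    if delta < 0:
        return 'saddle'
    kind = 'stable' if tau < 0 else 'unstable'
    # tau^2 - 4*delta < 0  <->  complex eigenvalues  <->  rotating trajectory
    rotating = tau**2 - 4 * delta < 0
    return kind + (' spiral' if rotating else ' node')

# the saddle example from above: tau = 6, delta = -16
saddle = classify_tau_delta(np.array([[7.0, 3.0], [3.0, -1.0]]))
# a made-up matrix with tau = -2, delta = 2, so tau^2 - 4*delta = -4 < 0
spiral = classify_tau_delta(np.array([[-1.0, -1.0], [1.0, -1.0]]))
```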
Synchronisation
conceptual
What is the idea behind „synchronisation“ of neurons?
Depending on their dynamics, firing frequency and “amplitude”/strength of synaptic connection, two periodically firing, coupled neurons may synchronize. This means one or both of them slightly modulate their firing frequency and/or periodic phase until a point where, for all the future, the current state x_1(t) can be determined as a function of the other neuron’s state, f(x_2(t)).
Which types of synchronisation of two neurons exist?
 generalized: The neurons’ behaviors are temporally related, but the relation may be anything: x_1(t) = F(x_2(t))
 classical: Two oscillatory neurons fire with the same rhythm
 identical: Both neurons’ outputs are the same: x_1(t) = x_2(t)
 phase: Two irregular oscillators: the difference between their phases is bounded, their amplitudes are uncorrelated.
 lag: One neuron’s state depends on the other, but with a certain time delay/lag: x_1(t) = x_2(t − τ)
 noise-induced: the oscillation of both neurons is noise-induced; it does not necessarily mean that they fire at the same time or with the same amplitude/rhythm, it just says that they both respond to a specific input in the same way
What is the idea behind an „isochron“?
In the lecture we encountered isochrons in continuous 2D phase planes. (They can presumably also be defined as manifolds of higher-dimensional spaces.) Imagine an attractor limit cycle: all states in its neighbourhood converge to it over time. During this process (and as the system might actually reach the orbit only in infinity) the attracted system already orbits with the same period as the limit cycle. This means that after a certain while, in terms of the phase of the periodic orbit it will not matter whether you started right on the orbit or a bit next to it. An isochron is a curve through the phase plane, i.e. a set of states, containing all those states that have the same phase on the orbit. Or, put a little more graphically, it’s the set of all states that will eventually turn out to be the same point circling on the orbit.
As the phase space is continuous and the orbit is continuous, there are infinitely many isochrons (one corresponding to each point on the orbit), which are continuous both within themselves and between one another. If you see one or more isochron curves plotted in the phase plane, remember that these are only a selected few, drawn to illustrate the periodic behavior.
What is the idea behind a „phase response curve“?
For a phase response curve, you plot the phase shift caused by an incoming stimulus (synaptic drive) against the phase of the periodic system at the moment the stimulus sets in. In other words, for a given stimulus strength the PRC will tell you: if I stimulate the periodic system at a certain phase, how much will the system’s phase be permanently shifted from then on?
 system periodically oscillating
 a pulse influences the system
 system is no longer on the limit cycle
 but we can only define the phase of the system when it’s on the limit cycle
 the isochron is the solution
 it shows which point on the limit cycle all states landing on that isochron converge back to
 open question: how much phase difference will there be after this?
 peak of the phase response curve (e.g. at T/2)
 PRC = theta_new − theta
How do phase response and „phase transition curves“ differ?
They are basically two different ways of showing the same thing:
For a stimulus at a certain phase,
PRC shows the phase difference caused by it (see above), while
PTC shows the new phase after the shift: PTC(theta) = (theta + PRC(theta)) mod T.
Thus, PRC is good as long as shifts are small. Once the shifts get close to a full period, one better switches to PTC so the diagram doesn’t get too “jumpy”.
(see lecture for illustration)
What is the idea behind a „Poincare phase map“?
PRC and PTC illustrate the effect of a single incoming stimulus on the phase of a system.
 solution for stable synchronization is on diagonal line
 distance between ideal & current synchronization
 better for the effects of a tonic external pulse
 shows how far the phase differs from the “old” stimulus
 after some time it usually stays around one value, this is due to synchronization of the neuron with the incoming stimulus
But what about pulsed stimulation?
The Poincaré phase map relates two consecutive phases of a system: theta_n is the phase of the system when the nth pulse arrives. It is mapped to theta_{n+1}, i.e. the “new” phase when the next pulse arrives, kinda in the fashion of a PTC: theta_{n+1} = (PTC(theta_n) + Ts) mod T,
with Ts being the period of the pulsed stimulation.
We need to know the phase at the first pulse, theta_1, and the pulse period, and can then determine all future phases.
As to the plot: It helps to first have a look at https://en.wikipedia.org/wiki/Cobweb_plot
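A minimal sketch of iterating such a phase map, with a toy sinusoidal PRC (an assumption, chosen so that a stable locked phase exists; it must be able to cancel the period mismatch Ts − T):

```python
import math

T = 1.0    # intrinsic period of the oscillator (arbitrary units)
Ts = 1.1   # period of the pulsed stimulation

def prc(theta):
    """Toy phase response curve, assumed for illustration."""
    return -0.3 * math.sin(2 * math.pi * theta / T)

def phase_map(theta):
    # Poincare phase map: theta_{n+1} = (PTC(theta_n) + Ts) mod T,
    # with PTC(theta) = theta + PRC(theta)
    return (theta + prc(theta) + Ts) % T

theta = 0.2          # phase at the first pulse
history = [theta]
for _ in range(200):
    theta = phase_map(theta)
    history.append(theta)

# after transients the phase settles around one value: 1:1 synchronization
locked = abs(history[-1] - history[-2]) < 1e-6
```

At the locked phase the PRC exactly cancels the period mismatch, which is the fixed point of the map (the cobweb plot linked above visualizes this iteration).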
procedural
Phase space
How to construct a phase space of a twodimensional dynamical system?
Calculate the nullclines and fixed points. Choose the axis scales accordingly and draw them into the plot.
Determine stability of the fixed points analytically or cheat a little by calculating the derivatives at several points in their vicinity (make sure to check if they are saddles). You can now graphically infer the stability of the fixed points and also draw the derivatives you calculated into the plot as arrows.
Neuron models
conceptual
What is special about the Hodgkin-Huxley model?
The Hodgkin-Huxley model is a very detailed model: it has 11 parameters and depends nonlinearly on 6 functions of the voltage. It can hardly be fit to data or used to simulate many neurons, but it’s a good starting point to understand their behavior.
What is special about the INa,P model?
Starting from the Hodgkin-Huxley model, we keep only the fast channels (throw out the K-channels). The benefit is that we can now assume the channels change so fast that they appear to change “instantaneously” on our timescale (“adiabatic assumption”). This further simplifies the model, so eventually we get a 1D model! (Check the lecture for the formulas to see exactly what’s happening.) Keep in mind that with a 1D model you can only model the upstroke, but not the repolarization!
What is special about the INa,P + IK model?
The INa,P + IK model also includes the K-channels and suffices as a minimal model of both up- and downstroke.
What is special about the FitzHugh-Nagumo model?
The FitzHugh-Nagumo model replaces the opening probability function of the Na-channels with a V³ dependence. It also replaces the opening probability function of the slower K-channels with a slow recovery variable w. This leaves us with two variables (V, w) and 4 parameters. However, you can still simulate both up- and downstroke (2D model), and it’s surprisingly close to the numerically much more complex INa,P + IK model.
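A minimal Euler-method sketch of this model (the parameter values 0.08, 0.7, 0.8 and the input amplitudes are common textbook choices, not taken from the lecture):

```python
import numpy as np

def simulate_fhn(I_ext, T=200.0, dt=0.1, v0=-1.0, w0=-0.5):
    """Euler integration of the FitzHugh-Nagumo model:
        v' = v - v**3/3 - w + I_ext
        w' = 0.08 * (v + 0.7 - 0.8*w)
    (0.08, 0.7, 0.8 are the common textbook parameter values)."""
    n = int(T / dt)
    v, w = v0, w0
    vs = np.empty(n)
    for i in range(n):
        dv = v - v**3 / 3 - w + I_ext
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v += dt * dv
        w += dt * dw
        vs[i] = v
    return vs

rest = simulate_fhn(I_ext=0.0)     # settles into the resting state
spiking = simulate_fhn(I_ext=0.5)  # enough drive: tonic spiking on a limit cycle
```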
What is special about the Izhikevich model?
The Izhikevich model is a 2D model with a twist: instead of simulating the dynamics of the steep downstroke, it simply resets the membrane voltage after an upstroke (i.e. when the membrane voltage reaches a certain level) to a given value. AHP dynamics are simulated by adding a spike-triggered net current to the recovery variable.
As it turns out, this model can fit all typical types of behavior observed in human cortical cells with only 4 parameters surprisingly well (see lecture for illustration).
How to reduce model complexity while maintaining essential properties of the model?
Reducing model complexity means removing as many variables and functions (especially nonlinear ones) from your model as possible. Of course you want to find the sweet spot between simplicity and good results. So you check for parts of your model that are computationally demanding and still have a small effect size, isolate them, and simply discard them or replace them with a simpler function (or some other trick, like the voltage reset mentioned above).
What is the idea behind an „integrator“?
An integrator in our case is a neuron that adds up incoming stimuli. Of course, as long as the threshold is not reached, the membrane potential decays back to the resting state; but if the incoming pulses arrive fast enough, several (too) small stimuli can still make the potential reach the threshold.
What is the idea behind a „resonator“?
A resonator in our case is a neuron that prefers a temporal impulse pattern with a certain frequency. Only if the frequency is about right the stimuli are added up.
How to determine whether a particular neuron model behaves like a resonator?
In the case of a 2Dsystem, resonant behavior is caused by the resting state being an oscillatory attractor: Only if the phase is right, a small stimulus moves the system away from the attractor/ the resting state. As one stimulus is not enough, the phase needs to be always right when each pulse arrives.
How to determine whether a particular neuron model behaves like an integrator?
In this case, the resting state must be a non-oscillatory stable fixed point: as long as the pulses arrive fast enough (relative to their size) to overpower the attraction of the fixed point, the system is gradually moved away from the attractor until it is outside its domain (i.e. toward the spiking state, in our case).
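A leaky-integrator caricature of this (all numbers are illustrative assumptions): the same pulse size makes the neuron fire when pulses come fast, but not when they come slowly.

```python
def lif_spikes(pulse_interval, n_steps=500, dt=1.0, tau_m=10.0,
               v_rest=0.0, v_thresh=1.0, pulse_size=0.4):
    """Leaky integrate-and-fire sketch: each pulse kicks v up, the leak pulls
    it back toward rest; pulses summate to threshold only if they come fast."""
    v = v_rest
    spikes = 0
    for step in range(n_steps):
        v += dt * (-(v - v_rest) / tau_m)  # leak toward the resting state
        if step % pulse_interval == 0:
            v += pulse_size                # incoming (subthreshold) pulse
        if v >= v_thresh:
            spikes += 1
            v = v_rest                     # reset after the spike
    return spikes

fast = lif_spikes(pulse_interval=2)   # pulses every 2 steps: they summate
slow = lif_spikes(pulse_interval=50)  # pulses every 50 steps: the leak wins
```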
Mathematics
conceptual: Differential Analysis
What is an ordinary differential equation?
A differential equation relates a function to its derivative(s).
In the case of an ordinary differential equation (ODE), the unknown function depends on a single variable.
If it depends on several variables (and partial derivatives appear), the equation is called a partial differential equation (PDE).
What is a linear differential equation?
A DE is not linear if
 the function is multiplied by itself or a derivative of itself
 or if the derivative is multiplied by another derivative
What is the idea behind the „euler method“ for solving ordinary differential equations?
You need a starting point for your ODE, and you choose a step size.
Calculate the tangent vector at the starting point and stretch it to step size.
Pretend the new point is still on the solution curve, calculate its tangent vector times the step size. Repeat.
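The three steps above as code, tested on x' = −x with the known solution e^(−t):

```python
import math

def euler(f, x0, t0, t_end, h):
    """Explicit Euler: follow the tangent for one step, then repeat
    as if the new point were still on the solution curve."""
    t, x = t0, x0
    while t < t_end - 1e-12:
        x += h * f(t, x)   # tangent vector stretched to step size h
        t += h
    return x

# test problem x' = -x, x(0) = 1; exact solution x(t) = exp(-t)
approx = euler(lambda t, x: -x, x0=1.0, t0=0.0, t_end=1.0, h=0.001)
exact = math.exp(-1.0)
```

The error shrinks roughly linearly with the step size h, which is why Euler is the simplest but not the most accurate method.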
What is the idea behind the method of „local linearization“ of a point in the phase plane?
The idea is to zoom in on this point far enough that the phase plane around it can be treated as if it were linear, without incurring too much error.
What is the idea behind “eigenvectors” (and their corresponding „eigenvalues“) of a matrix?
Any linear function can be expressed as a matrix and vice versa. Usually it’s hard to determine what the matrix does from only looking at it. Thus, we look for special cases, i.e. vectors where all the matrix/function does is stretch them by a scalar: A·v = lambda·v.
Such vectors are called eigenvectors for a corresponding eigenvalue. (See Mathe für Anwender II Script for more details.)
How to analytically solve separable, firstorder differential equations?
Separable means you can put the ODE into the following form (e.g. using clever substitutions): dy/dx = g(x)·h(y).
Then ∫ 1/h(y) dy = ∫ g(x) dx
is an implicit solution. To fix the constant of integration you will usually need an initial value y(x_0) = y_0.
Again, MfA II Script for more details.
How to solve a linear system of equations?
The coefficients of the linear system of equations can be written as a matrix. You then calculate the eigenvalues and eigenvectors of the resulting matrix.
If v is an eigenvector and lambda an eigenvalue, then
x(t) = v·e^(lambda·t) is a solution of the system.
For 2D systems we get the following general solution: x(t) = c_1·v_1·e^(lambda_1·t) + c_2·v_2·e^(lambda_2·t)
More details:
http://tutorial.math.lamar.edu/Classes/DE/RealEigenvalues.aspx
How to compute the trace of a square matrix?
add up along the diagonal (top left to bottom right) to get the trace: tr(A) = a_11 + a_22 + … + a_nn (for a 2×2 matrix: tau = a + d)
How to compute the determinant of a 2D square matrix?
determinant (2×2): delta = a·d − b·c
How to compute the characteristic polynomial of a square matrix?
Characteristic polynomial: det(A − lambda·I) = 0. You can now “sort” by the powers of lambda such that you get your polynomial. Starting from there, you can also rearrange the equation into a factorization, i.e. (lambda − lambda_1)·(lambda − lambda_2) = 0. Then you can read off the roots (“Nullstellen”) directly.
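Numerically, numpy can produce the characteristic polynomial’s coefficients and its roots directly (shown for the example matrix from the stability section above):

```python
import numpy as np

# the example matrix from the stability section above
A = np.array([[7.0, 3.0],
              [3.0, -1.0]])

# coefficients of det(A - lambda*I), highest power first:
# lambda^2 - 6*lambda - 16
coeffs = np.poly(A)

# the roots ("Nullstellen") of the characteristic polynomial are the eigenvalues
roots = np.roots(coeffs)
```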
Bifurcation
What is the idea behind the concept of a „bifurcation“?
A bifurcation occurs when a small, smooth change to a parameter changes the behavior of the system qualitatively. Depending on your viewpoint, this means it will change the number of solutions, or the number/stability of fixed points.
 Bifurcation theory is the mathematical study of changes in the qualitative or topological structure of a given family of systems.
 bifurcation occurs when a small smooth change is made to the parameter values (bifurcation parameters)
 the step in between bistability and monostability: the unstable equilibrium gets lowered until the stable equilibrium (attractor) and the unstable equilibrium “overlap” and cancel each other out.
 Bifurcations occur in both continuous systems (ODEs, DDEs or PDEs) and discrete systems
What is a „bifurcation parameter“?
The bifurcation parameter is exactly this parameter that the system is very sensitive to (see above).
 e.g. the r value in the logistic map, but any factor having an influence on the bifurcation behavior can be one
 varying it results in a bifurcation diagram, which is a diagram showing all the forks at bifurcation points
What is a „bifurcation diagram“?
horizontal axis: value of “critical” bifurcation parameter
vertical axis: long-term values (“solutions”) of the system. Usually only solutions corresponding to a stable node are shown.
The idea is to illustrate that for given values of the bifurcation parameter (r in the logistic map) an infinitesimal change to it will change the number of solutions (in case of the logistic map from 1 to 2 to 4 etc.) and thus the qualitative behavior of the system.
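A sketch of how such diagram data is produced for the logistic map: iterate, discard the transients, record the distinct long-term values (the iteration counts are illustrative choices).

```python
def long_term_values(r, n_transient=1000, n_keep=64, x0=0.5):
    """Iterate the logistic map x -> r*x*(1 - x), discard the transient,
    and return the distinct long-term values (rounded for set membership)."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(n_keep):
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return sorted(seen)

one = long_term_values(2.9)   # below r = 3: a single stable solution
two = long_term_values(3.2)   # just above: the solution has split into two
```

Sweeping r over a fine grid and plotting these value sets against r yields exactly the forking diagram described above.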
What is the idea behind a „saddle-node bifurcation”?
Remember that even though the bifurcations below are described in only one direction, they can always occur in both directions (i.e. a saddle-node bifurcation = “node-saddle” bifurcation)
 Imagine a parameter change moving an unstable node and a stable node closer together. The saddle-node bifurcation occurs at the moment they „collide“ and thus form a saddle node („stable“ in one direction, „unstable“ in the other). Further change of the parameter will make the saddle disappear.
 Professional answer: if magnitude of injected current or any bifurcation parameter changes, a stable equilibrium corresponding to resting state is approached by an unstable equilibrium. They coalesce and annihilate each other. Since the resting state no longer exists, the trajectory describing the evolution of the system jumps to the limit cycle attractor, indicating that the neuron starts to fire tonic spikes.
What is the idea behind a „saddle-node on invariant circle (SNIC) bifurcation“?
 Special type of saddlenode bifurcation: here, unstable and stable equilibrium both are on a limit cycle (invariant circle). They move towards each other and annihilate each other, forming a cycle without resting state → tonic spikes
 Professional answer: similar to saddle node bifurcation, except that there is an invariant circle at the moment of bifurcation, which then becomes a limit cycle attractor
What is the idea behind „subcritical AndronovHopf bifurcation“?
 a small unstable limit cycle shrinks to a stable equilibrium and makes it lose stability. Because of instabilities, the trajectory diverges from the equilibrium and approaches a large-amplitude spiking limit cycle (this is what you always see in the lecture’s diagrams, but keep in mind that this large-amplitude limit cycle doesn’t come from the bifurcation!) or some other attractor
What is the idea behind „supercritical AndronovHopf bifurcation“?
 the stable equilibrium loses stability and gives birth to a small-amplitude limit cycle attractor. As the magnitude of the injected current increases, the amplitude of the limit cycle increases until it becomes a full-size spiking limit cycle
The four bifurcation types can be arranged in a 2×2 scheme, bistable (coexistence of resting and spiking states) vs. monostable (no coexistence of resting and spiking states):
Integrator (no subthreshold oscillations):
 saddle-node → bistable. Threshold = unstable node, resting state = stable node, limit cycle = spike. Since the stable node and the limit cycle are apart, there are separate resting and firing states. Integrates because repeated stimuli drive the system away from the resting state no matter in what pattern they come in.
 saddle-node on invariant circle → monostable. The stable node/resting state is on the cycle → no separate states.
Resonator (subthreshold oscillations):
 subcritical Andronov-Hopf → bistable. Stable fixed point in the middle = resting state, unstable limit cycle = threshold, stable limit cycle = firing state. Resonates because the stable node is oscillatory, see “resonator”.
 supercritical Andronov-Hopf → monostable. Only the limit cycle is stable → always firing. Resonates for the same reason.
How to simulate the behavior of a neuron model, given its parameters and inputs?
 Hodgkin-Huxley model: complex, uses all membrane voltages & co, but really computationally demanding
 INa,P / INa,P+IK: better, already runs faster
 Izhikevich: fastest method, just resets the voltage to a fixed reset value as soon as it hits a specific threshold
# author Justin Guese
# email [email protected]
# version 11.07.2015
from matplotlib.pyplot import figure, ion, clf, plot, pause
from numpy import random

T = 1000                 # total simulation time span
tau = 1                  # simulation step size
n = int(round(T / tau))  # number of simulation time steps
name = '1 - tonic spiking'

# Izhikevich model parameters (tonic spiking)
a = 0.02
b = 0.2
c = -65      # reset value of the membrane voltage
d = 6        # spike-triggered increment of the recovery variable

I = random.choice([0, 1], size=(n,), p=[0.9, 0.1])  # random input pulses
V = -70.0    # membrane voltage
u = b * V    # recovery variable
speed = 0.0001
VV = []      # output variables for solutions, i.e. neuron trajectories
uu = []

for it in range(n):
    # differential equations solved step by step numerically:
    # V changes with the input current I plus the quadratic voltage term,
    # minus the recovery variable u
    V = V + tau * (0.04 * V**2 + 5 * V + 140 - u + I[it])
    u = u + tau * a * (b * V - u)
    # spike and reset upon crossing the threshold
    # (>= instead of ==, since V overshoots 30 and never hits it exactly)
    if V >= 30:
        VV.append(30)
        V = c
        u = u + d
    VV.append(V)
    uu.append(u)

figure()
ion()
V2 = []
for t in range(len(VV)):
    clf()
    # plotting
    V2.append(VV[t])
    plot(V2, label='Voltage')
    pause(speed)
Which are fundamental classes of firing behavior for mammalian cortical neurons?
 Regular spiking neurons
 firing in a regular pattern (=tonic firing) in response to injected pulses of DC current
 usually Class 1 excitability (interspike frequency vanishes when the amplitude of the injected current vanishes)
 For comparison: Class 2 excitability – direct fast firing at special DC input
 spiny stellate cells in layer 4, pyramidal cells in 2,3,4 and 6
 Intrinsically bursting neurons
 burst of spikes in beginning
 after that tonic (regular spiking)
 excitatory pyramidal neurons found in all cortical layers, most abundant in layer 5
 Chattering neurons
 high frequency bursts of spikes with relatively short interburst periods
 sounds like a helicopter
 visual cortex of adult cats, spiny stellate or pyramidal neurons in layers 2,3 (mostly),4
 Fast spiking
 high frequency tonic spikes
 relatively constant period
 Class 2 excitability
 when the magnitude of the injected current decreases below a certain critical value, they fire irregular spikes (not firing regularly anymore)
 sparsely spiny or aspiny nonpyramidal cells along horizontal direction of neocortex
 Lowthreshold spiking neurons
 able to fire lowfrequency spike trains
 fast firing with short interspiking periods in the beginning
 then decreasing AHPs and amplitudes
 excitability class not determined
 nonpyramidal interneurons providing local inhibition along vertical direction of neocortex
 Late spiking neurons
 delayed spiking with latencies as long as 1 sec
 due to a voltage ramp exhibited in response to injected DC current
 nonpyramidal interneurons in neocortex, especially layer 1
http://www.dis.uniroma1.it/~gori/Sito_GG/Modellistica_files/2003%20Net.pdf
Reservoir Computing
Multiple Choice Collection
TRUE or FALSE?
 Reservoir Computing has universal computational properties
 A reservoir computer relies on stable states like fixpoint attractors
 With a reservoir computer any arbitrary function with fading memory can be approximated
 Reservoir computing, like other attractor-based methods (e.g. Hopfield networks), stores patterns by means of learned attractors
 Reservoir computing is an any-time computation
 Standard reservoir computing predicts receptive fields in V1
 Reservoir computing relies on transient dynamics
 Representations of memorized stimuli in V1 are super sparse and unique
 A reservoir computer can act as a generative model once the readout is trained
 Patterns that form representations of past inputs vary very little across trials
 The output of the reservoir can be used as feedback into the reservoir
 Non-linearity arises by non-linear links between linear nodes
 Rather complex activity patterns store past and recent stimuli in V1
 Nodes are always memory-free
 Reservoir states are non-linear combinations of past and recent states and inputs
 To study the forgetting strength we generate random states and compare their feedback with the decay rate
 Non-linear nodes implement non-linear computation
 The spectral radius should be exactly one
 The network mediates the non-linear combination of states and inputs
 The readout is memory-free
 The echo state property is guaranteed if forgetting is stronger than feedback
 The network effect can be interpreted as a linear mapping of the reservoir states onto the reservoir at t+1
 Feedback is strongest when it is along a principal direction of one of the eigenvectors
conceptual
What is the idea behind „reservoir computing“?
In complex networks of neurons we observe that the dynamics are dominated by intrinsic activity. To model this, an input signal is mapped nonlinearly into a higher-dimensional reservoir state, thus performing a feature expansion. (It can be imagined as a recurrent neural network with one neuron for each dimension.) The model’s behavior is then simulated in discrete time steps. Since the neurons are interconnected, the current time step depends on the current input plus the liquid state at the last time step (which in turn depends on the input at t−1 plus the state at t−2, and so on). To obtain output, we train a function (e.g. by linear regression) to linearly transform the liquid state into an output of the desired dimensionality.
 Recurrent neural network (the connections form cycles)
 Mapping of input space to higher dimensional non‐linear space with memory component
 Interpretation of system state done with training of the linear model and learning of output weights (usually called W_out)
 Only the linear system must be trained
 more efficient
 online learning possible
 Biologically plausible
 Many possible enhancements
 e.g.
 Nonrandom initialization of weights
 Feed output back into reservoir → prolongs memory
 SORN: Self organizing recurrent neural network
 inhibitory neurons
 …
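A minimal echo state network sketch along these lines (all sizes, scalings and the delay-1 toy task are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# fixed random input weights and reservoir weights; only W_out gets trained
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with the input sequence u and collect the states."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        # the state depends on the current input plus the previous state (memory)
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states[t] = x
    return states

# toy task: reproduce the input delayed by one step (uses the reservoir's memory)
u = rng.uniform(-1, 1, size=500)
X = run_reservoir(u)
target = np.roll(u, 1)
target[0] = 0.0

# train the linear readout by least squares (ridge regression would be more robust)
W_out, *_ = np.linalg.lstsq(X[100:], target[100:], rcond=None)
pred = X @ W_out
err = np.sqrt(np.mean((pred[100:] - target[100:]) ** 2))
```

Only W_out is trained, exactly as the bullet points describe; the scaling of the spectral radius below 1 is what makes forgetting stronger than feedback (echo state property).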
What is the difference between a „Liquid State Machine“ and an „Echo State Network“?
The nodes of a LSM are modelled by a spiking function, while the ones of an ESN are modelled using any continuous, monotonic function on a closed interval.
Echo State Networks (Jaeger)
 Use rate coding
 optional: Directly from input to output
 optional: Selfconnections/leakage rate
Liquid State Machines (Maass)
 Use spiking neurons
 optional: inhibitory neurons
Why is PRC1 & PRC2 in combination with a resonator or integrator worse?
At the stable fixed point, the phase change compensates the difference in period length.