A synapse is potentiated when the pre-synaptic neuron fires one time step just before the post-synaptic neuron, and is depressed when a post-synaptic spike precedes a pre-synaptic spike by a single time step:

\[ \Delta w_{ij}(t+1) = \eta_{\mathrm{sp}} \left[ x_j(t)\, x_i(t+1) - x_i(t)\, x_j(t+1) \right], \]

where $\eta_{\mathrm{sp}}$ is the synaptic plasticity learning rate, set to 0.001. To prevent weights from switching signs or growing uncontrollably, we enforce hard bounds such that the weights remain within the interval $[0, 1]$. Competition between synapses under STDP causes neurons whose synapses won the competition to fire consistently, while neurons whose synapses lost remain constantly silent [57]. To counteract this pathological state, the time-averaged firing rate of each neuron is regulated through homeostatic modification of its excitability threshold using intrinsic plasticity (IP) [6,7]:

\[ \Delta h_i(t+1) = \eta_{\mathrm{ip}} \left[ x_i(t+1) - k/n \right], \]

where $\eta_{\mathrm{ip}}$ is the intrinsic plasticity learning rate, set to 0.001. This rule uses subtractive normalization to pull the time-averaged firing rate of each neuron toward the population firing rate $k$ (both rules are combined with the network dynamics in the consolidated sketch at the end of this section).

Network Architecture

In this paper, the model recurrent network is of the k-Winner-Take-All (kWTA) type [27]: it consists of $n$ memoryless binary neurons, of which only $k$ are active at any time. The discrete-time dynamics of the recurrent network at each time step $t \in \mathbb{Z}^+$ is given by

\[ x(t+1) = f\left( w\, x(t) - h + d(t) \right), \]

where $x \in \{0, 1\}^n$ is the network state. The nonlinear function $f$ sets the $k$ units with the highest activities to 1 (spiking) and the rest to 0 (silent). As such, the population firing rate is held constant at $k$, and there is no need to introduce inhibitory neurons to balance excitation and inhibition. Recurrent synaptic efficacy is defined by the weight matrix $w \in [0, 1]^{n \times n}$, with $w_{ij}$ being the efficacy of the synapse connecting neuron $j$ to neuron $i$. Self-coupling is avoided by setting the diagonal elements $w_{ii}$ to 0. $h \in \mathbb{R}^n$ defines the neuronal firing thresholds that modulate the neurons' resistance to firing and, hence, their excitability. $d \in \mathbb{R}^n$ is the external drive, whose dynamics depend on the task performed. More formally, the set of possible network states is a metric space:

Definition 1. Given the set $Y = B^n = \{0, 1\}^n$ of all binary vectors of size $n$, we define the Hamming metric by the function $d_H(x, y) = \sum_{i=1}^{n} |x_i - y_i|$.

Computational Tasks

Neural circuits in different brain regions adapt to best serve each region's functional purpose. To that end, we constructed three tasks, each of which resembles in spirit the demands of one such canonical function. Under the stimulation conditions of each task, we then compared the performance, information content, and dynamical response of networks optimized by combining STDP and IP with those of networks optimized by STDP alone or by IP alone.

In all tasks, the network is subject to perturbation by a set of inputs $P$. The receptive fields of non-overlapping subsets of neurons $x^{(p)}$ are tuned exclusively to each input $p \in P$, so that each input $p$ has a corresponding receptive field $x^{(p)} = \mathrm{RF}_p$ in the recurrent neural network. When an input $p$ drives the network, all neurons in $x^{(p)}$ receive a positive drive $d = 0.25$, while the rest, $x \setminus x^{(p)}$, receive none. Readouts are trained on the current network state $x(t)$ to compute a function over input sequences $u_{t_1}^{t_2}(t) = \left( p(t+t_1), \ldots, p(t+t_2) \right)$, where $t_1$ and $t_2$ are the time-lags at which target inputs are applied; positive lags correspond to future inputs and negative lags to past ones, as illustrated in the sketch below.
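As a concrete illustration of this input protocol, the following is a minimal NumPy sketch of how the external drive $d(t)$ and the lagged target sequences could be assembled. Only the drive amplitude of 0.25 and the definition of $u_{t_1}^{t_2}(t)$ come from the text; the network size, number of inputs, receptive-field size, and all function names are our illustrative assumptions.

```python
import numpy as np

n = 200          # network size (illustrative; not specified in this excerpt)
n_inputs = 4     # number of inputs |P| (illustrative)
rf_size = 10     # neurons per receptive field (illustrative)

# Non-overlapping receptive fields: input p drives exactly the neurons in rf[p].
rf = {p: np.arange(p * rf_size, (p + 1) * rf_size) for p in range(n_inputs)}

def external_drive(p, n=n, amplitude=0.25):
    """Drive vector d(t) while input p is presented: neurons in RF_p
    receive a positive drive of 0.25, all other neurons receive none."""
    d = np.zeros(n)
    d[rf[p]] = amplitude
    return d

def lagged_target(p_seq, t, t1, t2):
    """Target sequence u_{t1}^{t2}(t) = (p(t+t1), ..., p(t+t2)) over which a
    readout trained on x(t) computes its function; negative lags select past
    inputs, positive lags future ones."""
    return tuple(p_seq[t + tau] for tau in range(t1, t2 + 1))

# Example: with inputs p(0..4) = 2, 0, 1, 3, 1, the pair of past inputs
# at t = 3 with lags (t1, t2) = (-2, -1) is (p(1), p(2)) = (0, 1).
p_seq = [2, 0, 1, 3, 1]
assert lagged_target(p_seq, 3, -2, -1) == (0, 1)
```

Because the receptive fields are non-overlapping, each drive vector activates a disjoint subset of neurons, matching the exclusive tuning described above.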
We restrict time-lags $t$ to the range
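Putting the pieces of this section together, here is a minimal, self-contained sketch of one update step of the model: the kWTA dynamics $x(t+1) = f(w\,x(t) - h + d(t))$, the STDP rule with hard bounds on $[0, 1]$, the subtractive IP rule, and the Hamming metric of Definition 1. The update equations, learning rates (0.001), weight bounds, and zero diagonal follow the text; the sizes and initialization are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 200, 20                     # n binary neurons, k of them active (illustrative sizes)
eta_sp = eta_ip = 0.001            # STDP and IP learning rates from the text

w = rng.uniform(0.0, 0.1, (n, n))  # recurrent weights, kept within [0, 1] (illustrative init)
np.fill_diagonal(w, 0.0)           # no self-coupling: w_ii = 0
h = rng.uniform(0.0, 1.0, n)       # excitability thresholds (illustrative init)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = 1.0   # initial state with k active units

def kwta(a, k):
    """f: set the k units with the highest activities to 1, the rest to 0."""
    out = np.zeros_like(a)
    out[np.argsort(a)[-k:]] = 1.0
    return out

def hamming(x, y):
    """Hamming metric on B^n = {0,1}^n (Definition 1)."""
    return int(np.abs(x - y).sum())

def step(x, w, h, d):
    """One discrete time step: kWTA dynamics, then STDP, then IP."""
    x_new = kwta(w @ x - h + d, k)

    # STDP: Dw_ij = eta_sp [x_j(t) x_i(t+1) - x_i(t) x_j(t+1)];
    # potentiation when j (pre) fires one step before i (post), depression
    # when the order is reversed, with hard bounds keeping w in [0, 1].
    w = np.clip(w + eta_sp * (np.outer(x_new, x) - np.outer(x, x_new)), 0.0, 1.0)
    np.fill_diagonal(w, 0.0)

    # IP: Dh_i = eta_ip [x_i(t+1) - k/n]; subtractive normalization pulling
    # each neuron's time-averaged rate toward the population rate.
    h = h + eta_ip * (x_new - k / n)
    return x_new, w, h

# Run a few spontaneous steps (d = 0) and track how far the state moves.
d = np.zeros(n)
for _ in range(5):
    x_prev = x
    x, w, h = step(x, w, h, d)
    print(hamming(x_prev, x))
```

Note that both plasticity rules act on every time step, so the synaptic competition and the homeostatic threshold adjustment described above unfold concurrently rather than in separate phases.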