# Autoassociative spatial-temporal pattern memory based on stochastic spiking neurons

1. INTRODUCTION

A spiking neuron is a neuron model that operates on a continuously arriving multidimensional stream of current impulses (spikes), i.e. a sequence of identical point events.

We assume that spike-sequence processing is one of the basic neural operations carried out in the real neurons of living organisms. A spiking neuron can perform useful tasks in the processing of multidimensional impulse patterns. Although even a single neuron can detect a spatial-temporal pattern (Sinyavskiy et al., 2010), a spiking neural network is expected to be a more powerful processor of spiking patterns. An interesting and practically significant task is pattern retention: in this task a spiking neural network should serve as an autoassociative memory unit capable of storing patterns and restoring them from initial clues.

Spatial-temporal memory schemes have been proposed by many researchers, but some of them lack biological plausibility (for example, see (Lipo Wang, 1999)). In this paper we propose a spatial-temporal memory scheme based on biologically plausible spiking neuron models (Gerstner et al., 2002).

2. EXTENDED SRM0 MODEL

We use an extended stochastic Spike Response Model (SRM0) (Gerstner et al., 2002; Sinyavskiy et al., 2009) as the base element of the memory network. The SRM0 neuron is a simple one-dimensional voltage integrator. At every moment a spike arrives on a specific input channel of the neuron, a fixed set of postsynaptic potentials is generated. These potentials have the form of alpha functions with various time constants. Each alpha function of an input channel has its own weight value, and the weighted sum of the alpha functions constitutes the resulting postsynaptic potential of that channel. By adjusting the alpha-function weights it is possible to shape the total postsynaptic potential of the i-th input channel, in particular to control the magnitude \Delta u_i and the time \Delta t_i of its maximum.

A neuron with three alpha functions per input channel was used in all experiments in this work. Thus, if a neuron has n input channels, all of its weights W can be arranged in an n x 3 matrix. The neuron's membrane voltage at a given time is the sum of the postsynaptic potentials of all input channels plus the value of the refractory function:

u(t) = u_0 + \eta(t - t^{out}) + \sum_{i=1}^{n} \sum_{j=1}^{3} w_{ij} \sum_{k} a_j(t - t^{i}_{k})    (1)

where a_j(t) are the alpha functions, \eta(t) is the refractory function, u_0 is the initial value of the membrane potential, t^{i}_{k} are the spike arrival times on the i-th input channel, and t^{out} is the time of the last output spike. After every output spike the membrane voltage is reset to a refractory value u_refr.
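The voltage model of eq. (1) can be sketched in Python. This is a minimal illustration, not the authors' implementation: the function names, the alpha-function form (t/tau)·exp(1 - t/tau) normalized to peak 1 at t = tau, and all parameter values are our assumptions, and the refractory term is omitted for brevity.

```python
import numpy as np

def alpha(t, tau):
    """Alpha-function PSP kernel: zero for t <= 0, peaks at value 1 when t = tau."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

def membrane_voltage(t, spike_times, W, taus, u0=0.0):
    """Membrane voltage as in eq. (1), refractory term omitted.

    spike_times: list of arrays, spike arrival times per input channel
    W:           (n_channels, n_kernels) weight matrix
    taus:        time constants of the n_kernels alpha functions per channel
    """
    u = u0
    for i, times in enumerate(spike_times):       # sum over input channels i
        for ts in times:                          # sum over arriving spikes k
            for j, tau in enumerate(taus):        # sum over alpha kernels j
                u += W[i, j] * alpha(t - ts, tau)
    return u
```

With a single channel, a single kernel of weight 1 and tau = 5, a spike at t = 0 drives the voltage to its peak of 1 at t = 5.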

The stochastic component of the neuron model is realized with a stochastic threshold. The intensity (probability density) of spike generation \lambda(u(t)) at time t depends non-linearly (we use a sigmoidal dependence) on how close the membrane voltage u(t) is to a threshold value. The probability density p_T(y_T | \bar{x}_T) of generating the output spike pattern is defined as follows (Pfister et al., 2006):

p_T(y_T | \bar{x}_T) = \prod_{t^{out} \in y_T} \lambda(u(t^{out})) \cdot \exp\left( -\int_{T} \lambda(u(t)) \, dt \right)    (2)

where \lambda(u) is the stochastic threshold function and \bar{x}_T is the input spike pattern.
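The stochastic threshold can be sketched as a sigmoidal escape rate with spikes drawn from the resulting inhomogeneous Poisson process, a discrete-time approximation of eq. (2). The sigmoid's slope, threshold and maximal rate below are our own illustrative choices, not values from the paper.

```python
import numpy as np

def intensity(u, u_thresh=1.0, beta=5.0, lam_max=100.0):
    """Sigmoidal escape rate lambda(u): the firing intensity rises smoothly
    as the membrane voltage u approaches the threshold u_thresh."""
    return lam_max / (1.0 + np.exp(-beta * (np.asarray(u, dtype=float) - u_thresh)))

def sample_spikes(u_trace, dt, rng):
    """Sample output spikes on a time grid: in each bin of width dt a spike
    occurs with probability lambda(u) * dt (valid for small dt)."""
    lam = intensity(u_trace)
    return rng.random(len(u_trace)) < lam * dt
```

At u = u_thresh the rate is exactly half of lam_max, and it increases monotonically with the voltage.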

3. NEURON LEARNING WITH TEACHER

A neuron's learning mechanisms are explicitly or implicitly driven by information about how closely the neuron's behavior approaches the behavior that solves the given task. Special signals that carry information only about the task at hand and are used only during learning we call learning signals. A learning spike at time t^{s}_{i} is a request from the teacher for the neuron to generate an output spike at that moment. The neuron should adjust its parameters so that, if similar sensory input conditions recur in the future, it generates the output spike by itself.

For a quantitative description of the performance of the generalized spiking neuron model it is convenient to use the language of information theory (Stratonovich, 1975). The simplest learning-with-teacher task requires the neuron to generate a pattern y_T in reply to an input pattern \bar{x}_T. Such a task can be formulated as the minimization of the particular entropy h_T(y_T | \bar{x}_T) -> min at the point y_T of the space of all output patterns, conditioned on the input pattern \bar{x}_T. To minimize the particular entropy we use gradient descent: the change \Delta w_{ij} of the weight of the j-th alpha function in the i-th input channel during learning is proportional to the derivative of the entropy h(y_T) with respect to w_{ij} (Pfister et al., 2006):

\Delta w_{ij} = \gamma \left( \frac{\lambda'(u(t^{s\_out}))}{\lambda(u(t^{s\_out}))} \sum_{k} a_j(t^{s\_out} - t^{s\_in}_{k}) - \int_{T} \lambda'(u(t)) \sum_{k} a_j(t - t^{s\_in}_{k}) \, dt \right)    (3)

where \gamma is the learning coefficient, \lambda(u) is the stochastic threshold function, a_j are the alpha functions of the i-th input channel, t^{s\_in}_{k} \in x^{i}_{T} are the sensory input spike times, and t^{s\_out} is the target output spike time.
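A discretised form of this gradient can be sketched as follows. The sketch assumes the same sigmoidal rate and alpha kernel as above (our own parameter choices); it computes the log-likelihood gradient of firing exactly at the target time t_out with respect to one weight w_ij, with the first term rewarding a spike at t_out and the integral term suppressing firing elsewhere.

```python
import numpy as np

def alpha(t, tau):
    """Alpha-function PSP kernel, peak 1 at t = tau."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

def weight_gradient(t_grid, u, in_spikes_i, t_out, tau_j,
                    u_thresh=1.0, beta=5.0, lam_max=100.0):
    """Discretised gradient of log p(spike at t_out) w.r.t. w_ij:
    (lambda'/lambda)(u(t_out)) * eps_j(t_out) - integral lambda'(u(t)) eps_j(t) dt
    where eps_j(t) = sum_k a_j(t - t_k^in) is the PSP trace of kernel j."""
    dt = t_grid[1] - t_grid[0]
    eps = sum(alpha(t_grid - ts, tau_j) for ts in in_spikes_i)
    lam = lam_max / (1.0 + np.exp(-beta * (u - u_thresh)))
    dlam = beta * lam * (1.0 - lam / lam_max)   # derivative of the sigmoid rate
    k = np.argmin(np.abs(t_grid - t_out))       # grid index of the target spike
    return (dlam[k] / lam[k]) * eps[k] - np.sum(dlam * eps) * dt
```

The learning rule of eq. (3) then scales this gradient by the learning coefficient gamma.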

4. AUTOASSOCIATIVE MEMORY NETWORK

The memory network consists of n neurons with all-to-all sensory connections, i.e. each neuron receives the activity pattern of all other neurons. There are also n global inputs to the network, on which external sensory stimuli arrive in the form of spiking patterns. Each global input channel is connected to the corresponding neuron: the i-th neuron receives spikes from the i-th global input channel. To use the learning-with-teacher approach (eq. (3)) there must be a source of learning signals that tells a neuron when it needs to generate an output spike. In the autoassociative memory presented here, the global input signals themselves serve as the learning signals: the learning spike from the i-th global input channel arrives at the learning input of the i-th neuron.

The output of each neuron is connected to the respective global output. The overall connectivity of a memory network of 5 neurons is depicted in fig. 1. On the left of fig. 1 is the global input sensory pattern: it consists of 5 impulse channels, with a sample spike pattern shown. The neurons are drawn as large dark circles. Each neuron has 3 connection points drawn as small circles: the top white circle is the sensory dendrite, the bottom white circle is the learning dendrite, and the grey circle on the right is the axon. The global inputs are connected both to the sensory and to the learning dendrites; these connections are drawn as two black lines from the input-pattern connection points to the neuron's connection points. The axon of each neuron is connected to the global output (white triangles) as well as to the sensory dendrites of all other neurons (this all-to-all connectivity is drawn as the background connection lines).
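The wiring just described can be captured in a small connectivity table. This is only a structural sketch (the dictionary layout and field names are ours): neuron i takes its learning input from global channel i, receives the axons of all other neurons as sensory inputs, and drives global output i.

```python
def build_connectivity(n):
    """All-to-all wiring of the memory network of n neurons."""
    return {
        i: {
            "learning_input": i,                                 # global input channel i
            "sensory_inputs": [j for j in range(n) if j != i],   # other neurons' axons
            "global_output": i,                                  # axon also drives output i
        }
        for i in range(n)
    }
```

For n = 5 each neuron has 4 sensory inputs and never listens to its own axon.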

5. RESULTS AND DISCUSSION

We conducted experiments with the described memory network on a spiking pattern storing/restoring task. During learning, the memory network of 5 neurons (fig. 1) received in turn two 5-dimensional spike patterns, depicted on the left of fig. 2. After receiving each input pattern the network was given additional time to finish all transient processes (refractoriness, learning, postsynaptic potential propagation); then the other pattern was presented. This storing phase consisted of 100 iterations, each with two sequential presentations of the input patterns. During this learning process each neuron received learning spikes; each learning spike triggered the learning mechanism (eq. (3)) so that the neuron would generate an output spike in the future under the same sensory context. The whole input learning pattern therefore made all neurons repeat it with their output spikes. Because each learning spike was induced by the global input pattern and arrived simultaneously with the global sensory input spike, the neurons were in effect trained to repeat the global sensory patterns.
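The alternating presentation schedule of the storing phase can be sketched as follows. The pattern objects, the `duration` field and the gap length are hypothetical illustrations, not taken from the paper; the point is only the structure of the phase: 100 iterations, each presenting the two patterns in turn with a settling gap after each one.

```python
def storing_schedule(patterns, n_iterations=100, gap=50.0):
    """Build (start_time, pattern) presentation pairs for the storing phase.

    Each iteration presents every pattern once; after each presentation a
    gap is left so transient processes (refractoriness, learning, PSP
    propagation) can finish before the next pattern arrives."""
    schedule, t = [], 0.0
    for _ in range(n_iterations):
        for p in patterns:
            schedule.append((t, p))
            t += p["duration"] + gap
    return schedule
```

With two patterns this yields 200 presentations, spaced by each pattern's duration plus the gap.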

[FIGURE 1 OMITTED]

[FIGURE 2 OMITTED]

The restoring phase consisted of presenting incomplete spike patterns (clues) to the network (fig. 2, right). After receiving a clue, the network recollected whichever previously stored pattern was closer to the clue. The recollecting ability of the memory network rests on the all-to-all connectivity: when a neuron receives a learning spike, it usually already has a sensory context consisting of the spikes generated earlier by the other neurons, so each neuron learns to generate a spike conditioned on a particular pattern of preceding internal network activity. For example, the middle neuron (index 4) in fig. 2 learned to generate a spike if it receives sequential spikes from the first two neurons (index 2, then index 3) or from the last two neurons (index 6, then index 5). When the trained network received a pattern clue, the neurons tried to detect a learned context pattern in the clue, and if the clue contained it, they sequentially recollected the whole pattern. All simulations were implemented in our own software, "NeuroTeach". Future work will be devoted to applying the described memory networks to preprocessing of sensory input in robot control systems.

6. ACKNOWLEDGEMENTS

This work is supported by the Russian Foundation for Basic Research (grant 08-01-00498-a).

7. REFERENCES

Gerstner, W. & Kistler, W. M. (2002). Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press

Wang, L. (1999). Multi-associative Neural Networks and Their Applications to Learning and Retrieving Complex Spatio-Temporal Sequences. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics

Pfister, J. P., Toyoizumi, T., Barber, D. & Gerstner, W. (2006). Optimal Spike-Timing-Dependent Plasticity for Precise Action Potential Firing in Supervised Learning. Neural Computation, 18(6)

Sinyavskiy, O. Y. & Kobrin, A. I. (2010). Learning with teacher of spiking neuron in spatial-temporal impulse pattern detection task (in Russian). Neurocomputers: Development and Application, vol. 8

Sinyavskiy, O. Y. & Kobrin, A. I. (2009). Using of informational characteristics of impulse signals stream for spiking neurons learning (in Russian). Proceedings of the Integrated Models and Soft Calculations in Artificial Intelligence 2009 conference

Stratonovich, R. L. (1975). Information Theory (in Russian). Moscow: Sovetskoe Radio


Author: Sinyavskiy, Oleg Y.
Publication: Annals of DAAAM & Proceedings, Jan 1, 2010