Normalization for probabilistic inference with neurons

Authors: Chris Eliasmith, James Martens

Affiliation: Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON N2L 3G1, Canada. celiasmith@uwaterloo.ca

Abstract: Recently, there have been a number of proposals regarding how biologically plausible neural networks might perform probabilistic inference (Rao, Neural Computation, 16(1):1-38, 2004; Eliasmith and Anderson, Neural Engineering: Computation, Representation and Dynamics in Neurobiological Systems, 2003; Ma et al., Nature Neuroscience, 9(11):1432-1438, 2006; Sahani and Dayan, Neural Computation, 15(10):2255-2279, 2003). To perform such inference repeatedly, the represented distributions must remain appropriately normalized. Past approaches have treated normalization independently of inference, often leaving it unexplored or appealing to a notion of divisive normalization that requires pooling across many neurons. Here, we demonstrate how normalization and inference can be combined into a single appropriate connection matrix, eliminating the need for pooling or a division-like operation. We show algebraically that such a solution exists regardless of the inference being performed, and we demonstrate its relevance to neural computation by implementing it in a recurrent spiking neural network.
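
The abstract's motivating problem can be illustrated with a toy example. The sketch below is a hypothetical illustration, not the paper's construction: it updates a discrete belief over three assumed hidden states with an assumed likelihood, then renormalizes by explicit division, which is precisely the division-like step the paper argues can instead be folded into the connection matrix itself.

```python
import numpy as np

# Hypothetical illustration (not the paper's method): repeated Bayesian
# updates of a discrete belief over three hidden states. Without the
# explicit division below, the unnormalized values shrink toward zero
# and the representation cannot be reused for further inference.

p = np.array([1.0, 1.0, 1.0]) / 3.0     # uniform prior over states
likelihood = np.array([0.9, 0.5, 0.1])  # assumed P(obs | state)

for _ in range(5):
    p = likelihood * p   # unnormalized inference step
    p = p / p.sum()      # explicit normalization (the division step)

print(p)  # belief concentrates on the first state
```

In the paper's setting, this per-step division is what gets eliminated: normalization is built into the recurrent connection weights, so the spiking network keeps the represented distribution normalized without pooling or a division-like operation.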
|