Neural systems in light of distribution codes and information theory
Doctoral thesis (monograph)
Unless otherwise stated, all rights belong to the author. You may download, display and print this publication for your own personal use. Commercial use is prohibited.
Date
2000-11-03
Language
en
Pages
133
Series
Helsinki University of Technology Laboratory of Computational Engineering publications. Report B, 22
Abstract
A distribution code is defined as a code that uses the joint probability distribution of a vector of binary-valued random variables to represent numeric values. When applied to neural systems, the values of the source variables (i.e., input variables) determine the relative frequencies of the input vectors either deterministically or stochastically. Similarly, the relative frequencies of the output vectors determine the values of the target variables (i.e., output variables). A mathematical model is proposed that allows a unified treatment of biological neural systems and binary-valued artificial neural systems. The distribution function of a neural system is defined as the mapping from the distribution of the input vectors to the distribution of the output vectors; it is conceptually similar to the input-output function.

Information-theoretic properties of probabilistic and deterministic one-dimensional distribution codes are studied. One result is that the channel capacity of such codes has an upper bound that grows logarithmically with the size of the sample set, and that there are codes whose channel capacity reaches this bound. The distribution function of a deterministic memoryless neural system using a one-dimensional distribution code is shown to be monotonic. Several other kinds of neural systems are studied and shown not to have this limitation. It is shown that, when the input weights are fixed, the distribution function of a binary neuron can be controlled to a much finer degree than the input-output function of a real-valued neuron: the former is an adjustable linear combination of several parts, whereas the latter can merely be translated.

Multidimensional distribution codes can be used to encode source variable values whose number greatly exceeds the number of physical inputs of the neural system. There are multidimensional distribution codes that make it possible to change the output distribution of the system without changing any of the marginal distributions of the physical inputs. It can be argued that real-valued neurons have no analogue of this property.

The last part of this work focuses on experimental work and biological neural systems. A literature survey of information-theoretic studies of biological neural systems shows a need for new methods for generating data in a controlled and convenient manner. A simulation-based methodology is proposed, implemented and evaluated. Two sets of simulations are carried out as a proof of concept; they focus on the information-theoretic implications of the topology of a pyramidal neuron.
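To make the coding idea concrete, the Python sketch below is a minimal illustration, not taken from the thesis; all function names and parameters are hypothetical. It assumes the simplest reading of a one-dimensional distribution code, in which a value in [0, 1] is represented by the relative frequency of 1s in a stream of binary samples, and then shows a two-input case where the joint distribution (and hence an output statistic that depends on it) changes while both marginal input distributions stay fixed.

    # Illustrative sketch only; names and parameters are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    def encode(value, n_samples=10_000):
        """Draw binary samples whose relative frequency of 1s encodes `value`."""
        return rng.random(n_samples) < value   # Bernoulli(value) samples

    def decode(samples):
        """Recover the encoded value as the empirical frequency of 1s."""
        return samples.mean()

    v = 0.37
    print(decode(encode(v)))   # ~0.37, up to sampling noise

    # Two binary inputs with identical marginals (~0.5 each) but different
    # joint structure. A system sensitive to the joint distribution, e.g. one
    # whose output depends on agreement x1 == x2, produces different output
    # statistics even though neither marginal changes.
    def sample_joint(correlated, n_samples=10_000):
        x1 = rng.random(n_samples) < 0.5
        if correlated:
            x2 = x1.copy()                      # perfectly correlated inputs
        else:
            x2 = rng.random(n_samples) < 0.5    # independent inputs
        return x1, x2

    for correlated in (False, True):
        x1, x2 = sample_joint(correlated)
        agree = (x1 == x2).mean()               # output statistic
        print(f"correlated={correlated}: P(x1 == x2) ~ {agree:.2f}")

Under these assumptions the decoded value tracks the encoded one up to sampling noise, and the agreement statistic shifts from about 0.5 to 1.0 between the two joint distributions even though each input's marginal stays at 0.5, which is the flavour of the marginal-preserving property claimed for multidimensional distribution codes.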
Keywords
neural networks, distribution codes, pulse coding, information theory