Related Articles
1.
We discuss the ability of dynamic neural fields to track noisy population codes in an online fashion when signals are constantly applied to the recurrent network. To report on the quantitative performance of such networks, we perform population decoding of the ‘orientation’ embedded in the noisy signal and determine which inhibition strength in the network provides the best decoding performance. We also study decoding performance on time-varying signals. Simulations of the system show good performance even in the very noisy case, and also show that noise is beneficial for decoding time-varying signals.
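As a concrete point of reference for the decoding step described above, here is a minimal population-vector readout of orientation from a noisy tuning-curve population. It is a stand-in for illustration only, not the paper's dynamic neural field; the population size, tuning width, gain, and noise level are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: N units with von Mises tuning over orientation [0, pi).
N = 64
preferred = np.linspace(0.0, np.pi, N, endpoint=False)

def responses(theta, gain=10.0, kappa=2.0, noise=1.0):
    """Noisy tuning-curve responses to orientation theta (radians)."""
    # Doubled angles encode the 180-degree periodicity of orientation.
    mean = gain * np.exp(kappa * (np.cos(2.0 * (theta - preferred)) - 1.0))
    return mean + noise * rng.standard_normal(N)

def decode(r):
    """Population-vector estimate of orientation from a response vector."""
    z = np.sum(r * np.exp(2j * preferred))  # vector sum in doubled-angle space
    return (np.angle(z) / 2.0) % np.pi

theta_true = 0.7
estimates = [decode(responses(theta_true)) for _ in range(200)]
print(f"true {theta_true:.3f} rad, mean estimate {np.mean(estimates):.3f} rad")
```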

2.
Information is encoded in the brain by populations or clusters of cells, rather than by single cells. This encoding strategy is known as population coding. Here we review the standard use of population codes for encoding and decoding information, and consider how population codes can be used to support neural computations such as noise removal and nonlinear mapping. More radical ideas about how population codes may directly represent information about stimulus uncertainty are also discussed.

3.
The operations of encoding and decoding in communication correspond to the filtering operations of convolution and deconvolution in Gaussian signal processing. By analogy with power transmission in thermodynamics, an autoregressive model of information transmission is proposed for representing a continuous communication system, which requires a paired internal noise source and signal source to encode or decode a message. In this model, transinformation (informational entropy) equals the increase in stationary nonequilibrium organization formed through the amplification of white noise by a positive-feedback system. The channel capacity is finite owing to the inherent noise in the system. The maximum-entropy criterion in information dynamics corresponds to the second law of thermodynamics. If the process is stationary, the communication system is invertible and has the maximum efficiency of transformation. The total variation in informational entropy is zero over the cycle of the invertible system, while in the noninvertible system the entropy of decoding is less than that of encoding. A noisy autoregressive coding that maximizes transinformation is optimal, but it is also an idealization.
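As background for the finite-capacity claim (these are standard results, not the paper's derivation): a stationary AR(1) process x_t = a x_{t-1} + e_t driven by white noise of variance sigma_e^2 has stationary variance sigma_x^2 = sigma_e^2 / (1 - a^2) for |a| < 1, and a Gaussian channel with signal power sigma_s^2 and inherent noise power sigma_n^2 transmits at most

    I = (1/2) log2(1 + sigma_s^2 / sigma_n^2)

bits per use, which stays finite whenever the internal noise source has sigma_n^2 > 0.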

4.
A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, is expected, because it can provide robustness to the noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because the signal contains distortion and noise. The optimal solution is constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, as in the peripheral retina. We show that there exist multiple, equally optimal solutions whose receptive-field structure and organization vary significantly. Among these, the one that maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with the known basic properties of retinal receptive fields. The model further predicts that receptive-field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery.

5.
Khan AH, Ossadtchi A, Leahy RM, Smith DJ. Genomics 2003, 81(2):157-165
We describe a microarray design based on the concept of error-correcting codes from digital communication theory. Currently, microarrays cannot efficiently deal with "drop-outs," in which one or more spots on the array are corrupted. The resulting information loss may lead to decoding errors in which no quantitation of expression can be extracted for the corresponding genes. This issue is expected to become increasingly problematic as the number of spots on microarrays expands to accommodate the entire genome. The error-correcting approach employs multiplexing (encoding) of more than one gene onto each spot to efficiently provide robustness to drop-outs in the array. Decoding then allows fault-tolerant recovery of the expression information from individual genes. The error-correcting method is general and may have important implications for future array designs in research and diagnostics.
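To make the multiplexing idea concrete, here is a toy linear version (an illustrative assumption, not the authors' actual code construction): each spot measures a sum of gene signals given by a design matrix with one redundant row, so the expression vector can still be recovered by least squares after any single spot drops out.

```python
import numpy as np

# Hypothetical 4-spot, 3-gene design: row i lists the genes pooled on spot i.
# Deleting any single row leaves a full-rank 3x3 system, so one drop-out
# is tolerated.
A = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, 1, 1]], dtype=float)

x_true = np.array([2.0, 5.0, 1.5])   # unknown expression levels
y = A @ x_true                       # spot intensities (the "encoding")

lost = 2                             # simulate one corrupted spot
keep = [i for i in range(len(y)) if i != lost]
x_hat, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
print(x_hat)                         # decodes back to [2.0, 5.0, 1.5]
```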

6.
Many experiments have successfully demonstrated that prosthetic devices for restoring lost body functions can in principle be controlled by brain signals. However, stable long-term application of these devices, as required for paralyzed patients, may suffer substantially from ongoing signal changes, for example adapting neural activities or movement of the electrodes recording brain activity. These changes currently require tedious re-learning procedures that are conducted and supervised under laboratory conditions, hampering the everyday use of such devices. As an efficient alternative to current methods, we propose an online adaptation scheme that exploits a hypothetical secondary signal source from brain regions reflecting the user's affective evaluation of the current neuroprosthetic's performance. To demonstrate the feasibility of our idea, we simulate a typical prosthetic setup controlling a virtual robotic arm, using the additional, hypothetical evaluation signal to adapt the decoding of the intended arm movement, which is subject to large non-stationarities. Even with the weak signals and high noise levels typically encountered in recordings of brain activity, our simulations show that prosthetic devices can be adapted successfully during everyday usage, requiring no special training procedures. Furthermore, the adaptation is shown to be stable against large changes in neural encoding and/or in the recording itself.
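A minimal sketch of error-driven online adaptation in the spirit described above, under two loud assumptions: the hypothetical evaluation signal is reduced to a noisy scalar error, and the decoder is a plain linear map updated LMS-style while the true encoding drifts. All magnitudes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, eta = 50, 0.01                    # recorded channels, learning rate
w_true = rng.standard_normal(n) / np.sqrt(n)  # drifting "true" encoding
w = np.zeros(n)                      # adaptive linear decoder

for step in range(20000):
    r = rng.standard_normal(n)       # recorded neural activity
    intended = w_true @ r            # user's intended 1-D arm command
    decoded = w @ r
    # Hypothetical affective evaluation signal: a noisy decoding error.
    feedback = (intended - decoded) + 0.5 * rng.standard_normal()
    w += eta * feedback * r          # LMS-style on-line update
    if step % 2000 == 0:             # slow non-stationarity of the encoding
        w_true += 0.05 * rng.standard_normal(n) / np.sqrt(n)

print(f"final weight error: {np.linalg.norm(w - w_true):.3f}")
```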

7.
The degeneracy of the genetic code confers a wide array of properties on coding sequences. Yet its origin is still unclear. A structural analysis has shown that the stability of the Watson–Crick base pair at the second position of the anticodon–codon interaction is a critical parameter controlling the extent of non-specific pairings accepted at the third position by the ribosome, a flexibility at the root of degeneracy. Based on recent cryo-EM analyses, the present work shows that residue A1493 of the decoding center makes a significant contribution to the stability of this base pair, revealing that the ribosome is directly involved in establishing degeneracy. Building on existing evolutionary models, we show evidence that the early appearance of A1493 and A1492 established the basis of degeneracy when an elementary kinetic scheme of translation prevailed. Logical considerations on the expansion of this kinetic scheme indicate that acquisition of the peptidyl transferase center was the next major evolutionary step, while the induced-fit mechanism, which enables sharp selection of the tRNAs, necessarily arose later, when G530 was acquired by the decoding center.

8.
Perceptual decision making is prone to errors, especially near threshold. Physiological, behavioural and modelling studies suggest this is due to the intrinsic or ‘internal’ noise in neural systems, which derives from a mixture of bottom-up and top-down sources. We show here that internal noise can form the basis of perceptual decision making when the external signal lacks the information required for the decision. We recorded electroencephalographic (EEG) activity in listeners attempting to discriminate between identical tones. Since the acoustic signal was constant, bottom-up and top-down influences were under experimental control. We found that early cortical responses to the identical stimuli varied in global field power and topography according to the perceptual decision made, and that activity preceding stimulus presentation could predict both later activity and the behavioural decision. Our results suggest that activity variations induced by internal noise of both sensory and cognitive origin are sufficient to drive discrimination judgments.

9.
POLYMORPHIC TAXA, MISSING VALUES AND CLADISTIC ANALYSIS
Missing values have been used in cladistic analyses when data are unavailable or inapplicable, or sometimes when character states are variable within terminal taxa. The practice of scoring taxa as having "missing values" for polymorphic characters introduces errors into the calculation of cladogram lengths and consistency indices, because some character change is hidden within terminals. Because these hidden character steps are not counted, the set of most parsimonious cladograms may differ from those that would be found if polymorphic taxa had been broken into monomorphic subunits. In some cases, the trees found when polymorphisms are scored as missing values may not include any of the most parsimonious trees found when the data are scored properly. Additionally, in some cases polymorphic taxa may be found to be polyphyletic when broken into monomorphic subunits; this goes undetected when polymorphisms are treated as missing. Because of these problems, terminal units in cladistic analysis should be based on unique, fixed combinations of characters. Polymorphic taxa should be subdivided into subunits that are monomorphic for each character used in the analysis. Disregarding errors in topology, the additional hidden steps in a cladogram in which polymorphisms are scored as missing can be calculated by a simple formula: if it is assumed that polymorphic terminals include all combinations of character states, 2^p − 1 additional steps are required for each taxon in which p polymorphic binary characters are scored as missing values. Thus, when several polymorphisms are scored as missing in the same taxon, very large errors can be introduced into the calculation of tree length.
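To see the scale of the error, apply the formula quoted above to a taxon scored as missing for p = 3 polymorphic binary characters: 2^3 − 1 = 7 steps are hidden for that taxon alone, and 2^4 − 1 = 15 for p = 4. With several such taxa in a matrix, the reported tree length can fall short by dozens of steps.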

10.
Neural codes often seem tailored to the type of information they must carry. Here we contrast the encoding strategies for two different communication signals in electric fish and describe the underlying cellular and network properties that implement them. We compare an aggressive signal that must be detected quickly with a courtship signal whose quality must be evaluated. The aggressive signal is encoded by synchronized bursts, and a predictive feedback input is crucial for separating background noise from the communication signal. The courtship signal is accurately encoded through a heterogeneous population response that allows the discrimination of signal differences. Most importantly, we show that the same strategies are used in other systems, arguing that they evolved similar solutions because they faced similar tasks.

11.
Neural responses are known to be variable. In order to understand how this neural variability constrains behavioral performance, we need to be able to measure the reliability with which a sensory stimulus is encoded in a given population. Such measures are challenging for two reasons: first, they must take into account noise correlations, which can have a large influence on reliability; second, they need to be as efficient as possible, since the number of trials available in a set of neural recordings is usually limited by experimental constraints. Traditionally, cross-validated decoding has been used as a reliability measure, but it provides only a lower bound on reliability and underestimates reliability substantially in small datasets. We show that, if the number of trials per condition is larger than the number of neurons, there is an alternative, direct estimate of reliability that consistently leads to smaller errors and is much faster to compute. The superior performance of the direct estimator is evident both for simulated data and for neuronal population recordings from macaque primary visual cortex. Furthermore, we propose generalizations of the direct estimator that measure changes in stimulus encoding across conditions and the impact of correlations on encoding and decoding, quantities typically denoted I_shuffle and I_diag, respectively.
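The small-sample pessimism of cross-validated decoding is easy to reproduce. The toy below (illustrative sizes, not the paper's direct estimator) trains a plug-in linear discriminant on half the trials of two simulated conditions with correlated noise and tests on the held-out half; measured accuracy climbs toward its asymptote only as the trial count grows.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20                                    # neurons
mu0, mu1 = np.zeros(n), 0.3 * np.ones(n)  # condition means
L = 0.4 * rng.standard_normal((n, n))
C = L @ L.T + np.eye(n)                   # correlated noise covariance

def split_half_accuracy(k):
    """Train a plug-in linear discriminant on k trials per condition,
    test on k held-out trials per condition."""
    X0 = rng.multivariate_normal(mu0, C, size=2 * k)
    X1 = rng.multivariate_normal(mu1, C, size=2 * k)
    tr0, te0, tr1, te1 = X0[:k], X0[k:], X1[:k], X1[k:]
    pooled = np.vstack([tr0 - tr0.mean(0), tr1 - tr1.mean(0)])
    Chat = np.cov(pooled, rowvar=False) + 1e-3 * np.eye(n)
    w = np.linalg.solve(Chat, tr1.mean(0) - tr0.mean(0))
    thresh = 0.5 * w @ (tr0.mean(0) + tr1.mean(0))
    return np.mean(np.r_[te0 @ w < thresh, te1 @ w >= thresh])

for k in (15, 50, 500):   # accuracy rises toward its asymptote with k
    print(k, round(float(np.mean([split_half_accuracy(k) for _ in range(30)])), 3))
```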

12.
DNA error-correcting codes over the edit metric consist of embeddable markers for sequencing projects that are tolerant of sequencing errors. When a genetic library has multiple sources for its sequences, the use of embedded markers permits tracking of sequence origin. This study compares different methods for synthesizing DNA error-correcting codes. A new code-finding technique called the salmon algorithm is introduced and used to improve the size of the best known codes in five difficult cases of the problem, including the most studied case: length-six, distance-three codes. An updated table of the best known code sizes, with 36 improved values resulting from three different algorithms, is presented. Mathematical background results for the problem are summarized from multiple sources. A discussion of practical details that arise in application, including biological design and decoding, is also given.
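For a feel of the search problem, here is a hedged baseline (a greedy, lexicode-style construction with illustrative defaults, not the salmon algorithm): enumerate DNA words of length six in lexicographic order and keep each word whose edit distance to every kept word is at least three. The paper's algorithms improve on the code sizes that such simple constructions reach.

```python
from itertools import product

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[-1] + 1,                 # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def greedy_code(length: int = 6, min_dist: int = 3) -> list[str]:
    """Greedy lexicode over the edit metric: keep a word if it is far
    enough from everything kept so far."""
    code: list[str] = []
    for letters in product("ACGT", repeat=length):
        word = "".join(letters)
        if all(edit_distance(word, kept) >= min_dist for kept in code):
            code.append(word)
    return code

code = greedy_code()
print(len(code), code[:4])
```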

13.
Signal data from DNA-microarray ("chip") technology can be noisy; i.e., the signal variation of one gene across a series of replicate chips can be substantial. It is increasingly recognized that a sufficient number of chip replicates must be made in order to separate correct from incorrect signals. To reduce the systematic fraction of the noise deriving from pipetting errors, from differing treatment of chips during hybridization, and from chip-to-chip manufacturing variability, normalization schemes are employed. We present here an iterative nonparametric nonlinear normalization scheme called simultaneous alternating conditional expectation (sACE), designed to maximize correlation between chip repeats in all-chip-against-all space. We tested sACE on 28 experiments with 158 Affymetrix one-color chips. The procedure should be equally applicable to other DNA-microarray technologies, e.g., two-color chips. We show that the reduction of noise, compared to a simple scheme like the widely used linear global normalization, leads to fewer false-positive calls, i.e., fewer genes that have to be laboriously confirmed by independent methods such as TaqMan or quantitative PCR.
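For reference, the linear global normalization baseline named above amounts to a single scale factor per chip; a minimal sketch (the array layout is an assumption for illustration):

```python
import numpy as np

def global_linear_normalize(chips):
    """Linear global normalization: rescale each chip so its mean intensity
    matches the grand mean. 'chips' is an (n_chips, n_probes) array."""
    chips = np.asarray(chips, dtype=float)
    return chips * (chips.mean() / chips.mean(axis=1, keepdims=True))

rng = np.random.default_rng(0)
raw = np.abs(rng.normal(100.0, 20.0, size=(4, 1000)))
skewed = raw * np.array([[1.0], [1.3], [0.8], [1.1]])   # chip-wide biases
print(global_linear_normalize(skewed).mean(axis=1))     # per-chip means agree
```

sACE, by contrast, iteratively fits nonlinear chip-specific transforms; the point of the sketch is that one scale factor per chip cannot remove intensity-dependent bias, which is the gap a nonlinear scheme targets.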

14.

Background

Encoding arbitrary digital information in DNA has attracted attention as a potential avenue for large-scale and long-term data storage. However, enabling DNA data-storage technologies requires improvements in storage fidelity (tolerance to mutation), in the ease of writing and reading the data (biases and systematic error arising from synthesis and sequencing), and in overall scalability.

Results

To this end, we have developed and implemented an encoding scheme suitable for detecting and correcting errors that may arise during storage, writing, and reading, such as those arising from nucleotide substitutions, insertions, and deletions. We propose a scheme for parallelized long-term storage of encoded sequences that relies on overlaps rather than the address blocks found in previously published work. Using computer simulations, we illustrate the encoding, sequencing, decoding, and recovery of encoded information, ultimately demonstrating a successful round-trip read/write. These demonstrations show that precise control over error tolerance is possible in theory. Even after simulated degradation of the DNA, recovery of the original data is possible owing to the error-correction capabilities built into the encoding strategy. A secondary advantage of our method is that the statistical characteristics of the encoded sequences (such as repetitiveness and GC composition) can be tailored without sacrificing the overall ability to store large amounts of data. Finally, combining the overlap-based partitioning of data with the LZMA compression that is integral to the encoding means that the entire sequence must be present for successful decoding. This feature enables exceptionally strong encryption. As a potential application, an encrypted pathogen genome could be distributed and carried by cells without danger of being expressed, and could not even be read out in the absence of the entire DNA consortium.
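The compress-then-map step is easy to sketch. The fragment below is an illustration only (it omits the paper's error correction and overlap partitioning), but it also shows why an LZMA-compressed payload is all-or-nothing: truncating the stream breaks decompression, which is the property the encryption argument leans on.

```python
import lzma

BASES = "ACGT"  # two bits per base

def bytes_to_dna(data: bytes) -> str:
    """Compress with LZMA, then map each 2-bit pair to one base (MSB first)."""
    packed = lzma.compress(data)
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in packed for shift in (6, 4, 2, 0))

def dna_to_bytes(seq: str) -> bytes:
    """Invert the mapping, then decompress; fails unless the stream is whole."""
    packed = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        packed.append(byte)
    return lzma.decompress(bytes(packed))

msg = b"store me in DNA"
dna = bytes_to_dna(msg)
assert dna_to_bytes(dna) == msg      # round trip succeeds
# Truncating the stream (e.g. dna[:-40]) makes lzma.decompress raise an
# error, illustrating the all-or-nothing property described above.
```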

Conclusions

We have developed a method for DNA encoding that takes a fundamentally different approach from existing work; it often performs better than the alternatives and allows a great deal of freedom and flexibility in application.

15.
Understanding how neurons transform fluctuations of membrane potential, reflecting input activity, into spike responses, which communicate the ultimate results of single-neuron computation, is one of the central challenges for cellular and computational neuroscience. To study this transformation under controlled conditions, previous work has used a signal-immersed-in-noise paradigm in which neurons are injected with a current consisting of fluctuating noise that mimics ongoing synaptic activity and a systematic signal whose transmission is studied. One limitation of this established paradigm is that it is designed to examine the encoding of only one signal under a specific, repeated condition. As a result, characterizing how encoding depends on neuronal properties, signal parameters, and the interaction of multiple inputs is cumbersome. Here we introduce a novel, fully defined signal-mixture paradigm, which allows us to overcome these problems. In this paradigm, the current for injection is synthesized as a sum of artificial postsynaptic currents (PSCs) resulting from the activity of a large population of model presynaptic neurons. The PSCs from any presynaptic neuron(s) can then be considered the “signal”, while the sum of all other inputs is considered “noise”. This allows us to study the encoding of a large number of different signals in a single experiment, dramatically increasing the throughput of data acquisition. Using this novel paradigm, we characterize the detection of excitatory and inhibitory PSCs from neuronal spike responses over a wide range of amplitudes and firing rates. We show that, for moderately sized neuronal populations, the detectability of individual inputs is higher for excitatory than for inhibitory inputs during the 2–5 ms following PSC onset, but becomes comparable after 7–8 ms. This transient imbalance of sensitivity in favor of excitation may enhance the propagation of balanced signals through neuronal networks. Finally, we discuss several open questions that this novel high-throughput paradigm may address.
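A schematic version of the stimulus-construction step (population size, rates, and kernel time constant are illustrative assumptions): the injected current is the summed output of many Poisson presynaptic units, each convolved with an exponential PSC kernel, and any chosen unit's contribution can be relabeled as the "signal".

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 1e-4, 2.0                     # 0.1 ms resolution, 2 s of current
t = np.arange(0.0, T, dt)
n_pre, rate = 400, 5.0                # presynaptic units, mean rate (Hz)

tau = 5e-3                            # PSC decay time constant (s)
kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)

current = np.zeros_like(t)
signal_trace = np.zeros_like(t)
signal_unit = 0                       # the unit whose detectability is probed

for unit in range(n_pre):
    spikes = (rng.random(t.size) < rate * dt).astype(float)  # Poisson train
    amp = 1.0 if rng.random() < 0.8 else -0.5  # mostly excitatory inputs
    psc = amp * np.convolve(spikes, kernel)[: t.size]
    if unit == signal_unit:
        signal_trace = psc            # "signal": the PSCs of one chosen unit
    current += psc                    # the full sum is injected into the cell

noise_trace = current - signal_trace  # all other inputs play the role of noise
```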

16.

Background

DNA sequence comparison is a well-studied problem in which two DNA sequences are compared using a weighted edit distance. Recent DNA sequencing technologies, however, observe an encoded form of the sequence rather than each DNA base individually. The encoded DNA sequence may contain technical errors, and therefore encoded sequencing errors must be taken into account when comparing an encoded DNA sequence to a reference DNA sequence.

Results

Although two-base encoding is currently used in practice, many other encoding schemes are possible in which two or more bases are encoded at a time. A generalized k-base encoding scheme is presented, in which feasible higher-order encodings are better able to differentiate errors in the encoded sequence from true DNA sequence variants. A generalized version of the previous two-base-encoding DNA sequence comparison algorithm is used to compare a k-base-encoded sequence to a DNA reference sequence. Finally, simulations are performed to evaluate the power, the false-positive and false-negative SNP discovery rates, and the running time of k-base encoding compared to previous methods, as well as to the standard DNA sequence comparison algorithm.
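The two-base case mentioned above has a compact XOR formulation (this sketch shows the standard two-base color scheme, not the paper's k-base generalization): with A, C, G, T numbered 0 to 3, the color of an adjacent base pair is the XOR of the two base values, and decoding needs one known primer base. The round trip also makes the scheme's error behavior easy to probe, since flipping a single color corrupts every downstream base.

```python
VAL = {"A": 0, "C": 1, "G": 2, "T": 3}
INV = {v: k for k, v in VAL.items()}

def to_colors(seq: str) -> list[int]:
    """Two-base (color-space) encoding: one color per adjacent base pair,
    computed as the XOR of the two base values."""
    return [VAL[a] ^ VAL[b] for a, b in zip(seq, seq[1:])]

def from_colors(first_base: str, colors: list[int]) -> str:
    """Decoding needs one known primer base; each color then fixes the
    next base in turn."""
    seq = [first_base]
    for c in colors:
        seq.append(INV[VAL[seq[-1]] ^ c])
    return "".join(seq)

s = "ACGGTTACG"
assert from_colors(s[0], to_colors(s)) == s     # round trip
# Flip a single color and every base downstream decodes wrongly, which is
# why encoded errors must be modeled rather than treated as base errors.
bad = to_colors(s)
bad[2] ^= 1
print(from_colors(s[0], bad))
```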

Conclusions

The novel generalized k-base encoding scheme and the resulting local alignment algorithm permit the development of higher-fidelity, ligation-based next-generation sequencing technology. This bioinformatic solution affords greater robustness to errors, as well as lower false SNP discovery rates, at the cost only of computational time.

17.
Predictive models of tumor response based on heterogeneity metrics in medical images, such as textural features, are highly promising. However, the demonstrated sensitivity of these features to noise affects the models being developed. An in-depth analysis of the influence of noise on the extraction of texture features was performed, based on the assumption that an improvement in information quality can also enhance the predictive model. A heuristic approach was used that recognizes from the outset that the noise has its own texture, and analyses how it affects the quantitative signal data. A simple procedure to obtain a noise-image estimate is shown, one which makes it possible to extract the noise-texture features for each observation. The distance measured between the textural features of the signal image and of the estimated noise image allows us to determine which features are affected by noise in each observation and, for example, to exclude some of them from the model. A demonstration was carried out using synthetic images with realistic noise models found in medical images. The conclusions drawn were applied to a public cohort of clinical FDG-PET images to show how the predictive model could be improved. A gain of between 10 and 20% in the area under the receiver operating characteristic curve was shown when noise-texture information is used, and an improvement of between 20 and 30% in the estimated model quality.

18.
Cellular signaling systems show astonishing precision in their response to external stimuli despite strong fluctuations in the molecular components that determine pathway activity. To control the effects of noise on signaling most efficiently, living cells employ compensatory mechanisms that range from simple negative feedback loops to robustly designed signaling architectures. Here, we report on a novel control mechanism that allows living cells to maintain precision in their signaling characteristics (stationary pathway output, response amplitude, and relaxation time) in the presence of strong intracellular perturbations. The concept relies on the surprising fact that, for systems showing perfect adaptation, an exponential signal amplification at the receptor level suffices to eliminate slowly varying multiplicative noise. To show this mechanism at work in living systems, we quantified the response dynamics of the E. coli chemotaxis network after genetically perturbing the information flux between upstream and downstream signaling components. We give strong evidence that this signaling system achieves dynamic invariance of the activated response regulator against multiplicative intracellular noise. We further demonstrate that, for environmental conditions in which precision in chemosensing is crucial, the invariant response behavior yields the highest chemotactic efficiency. Our results resolve several puzzling features of the chemotaxis pathway that are widely conserved across prokaryotes but so far could not be attributed any functional role.
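A back-of-envelope version of the mechanism (a schematic reading of the abstract, not the paper's full pathway model): if the receptor stage amplifies exponentially, i.e. reports logarithmically,

    y(t) = k ln(c u(t)) = k ln u(t) + k ln c,

then a slowly varying multiplicative perturbation c enters only as the additive offset k ln c. A downstream module with perfect adaptation removes constant offsets by construction, so the response to the stimulus u(t) is left invariant to c.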

19.
Salinas E, Bentley NM. Biosystems 2007, 89(1-3):16-23
We derive a simple measure for quantifying the average accuracy with which a neuronal population can represent a stimulus. This quantity, the basis set error, has three key properties: (1) it makes no assumptions about the form of the neuronal responses; (2) it depends only on their second-order statistics, so although it is easy to compute, it does take noise correlations into account; (3) its magnitude has an intuitive interpretation in terms of the accuracy with which information can be extracted from the population using a simple method ("simple" meaning linear). We use the basis set error to characterize the efficacy of several types of population codes generated synthetically in a computer. The basis set error typically ranks different encoding schemes in a way that is qualitatively similar to Shannon's mutual information, except when nonlinear readout methods are necessary. Because this measure is concerned with signals that can be read out easily (i.e., through linear operations), it provides a lower bound on coding accuracy relative to the computational capabilities that are accessible to a neuronal population.
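For context, a classic quantity in the same spirit (standard linear-readout theory, not necessarily the paper's exact definition of the basis set error) is the minimal mean-squared error of the best linear readout s_hat = w^T r of a stimulus s from a response vector r:

    eps^2 = Var(s) - c^T C^{-1} c,   where c = Cov(r, s) and C = Cov(r).

Like the basis set error, this depends only on second-order statistics, so noise correlations enter automatically through the covariance matrix C.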

20.
The control of access of SOX proteins to their nuclear target genes is a powerful strategy for activating or repressing complex genetic programs. The subcellular targeting sequences of SOX proteins are concentrated within the DNA-binding motif, the HMG (high-mobility-group) domain. Each SOX protein displays two different nuclear localization signals, located at the N-terminal and C-terminal parts of the highly conserved DNA-binding domain. The N-terminal nuclear localization signal binds calmodulin and is potentially regulated by intracellular calcium signalling, while the C-terminal nuclear localization signal, which binds importin-β, responds to other signalling pathways such as cyclic AMP/protein kinase A. Mutations inducing developmental disorders such as sex reversal have been reported in both NLSs of SRY, interfering with its nuclear localization and suggesting that both functional nuclear localization signals are required for its nuclear activity. A nuclear export signal is also present in the HMG box of SOX proteins. Group E SOX proteins harbour a perfect consensus nuclear export signal sequence, in contrast to all other SOX proteins, which display only imperfect ones. However, observations made during mouse embryonic development suggest that non-group-E SOX proteins could also be regulated by a nuclear export mechanism. The presence of nuclear localization and nuclear export signal sequences confers nucleocytoplasmic shuttling properties on SOX proteins, and suggests that the cellular events regulated by SOX proteins are highly dynamic.
