Similar Literature
20 similar records found
1.
Recursive state and parameter reconstruction is a well-established field in control theory. In the current paper we derive a continuous-discrete version of the recursive prediction error algorithm and apply the filter in environmental and biological settings as a possible alternative to the well-known extended Kalman filter. The derivation starts from the so-called 'innovations format' of the (continuous-time) system model, combined with (discrete-time) measurements. After the algorithm has been motivated and derived, it is applied to hypothetical and real-life case studies, including the reconstruction of biokinetic parameters and of parameters characterizing the dynamics of a river in the United Kingdom. Advantages and characteristics of the method are discussed.
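As a rough sketch of the continuous-discrete structure (a minimal illustration assuming a scalar model dx/dt = -theta*x + u with a fixed correction gain; not the authors' derivation), one prediction/correction step might look like:

    def cd_rpe_step(x, theta, y_k, u, dt, gain=0.5, gamma=0.05):
        """One continuous-discrete recursive prediction-error step (illustrative)."""
        # continuous-time prediction between sampling instants (forward Euler)
        x_pred = x + dt * (-theta * x + u)
        # innovation: discrepancy between the new measurement and the prediction
        eps = y_k - x_pred
        # discrete-time corrections driven by the prediction error
        x_new = x_pred + gain * eps
        theta_new = theta - gamma * eps * dt * x  # gradient-style parameter update
        return x_new, theta_new

Called once per measurement, this integrates the state forward between samples and corrects both the state and the parameter estimate with the innovation.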

2.

Background

Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. Our aim was to quantify haplotype reconstruction error and to provide tools for doing so.

Methods and Results

In numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We found that specificity was slightly reduced only for common haplotypes, while sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with increasing number of loci, increasing minor allele frequency of the SNPs, decreasing correlation between the alleles, and increasing ambiguity.

Conclusions

We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into haplotype uncertainty. The method indicates whether a specific risk haplotype can be expected to be reconstructed with little or with substantial misclassification, and thus indicates the magnitude of the expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods that account for misclassification.
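Sensitivity and specificity can be computed per haplotype from paired true and reconstructed assignments, as sketched below; the labels are made-up toy data, not from the KORA study.

    import numpy as np

    # toy paired assignments (hypothetical haplotype labels)
    true_hap = np.array(["h1", "h1", "h2", "h2", "h3", "h1", "h2", "h3"])
    reco_hap = np.array(["h1", "h2", "h2", "h2", "h3", "h1", "h1", "h3"])

    for h in np.unique(true_hap):
        tp = np.sum((true_hap == h) & (reco_hap == h))  # correctly reconstructed
        fn = np.sum((true_hap == h) & (reco_hap != h))  # missed
        fp = np.sum((true_hap != h) & (reco_hap == h))  # falsely assigned
        tn = np.sum((true_hap != h) & (reco_hap != h))
        print(h, "sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))

Together the four counts form the misclassification matrix for haplotype h, which is exactly what misclassification-adjustment methods require.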

3.
The complex-valued backpropagation algorithm has been widely used in fields such as telecommunications, speech recognition, and image processing with Fourier transformation. However, the local minima problem usually occurs in the learning process. To solve this problem and to speed up learning, we propose a modified error function formed by adding to the conventional error function a term corresponding to the hidden-layer error. Simulation results show that the proposed algorithm prevents the learning from becoming stuck in local minima and speeds up the learning.
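A minimal sketch of the idea, assuming a simple quadratic form for the hidden-layer term (the abstract does not specify the exact term, so the form below is an assumption):

    import numpy as np

    def total_error(y, t, h, lam=0.1):
        """y: complex outputs, t: complex targets, h: complex hidden activations."""
        output_err = 0.5 * np.sum(np.abs(t - y) ** 2)           # conventional error term
        hidden_err = 0.5 * np.sum((np.abs(h) ** 2 - 1.0) ** 2)  # assumed hidden-layer term
        return output_err + lam * hidden_err

Backpropagating the combined error injects an additional gradient at the hidden layer, which is what helps the weights escape flat regions around local minima.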

4.
While retinal defocus is believed to be myopigenic in nature, the underlying mechanism has remained elusive. We recently constructed a theory of refractive error development to investigate its fundamental properties. Our Incremental Retinal-Defocus Theory is based on the principle that the change in retinal-defocus magnitude during an increment of genetically programmed ocular growth provides the requisite sign for the appropriate alteration in subsequent environmentally induced ocular growth. This theory was tested under five experimental conditions: lenses, diffusers, occlusion, crystalline lens removal, and prolonged nearwork. Predictions of the theory were consistent with previous animal and human experimental findings. In addition, simulations using a MATLAB/SIMULINK model supported our theory by demonstrating quantitatively the appropriate directional changes in ocular growth rate. Thus, our Incremental Retinal-Defocus Theory provides a simple and logical unifying concept underlying the mechanism for the development of refractive error.
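A toy encoding of the sign rule (constants arbitrary; purely illustrative, not the MATLAB/SIMULINK model):

    def next_growth_rate(rate, defocus_before, defocus_after, k=0.1):
        # if retinal-defocus magnitude grew during the genetic growth increment,
        # environmentally modulated growth speeds up; otherwise it slows
        sign = 1.0 if abs(defocus_after) > abs(defocus_before) else -1.0
        return rate * (1.0 + sign * k)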

5.
X. Liu, K.-Y. Liang. Biometrics, 1992, 48(2): 645-654
Ignoring measurement error may cause bias in the estimation of regression parameters. When the true covariates are unobservable, multiple imprecise measurements can be used in the analysis to correct for the associated bias. We suggest a simple estimating procedure that gives consistent estimates of regression parameters by using the repeated measurements with error. The relative Pitman efficiency of our estimator under models with and without measurement error is found to be a simple function of the number of replicates and the ratio of the intra-variance to the inter-variance of the true covariate. The procedure thus provides a guide for deciding the number of repeated measurements at the design stage. An example from a survey study is presented.
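A sketch of how replicates can de-attenuate a naive slope (this is generic regression calibration under a single-covariate assumption; it is not necessarily the authors' exact estimator):

    import numpy as np

    def corrected_slope(y, W):
        """y: (n,) outcomes; W: (n, r) replicate measurements of the covariate."""
        n, r = W.shape
        wbar = W.mean(axis=1)
        var_within = W.var(axis=1, ddof=1).mean()          # intra-variance
        var_between = wbar.var(ddof=1) - var_within / r    # inter-variance (may need truncation at 0)
        lam = var_between / (var_between + var_within / r) # reliability of the replicate mean
        naive = np.cov(wbar, y)[0, 1] / wbar.var(ddof=1)   # naive slope on the replicate mean
        return naive / lam                                 # de-attenuated slope

The reliability lam depends only on r and the intra/inter variance ratio, which mirrors the efficiency result quoted above and explains why the ratio guides the choice of the number of replicates.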

6.
A frameshift error detection algorithm for DNA sequencing projects.
During the determination of DNA sequences, frameshift errors are not the most frequent, but they are the most bothersome, as they corrupt the amino acid sequence over several residues. Detection of such errors by sequence alignment is only possible when related sequences are found in the databases. To avoid this limitation, we have developed a new tool based on the distribution of non-overlapping 3-tuples or 6-tuples in the three frames of an ORF. The method relies upon the result of a correspondence analysis. It has been extensively tested on Bacillus subtilis and Saccharomyces cerevisiae sequences and has also been examined with human sequences. The results indicate that it can detect frameshift errors affecting as few as 20 bp with a low rate of false positives (no more than 1.0 per 1000 bp scanned). The proposed algorithm can be used to scan a large collection of data, but it is mainly intended for laboratory practice as a tool for checking the quality of the sequences produced during a sequencing project.
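The core signal can be sketched as follows: non-overlapping 3-tuple counts in each of the three reading frames (the correspondence analysis step is omitted here):

    from collections import Counter

    def frame_triplet_counts(seq):
        """Non-overlapping 3-tuple counts in each of the three frames."""
        counts = []
        for frame in range(3):
            triplets = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
            counts.append(Counter(triplets))
        return counts

    print(frame_triplet_counts("ATGGCGGCTAAAGCGGCT"))

A frameshift shifts the coding-style 3-tuple composition from one frame to another, which the correspondence analysis then detects as a change point.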

7.
8.
9.
10.
MOTIVATION: Before performing a polymerase chain reaction experiment, a pair of primers to clip the target DNA subsequence is required. However, this is a tedious task, as many constraints need to be satisfied. Various approaches for designing primers have been proposed in the last few decades, but most of them do not place restriction sites on the designed primers and do not satisfy the specificity constraint. RESULTS: The proposed algorithm imitates nature's process of evolution and genetic operations on chromosomes in order to achieve optimal solutions, and is well suited to DNA behavior. Experimental results indicate that the proposed algorithm can find a pair of primers that not only obeys the design properties but also has a specific restriction site and specificity. Gel electrophoresis verifies that the proposed method can indeed clip out the target sequence. AVAILABILITY: A public version of the software is available on request from the authors.
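A skeleton of such a genetic-algorithm loop is sketched below; the fitness only scores GC content and melting-temperature matching (Wallace rule), a stand-in for the paper's full constraint set (restriction site, specificity, etc.):

    import random

    def gc(p):  # GC fraction
        return sum(c in "GC" for c in p) / len(p)

    def tm(p):  # Wallace-rule melting temperature
        return 4 * sum(c in "GC" for c in p) + 2 * sum(c in "AT" for c in p)

    def fitness(pair):
        fwd, rev = pair
        return -abs(gc(fwd) - 0.5) - abs(gc(rev) - 0.5) - abs(tm(fwd) - tm(rev)) / 10

    def evolve(template, size=20, length=18, gens=50):
        def rand_primer():
            i = random.randrange(len(template) - length)
            return template[i:i + length]
        pop = [(rand_primer(), rand_primer()) for _ in range(size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:size // 2]                      # selection
            pop = parents + [(random.choice(parents)[0],   # crossover of pairs
                              random.choice(parents)[1])
                             for _ in range(size - len(parents))]
        return max(pop, key=fitness)

    print(evolve("ATGCGTTAGCCTAGGATCCA" * 10))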

11.
In this paper, we present a mathematical foundation, including a convergence analysis, for cascade architecture neural networks. Our analysis shows that convergence of the cascade architecture network is assured because it satisfies the Lyapunov criteria in an added-hidden-unit domain rather than in the time domain. From this analysis, a mathematical foundation for the cascade correlation learning algorithm is obtained. Furthermore, the cascade correlation scheme emerges as a special case of the analysis, from which we propose an efficient hardware learning algorithm called Cascade Error Projection (CEP). CEP provides efficient learning in hardware and is faster to train, because part of the weights are obtained deterministically, and the learning of the remaining weights from the inputs to the hidden unit is performed as single-layer perceptron learning with the previously determined weights kept frozen. In addition, one can start with zero weight values (rather than random finite weight values) when the learning of each layer commences. Further, unlike the cascade correlation algorithm (where a pool of candidate hidden units is added), only a single hidden unit is added at a time, so simplicity in hardware implementation is also achieved. Finally, 5- to 8-bit parity and chaotic time series prediction problems are investigated; the simulation results demonstrate that 4-bit or higher weight quantization is sufficient for learning a neural network with CEP. It is also demonstrated that the technique can compensate for lower bit weight resolution by incorporating additional hidden units, although generalization may suffer somewhat with lower-bit weight quantization.
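A conceptual sketch of the one-unit-at-a-time scheme (the perceptron step and the deterministic output weight below are plausible stand-ins, not the exact CEP equations):

    import numpy as np

    def add_hidden_unit(X, residual, epochs=100, lr=0.1):
        """Train one new hidden unit against the current output residual."""
        w = np.zeros(X.shape[1])                  # zero initial weights, as in CEP
        for _ in range(epochs):
            h = np.tanh(X @ w)
            grad = X.T @ ((residual - h) * (1.0 - h ** 2))
            w += lr * grad / len(X)               # single-layer perceptron-style step
        h = np.tanh(X @ w)
        v = (h @ residual) / (h @ h + 1e-12)      # deterministic (least-squares) output weight
        return w, v, residual - v * h             # frozen unit plus reduced residual

Each call freezes one more hidden unit and shrinks the residual, so the network is grown unit by unit rather than trained jointly.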

12.
Accumulation of fatigue microdamage in cortical bone specimens is commonly measured by a modulus or stiffness degradation, after normalizing tissue heterogeneity by the initial modulus or stiffness of each specimen measured during a preloading step. In the first experiment, the initial specimen modulus defined using linear elastic beam theory (LEBT) was shown to depend nonlinearly on the preload level, which subsequently caused systematic error in the amount and rate of damage accumulation measured by the LEBT modulus degradation. Therefore, the secant modulus is recommended for measurements of the initial specimen modulus during preloading. In the second experiment, different measures of mechanical degradation were directly compared and shown to result in widely varying estimates of damage accumulation during fatigue. After loading to 400,000 cycles, the normalized LEBT modulus decreased by 26% and the creep strain ratio decreased by 58%, but the normalized secant modulus experienced no degradation, and histology revealed no significant differences in microcrack density. The LEBT modulus was shown to include the combined effect of both elastic (recovered) and creep (accumulated) strain. Therefore, at minimum, both the secant modulus and creep should be measured throughout a test to most accurately indicate damage accumulation and account for different damage mechanisms. Histology revealed indentation of tissue adjacent to the roller supports, with significant sub-surface damage beneath large indentations, accounting for 22% of the creep strain on average. The indentation at the roller supports resulted in inflated measures of LEBT modulus degradation and creep. The results of this study suggest that investigations of fatigue microdamage in cortical bone should avoid four-point bending unless no other option is available.
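The two moduli discussed above can be contrasted in a short sketch (textbook LEBT geometry for four-point bending with loading points at distance a from the supports; symbols and data are illustrative):

    import numpy as np

    def secant_modulus(stress, strain):
        """Secant modulus: stress/strain at peak load (elastic strain only)."""
        i = np.argmax(stress)
        return stress[i] / strain[i]

    def lebt_modulus(P, delta, a, L, I):
        """LEBT modulus from mid-span deflection delta under total load P:
        delta = P*a*(3*L**2 - 4*a**2) / (48*E*I), solved for E. The measured
        deflection mixes elastic and creep strain, hence the inflated degradation."""
        return P * a * (3 * L ** 2 - 4 * a ** 2) / (48 * delta * I)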

13.
Energy stores are critical for successful breeding, and longitudinal studies require nonlethal methods to measure energy stores ("body condition"). Nonlethal techniques for measuring energy reserves are seldom verified independently. We compare body mass, size-corrected mass (SCM), plasma lipids, and isotopic dilution with extracted total body lipid content in three seabird species (thick-billed murres Uria lomvia, all four measures; northern fulmars Fulmarus glacialis, three measures; and black-legged kittiwakes Rissa tridactyla, two measures). SCM and body mass were better predictors of total body lipids for the species with high percent lipids (fulmars; R² = 0.5-0.6) than for the species with low percent lipids (murres and kittiwakes; R² = 0.2-0.4). The relationship between SCM and percent body lipids, which we argue is often a better measure of condition, was also poor (R² < 0.2) for species with low lipids. In a literature comparison of 17 bird species, percent lipids was the only predictor of the strength of the relationship between mass and total body lipids; we suggest that SCM be used as an index of energy stores only when lipids exceed 15% of body mass. Across all three species we measured, SCM based on the ordinary least squares regression of mass on the first principal component outperformed other measures. Isotopic dilution was a better predictor of both total body lipids and percent body lipids than were mass, SCM, or plasma lipids in murres. Total body lipids decreased through the breeding season at both sites, while total and neutral plasma lipid concentrations increased at one site but not the other, suggesting mobilization of lipid stores for breeding. A literature review showed substantial variation in the reliability of plasma markers, and we recommend isotopic dilution (oxygen-18, plateau) for determination of energy reserves in birds where lipid content is below 15%.
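A sketch of the best-performing SCM variant named above, regressing mass on the first principal component of standardized size measurements (the choice of size variables is hypothetical):

    import numpy as np

    def size_corrected_mass(mass, size_vars):
        """mass: (n,), size_vars: (n, k) structural size measurements."""
        Z = (size_vars - size_vars.mean(axis=0)) / size_vars.std(axis=0, ddof=1)
        pc1 = Z @ np.linalg.svd(Z, full_matrices=False)[2][0]   # first PC scores
        slope = np.cov(pc1, mass)[0, 1] / pc1.var(ddof=1)       # OLS of mass on PC1
        return mass - slope * pc1   # residual mass, re-centered at the mean mass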

14.
Bayesian inference for variance components using only error contrasts
Harville, David A. Biometrika, 1974, 61(2): 383-385

15.

Background  

The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms.

16.
MOTIVATION: Ranking feature sets is a key issue for classification, for instance, phenotype classification based on gene expression. Since ranking is often based on error estimation, and error estimators suffer differing degrees of imprecision in small-sample settings, it is important to choose a computationally feasible error estimator that yields good feature-set ranking. RESULTS: This paper examines the feature-ranking performance of several kinds of error estimators: resubstitution, cross-validation, bootstrap and bolstered error estimation. It does so for three classification rules: linear discriminant analysis, three-nearest-neighbor classification and classification trees. Two measures of performance are considered. One counts the number of the truly best feature sets appearing among the best feature sets discovered by the error estimator, and the other computes the mean absolute error between the top ranks of the truly best feature sets and their ranks as given by the error estimator. Our results indicate that bolstering is superior to bootstrap, and bootstrap is better than cross-validation, for discovering top-performing feature sets for classification when using small samples. A key point is that bolstered error estimation is tens of times faster than bootstrap, and faster than cross-validation, and is therefore feasible for feature-set ranking when the number of feature sets is extremely large.
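The two performance measures read directly as code; here on toy error vectors, with rank 0 being the best feature set:

    import numpy as np

    def ranking_performance(true_err, est_err, K=5):
        true_rank = np.argsort(np.argsort(true_err))   # rank of each feature set
        est_rank = np.argsort(np.argsort(est_err))
        best = np.where(true_rank < K)[0]              # the truly best K sets
        overlap = int(np.sum(est_rank[best] < K))      # measure 1: count recovered in top K
        mae = float(np.mean(np.abs(true_rank[best] - est_rank[best])))  # measure 2
        return overlap, mae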

17.
Attempts to import existing measures developed in other countries when constructing research instruments for use with older people can result in several problems including inappropriate wording, unsuitable response sets, and insufficient attention to cultural nuances. This paper addresses such problems by discussing a mixed methods approach to measurement development (i.e. both qualitative and quantitative) that incorporates input from the aging adults for whom the measure is intended. To test this approach, a step-by step process to the development of a culturally-grounded measure for older Thai people is described. Using focus groups and in-depth interviews, the process begins with an identification of the culturally meaningful domains of the construct under study. Next, input is gathered from other studies; a preliminary quantitative measure is developed; the measure is reviewed by a panel of experts; and then it is pilot-tested. Cognitive interviews are utilized when pilot-testing of the items detects problems with measurement construction or interview methods. When these problems are remedied, the measure is incorporated into a large-scale survey and tested for its psychometric qualities. In addition to providing a template for culturally-sensitive measurement development in gerontology, this paper also highlights issues that researchers should consider when attempting to develop measures and provides suggestions for how to address such issues.  相似文献   

18.

Background  

Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data.
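The setup being evaluated can be sketched with nested cross-validation (scikit-learn, random data for shape only; this illustrates the general issue, not the article's specific classifiers):

    import numpy as np
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(100, 20)), rng.integers(0, 2, 100)

    inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5)  # picks C by minimizing CV error
    print("inner CV score (optimistic):", inner.fit(X, y).best_score_)
    print("nested CV estimate:         ", cross_val_score(inner, X, y, cv=5).mean())

Any gap between the two numbers on such pure-noise data is the bias introduced by reusing the same CV estimate for both optimization and error estimation.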

19.
This paper describes a new method for pruning artificial neural networks that uses a measure of the neural complexity of the network to determine which connections should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size while retaining learnt behaviour and fitness. The technique helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network from which they originate. This means that the pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network due to the reduced dimensionality of the network.
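For reference, the Magnitude Based Pruning benchmark mentioned above is trivial to sketch (the paper's information-theoretic complexity criterion is not reproduced here):

    import numpy as np

    def magnitude_prune(weights, frac=0.2):
        """Zero out the fraction of connections with the smallest |weight|."""
        thresh = np.quantile(np.abs(weights), frac)
        mask = np.abs(weights) >= thresh
        return weights * mask, mask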

20.
Two measures of multivariate niche overlap defined on p resource variables are presented. By measuring the niche overlap on the discriminant variable, the multivariate problem is reduced to a univariate problem while preserving the relevant multivariate information. The niche overlap is then calculated by two different techniques. The first technique uses the MacArthur-Levins (Amer. Natur. 101, 377-385, 1967) measure for probabilities of joint occurrence, while the second computes the density overlap of two use curves. An application of the multivariate approach to actual field data is demonstrated.
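Both univariate overlap computations, applied to resource-use distributions already projected onto the discriminant axis (toy probabilities):

    import numpy as np

    def macarthur_levins(p, q):
        """MacArthur-Levins overlap of species q onto species p."""
        return np.sum(p * q) / np.sum(p ** 2)

    def density_overlap(p, q):
        """Shared area of two discretized use curves."""
        return np.sum(np.minimum(p, q))

    p = np.array([0.1, 0.3, 0.4, 0.2])
    q = np.array([0.2, 0.4, 0.3, 0.1])
    print(macarthur_levins(p, q), density_overlap(p, q))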
