Similar Literature
20 similar documents found (search time: 31 ms)
1.
A new approach to the estimation of the quantal release distribution of transmitter under conditions of high synaptic activity is presented. Postsynaptic responses of the excitatory neuromuscular synapse in the claw-opener muscle of the lobster, obtained by focal extracellular recording, are used as the original data set. Based on two data groups (the amplitudes of evoked and of spontaneous postsynaptic responses), a linear regression model is constructed. The parameters of this model completely describe the quantal release distribution. To evaluate the parameters, biased modifications of the least squares method (the penalized least squares method and the principal components method) were applied. As a result, estimates of the quantal release distribution with sufficiently low standard errors could be obtained. Modeling studies have shown that the gain in estimation accuracy due to the reduced standard error considerably exceeds the loss caused by the bias of the estimator.
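A minimal sketch of the two biased estimators named in this abstract, ridge-type penalized least squares and principal components regression, applied to generic synthetic data; it is not the authors' synapse model, and the penalty `lam` and component count `k` are illustrative choices.

```python
# Sketch: two biased alternatives to ordinary least squares on synthetic, correlated data.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 8
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))   # correlated predictors
beta_true = rng.normal(size=p)
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Penalized (ridge-type) least squares: (X'X + lam*I)^{-1} X'y
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Principal components regression: regress y on the leading k principal components
k = 4
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T                                   # first k PC scores
gamma = np.linalg.lstsq(scores, y - y.mean(), rcond=None)[0]
beta_pcr = Vt[:k].T @ gamma                              # map back to predictor space

print("ridge estimate:", np.round(beta_ridge, 2))
print("PCR estimate:  ", np.round(beta_pcr, 2))
```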

2.
This article considers the parameter estimation of multi-fiber family models for the biaxial mechanical behavior of passive arteries in the presence of measurement errors. First, the uncertainty propagation due to the errors in variables is carefully characterized using the constitutive model. Then, the parameter estimation of the artery model is formulated as a nonlinear least squares optimization with weights chosen appropriately from the uncertainty model. The proposed technique is evaluated using multiple sets of synthesized data with fictitious measurement noise. The results of the estimation are compared with those of conventional nonlinear least squares optimization without a proper weight factor. The proposed method significantly improves the quality of the parameter estimation as the amplitude of the errors in variables becomes larger. We also investigate model selection criteria for deciding the optimal number of fiber families in the multi-fiber family model with respect to the experimental data, balancing between variance and bias errors.
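An illustrative weighted nonlinear least squares fit with scipy; the toy exponential stress-stretch model and the noise model `sigma` are assumptions, not the multi-fiber artery model of the paper, but the weighting of residuals by the assumed measurement uncertainty follows the same idea.

```python
# Weighted nonlinear least squares: residuals divided by assumed measurement SDs.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
stretch = np.linspace(1.0, 1.3, 30)
c1_true, c2_true = 10.0, 8.0
stress_true = c1_true * (np.exp(c2_true * (stretch - 1.0)) - 1.0)
sigma = 0.02 * stress_true + 0.5                 # assumed heteroscedastic noise level
stress_obs = stress_true + rng.normal(scale=sigma)

def residuals(theta):
    c1, c2 = theta
    model = c1 * (np.exp(c2 * (stretch - 1.0)) - 1.0)
    return (stress_obs - model) / sigma          # weight residuals by 1/sigma

fit = least_squares(residuals, x0=[5.0, 5.0])
print("estimated parameters:", fit.x)
```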

3.
The interpretation of fluorescence intensity decay times in terms of protein structure and dynamics depends on the accuracy and sensitivity of the methods used for data analysis. There are many methods available for the analysis of fluorescence decay data, but the justification for choosing any one of them is unclear. In this paper we generalize the recently proposed Padé-Laplace method [45] to include deconvolution with respect to the instrument response function. In this form the method can be readily applied to the analysis of time-correlated single photon counting data. Extensive simulations have shown that the Padé-Laplace method provides more accurate results than the standard least squares method with iterative reconvolution when lifetimes are closely spaced. The application of the Padé-Laplace method to several experimental data sets yielded results consistent with those obtained by least squares analysis.

4.
S R Lipsitz. Biometrics, 1992, 48(1): 271-281
In many empirical analyses, the response of interest is categorical with an ordinal scale attached. Many investigators prefer to formulate a linear model, assigning scores to each category of the ordinal response and treating it as continuous. When the covariates are categorical, Haber (1985, Computational Statistics and Data Analysis 3, 1-10) has developed a method to obtain maximum likelihood (ML) estimates of the parameters of the linear model using Lagrange multipliers. However, when the covariates are continuous, the only method we found in the literature is ordinary least squares (OLS), performed under the assumption of homogeneous variance. The OLS estimates are unbiased and consistent but, since variance homogeneity is violated, the OLS estimates of variance can be biased and may not be consistent. We discuss a variance estimate (White, 1980, Econometrica 48, 817-838) that is consistent for the true variance of the OLS parameter estimates. The possible bias encountered by using the naive OLS variance estimate is discussed. An estimated generalized least squares (EGLS) estimator is proposed and its efficiency relative to OLS is discussed. Finally, an empirical comparison of OLS, EGLS, and ML estimators is made.
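A small numpy sketch of the White (1980) heteroscedasticity-consistent ("sandwich") variance estimator discussed here, on simulated data with variance growing in the covariate; statsmodels offers the same estimator via `cov_type="HC0"`, but the explicit formula is shown for clarity.

```python
# OLS point estimates with naive vs. White (HC0) standard errors under heteroscedasticity.
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0, 10, size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 * x)   # error variance grows with x

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Naive OLS variance: s^2 (X'X)^{-1}, assumes homogeneous variance
s2 = resid @ resid / (n - X.shape[1])
var_naive = s2 * XtX_inv

# White (1980) sandwich: (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}
meat = X.T @ (X * resid[:, None] ** 2)
var_white = XtX_inv @ meat @ XtX_inv

print("slope SE, naive:", np.sqrt(var_naive[1, 1]))
print("slope SE, White:", np.sqrt(var_white[1, 1]))
```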

5.
A mathematical method is developed for computing the coefficients of soft tissue energy density polynomials satisfying certain constraints. The polynomial coefficients are computed in the least squares sense. It is demonstrated that this method (a) determines up to 30 polynomial coefficients, whereas the unmodified least squares method based on the Maclaurin power series determines up to nine coefficients, and (b) increases numerical stability and accuracy by several orders of magnitude. All computations are carried out in single precision on an LSI-11/23 laboratory minicomputer. The algorithm is particularly useful for on-line data analysis using small laboratory computers.
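This is not the authors' constrained algorithm, but a small illustration of the numerical-stability point: a raw power (Maclaurin) basis becomes severely ill-conditioned for high-degree least squares fits, while an orthogonal basis keeps the problem well-conditioned. The test function is an arbitrary stand-in.

```python
# Conditioning of polynomial least squares: power basis vs. Chebyshev basis.
import numpy as np
from numpy.polynomial import chebyshev, polynomial

x = np.linspace(0.0, 1.0, 200)
y = np.exp(-3 * x) * np.sin(8 * x)             # stand-in for an energy-density curve
deg = 20

V_power = polynomial.polyvander(x, deg)        # monomial design matrix 1, x, x^2, ...
V_cheb = chebyshev.chebvander(2 * x - 1, deg)  # Chebyshev basis mapped to [-1, 1]

print("condition number, power basis:     %.3e" % np.linalg.cond(V_power))
print("condition number, Chebyshev basis: %.3e" % np.linalg.cond(V_cheb))

coef_cheb = np.linalg.lstsq(V_cheb, y, rcond=None)[0]
fit = V_cheb @ coef_cheb
print("max fit error (Chebyshev, degree %d): %.2e" % (deg, np.abs(fit - y).max()))
```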

6.
Huang J, Ma S, Xie H. Biometrics, 2006, 62(3): 813-820
We consider two regularization approaches, the LASSO and the threshold-gradient-directed regularization, for estimation and variable selection in the accelerated failure time model with multiple covariates based on Stute's weighted least squares method. The Stute estimator uses Kaplan-Meier weights to account for censoring in the least squares criterion. The weighted least squares objective function makes the adaptation of this approach to multiple covariate settings computationally feasible. We use V-fold cross-validation and a modified Akaike's Information Criterion for tuning parameter selection, and a bootstrap approach for variance estimation. The proposed method is evaluated using simulations and demonstrated on a real data example.
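A rough sketch (not the authors' code) of the Kaplan-Meier (Stute) weights for a censored response, followed by a LASSO fitted on square-root-weight-scaled data; the simulated AFT data, the `alpha` value, the weight rescaling, and the simplified intercept handling are all illustrative choices.

```python
# Stute/Kaplan-Meier weighted least squares criterion combined with a LASSO penalty.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p = 150, 10
X = rng.normal(size=(n, p))
log_t = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)  # AFT model, 2 active covariates
log_c = rng.normal(loc=1.0, scale=1.0, size=n)                   # censoring times
y = np.minimum(log_t, log_c)
delta = (log_t <= log_c).astype(float)                           # 1 = event observed

# Kaplan-Meier (Stute) weights on the sorted sample: jumps of the KM estimator
order = np.argsort(y, kind="stable")
y_s, d_s, X_s = y[order], delta[order], X[order]
w = np.zeros(n)
surv = 1.0
for i in range(n):
    w[i] = d_s[i] * surv / (n - i)
    surv *= ((n - i - 1) / (n - i)) ** d_s[i]

# Weighted LASSO via sqrt-weight scaling (weights rescaled so the penalty level
# is comparable to the unweighted case; a simplification of the full method)
sw = np.sqrt(w * n / w.sum())
lasso = Lasso(alpha=0.02).fit(X_s * sw[:, None], y_s * sw)
print("estimated coefficients:", np.round(lasso.coef_, 3))
```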

7.
This paper considers the sampling distribution problem of the least squares estimator for the parameter of some special autoregressive models. The Edgeworth approximation is derived and a modification is proposed to improve its accuracy. Comparisons with the exact distribution and the so-called Edgeworth-B approximation are discussed. The results show that the proposed approximation performs more accurately than the Edgeworth-B approximation, especially when the models are close to the non-stationary boundary.
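A simulation sketch of the object being approximated: the finite-sample distribution of the least squares AR(1) estimator, which is noticeably skewed when the true coefficient is near the non-stationary boundary. The Edgeworth-type corrections themselves are not reproduced; sample size and coefficient are arbitrary.

```python
# Finite-sample distribution of the least squares AR(1) coefficient estimator.
import numpy as np

rng = np.random.default_rng(4)
rho, T, reps = 0.95, 50, 5000
est = np.empty(reps)
for r in range(reps):
    e = rng.normal(size=T)
    y = np.empty(T)
    y[0] = e[0]
    for t in range(1, T):
        y[t] = rho * y[t - 1] + e[t]
    est[r] = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)   # LS estimator

print("mean of LS estimates: %.3f (true rho = %.2f)" % (est.mean(), rho))
print("skewness: %.2f" % (((est - est.mean()) ** 3).mean() / est.std() ** 3))
```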

8.
The receiver operating characteristic (ROC) curve is a popular tool for evaluating and comparing the accuracy of diagnostic tests in distinguishing the diseased group from the nondiseased group when test results are continuous or ordinal. A complicated data setting occurs when multiple tests are measured on abnormal and normal locations from the same subject and the measurements are clustered within the subject. Although least squares regression methods can be used for the estimation of ROC curves from correlated data, how to develop least squares methods for estimating the ROC curve from clustered data has not been studied, and the statistical properties of least squares methods under the clustering setting are unknown. In this article, we develop least squares ROC methods that allow the baseline and link functions to differ and, more importantly, accommodate clustered data with discrete covariates. The methods can generate smooth ROC curves that satisfy the inherent continuity of the true underlying curve. The least squares methods are shown to be more efficient than the existing nonparametric ROC methods under appropriate model assumptions in simulation studies. We apply the methods to a real example in the detection of glaucomatous deterioration. We also derive the asymptotic properties of the proposed methods.
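Not the clustered-data method of the paper, but a minimal smooth (binormal) ROC curve built from group means and standard deviations, the kind of parametric smooth curve that regression-based ROC methods generalize; the simulated test scores are hypothetical.

```python
# Smooth binormal ROC: ROC(t) = Phi(a + b * Phi^{-1}(t)), a = (mu1-mu0)/s1, b = s0/s1.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
y_nondiseased = rng.normal(0.0, 1.0, size=300)
y_diseased = rng.normal(1.2, 1.5, size=200)

mu0, s0 = y_nondiseased.mean(), y_nondiseased.std(ddof=1)
mu1, s1 = y_diseased.mean(), y_diseased.std(ddof=1)
a, b = (mu1 - mu0) / s1, s0 / s1

t = np.linspace(0.01, 0.99, 99)            # false positive rates
roc = norm.cdf(a + b * norm.ppf(t))        # smooth ROC curve values
auc = norm.cdf(a / np.sqrt(1 + b ** 2))
print("binormal AUC: %.3f" % auc)
```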

9.

Background

The objective of the present study was to test the ability of the partial least squares regression technique to impute genotypes from low-density single nucleotide polymorphism (SNP) panels, i.e. 3K or 7K, to a high-density panel with 50K SNPs. No pedigree information was used.

Methods

Data consisted of 2093 Holstein, 749 Brown Swiss and 479 Simmental bulls genotyped with the Illumina 50K Beadchip. First, a single-breed approach was applied by using only data from Holstein animals. Then, to enlarge the training population, data from the three breeds were combined and a multi-breed analysis was performed. Accuracies of genotypes imputed using the partial least squares regression method were compared with those obtained by using the Beagle software. The impact of genotype imputation on breeding value prediction was evaluated for milk yield, fat content and protein content.

Results

In the single-breed approach, the accuracy of imputation using partial least squares regression was around 90% and 94% for the 3K and 7K platforms, respectively; the corresponding accuracies obtained with Beagle were around 85% and 90%. Moreover, the computing time required by the partial least squares regression method was on average around 10 times lower than that required by Beagle. Using the partial least squares regression method in the multi-breed analysis resulted in lower imputation accuracies than using single-breed data. The impact of SNP-genotype imputation on the accuracy of direct genomic breeding values was small. The correlation between estimates of genetic merit obtained by using imputed versus actual genotypes was around 0.96 for the 7K chip.

Conclusions

Results of the present work suggested that the partial least squares regression imputation method could be useful to impute SNP genotypes when pedigree information is not available.
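A rough sketch of the idea, not the paper's pipeline: predict high-density genotypes from a low-density SNP subset with scikit-learn's partial least squares regression on simulated 0/1/2 genotype codes. The allele frequencies, panel layout `ld_idx`, and number of components are assumptions.

```python
# Genotype imputation by PLS regression: low-density panel -> high-density panel.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
n_train, n_test, n_hd = 400, 100, 500
freqs = rng.uniform(0.1, 0.9, size=n_hd)
G = rng.binomial(2, freqs, size=(n_train + n_test, n_hd)).astype(float)  # 0/1/2 genotypes
ld_idx = np.arange(0, n_hd, 10)           # every 10th SNP forms the "low-density" panel

G_train, G_test = G[:n_train], G[n_train:]
pls = PLSRegression(n_components=20)
pls.fit(G_train[:, ld_idx], G_train)      # map low-density genotypes to the full panel

imputed = np.clip(np.rint(pls.predict(G_test[:, ld_idx])), 0, 2)
accuracy = (imputed == G_test).mean()
# Note: these simulated SNPs are independent (no linkage disequilibrium), so the
# accuracy here is modest; real panels rely on LD between nearby SNPs.
print("genotype imputation accuracy: %.3f" % accuracy)
```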

10.

Background

Genomic selection (GS) uses molecular breeding values (MBV) derived from dense markers across the entire genome for selection of young animals. The accuracy of MBV prediction is important for a successful application of GS. Recently, several methods have been proposed to estimate MBV. Initial simulation studies have shown that these methods can accurately predict MBV. In this study we compared the accuracies and possible bias of five different regression methods in an empirical application in dairy cattle.

Methods

Genotypes of 7,372 SNP and highly accurate EBV of 1,945 dairy bulls were used to predict MBV for protein percentage (PPT) and a profit index (Australian Selection Index, ASI). Marker effects were estimated by least squares regression (FR-LS), Bayesian regression (Bayes-R), random regression best linear unbiased prediction (RR-BLUP), partial least squares regression (PLSR) and nonparametric support vector regression (SVR) in a training set of 1,239 bulls. Accuracy and bias of MBV prediction were calculated from cross-validation of the training set and tested against a test team of 706 young bulls.

Results

For both traits, FR-LS using a subset of SNPs was significantly less accurate than all other methods, which used all SNPs. Accuracies obtained by Bayes-R, RR-BLUP, PLSR and SVR were very similar for ASI (0.39-0.45) and for PPT (0.55-0.61). Overall, SVR gave the highest accuracy. All methods resulted in biased MBV predictions for ASI; for PPT, only RR-BLUP and SVR predictions were unbiased. A significant decrease in the accuracy of prediction of ASI was seen in young test cohorts of bulls compared with the accuracy derived from cross-validation of the training set. This reduction was not apparent for PPT. Combining MBV predictions with pedigree-based predictions gave 1.05-1.34 times higher accuracies compared with predictions based on pedigree alone. The methods differ considerably in their computational requirements, with PLSR and RR-BLUP requiring the least computing time.

Conclusions

The four methods that use information from all SNPs, namely RR-BLUP, Bayes-R, PLSR and SVR, generate similar accuracies of MBV prediction for genomic selection, and their use in the selection of immediate future generations in dairy cattle will give comparable results. The use of FR-LS in genomic selection is not recommended.
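A minimal RR-BLUP-style sketch on simulated marker data, not the study's implementation: all SNP effects are shrunk with a common ridge penalty and summed to give a molecular breeding value for new animals. Marker coding, heritability, and the ridge parameter are illustrative assumptions.

```python
# RR-BLUP as ridge regression of phenotypes on genome-wide marker genotypes.
import numpy as np

rng = np.random.default_rng(7)
n_train, n_test, n_snp, n_qtl = 500, 200, 2000, 50
M = rng.binomial(2, 0.5, size=(n_train + n_test, n_snp)).astype(float) - 1.0  # coded -1/0/1
effects = np.zeros(n_snp)
effects[rng.choice(n_snp, n_qtl, replace=False)] = rng.normal(size=n_qtl)
tbv = M @ effects                                        # true breeding values
y = tbv + rng.normal(scale=tbv.std(), size=len(tbv))     # phenotypes, heritability ~ 0.5

M_tr, y_tr = M[:n_train], y[:n_train]
lam = float(n_snp)   # ridge parameter ~ (sigma_e^2 / sigma_g^2) * number of SNPs, assumed h2 = 0.5
beta = np.linalg.solve(M_tr.T @ M_tr + lam * np.eye(n_snp), M_tr.T @ (y_tr - y_tr.mean()))

mbv_test = M[n_train:] @ beta
print("accuracy (corr. of MBV with true BV): %.2f" % np.corrcoef(mbv_test, tbv[n_train:])[0, 1])
```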

11.
To analyse intervertebral movements, methods with a high level of accuracy are required. Stereoradiographic methods have been used for a number of years to describe intervertebral movements, but their major problem is identifying the same anatomical landmarks, not only on the pair of radiographs used for three-dimensional reconstruction, but also on all the pairs used to analyse the displacements. To minimize the errors due to incorrect identification of anatomical landmarks, a least squares method for estimating the Euler angle parameters was validated by means of measurements made on a cadaver spine. The accuracy of this method varied between 0.69° and 0.71° in rotation and between 0.28 mm and 0.77 mm in translation. In addition, this method significantly corrected the positions of the anatomical landmarks. Euler angles, used with a least squares estimate, can provide accurate and precise results.
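A sketch of the type of problem solved here, not the paper's exact algorithm: a least squares rigid-body fit between two sets of anatomical landmarks using the SVD (Kabsch) solution, with Euler angles extracted from the fitted rotation. The landmark coordinates and noise level are simulated.

```python
# Least squares rigid-body fit (rotation + translation) between landmark sets.
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(8)
P = rng.normal(size=(6, 3)) * 20.0                       # landmark coordinates, position 1 (mm)
R_true = Rotation.from_euler("xyz", [5, -3, 2], degrees=True).as_matrix()
t_true = np.array([1.0, -0.5, 2.0])
Q = P @ R_true.T + t_true + rng.normal(scale=0.2, size=P.shape)  # position 2, digitizing noise

# Minimize sum ||Q_i - (R P_i + t)||^2 via the SVD of the cross-covariance matrix
Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
U, _, Vt = np.linalg.svd(Pc.T @ Qc)
d = np.sign(np.linalg.det(Vt.T @ U.T))                   # guard against reflections
R_est = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
t_est = Q.mean(axis=0) - R_est @ P.mean(axis=0)

print("estimated Euler angles (deg):",
      np.round(Rotation.from_matrix(R_est).as_euler("xyz", degrees=True), 2))
print("estimated translation (mm):  ", np.round(t_est, 2))
```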

12.
Zhou XH, Castelluccio P, Zhou C. Biometrics, 2005, 61(2): 600-609
In the evaluation of diagnostic accuracy of tests, a gold standard on the disease status is required. However, in many complex diseases, it is impossible or unethical to obtain such a gold standard. If an imperfect standard is used, the estimated accuracy of the tests would be biased. This type of bias is called imperfect gold standard bias. In this article we develop a nonparametric maximum likelihood method for estimating ROC curves and their areas of ordinal-scale tests in the absence of a gold standard. Our simulation study shows that the proposed estimators for the ROC curve areas have good finite-sample properties in terms of bias and mean squared error. Further simulation studies show that our nonparametric approach is comparable to the binormal parametric method, and is easier to implement. Finally, we illustrate the application of the proposed method in a real clinical study on assessing the accuracy of seven specific pathologists in detecting carcinoma in situ of the uterine cervix.

13.
Our research is motivated by two methodological problems in assessing the diagnostic accuracy of traditional Chinese medicine (TCM) doctors in detecting a particular symptom whose true status has an ordinal scale and is unknown: imperfect gold standard bias and ordinal-scale symptom status. In this paper, we proposed a nonparametric maximum likelihood method for estimating and comparing the accuracy of different doctors in detecting a particular symptom without a gold standard when the true symptom status had multiple ordered classes. In addition, we extended the concept of the area under the receiver operating characteristic curve to a hyper-dimensional overall accuracy for diagnostic accuracy, together with alternative graphs for displaying the results visually. The simulation studies showed that the proposed method had good performance in terms of bias and mean squared error. Finally, we applied our method to our motivating example of assessing the diagnostic abilities of five TCM doctors in detecting symptoms related to Chills disease.

14.
The paper discusses the possibility of implementing a minimum risk classifier using the learning machine approach. Necessary conditions on the choice of pairwise classification costs are imposed so that the minimum risk classifier can be implemented using pairwise class-separating functions. The parameters of these functions are obtained using a two-stage algorithm that minimizes a modified least squares criterion of class separation. In comparison with the standard least squares objective function, this criterion increases the sensitivity of the learning scheme near the class-separating surface and, consequently, allows for an improvement in the performance of the discriminant-function decision processor. Simplicity of the design procedure is achieved by partitioning the multimodal classes into unimodal subsets, since discriminant functions of unimodal classes can usually be implemented simply and with sufficient accuracy as low-order polynomials. The proposed design approach is tested experimentally on an artificial pattern recognition problem.
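A generic sketch of a least squares pairwise separating function: a low-order polynomial discriminant fitted by regressing +/-1 class targets. This uses the plain least squares criterion, not the authors' modified, risk-weighted one, and the two unimodal classes are simulated.

```python
# Low-order polynomial discriminant fitted by least squares on +/-1 targets.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(13)
n = 200
X0 = rng.normal(loc=[-1.0, 0.0], scale=0.7, size=(n, 2))
X1 = rng.normal(loc=[1.0, 0.5], scale=0.7, size=(n, 2))
X = np.vstack([X0, X1])
t = np.concatenate([-np.ones(n), np.ones(n)])         # class targets -1 / +1

Phi = PolynomialFeatures(degree=2).fit_transform(X)   # low-order polynomial features
w, *_ = np.linalg.lstsq(Phi, t, rcond=None)

pred = np.sign(Phi @ w)                               # sign of the separating function
print("training accuracy: %.3f" % (pred == t).mean())
```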

15.
Papermaking wastewater accounts for a large proportion of industrial wastewater, and it is essential to obtain accurate and reliable effluent indices in real time. Considering the complexity, nonlinearity, and time variability of wastewater treatment processes, a dynamic kernel extreme learning machine (DKELM) method is proposed to predict the key effluent quality index, chemical oxygen demand (COD). A time-lag coefficient is introduced and a kernel function is embedded into the extreme learning machine (ELM) to extract dynamic information and obtain better prediction accuracy. A case study on modeling a wastewater treatment process is presented to evaluate the performance of the proposed DKELM. The results illustrate that both the training and the prediction accuracy of the DKELM model are superior to those of the other models. For the prediction of effluent COD, the coefficient of determination of the DKELM model is increased by 27.52%, 21.36%, 10.42%, and 10.81% compared with partial least squares, ELM, dynamic ELM, and kernel ELM, respectively.
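A generic kernel-ELM-style sketch (numerically equivalent to kernel ridge regression), with lagged inputs standing in for the "dynamic" part; the toy process, lag order, kernel width, and regularization constant are all assumptions, not the paper's DKELM or its wastewater data.

```python
# Kernel ELM on lagged inputs: output weights alpha solve (K + I/C) alpha = y.
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(9)
T_len = 400
u = rng.uniform(size=T_len)                               # influent "load" signal
cod = np.zeros(T_len)
for t in range(2, T_len):                                 # toy dynamic process
    cod[t] = 0.6 * cod[t - 1] + 0.3 * np.sin(3 * u[t - 1]) + 0.1 * u[t - 2] \
             + rng.normal(scale=0.02)

lag = 2                                                   # assumed time-lag order
X = np.column_stack([u[lag - 1:-1], u[:-lag], cod[lag - 1:-1]])
y = cod[lag:]
n_train = 300
Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

C = 100.0                                                 # regularization constant
K = rbf_kernel(Xtr, Xtr)
alpha = np.linalg.solve(K + np.eye(n_train) / C, ytr)     # output weights
pred = rbf_kernel(Xte, Xtr) @ alpha

ss_res = ((yte - pred) ** 2).sum()
ss_tot = ((yte - yte.mean()) ** 2).sum()
print("test R^2: %.3f" % (1 - ss_res / ss_tot))
```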

16.
Anderson and Pospahala (1970) investigated the estimation of wildlife population size using the belt or line transect sampling method and devised a correction for bias, thus leading to a class of estimators with desirable characteristics. This work was given a basic and rigorous mathematical framework by Burnham and Anderson (1976). In the present article we use this mathematical framework to develop an estimator of population size and density using weighted least squares. The approach is a two-stage method.
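A heavily simplified sketch of the general line-transect idea, not the two-stage estimator of the paper: fit the histogram of perpendicular detection distances by weighted least squares, extrapolate to zero distance to estimate f(0), and use the standard density formula D = n f(0) / (2L). Transect length, detection function, and bin layout are invented for illustration.

```python
# Line transect density via a weighted least squares fit of the distance histogram.
import numpy as np

rng = np.random.default_rng(14)
L_km, D_true = 50.0, 12.0                       # transect length (km), animals per km^2
half_width = 0.15                               # truncation distance (km)
n_strip = rng.poisson(D_true * 2 * half_width * L_km)      # animals present in the strip
dist = rng.uniform(0, half_width, size=n_strip)
detected = rng.uniform(size=n_strip) < np.exp(-(dist / 0.12) ** 2)  # detection falls off with distance
x = dist[detected]
n = x.size

# Histogram-based estimate of the detection-distance density f(x)
n_bins = 6
edges = np.linspace(0, half_width, n_bins + 1)
counts, _ = np.histogram(x, bins=edges)
mids = 0.5 * (edges[:-1] + edges[1:])
h = edges[1] - edges[0]
f_hat = counts / (n * h)

# Weighted least squares quadratic fit, weights proportional to bin counts
W = np.diag(np.maximum(counts, 1))
A = np.column_stack([np.ones(n_bins), mids, mids ** 2])
coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ f_hat)
f0 = coef[0]                                    # fitted density at zero distance

D_hat = n * f0 / (2 * L_km)                     # animals per km^2
print("estimated density: %.1f per km^2 (true %.1f)" % (D_hat, D_true))
```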

17.
A genetic model was proposed to simultaneously investigate the genetic effects of both polygenes and several single genes on quantitative traits of diploid plants and animals. Mixed linear model approaches were employed for statistical analysis. Based on two mating designs, a full diallel cross and a modified diallel cross including F2, Monte Carlo simulations were conducted to evaluate the unbiasedness and efficiency of generalized least squares (GLS) and ordinary least squares (OLS) estimation for fixed effects, and of minimum norm quadratic unbiased estimation (MINQUE) and Henderson III for variance components. MINQUE(1) estimates were unbiased and efficient in both reduced and full genetic models. Henderson III could have a large bias when used to analyze the full genetic model. Simulation results also showed that GLS and OLS were good methods for estimating fixed effects in the genetic models. Data on Drosophila melanogaster from Gilbert were used as a worked example to demonstrate the parameter estimation.
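A small numpy illustration of the GLS and OLS fixed-effect estimators compared in this abstract, using a toy linear model with correlated errors and a known covariance matrix; it is not the diallel genetic model itself.

```python
# OLS vs. GLS fixed-effect estimation under correlated errors with known covariance V.
import numpy as np

rng = np.random.default_rng(10)
n = 120
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([2.0, 1.5])

# Correlated errors with an AR(1)-type covariance structure
V = 0.8 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.linalg.cholesky(V)
y = X @ beta_true + L @ rng.normal(size=n)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

V_inv = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv @ y)   # (X'V^-1 X)^-1 X'V^-1 y

print("OLS estimate:", np.round(beta_ols, 3))
print("GLS estimate:", np.round(beta_gls, 3))
```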

18.
MOTIVATION: Gene expression data often contain missing expression values. Effective missing value estimation methods are needed, since many algorithms for gene expression data analysis require a complete matrix of gene array values. In this paper, imputation methods based on the least squares formulation are proposed to estimate missing values in gene expression data; they exploit local similarity structures in the data as well as the least squares optimization process. RESULTS: The proposed local least squares imputation method (LLSimpute) represents a target gene that has missing values as a linear combination of similar genes. The similar genes are chosen as the k-nearest neighbors or the k coherent genes with the largest absolute Pearson correlation coefficients. A non-parametric missing value estimation method for LLSimpute is designed by introducing an automatic k-value estimator. In our experiments, the proposed LLSimpute method shows competitive results when compared with other imputation methods for missing value estimation on various datasets and percentages of missing values in the data. AVAILABILITY: The software is available at http://www.cs.umn.edu/~hskim/tools.html CONTACT: hpark@cs.umn.edu
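A compact sketch of the local least squares imputation idea on toy data, not the released LLSimpute software: a missing entry in a target gene is predicted from the k genes most correlated with it via a least squares fit on the observed columns. The data dimensions and k are arbitrary, and the automatic k selection of the paper is omitted.

```python
# Local least squares imputation of one missing expression value.
import numpy as np

rng = np.random.default_rng(11)
n_genes, n_arrays, k = 200, 20, 10
latent = rng.normal(size=(5, n_arrays))
E = rng.normal(size=(n_genes, 5)) @ latent + 0.3 * rng.normal(size=(n_genes, n_arrays))

target, miss_col = 0, 7
true_value = E[target, miss_col]
obs_cols = np.array([c for c in range(n_arrays) if c != miss_col])

# k most similar genes by absolute Pearson correlation on the observed columns
corr = np.array([abs(np.corrcoef(E[target, obs_cols], E[g, obs_cols])[0, 1])
                 for g in range(n_genes)])
corr[target] = -1.0                               # exclude the target itself
neighbors = np.argsort(corr)[-k:]

A = E[neighbors][:, obs_cols]                     # similar genes over observed columns
w = E[target, obs_cols]
x, *_ = np.linalg.lstsq(A.T, w, rcond=None)       # target ~ linear combination of neighbors
imputed = E[neighbors, miss_col] @ x

print("true value: %.3f, LLS-imputed: %.3f" % (true_value, imputed))
```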

19.
Bondell HD, Reich BJ. Biometrics, 2008, 64(1): 115-123
Variable selection can be challenging, particularly in situations with a large number of predictors with possibly high correlations, such as gene expression data. In this article, a new method called the OSCAR (octagonal shrinkage and clustering algorithm for regression) is proposed to simultaneously select variables while grouping them into predictive clusters. In addition to improving prediction accuracy and interpretation, these resulting groups can then be investigated further to discover what contributes to the group having a similar behavior. The technique is based on penalized least squares with a geometrically intuitive penalty function that shrinks some coefficients to exactly zero. Additionally, this penalty yields exact equality of some coefficients, encouraging correlated predictors that have a similar effect on the response to form predictive clusters represented by a single coefficient. The proposed procedure is shown to compare favorably to the existing shrinkage and variable selection techniques in terms of both prediction error and model complexity, while yielding the additional grouping information.

20.
We propose a simple method for the comparison of series of matched observations. While in all our examples we address "individual bioequivalence" (IBE), which is the subject of much discussion in pharmaceutical statistics, the methodology can be applied to a wide class of cross-over experiments, including cross-over imaging. From the statistical point of view the considered models belong to the class of "errors-in-variables" models. In computational statistics the corresponding optimization method is referred to as the "least squares distance" or "total least squares" method. The derived confidence regions for both intercept and slope provide the basis for formulating the IBE criteria and methods for assessing them. Simple simulations show that the proposed approach is intuitive and transparent and, at the same time, has a solid statistical and computational background.
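A sketch of a total least squares (orthogonal) line fit, the errors-in-variables estimator named in this abstract, applied to a toy pair of matched series; the confidence-region construction for IBE is not reproduced, and the simulated exposures are hypothetical.

```python
# Total least squares line fit via the SVD, compared with ordinary least squares.
import numpy as np

rng = np.random.default_rng(12)
n = 60
true_exposure = rng.normal(loc=100.0, scale=15.0, size=n)
x = true_exposure + rng.normal(scale=5.0, size=n)                 # formulation A, measured with error
y = 2.0 + 0.98 * true_exposure + rng.normal(scale=5.0, size=n)    # formulation B, measured with error

xc, yc = x - x.mean(), y - y.mean()
_, _, Vt = np.linalg.svd(np.column_stack([xc, yc]), full_matrices=False)
v = Vt[-1]                                  # right singular vector of the smallest singular value
slope = -v[0] / v[1]                        # TLS slope from the normal direction of the fitted line
intercept = y.mean() - slope * x.mean()

slope_ols = (xc @ yc) / (xc @ xc)           # ordinary LS slope for comparison (attenuated)
print("TLS slope: %.3f, OLS slope: %.3f, intercept: %.2f" % (slope, slope_ols, intercept))
```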

