Similar Documents (20 results)
1.
The method of generalized least squares (GLS) is used to assess the variance function for isothermal titration calorimetry (ITC) data collected for the 1:1 complexation of Ba²⁺ with 18-crown-6 ether. In the GLS method, the least squares (LS) residuals from the data fit are themselves fitted to a variance function, with iterative adjustment of the weighting function in the data analysis to produce consistency. The data are treated in a pooled fashion, providing 321 fitted residuals from 35 data sets in the final analysis. Heteroscedasticity (nonconstant variance) is clearly indicated. Data error terms proportional to q_i and q_i/v are well defined statistically, where q_i is the heat from the ith injection of titrant and v is the injected volume. The statistical significance of the variance function parameters is confirmed through Monte Carlo calculations that mimic the actual data set. For the data in question, which fall mostly in the range q_i = 100-2000 µcal, the contributions to the data variance from the terms in q_i² typically exceed the background constant term for q_i > 300 µcal and v < 10 µL. Conversely, this means that in reactions with q_i much less than this, heteroscedasticity is not a significant problem. Accordingly, in such cases the standard unweighted fitting procedures provide reliable results for the key parameters K and ΔH° and their statistical errors. These results also support an important earlier finding: in most ITC work on 1:1 binding processes, the optimal number of injections is 7-10, a factor of 3 smaller than the current norm. For high-q reactions, where weighting is needed for optimal LS analysis, tips are given for using the weighting option in the commercial software commonly employed to process ITC data.
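A minimal sketch of the pooled GLS iteration described above: the LS residuals are fitted to a variance function, and the resulting weights are fed back into the data fit until the parameters stabilize. The quadratic variance function σ² = a + b·ŷ² and all names here are illustrative simplifications, not the paper's exact parameterization (which includes terms in q_i and q_i/v).

```python
import numpy as np
from scipy.optimize import curve_fit

def gls_fit(model, x, y, p0, n_iter=5):
    """Iteratively reweighted fit: squared residuals define the variance
    function sigma^2 = a + b*yhat^2, which sets the next round's weights."""
    sigma = np.ones_like(y)                   # first pass is unweighted (OLS)
    for _ in range(n_iter):
        popt, _ = curve_fit(model, x, y, p0=p0, sigma=sigma)
        yhat = model(x, *popt)
        # fit the variance function to the squared residuals
        A = np.column_stack([np.ones_like(yhat), yhat ** 2])
        a, b = np.linalg.lstsq(A, (y - yhat) ** 2, rcond=None)[0]
        sigma = np.sqrt(np.clip(a + b * yhat ** 2, 1e-12, None))
    return popt, (a, b)
```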

2.
Most methods for analyzing real-time quantitative polymerase chain reaction (qPCR) data for single experiments estimate the hypothetical cycle 0 signal y0 by first estimating the quantification cycle (Cq) and amplification efficiency (E) from least-squares fits of fluorescence intensity data for cycles near the onset of the growth phase. The resulting y0 values are statistically equivalent to the corresponding Cq if and only if E is taken to be error free. But uncertainty in E usually dominates the total uncertainty in y0, making the latter much degraded in precision compared with Cq. Bias in E can be an even greater source of error in y0. So-called mechanistic models achieve higher precision in estimating y0 by tacitly assuming E = 2 in the baseline region and so are subject to this bias error. When used in calibration, the mechanistic y0 is statistically comparable to Cq from the other methods. When a signal threshold yq is used to define Cq, best estimation precision is obtained by setting yq near the maximum signal in the range of fitted cycles, in conflict with common practice in the y0 estimation algorithms.
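A hedged numeric sketch (not from the paper) of why uncertainty in E dominates: with y0 = yq/E^Cq, first-order error propagation gives a relative variance of (Cq·sE/E)² + (ln E·sCq)², so an error in E is amplified by the full cycle count.

```python
import numpy as np

def y0_from_cq(yq, E, Cq, sE=0.0, sCq=0.0):
    """y0 = yq / E**Cq with first-order error propagation
    (yq treated as exact): var(ln y0) = (Cq*sE/E)^2 + (ln(E)*sCq)^2."""
    y0 = yq / E ** Cq
    rel_var = (Cq * sE / E) ** 2 + (np.log(E) * sCq) ** 2
    return y0, y0 * np.sqrt(rel_var)

# e.g. a 1% error in E at Cq = 25 alone contributes ~25% error to y0:
print(y0_from_cq(yq=1.0, E=1.90, Cq=25.0, sE=0.019, sCq=0.05))
```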

3.
Isothermal titration calorimetry data for very low c (≡ K[M]₀) must normally be analyzed with the stoichiometry parameter n fixed, either at its known value or at any reasonable value if the system is not well characterized. In the latter case, ΔH° (and hence n) can be estimated from the T-dependence of the binding constant K, using the van't Hoff (vH) relation. An alternative is global, or simultaneous, fitting of data at multiple temperatures. In this Note, global analysis of low-c data at two temperatures is shown to estimate ΔH° and n with double the precision of the vH method.
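A schematic of such a global fit, assuming each temperature's dataset supplies its own heat model q(x; K, ΔH°, n); the van't Hoff relation ties K at every temperature to K at the first temperature through the shared ΔH°. The data layout and names are assumptions for illustration, not the Note's code.

```python
import numpy as np
from scipy.optimize import least_squares

R = 8.314  # gas constant, J/(mol K)

def global_residuals(p, datasets):
    """Shared parameters across temperatures: K1 (binding constant at the
    first temperature), dH (J/mol), and stoichiometry n."""
    K1, dH, n = p
    T1 = datasets[0][0]
    res = []
    for T, x, q_obs, model in datasets:
        # van't Hoff link: K(T) follows from K1 and the shared dH
        K = K1 * np.exp(-dH / R * (1.0 / T - 1.0 / T1))
        res.append(q_obs - model(x, K, dH, n))
    return np.concatenate(res)

# fit = least_squares(global_residuals, x0=[1e4, -3e4, 1.0], args=(datasets,))
```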

4.
In isothermal titration calorimetry (ITC), the two main sources of random (statistical) error are associated with the extraction of the heat q from the measured temperature changes and with the delivery of metered volumes of titrant. The former leads to uncertainty that is approximately constant and the latter to uncertainty that is proportional to q. The role of these errors in the analysis of ITC data by nonlinear least squares is examined for the case of 1:1 binding, M + X ⇌ MX. The standard errors in the key parameters, the equilibrium constant K° and the enthalpy ΔH°, are assessed from the variance-covariance matrix computed for exactly fitting data. Monte Carlo calculations confirm that these "exact" estimates will normally suffice and show further that neglect of weights in the nonlinear fitting can result in significant loss of efficiency. The effects of the titrant volume error are strongly dependent on assumptions about the nature of this error: if it is random in the integral volume instead of the differential volume, correlated least squares is required for proper analysis, and the parameter standard errors decrease with increasing number of titration steps rather than increase.
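The "exact" error estimates mentioned above come from the variance-covariance matrix evaluated at the fitted parameters; a minimal sketch, assuming the model Jacobian J and the weights are available:

```python
import numpy as np

def param_std_errors(J, w, sigma2=1.0):
    """Parameter standard errors from the variance-covariance matrix
    C = sigma2 * (J^T W J)^{-1}, with W = diag(w)."""
    JtW = J.T * w                       # J^T W for diagonal W
    cov = sigma2 * np.linalg.inv(JtW @ J)
    return np.sqrt(np.diag(cov))
```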

5.
6.
7.
Population variance in bone shape is an important consideration when applying the results of subject-specific computational models to a population. In this letter, we demonstrate the ability of partial least squares regression to provide an improved shape prediction of the equine third metacarpal epiphysis, using two easily obtained measurements.
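A toy sketch of partial least squares regression for shape prediction, with synthetic data standing in for the metacarpal measurements (the feature and shape dimensions here are made up):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))    # two easily obtained measurements per bone
# synthetic "shape" coordinates linearly related to the measurements
Y = X @ rng.normal(size=(2, 50)) + 0.1 * rng.normal(size=(40, 50))

pls = PLSRegression(n_components=2).fit(X, Y)
Y_pred = pls.predict(rng.normal(size=(5, 2)))   # predicted shape, new bones
print(Y_pred.shape)
```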

8.
We consider the problem of predicting survival times of cancer patients from the gene expression profiles of their tumor samples via linear regression modeling of log-transformed failure times. The partial least squares (PLS) and least absolute shrinkage and selection operator (LASSO) methodologies are used for this purpose, where we first modify the data to account for censoring. Three approaches to handling right-censored data (reweighting, mean imputation, and multiple imputation) are considered. Their performances are examined in a detailed simulation study and compared with those of full-data PLS and LASSO had there been no censoring. A major objective of this article is to investigate the performances of PLS and LASSO in the context of microarray data, where the number of covariates is very large and there are extremely few samples. We demonstrate that LASSO outperforms PLS in terms of prediction error when the list of covariates includes a moderate to large percentage of useless or noise variables; otherwise, PLS may outperform LASSO. For a moderate sample size (100, with 10,000 covariates), LASSO performed better than a no-covariate model (i.e., noise-based prediction). The mean imputation method appears to best track the performance of full-data PLS or LASSO. The mean imputation scheme is applied to an existing data set on lung cancer. This reanalysis using mean-imputed PLS and LASSO identifies a number of genes known from previous studies to be related to cancer or tumor activity.
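To make the pipeline concrete, a deliberately naive sketch: censored log-times are mean-imputed and then passed to LASSO. The `impute_censored` helper is a crude stand-in for the conditional-mean imputation the article describes, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def impute_censored(log_t, event):
    """Naive mean imputation: replace each censored log-time with the
    mean of observed failure times exceeding it (a simplified stand-in
    for Kaplan-Meier-based conditional means)."""
    y = log_t.astype(float).copy()
    obs = log_t[event == 1]
    for i in np.where(event == 0)[0]:
        later = obs[obs > log_t[i]]
        y[i] = later.mean() if later.size else log_t[i]
    return y

# y_full = impute_censored(log_t, event)
# model = LassoCV(cv=5).fit(X, y_full)   # p >> n variable selection
```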

9.
Accuracy in quantitative real-time polymerase chain reaction (qPCR) requires the use of stable endogenous controls. Normalization with multiple reference genes is the gold standard, but their identification is a laborious task, especially in species with limited sequence information. Coffee (Coffea spp.) is an important agricultural commodity and, owing to its economic relevance, is the subject of increasing research in genetics and biotechnology, in which gene expression analysis is one of the most important fields. Nevertheless, relatively few studies have focused on the analysis of gene expression in coffee, and most of these have used less accurate techniques such as northern blot assays rather than more accurate techniques (e.g., qPCR) already used extensively in other plant species. Aiming to promote the use of qPCR in studies of gene expression in coffee, we identified reference genes suitable for a number of different experimental conditions. Using two distinct algorithms, implemented in geNorm and NormFinder, we evaluated a total of eight candidate reference genes (psaB, PP2A, AP47, S24, GAPDH, rpl39, UBQ10, and UBI9) in four experimental sets (control versus drought-stressed leaves, control versus drought-stressed roots, leaves of three different coffee cultivars, and four different coffee organs). The most suitable combination of reference genes is indicated for each experimental set for use as an internal control for reliable qPCR data normalization. This study also provides useful guidelines for reference gene selection for researchers working with coffee plant samples under conditions other than those tested here.
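The geNorm stability measure M used in such screens is simple to compute: for each candidate gene it is the average standard deviation of its pairwise log2 expression ratios with every other candidate. A minimal sketch (the array layout is an assumption):

```python
import numpy as np

def genorm_m(expr):
    """geNorm stability M per gene.
    expr: samples x genes array of relative expression quantities."""
    logq = np.log2(expr)
    n = logq.shape[1]
    M = np.empty(n)
    for j in range(n):
        # SD of the log2 ratio of gene j against each other gene
        sds = [np.std(logq[:, j] - logq[:, k], ddof=1)
               for k in range(n) if k != j]
        M[j] = np.mean(sds)
    return M   # lower M = more stable reference gene
```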

10.
Phylogenetic comparative methods use tree topology, branch lengths, and models of phenotypic change to take into account nonindependence in statistical analysis. However, these methods normally assume that trees and models are known without error. Approaches relying on evolutionary regimes also assume specific distributions of character states across a tree, which often result from ancestral state reconstructions that are subject to uncertainty. Several methods have been proposed to deal with some of these sources of uncertainty, but approaches accounting for all of them are less common. Here, we show how Bayesian statistics facilitates this task while relaxing the homogeneous rate assumption of the well-known phylogenetic generalized least squares (PGLS) framework. This Bayesian formulation allows uncertainty about phylogeny, evolutionary regimes, or other statistical parameters to be taken into account for studies as simple as testing for coevolution in two traits or as complex as testing whether bursts of phenotypic change are associated with evolutionary shifts in intertrait correlations. A mixture of validation approaches indicates that the approach has good inferential properties and predictive performance. We provide suggestions for implementation and show its usefulness by exploring the coevolution of ankle posture and forefoot proportions in Carnivora.
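For reference, the fixed-tree, homogeneous-rate PGLS point estimate that the Bayesian formulation generalizes is a one-liner in matrix form; here C is the trait covariance implied by the tree (e.g., shared branch lengths under Brownian motion):

```python
import numpy as np

def pgls(X, y, C):
    """Phylogenetic GLS estimate beta = (X'C^-1 X)^-1 X'C^-1 y.
    The Bayesian version described above places priors on the tree,
    regimes, and rates instead of fixing C."""
    Ci = np.linalg.inv(C)
    return np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)
```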

11.
This study outlines two robust regression approaches, least median of squares (LMS) and iteratively re-weighted least squares (IRLS), and investigates their application in the instrumental analysis of nutraceuticals (namely, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to process calibration data from the fluorescence quenching reaction (ΔF and F-ratio) under ideal and non-ideal linearity conditions. For each condition, the data were treated using three regression fittings: ordinary least squares (OLS), LMS, and IRLS. Linearity, limits of detection (LOD) and quantitation (LOQ), accuracy, and precision were carefully assessed for each condition. LMS and IRLS line fittings significantly improved the correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed for the non-ideal condition and its intercept. Under both linearity conditions, LOD and LOQ values after robust regression fitting were lower than those obtained before data treatment, indicating that the linearity ranges for drug determination could be extended to lower limits of quantitation by improving the regression equation parameters. Analytical results for lipoic acid in capsules, obtained with both fluorimetric methods and treated by parametric OLS and by the robust LMS and IRLS fits, were compared under both linearity conditions.
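A compact sketch of the IRLS idea for a straight-line calibration, using Huber weights (the weight scheme is an assumption; LMS, by contrast, minimizes the median squared residual and is usually solved by random subset search rather than by reweighting):

```python
import numpy as np

def irls_line(x, y, n_iter=20, k=1.345):
    """Iteratively re-weighted least squares for y = a + b*x with Huber
    weights, which down-weight outlying calibration points."""
    A = np.column_stack([np.ones_like(x), x])
    w = np.ones_like(y)
    for _ in range(n_iter):
        AtW = A.T * w                               # A^T W, diagonal W
        a, b = np.linalg.solve(AtW @ A, AtW @ y)    # weighted LS step
        r = y - (a + b * x)
        s = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD)
        u = np.abs(r) / (k * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)        # Huber weights
    return a, b
```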

12.
Two-dimensional difference gel electrophoresis (DIGE) is a tool for measuring changes in protein expression between samples that involves pre-electrophoretic labeling with cyanine dyes. In multi-gel experiments, univariate statistical tests have been used to identify differential expression between sample types by looking for significant changes in spot volume. Multivariate statistical tests, which look for correlated changes between sample types, provide an alternative approach for identifying spots with differential expression. Partial least squares-discriminant analysis (PLS-DA), a multivariate statistical approach, was combined with an iterative threshold process to identify which protein spots contributed most to the model, and was compared to univariate tests on three datasets, including one in which no biological difference was expected. The novel multivariate approach detailed here complements the univariate approach in identifying differentially expressed protein spots, with the advantages of a reduced risk of false positives and the identification of spots that are significantly altered in terms of correlated expression rather than absolute expression values.
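A minimal PLS-DA sketch: dummy-code the two sample types, fit PLS, and rank spots by coefficient magnitude. Ranking by |coefficient| is a simplification of the paper's contribution measure; the iterative-threshold step would repeatedly drop low-ranked spots and refit.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def plsda_ranking(X, labels, n_components=2):
    """PLS-DA via PLS regression on a 0/1 class code.
    X: gels x spot-volumes; returns spot indices, most influential first."""
    labels = np.asarray(labels)
    y = (labels == labels[0]).astype(float)   # dummy-code two classes
    pls = PLSRegression(n_components=n_components).fit(X, y)
    importance = np.abs(pls.coef_).ravel()
    return np.argsort(importance)[::-1]
```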

13.
14.
By analyzing the Helicoverpa armigera nucleopolyhedrovirus genome together with known sequences in GenBank, the iap2 gene was located on the BamHI-F fragment of the genome. This fragment was recovered and used as a template; primers were designed, and the DNA fragment of the anti-apoptosis gene iap2 was obtained by PCR amplification. The amplified product was cloned into the pGEM-T vector, and the insert was then excised and ligated into the expression vector pET-28a to construct the recombinant plasmid pET-iap2. DNA sequence analysis showed that the cloned sequence was identical to the published sequence. Escherichia coli BL21(DE3) harboring the recombinant plasmid pET-iap2 expressed the anti-apoptosis protein IAP2.

15.
Background, aim, and scope  Analysis of uncertainties plays a vital role in the interpretation of life cycle assessment findings. Some of these uncertainties arise from parametric data variability in life cycle inventory analysis. For instance, the efficiencies of manufacturing processes may vary among different industrial sites or geographic regions; or, in the case of new and unproven technologies, it is possible that prospective performance levels can only be estimated. Although such data variability is usually treated using a probabilistic framework, some recent work on the use of fuzzy sets or possibility theory has appeared in the literature. The latter school of thought is based on the notion that not all data variability can be properly described in terms of frequency of occurrence. In many cases, it is necessary to model the uncertainty associated with the subjective degree of plausibility of parameter values. Fuzzy set theory is appropriate for such uncertainties. However, the computations required for handling fuzzy quantities have not been fully integrated with the formal matrix-based life cycle inventory (LCI) analysis described by Heijungs and Suh (2002).
Materials and methods  This paper integrates computations with fuzzy numbers into the matrix-based LCI computational model described in the literature. The approach uses fuzzy numbers to propagate the data variability in LCI calculations, and results in fuzzy distributions of the inventory results. The approach is developed based on similarities with the fuzzy economic input-output (EIO) model proposed by Buckley (Eur J Oper Res 39:54-60, 1989).
Results  The matrix-based fuzzy LCI model is illustrated using three simple case studies. The first case shows how fuzzy inventory results arise in simple systems with variability in industrial efficiency and emissions data. The second case study illustrates how the model applies to life cycle systems with co-products, which require the inclusion of displaced processes. The third case study demonstrates the use of the method in the context of comparing different carbon sequestration technologies.
Discussion  These simple case studies illustrate the important features of the model, including possible computational issues that can arise with larger and more complex life cycle systems.
Conclusions  A fuzzy matrix-based LCI model has been proposed. The model extends the conventional matrix-based LCI model to allow for computations with parametric data variability represented as fuzzy numbers. This approach is an alternative or complement to interval analysis and to probabilistic or Monte Carlo techniques.
Recommendations and perspectives  Potential further work in this area includes extension of the fuzzy model to EIO-LCA models and to life cycle impact assessment (LCIA); development of hybrid fuzzy-probabilistic approaches; and integration with life cycle-based optimization or decision analysis. Additional theoretical work is needed for modeling correlations of the variability of parameters using interacting or correlated fuzzy numbers, which remains an unresolved computational issue. Furthermore, integration of the fuzzy model into LCA software can also be investigated.
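To illustrate the fuzzy-LCI computation, a small sketch using α-cuts of triangular fuzzy numbers and vertex enumeration to bound the scaling vector s = A⁻¹f. Vertex enumeration is exact only for monotone responses and grows combinatorially, so this is a toy for tiny technology matrices, not the paper's algorithm.

```python
import numpy as np
from itertools import product

def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, m, b) at level alpha."""
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

def fuzzy_scaling(A_tri, f, alpha):
    """Bounds on s = A^-1 f when the technology-matrix entries are
    triangular fuzzy numbers, by enumerating interval vertices at
    a given alpha-cut. A_tri has shape (n, n, 3); f has length n."""
    cuts = [alpha_cut(t, alpha) for t in A_tri.reshape(-1, 3)]
    lo = hi = None
    for corner in product(*cuts):                     # 2^(n*n) corners
        s = np.linalg.solve(np.array(corner).reshape(A_tri.shape[:2]), f)
        lo = s if lo is None else np.minimum(lo, s)
        hi = s if hi is None else np.maximum(hi, s)
    return lo, hi
```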

16.
17.
Mutations in the leucine-rich, glioma-inactivated 1 gene, LGI1, cause autosomal-dominant lateral temporal lobe epilepsy via unknown mechanisms. LGI1 belongs to a subfamily of leucine-rich repeat genes comprising four members (LGI1-LGI4) in mammals. In this study, both comparative developmental and molecular evolutionary methods were applied to investigate the evolution of the LGI gene family and, subsequently, the functional importance of its different members. Our phylogenetic studies suggest that LGI genes evolved early in the vertebrate lineage. Genetic and expression analyses of all five zebrafish lgi genes revealed duplications of lgi1 and lgi2, each resulting in two paralogous gene copies with mostly nonoverlapping expression patterns. Furthermore, all vertebrate LGI1 orthologs are under strong purifying selection, arguing for an essential role of this gene in neural development or function. The approach of combining expression and selection data used here demonstrates that, in poorly characterized gene families, a framework of evolutionary and expression analyses can identify the genes that are functionally most important and are therefore prime candidates for human disorders.

18.
One of the most important issues in improving the competitiveness of the fish production sector is improving the growth rate of fish, a trait whose genetic background is at present poorly understood. In this study, we compared the relative expression levels of the Akt1s1, FGF, GH, IGF1, MSTN, TLR2, TLR4, and TLR5 genes in the blood of common carp (Cyprinus carpio) belonging to different growth types and phenotypes. Fish were divided into groups based on growth rate (normal group: n = 6; slow group: n = 6) and phenotype (scaled group: n = 6; mirror group: n = 6). In the first 18 weeks, we measured significant differences (p < 0.05) between groups in body weight and body length. Over the next 18 weeks, the fish in the slow group showed more rapid development. In the same period, the slow group was characterized by lower expression levels for most genes, whereas its GH and IGF1 mRNA levels were higher than in the normal group. We found that phenotype was not a determining factor in differences in the relative expression levels of the genes studied.

19.
20.
By optimizing the Mg²⁺ concentration, Taq enzyme dosage, SYBR Green I (SGI) concentration, and plate-reading temperature in the PCR system, we established a method for measuring the expression levels of nitrogen assimilation-related genes in rice by real-time RT-PCR. Using this validated method, we investigated variations in the expression levels of OsAMT1.1 (a nitrogen uptake gene) and OsGlt1 (a nitrogen metabolism gene) in rice seedlings under varying nitrogen supply. The results show that by optimizing the PCR parameters to best fit the characteristics of the target genes, low-abundance nitrogen transport and metabolism genes in rice can be quantified quickly and accurately by fluorescence RT-PCR. Published in Russian in Fiziologiya Rastenii, 2006, Vol. 53, No. 4, pp. 625-636. The text was submitted by the authors in English.
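Expression levels from such a fluorescence RT-PCR setup are typically compared with the ΔΔCq method; a minimal sketch (equal amplification efficiency E for target and reference genes is an assumption):

```python
def relative_expression(cq_target, cq_ref, cq_target_ctrl, cq_ref_ctrl, E=2.0):
    """Relative expression by the delta-delta-Cq method: the target gene
    (e.g., OsAMT1.1) is normalized to a reference gene, then compared
    between a treatment and a control condition."""
    d_treated = cq_target - cq_ref
    d_control = cq_target_ctrl - cq_ref_ctrl
    return E ** -(d_treated - d_control)

# target appears 2 cycles earlier (relative to reference) than in control:
print(relative_expression(24.0, 18.0, 26.0, 18.0))   # -> 4.0-fold
```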
