Similar Literature

20 similar records found.
1.

Introduction

T2 relaxometry has become an important tool in quantitative MRI. Little attention has been paid to the effect of the refocusing flip angle on the offset parameter, which was introduced to account for a signal floor due to noise or to long T2 components. The aim of this study was to show that B1 imperfections contribute significantly to the offset. We further introduce a simple method to reduce the systematic error in T2 by discarding the first echo and using the offset fitting approach.

Materials and Methods

Signal curves of T2 relaxometry were simulated based on extended phase graph theory and evaluated with 4 different methods (inclusion and exclusion of the first echo, each fitted with and without the offset). We further performed T2 relaxometry in a phantom on a 9.4 T magnetic resonance imaging scanner and used the same post-processing methods as for the extended phase graph simulations. Single spin echo sequences were used to determine the reference T2 time.

Results

The simulation data showed that the systematic errors in T2 and in the offset depend on the refocusing pulse, the echo spacing, and the echo train length. The systematic error could be reduced by discarding the first echo. A further reduction of the systematic T2 error was achieved by using the offset as a fitting parameter. The phantom experiments confirmed these findings.

Conclusion

The fitted offset parameter in T2 relaxometry is influenced by imperfect refocusing pulses. Using the offset as a fitting parameter and discarding the first echo is a fast and easy method to minimize the error in T2, particularly for low to intermediate echo train lengths.
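The fitting recipe summarized above (discard the first echo, keep the offset as a free parameter) can be sketched in a few lines. This is a minimal illustration on synthetic, noiseless decay data; the echo spacing, T2, amplitude, and offset values are hypothetical, not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic multi-echo decay with a constant signal floor ("offset")
echo_times = np.arange(1, 17) * 10.0          # ms, hypothetical echo spacing
true_T2, amplitude, offset = 80.0, 1000.0, 30.0
signal = amplitude * np.exp(-echo_times / true_T2) + offset

def mono_exp_offset(t, a, t2, c):
    """Mono-exponential decay with an offset fitting parameter."""
    return a * np.exp(-t / t2) + c

# Discard the first echo and fit with the offset as a free parameter
popt, _ = curve_fit(mono_exp_offset, echo_times[1:], signal[1:],
                    p0=(signal[1], 50.0, 0.0))
fitted_T2 = popt[1]
```

On real multi-echo data the first echo deviates from the mono-exponential model because of stimulated echoes, which is why it is excluded before fitting.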

2.
Purpose

To analyze the uncertainties of the rectum due to anisotropic shape variations by using a statistical point distribution model (PDM).

Materials and methods

The PDM was applied to the rectum contours delineated on planning computed tomography (CT) and cone-beam CT (CBCT) at 80 fractions of 11 patients. The standard deviations (SDs) of the systematic and random errors of the shape variations of the whole rectum and of the region in which the rectum overlapped with the PTV (ROP region) were derived from the PDMs at all fractions of each patient. The systematic error was derived by using the PDMs of the planning and average rectum surfaces determined from the rectum surfaces at all fractions, while the random error was derived by using a PDM-based covariance matrix at all fractions of each patient.

Results

For the whole rectum, the population SDs were larger than 1.0 mm along all directions for the random error, and along the anterior, superior, and inferior directions for the systematic error. The deviation was largest along the superior and inferior directions for the systematic and random errors, respectively. For the ROP regions, the population SDs of the systematic error were larger than 1.0 mm along the superior and inferior directions. The population SDs of the random error for the ROP regions were larger than 1.0 mm except along the right and posterior directions.

Conclusions

The anisotropic shape variations of the rectum, especially in the ROP regions, should be considered when determining planning risk volume (PRV) margins for the rectum associated with acute toxicities.

3.
A Markov chain Monte Carlo (MCMC) algorithm to sample an exchangeable covariance matrix, such as that of the error terms (R0) in a multiple trait animal model with missing records under normal-inverted Wishart priors, is presented. The algorithm (FCG) is based on a conjugate form of the inverted Wishart density that avoids sampling the missing error terms. Normal prior densities are assumed for the "fixed" effects and breeding values, whereas the covariance matrices are assumed to follow inverted Wishart distributions. The inverted Wishart prior for the environmental covariance matrix is a product density over all patterns of missing data. The resulting MCMC scheme eliminates the correlation between the sampled missing residuals and the sampled R0, which in turn decreases the total number of samples needed to reach convergence. In a multiple trait data set with an extreme pattern of missing records, the FCG algorithm produced a dramatic reduction in the autocorrelations among samples for all lags from 1 to 50, increased the effective sample size by a factor of 2.5 to 7, and reduced the number of samples needed to attain convergence, compared with the "data augmentation" algorithm.
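The normal-inverted Wishart conjugacy underlying such a sampler can be sketched as a single update step. This is a simplified illustration with complete records and hypothetical dimensions and hyperparameters, not the FCG algorithm itself:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)
n, t = 200, 3                                  # records, traits (hypothetical)
true_R0 = np.array([[1.0, 0.3, 0.0],
                    [0.3, 1.0, 0.2],
                    [0.0, 0.2, 1.0]])
E = rng.multivariate_normal(np.zeros(t), true_R0, size=n)  # residuals

# Conjugate update: with prior IW(nu0, S0), the full conditional of the
# residual covariance given residuals E is IW(nu0 + n, S0 + E'E)
nu0, S0 = t + 2, np.eye(t)
posterior = invwishart(df=nu0 + n, scale=S0 + E.T @ E)
draws = posterior.rvs(size=500, random_state=rng)
R0_hat = draws.mean(axis=0)                    # posterior-mean estimate of R0
```

With missing records the scheme above would require imputing the missing residuals; the point of the FCG formulation is to factor the prior over missing-data patterns so that this imputation step is avoided.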

4.
All of our perceptual experiences arise from the activity of neural populations. Here we study the formation of such percepts under the assumption that they emerge from a linear readout, i.e., a weighted sum of the neurons’ firing rates. We show that this assumption constrains the trial-to-trial covariance structure of neural activities and animal behavior. The predicted covariance structure depends on the readout parameters, and in particular on the temporal integration window w and typical number of neurons K used in the formation of the percept. Using these predictions, we show how to infer the readout parameters from joint measurements of a subject’s behavior and neural activities. We consider three such scenarios: (1) recordings from the complete neural population, (2) recordings of neuronal sub-ensembles whose size exceeds K, and (3) recordings of neuronal sub-ensembles that are smaller than K. Using theoretical arguments and artificially generated data, we show that the first two scenarios allow us to recover the typical spatial and temporal scales of the readout. In the third scenario, we show that the readout parameters can only be recovered by making additional assumptions about the structure of the full population activity. Our work provides the first thorough interpretation of (feed-forward) percept formation from a population of sensory neurons. We discuss applications to experimental recordings in classic sensory decision-making tasks, which will hopefully provide new insights into the nature of perceptual integration.

5.
Ko H, Davidian M. Biometrics. 2000;56(2):368-375.
The nonlinear mixed effects model is used to represent data in pharmacokinetics, viral dynamics, and other areas where an objective is to elucidate associations among individual-specific model parameters and covariates; however, covariates may be measured with error. For additive measurement error, we show that substituting mismeasured covariates for true covariates may lead to biased estimators of the fixed effects and of the random effects covariance parameters, while regression calibration may eliminate the bias in the fixed effects but fail to correct that in the covariance parameters. We develop methods to take account of measurement error that correct this bias and may be implemented with standard software, and we demonstrate their utility via simulation and application to data from a study of HIV dynamics.
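The bias from substituting a mismeasured covariate can be demonstrated in a plain linear regression, as a simplified stand-in for the nonlinear mixed effects setting; the sample size, variances, and true slope below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 20000, 2.0
x_true = rng.normal(0.0, 1.0, n)
y = beta * x_true + rng.normal(0.0, 0.5, n)
x_obs = x_true + rng.normal(0.0, 1.0, n)   # additive measurement error

# Naive OLS slope computed on the mismeasured covariate
naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# Classical attenuation: E[naive] = beta * var(x) / (var(x) + var(err)),
# here 2.0 * 1 / (1 + 1) = 1.0, i.e. the slope is biased toward zero
```

Regression calibration would replace `x_obs` with its best linear prediction of `x_true`, which removes this attenuation in the slope but, as the abstract notes, does not by itself fix the bias in variance components.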

6.

Background  

High-throughput screening (HTS) is a key part of the drug discovery process, during which thousands of chemical compounds are screened and their activity levels measured in order to identify potential drug candidates (i.e., hits). Many technical, procedural or environmental factors can cause systematic measurement error or inequalities in the conditions in which the measurements are taken. Such systematic error has the potential to critically affect the hit selection process. Several error correction methods and software packages have been developed to address this issue in the context of experimental HTS [17]. Despite their power to reduce the impact of systematic error when applied to error-perturbed datasets, those methods have one disadvantage: they introduce a bias when applied to data not containing any systematic error [6]. Hence, one first needs to assess the presence of systematic error in a given HTS assay, and then apply a systematic error correction method if and only if the presence of systematic error has been confirmed by statistical tests.

7.
Life history studies have established that trade-offs between growth and survival are common both within and among species. Identifying the factor(s) that mediate this trade-off has proven difficult, however, especially at the among-species level. In this study, we examined a series of potentially interrelated traits in a community of temperate-zone passerine birds to help understand the putative causes and consequences of variation in early-life growth among species. First, we examined whether nest predation risk (a proven driver of interspecific variation in growth and development rates) was correlated with species-level patterns of incubation duration and nestling period length. We then assessed whether proxies for growth rate covaried with mean trait covariance strength (i.e., phenotypic correlations (rp), which can be a marker of early-life stress) among body mass, tarsus length, and wing length at fledging. Finally, we examined whether trait covariance strength at fledging was related to postfledging survival. We found that higher nest predation risk was correlated with faster skeletal growth and that our proxies for growth corresponded with increased trait covariance strength (rp), which, subsequently, correlated with higher mortality in the next life stage (postfledging period). These results provide an indication that extrinsic pressures (nest predation) impact rates of growth, and that there are costs of rapid growth across species, expressed as higher mean rp and elevated postfledging mortality. The link between higher levels of trait covariance at fledging and increased mortality is unclear, but increased trait covariance strength may reflect reduced phenotypic flexibility (i.e., phenotypic canalization), which may limit an organism's capacity for coping with environmental or ecological variability.

8.
A common question in movement studies is how results should be interpreted with respect to systematic and random errors. In this study, simulations are made in order to see how a rigid body's orientation in space (i.e., the helical angle between two orientations) is affected by (1) a systematic error added to a single marker and (2) a combination of this systematic error and Gaussian white noise. The orientation was estimated after adding a systematic error to one marker within the rigid body. This procedure was then repeated with Gaussian noise added to each marker.

In conclusion, the results show that the systematic error's effect on the estimated orientation depends on the number of markers in the rigid body and on the direction in which the systematic error is added. The systematic error has no effect if it is added along the radial axis (i.e., the line connecting the centre of mass and the affected marker).
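A minimal version of this simulation can be sketched as follows. The marker coordinates and error magnitude are hypothetical; the orientation is estimated with the standard Kabsch/SVD least-squares fit, and the helical (total rotation) angle is read off the rotation-matrix trace:

```python
import numpy as np

def best_rotation(P, Q):
    """Kabsch/SVD: least-squares rotation aligning centred marker set P onto Q."""
    P0, Q0 = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P0.T @ Q0)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def helical_angle(R):
    """Total rotation angle of a rotation matrix, from its trace."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

# Four non-coplanar markers of a rigid body (hypothetical coordinates)
markers = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [-1., -1., -1.]])

# Systematic error on one marker, tangential vs. radial direction
tangential, radial = markers.copy(), markers.copy()
tangential[0] += [0.0, 0.0, 0.05]   # perpendicular to the radial axis
radial[0] += [0.05, 0.0, 0.0]       # along the centre-of-mass-to-marker line

angle_tan = helical_angle(best_rotation(markers, tangential))
angle_rad = helical_angle(best_rotation(markers, radial))
# angle_rad stays at numerical zero: a radial systematic error rescales the
# cross-covariance symmetrically and leaves the fitted orientation unchanged
```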

9.
Shade-tolerant non-native invasive plant species may make deep incursions into natural plant communities, but detecting such species is challenging because occurrences are often sparse. We developed Bayesian models of the distribution of Microstegium vimineum in natural plant communities of the southern Blue Ridge Mountains, USA, to address three objectives: (1) to assess local and landscape factors that influence the probability of presence of M. vimineum; (2) to quantify the spatial covariance error structure in occurrence that was not accounted for by the environmental variables; and (3) to synthesize our results with previous findings to make inference on the spatial attributes of the invasion process. Natural plant communities surrounded by areas with high human activity and low forest cover were at highest risk of M. vimineum invasion. The probability of M. vimineum presence also increased with increasing native species richness and soil pH, and with decreasing basal area of ericaceous shrubs. After accounting for environmental covariates, evaluation of the spatial covariance error structure revealed that M. vimineum is invading the landscape by a hierarchical process: infrequent long-distance dispersal events result in new nascent sub-populations that then spread via intermediate- and short-distance dispersal, resulting in a 3-km spatial aggregation pattern of sub-populations. Containment or minimization of its impact on native plant communities will be contingent on understanding how M. vimineum can be prevented from colonizing new suitable habitats. The hierarchical invasion process proposed here provides a framework to organize and focus research and management efforts.

10.
This paper discusses the advantages and disadvantages of the different methods that separate net ecosystem exchange (NEE) into its major components, gross ecosystem carbon uptake (GEP) and ecosystem respiration (Reco). In particular, we analyse the effect of extrapolating night-time values of ecosystem respiration into the daytime; this is usually done with a temperature response function derived from long-term data sets. For this analysis, we used 16 one-year-long data sets of carbon dioxide exchange measurements from European and US eddy covariance networks. These sites span from boreal to Mediterranean climates, and include deciduous and evergreen forest, scrubland and crop ecosystems. We show that the temperature sensitivity of Reco, derived from long-term (annual) data sets, does not reflect the short-term temperature sensitivity that is effective when extrapolating from night to daytime. Specifically, in summer-active ecosystems the long-term temperature sensitivity exceeds the short-term sensitivity. Thus, in those ecosystems, applying a long-term temperature sensitivity to the extrapolation of respiration from night to day leads to a systematic overestimation of ecosystem respiration on half-hourly to annual time-scales, which can reach >25% for an annual budget and which consequently affects estimates of GEP. Conversely, in summer-passive (Mediterranean) ecosystems, the long-term temperature sensitivity is lower than the short-term temperature sensitivity, resulting in an underestimation of annual sums of respiration. We introduce a new generic algorithm that derives a short-term temperature sensitivity of Reco from eddy covariance data, applies it to the extrapolation from night-time to daytime, and further fills data gaps by exploiting both the covariance between fluxes and meteorological drivers and the temporal structure of the fluxes. While this algorithm should give less biased estimates of GEP and Reco, we discuss the remaining biases and recommend that eddy covariance measurements still be backed by ancillary flux measurements that can reduce the uncertainties inherent in the eddy covariance data.
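The direction of this extrapolation bias can be illustrated with a simple Q10 respiration model; the Q10 values, temperatures, and night-time flux below are hypothetical, chosen only to show the effect:

```python
# Q10 model: Reco(T) = R_ref * Q10 ** ((T - T_ref) / 10)
night_T, day_T = 10.0, 20.0        # deg C
R_night = 3.0                      # umol CO2 m-2 s-1, observed at night

# Extrapolate the night-time flux to daytime temperature using the
# short-term sensitivity (Q10 = 1.5) vs. a long-term one (Q10 = 2.5)
day_short = R_night * 1.5 ** ((day_T - night_T) / 10.0)   # 4.5
day_long = R_night * 2.5 ** ((day_T - night_T) / 10.0)    # 7.5

overestimate = (day_long - day_short) / day_short          # ~0.67
```

With these illustrative numbers, the daytime respiration extrapolated with the (too high) long-term sensitivity exceeds the short-term estimate by roughly two thirds, consistent in direction with the >25% annual overestimation reported above.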

11.
12.
Biological and social networks are composed of heterogeneous nodes that contribute differentially to network structure and function. A number of algorithms have been developed to measure this variation. These algorithms have proven useful for applications that require assigning scores to individual nodes, from ranking websites to determining critical species in ecosystems, yet the mechanistic basis for why they produce good rankings remains poorly understood. We show that a unifying property of these algorithms is that they quantify consensus in the network about a node's state or capacity to perform a function. The algorithms capture consensus either by taking into account the number of a target node's direct connections and, when the edges are weighted, the uniformity of its weighted in-degree distribution (breadth), or by measuring net flow into a target node (depth). Using data from communication, social, and biological networks, we find that how an algorithm measures consensus, through breadth or depth, impacts its ability to correctly score nodes. We also observe variation in sensitivity to source biases in interaction/adjacency matrices: errors arising from systematic error at the node level or from direct manipulation of network connectivity by nodes. Our results indicate that the breadth algorithms, which are derived from information theory, correctly score nodes (assessed using independent data) and are robust to errors. However, in cases where nodes "form opinions" about other nodes using indirect information, like reputation, depth algorithms, like Eigenvector Centrality, are required. One caveat is that Eigenvector Centrality is not robust to error unless the network is transitive or assortative; in these cases the network structure allows the depth algorithms to effectively capture breadth as well as depth. Finally, we discuss the algorithms' cognitive and computational demands. This is an important consideration in systems in which individuals use the collective opinions of others to make decisions.

13.
Biophysical Journal. 2022;121(18):3422-3434.
Protein coating material is important in many technological fields. The interaction between carbon nanomaterial and protein is especially interesting since it makes the development of novel hybrid materials possible. Functional bacterial amyloid (FuBA) is promising as a coating material because of its desirable features, such as well-defined molecular structure, robustness against harsh conditions, and easily engineerable functionality. Here, we report the systematic assembly of the functional amyloid protein, CsgA, from Escherichia coli (E. coli) on graphite. We characterize the assemblies using scanning tunneling microscopy (STM) and show that CsgA forms assemblies according to systematic patterns, dictated by the graphite lattice. In addition, we show that graphite flakes induce the fibrillization of CsgA, in vitro, suggesting a surface-induced conformational change of CsgA facilitated by the graphite lattice. Using coarse-grained molecular dynamics simulations, we model the adhesion and lamellar formation of a CsgA-derived peptide and conclude that peptides are adsorbed both as monomers and smaller aggregates leading initially to unordered graphite-bound aggregates, which are followed by rearrangement into lamellar structures. Finally, we show that CsgA-derived peptides can be immobilized in very systematic assemblies and their molecular orientation can be tuned using a small chaperone-like molecule. Our findings have implications for the development of FuBA-based biosensors, catalysts, and other technologies requiring well-defined protein assemblies on graphite.

14.
Genetic association studies have explained only a small proportion of the estimated heritability of complex traits, leaving the remaining heritability “missing.” Genetic interactions have been proposed as an explanation for this, because they lead to overestimates of the heritability and are hard to detect. Whether this explanation is true depends on the proportion of variance attributable to genetic interactions, which is difficult to measure in outbred populations. Founder populations exhibit a greater range of kinship than outbred populations, which helps in fitting the epistatic variance. We extend classic theory to founder populations, giving the covariance between individuals due to epistasis of any order. We recover the classic theory as a limit, and we derive a recently proposed estimator of the narrow sense heritability as a corollary. We extend the variance decomposition to include dominance. We show in simulations that it would be possible to estimate the variance from pairwise interactions with samples of a few thousand from strongly bottlenecked human founder populations, and we provide an analytical approximation of the standard error. Applying these methods to 46 traits measured in a yeast (Saccharomyces cerevisiae) cross, we estimate that pairwise interactions explain 10% of the phenotypic variance on average and that third- and higher-order interactions explain 14% of the phenotypic variance on average. We search for third-order interactions, discovering an interaction that is shared between two traits. Our methods will be relevant to future studies of epistatic variance in founder populations and crosses.

15.
The specific growth rate of P. aeruginosa and four mutator strains, mutT, mutY, mutM and mutY–mutM, is estimated by a suggested maximum likelihood (ML) method which takes the autocorrelation of the observations into account. For each bacterial strain, six wells of optical density (OD) measurements are used for parameter estimation. The data are log-transformed so that a linear model can be applied. The transformation changes the variance structure, and hence an OD-dependent variance is implemented in the model. The autocorrelation in the data is demonstrated, and a correlation model with an exponentially decaying function of the time between observations is suggested. A model with a full covariance structure containing OD-dependent variance and an autocorrelation structure is compared to a model with variance only and to one with no variance or correlation implemented. It is shown that the model that best describes the data is the one taking the full covariance structure into account. An inference study is made in order to determine whether the growth rate of the five bacterial strains is the same. After applying a likelihood-ratio test to models with a full covariance structure, it is concluded that the specific growth rate is the same for all bacterial strains. This study highlights the importance of carrying out an explorative examination of residuals in order to arrive at a correct parametrization of the model, including its covariance structure. The ML method is shown to be a strong tool, as it enables estimation of covariance parameters along with the other model parameters and opens the way for strong statistical tools for inference studies.

16.
Malaria is a life-threatening infectious disease primarily caused by the Plasmodium falciparum parasite. The increasing resistance to current antimalarial drugs and their side effects has led to an urgent need for novel malaria drug targets, such as the P. falciparum cGMP-dependent protein kinase (pfPKG). However, PKG also plays an essential regulatory role in the human host. Human cGMP-dependent protein kinase (hPKG) and pfPKG are controlled by structurally homologous cGMP-binding domains (CBDs). Here, we show that despite the structural similarities between the essential CBDs in pfPKG and hPKG, their respective allosteric networks differ significantly. Through comparative chemical shift covariance analyses, molecular dynamics simulations, and backbone internal dynamics measurements, we found that conserved allosteric elements within the essential CBDs are wired differently in pfPKG and hPKG to implement cGMP-dependent kinase activation. Such rewiring of allosteric networks between pfPKG and hPKG was unexpected given the structural similarity between the two essential CBDs. Yet this finding provides crucial information on which elements to target for selective inhibition of pfPKG versus hPKG, which may potentially reduce undesired side effects in malaria treatments.

17.
Bacteria in the class Alphaproteobacteria have a wide variety of lifestyles and physiologies. They include pathogens of humans and livestock, agriculturally valuable strains, and several highly abundant marine groups. The ancestor of mitochondria also originated in this clade. Despite significant effort to investigate the phylogeny of the Alphaproteobacteria with a variety of methods, there remains considerable disparity in the placement of several groups. Recent emphasis on phylogenies derived from multiple protein-coding genes remains contentious due to disagreement over appropriate gene selection and the potential influences of systematic error. We revisited previous investigations in this area using concatenated alignments of the small and large subunit (SSU and LSU) rRNA genes, as we show here that these loci have much lower GC bias than whole genomes. This approach has allowed us to update the canonical 16S rRNA gene tree of the Alphaproteobacteria with additional important taxa that were not previously included, and with added resolution provided by concatenating the SSU and LSU genes. We investigated the topological stability of the Alphaproteobacteria by varying alignment methods, rate models, taxon selection and RY-recoding to circumvent GC content bias. We also introduce RYMK-recoding and show that it avoids some of the information loss in RY-recoding. We demonstrate that the topology of the Alphaproteobacteria is sensitive to inclusion of several groups of taxa, but it is less affected by the choice of alignment and rate methods. The majority of topologies and comparative results from Approximately Unbiased tests provide support for positioning the Rickettsiales and the mitochondrial branch within a clade. This composite clade is a sister group to the abundant marine SAR11 clade (Pelagibacterales). Furthermore, we add support for taxonomic assignment of several recently sequenced taxa. 
Accordingly, we propose three subclasses within the Alphaproteobacteria: the Caulobacteridae, the Rickettsidae, and the Magnetococcidae.

18.
Determining how genetic variance changes under selection in natural populations has proved to be a very resilient problem in evolutionary genetics. In the same way that understanding the availability of genetic variance within populations requires the simultaneous consideration of genetic variance in sets of functionally related traits, determining how genetic variance changes under selection in natural populations will require ascertaining how genetic variance–covariance (G) matrices evolve. Here, we develop a geometric framework using higher-order tensors, which enables the empirical characterization of how G matrices have diverged among populations. We then show how divergence among populations in genetic covariance structure can be associated with divergence in selection acting on those traits using key equations from evolutionary theory. Using estimates of G matrices of eight male sexually selected traits from nine geographical populations of Drosophila serrata, we show that much of the divergence in genetic variance occurred in a single trait combination, a conclusion that could not have been reached by examining variation among the individual elements of the nine G matrices. Divergence in G was primarily in the direction of the major axes of genetic variance within populations, suggesting that genetic drift may be a major cause of divergence in genetic variance among these populations.

19.
Karin Meyer, Mark Kirkpatrick. Genetics. 2010;185(3):1097-1110.
Obtaining accurate estimates of the genetic covariance matrix G for multivariate data is a fundamental task in quantitative genetics and important for both evolutionary biologists and plant or animal breeders. Classical methods for estimating G are well known to suffer from substantial sampling errors; importantly, its leading eigenvalues are systematically overestimated. This article proposes a framework that exploits information in the phenotypic covariance matrix P in a new way to obtain more accurate estimates of G. The approach focuses on the “canonical heritabilities” (the eigenvalues of P−1G), which may be estimated with more precision than those of G because P is estimated more accurately. Our method uses penalized maximum likelihood and shrinkage to reduce bias in estimates of the canonical heritabilities. This in turn can be exploited to get substantial reductions in bias for estimates of the eigenvalues of G and a reduction in sampling errors for estimates of G. Simulations show that improvements are greatest when sample sizes are small and the canonical heritabilities are closely spaced. An application to data from beef cattle demonstrates the efficacy of this approach and its effect on estimates of heritabilities and correlations. Penalized estimation is recommended for multivariate analyses involving more than a few traits or problems with limited data.

Quantitative geneticists, including evolutionary biologists and plant and animal breeders, are increasingly dependent on multivariate analyses of genetic variation, for example, to understand evolutionary constraints and design efficient selection programs. New challenges arise when one moves from estimating the genetic variance of a single phenotype to the multivariate setting. An important but unresolved issue is how best to deal with sampling variation and the corresponding bias in the eigenvalues of estimates of the genetic covariance matrix, G. It is well known that estimates of the largest eigenvalues of a covariance matrix are biased upward and those of the smallest eigenvalues are biased downward (Lawley 1956; Hayes and Hill 1981). For genetic problems, where we need to estimate at least two covariance matrices simultaneously, this tends to be exacerbated, especially for G. In turn, this can result in invalid estimates of G, i.e., estimates with negative eigenvalues, and can produce systematic errors in predictions of the response to selection.

There has been longstanding interest in “regularization” of covariance matrices, in particular for cases where the ratio between the number of observations and the number of variables is small. Various studies have recently employed such techniques for the analysis of high-dimensional, genomic data. In general, this involves a compromise between additional bias and reduced sampling variation of “improved” estimators that have less statistical risk than standard methods (Bickel and Li 2006). For instance, various types of shrinkage estimators of covariance matrices have been suggested that counteract bias in estimates of eigenvalues by shrinking all sample eigenvalues toward their mean. Often this is equivalent to a weighted combination of the sample covariance matrix and a target matrix assumed to have a simple structure. A common choice for the latter is an identity matrix, which yields a ridge regression type formulation (Hoerl and Kennard 1970). Numerous simulation studies in a variety of settings are available, which demonstrate that regularization can yield closer agreement between estimated and population covariance matrices, less variable estimates of model terms, or improved performance of statistical tests.

In quantitative genetic analyses, we attempt to partition observed, overall (phenotypic) covariances into their genetic and environmental components. Typically, this results in strong sampling correlations between them. Hence, while the partitioning into sources of variation and estimates of individual covariance matrices may be subject to substantial sampling variances, their sum, i.e., the phenotypic covariance matrix, can generally be estimated much more accurately. This has led to suggestions to “borrow strength” from estimates of phenotypic components to estimate the genetic covariances. In particular, Hayes and Hill (1981) proposed a method termed “bending” that involved regressing the eigenvalues of the product of the genetic and the inverse of the phenotypic covariance matrix toward their mean. One objective of this procedure was to ensure that estimates of the genetic covariance matrix from an analysis of variance were positive definite. In addition, the authors showed by simulation that shrinking eigenvalues even further than needed to make all values nonnegative could improve the achieved response to selection when using the resulting estimates to derive weights for a selection index, especially for estimation based on small samples. Subsequent work demonstrated that bending could also be advantageous in more general scenarios, such as indexes that included information from relatives (Meyer and Hill 1983).

Modern, mixed model (“animal model”)-based analyses to estimate genetic parameters using maximum likelihood or Bayesian methods generally constrain estimates to the parameter space, so that, at the expense of introducing some bias, estimates of covariance matrices are positive semidefinite. However, the problems arising from substantial sampling variation in multivariate analyses remain. In spite of increasing applications of such analyses in scenarios where data sets are invariably small, e.g., the analysis of data from natural populations (e.g., Kruuk et al. 2008), there has been little interest in regularization and shrinkage techniques in genetic parameter estimation, other than through the use of informative priors in a Bayesian context. Instead, suggestions for improved estimation have focused on parsimonious modeling of covariance matrices, e.g., through reduced rank estimation or by imposing a known structure, such as a factor-analytic structure (Kirkpatrick and Meyer 2004; Meyer 2009), or by fitting covariance functions for longitudinal data (Kirkpatrick et al. 1990). While such methods can be highly advantageous when the underlying assumptions are at least approximately correct, data-driven methods of regularization may be preferable in other scenarios.

This article explores the scope for improved estimation of genetic covariance matrices by implementing the equivalent of bending within animal model-type analyses. We begin with a review of the underlying statistical principles (which the impatient reader might skip), examining the concept of improved estimation, its implementation via shrinkage estimators or penalized estimation, and selected applications. We then describe a penalized restricted maximum-likelihood (REML) procedure for the estimation of genetic covariance matrices that utilizes information from their phenotypic counterparts, and present a simulation study demonstrating the effect of penalties on parameter estimates and their sampling properties. The article concludes with an application to a problem relevant in genetic improvement of beef cattle and a discussion.
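The eigenvalue shrinkage at the heart of bending can be sketched numerically as follows. This is a minimal illustration in the spirit of Hayes and Hill's procedure, regressing the canonical heritabilities toward their mean; the matrices and the shrinkage weight are hypothetical:

```python
import numpy as np

def bend(G, P, shrink=0.2):
    """Shrink the eigenvalues of P^-1 G (the canonical heritabilities)
    toward their mean and rebuild the genetic covariance matrix."""
    L = np.linalg.cholesky(P)
    Linv = np.linalg.inv(L)
    M = Linv @ G @ Linv.T              # symmetric; same eigenvalues as P^-1 G
    w, V = np.linalg.eigh(M)
    w_bent = (1.0 - shrink) * w + shrink * w.mean()
    return L @ (V * w_bent) @ V.T @ L.T

G = np.diag([3.0, 1.0, 0.1])           # hypothetical genetic covariances
P = np.diag([4.0, 2.0, 1.0])           # hypothetical phenotypic covariances
G_bent = bend(G, P)

h2 = np.sort(np.linalg.eigvals(np.linalg.inv(P) @ G)).real
h2_bent = np.sort(np.linalg.eigvals(np.linalg.inv(P) @ G_bent)).real
# The extreme canonical heritabilities move toward the mean, while the
# mean itself is preserved
```

Shrinking far enough also guarantees nonnegative eigenvalues, which is what makes the bent estimate usable in selection index calculations.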

20.
The interpretation of φ-values has led to an understanding of the folding transition state ensemble of a variety of proteins. Although the main guidelines and equations for calculating φ are well established, there remains some controversy about the quality of the numerical values obtained. By analyzing a complete set of results from kinetic experiments with the SH3 domain of α-spectrin (Spc-SH3) and applying classical error methods and error-propagation formulas, we evaluated the uncertainties involved in two-state-folding kinetic experimental parameters and the corresponding calculated φ-values. We show that kinetic constants in water and m values can be properly estimated from a judicious weighting of fitting errors and describe some procedures to calculate the errors in Gibbs energies and φ-values from a traditional two-point Leffler analysis. Furthermore, on the basis of general assumptions made with the protein engineering method, we show how to generate multipoint Leffler plots via the analysis of pH dependencies of kinetic parameters. We calculated the definitive φ-values for a collection of single mutations previously designed to characterize the folding transition state of the α-spectrin SH3 domain. The effectiveness of the pH-scanning procedure is also discussed in the context of error analysis. Judging from the magnitudes of the error bars obtained from two-point and multipoint Leffler plots, we conclude that the precision obtained for φ-values should be ∼25%, a reasonable limit that takes into account the propagation of experimental errors.
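The two-point arithmetic behind a φ-value and its first-order error propagation can be sketched as follows. All rate constants, stabilities, and uncertainties below are hypothetical, not the Spc-SH3 values:

```python
import numpy as np

R, T = 8.314e-3, 298.0                 # kJ/(mol K), K

kf_wt, kf_mut = 100.0, 25.0            # folding rate constants, s^-1
ddG_eq = 5.0                           # kJ/mol, mutation-induced destabilization
rel_sd_k = 0.05                        # 5% relative error on each rate constant
sd_eq = 0.42                           # kJ/mol, propagated error on ddG_eq

# Change in the folding activation free energy, from the rate constants
ddG_kin = R * T * np.log(kf_wt / kf_mut)   # ~3.43 kJ/mol
phi = ddG_kin / ddG_eq                     # ~0.69

# First-order propagation: two independent relative rate errors enter
# ddG_kin, and the ratio combines both relative errors in quadrature
sd_kin = R * T * np.sqrt(2.0) * rel_sd_k
sd_phi = abs(phi) * np.sqrt((sd_kin / ddG_kin) ** 2 + (sd_eq / ddG_eq) ** 2)
```

With these illustrative inputs the relative uncertainty in φ comes out at roughly 10%, and it grows quickly as ddG_eq shrinks, which is why small-ΔΔG mutants yield unreliable φ-values.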
