Similar Literature
20 matching records found.
1.
It has already been shown that the number of pools in an open system in the steady state cannot be determined from the number of exponential terms in the specific activity function of a pool, even if the data were free from experimental error. However, some information is conveyed by the number of exponential terms. The information is different depending upon whether the data are obtained from the pool into which the tracer is introduced or from another pool. In the latter case, the number of exponential terms is shown to indicate the maximum number of intermediate pools involved in the shortest path of transfer of material from the injected pool to the pool in question. With regard to the former case, this paper is restricted to functions with two exponential terms and shows which systems of n pools (n ≥ 2) are consistent with such data. Consequently, biexponential experimental curves can be interpreted in terms of models consisting of an unrestricted number of pools in which each pool is defined in terms of fast mixing. The generalization to cases of functions with more than two exponential terms can be carried out in a similar manner.
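A minimal numerical sketch of the biexponential point (illustrative rate constants, not taken from the paper): the injected-pool curve of any linear two-pool open system is exactly a sum of two exponentials, with exponents given by the eigenvalues of the rate matrix.

```python
import numpy as np

def pool_curve(K, x0, t):
    """Solve dx/dt = K x by eigendecomposition; return the injected-pool
    curve plus the exponents and amplitudes of its exponential terms."""
    lam, V = np.linalg.eig(K)
    c = np.linalg.solve(V, x0)
    X = (np.exp(np.outer(t, lam)) * c) @ V.T   # x(t) = sum_j c_j e^{lam_j t} v_j
    return np.real(X[:, 0]), np.real(lam), np.real(c * V[0, :])

# hypothetical open two-pool system: 1 <-> 2, irreversible loss from pool 1
k21, k12, k01 = 0.8, 0.3, 0.2
K = np.array([[-(k21 + k01), k12],
              [k21,          -k12]])
t = np.linspace(0.0, 10.0, 50)
x1, lam, amps = pool_curve(K, np.array([1.0, 0.0]), t)
biexp = amps[0] * np.exp(lam[0] * t) + amps[1] * np.exp(lam[1] * t)
```

Many different multi-pool structures share this same biexponential form, which is the identifiability problem the abstract describes.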

2.
Consideration is given to the role of a population of muscle fibers of distributed diameters in the observed washout of a tracer, with particular reference to radioisotopic K and muscle fibers. It is concluded that if washout of tracer from a single fiber is described as first order, then washout of tracer from a population of fibers is apt to appear as first order.
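The conclusion can be illustrated with a short simulation (the diameter distribution and the k ∝ 1/d scaling are assumptions for illustration): summing first-order washout over a realistic spread of fiber diameters yields a curve that a single exponential still fits to within a few per cent.

```python
import numpy as np

rng = np.random.default_rng(0)
# assumed fiber diameters (µm); washout rate taken as k ∝ 1/d,
# scaled so the mean rate is about 0.05 per minute
d = rng.normal(50.0, 7.5, 1000)
d = d[d > 20]
k = 0.05 * 50.0 / d

t = np.linspace(0.0, 40.0, 60)
y = np.exp(-np.outer(t, k)).mean(axis=1)     # population washout curve

# a single-exponential (log-linear) fit describes the population curve closely
slope, intercept = np.polyfit(t, np.log(y), 1)
max_rel_err = np.max(np.abs(np.exp(intercept + slope * t) - y) / y)
```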

3.
4.
Data obtained from tracer studies often consist of serial measurements after administration of radioisotope. Very little work has been published on how the error in the data affects the mathematical analysis. Computer simulation was employed here to produce data with error of different magnitude and form for each of several values of rate constant and amplitude. The data were terminated when the value of the last point was 5% of the value of the first point, and were also in other ways arranged to simulate experimental situations. The sets of simulated data for a two-compartment system were analyzed by the Gaussian iterative technique. With a rate constant ratio of at least four the technique converged for data errors of 5% or less. The calculated error in the rate constants ranged from 2 to 85%, and in the amplitudes from 1 to 50%, for data error of 0.5 to 10%. The lesser rate constant and amplitude had the greater errors. If a wrong assumption was made in the analysis about the variation of data error over the time interval of measurement, then the calculated values of parameter standard deviations were greatly in error. The results can be used to decide what experimental accuracy is needed for a given accuracy of model parameters for a variety of biological problems.
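A sketch of this kind of simulation (hypothetical parameter values; a plain Gauss-Newton update stands in for the Gaussian iterative technique): noisy biexponential data are generated and refit, so the recovered parameters can be compared with the truth.

```python
import numpy as np

def gauss_newton_biexp(t, y, p0, n_iter=50):
    """Gauss-Newton fit of y ≈ A1*exp(-k1*t) + A2*exp(-k2*t); p = (A1, k1, A2, k2)."""
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        A1, k1, A2, k2 = p
        e1, e2 = np.exp(-k1 * t), np.exp(-k2 * t)
        r = y - (A1 * e1 + A2 * e2)                       # residuals
        J = np.column_stack([e1, -A1 * t * e1, e2, -A2 * t * e2])  # model Jacobian
        p = p + np.linalg.lstsq(J, r, rcond=None)[0]
    return p

rng = np.random.default_rng(1)
true = np.array([3.0, 1.0, 1.0, 0.2])           # rate-constant ratio 5 (> 4)
t = np.linspace(0.0, 8.0, 40)                   # last point ≈ 5% of the first
y = true[0] * np.exp(-true[1] * t) + true[2] * np.exp(-true[3] * t)
y = y * (1.0 + 0.02 * rng.standard_normal(t.size))   # 2% proportional error
fit = gauss_newton_biexp(t, y, p0=[2.8, 0.9, 1.1, 0.25])
```

Repeating this over many noise realizations gives the parameter-error distributions the abstract reports.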

5.
Advances in sequencing technologies are allowing genome-wide association studies at an ever-growing scale. The interpretation of these studies requires dealing with statistical and combinatorial challenges, owing to the multi-factorial nature of human diseases and the huge space of genomic markers that are being monitored. Recently, it was proposed that using protein–protein interaction network information could help in tackling these challenges by restricting attention to markers or combinations of markers that map to close proteins in the network. In this review, we survey techniques for integrating genomic variation data with network information to improve our understanding of complex diseases and reveal meaningful associations.

6.
Some factors affecting the usefulness of a linear operator in the analysis of tracer data were evaluated. Application of the operator to a sum of two exponential components resulted in the separation of the rate constants with an accuracy of 10 to 15 per cent if they differed by a factor of at least 2 and the error in the data was about 2 per cent. A factor of 4 was necessary if the error in the data was 6 per cent, and of 6 if the error was 10 per cent. The ratio of amplitudes varied from near unity to equality with the ratio of rate constants. However, if the ratio of amplitudes was greater than the ratio of rate constants the method would not resolve the rate constants. Application of the operator to a sum of three exponential components was also considered.

7.

Background

The understanding of host genetic variation in disease resistance increasingly requires the use of field data to obtain sufficient numbers of phenotypes. We introduce concepts necessary for a genetic interpretation of field disease data, for diseases caused by microparasites such as bacteria or viruses. Our focus is on variance component estimation and we introduce epidemiological concepts to quantitative genetics.

Methodology/Principal Findings

We have derived simple deterministic formulae to predict the impacts of incomplete exposure to infection, or imperfect diagnostic test sensitivity and specificity on heritabilities for disease resistance. We show that these factors all reduce the estimable heritabilities. The impacts of incomplete exposure depend on disease prevalence but are relatively linear with the exposure probability. For prevalences less than 0.5, imperfect diagnostic test sensitivity results in a small underestimation of heritability, whereas imperfect specificity leads to a much greater underestimation, with the impact increasing as prevalence declines. These impacts are reversed for prevalences greater than 0.5. Incomplete data recording in which infected or diseased individuals are not observed, e.g. data recording for too short a period, has impacts analogous to imperfect sensitivity.
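One simple deterministic formula consistent with these findings can be sketched as follows (a textbook-style approximation under misclassification independent of genotype, not necessarily the paper's exact expression):

```python
def apparent_heritability(h2, p, se, sp):
    """Observed-scale heritability of a 0/1 disease trait after misclassification.
    Assumption (illustrative): errors independent of genotype, so the genetic
    variance shrinks by (se + sp - 1)^2 while the phenotypic variance becomes
    p*(1-p*) at the apparent prevalence p* = se*p + (1-sp)*(1-p)."""
    p_obs = se * p + (1.0 - sp) * (1.0 - p)
    return h2 * (se + sp - 1.0) ** 2 * (p * (1.0 - p)) / (p_obs * (1.0 - p_obs))
```

This reproduces the qualitative pattern in the abstract: at prevalence 0.1 an imperfect specificity attenuates heritability far more than an equally imperfect sensitivity, and the ordering reverses at prevalence 0.9.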

Conclusions/Significance

These results help to explain the often low disease resistance heritabilities observed under field conditions. They also demonstrate that incomplete exposure to infection, or suboptimal diagnoses, are not fatal flaws for demonstrating host genetic differences in resistance; they merely reduce the power of datasets. Lastly, they provide a tool for inferring the true extent of genetic variation in disease resistance given knowledge of the disease biology.

8.
9.
Gene frequency distributions observed in large-scale surveys of species of Drosophila are shown to be incompatible with a genetic model involving neutral mutations and genetic drift alone. The data are, however, qualitatively similar to predictions based on an alternative model of natural selection for an optimal level of enzyme activity in addition to drift and mutation. The intensity of selection detected reduces the mean rate of gene substitution to less than one-quarter that expected on the neutral-allele hypothesis.

10.
The conformations of the polypeptide chains of myoglobin [1] and lysozyme [2] have been successfully simulated with the aid of computed Van der Waals contact and energy maps of the theoretical independent peptide unit (IPU) [3–5]. The non-glycyl experimental points plotted on an alanyl IPU are rather scattered over the allowed conformational regions of the map [6], especially in the case of lysozyme. By contrast, well defined clusters of points can be observed when only the amino-acid residues in segments of the helical secondary structure (mainly α and β chains) are plotted. In addition, clusters of points, albeit less well defined, can be observed by plotting the points relative to the experimental conformations of the first non-helical amino-acid residue next to a more or less folded segment of that α-helical type so frequently present in globular proteins (Fig. 1).

11.
Researchers are regularly interested in interpreting the multipartite structure of data entities according to their functional relationships. Data are often heterogeneous, with intricately hidden inner structure. With limited prior knowledge, researchers are likely to confront the problem of transforming this data into knowledge. We develop a new framework, called heat-passing, which exploits intrinsic similarity relationships within noisy and incomplete raw data, and constructs a meaningful map of the data. The proposed framework is able to rank, cluster, and visualize the data all at once. The novelty of this framework is derived from an analogy between the process of data interpretation and that of heat transfer, in which all data points contribute simultaneously and globally to reveal intrinsic similarities between regions of data, meaningful coordinates for embedding the data, and exemplar data points that lie at optimal positions for heat transfer. We demonstrate the effectiveness of the heat-passing framework for robustly partitioning complex networks, analyzing the globin family of proteins and determining conformational states of macromolecules in the presence of high levels of noise. The results indicate that the methodology is able to reveal functionally consistent relationships in a robust fashion with no reference to prior knowledge. The heat-passing framework is very general and has the potential for applications to a broad range of research fields, for example, biological networks, social networks and semantic analysis of documents.
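The heat-transfer analogy can be illustrated with a graph heat kernel on a toy similarity graph (synthetic data and an assumed Gaussian affinity bandwidth; this is a generic sketch, not the heat-passing algorithm itself): heat injected at one point diffuses almost entirely within that point's cluster.

```python
import numpy as np

rng = np.random.default_rng(2)
# two hypothetical clusters of noisy 2-D points
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(5.0, 0.3, (20, 2))])

# Gaussian affinity matrix and graph Laplacian
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / (2.0 * 0.5 ** 2))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W

# heat kernel exp(-t L) via eigendecomposition of the symmetric Laplacian
lam, V = np.linalg.eigh(L)
def heat(t, u0):
    return V @ (np.exp(-t * lam) * (V.T @ u0))

u0 = np.zeros(len(X)); u0[0] = 1.0      # inject unit heat at one point
u = heat(1.0, u0)
frac_in_own_cluster = u[:20].sum() / u.sum()
```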

12.
Classifying multivariate electromyography (EMG) data is an important problem in prosthesis control as well as in neurophysiological studies and diagnosis. With modern high-density EMG sensor technology, it is possible to capture the rich spectrospatial structure of the myoelectric activity. We hypothesize that multi-way machine learning methods can efficiently utilize this structure in classification as well as reveal interesting patterns in it. To this end, we investigate the suitability of existing three-way classification methods to EMG-based hand movement classification in the spectrospatial domain, as well as extend these methods by sparsification and regularization. We propose to use Fourier-domain independent component analysis as preprocessing to improve classification and interpretability of the results. In high-density EMG experiments on hand movements across 10 subjects, three-way classification yielded higher average performance compared with state-of-the-art classification based on temporal features, suggesting that the three-way analysis approach can efficiently utilize detailed spectrospatial information of high-density EMG. Phase and amplitude patterns of features selected by the classifier in finger-movement data were found to be consistent with known physiology. Thus, our approach can accurately resolve hand and finger movements on the basis of detailed spectrospatial information, and at the same time allows for physiological interpretation of the results.
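A toy version of spectrospatial classification (synthetic signals, and a nearest-centroid classifier standing in for the paper's three-way methods; sampling rate, window length, and channel count are assumptions): classes that differ in both spectral peak and active channels are separable from the channel-by-frequency amplitude map.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n_t, n_ch = 1000, 512, 8        # assumed sampling rate, window, channel count

def make_trial(f_active, active_ch):
    """Synthetic EMG-like trial: band-limited activity on a subset of channels."""
    t = np.arange(n_t) / fs
    x = 0.3 * rng.standard_normal((n_ch, n_t))
    x[active_ch] += np.sin(2 * np.pi * f_active * t)
    return x

def spectrospatial(x):
    """Channel-by-frequency amplitude map: a simple spectrospatial feature."""
    return np.abs(np.fft.rfft(x, axis=1))[:, :100]

# two hypothetical movement classes differing in spectral peak and active channels
A = [spectrospatial(make_trial(60.0, [0, 1])) for _ in range(20)]
B = [spectrospatial(make_trial(120.0, [4, 5])) for _ in range(20)]

cA = np.mean(A[:10], axis=0)        # nearest-centroid 'training'
cB = np.mean(B[:10], axis=0)
def classify(f):
    return "A" if np.linalg.norm(f - cA) < np.linalg.norm(f - cB) else "B"

acc = np.mean([classify(f) == "A" for f in A[10:]] +
              [classify(f) == "B" for f in B[10:]])
```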

13.
Tracer studies are analyzed almost universally by multicompartmental models where the state variables are tracer amounts or activities in the different pools. The model parameters are rate constants, defined naturally by expressing fluxes as fractions of the source pools. We consider an alternative state space with tracer enrichments or specific activities as the state variables, with the rate constants redefined by expressing fluxes as fractions of the destination pools. Although the redefinition may seem unphysiological, the commonly computed fractional synthetic rate actually expresses synthetic flux as a fraction of the product mass (destination pool). We show that, for a variety of structures, provided the structure is linear and stationary, the model in the enrichment state space has fewer parameters than that in the activities state space, and is hence better both to study identifiability and to estimate parameters. The superiority of enrichment modeling is shown for structures where activity model unidentifiability is caused by multiple exit pathways; on the other hand, with a single exit pathway but with multiple untraced entry pathways, activity modeling is shown to be superior. With the present-day emphasis on mass isotopes, the tracer in human studies is often of a precursor, labeling most or all entry pathways. It is shown that for these tracer studies, models in the activities state space are always unidentifiable when there are multiple exit pathways, even if the enrichment in every pool is observed; on the other hand, the corresponding models in the enrichment state space have fewer parameters and are more often identifiable. Our results suggest that studies with labeled precursors are modeled best with enrichments.
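The destination-pool convention can be illustrated with the classic precursor-product case (hypothetical numbers): the fractional synthetic rate is a rate constant defined against the product (destination) pool, and in the enrichment state space it is the only parameter needed.

```python
import numpy as np

# With constant precursor enrichment e_prec, product enrichment obeys
#   de_p/dt = FSR * (e_prec - e_p)   =>   e_p(t) = e_prec * (1 - exp(-FSR * t)),
# where FSR (fractional synthetic rate) = synthetic flux / product mass.
fsr, e_prec = 0.04, 0.08            # assumed: 4%/h, precursor at 8 mole-percent excess
t = np.linspace(0.0, 24.0, 25)      # hours
e_p = e_prec * (1.0 - np.exp(-fsr * t))

# a single product-enrichment measurement recovers FSR directly
fsr_est = -np.log(1.0 - e_p[10] / e_prec) / t[10]
```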

14.
15.
Biophysical Journal, 2020, 118(7): 1649–1664
Hydrogen-deuterium exchange combined with mass spectrometry (HDX-MS) is a widely applied biophysical technique that probes the structure and dynamics of biomolecules without the need for site-directed modifications or bio-orthogonal labels. The mechanistic interpretation of HDX data, however, is often qualitative and subjective, owing to a lack of quantitative methods to rigorously translate observed deuteration levels into atomistic structural information. To help address this problem, we have developed a methodology to generate structural ensembles that faithfully reproduce HDX-MS measurements. In this approach, an ensemble of protein conformations is first generated, typically using molecular dynamics simulations. A maximum-entropy bias is then applied post hoc to the resulting ensemble such that averaged peptide-deuteration levels, as predicted by an empirical model, agree with target values within a given level of uncertainty. We evaluate this approach, referred to as HDX ensemble reweighting (HDXer), for artificial target data reflecting the two major conformational states of a binding protein. We demonstrate that the information provided by HDX-MS experiments and by the model of exchange are sufficient to recover correctly weighted structural ensembles from simulations, even when the relevant conformations are rarely observed. Degrading the information content of the target data—e.g., by reducing sequence coverage, by averaging exchange levels over longer peptide segments, or by incorporating different sources of uncertainty—reduces the structural accuracy of the reweighted ensemble but still allows for useful insights into the distinctive structural features reflected by the target data. Finally, we describe a quantitative metric to rank candidate structural ensembles according to their correspondence with target data and illustrate the use of HDXer to describe changes in the conformational ensemble of the membrane protein LeuT. 
In summary, HDXer is designed to facilitate objective structural interpretations of HDX-MS data and to inform experimental approaches and further developments of theoretical exchange models.
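The post hoc maximum-entropy bias can be sketched for a single averaged observable (synthetic per-frame deuteration predictions; HDXer itself handles many peptides and explicit uncertainty levels): the minimally biased reweighting is an exponential tilt of the frame weights, and the bias strength can be solved by bisection.

```python
import numpy as np

rng = np.random.default_rng(4)
# per-frame predicted deuteration for one peptide (hypothetical ensemble)
d_pred = rng.uniform(0.2, 0.9, 500)
target = 0.4                        # 'experimental' average to match

def weights(lmbda):
    """Maximum-entropy (exponential-family) reweighting of the ensemble."""
    w = np.exp(-lmbda * d_pred)
    return w / w.sum()

def avg(lmbda):
    return weights(lmbda) @ d_pred

# the reweighted average is monotone decreasing in lambda: solve by bisection
lo, hi = -50.0, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if avg(mid) > target else (lo, mid)
lam = 0.5 * (lo + hi)
w = weights(lam)
```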

16.
Biomedical research is becoming increasingly interdisciplinary and collaborative in nature. Researchers need to efficiently and effectively collaborate and make decisions by meaningfully assembling, mining and analyzing available large-scale volumes of complex multi-faceted data residing in different sources. In line with research showing that, in spite of recent advances in data mining and computational analysis, humans can easily detect patterns that computer algorithms may have difficulty finding, this paper reports on the practical use of an innovative web-based collaboration support platform in a biomedical research context. Arguing that dealing with data-intensive and cognitively complex settings is not a technical problem alone, the proposed platform adopts a hybrid approach that builds on the synergy between machine and human intelligence to facilitate the underlying sense-making and decision-making processes. User experience shows that the platform enables more informed and quicker decisions by displaying the aggregated information according to users' needs, while also exploiting the associated human intelligence.

17.
18.
Given the importance of protein aggregation in amyloid diseases and in the manufacture of protein pharmaceuticals, there has been increased interest in measuring and modeling the kinetics of protein aggregation. Several groups have analyzed aggregation data quantitatively, typically measuring aggregation kinetics by following the loss of protein monomer over time and invoking a nucleated growth mechanism. Such analysis has led to mechanistic conclusions about the size and nature of the nucleus, the aggregation pathway, and/or the physicochemical properties of aggregation-prone proteins. We have examined some of the difficulties that arise when extracting mechanistic meaning from monomer-loss kinetic data. Using literature data on the aggregation of polyglutamine, a mutant β-clam protein, and protein L, we determined parameter values for 18 different kinetic models. We developed a statistical model discrimination method to analyze protein aggregation data in light of competing mechanisms; a key feature of the method is that it penalizes overparameterization. We show that, for typical monomer-loss kinetic data, multiple models provide equivalent fits, making mechanistic determination impossible. We also define the type and quality of experimental data needed to make more definitive conclusions about the mechanism of aggregation. Specifically, we demonstrate how direct measurement of fibril size provides robust discrimination.
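The idea of penalizing overparameterization can be illustrated with the Akaike information criterion on synthetic monomer-loss-like data (AIC is used here as a generic stand-in; the paper's discrimination method is its own): a richer model always fits at least as well in raw residuals, so a parameter penalty is needed to compare fairly.

```python
import numpy as np

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit with k parameters;
    the 2k term is the penalty on overparameterization."""
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(5)
t = np.linspace(0.0, 10.0, 30)
# monomer-loss-like data actually generated by a single exponential
y = np.exp(-0.3 * t) * (1.0 + 0.03 * rng.standard_normal(t.size))

logy = np.log(y)
c1 = np.polyfit(t, logy, 1)         # simple model: 2 parameters
c2 = np.polyfit(t, logy, 2)         # richer model: 3 parameters
rss1 = np.sum((logy - np.polyval(c1, t)) ** 2)
rss2 = np.sum((logy - np.polyval(c2, t)) ** 2)
aic1, aic2 = aic(rss1, 30, 2), aic(rss2, 30, 3)
```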

19.
20.
Quantitative Legionella PCRs targeting the 16S rRNA gene (specific for the genus Legionella) and the mip gene (specific for the species Legionella pneumophila) were applied to a total of 223 hot water system samples (131 in one laboratory and 92 in another laboratory) and 37 cooling tower samples (all in the same laboratory). The PCR results were compared with those of conventional culture. 16S rRNA gene PCR results were nonquantifiable for 2.8% of cooling tower samples and up to 39.1% of hot water system samples, and this was highly predictive of Legionella CFU counts below 250/liter. PCR cutoff values for identifying hot water system samples containing >10³ CFU/liter legionellae were determined separately in each laboratory. The cutoffs differed widely between the laboratories and had sensitivities from 87.7 to 92.9% and specificities from 77.3 to 96.5%. The best specificity was obtained with mip PCR. PCR cutoffs could not be determined for cooling tower samples, as the results were highly variable and often high for culture-negative samples. Thus, quantitative Legionella PCR appears to be applicable to samples from hot water systems, but the positivity cutoff has to be determined in each laboratory.
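Laboratory-specific cutoff determination can be sketched as follows (synthetic log-signal distributions; Youden's J is used as one possible criterion, not necessarily the study's): scanning candidate cutoffs yields the sensitivity/specificity trade-off described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(6)
# hypothetical paired measurements: log10 PCR signal for culture-positive
# (>10^3 CFU/liter) vs. culture-negative hot water samples
pos = rng.normal(4.0, 0.6, 60)
neg = rng.normal(2.5, 0.6, 140)

def se_sp(cutoff):
    se = float(np.mean(pos >= cutoff))   # sensitivity
    sp = float(np.mean(neg < cutoff))    # specificity
    return se, sp

# pick the laboratory-specific cutoff maximizing Youden's J = Se + Sp - 1
cutoffs = np.linspace(1.0, 6.0, 101)
j = [sum(se_sp(c)) - 1.0 for c in cutoffs]
best = float(cutoffs[int(np.argmax(j))])
se, sp = se_sp(best)
```

Because the signal distributions differ between laboratories, this procedure gives different cutoffs in each, as the study found.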


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号