Similar Documents
20 similar documents found (search time: 15 ms)
1.
A practical calculation procedure to correct the underestimate caused by isotope dilution in 15N uptake experiments is described. The estimate is based on the experimental observation of 15N incorporation into particulate organic matter, but not on direct measurement of isotope enrichment in the substrate. Conventional data sets of 15N uptake and an estimate of the ratio of the flux of regeneration and uptake can provide the information needed to correct for isotope dilution. Application of this method to 15NH4+ uptake data in the literature showed that the underestimate is small in open ocean waters but sometimes substantial in coastal or estuarine waters.
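
A minimal sketch of the kind of correction the abstract describes, not the paper's exact procedure: it assumes (as a labelled simplification) that the substrate pool stays roughly constant and that its 15N enrichment decays exponentially as unlabelled ammonium is regenerated, so the only inputs are the conventionally measured uptake rate, the regeneration:uptake flux ratio, and the fraction of the pool turned over during the incubation.

```python
import numpy as np

def corrected_uptake(rho_measured, regen_to_uptake_ratio, pool_turnover_fraction):
    """Correct a 15N uptake rate for isotope dilution.

    Assumed simplification (not taken from the paper): the substrate pool is
    constant and its 15N atom% excess decays exponentially at a rate set by
    the regeneration flux, so the time-averaged enrichment over the
    incubation is (1 - exp(-k)) / k with k = R * f, where
      R = regeneration flux / uptake flux
      f = fraction of the pool turned over by uptake during the incubation.
    The measured rate is divided by that average enrichment factor.
    """
    k = regen_to_uptake_ratio * pool_turnover_fraction
    if k <= 0:
        return rho_measured                      # no dilution, nothing to correct
    mean_enrichment_factor = (1.0 - np.exp(-k)) / k
    return rho_measured / mean_enrichment_factor

# Open-ocean-like case: slow pool turnover -> small correction
print(corrected_uptake(rho_measured=1.0, regen_to_uptake_ratio=1.0,
                       pool_turnover_fraction=0.05))   # about 1.02
# Estuarine-like case: fast turnover -> substantial correction
print(corrected_uptake(rho_measured=1.0, regen_to_uptake_ratio=1.5,
                       pool_turnover_fraction=1.0))    # about 1.9
```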

2.
3.
Analysis of data from experiments using double labeling   (Total citations: 1; self-citations: 0; citations by others: 1)
Frequently, as a result of experiments in which two isotopes are used, one is left with a sequence of samples, the ratio of labeling in each sample, and the problem of analyzing the ratios. Suppose that the experiments are designed so that one expects uniform labeling except for one or two special groups of samples. The problem, then, is to find these groups. Because of the variability in the count rate from sample to sample, the variance of the ratios differs from sample to sample, making statistical analysis difficult. Furthermore, there is significant serial correlation in the sample disintegrations per minute for each of the isotopes. We have found that the serial correlation in the labeling ratio is small and of questionable significance in controls but becomes significant when there is a subsequence of samples in which the labeling ratio differs from that in the remainder of the gel. We examine the analysis of variance as a test for significant deviations in the labeling ratio and suggest a method for plotting deviations of labeling ratio from the average background labeling ratio. Finally, we develop a method of estimating the mean labeling ratio from the regression of disintegrations per minute of one isotope on those of the other isotope. This provides another way of plotting deviations in labeling ratio in terms of the residuals around the line of regression.
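
A minimal sketch of the regression idea in the last part of the abstract, on made-up data: regress the dpm of one isotope on the other across gel slices, take the slope as an estimate of the mean labeling ratio, and inspect the residuals to flag slices whose labeling ratio deviates from background.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: dpm of isotope A and isotope B for 40 gel slices,
# with slices 20-24 carrying extra label in isotope A.
dpm_b = rng.uniform(500, 5000, size=40)
dpm_a = 0.8 * dpm_b + rng.normal(0, 60, size=40)
dpm_a[20:25] += 0.5 * dpm_b[20:25]

# Least-squares regression of A on B; the slope estimates the mean labeling ratio.
slope, intercept = np.polyfit(dpm_b, dpm_a, deg=1)
residuals = dpm_a - (slope * dpm_b + intercept)

print(f"estimated mean labeling ratio (slope): {slope:.2f}")

# Flag slices whose residual lies more than ~2 standard deviations from the fit.
threshold = 2 * residuals.std(ddof=2)
flagged = np.flatnonzero(np.abs(residuals) > threshold)
print("slices with deviant labeling ratio:", flagged)
```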

4.
5.
6.
H. W. Deng, J. Li & J. L. Li. Genetical Research, 1999, 73(2): 147-164
Characterizing deleterious genomic mutations is important. Most of the few current estimates come from the mutation-accumulation (M-A) approach, which has been extremely time- and labour-consuming. There is a resurgent interest in implementing this approach. However, its estimation properties under different experimental designs are poorly understood. By simulations we investigate these issues in detail. We found that many of the previous M-A experiments could have been more efficiently implemented with much less time and expense while still achieving the same estimation accuracy. If more than 100 lines are employed in M-A and if each line is replicated at least 10 times during each assay, an experiment of 10 M-A generations with two assays (at the beginning and at the end of M-A) may achieve at least the same estimation quality as a typical M-A experiment. The number of replicates per M-A line necessary for each assay largely depends on the magnitude of environmental variance. While 10 replicates are reasonable for assaying most fitness traits, many more are needed for viability, which has an exceptionally large environmental variance. The investigation is mainly carried out using Bateman-Mukai's method of moments for estimation. Estimation using Keightley's maximum likelihood is also investigated and discussed. These results should not only be useful for planning efficient M-A experiments, but also may help empiricists in deciding to adopt the M-A approach with manageable labour, time and resources.
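
For orientation, a sketch of the textbook Bateman-Mukai method of moments the abstract refers to, applied to a two-assay design (start and end of M-A); design- and ploidy-specific constants used in particular studies are omitted, and the data below are invented.

```python
import numpy as np

def bateman_mukai(line_means_start, line_means_end, generations):
    """Method-of-moments (Bateman-Mukai) bounds from two assays.

    Textbook form, without ploidy-specific constants:
      per-generation change in mean:      dM = (M_start - M_end) / t
      per-generation change in variance:  dV = (V_end - V_start) / t
      U_min >= dM**2 / dV   (lower bound on genomic mutation rate)
      s_max <= dV / dM      (upper bound on mean effect per mutation)
    """
    m0, m1 = np.mean(line_means_start), np.mean(line_means_end)
    v0, v1 = np.var(line_means_start, ddof=1), np.var(line_means_end, ddof=1)
    dM = (m0 - m1) / generations
    dV = (v1 - v0) / generations
    return dM**2 / dV, dV / dM

# Illustrative assay data: 100 M-A lines, line means at generation 0 and 10.
rng = np.random.default_rng(1)
start = rng.normal(1.00, 0.02, size=100)
end = rng.normal(0.98, 0.05, size=100)
u_min, s_max = bateman_mukai(start, end, generations=10)
print(f"U >= {u_min:.3f} per genome per generation, E(s) <= {s_max:.3f}")
```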

7.
Mixture modelling of gene expression data from microarray experiments   (Total citations: 5; self-citations: 0; citations by others: 5)
MOTIVATION: Hierarchical clustering is one of the major analytical tools for gene expression data from microarray experiments. A major problem in the interpretation of the output from these procedures is assessing the reliability of the clustering results. We address this issue by developing a mixture model-based approach for the analysis of microarray data. Within this framework, we present novel algorithms for clustering genes and samples. One of the byproducts of our method is a probabilistic measure for the number of true clusters in the data. RESULTS: The proposed methods are illustrated by application to microarray datasets from two cancer studies; one in which malignant melanoma is profiled (Bittner et al., Nature, 406, 536-540, 2000), and the other in which prostate cancer is profiled (Dhanasekaran et al., 2001, submitted).
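
This is not the authors' algorithm, only a generic illustration of the mixture-model idea with scikit-learn: fit Gaussian mixtures with different numbers of components to simulated expression profiles and use BIC (a stand-in for the paper's probabilistic measure) to pick the number of clusters.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Illustrative expression matrix: 200 genes x 10 samples, three gene clusters.
X = np.vstack([
    rng.normal(loc, 0.5, size=(n, 10))
    for loc, n in [(-2, 60), (0, 80), (2, 60)]
])

# Fit mixtures with 1-6 components and score each by BIC.
bics = []
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, covariance_type="diag", random_state=0)
    gm.fit(X)
    bics.append(gm.bic(X))

best_k = int(np.argmin(bics)) + 1
print("BIC-selected number of gene clusters:", best_k)

labels = GaussianMixture(n_components=best_k, covariance_type="diag",
                         random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```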

8.
metaXCMS is a software program for the analysis of liquid chromatography/mass spectrometry-based untargeted metabolomic data. It is designed to identify the differences between metabolic profiles across multiple sample groups (e.g., 'healthy' versus 'active disease' versus 'inactive disease'). Although performing pairwise comparisons alone can provide physiologically relevant data, these experiments often result in hundreds of differences, and comparison with additional biologically meaningful sample groups can allow for substantial data reduction. By performing second-order (meta-) analysis, metaXCMS facilitates the prioritization of interesting metabolite features from large untargeted metabolomic data sets before the rate-limiting step of structural identification. Here we provide a detailed step-by-step protocol for going from raw mass spectrometry data to metaXCMS results, visualized as Venn diagrams and exported Microsoft Excel spreadsheets. There is no upper limit to the number of sample groups or individual samples that can be compared with the software, and data from most commercial mass spectrometers are supported. The speed of the analysis depends on computational resources and data volume, but will generally be less than 1 d for most users. metaXCMS is freely available at http://metlin.scripps.edu/metaxcms/.  相似文献   
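
metaXCMS's own API is not reproduced here; the sketch below only illustrates the second-order idea in plain Python with invented feature IDs: take the feature lists from several pairwise comparisons and keep their intersection, which is what the Venn-diagram output summarizes.

```python
# Illustrative feature IDs (m/z @ retention time) from three pairwise
# comparisons against the same control group; real lists come from XCMS output.
healthy_vs_active = {"301.2@65", "150.1@32", "522.4@88", "410.3@75"}
healthy_vs_inactive = {"301.2@65", "410.3@75", "277.8@41"}
healthy_vs_drugged = {"301.2@65", "410.3@75", "618.5@102"}

comparisons = [healthy_vs_active, healthy_vs_inactive, healthy_vs_drugged]

# Second-order (meta-)analysis: features shared by every pairwise comparison
# are the ones worth prioritizing for structural identification.
shared = set.intersection(*comparisons)
print("features shared across all comparisons:", sorted(shared))

# Features unique to a single comparison are de-prioritized.
only_active = healthy_vs_active - healthy_vs_inactive - healthy_vs_drugged
print("features unique to the active-disease comparison:", sorted(only_active))
```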

9.
A model for the analysis of growth data from designed experiments   (Total citations: 1; self-citations: 0; citations by others: 1)
A model for growth data from designed experiments is presented which extends the stochastic differential equation of Sandland and McGilchrist (1979, Biometrics 35, 255-272). Residual maximum likelihood (REML) is used to estimate the parameters of the model. The model is easily extended to incomplete data and is shown to overcome some of the practical difficulties encountered with the profile model. The procedure is applied to data from experiments on pigs and sheep.
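
The paper extends a specific stochastic differential equation, which is not reproduced here; as a loose stand-in, the sketch below fits a generic random-coefficient growth curve by REML with statsmodels on invented pig weights, sharing the ingredients the abstract mentions (repeated measurements per animal, REML estimation, tolerance of incomplete series).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Illustrative growth data: 12 pigs weighed on 6 occasions, some records missing.
rows = []
for pig in range(12):
    slope = rng.normal(5.0, 0.6)            # animal-specific growth rate
    for week in range(6):
        if rng.random() < 0.1:              # incomplete data are allowed
            continue
        rows.append({"pig": pig, "week": week,
                     "weight": 20 + slope * week + rng.normal(0, 1.0)})
df = pd.DataFrame(rows)

# Random intercept and slope per animal, estimated by REML.
model = smf.mixedlm("weight ~ week", data=df, groups="pig", re_formula="~week")
fit = model.fit(reml=True)
print(fit.summary())
```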

10.
H. Crépeau, J. Koziol, N. Reid & Y. S. Yuh. Biometrics, 1985, 41(2): 505-514
This paper analyses two sets of data that consist of repeated measurements with missing data. The missing observations always occur at the end of the series of repeated measurements. The score test for multivariate normal data is used to compare treatment groups; if the original data are not multivariate normal they are replaced by expected normal scores.
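
The score test itself is not reproduced here; the sketch only shows the expected-normal-scores substitution the abstract mentions, using Blom's rankit approximation (an assumption — the paper does not state which approximation it used) followed by an ordinary group comparison on the scores.

```python
import numpy as np
from scipy import stats

def expected_normal_scores(x):
    """Replace observations by approximate expected normal order statistics
    (Blom's approximation; the original paper may use exact expectations)."""
    x = np.asarray(x, dtype=float)
    ranks = stats.rankdata(x)                 # average ranks handle ties
    return stats.norm.ppf((ranks - 0.375) / (len(x) + 0.25))

# Illustrative skewed measurements from two treatment groups.
rng = np.random.default_rng(3)
group_a = rng.lognormal(0.0, 0.6, size=15)
group_b = rng.lognormal(0.4, 0.6, size=15)

scores = expected_normal_scores(np.concatenate([group_a, group_b]))
score_a, score_b = scores[:15], scores[15:]
print(stats.ttest_ind(score_a, score_b))      # compare groups on the normal scores
```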

11.
J. K. Haseman & L. L. Kupper. Biometrics, 1979, 35(1): 281-293
In certain toxicological experiments with laboratory animals, littermate data are frequently encountered. It is generally recognized that one characteristic of this type of data is the "litter effect", i.e., the tendency for animals from the same litter to respond more alike than animals from different litters. In this paper attention is restricted to dichotomous response variables that frequently arise in toxicological studies, such as the occurrence of fetal death or a particular malformation. Various techniques for estimating the underlying probability of response are discussed. A number of generalized models that have recently been proposed to take the litter effect into account are briefly reviewed and compared to the simpler binomial and Poisson models. Various procedures for assessing the significance of treatment-control differences are presented and their relative merits discussed. Finally, future research needs in this area are outlined.
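
A minimal sketch of the contrast the abstract reviews, on invented litter data: fit a plain binomial and a beta-binomial by maximum likelihood and compare their log-likelihoods, the beta-binomial's extra dispersion absorbing the litter effect. The parameterization and data are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, betaln

# Illustrative litter data: number responding and litter size, per litter.
deaths = np.array([0, 1, 0, 5, 2, 0, 7, 1, 0, 3])
sizes = np.array([8, 9, 7, 10, 9, 8, 11, 9, 8, 10])

def nll_binomial(p):
    p = p[0]
    return -np.sum(gammaln(sizes + 1) - gammaln(deaths + 1) - gammaln(sizes - deaths + 1)
                   + deaths * np.log(p) + (sizes - deaths) * np.log1p(-p))

def nll_betabinomial(params):
    a, b = np.exp(params)                     # keep both parameters positive
    return -np.sum(gammaln(sizes + 1) - gammaln(deaths + 1) - gammaln(sizes - deaths + 1)
                   + betaln(deaths + a, sizes - deaths + b) - betaln(a, b))

fit_bin = minimize(nll_binomial, x0=[0.2], bounds=[(1e-6, 1 - 1e-6)])
fit_bb = minimize(nll_betabinomial, x0=[0.0, 1.0])

print("binomial      log-likelihood:", -fit_bin.fun)
print("beta-binomial log-likelihood:", -fit_bb.fun)   # higher when litters over-disperse
a, b = np.exp(fit_bb.x)
print("estimated response probability (beta-binomial):", a / (a + b))
```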

12.
We present an introduction to, and examples of, Cox proportional hazards regression in the context of animal lethality studies of potential radioprotective agents. This established method is seldom used to analyze survival data collected in such studies, but is appropriate in many instances. Presenting a hypothetical radiation study that examines the efficacy of a potential radioprotectant both in the absence and presence of a potential modifier, we detail how to implement and interpret results from a Cox proportional hazards regression analysis used to analyze the survival data, and we provide relevant SAS® code. Cox proportional hazards regression analysis of survival data from lethal radiation experiments (1) considers the whole distribution of survival times rather than simply the commonly used proportions of animals that survived, (2) provides a unified analysis when multiple factors are present, and (3) can increase statistical power by combining information across different levels of a factor. Cox proportional hazards regression should be considered as a potential statistical method in the toolbox of radiation researchers.
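
The paper supplies SAS code; as a stand-in, here is a minimal Cox proportional hazards fit in Python with the lifelines package on a made-up radioprotectant design (drug x modifier), illustrating the unified multi-factor analysis the abstract describes. Effect sizes and censoring scheme are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)

# Illustrative 30-day lethality study: 2x2 design (radioprotectant x modifier).
n = 160
drug = rng.integers(0, 2, n)
modifier = rng.integers(0, 2, n)
hazard = 0.12 * np.exp(-0.8 * drug - 0.3 * modifier)        # assumed effect sizes
time = rng.exponential(1.0 / hazard)
observed = (time <= 30).astype(int)                         # deaths within follow-up
df = pd.DataFrame({"time": np.minimum(time, 30),            # censor survivors at day 30
                   "event": observed,
                   "drug": drug,
                   "modifier": modifier})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()        # hazard ratios for drug and modifier, with confidence intervals
```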

13.
The options available for processing quantitative data from isotope coded affinity tag (ICAT) experiments have mostly been confined to software specific to the instrument of acquisition. However, recent developments with data format conversion have subsequently increased such processing opportunities. In the present study, data sets from ICAT experiments, analysed with liquid chromatography/tandem mass spectrometry (MS/MS) using an Applied Biosystems QSTAR Pulsar quadrupole-TOF mass spectrometer, were processed in triplicate using separate mass spectrometry software packages. The programs Pro ICAT, Spectrum Mill and SEQUEST with XPRESS were employed. Attention was paid to the extent of common identification and agreement of quantitative results, with additional interest in the flexibility and productivity of these programs. The comparisons were made with data from the analysis of a specifically prepared test mixture, nine proteins at a range of relative concentration ratios from 0.1 to 10 (light to heavy labelled forms), as a known control, and data selected from an ICAT study involving the measurement of cytokine induced protein expression in human lymphoblasts, as an applied example. Dissimilarities were detected in peptide identification that reflected how the associated scoring parameters favoured information from the MS/MS data sets. Accordingly, there were differences in the numbers of peptide and protein identifications, although from these it was apparent that both confirmatory and complementary information was present. In the quantitative results from the three programs, no statistically significant differences were observed.

14.
15.
One technique for the characterization of receptor subtypes involves measuring the inhibition of the binding of a radioligand which is not subtype selective by agonists or antagonists which are subtype selective. Although such data are routinely calculated using computer programs, it is often useful to have a graphical representation of the data. Until now, a "modified Scatchard" or "Hofstee" plot of the form P = -(P/I)(IC50) + 1 (where P is percentage inhibition and I is the inhibitor concentration) has been used. We describe an alternate plot of the form B = -(B × I)(1/IC50) + B0 (where B is the concentration of bound radioligand and B0 is the concentration of radioligand bound in the absence of the inhibitor). This method has the important advantage that the data need not be calculated as percentage inhibition, which eliminates the error involved in the experimentally determined B0 value being included in the other values.
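
A small numeric illustration of the alternate plot, on simulated binding data with a known IC50: regressing B against B × I gives a slope of -1/IC50 and an intercept of B0, exactly as the rearrangement in the abstract implies. Concentrations and units below are invented.

```python
import numpy as np

IC50_true, B0_true = 50.0, 1000.0                  # illustrative values
I = np.array([0, 5, 10, 25, 50, 100, 250, 500, 1000.0])   # inhibitor concentrations

rng = np.random.default_rng(5)
# Simulated bound radioligand: B = B0 * IC50 / (IC50 + I), plus noise.
B = B0_true * IC50_true / (IC50_true + I) + rng.normal(0, 10, size=I.size)

# Alternate plot: B versus B*I has slope -1/IC50 and intercept B0.
slope, intercept = np.polyfit(B * I, B, deg=1)
print(f"estimated IC50: {-1.0 / slope:.1f}")
print(f"estimated B0:   {intercept:.0f}")
```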

16.
A potential limitation of data from microarray experiments exists when improper control samples are used. In cancer research, comparison of tumour expression profiles to those from normal samples is challenging due to tissue heterogeneity (mixed cell populations). A specific example exists in a published colon cancer dataset, in which tissue heterogeneity was reported among the normal samples. In this paper, we show how to overcome or avoid the problem of using normal samples that do not derive from the same tissue of origin as the tumour. We advocate an exploratory unsupervised bootstrap analysis that can reveal unexpected and undesired, but strongly supported, clusters of samples that reflect tissue differences instead of tumour versus normal differences. All of the algorithms used in the analysis, including the maximum difference subset algorithm, unsupervised bootstrap analysis, pooled variance t-test for finding differentially expressed genes and the jackknife to reduce false positives, are incorporated into our online Gene Expression Data Analyzer (http://bioinformatics.upmc.edu/GE2/GEDA.html).
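
A generic sketch of the unsupervised bootstrap idea (not the GEDA implementation): resample genes with replacement, re-cluster the samples each time, and count how often pairs of samples land in the same cluster; a strongly supported tissue-driven grouping shows up as high co-clustering frequencies. Data and cluster count are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(6)

# Illustrative data: 2000 genes x 20 samples; samples 0-9 and 10-19 differ
# in the first 200 genes (e.g., two tissues of origin, not tumour vs normal).
X = rng.normal(0, 1, size=(2000, 20))
X[:200, :10] -= 1.5
X[:200, 10:] += 1.5

n_boot, k = 200, 2
co_cluster = np.zeros((20, 20))
for _ in range(n_boot):
    genes = rng.integers(0, X.shape[0], size=X.shape[0])     # resample genes
    Z = linkage(pdist(X[genes].T, metric="correlation"), method="average")
    labels = fcluster(Z, t=k, criterion="maxclust")
    co_cluster += labels[:, None] == labels[None, :]
co_cluster /= n_boot

print(np.round(co_cluster[:3, :3], 2))    # within-group pairs: close to 1.0
print(np.round(co_cluster[:3, -3:], 2))   # between-group pairs: close to 0.0
```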

17.
The identification of metabolic regulation is a major concern in metabolic engineering. Metabolic regulation phenomena depend on intracellular compounds such as enzymes, metabolites and cofactors. A complete understanding of metabolic regulation requires quantitative information about these compounds under in vivo conditions. This quantitative knowledge in combination with the known network of metabolic pathways allows the construction of mathematical models that describe the dynamic changes in metabolite concentrations over time. Rapid sampling combined with pulse experiments is a useful tool for the identification of metabolic regulation owing to the transient data such experiments provide. Enzymatic tests in combination with ESI-LC-MS (Electrospray Ionization Liquid Chromatographic Tandem Mass Spectrometry) and HPLC measurements have been used to identify up to 30 metabolites and nucleotides from rapid sampling experiments. A metabolic modeling tool (MMT) that is built on a relational database was developed specifically for analysis of rapid sampling experiments. The tool allows complex pathway models to be constructed from information stored in the relational database. Parameter fitting and simulation algorithms for the resulting system of Ordinary Differential Equations (ODEs) are part of MMT. Additionally, explicit sensitivity functions are calculated. The integration of all necessary algorithms in one tool allows fast model analysis and comparison. Complex models have been developed to describe the central metabolic pathways of Escherichia coli during a glucose pulse experiment.
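
The actual MMT models are far larger and are not reproduced here; the sketch below only shows the general pattern under stated assumptions: a toy two-pool ODE model of a glucose pulse (assumed Michaelis-Menten kinetics), solved with scipy and fitted to invented rapid-sampling time points by least squares.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def toy_model(t, y, vmax_upt, vmax_out, km):
    """Toy two-pool model: extracellular glucose -> intracellular G6P -> drain.
    Assumed illustration only, not an E. coli central-metabolism model."""
    glc, g6p = y
    v_upt = vmax_upt * glc / (km + glc)
    v_out = vmax_out * g6p / (km + g6p)
    return [-v_upt, v_upt - v_out]

# Illustrative rapid-sampling data after a glucose pulse (t in seconds, mM).
t_obs = np.array([0, 2, 5, 10, 20, 40, 60.0])
glc_obs = np.array([2.0, 1.7, 1.3, 0.8, 0.35, 0.08, 0.02])
g6p_obs = np.array([0.2, 0.45, 0.7, 0.9, 0.85, 0.55, 0.35])

def residuals(params):
    sol = solve_ivp(toy_model, (0, 60), y0=[2.0, 0.2], args=tuple(params),
                    t_eval=t_obs, rtol=1e-8)
    return np.concatenate([sol.y[0] - glc_obs, sol.y[1] - g6p_obs])

fit = least_squares(residuals, x0=[0.1, 0.05, 0.5], bounds=(0, np.inf))
print("fitted vmax_uptake, vmax_outflow, Km:", np.round(fit.x, 3))
```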

18.
Q. Zheng. Genetics, 2005, 171(2): 861-864
This note discusses a minor mathematical error and a problematic mathematical assumption in Luria and Delbrück's (1943) classic article on fluctuation analysis. In addition to suggesting remedial measures, the note provides information on the latest development of techniques for estimating mutation rates using data from fluctuation experiments.
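
As simple background to the note, here is the classical P0 method of mutation-rate estimation from fluctuation data (usable only when some cultures contain no mutants); this is a textbook method, not one of the improved techniques the note surveys, and the counts below are invented.

```python
import numpy as np

# Illustrative fluctuation-test data: mutant counts in 20 parallel cultures.
mutant_counts = np.array([0, 0, 1, 0, 3, 0, 0, 12, 0, 1,
                          0, 0, 2, 0, 0, 45, 0, 1, 0, 0])
cells_per_culture = 2.0e8          # final cell count N_t per culture

# P0 method: P(no mutants) = exp(-m), so m = -ln(p0),
# and the mutation rate is approximately m / N_t.
p0 = np.mean(mutant_counts == 0)
m = -np.log(p0)
print(f"expected mutations per culture m = {m:.2f}")
print(f"mutation rate = {m / cells_per_culture:.2e} per cell per division")
```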

19.
20.
Modern 'omics'-technologies result in huge amounts of data about life processes. For analysis and data mining purposes, these data have to be considered in the context of the underlying biological networks. This work presents an approach for integrating data from biological experiments into metabolic networks by mapping the data onto network elements and visualising the data-enriched networks automatically. This methodology is implemented in DBE, an information system that supports the analysis and visualisation of experimental data in the context of metabolic networks. It consists of five parts: (1) the DBE-Database for consistent data storage, (2) the Excel-Importer application for the data import, (3) the DBE-Website as the interface for the system, (4) the DBE-Pictures application for the up- and download of binary (e.g., image) files, and (5) DBE-Gravisto, a network analysis and graph visualisation system. The usability of this approach is demonstrated in two examples.
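
This is not the DBE system itself, only a small networkx sketch of the underlying idea: attach measured values from an experiment to the matching nodes of a metabolic network so the data can be inspected (or styled for drawing) in network context. Network, metabolite names and values are invented.

```python
import networkx as nx

# Toy metabolic network (nodes = metabolites, edges = reactions).
net = nx.DiGraph()
net.add_edges_from([("glucose", "G6P"), ("G6P", "F6P"),
                    ("F6P", "FBP"), ("FBP", "pyruvate")])

# Illustrative experimental measurements (e.g., fold changes) keyed by metabolite.
measurements = {"glucose": 1.0, "G6P": 2.4, "FBP": 0.6}

# Map the data onto the network; unmeasured nodes keep a None placeholder.
nx.set_node_attributes(net, None, "fold_change")
nx.set_node_attributes(net, measurements, "fold_change")

for node, data in net.nodes(data=True):
    print(f"{node:10s} fold change: {data['fold_change']}")
```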
