Similar Documents (20 results)
Summary — A statistical approach to the interpretation of data from gene assignment with somatic cell hybrids is presented. The observed data are analysed under a variety of hypotheses, whose fit is compared by means of the likelihood obtained under each hypothesis. Two of these hypotheses address fundamental questions: is a gene responsible for the enzyme observation, and if so, is that gene located on a specific chromosome, or could it change its position and be sometimes on chromosome j and, in another hybrid line, on chromosome k? The other hypotheses concern the assignment of the gene to just one of the chromosomes. To improve on the traditional data analysis approach, we considered additional information: the uncertainties and possible errors of the laboratory methods, in all our calculations, and the length of the donor chromosomes, in connection with one specific hypothesis. This method allows us to account for the reliability of the investigation methods and the nature of the hybrid lines involved. Data can be evaluated at different error probabilities within a realistic range in order to compare and discuss results.
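The hypothesis comparison described above can be sketched numerically. In this minimal illustration, all data and the error probability are hypothetical, and the likelihood model is a simple concordance/discordance one rather than the authors' full formulation: each chromosome hypothesis is scored by how well chromosome retention predicts the enzyme observation across hybrid lines.

```python
import math

# Hypothetical hybrid-line data: whether the enzyme was observed in
# each line, and which donor chromosomes each line retains.
enzyme = [1, 1, 0, 1, 0]       # enzyme observed per hybrid line
chromosomes = {                # chromosome retention per line (1 = retained)
    "chr1": [1, 1, 0, 1, 0],
    "chr2": [1, 0, 1, 1, 0],
}

def log_likelihood(enzyme, chrom, eps):
    """Log-likelihood that the gene sits on `chrom`, allowing an error
    probability `eps` for each discordant enzyme/chromosome observation."""
    ll = 0.0
    for e, c in zip(enzyme, chrom):
        p = (1 - eps) if e == c else eps
        ll += math.log(p)
    return ll

eps = 0.05  # assumed laboratory error probability
scores = {name: log_likelihood(enzyme, pres, eps)
          for name, pres in chromosomes.items()}
best = max(scores, key=scores.get)
print(best)  # chr1 is fully concordant, so it scores highest
```

Evaluating the same data at several values of `eps` is the abstract's point about comparing results over a realistic range of error probabilities.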

6.
A new method for quantifying the transmitted information and channel capacity of high-dimensional data, based on cluster formation, is described. The method's ability to handle high-dimensional data allows for a complete measurement of information transmitted by neuronal data. It is computationally efficient in terms of both processing time and memory storage. Application of the method to the responses of a V1 neuron shows that more information was transmitted about the pattern of stimuli than about their color.
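Once high-dimensional responses have been reduced to cluster labels, transmitted information can be estimated from the joint counts of stimuli and clusters. A plug-in sketch of that final step (toy data; not the authors' exact estimator, and the clustering itself is assumed done):

```python
import math
from collections import Counter

def mutual_information(stimuli, clusters):
    """Plug-in estimate of I(S;C) in bits from paired labels."""
    n = len(stimuli)
    joint = Counter(zip(stimuli, clusters))
    ps = Counter(stimuli)
    pc = Counter(clusters)
    mi = 0.0
    for (s, c), nsc in joint.items():
        p_sc = nsc / n
        mi += p_sc * math.log2(p_sc / ((ps[s] / n) * (pc[c] / n)))
    return mi

# Perfectly informative clustering: each stimulus maps to its own cluster,
# giving 1 bit for two equiprobable stimuli.
stim = ["A", "A", "B", "B"]
clus = [0, 0, 1, 1]
print(mutual_information(stim, clus))  # 1.0
```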

Biotech unit operations are often characterized by a large number of inputs (operational parameters) and outputs (performance parameters), along with complex correlations among them. A typical biotech process starts with the vial of the cell bank, ends with the final product, and has anywhere from 15 to 30 such unit operations in series. Besides the operational parameters, raw material attributes can also impact process performance and product quality, as well as interact with each other. Multivariate data analysis (MVDA) offers an effective approach to gather process understanding from such complex datasets. A review of the literature suggests that the use of MVDA is rapidly increasing, fuelled by the gradual acceptance of quality by design (QbD) and process analytical technology (PAT) among regulators and the biotech industry. Implementation of QbD and PAT requires enhanced process and product understanding. In this article, we first discuss the most critical issues that a practitioner needs to be aware of while performing MVDA of bioprocessing data. Next, we present a step-by-step procedure for performing such analysis. Industrial case studies are used to elucidate the various underlying concepts. With the increasing usage of MVDA, we hope that this article will be a useful resource for present and future practitioners of MVDA. © 2014 American Institute of Chemical Engineers Biotechnol. Prog., 30:967–973, 2014
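A minimal illustration of MVDA on bioprocess data: PCA via SVD on a small, entirely hypothetical batch table (rows are batches, columns are parameters), standing in for the full workflow the article describes.

```python
import numpy as np

# Hypothetical bioprocess dataset: rows are batches, columns are
# operational/performance parameters (e.g. pH, titre, viability).
X = np.array([
    [7.0, 2.1, 95.0],
    [7.2, 2.4, 94.0],
    [6.9, 1.9, 96.0],
    [7.1, 2.2, 95.5],
    [6.8, 1.8, 96.5],
])

# Standard MVDA preprocessing: mean-centre and unit-variance scale,
# then extract principal components via SVD.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = U * s                     # batch scores per component
explained = s**2 / np.sum(s**2)    # fraction of variance per component
print(explained)
```

Plotting the first two columns of `scores` gives the usual batch-overview scores plot used to spot outlying batches.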

Microarrays measure values that are approximately proportional to the numbers of copies of different mRNA molecules in samples. Due to technical difficulties, the constant of proportionality between the measured intensities and the numbers of mRNA copies per cell is unknown and may vary for different arrays. Usually, the data are normalized (i.e., array-wise multiplied by appropriate factors) in order to compensate for this effect and to enable informative comparisons between different experiments. Centralization is a new two-step method for the computation of such normalization factors that is both biologically better motivated and more robust than standard approaches. First, for each pair of arrays the quotient of the constants of proportionality is estimated. Second, from the resulting matrix of pairwise quotients an optimally consistent scaling of the samples is computed.
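The two-step idea can be sketched as follows. On synthetic data, the median of per-gene intensity ratios stands in for the paper's pairwise quotient estimator (an assumption), and the consistent scaling is obtained by least squares in log space:

```python
import numpy as np

def centralization_factors(arrays):
    """Two-step centralization sketch: (1) estimate the pairwise
    proportionality quotient between every pair of arrays (here via the
    median of intensity ratios, a robust choice), (2) derive a consistent
    per-array scaling from the quotient matrix by least squares in log space."""
    n = len(arrays)
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            L[i, j] = np.log(np.median(arrays[i] / arrays[j]))
    logf = L.mean(axis=1)               # least-squares consistent log-factors
    return np.exp(logf - logf.mean())   # normalise to geometric mean 1

rng = np.random.default_rng(0)
base = rng.lognormal(size=200)          # one underlying expression profile
arrays = [base * 1.0, base * 2.0, base * 0.5]  # same sample, different scales
f = centralization_factors(arrays)
print(f / f[0])  # recovered relative scales: [1.0, 2.0, 0.5]
```

Dividing each array by its factor removes the unknown per-array proportionality constant before any cross-array comparison.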

13.
Multivariate analysis methods such as principal-component analysis (PCA) and partial-least-squares discriminant analysis (PLS-DA) have been applied to peptidomics data from clinical urine samples subjected to LC/MS analysis. We show that these methods can extract information from a complex set of clinical data. The aim of this work is to use that information as a first step in the further search for clinical biomarkers. It is possible to identify peptide-biomarker fingerprints related to disease diagnosis and progression. We also review clinical proteomics and pharmacogenomics data analyzed with the same multivariate approach.
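The core ingredient of PLS-DA, a single discriminant direction given by the covariance of the features with the class label, can be sketched on toy intensity data (all values synthetic; real PLS-DA adds further components, deflation, and cross-validation):

```python
import numpy as np

# Toy stand-in for LC/MS peptide intensities: two groups of samples
# (e.g. disease vs control), peptides in columns.
rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, size=(20, 5))
disease = rng.normal(0.0, 1.0, size=(20, 5))
disease[:, 2] += 3.0                 # peptide 2 elevated in disease

X = np.vstack([control, disease])
y = np.array([0] * 20 + [1] * 20)

# First PLS weight vector: covariance of centred features with the
# centred class label, normalised to unit length.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)
scores = Xc @ w                      # one latent variable per sample

# The largest absolute weight points at the discriminating peptide.
print(int(np.argmax(np.abs(w))))     # 2
```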

14.
Signal-to-noise-ratio (SNR) thresholds for microarray data analysis were experimentally determined with an oligonucleotide array that contained perfect-match (PM) and mismatch (MM) probes based upon four genes from Shewanella oneidensis MR-1. A new SNR calculation, called the signal-to-both-standard-deviations ratio (SSDR), was developed and evaluated alongside two other methods, the signal-to-standard-deviation ratio (SSR) and the signal-to-background ratio (SBR). At low stringency, the thresholds of the SSR, SBR, and SSDR were 2.5, 1.60, and 0.80 with an oligonucleotide and a PCR amplicon as target templates, and 2.0, 1.60, and 0.70 with genomic DNAs as target templates. Slightly higher thresholds were obtained under high-stringency conditions. The thresholds of the SSR and SSDR decreased with an increase in the complexity of targets (e.g., target types) and the presence of background DNA, and with a decrease in target composition, while the SBR remained unchanged in all situations. The lowest percentage of false positives and false negatives was observed with the SSDR calculation, suggesting that it may be a better SNR calculation for more accurate determination of SNR thresholds. Positive spots identified by SNR thresholds were verified by Student's t test, with consistent results. This study provides general guidance for users to select appropriate SNR thresholds for different samples under different hybridization conditions.
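The three SNR calculations can be sketched as below. The exact formulas are assumptions inferred from the names in the abstract (difference of means over the background SD for the SSR, ratio of means for the SBR, and difference of means over the sum of both SDs for the SSDR), applied against the low-stringency thresholds quoted for genomic DNA targets:

```python
import statistics

def ssr(sig_mean, bg_mean, bg_sd):
    """Signal-to-standard-deviation ratio (assumed form)."""
    return (sig_mean - bg_mean) / bg_sd

def sbr(sig_mean, bg_mean):
    """Signal-to-background ratio."""
    return sig_mean / bg_mean

def ssdr(sig_mean, bg_mean, sig_sd, bg_sd):
    """Signal-to-both-standard-deviations ratio (assumed form:
    difference of means over the sum of both standard deviations)."""
    return (sig_mean - bg_mean) / (sig_sd + bg_sd)

# Hypothetical spot: pixel intensities for a probe spot and its local background.
spot = [520, 540, 510, 530]
background = [100, 110, 95, 105]
sm, bm = statistics.mean(spot), statistics.mean(background)
ss, bs = statistics.stdev(spot), statistics.stdev(background)

# Low-stringency genomic-DNA thresholds from the abstract: SSR 2.0, SBR 1.60, SSDR 0.70.
positive = (ssr(sm, bm, bs) > 2.0
            and sbr(sm, bm) > 1.60
            and ssdr(sm, bm, ss, bs) > 0.70)
print(positive)  # True: this spot clears all three thresholds
```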

15.
A neural network has been used to reduce the dimensionality of multivariate data sets to produce two-dimensional (2D) displays of these sets. The data consisted of physicochemical properties for sets of biologically active molecules calculated by computational chemistry methods. Previous work has demonstrated that these data contain sufficient relevant information to classify the compounds according to their biological activity. The plots produced by the neural network are compared with results from two other techniques for linear and nonlinear dimension reduction, and are shown to give comparable and, in one case, superior results. Advantages of this technique are discussed.
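A minimal linear autoencoder with a two-unit bottleneck illustrates the dimension-reduction idea (synthetic descriptor data, plain gradient descent; the original work used a nonlinear network, so this is only a sketch of the principle):

```python
import numpy as np

# Stand-in for calculated physicochemical descriptors, standardised.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))
X = (X - X.mean(0)) / X.std(0)

d, k = X.shape[1], 2                   # 6 descriptors -> 2D display
W_enc = rng.normal(scale=0.1, size=(d, k))
W_dec = rng.normal(scale=0.1, size=(k, d))
lr = 0.01

def recon_error(X, We, Wd):
    return float(np.mean((X - X @ We @ Wd) ** 2))

before = recon_error(X, W_enc, W_dec)
for _ in range(500):
    Z = X @ W_enc                      # 2D codes (the bottleneck)
    R = Z @ W_dec - X                  # reconstruction residual
    g_dec = Z.T @ R / len(X)           # gradients of the squared error
    g_enc = X.T @ (R @ W_dec.T) / len(X)
    W_enc -= lr * g_enc
    W_dec -= lr * g_dec
after = recon_error(X, W_enc, W_dec)

coords2d = X @ W_enc                   # points for the 2D plot
print(after < before)                  # training reduced the reconstruction error
```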

16.
Increasing attention is being devoted to taking landscape information into account in genetic studies. Among landscape variables, space is often considered as one of the most important. To reveal spatial patterns, a statistical method should be spatially explicit, that is, it should directly take spatial information into account as a component of the adjusted model or of the optimized criterion. In this paper we propose a new spatially explicit multivariate method, spatial principal component analysis (sPCA), to investigate the spatial pattern of genetic variability using allelic frequency data of individuals or populations. This analysis does not require data to meet Hardy-Weinberg expectations or linkage equilibrium to exist between loci. The sPCA yields scores summarizing both the genetic variability and the spatial structure among individuals (or populations). Global structures (patches, clines and intermediates) are disentangled from local ones (strong genetic differences between neighbors) and from random noise. Two statistical tests are proposed to detect the existence of both types of patterns. As an illustration, the results of principal component analysis (PCA) and sPCA are compared using simulated datasets and real georeferenced microsatellite data of Scandinavian brown bear individuals (Ursus arctos). sPCA performed better than PCA to reveal spatial genetic patterns. The proposed methodology is implemented in the adegenet package of the free software R.
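sPCA's optimized criterion combines genetic variance with Moran's I spatial autocorrelation; the Moran's I ingredient can be sketched on its own (toy one-dimensional sampling design and a hypothetical allele-frequency cline):

```python
import numpy as np

def morans_i(x, W):
    """Moran's I spatial autocorrelation of values x under a symmetric
    connection matrix W between sampling sites; positive I indicates
    global structure (e.g. a cline), negative I local structure."""
    z = np.asarray(x, float)
    z = z - z.mean()
    n = len(z)
    num = n * np.sum(W * np.outer(z, z))
    den = W.sum() * np.sum(z**2)
    return num / den

# Four sites on a line, neighbours connected.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)

cline = [0.1, 0.2, 0.3, 0.4]   # smooth allele-frequency cline
print(morans_i(cline, W) > 0)  # True: a cline is positively autocorrelated
```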

Trematode worms have the neoophoran mode of development, in which several specialized vitelline cells surround the zygote. This vitelline cell mass appears just before the zygote passes through the ootype, a thickening of the oviduct where the eggshell is formed. The great amount of vitelline material blurs the visualization of embryo development in whole eggs viewed by brightfield microscopy. The eggshell is difficult to cut into thin or ultrathin sections and acts as a barrier to fixation and to infiltration with embedding media; it is also brightly fluorescent when analyzed by fluorescence microscopy. To overcome these technical disadvantages, a simple staining protocol widely used in adult helminth morphological analysis was adapted for the study of the embryonic development of two different trematode species. The effects of potassium hydroxide as a bleach and ethylene glycol as a mounting medium were also evaluated. Confocal microscopy allowed virtual sectioning of whole-mounted eggs and made possible detailed analysis of the internal morphology of different embryonic stages. This method could contribute to the study of helminth egg embryology.

19.

Background  

A useful application of flow cytometry is the investigation of cell receptor-ligand interactions. However, such analyses are often compromised by difficulties in interpreting changes in ligand binding when receptor expression is not constant. Problems commonly arise when cell treatments alter receptor expression levels, or when cell lines expressing a transfected receptor with variable expression are being compared. To overcome this limitation we have developed a Microsoft Excel spreadsheet that automatically simplifies flow cytometric data and performs statistical tests in order to provide a clearer graphical representation of results.
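The core normalization idea behind such a spreadsheet can be sketched in a few lines (all MFI values hypothetical): express ligand binding per unit of receptor expression, so samples with different expression levels become directly comparable.

```python
# Hypothetical per-sample median fluorescence intensities (MFI) for the
# receptor stain and the ligand-binding stain.
samples = {
    "untreated":    {"receptor": 1000.0, "ligand": 500.0},
    "treated":      {"receptor": 2000.0, "ligand": 1000.0},
    "transfectant": {"receptor": 4000.0, "ligand": 1000.0},
}

def normalised_binding(mfi):
    """Ligand MFI per unit of receptor MFI."""
    return mfi["ligand"] / mfi["receptor"]

ratios = {name: normalised_binding(m) for name, m in samples.items()}
print(ratios)
# untreated and treated bind identically per receptor (0.5), despite the
# treated cells expressing twice the receptor; the transfectant binds
# less efficiently (0.25).
```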

20.
Introduction – The use of the average analytical signal for the construction of standard addition method (SAM) curves by the least squares method (LSM) is widespread. It would be advantageous, however, to find a way to avoid intermediary averages, which are known to cause significant increases in standard deviations (SD). Objective – To develop a protocol that uses all gathered data to create SAM curves by LSM, and to use Excel® for the estimation of y = mx + b and R², rather than using LSM equations, for the SD of m, x and b. Methodology – The level of lead (II) in the bark (cork) of Quercus suber Linnaeus was determined using differential pulse anodic stripping voltammetry (DPASV). Three current readings were taken for each of the four standard additions. These signals were combined for adjustment by LSM. The results were compared with those obtained after averaging the current for each addition, and the expression of uncertainty in the measurements was determined. Results – The new method shows an expanded uncertainty of ±0.3321 μg/g (nearly 1.42%). The difference between the results obtained by the new and the old method is 0.01 μg/g (23.41 and 23.40 μg/g). The limit of detection changed approximately from 4.8 to 4 μg/g, and the relative SD approximately from 9% to 6%. Conclusion – The absence of intermediary averages in the curves improved the determination of lead (II) in cork by DPASV. Estimation of the SD with LSM equations alone produced results that were significantly worse. The changes are large enough to transform an apparently internally non-validated procedure (repeatability for precision) into an internally validated procedure. Copyright © 2009 John Wiley & Sons, Ltd.
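Fitting all replicate currents directly by least squares, without intermediate averaging, can be sketched as follows (synthetic currents, not the paper's measurements; the sample content is read off the x-intercept of the standard-addition line):

```python
# Standard-addition data: added concentration (µg/g) and three replicate
# current readings per addition, fitted as 12 individual points.
additions = [0.0, 5.0, 10.0, 15.0]
currents = {
    0.0:  [2.01, 1.99, 2.00],
    5.0:  [3.02, 2.98, 3.00],
    10.0: [4.01, 3.99, 4.00],
    15.0: [5.00, 5.02, 4.98],
}

x = [c for c in additions for _ in range(3)]
y = [i for c in additions for i in currents[c]]

# Ordinary least squares for y = m*x + b over all replicates.
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, y))
m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - m * sx) / n

# Standard addition: sample content = magnitude of the x-intercept.
conc = b / m
print(round(conc, 2))  # 10.0 µg/g for this synthetic data
```

Fitting all 12 points, rather than 4 averaged points, is what lets the SD of the slope and intercept reflect the replicate scatter directly.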
