Similar articles (20 results)
1.
A rule-based automated method is presented for modeling the structures of the seven transmembrane helices of G-protein-coupled receptors. The structures are generated by using a simulated annealing Monte Carlo procedure that positions and orients rigid helices to satisfy structural restraints. The restraints are derived from analysis of experimental information from biophysical studies on native and mutant proteins, from analysis of the sequences of related proteins, and from theoretical considerations of protein structure. Calculations are presented for two systems. The method was validated through calculations using appropriate experimental information for bacteriorhodopsin, which produced a model structure with a root mean square (rms) deviation of 1.87 Å from the structure determined by electron microscopy. Calculations are also presented using experimental and theoretical information available for bovine rhodopsin to assign the helices to a projection density map and to produce a model of bovine rhodopsin that can be used as a template for modeling other G-protein-coupled receptors.
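The positioning step described above can be illustrated with a toy Metropolis simulated-annealing loop. The 2D point "helices", the restraint list, the move size and the cooling schedule below are all invented for illustration; the paper positions and orients full rigid helices in 3D.

```python
import math
import random

def restraint_energy(positions, restraints):
    """Sum of squared violations of pairwise distance restraints."""
    e = 0.0
    for i, j, target in restraints:
        d = math.dist(positions[i], positions[j])
        e += (d - target) ** 2
    return e

def anneal(positions, restraints, t0=5.0, cooling=0.9995, steps=20000, seed=0):
    """Metropolis simulated annealing over rigid-body positions (toy 2D version)."""
    rng = random.Random(seed)
    pos = [list(p) for p in positions]
    e, t = restraint_energy(pos, restraints), t0
    for _ in range(steps):
        i = rng.randrange(len(pos))
        old = pos[i][:]
        pos[i] = [c + rng.gauss(0.0, 0.3) for c in old]    # random trial move
        e_new = restraint_energy(pos, restraints)
        if e_new < e or rng.random() < math.exp((e - e_new) / t):
            e = e_new                                      # accept the move
        else:
            pos[i] = old                                   # reject, restore
        t *= cooling                                       # cool the system
    return pos, e
```

With a satisfiable restraint set (e.g. a 3-4-5 triangle of target distances), the residual energy drops close to zero as the temperature falls.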

2.
Banerjee-Basu S, Baxevanis AD. Genome Biology 2002, 3(8):interactions1004.1-1004.4
Functional annotation is used to catalog information of value in experimental design and analysis, but annotations in public databases are often incorrect. Here, one such case is discussed.

3.
Analysis of longitudinal metabolomics data
MOTIVATION: Metabolomics datasets are generally large and complex. Using principal component analysis (PCA), a simplified view of the variation in the data is obtained. The PCA model can be interpreted and the processes underlying the variation in the data can be analysed. In metabolomics, often a priori information is present about the data. Various forms of this information can be used in an unsupervised data analysis with weighted PCA (WPCA). A WPCA model will give a view on the data that is different from the view obtained using PCA, and it will add to the interpretation of the information in a metabolomics dataset. RESULTS: A method is presented to translate spectra of repeated measurements into weights describing the experimental error. These weights are used in the data analysis with WPCA. The WPCA model will give a view on the data where the non-uniform experimental error is accounted for. Therefore, the WPCA model will focus more on the natural variation in the data. AVAILABILITY: M-files for MATLAB for the algorithm used in this research are available at http://www-its.chem.uva.nl/research/pac/Software/pcaw.zip.
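One simple WPCA variant scales each variable by an error-derived weight before an ordinary SVD. The sketch below (weights taken as 1/standard deviation across replicate spectra) illustrates that general idea only; it is not the authors' exact algorithm.

```python
import numpy as np

def weights_from_replicates(replicates):
    """Per-variable weights from the spread of repeated measurements.

    replicates: (n_repeats, n_vars) array of spectra of the same sample.
    Returns 1/std per variable, so noisy variables are down-weighted.
    """
    s = replicates.std(axis=0, ddof=1)
    return 1.0 / np.maximum(s, 1e-12)      # guard against zero spread

def wpca(X, w, n_components=2):
    """PCA on column-weighted, mean-centered data (a simple WPCA variant)."""
    Xc = (X - X.mean(axis=0)) * w          # scale each variable by its weight
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    loadings = Vt[:n_components].T
    return scores, loadings
```

Down-weighting high-error variables is what lets the model "focus more on the natural variation in the data."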

4.
BACKGROUND: Gene-set enrichment analyses (GEA or GSEA) are commonly used for biological characterization of an experimental gene-set. This is done by finding known functional categories, such as pathways or Gene Ontology terms, that are over-represented in the experimental set; the assessment is based on an overlap statistic. Rich biological information in terms of gene interaction networks is now widely available, but this topological information is not used by GEA, so there is a need for methods that exploit this type of information in high-throughput data analysis. RESULTS: We developed a method of network enrichment analysis (NEA) that extends the overlap statistic in GEA to network links between genes in the experimental set and those in the functional categories. For the crucial step in statistical inference, we developed a fast network randomization algorithm in order to obtain the distribution of any network statistic under the null hypothesis of no association between an experimental gene-set and a functional category. We illustrate the NEA method using gene and protein expression data from a lung cancer study. CONCLUSIONS: The results indicate that the NEA method is more powerful than traditional GEA, primarily because the relationships between gene sets were captured more strongly by network connectivity than by simple overlaps.
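The core NEA statistic, links between an experimental set and a functional category compared against randomized networks, can be sketched as follows. Note the null model here is plain node relabeling, which is simpler than the degree-preserving network randomization the abstract describes; all gene names are invented.

```python
import random

def crosslinks(edges, set_a, set_b):
    """Number of network edges with one end in set_a and the other in set_b."""
    a, b = set(set_a), set(set_b)
    return sum(1 for u, v in edges
               if (u in a and v in b) or (u in b and v in a))

def nea_pvalue(edges, gene_set, category, n_perm=2000, seed=0):
    """Permutation p-value for enrichment of links between the two sets."""
    rng = random.Random(seed)
    nodes = sorted({n for e in edges for n in e})
    obs = crosslinks(edges, gene_set, category)
    hits = 0
    for _ in range(n_perm):
        # null model: relabel all nodes at random
        perm = dict(zip(nodes, rng.sample(nodes, len(nodes))))
        p_edges = [(perm[u], perm[v]) for u, v in edges]
        if crosslinks(p_edges, gene_set, category) >= obs:
            hits += 1
    return obs, (hits + 1) / (n_perm + 1)   # add-one smoothed p-value
```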

5.
Surface plasmon resonance (SPR)-biosensor techniques directly provide essential information for the study and characterization of small molecule-nucleic acid interactions, and the use of these methods is steadily increasing. The method is label-free and monitors the interactions in real time. Both dynamic and steady-state information can be obtained for a wide range of reaction rates and binding affinities. This article presents the basics of the SPR technique, provides suggestions for experimental design, and illustrates data processing and analysis of results. A specific example of the interaction of a well-known minor groove binding agent, netropsin, with DNA is evaluated by both kinetic and steady-state SPR methods. Three different experiments are used to illustrate different approaches and analysis methods. The three sets of results show the reproducibility of the binding constants and agreement from both steady-state and kinetic analyses. These experiments also show that reliable kinetic information can be obtained, even with difficult systems, if the experimental conditions are optimized to minimize mass transport effects. Limitations of the biosensor-SPR technique are also discussed to provide an awareness of the care needed to conduct a successful experiment.
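For a 1:1 interaction, the association-phase response and a steady-state KD fit follow standard Langmuir kinetics. The grid-search fitter below is a deliberately crude sketch, and the rate constants, concentration series and grid ranges are invented, not taken from the netropsin data.

```python
import math

def assoc_response(t, conc, ka, kd, rmax):
    """1:1 Langmuir association phase of an SPR sensorgram."""
    req = rmax * conc / (conc + kd / ka)   # steady-state response at this conc
    kobs = ka * conc + kd                  # observed association rate constant
    return req * (1.0 - math.exp(-kobs * t))

def steady_state_kd(concs, reqs):
    """Estimate KD and Rmax from steady-state responses by a coarse
    least-squares grid search; a sketch, not a production fitter."""
    rtop = max(reqs)
    best = (float("inf"), 0.0, 0.0)
    for i in range(1, 501):                      # scan KD over 1-500 nM
        kd_try = i * 1e-9
        for j in range(0, 201):                  # scan Rmax from rtop to 2*rtop
            rmax_try = rtop * (1.0 + j / 200.0)
            sse = sum((r - rmax_try * c / (c + kd_try)) ** 2
                      for c, r in zip(concs, reqs))
            if sse < best[0]:
                best = (sse, kd_try, rmax_try)
    return best[1], best[2]
```

Agreement between a KD from this steady-state fit and kd/ka from a kinetic fit is the consistency check the abstract describes.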

6.
Systems biotechnology for strain improvement
Various high-throughput experimental techniques are routinely used for generating large amounts of omics data. In parallel, in silico modelling and simulation approaches are being developed for quantitatively analyzing cellular metabolism at the systems level. Thus, informative high-throughput analysis and predictive computational modelling or simulation can be combined to generate new knowledge through iterative modification of an in silico model and experimental design. On the basis of such global cellular information we can design cells that have improved metabolic properties for industrial applications. This article highlights the recent developments in these systems approaches, which we call systems biotechnology, and discusses future prospects.

7.
Is multicomponent spectra analysis coming to a deadlock?
We have emphasized the information which can be expected from complex absorption spectra analysis within the framework of linear algebra. We have considered the different methods now in use and examined the conditions under which this information can be effectively extracted from spectra analysis. When all component spectra are known, the most accurate method to extract the expected information is to use experimental reference spectra and to compute their contributions in the analysed complex form. When all component spectra are unknown, it is basically impossible to reach the solution from knowledge of the complex absorbance values alone. Removing this indeterminacy requires additional indications, such as the existence of non-overlapping areas, or additional relations involving the absorbance values, such as the relation between redox potential and absorbance in the case of oxidation-reduction couples. Although our study is presented against a biological background, we have borne in mind the generality of this problem.
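When all component spectra are known, the contribution calculation is an ordinary linear least-squares problem (assuming Beer-Lambert additivity), which can be sketched as:

```python
import numpy as np

def decompose(composite, references):
    """Least-squares contributions of known reference spectra to a
    composite absorption spectrum (Beer-Lambert additivity assumed)."""
    A = np.asarray(references, dtype=float).T      # wavelengths x components
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(composite, dtype=float),
                                 rcond=None)
    return coeffs
```

The unknown-components case discussed next has no such closed-form answer: the same absorbance matrix is consistent with infinitely many reference/contribution factorizations.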

8.
For a Phase III randomized trial that compares survival outcomes between an experimental treatment and a standard therapy, interim monitoring analysis is used to potentially terminate the study early based on efficacy. To preserve the nominal Type I error rate, alpha spending methods and information fractions are used to compute appropriate rejection boundaries in studies with planned interim analyses. For a one-sided trial design applied to a scenario in which the experimental therapy is superior to the standard therapy, interim monitoring should provide the opportunity to stop the trial prior to full follow-up and conclude that the experimental therapy is superior. This paper proposes a method called total control only (TCO) for estimating the information fraction based on the number of events within the standard treatment regimen. Based on theoretical derivations and simulation studies, for a maximum duration superiority design, the TCO method is not influenced by departure from the designed hazard ratio, is sensitive to detecting treatment differences, and preserves the Type I error rate compared to information fraction estimation methods that are based on total observed events. The TCO method is simple to apply, provides unbiased estimates of the information fraction, and does not rely on statistical assumptions that are impossible to verify at the design stage. For these reasons, the TCO method is a good approach when designing a maximum duration superiority trial with planned interim monitoring analyses.
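A TCO-style information fraction is simply observed-over-planned events in the control arm; pairing it with a spending function gives the interim boundary. The spending function below is the usual Lan-DeMets O'Brien-Fleming-type form, shown only to illustrate how an information fraction is consumed; it is not the paper's derivation.

```python
import math

def tco_information_fraction(control_events_observed, control_events_planned):
    """TCO-style information fraction: observed over planned events in the
    control (standard-therapy) arm only."""
    return min(control_events_observed / control_events_planned, 1.0)

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    """Standard normal quantile by bisection (ample precision here)."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def obf_spent_alpha(t, alpha=0.025):
    """Lan-DeMets O'Brien-Fleming-type spending (one-sided):
    alpha*(t) = 2 * (1 - Phi(z_{alpha/2} / sqrt(t)))."""
    z = norm_ppf(1.0 - alpha / 2.0)
    return 2.0 * (1.0 - norm_cdf(z / math.sqrt(t)))
```

Very little alpha is spent at early information fractions, which is what makes early stopping conservative under this family of boundaries.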

9.
It is shown that real-time 2D solid-state NMR can be used to obtain kinetic and structural information about the process of protein aggregation. In addition to the incorporation of kinetic information involving intermediate states, this approach can offer atom-specific resolution for all detectable species. The analysis was carried out using experimental data obtained during aggregation of the 10.4 kDa Crh protein, which has been shown to involve a partially unfolded intermediate state prior to aggregation. Based on a single real-time 2D 13C–13C transition spectrum, kinetic information about the refolding and aggregation step could be extracted. In addition, structural rearrangements associated with refolding are estimated and several different aggregation scenarios were compared to the experimental data.

10.
11.
In fluorescence decay work, distributions of exponential decay lifetimes are anticipated when complex systems are examined. We describe here methods of gaining information on such distributions using the method of moments analysis approach. The information obtained may be as simple as the average and deviation of the lifetime distribution, quantities which we show may be estimated directly from the results of a multiexponential analysis. An approximation to the actual distribution shape may also be obtained using a procedure we call the variable filter analysis (VFA) method without making any assumptions about the shape of the distribution. Tests of VFA using both simulated and experimental data are described. Limitations of this method and of distribution analysis methods in general are discussed. Results of analyses on experimental decays for ethidium intercalated in core particles and in free DNA are reported.
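The simplest outputs described, the average and deviation of the lifetime distribution estimated from a multiexponential fit, reduce to amplitude-weighted moments:

```python
import math

def lifetime_moments(amplitudes, lifetimes):
    """Amplitude-weighted mean and standard deviation of a lifetime
    distribution recovered from a multiexponential fit (a_i, tau_i)."""
    total = sum(amplitudes)
    mean = sum(a * t for a, t in zip(amplitudes, lifetimes)) / total
    var = sum(a * (t - mean) ** 2 for a, t in zip(amplitudes, lifetimes)) / total
    return mean, math.sqrt(var)
```

The full VFA distribution-shape recovery is more involved and is not reproduced here.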

12.
Neoadjuvant endocrine therapy trials for breast cancer are now a widely accepted investigational approach for oncology cooperative group and pharmaceutical company research programs. However, there remains considerable uncertainty regarding the most suitable endpoints for these studies, in part, because short-term clinical, radiological or biomarker responses have not been fully validated as surrogate endpoints that closely relate to long-term breast cancer outcome. This shortcoming must be addressed before neoadjuvant endocrine treatment can be used as a triage strategy designed to identify patients with endocrine therapy “curable” disease. In this summary, information from published studies is used as a basis to critique clinical trial designs and to suggest experimental endpoints for future validation studies. Three aspects of neoadjuvant endocrine therapy designs are considered: the determination of response; the assessment of surgical outcomes; and biomarker endpoint analysis. Data from the letrozole 024 (LET 024) trial that compared letrozole and tamoxifen is used to illustrate a combined endpoint analysis that integrates both clinical and biomarker information. In addition, the concept of a “cell cycle response” is explored as a simple post-treatment endpoint based on Ki67 analysis that might have properties similar to the pathological complete response endpoint used in neoadjuvant chemotherapy trials.

13.
MOTIVATION: Many proposed statistical measures can efficiently compare biological sequences to further infer their structures, functions and evolutionary information. They are related in spirit because all approaches to sequence comparison try to use the information in the k-word distributions, a Markov model or both. Motivated by adding k-word distributions to a Markov model directly, we investigated two novel statistical measures for sequence comparison, called wre.k.r and S2.k.r. RESULTS: The proposed measures were tested by similarity search, evaluation on functionally related regulatory sequences and phylogenetic analysis. This offers a systematic and quantitative experimental assessment of our measures. Moreover, we compared our results with those of alignment-based and alignment-free methods. We grouped our experiments into two sets. The first, performed via receiver operating characteristic (ROC) analysis, aims at assessing the intrinsic ability of our statistical measures to search for similar sequences in a database and to discriminate functionally related regulatory sequences from unrelated sequences. The second aims at assessing how well our statistical measures serve for phylogenetic analysis. The experimental assessment demonstrates that incorporating k-word distributions into a Markov model yields more efficient similarity measures.
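The exact wre.k.r and S2.k.r definitions are not reproduced in the abstract, but the general idea of centring k-word counts by a first-order Markov expectation before comparing sequences can be sketched as follows (edge effects ignored; the cosine comparison is a generic choice, not the paper's statistic):

```python
from collections import Counter
from math import sqrt

def kmer_counts(seq, k):
    """Observed counts of all k-words in seq."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def markov_expected(seq, k):
    """Approximate expected k-word counts under a first-order Markov
    model fitted to seq itself (edge effects ignored)."""
    c1, c2 = kmer_counts(seq, 1), kmer_counts(seq, 2)
    n = len(seq) - k + 1
    exp = {}
    for w in kmer_counts(seq, k):
        p = c2[w[:2]] / n                       # ~= P(w1 w2)
        for i in range(1, k - 1):
            p *= c2[w[i:i + 2]] / c1[w[i]]      # ~= P(w_{i+1} | w_i)
        exp[w] = p * n
    return exp

def centred(seq, k):
    """Observed minus Markov-expected counts per k-word."""
    obs, exp = kmer_counts(seq, k), markov_expected(seq, k)
    return {w: obs[w] - exp[w] for w in obs}

def similarity(seq_a, seq_b, k=3):
    """Cosine similarity of Markov-centred k-word count vectors."""
    va, vb = centred(seq_a, k), centred(seq_b, k)
    dot = sum(va[w] * vb.get(w, 0.0) for w in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```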

14.
The distribution of inequivalent geometries occurring during self-assembly of the major capsid protein in thermodynamic equilibrium is determined based on a master equation approach. These results are implemented to characterize the assembly of SV40 virus and to obtain information on the putative pathways controlling the progressive build-up of the SV40 capsid. The experimental testability of the predictions is assessed and an analysis of the geometries of the assembly intermediates on the dominant pathways is used to identify targets for anti-viral drug design.

15.
16.
Time-resolved single molecule fluorescence measurements may be used to probe the conformational dynamics of biological macromolecules. The best time resolution in such techniques will only be achieved by measuring the arrival times of individual photons at the detector. A general approach to the estimation of molecular parameters based on individual photon arrival times is presented. The amount of information present in a data set is quantified by the Fisher information, thereby providing a guide to deriving the basic equations relating measurement uncertainties and time resolution. Based on these information-theoretical considerations, a data analysis algorithm is presented that details the optimal analysis of single-molecule data. This method natively accounts for and corrects background photons and cross-talk, and can scale to an arbitrary number of channels. By construction, and with corroboration from computer simulations, we show that this algorithm reaches the theoretical limit, extracting the maximal information out of the data. The bias inherent in the algorithm is considered and its implications for experimental design are discussed. The ideas underlying this approach are general and are expected to be applicable to any information-limited measurement.
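For the simplest case, a single-exponential decay, the photon-by-photon maximum-likelihood estimator and its Fisher-information limit are easy to state. This sketch assumes ideal, background-free detection; handling background and cross-talk, as the paper does, requires the full likelihood.

```python
import math
import random

def simulate_delays(tau, n, seed=0):
    """Photon delay times drawn from a single-exponential decay."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / tau) for _ in range(n)]

def mle_lifetime(delays):
    """Maximum-likelihood lifetime: the sample mean of photon delays.
    The Fisher information for n photons is n / tau**2, so the
    Cramer-Rao bound on the standard error is tau / sqrt(n)."""
    tau_hat = sum(delays) / len(delays)
    return tau_hat, tau_hat / math.sqrt(len(delays))
```

The tau/sqrt(n) scaling is the "measurement uncertainty versus time resolution" trade-off the abstract refers to: finer time windows contain fewer photons.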

17.
Shannon’s seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with possibly strong effects on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other, allowing minimally biased measurement of neural coding.
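The Shannon rate itself is the familiar channel-capacity integral over the signal-to-noise spectrum; this sketch computes only that integral on a discrete frequency grid and omits the window-size bias correction that is the paper's contribution.

```python
import numpy as np

def shannon_rate(signal_psd, noise_psd, df):
    """Shannon information rate in bits/s from signal and noise power
    spectra: R = sum over f of log2(1 + SNR(f)) * df."""
    snr = np.asarray(signal_psd, dtype=float) / np.asarray(noise_psd, dtype=float)
    return float(np.sum(np.log2(1.0 + snr)) * df)
```

Because both spectra are estimated from windowed data, the SNR, and hence this rate, inherits the delay bias and random errors the abstract analyzes.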

18.
Optimization of cell culture processes can benefit from the systematic analysis of experimental data and their organization in mathematical models, which can be used to decipher the effect of individual process variables on multiple outputs of interest. Towards this goal, a kinetic model of cytosolic glucose metabolism coupled with a population-level model of Chinese hamster ovary cells was used to analyse metabolic behavior under batch and fed-batch cell culture conditions. The model was parameterized using experimental data for cell growth dynamics, extracellular and intracellular metabolite profiles. The results highlight significant differences between the two culture conditions in terms of metabolic efficiency and motivate the exploration of lactate as a secondary carbon source. Finally, the application of global sensitivity analysis to the model parameters highlights the need for additional experimental information on cell cycle distribution to complement metabolomic analyses with a view to parameterize kinetic models.

19.
Statistical coupling analysis (SCA) is a method for analyzing multiple sequence alignments that was used to identify groups of coevolving residues termed “sectors”. The method applies spectral analysis to a matrix obtained by combining correlation information with sequence conservation. It has been asserted that the protein sectors identified by SCA are functionally significant, with different sectors controlling different biochemical properties of the protein. Here we reconsider the available experimental data and note that it involves almost exclusively proteins with a single sector. We show that in this case sequence conservation is the dominating factor in SCA, and can alone be used to make statistically equivalent functional predictions. Therefore, we suggest shifting the experimental focus to proteins for which SCA identifies several sectors. Correlations in protein alignments, which have been shown to be informative in a number of independent studies, would then be less dominated by sequence conservation.
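A toy version of the conservation-weighted matrix at the heart of SCA can be written by binarizing each alignment column to consensus/non-consensus and weighting its covariance by a conservation term. The binarization, the background frequency q and the weight form below are simplifying assumptions, not the published SCA definition; note how a fully conserved column contributes nothing to the correlation part, which is the dominance effect the abstract discusses.

```python
import numpy as np

def sca_matrix(msa, q=0.4):
    """Conservation-weighted covariance of a binarized alignment
    (a toy single-letter variant of SCA; q is an assumed background
    frequency). msa: list of equal-length sequence strings."""
    X = np.array([list(row) for row in msa])
    n_seq, n_pos = X.shape
    # consensus residue per position
    cons = [max(set(X[:, j]), key=list(X[:, j]).count) for j in range(n_pos)]
    B = (X == np.array(cons)).astype(float)   # 1 where consensus residue
    f = B.mean(axis=0)                        # consensus frequency per position
    eps = 1e-9
    # conservation weight: log-odds of f against the background q
    w = np.abs(np.log((f * (1 - q) + eps) / ((1 - f) * q + eps)))
    C = np.cov(B.T)                           # position-position covariance
    return w[:, None] * C * w[None, :]
```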

20.
Conditional entropy approach for the evaluation of the coupling strength
A method that enables measurement of the degree of coupling between two signals is presented. The method is based on the definition of an uncoupling function calculating, by means of entropy rates, the minimum amount of independent information (i.e. the information carried by one signal which cannot be derived from the other). An estimator of the uncoupling function able to deal with short segments of data (a few hundred samples) is proposed, thus enabling the method to be used for usual experimental recordings. A synchronisation index is derived from the estimate of the uncoupling function by means of a minimisation procedure. It quantifies the maximum amount of information exchanged between the two signals. Simulations in which non-linear coordination schemes are produced and changes in the coupling strength are artificially induced are used to check the ability of the proposed index to measure the degree of synchronisation between signals. The synchronisation analysis is utilised to measure the coupling strength between the beat-to-beat variability of the sympathetic discharge and ventilation in decerebrate artificially ventilated cats and the degree of synchronisation between the beat-to-beat variability of the heart period and ventricular repolarisation interval in normal subjects and myocardial infarction patients. The sympathetic discharge and ventilation are strongly coupled and the coupling strength is not affected by manoeuvres capable of increasing or depressing sympathetic activity. The synchronisation is lost after spinalisation. The synchronisation analysis confirms that the heart period and ventricular repolarisation interval are well coordinated. In normal subjects, the synchronisation index is not modified by experimental conditions inducing changes in the sympathovagal balance. By contrast, it strongly decreases after myocardial infarction, thus detecting and measuring the uncoupling between the heart period and ventricular repolarisation interval. (Received 29 October 1998; in revised form 4 March 1999)
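The identity behind such an uncoupling function, H(X|Y) = H(X,Y) - H(Y), can be illustrated on coarsely binned signals. The estimator below is a naive histogram version with an arbitrary bin count, not the short-data estimator or the minimisation procedure the paper proposes.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (nats) of a sequence of discrete labels."""
    n = len(labels)
    return -sum(c / n * math.log(c / n) for c in Counter(labels).values())

def uncoupling_index(x, y, bins=4):
    """Normalized conditional entropy H(X|Y)/H(X) after coarse amplitude
    binning: ~1 means y carries no information about x, 0 means x is
    fully predictable from y."""
    def binned(s):
        lo, hi = min(s), max(s)
        width = (hi - lo) / bins or 1.0
        return [min(int((v - lo) / width), bins - 1) for v in s]
    bx, by = binned(x), binned(y)
    hx = entropy(bx)
    h_cond = entropy(list(zip(bx, by))) - entropy(by)   # H(X|Y) = H(X,Y) - H(Y)
    return h_cond / hx if hx else 0.0
```

A synchronisation index in the spirit of the abstract would then be 1 minus this quantity.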
