Similar literature

20 similar documents found.
1.
A QSAR analysis using multiple linear regression (MLR) and partial least squares (PLS) methods was conducted on a data set of 47 pyrrolidine analogs acting as DPP IV inhibitors. The QSAR models generated (both MLR and PLS) were robust, with statistically significant s, F, r, r(2) and r(2)(CV) values. The analysis helped to ascertain the role of the shape flexibility index, the ipso-atom E-state index and electrostatic parameters such as the dipole moment in determining the activity of DPP IV inhibitors. In addition to QSAR modeling, Lipinski's rule of five was employed to check the pharmacokinetic profile of the DPP IV inhibitors. None of the compounds violated Lipinski's rule of five, indicating that the DPP IV inhibitors reported herein have sound pharmacokinetic profiles and can be considered potential drug candidates for type II diabetes mellitus.
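The rule-of-five screen mentioned in the abstract can be sketched in a few lines; this is a minimal illustration, and the descriptor values below are hypothetical, not taken from the 47 pyrrolidine analogs in the study.

```python
# Minimal sketch of a Lipinski rule-of-five screen.
# Descriptor values are invented for illustration.

def lipinski_violations(mw, logp, h_donors, h_acceptors):
    """Count violations of Lipinski's rule of five."""
    rules = [mw > 500, logp > 5, h_donors > 5, h_acceptors > 10]
    return sum(rules)

def passes_rule_of_five(mw, logp, h_donors, h_acceptors):
    # The abstract reports zero violations for all 47 inhibitors;
    # here we require zero as well.
    return lipinski_violations(mw, logp, h_donors, h_acceptors) == 0

# Hypothetical descriptor set for one candidate inhibitor
print(passes_rule_of_five(mw=412.5, logp=2.3, h_donors=2, h_acceptors=6))  # True
```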

2.
Goal, Scope and Background To strengthen the evaluative power of LCA, life cycle interpretation should be further developed. A previous contribution (Heijungs & Kleijn 2001) elaborated five examples of concrete methods within the subset of numerical approaches to interpretation: contribution analysis, perturbation analysis, uncertainty analysis, comparative analysis, and discernibility analysis. Developments in software have made it possible to apply these five methods to the much-used Ecoinvent'96 database. Discussion of Methods The numerical approaches implemented in this study include contribution analysis, perturbation analysis, uncertainty analysis, comparative analysis, discernibility analysis and the newly developed key issue analysis. The data come from a very large process database, Ecoinvent'96, containing 1163 processes, 1181 economic flows and 571 environmental flows. Conclusions The results are twofold: they serve as a benchmark for the usefulness and feasibility of these numerical approaches, and they shed light on the question of stability and structure in an often-used large system of interconnected processes. Most of the approaches perform quite well: computation time on a moderate PC is between a few seconds and a few minutes. Only Monte Carlo analyses may require much longer, but even then it appears that most questions can be answered within a few hours. Moreover, analytical expressions for error propagation are much faster than Monte Carlo analyses while giving almost identical results. Although many processes are connected to each other, which could in principle produce a very unstable system with very sensitive coefficients, the overall results show that most results are not extremely uncertain. There are, however, some exceptions to this positive message.
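The contribution analysis named above can be sketched in standard matrix-based LCA terms; the 2x2 technology matrix, intervention matrix and demand vector below are invented toy data, not Ecoinvent'96 values.

```python
import numpy as np

A = np.array([[1.0, -0.2],    # technology matrix: economic flows x processes
              [-0.5, 1.0]])
B = np.array([[0.3, 0.8]])    # intervention matrix: e.g. kg CO2 per process unit
f = np.array([1.0, 0.0])      # functional unit: one unit of the first product

s = np.linalg.solve(A, f)     # scaling vector: how often each process runs
g = B @ s                     # total life-cycle inventory result
contrib = B * s               # per-process contributions to each flow
shares = contrib / g[:, None] # contribution analysis: fractional shares
print(shares.round(3))
```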

3.
The lipid composition of microbial communities can indicate their response to changes in the surrounding environment induced by anthropogenic practices, chemical contamination or climatic conditions. A considerable number of analytical techniques exist for the examination of microbial lipids. This article reviews a selection of methods available for environmental samples as applied to lipid extraction, fractionation, derivatization and quantification. The discussion focuses on the origin of the standard methods, the different modified versions developed for the investigation of microbial lipids, and the advantages and limitations of each. Current modifications to standard methods show a number of improvements for each of the steps associated with analysis. The advantages and disadvantages of lipid analysis compared to other popular techniques are clarified. Accordingly, the preferential use of signature lipid biomarker analysis in current research is considered. It is clear from recent literature that this technique remains relevant, mainly because of the variety of microbial properties that can be determined in a single analysis.

4.
The purpose of this study is to develop a system capable of calculating temporal gait parameters using two low-cost wireless accelerometers and artificial-intelligence-based techniques, as part of a larger research project on human gait analysis. Ten healthy subjects of different ages participated in this study and performed controlled walking tests. Two wireless accelerometers were placed on their ankles. Raw acceleration signals were processed to obtain gait patterns from characteristic peaks related to steps. A Bayesian model was implemented to classify the characteristic peaks as steps or non-steps. The acceleration signals were segmented based on gait events of actual steps, such as heel strike and toe-off. Temporal gait parameters, such as cadence, ambulation time, step time, gait cycle time, stance and swing phase time, and single and double support time, were estimated from the segmented acceleration signals. The gait data sets were divided into two age groups to test the Bayesian models' classification of the characteristic peaks. The mean error in the calculated temporal gait parameters was 4.6%. Bayesian models are useful techniques that can be applied to the classification of gait data from subjects of different ages, with promising results.
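A minimal sketch of how a Bayesian classifier might label acceleration peaks as steps or non-steps; the single amplitude feature, priors and Gaussian parameters below are invented, as the abstract does not specify the model's actual features.

```python
import math

# Hedged sketch: one-feature Gaussian Bayes classifier for peak labeling.
# Training values (priors, means, standard deviations) are invented.

def gaussian_pdf(x, mean, std):
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def classify_peak(amplitude, params):
    # params maps label -> (prior, mean, std); pick the maximum-posterior label
    posteriors = {label: prior * gaussian_pdf(amplitude, mean, std)
                  for label, (prior, mean, std) in params.items()}
    return max(posteriors, key=posteriors.get)

params = {"step": (0.6, 2.5, 0.5), "nonstep": (0.4, 0.8, 0.4)}
print(classify_peak(2.3, params))  # step
print(classify_peak(0.9, params))  # nonstep
```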

5.
It is argued that both the principle of parsimony and the theory of evolution, especially that of natural selection, are essential analytical tools in phylogenetic systematics, whereas the widely used outgroup analysis is completely useless and may even be misleading. In any systematic analysis, two types of patterns of characters and character states must be discriminated, referred to here as completely and incompletely resolved. In the former, all known species in which the characters and their states studied occur are represented, whereas in the latter this is not the case. Depending on its structure, a pattern of characters and their states may be explained either by a unique hypothesis of relationships or by various conflicting, equally most parsimonious ones. The so-called permutation method is introduced, which facilitates finding the conflicting, equally most parsimonious hypotheses of relationships. The utility of the principle of parsimony is limited by the uncertainty as to whether its application in systematics must refer to the minimum number of steps needed to explain a pattern of characters and their states most parsimoniously or to the minimum number of evolutionary events assumed to have caused these steps. Although these numbers may differ, the former is usually preferred for simplicity. Two types of outgroup analysis are shown to exist, termed parsimony analysis based on test samples and the cladistic type of outgroup analysis. Essentially, the former is used for analysing incompletely resolved patterns of characters and their states, the latter for analysing completely resolved ones. Both types are shown to be completely useless for rejecting even one of various conflicting, equally most parsimonious hypotheses of relationships. According to contemporary knowledge, this task can be accomplished only by employing the theory of evolution (including the theory of natural selection).
But even then, many phylogenetic-systematic problems will remain unsolved. In such cases, arbitrary algorithms like those offered by phenetics can at best offer pseudosolutions to open problems. Despite its limitations, phylogenetic systematics is superior to any kind of aphylogenetic systematics (transformed cladistics included) in approaching a (not: the) “general reference system” of organisms.
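The step counts that parsimony minimizes can be illustrated with the classic Fitch algorithm on a fixed tree; the tree topology and character states below are invented for illustration (the paper itself does not present this algorithm).

```python
# Hedged sketch: minimum number of character-state changes ("steps")
# on a fixed binary tree, via the Fitch parsimony algorithm.

def fitch_steps(tree, states):
    """tree: nested 2-tuples of leaf names; states: leaf name -> state.
    Returns (possible state set at this node, steps accumulated)."""
    if isinstance(tree, str):                      # leaf node
        return {states[tree]}, 0
    left, right = tree
    ls, lc = fitch_steps(left, states)
    rs, rc = fitch_steps(right, states)
    inter = ls & rs
    if inter:
        return inter, lc + rc                      # states agree: no extra step
    return ls | rs, lc + rc + 1                    # disagreement: one step here

tree = (("A", "B"), ("C", "D"))
states = {"A": 0, "B": 0, "C": 1, "D": 0}
root_set, steps = fitch_steps(tree, states)
print(steps)  # 1
```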

6.
7.
8.
Genetic diversity plays a key role in germplasm improvement, which is directly correlated with crop production. Various statistical techniques have been used to study diversity among genotypes; among these, multivariate analysis is the one most frequently used to assess the genetic association of genotypes. In the present study, a total of 64 advanced lines, including one check cultivar, were evaluated under the field conditions of the Cereal Crop Research Institute, Pirsabaq Nowshera, Pakistan, during September 2017. Data were recorded for nine parameters. Multivariate analysis divided the 64 genotypes into four groups. The first five PCs, with eigenvalues > 1, contributed 86.95% of the variability amongst genotypes. Characters with maximum values in PC1 were spikelets per spike (SPPS) (0.732), spike length (SPL) (0.722) and biological yield (BY) (0.607); PC2 comprised 100-grain weight (TGW) (0.605), grain yield (GY) (0.482) and days to heading (DH) (0.393); the major contributors to PC3 were BY (0.550) and number of tillers per square meter (NTPS) (0.289); the contributors to PC4 were flag leaf area (FLA) (0.716) and SPL (0.298); and the maximum values for PC5 were SPPS (0.732), SPL (0.722) and BY (0.607). Based on these findings, the best-performing lines can be recommended directly for general cultivation or used in future breeding programs.
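The PC-retention logic used above (keep components with eigenvalues > 1, i.e. the Kaiser criterion) can be sketched on an invented trait matrix; the data below are random, not the 64 wheat lines.

```python
import numpy as np

# Hedged sketch: PCA on an invented genotype-by-trait matrix,
# retaining PCs by the eigenvalue > 1 (Kaiser) rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))                               # 20 genotypes x 5 traits
X[:, 1] = 0.9 * X[:, 0] + rng.normal(scale=0.3, size=20)   # two correlated traits

R = np.corrcoef(X, rowvar=False)                   # trait correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]     # eigenvalues, decreasing

keep = eigvals > 1.0                               # Kaiser criterion
explained = eigvals[keep].sum() / eigvals.sum() * 100
print(f"retained PCs: {keep.sum()}, variance explained: {explained:.1f}%")
```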

9.
Post-normalization checking of microarrays rarely occurs, despite the problems that using unreliable data for inference can cause. This paper considers a number of ways to check microarrays after normalization for a variety of potential problems. Four types of problem with microarray data that these checks can identify are: clerical mistakes, array-wide hybridization problems, problems with normalization, and mishandling problems. Any of these can seriously affect the results of an analysis. The three main techniques used to identify these problems are dimension-reduction techniques, false array plots and correlograms. None of the techniques is computationally very intensive, and all can be carried out in the R statistical package. Once discovered, problems can either be rectified or the affected arrays excluded from the data.
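One such check, flagging arrays via their correlations with the other arrays (in the spirit of a correlogram), can be sketched as follows; the data, the shared-signal construction and the two-standard-deviation cutoff are all invented.

```python
import numpy as np

# Hedged sketch: flag arrays whose median correlation with the others is
# unusually low (a crude stand-in for the paper's diagnostics).
rng = np.random.default_rng(1)
signal = rng.normal(size=200)                       # shared gene-level signal
good = signal + rng.normal(scale=0.5, size=(6, 200))
bad = rng.normal(size=(1, 200))                     # mishandled array: no signal
arrays = np.vstack([good, bad])

C = np.corrcoef(arrays)                             # array-by-array correlations
np.fill_diagonal(C, np.nan)                         # ignore self-correlation
median_corr = np.nanmedian(C, axis=1)
flagged = np.flatnonzero(median_corr < median_corr.mean() - 2 * median_corr.std())
print(flagged)  # the mishandled array (index 6) should stand out
```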

10.
Interest in the analysis of lipids and phospholipids is continuously increasing due to the importance of these molecules in biochemistry (e.g. in the context of biomembranes and lipid second messengers) as well as in industry. Unfortunately, commonly used methods of lipid analysis are often time-consuming and tedious because they include prior separation and/or derivatization steps. With the development of "soft" ionization techniques like electrospray ionization (ESI) and matrix-assisted laser desorption and ionization time-of-flight (MALDI-TOF), mass spectrometry also became applicable to lipid analysis. The aim of this review is to summarize the experience available so far with MALDI-TOF mass spectrometric analysis of lipids. It will be shown that MALDI-TOF MS can be applied to all known lipid classes, and the characteristics of individual lipids will be discussed. Additionally, selected applications in medicine and biology, e.g. mixture analysis, cell and tissue analysis, and the determination of enzyme activities, will be described. Advantages and disadvantages of MALDI-TOF MS in comparison to other established lipid-analysis methods will also be discussed.

11.
Representative and valid cytoplasmic concentrations are essential for ensuring the significance of results in metabolome analysis. One of the most crucial points in this respect is the sampling itself. The metabolism must be stopped rapidly and abruptly, on a timescale much faster than the conversion rates of the investigated metabolites. This can be achieved by applying cold methanol quenching combined with reproducible, fast, and automated sampling. Unfortunately, quenching the metabolism by a sharp temperature shift leads to what is known as the cold shock or cell-leakage effect. In the present work, we applied a microstructure heat exchanger to analyze the cold shock effect using Corynebacterium glutamicum as a model microorganism. Using this apparatus together with a silicon pipe, it was possible to assay the leakage effect on a timescale starting 1 s after cooling the cell suspension. The high turnover rates require not only a rapid quenching technique but also its correct application. Moreover, we showed that when the appropriate methanol-quenching setup is not used, the metabolism is not stopped within the required timescale. By applying robust techniques such as rapid sampling in combination with reproducible sample processing, we ensured fast and reliable metabolic inactivation during all steps.

12.
Purpose To provide a multi-stage model for calculating uncertainty in radiochromic film dosimetry with Monte Carlo techniques. This new approach is applied to single-channel and multichannel algorithms.
Material and methods Two lots of Gafchromic EBT3 film were exposed in two different Varian linacs and read with an EPSON V800 flatbed scanner. Monte Carlo techniques for uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis, such as standard deviations and bias, are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. In addition, another calibration film was read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis was carried out with the four images.
Results The dose estimates of the single-channel and multichannel algorithms show Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. With the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy.
Conclusion A multi-stage model has been presented. With this model and Monte Carlo techniques, the uncertainty of the dose estimates for single-channel and multichannel algorithms is estimated, leading to a complete characterization of the uncertainties in radiochromic film dosimetry.
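Monte-Carlo uncertainty propagation of this kind can be sketched for a single input source; the calibration curve D = a·x + b·x^n and all numeric values below are invented, and the paper's multi-stage model covers many more input sources.

```python
import random, statistics

# Hedged sketch: push reading noise through a hypothetical film
# calibration curve and summarize the resulting dose distribution.
random.seed(42)
a, b, n = 10.0, 35.0, 2.5                 # invented calibration coefficients

def dose(net_od):
    # invented calibration curve D(x) = a*x + b*x**n (Gy vs. net optical density)
    return a * net_od + b * net_od ** n

net_od_mean, net_od_sd = 0.30, 0.01       # measured netOD and its reading noise

samples = [dose(random.gauss(net_od_mean, net_od_sd)) for _ in range(20000)]
mc_mean = statistics.fmean(samples)
mc_sd = statistics.stdev(samples)
bias = mc_mean - dose(net_od_mean)        # bias induced by curve nonlinearity
print(f"dose = {mc_mean:.2f} +/- {mc_sd:.2f} Gy (bias {bias:.4f} Gy)")
```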

13.

Background

The electrocardiogram (ECG) signal provides important information about the heart's electrical activity in medical and diagnostic applications. This signal may be contaminated by different types of noise. One noise type that overlaps considerably with ECG signals in the frequency domain is the electromyogram (EMG). Among the existing approaches for de-noising ECG signals, those based on singular spectrum analysis (SSA) are popular.

Methods

In this paper, we propose an SSA-based method to separate ECG signals from EMG noise. In general, SSA comprises four steps: embedding, singular value decomposition, grouping, and diagonal averaging. Among these, the grouping step contains parameters (indices) that can be adjusted to achieve the desired results. Indeed, grouping is one of the most important steps of SSA, as the ECG and EMG signals are separated in this step. Hence, in the proposed method, a new criterion is presented for selecting the indices in the grouping step so as to separate the ECG from the EMG signal with higher accuracy.
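The four SSA steps listed above can be sketched on a toy signal; here the grouping indices are fixed by hand to the leading pair, whereas the paper's contribution is precisely a criterion for choosing them.

```python
import numpy as np

# Hedged sketch of basic SSA on an invented sinusoid-plus-noise signal.
t = np.arange(200)
clean = np.sin(2 * np.pi * t / 40)                    # slow "ECG-like" component
x = clean + 0.3 * np.random.default_rng(0).normal(size=200)   # add "EMG" noise

L = 50                                                # window length
K = len(x) - L + 1
X = np.column_stack([x[i:i + L] for i in range(K)])   # 1) embedding
U, s, Vt = np.linalg.svd(X, full_matrices=False)      # 2) singular value decomposition

group = [0, 1]                                        # 3) grouping: leading pair
Xg = (U[:, group] * s[group]) @ Vt[group, :]

recon = np.zeros(len(x))                              # 4) diagonal averaging
counts = np.zeros(len(x))
for i in range(L):
    for j in range(K):
        recon[i + j] += Xg[i, j]
        counts[i + j] += 1
recon /= counts

rmse = np.sqrt(np.mean((recon - clean) ** 2))
print(f"reconstruction RMSE vs. clean component: {rmse:.3f}")
```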

Results

The performance of the proposed method is investigated in several experiments. Two subsets of the Physionet MIT-BIH arrhythmia database are used for this purpose.

Conclusion

The experimental results demonstrate the effectiveness of the proposed method in comparison with other SSA-based techniques.

14.
de Bruin  A.  Ibelings  B.W.  Van Donk  E. 《Hydrobiologia》2003,491(1-3):47-63
Molecular techniques have become a valuable tool in phytoplankton studies over the past decades. The appropriate choice of a technique from an increasing array of methods can be rather complex, because different techniques are suitable for different questions or problems in ecology and evolution. Each technique has its particular strengths and weaknesses and is based upon different (theoretical) assumptions. Our aim is to give better insight into the (correct) use of various molecular techniques in phytoplankton research, with special emphasis on the fields of strain identification, differentiation of populations and the establishment of phylogenetic relationships. The basic steps in the development of molecular techniques like allozyme electrophoresis, RFLP, DGGE, SSCP, RAPD, AFLP and microsatellites, and the application of these techniques in phytoplankton research, are discussed. Furthermore, recent developments in molecular biology that have so far found only limited application in phytoplankton studies, such as single-cell PCR, PCR assays combined with molecular probes (heteroduplex mobility assays or DNA arrays), real-time PCR, complete genome sequencing, multi-gene expression studies using microarrays, and single nucleotide polymorphisms (SNPs), are discussed. We emphasise the relevance of fundamental and applied molecular studies on phytoplankton for a wider community of ecologists and evolutionary biologists.

15.
This article briefly introduces the cladogram principles and the component analysis method of modern cladistic systematics, and applies them to a cladistic analysis of the data of Li Chaoluan (1990, Table 2) on 14 Chinese species of Cissus. Thirteen possible most-parsimonious dichotomous cladograms were obtained, each 41 steps long. Comparison of these cladograms with Fig. 2:13 of Li (1990) shows that the median-elimination method is less effective than component analysis, and that it is moreover open to phenetic and/or evolutionary-systematic objections.

16.
17.
Gene co-expression network (GCN) mining identifies gene modules with highly correlated expression profiles across samples/conditions. It enables researchers to discover latent gene/molecule interactions, identify novel gene functions, and extract molecular features from certain disease/condition groups, thus helping to identify disease biomarkers. However, an easy-to-use tool package has been lacking for mining GCN modules that are relatively small, with tightly connected genes (convenient for downstream gene set enrichment analysis), and that may share common members. To address this need, we developed an online GCN mining tool package: TSUNAMI (Tools SUite for Network Analysis and MIning). TSUNAMI incorporates our state-of-the-art lmQCM algorithm to mine GCN modules from both public and user-input data (microarray, RNA-seq, or any other numerical omics data), and then performs downstream gene set enrichment analysis for the identified modules. It has several features and advantages: 1) a user-friendly interface and real-time co-expression network mining through a web server; 2) direct access to and search of the NCBI Gene Expression Omnibus (GEO) and The Cancer Genome Atlas (TCGA) databases, as well as user-input gene expression matrices, for GCN module mining; 3) multiple co-expression analysis tools to choose from, all highly flexible in their parameter selection options; 4) identified GCN modules summarized as eigengenes, making it convenient for users to check their correlation with other clinical traits; 5) integrated downstream Enrichr enrichment analysis and links to other gene set enrichment tools; and 6) visualization of gene loci by Circos plot at any step of the process. The web service is freely accessible at https://biolearns.medicine.iu.edu/. Source code is available at https://github.com/huangzhii/TSUNAMI/.
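A bare-bones sketch of co-expression module mining (correlation thresholding plus connected components); this is a toy stand-in for lmQCM, whose weighted quasi-clique merging is considerably more involved, and the expression data and 0.7 cutoff are invented.

```python
import numpy as np

# Hedged sketch: find tightly correlated gene modules in a toy matrix.
rng = np.random.default_rng(3)
base = rng.normal(size=30)                     # shared expression pattern
expr = np.vstack(
    [base + rng.normal(scale=0.3, size=30) for _ in range(4)]   # module genes 0-3
    + [rng.normal(size=30) for _ in range(4)])                  # background genes 4-7

C = np.abs(np.corrcoef(expr))                  # gene-gene |Pearson correlation|
adj = C > 0.7                                  # invented hard cutoff
np.fill_diagonal(adj, False)

def components(adj):
    """Connected components of the thresholded graph, keeping sizes > 1."""
    n, seen, mods = adj.shape[0], set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(int(u) for u in np.flatnonzero(adj[v]))
        seen |= comp
        mods.append(sorted(comp))
    return [m for m in mods if len(m) > 1]

print(components(adj))  # the four correlated genes should form one module
```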

18.
In problems with missing or latent data, a standard approach is to first impute the unobserved data, then perform all statistical analyses on the completed dataset (the observed data together with the imputed unobserved data) using standard procedures for complete-data inference. Here, we extend this approach to model checking by demonstrating the advantages of using completed-data model diagnostics on imputed completed datasets. The approach is set in the theoretical framework of Bayesian posterior predictive checks (but, as with missing-data imputation, our methods of missing-data model checking can also be interpreted as "predictive inference" in a non-Bayesian context). We consider graphical diagnostics within this framework. Advantages of the completed-data approach include: (1) one can often check model fit in terms of quantities that are of key substantive interest in a natural way, which is not always possible using observed data alone; (2) in problems with missing data, checks may be devised that do not require modeling the missingness or inclusion mechanism, which is useful for the analysis of ignorable but unknown data-collection mechanisms, such as are often assumed in the analysis of sample surveys and observational studies; (3) in many problems with latent data, it is possible to check qualitative features of the model (for example, independence of two variables) that can be naturally formalized with the help of the latent data. We illustrate with several applied examples.
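The completed-data check can be sketched as follows: impute the missing values under the model, compute a discrepancy on each completed dataset, and compare with fully replicated datasets. The normal model with known parameters and the data below are invented.

```python
import random, statistics

# Hedged sketch of a completed-data posterior predictive check.
random.seed(7)
mu, sd = 5.0, 1.0                              # assumed known model parameters
observed = [4.2, 5.1, 6.0, 4.8, 5.5]           # observed part of the data
n_missing = 3                                  # three entries are missing

def discrepancy(data):
    return max(data) - min(data)               # test quantity: the range

completed_T, replicated_T = [], []
for _ in range(5000):
    imputed = [random.gauss(mu, sd) for _ in range(n_missing)]
    completed = observed + imputed             # completed dataset
    replicate = [random.gauss(mu, sd) for _ in range(len(completed))]
    completed_T.append(discrepancy(completed))
    replicated_T.append(discrepancy(replicate))

# posterior predictive p-value: values near 0 or 1 signal misfit
p = statistics.fmean(t_rep >= t_comp
                     for t_comp, t_rep in zip(completed_T, replicated_T))
print(f"posterior predictive p-value: {p:.2f}")
```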

19.
We develop an improved approach to evaluating car-sharing options under uncertain environments by combining the Fuzzy Analytic Hierarchy Process (F-AHP) with the Fuzzy Technique for Order Preference by Similarity to Ideal Solution (F-TOPSIS); the approach consists of three steps. First, we propose a SCUMN (Specific, Comprehensive, Understandable, Measurable, and Neutral) methodology to identify appropriate indicators, obtaining a final list of 24 indicators according to their relevance to car-sharing options. Second, we determine the weight of each indicator with F-AHP and check the consistency of the comparison matrix of the selected indicators. Third, the options are compared using the selected indicators and F-TOPSIS. A case study validates the proposed approach: the 24 indicators are used to evaluate five car-sharing options, which are ranked by their closeness coefficients in decreasing order, and thirty-one sensitivity-analysis experiments are conducted to determine the influence of the indicators on the decision. The experimental results show that the proposed approach can evaluate car-sharing options under uncertainty and vagueness: F-AHP determines the weight of each selected indicator, and F-TOPSIS demonstrates its advantage in comparing potential options.
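The closeness-coefficient ranking in the third step can be sketched with the crisp TOPSIS core (the paper uses the fuzzy variant); the decision matrix, weights and option names below are invented.

```python
import math

# Hedged sketch of crisp TOPSIS: normalize, weight, and rank by closeness
# to the ideal solution. All numbers are invented; all criteria are benefits.
options = ["car-share A", "car-share B", "car-share C"]
X = [[7.0, 9.0, 6.0],     # rows: options, columns: benefit indicators
     [8.0, 7.0, 7.0],
     [6.0, 8.0, 9.0]]
w = [0.5, 0.3, 0.2]

norms = [math.sqrt(sum(row[j] ** 2 for row in X)) for j in range(3)]
V = [[w[j] * X[i][j] / norms[j] for j in range(3)] for i in range(3)]

ideal = [max(col) for col in zip(*V)]          # benefit criteria: larger is better
anti = [min(col) for col in zip(*V)]

def dist(v, ref):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v, ref)))

# closeness coefficient: distance to anti-ideal over total distance
cc = [dist(v, anti) / (dist(v, anti) + dist(v, ideal)) for v in V]
ranking = sorted(zip(options, cc), key=lambda t: -t[1])
print(ranking)
```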

20.

Background  

Despite increasing popularity and improvements in terminal restriction fragment length polymorphism (T-RFLP) and other microbial community fingerprinting techniques, numerous obstacles still hamper the analysis of these datasets. Many steps are required to process raw data into a format ready for analysis and interpretation. These steps can be time-intensive and error-prone, and can introduce unwanted variability into the analysis. Accordingly, we developed T-REX, free online software for the processing and analysis of T-RFLP data.
