Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Stoklosa J, Hwang WH, Wu SH, Huggins R. Biometrics 2011, 67(4):1659-1665
In practice, when analyzing data from a capture-recapture experiment it is tempting to apply modern advanced statistical methods to the observed capture histories. However, unless the analysis takes into account that the data have only been collected from individuals who have been captured at least once, the results may be biased. Without the development of new software packages, methods such as generalized additive models, generalized linear mixed models, and simulation-extrapolation cannot be readily implemented. In contrast, the partial likelihood approach allows the analysis of a capture-recapture experiment to be conducted using commonly available software. Here we examine the efficiency of this approach and apply it to several data sets.
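The partial likelihood idea is easy to state: condition each observed capture history on the fact that the animal was caught at least once, so the analysis never needs the uncaptured part of the population. A minimal sketch under a deliberately simple model (constant capture probability p over T occasions; the paper's occasion- and covariate-specific models are omitted, and all names are illustrative):

```python
import math

def conditional_log_lik(p, histories):
    """Log-likelihood of capture histories, conditioned on >= 1 capture.

    p: constant per-occasion capture probability (toy model M0).
    histories: list of 0/1 tuples, one per *captured* individual.
    """
    ll = 0.0
    for h in histories:
        T, caught = len(h), sum(h)
        # P(history) under independent Bernoulli captures ...
        log_ph = caught * math.log(p) + (T - caught) * math.log(1 - p)
        # ... divided by P(captured at least once) = 1 - (1 - p)^T
        ll += log_ph - math.log(1 - (1 - p) ** T)
    return ll

# crude grid-search MLE over p for three observed histories
histories = [(1, 0, 1), (0, 0, 1), (1, 1, 0)]
p_hat = max((i / 1000 for i in range(1, 1000)),
            key=lambda p: conditional_log_lik(p, histories))
print(round(p_hat, 3))
```

Because the conditional likelihood is an ordinary likelihood of observed data, it can be maximized with any standard optimizer, which is exactly what makes the approach workable in off-the-shelf software.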

2.

Background

Modern analysis of high-dimensional SNP data requires a number of biometrical and statistical methods such as pre-processing, analysis of population structure, association analysis and genotype imputation. The software used for these purposes often relies on specific and mutually incompatible input and output data formats, so extensive data management, including multiple format conversions, is necessary during analyses.

Methods

In order to support fast and efficient management and bio-statistical quality control of high-dimensional SNP data, we developed the publicly available software fcGENE in the object-oriented C++ programming language. This software simplifies and automates the use of different existing analysis packages, especially during genotype imputation workflows and the corresponding analyses.

Results

fcGENE transforms SNP data and imputation results into the different formats required by a large variety of analysis packages such as PLINK, SNPTEST, HAPLOVIEW, EIGENSOFT and GenABEL, and by genotype imputation tools such as MaCH, IMPUTE and BEAGLE. Data management tasks such as merging, splitting, and extracting SNP and pedigree information can be performed. fcGENE also supports a number of bio-statistical quality control processes and quality-based filtering at both the SNP and the sample level. The tool also generates templates of the commands required to run specific software packages, especially those used for genotype imputation. We demonstrate the functionality of fcGENE through example workflows of SNP data analyses and provide a comprehensive manual of commands, options and applications.

Conclusions

We have developed fcGENE, a user-friendly open-source software package that comprehensively supports SNP data management, quality control and analysis workflows. Download statistics and user feedback indicate that the software is widely recognised and extensively applied by the scientific community.
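To give a sense of the kind of conversion fcGENE automates, the sketch below rewrites PLINK-style .ped genotype rows as reference-allele dosages, the representation many imputation tools consume. This is not fcGENE's code or command syntax, only a hypothetical illustration; the column layout (FID, IID, father, mother, sex, phenotype, then two alleles per SNP) follows the standard .ped convention.

```python
def ped_to_dosage(ped_lines, ref_alleles):
    """Convert PLINK .ped rows to 0/1/2 reference-allele dosages.

    ped_lines: iterable of whitespace-separated .ped rows
               (FID IID father mother sex phenotype allele1 allele2 ...).
    ref_alleles: list with the reference allele per SNP.
    Returns {individual_id: [dosage per SNP]}; '0' marks a missing allele.
    """
    dosages = {}
    for line in ped_lines:
        fields = line.split()
        iid, genotypes = fields[1], fields[6:]
        row = []
        for j, ref in enumerate(ref_alleles):
            a1, a2 = genotypes[2 * j], genotypes[2 * j + 1]
            if a1 == "0" or a2 == "0":        # missing genotype
                row.append(None)
            else:
                row.append((a1 == ref) + (a2 == ref))
        dosages[iid] = row
    return dosages

ped = ["FAM1 IND1 0 0 1 1 A A A G", "FAM1 IND2 0 0 2 1 A G G G"]
print(ped_to_dosage(ped, ref_alleles=["A", "A"]))
```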

3.
Rogers M, Graham J, Tonge RP. Proteomics 2003, 3(6):879-886
Protein spot detection is central to the analysis of two-dimensional electrophoresis gel images. There are many commercially available packages, each implementing a protein spot detection algorithm. Despite this, there have been relatively few studies comparing the performance characteristics of the different packages. This is partly because different packages employ different sets of user-adjustable parameters, and partly because the images are complex: to carry out an evaluation, "ground truth" data specifying spot position, shape and intensity must be defined subjectively on selected test images. We address this problem by proposing a method of evaluation using synthetic images with unambiguous interpretation. The characteristics of the spots in the synthetic images are determined from statistical models of the shape, intensity, size, spread and location of real spot data, with the distribution of parameters described by a Gaussian mixture model obtained from training images. The synthetic images allow us to investigate the effects of individual image properties, such as signal-to-noise ratio and degree of spot overlap, by measuring quantifiable outcomes, e.g. accuracy of spot position and false positive and false negative detection rates. We illustrate the approach by carrying out quantitative evaluations of spot detection on a number of widely used analysis packages.
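The core of the synthetic-image idea can be sketched in a few lines: draw spot parameters from a statistical model, render each spot as a two-dimensional Gaussian, add noise, and keep the sampled parameters as ground truth for scoring detectors. The distributions below are simple stand-ins for the fitted mixture model, not the authors' trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_spots(n_spots, size=256, noise_sd=2.0):
    """Render a synthetic 2-DE image: Gaussian spots plus additive noise."""
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    truth = []
    for _ in range(n_spots):
        cx, cy = rng.uniform(10, size - 10, 2)          # location
        amp = rng.lognormal(mean=4.0, sigma=0.5)        # intensity
        sx, sy = rng.uniform(1.5, 4.0, 2)               # spread
        img += amp * np.exp(-((xx - cx) ** 2 / (2 * sx ** 2)
                              + (yy - cy) ** 2 / (2 * sy ** 2)))
        truth.append((cx, cy, amp, sx, sy))
    img += rng.normal(0, noise_sd, img.shape)           # sensor noise
    return img, truth

image, ground_truth = render_spots(50)
```

Because the ground-truth list is known exactly, accuracy of spot position and false positive/negative rates can be computed by matching detected spots against the returned parameters.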

4.
Efficient analysis of protein expression from two-dimensional electrophoresis (2-DE) data relies on automated image processing techniques. The overall success of such research depends critically on the accuracy and reliability of the analysis software. In addition, the software has a profound effect on the interpretation of the results obtained and on the amount of user intervention demanded during the analysis. The choice of analysis software that best meets specific needs is therefore of interest to the research laboratory. In this paper we compare two advanced analysis software packages, PDQuest and Progenesis. Their evaluation is based on quantitative tests at three different levels of standard 2-DE analysis: spot detection, gel matching and spot quantitation. As test materials we use three gel sets previously used in a similar comparison of Z3 and Melanie, and three sets of gels from our own research. We observed that the quality of the test gels critically influences the spot detection and gel matching results. Both packages were sensitive to parameter and filter settings with respect to their tendency to find true positive and false positive spots. Quantitation results were very accurate for both packages.

5.
A great deal of work has been devoted to determining doses from alpha particles emitted by 222Rn and 220Rn progeny. In contrast, the contribution of beta particles to the total dose has been neglected by most authors. The present work studies the detriment to the human lung from beta particles emitted by 222Rn and 220Rn progeny. The dose conversion factor (DCF) was introduced to relate effective dose and exposure to radon progeny; it is defined as the effective dose per unit exposure to inhaled radon or thoron progeny. Doses and DCFs were determined for beta radiation in the sensitive layers of the bronchi (BB) and bronchioles (bb), taking into account inhaled 222Rn and 220Rn progeny deposited in the mucus and cilia layers. The nuclei of columnar secretory and short basal cells were considered the sensitive target layers. For the dose calculation, electron absorbed fractions (AFs) in the sensitive layers of the BB and bb regions were used. Activities in the fast and slow mucus of the BB and bb regions were obtained using the previously developed LUNGDOSE software. The calculated DCFs due to beta radiation were 0.21 mSv/WLM for 222Rn and 0.06 mSv/WLM for 220Rn progeny. In addition, the influence of the Jacobi room parameters on the DCFs was investigated, and it was shown that the DCFs vary with these parameters by up to 50%.
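In compact form, the quantity being reported can be written as follows. This is a sketch with illustrative symbols: the equal apportionment of the thoracic regions and the lung tissue weighting factor w_T = 0.12 follow the usual ICRP 66/60 conventions and are assumptions here, not values taken from the paper.

```latex
\mathrm{DCF} = \frac{E}{P_{\mathrm{WLM}}}, \qquad
E = w_T \left( A_{\mathrm{BB}} H_{\mathrm{BB}} + A_{\mathrm{bb}} H_{\mathrm{bb}} \right), \qquad
H = w_R D \quad (w_R = 1 \text{ for } \beta),
```

where E is the effective dose, P_WLM the cumulative exposure in working level months, D the absorbed dose computed from the electron absorbed fractions in each sensitive layer, and A_BB, A_bb the regional apportionment factors (one third each for BB, bb and AI under ICRP 66).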

6.

Purpose

Life cycle assessment (LCA) software packages have proliferated and evolved as LCA has developed and grown. There is now a multitude of LCA software packages that must be critically evaluated by users. Prior to conducting a comparative LCA study on different concrete materials, it is necessary to examine a variety of software packages for this specific purpose. This paper evaluates five LCA tools in the context of the LCA of seven concrete mix designs: conventional concrete; concrete with fly ash, slag, silica fume or limestone as cement replacement; recycled aggregate concrete; and photocatalytic concrete.

Methods

Three key evaluation criteria required to assess the quality of analysis are adequate flexibility, sophistication and complexity of analysis, and usefulness of outputs. The quality of the life cycle inventory (LCI) data included in each software package is also assessed for reliability, completeness, and relevance to the scope of LCA of concrete products in Canada. A questionnaire is developed for evaluating LCA software packages and is applied to the five LCA tools.

Results and discussion

The result is the selection of a software package for the specific context of LCA of concrete materials in Canada, which will be used to complete a full LCA study. The software package with the highest score is software package C (SP-C), with 44 out of a possible 48 points. Its main advantage is that it gives the user a high level of control over the system being modeled and the calculation methods used.

Conclusions

This comparative study highlights the importance of selecting a software package that is appropriate for a specific research project. The ability to accurately model the chosen functional unit and system boundary is an important selection criterion. This study demonstrates a method that enables a critical and rigorous comparison without excessive duplication of effort.
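The questionnaire method reduces to a criterion score per package; a toy version follows (the criterion names, ratings and 0-4 scale here are hypothetical, not the paper's actual questionnaire):

```python
# Hypothetical questionnaire: each criterion rated 0-4 per package.
criteria = ["flexibility", "analysis depth", "output usefulness",
            "LCI reliability", "LCI completeness", "LCI scope fit"]

ratings = {                      # analyst-assigned scores, one per criterion
    "SP-A": [3, 2, 3, 3, 2, 2],
    "SP-B": [2, 3, 2, 3, 3, 2],
    "SP-C": [4, 4, 3, 4, 4, 3],
}

totals = {pkg: sum(r) for pkg, r in ratings.items()}
best = max(totals, key=totals.get)
print(totals, "-> selected:", best)
```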

7.
8.

Background  

The integration of many aspects of protein/DNA structure analysis is an important requirement for software products in the general area of structural bioinformatics. In fact, few of the software packages available on the internet can be described as successful in this respect. What is still missing is publicly available, web-based software for interactive analysis of the sequence/structure/function of proteins and their complexes with DNA and ligands. Some existing software packages do offer a certain level of integration and analysis of several structure-related parameters, but not to the extent generally demanded by users.

9.
To estimate how sophisticated an empirical scoring function must be to ensure successful docking, scoring and virtual screening, a new scoring function, NScore (naive score), has been developed and tested. NScore is an extremely simple function with the minimum possible number of parameters; nevertheless, it accounts for all the main effects determining ligand–protein interaction. The fundamental difference between NScore and currently used empirical functions is that all its parameters are selected on the basis of general physical considerations, without any adjustment or training against experimental data on ligand–protein interaction. The results of docking and scoring with NScore on independent test sets of proteins and ligands have proved to be as good as those yielded by the ICM, GOLD, and Glide software packages, which use sophisticated empirical scoring functions. With respect to some parameters, the results of docking with NScore are even better than those obtained using other functions. Since no training set is used in the development of NScore, this scoring function is truly versatile in that it does not depend on a specific goal or target. We have performed virtual screening for ten targets and obtained results almost as good as those yielded by Glide and better than those of the GOLD and DOCK software packages. Figure: average percentage of known actives found versus percentage of the ranked database screened for NScore (black), Glide XP (red), Glide SP (green), DOCK (blue), GOLD GoldScore1x (cyan), and GOLD ChemScore1x (magenta); grey lines show results expected by chance.
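A scoring function with "the minimum possible number of parameters" can be surprisingly short. The sketch below is not the published NScore, just an illustration of scoring a pose from general physical considerations (a soft steric clash penalty plus a short-range contact reward) with no fitted parameters:

```python
import math

def naive_score(ligand_atoms, protein_atoms, r_vdw=1.7, r_contact=4.0):
    """Toy physics-based pose score: higher is better.

    r_vdw ~ a generic van der Waals radius in angstroms; r_contact is
    the cutoff for counting a favorable contact. Both are set by
    physical intuition, not by training on binding data.
    """
    score = 0.0
    for la in ligand_atoms:
        for pa in protein_atoms:
            r = math.dist(la, pa)
            if r < 2 * r_vdw:
                score -= (2 * r_vdw - r) ** 2   # steric clash penalty
            elif r < r_contact:
                score += 1.0                    # favorable contact
    return score

print(naive_score([(0.0, 0.0, 0.0)], [(3.6, 0.0, 0.0), (1.0, 0.0, 0.0)]))
```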

10.
Epidemiological studies of the relationship between risk and internal exposure to plutonium clearly rely on the dose estimates used. The International Commission on Radiological Protection (ICRP) is currently reviewing the latest scientific information available on biokinetic models and dosimetry, and it is likely that a number of changes to the existing models will be recommended. The effect of certain changes, particularly to the ICRP model of the respiratory tract, has been investigated for inhaled forms of 239Pu, and the associated uncertainties have also been assessed. Notable effects of possible changes to the respiratory tract model assumptions are (1) a reduction in the absorbed dose to target cells in the airways, if the changes under consideration are made to the slow-clearing fraction, and (2) a doubling of the absorbed dose to the alveolar region for insoluble forms, if evidence of longer retention times is taken into account. An important factor influencing doses for moderately soluble forms of 239Pu is the extent of binding of dissolved plutonium to lung tissues, together with the assumptions regarding the extent of binding in the airways. Uncertainty analyses have been performed with prior distributions chosen for application in epidemiological studies. The resulting distributions for dose per unit intake were lognormal, with geometric standard deviations of 2.3 and 2.6 for nitrates and oxides, respectively. The wide ranges were due largely to the range of experimental data on the solubility of different forms of nitrates and oxides. The medians of these distributions were a factor of three higher than values calculated using the current default ICRP parameters: for nitrates, this was due to the assumption of a bound fraction, and for oxides, mainly to the assumption of slower alveolar clearance. This study highlights areas where more research is needed to reduce biokinetic uncertainties, including more accurate determination of particle transport rates and long-term dissolution for plutonium compounds, a re-evaluation of long-term binding of dissolved plutonium, and further consideration of the modeling of plutonium absorbed into blood from the lungs.
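The reported lognormal uncertainty is straightforward to propagate by sampling. The sketch below uses the abstract's geometric standard deviation for nitrates (2.3) with a purely illustrative median:

```python
import numpy as np

rng = np.random.default_rng(1)

gm, gsd = 1.0, 2.3    # illustrative geometric mean; GSD from the abstract
samples = rng.lognormal(mean=np.log(gm), sigma=np.log(gsd), size=100_000)

lo, hi = np.percentile(samples, [2.5, 97.5])
print(f"95% interval spans a factor of {hi / lo:.1f}")
# analytic check: the interval runs from gm / gsd**1.96 to gm * gsd**1.96
```

A GSD of 2.3 means the central 95% of the dose-per-unit-intake distribution spans roughly a factor of 2.3^(2 x 1.96), about 26, which is why the authors describe the ranges as wide.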

11.

Background

The identification of disease-associated genes using single nucleotide polymorphisms (SNPs) has been increasingly reported. In particular, the Affymetrix Mapping 10 K SNP microarray platform uses one PCR primer to amplify DNA samples and determine the genotype of more than 10,000 SNPs in the human genome. This provides the opportunity for large-scale, rapid and cost-effective genotyping assays for linkage analysis. However, the analysis of such datasets is nontrivial because of the large number of markers, and visualizing linkage scores in the context of genome maps remains poorly automated in current linkage analysis software packages. For example, haplotyping results are commonly presented only in text format.

Results

Here we report the development of a novel software tool, CompareLinkage, for automated formatting of Affymetrix Mapping 10 K genotype data into the "Linkage" format and for subsequent analysis with multi-point linkage programs such as Merlin and Allegro. The new software can visualize the results of all these programs in dChip in the context of genome annotations and cytoband information. In addition, we implemented a variant of the Lander-Green algorithm in the dChipLinkage module of the dChip software (V1.3) to perform parametric linkage analysis and haplotyping of SNP array data. These functions are integrated with the existing modules of dChip to visualize SNP genotype data together with LOD score curves. We have analyzed three families with recessive and dominant diseases using the new software programs, and the comparison results are presented and discussed.

Conclusions

The CompareLinkage and dChipLinkage software packages are freely available. They provide visualization tools for high-density oligonucleotide SNP array data, as well as automated functions for formatting SNP array data for the linkage analysis programs Merlin and Allegro and for calling these programs for linkage analysis. The results can be visualized in dChip in the context of genes and cytobands. In addition, a variant of the Lander-Green algorithm is provided that allows parametric linkage analysis and haplotyping.
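The formatting step being automated writes pedigree rows in the pre-makeped LINKAGE layout: family, individual, father, mother, sex, affection status, then one pair of allele codes per marker. A simplified stand-in (not the published tool; the genotype coding details are assumptions):

```python
def write_linkage(ped_rows, genotypes, path):
    """Write a minimal pre-makeped LINKAGE pedigree file.

    ped_rows:  (family, person, father, mother, sex, affection) tuples;
               "0" marks a founder's unknown parent.
    genotypes: {person: [(allele1, allele2), ...]}, SNP alleles coded
               1/2, missing genotypes as (0, 0).
    """
    with open(path, "w") as out:
        for fam, pid, dad, mum, sex, aff in ped_rows:
            alleles = " ".join(f"{a} {b}" for a, b in genotypes[pid])
            out.write(f"{fam} {pid} {dad} {mum} {sex} {aff} {alleles}\n")

rows = [("F1", "1", "0", "0", 1, 1),      # father, unaffected
        ("F1", "2", "0", "0", 2, 1),      # mother, unaffected
        ("F1", "3", "1", "2", 1, 2)]      # affected son
geno = {"1": [(1, 2)], "2": [(1, 1)], "3": [(1, 1)]}
write_linkage(rows, geno, "family.ped")
```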

12.

Background

Genomic deletions and duplications are important in the pathogenesis of diseases, such as cancer and mental retardation, and have recently been shown to occur frequently in unaffected individuals as polymorphisms. Affymetrix GeneChip whole genome sampling analysis (WGSA) combined with 100 K single nucleotide polymorphism (SNP) genotyping arrays is one of several microarray-based approaches that are now being used to detect such structural genomic changes. The popularity of this technology and its associated open source data format have resulted in the development of an increasing number of software packages for the analysis of copy number changes using these SNP arrays.

Results

We evaluated four publicly available software packages for high throughput copy number analysis using synthetic and empirical 100 K SNP array data sets, the latter obtained from 107 mental retardation (MR) patients and their unaffected parents and siblings. We evaluated the software with regard to overall suitability for high-throughput 100 K SNP array data analysis; effectiveness of normalization, scaling with various reference sets, and feature extraction; and true and false positive rates of genomic copy number variant (CNV) detection.

Conclusion

We observed considerable variation among the numbers and types of candidate CNVs detected by different analysis approaches, and found that multiple programs were needed to find all real aberrations in our test set. The frequency of false positive deletions was substantial, but could be greatly reduced by using the SNP genotype information to confirm loss of heterozygosity.
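Scoring CNV calls against a truth set comes down to an interval-overlap rule; a common convention is reciprocal overlap, sketched here with an illustrative 50% threshold:

```python
def overlap(a, b):
    """Length of the intersection of two (start, end) intervals."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def evaluate(calls, truth, min_frac=0.5):
    """Count TP/FP/FN using reciprocal overlap >= min_frac."""
    def hits(x, pool):
        return any(overlap(x, y) >= min_frac * (x[1] - x[0]) and
                   overlap(x, y) >= min_frac * (y[1] - y[0])
                   for y in pool)
    tp = sum(hits(c, truth) for c in calls)
    fn = sum(not hits(t, calls) for t in truth)
    return tp, len(calls) - tp, fn

truth = [(100, 200), (500, 900)]
calls = [(110, 190), (480, 650), (1200, 1300)]
print(evaluate(calls, truth))   # -> (1, 2, 1)
```

The paper's extra step, confirming deletions by loss of heterozygosity, would simply add a genotype check before a call is allowed to count as a true positive.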

13.
The range of heterogeneous approaches available for quantifying protein abundance via mass spectrometry (MS) leads to considerable challenges in modeling, archiving, exchanging, or submitting experimental data sets as supplemental material to journals. To date, there has been no widely accepted format for capturing the evidence trail of how quantitative analysis has been performed by software, for transferring data between software packages, or for submitting to public databases. In the context of the Proteomics Standards Initiative, we have developed the mzQuantML data standard. The standard can represent quantitative data about regions in two-dimensional retention time versus mass/charge space (called features), peptides, proteins, and protein groups (where there is ambiguity regarding peptide-to-protein inference), and it offers limited support for small molecule (metabolomic) data. The format has structures for representing replicate MS runs, grouping of replicates (for example, as study variables), and capturing the parameters used by software packages to arrive at these values. The format can reference other standards such as mzML and mzIdentML, and thus the evidence trail for the MS workflow as a whole can now be described. Several software implementations are available, and we encourage other bioinformatics groups to use mzQuantML as an input, internal, or output format for quantitative software and for structuring local repositories. All project resources are available in the public domain from the HUPO Proteomics Standards Initiative at http://www.psidev.info/mzquantml.

The Proteomics Standards Initiative (PSI) has been working for ten years to improve the reporting and standardization of proteomics data. The PSI has published minimum reporting guidelines, called MIAPE (Minimum Information about a Proteomics Experiment) documents, for MS-based proteomics (1) and molecular interactions (2), as well as data standards for raw/processed MS data in mzML (3), peptide and protein identifications in mzIdentML (4), transitions for selected reaction monitoring analysis in TraML (5), and molecular interactions in PSI-MI format (6). Standards are particularly important for quantitative proteomics research, because the associated bioinformatics analysis is highly challenging as a result of the range of different experimental techniques for deriving protein abundance values using MS. The techniques can be broadly divided into those based on (i) differential labeling, in which a metabolic label or chemical tag is applied to cells, peptides, or proteins, samples are mixed, and intensity signals for peptide ions are compared within single MS runs; and (ii) label-free methods, in which MS runs occur in parallel and bioinformatics methods are used to extract intensity signals, ensuring that like-for-like signals are compared between runs (7). In most label-based and label-free approaches, peptide ratios or abundance values must be summarized in order to arrive at relative protein abundance values, taking into account ambiguity in peptide-to-protein inference. Absolute protein abundance values can typically be derived only using internal standards of known abundance spiked into samples (8, 9).
The PSI has recently developed a MIAPE-Quant document defining and describing the minimal information necessary to judge or repeat a quantitative proteomics experiment.

Software packages tend to report peptide or protein abundance values in a bespoke format, often as tab- or comma-separated values, for import into spreadsheet software. In complementary work, the PSI has developed mzTab, a standardized tab-separated format for capturing these final results, suitable for post-processing and visualization in end-user tools such as Microsoft Excel or the R programming language. The final results of a quantitative analysis are sufficient for many purposes, such as statistical analysis to determine differential expression or cluster analysis to find co-expressed proteins. However, mzTab (like similar bespoke formats) was not designed to hold a trace of how the peptide and protein abundance values were calculated from MS data; metadata is lost that might be crucial for other tasks. For example, most quantitative software packages detect and quantify so-called "features" (representing all ions collected for a given peptide) in two-dimensional MS data, where the two dimensions are retention time from liquid chromatography (LC) and mass over charge (m/z). Without capturing the two-dimensional coordinates of the features, it is not possible to write visualization software showing exactly what the software has quantified; researchers have to trust that the software has accurately quantified all ions from isotopes of a given peptide, excluding any overlapping ions derived from other peptides. The history of proteomics research has been one in which studies of highly variable quality have been published. There is also little quality control or benchmarking performed on quantitative software (10), meaning it is difficult to make quality judgments on a set of peptide and protein abundance values. The PSI has previously developed mzML, which can capture raw or processed MS data in a vendor-neutral format, and the mzIdentML standard, which captures search engine results and the important metadata (such as software parameters) so that peptide and protein identification data can be interpreted consistently. These two standards are now being used for data sharing and to support open-source software development, so that informatics groups can focus on algorithmic development rather than file format conversions. Until now, there has been no widely used open-source format or data standard for capturing metadata and data relating to the quantitation step of analysis pipelines. In this work, we report the mzQuantML standard from the PSI, which has recently completed the PSI standardization process (11) and of which version 1.0 has been released. We believe that quantitative proteomics research will benefit from improved capabilities for tracing what has happened to data at each stage of the analysis process. The mzQuantML standard has been designed to store quantitative values calculated for features, peptides, proteins, and/or protein groups (where there is ambiguity in protein inference), plus associated software parameters. It has also been designed to accommodate small molecule data to improve interoperability with metabolomics investigations. The format can represent experimental replicates and grouping of replicates, and it has been designed via an open and transparent process.
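The quantitation step whose evidence trail mzQuantML preserves can be caricatured in a few lines: feature intensities are summed per peptide and channel, peptide ratios are formed, and a protein value is a robust summary over its peptides. This is an illustration of the computation being documented, not the standard's schema or API:

```python
from statistics import median

# per-peptide channel intensities (light, heavy), e.g. summed feature areas
peptides = {
    "PEPTIDEK": (1.2e6, 2.3e6),
    "LTIDEVNR": (8.0e5, 1.7e6),
    "AGGLRPK":  (4.1e5, 9.5e5),
}
protein_map = {"PROT1": ["PEPTIDEK", "LTIDEVNR", "AGGLRPK"]}

def protein_ratio(prot):
    """Median heavy/light ratio over the protein's peptides."""
    ratios = [peptides[p][1] / peptides[p][0] for p in protein_map[prot]]
    return median(ratios)

print(f"PROT1 heavy/light = {protein_ratio('PROT1'):.2f}")
```

What mzQuantML adds over a bare results table is precisely the record of the inputs to this computation: the feature coordinates, the replicate grouping, and the software parameters used at each step.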

14.
Purpose
Validate the skin dose software within the radiation dose index monitoring system NEXO[DOSE]® (Bracco Injeneering S.A., Lausanne, Switzerland), which provides the skin dose distribution in interventional radiology (IR) procedures.

Methods
To determine the skin dose distribution and the Peak Skin Dose (PSD) in IR procedures, the software uses exposure and geometrical parameters taken from the radiation dose structured report, together with additional information specific to each angiographic system. To test the accuracy of the software, GafChromic® XR-RV3 films, wrapped under a cylindrical PMMA phantom, were irradiated with different setups. Calculated and film results are compared in terms of absolute dose and geometric accuracy, using two angiographic systems (Philips Integris Allura FD20, Siemens AXIOM-Artis Zeego).

Results
Calculated and film-measured PSD values agree with an average difference of 7% ± 5%. The discrepancies in dose evaluation increase up to 33% in lower-dose regions, because the algorithm does not consider the out-of-field scatter contribution of neighboring fields, which is more significant in these areas. Regarding geometric accuracy, the differences between the simulated and measured dose spatial distributions are <3 mm (4%) in simple tests and 5 mm (5%) in setups closer to clinical practice. Similar results are obtained for both angiographic system vendors.

Conclusions
NEXO[DOSE]® provides an accurate skin dose distribution and PSD estimate. It will allow faster and more accurate monitoring of patient follow-up.
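The heart of such software is accumulating each irradiation event's dose onto a map of the patient's skin and taking the hottest cell as the PSD. A toy version with rectangular fields on a flat grid (the real product's geometry, backscatter and table attenuation handling are omitted):

```python
import numpy as np

skin = np.zeros((40, 60))   # unfolded skin map, ~1 cm^2 per cell

def add_event(skin, dose_mgy, row, col, h, w):
    """Deposit a uniform rectangular field (toy beam geometry)."""
    skin[row:row + h, col:col + w] += dose_mgy

add_event(skin, 120.0, 10, 15, 8, 10)   # frontal projection
add_event(skin, 90.0, 12, 20, 8, 10)    # overlapping oblique projection

print(f"PSD = {skin.max():.0f} mGy")    # overlap region accumulates 210 mGy
```

Note that this toy model deposits exactly zero dose outside each rectangle; the abstract's observation about out-of-field scatter is the reason low-dose regions are where such algorithms underestimate.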

15.
The aim of this work was to estimate the risk of lung tumour occurrence after inhalation of actinide oxides, from published studies and from rat studies in progress. For the same delivered dose, the risk increases as the homogeneity of irradiation increases, i.e., as the number of particles deposited after inhalation increases (small particles and/or low specific alpha activity). The reported dose-effect relationships appear linear up to a few gray, depending on the aerosol considered, and then the slope decreases. This slope, which corresponds to the risk, can vary over one order of magnitude depending on the aerosol used. An effective threshold at about 1 Gy was not observed for the most homogeneous dose distributions. A dosimetric and biological approach is proposed to provide a more realistic risk estimate.

16.
The emergence of next-generation sequencing (NGS) technologies has significantly improved sequencing throughput and reduced costs. However, the short read lengths, duplicate reads and massive volume of data make data processing much more difficult and complicated than for first-generation sequencing technology. Although some software packages have been developed to assess data quality, they either are not easily available to users or require bioinformatics skills and computing resources. Moreover, almost none of the currently available quality assessment software takes sequencing errors into account when assessing duplicates in NGS data. Here, we present a new user-friendly quality assessment software package called BIGpre, which works for both the Illumina and 454 platforms. BIGpre contains all the functions of other quality assessment software, such as the correlation between forward and reverse reads, the read GC-content distribution, and base quality statistics including N content. More importantly, BIGpre incorporates programs to detect and remove duplicate reads while taking sequencing errors into account, and to trim low-quality reads from the raw data. BIGpre is written primarily in Perl and integrates graphical capabilities from the R statistics package. It produces both tabular and graphical summaries of data quality for sequencing datasets from the Illumina and 454 platforms. Processing hundreds of millions of reads within minutes, the package provides immediate diagnostic information for manipulating sequencing data for downstream analyses. BIGpre is freely available at http://bigpre.sourceforge.net/.
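The point about sequencing errors is that exact-match deduplication misses duplicates that differ by a miscalled base. A toy error-tolerant version clusters reads that share a prefix and differ by at most a given Hamming distance (quadratic per bucket, purely illustrative):

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def dedup(reads, max_mismatch=1, prefix=8):
    """Keep one representative per near-duplicate group."""
    kept, buckets = [], {}
    for r in reads:
        group = buckets.setdefault(r[:prefix], [])
        if not any(hamming(r, k) <= max_mismatch for k in group):
            group.append(r)
            kept.append(r)
    return kept

reads = ["ACGTACGTAAAA",
         "ACGTACGTAAAT",   # same fragment, one sequencing error
         "TTTTACGTAAAA"]
print(dedup(reads))        # -> ['ACGTACGTAAAA', 'TTTTACGTAAAA']
```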

17.
The aim of this study was to implement an outlier marking and analysis methodology to optimize CT examination protocols.

CT head examination data, including dose metrics and technical parameters, were stored in an automatic dose registry system. A reference distribution of dose metrics was obtained over a 1-year period. Outlier thresholds were calculated taking into account the specific shape of the distribution, using a robust measure of skewness: the medcouple parameter. Subsequently, outliers from a 4-month period were marked and a cause-and-effect analysis was carried out by a multidisciplinary dose committee.

Reference dose metric distributions were obtained from 3690 CT head examinations. Both CTDIvol and DLP showed a certain degree of skewness, with medcouple values of 0.05 and 0.11, respectively. All of the upper outliers fell within three identifiable groups of causes, ordered by relative importance: (i) inadequate protocol selection, (ii) arms or objects in the field of view, and (iii) abnormal scanning-region diameter. Among the lower outliers, 90% were attributable to the inclusion of additional series in the original head protocol and the remaining 10% to unknown causes. A general cause-and-effect diagram for outliers was also elaborated.

While the dose reference level method applies to the general performance of a CT protocol and allows comparison with other centers, the outlier method represents a step further in the optimization process. The proposed method focuses on detecting incorrect utilization of the CT scanner, which mainly arises from inadequate knowledge of CT technology.
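The medcouple-based thresholds are those of the skewness-adjusted boxplot (Hubert and Vandervieren): Tukey's 1.5 x IQR fences are stretched or shrunk by exponential factors in the medcouple MC. A sketch with a naive O(n^2) medcouple, adequate for samples of this size:

```python
import numpy as np

def medcouple(x):
    """Naive O(n^2) medcouple: robust skewness measure in [-1, 1]."""
    x = np.sort(np.asarray(x, dtype=float))
    med = np.median(x)
    lo, hi = x[x <= med], x[x >= med]
    # kernel h(xi, xj) = ((xj - med) - (med - xi)) / (xj - xi)
    num = hi[None, :] + lo[:, None] - 2 * med
    den = hi[None, :] - lo[:, None]
    mask = den != 0               # ties at the median ignored for simplicity
    return float(np.median(num[mask] / den[mask]))

def adjusted_fences(x, k=1.5):
    """Skewness-adjusted boxplot fences."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr, mc = q3 - q1, medcouple(x)
    if mc >= 0:
        return q1 - k * np.exp(-4 * mc) * iqr, q3 + k * np.exp(3 * mc) * iqr
    return q1 - k * np.exp(-3 * mc) * iqr, q3 + k * np.exp(4 * mc) * iqr

dlp = np.random.default_rng(2).lognormal(6.9, 0.25, 3690)  # toy DLP sample
low, high = adjusted_fences(dlp)
print(int(((dlp < low) | (dlp > high)).sum()), "examinations flagged")
```

For a right-skewed DLP distribution (MC > 0) the upper fence moves out and the lower fence moves in, so fewer high-dose examinations are flagged spuriously while genuinely low outliers (e.g. extra series in the protocol) are still caught.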

18.

Background and aims

In order to analyse root system architectures (RSAs) from captured images, a variety of manual (e.g. Data Analysis of Root Tracings, DART), semi-automated and fully automated software packages have been developed. These tools offer complementary approaches to studying RSAs, and the use of the Root System Markup Language (RSML) to store RSA data makes it easier to compare measurements obtained with different (semi-)automated root imaging platforms. The throughput of the data analysis process using exported RSA data, however, should benefit greatly from batch analysis in a generic data analysis environment such as the R software.

Methods

We developed an R package (archiDART) with five functions. It computes global RSA traits, root growth rates, root growth directions and trajectories, and lateral root distribution from DART-generated and/or RSML files. It also has specific plotting functions designed to visualise the dynamics of root system growth.

Results

The results demonstrate the ability of the package's functions to compute relevant traits for three contrasting RSAs (Brachypodium distachyon [L.] P. Beauv., Hevea brasiliensis Müll. Arg. and Solanum lycopersicum L.).

Conclusions

This work extends the DART software package and other image analysis tools supporting the RSML format, enabling users to easily calculate a number of RSA traits in a generic data analysis environment.
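Since RSML is XML, RSA traits can also be computed outside R; for example, total root length is just the summed polyline length over all roots. The tag layout below (root > geometry > polyline > point with x/y attributes) follows the common RSML structure but should be treated as an assumption:

```python
import math
import xml.etree.ElementTree as ET

def total_root_length(rsml_path):
    """Sum polyline lengths over all roots in an RSML file (file units)."""
    tree = ET.parse(rsml_path)
    total = 0.0
    for polyline in tree.iter("polyline"):
        pts = [(float(p.get("x")), float(p.get("y")))
               for p in polyline.iter("point")]
        total += sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    return total

# print(total_root_length("plant1.rsml"))  # path is illustrative
```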

19.

Background  

During the past decade, many software packages have been developed for the analysis and visualization of various types of microarrays. We have developed and maintained the widely used dChip as a microarray analysis software package accessible to both biologists and data analysts. However, challenges arise when dChip users want to analyze large numbers of arrays automatically and to share data analysis procedures and parameters. Improvement is also needed when the dChip user support team tries to identify the causes of analysis errors or bugs reported by users.

20.
Small hairpin RNAs (shRNAs) are useful in many ways, such as the identification of trait-specific molecular markers, gene silencing and the characterization of a species. In the public domain, there is hardly any standalone software for shRNA prediction. Hence, the software shRNAPred (1.0) is proposed here to offer a user-friendly command-line user interface (CUI) for predicting 'shRNA-like' regions from a large set of nucleotide sequences. The software is developed using Perl version 5.12.5, taking into account parameters such as stem and loop length combinations, specific loop sequences, GC content, melting temperature, position-specific nucleotides, a low-complexity filter, etc. Each parameter is assigned a specific score, on the basis of which the software ranks the predicted shRNAs. The high-scoring shRNAs obtained from the software are reported as potential shRNAs and provided to the user as a text file. The software also allows the user to customize certain parameters when predicting specific shRNAs of interest. shRNAPred (1.0) is open-access software available for academic users. It can be downloaded freely along with a user manual, an example dataset and example output for easy understanding and implementation. AVAILABILITY: The software is available for free at http://bioinformatics.iasri.res.in/EDA/downloads/shRNAPred_v1.0.exe.
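A scoring scheme of the kind described, an additive score over stem/loop windows using GC content, melting temperature (Wallace rule) and simple positional checks, can be sketched as follows. The weights and thresholds are illustrative, not shRNAPred's actual values:

```python
def score_candidate(stem, loop):
    """Additive score for one hairpin candidate (illustrative weights)."""
    gc = sum(stem.count(b) for b in "GC") / len(stem)
    # Wallace rule, a rough Tm estimate for short oligos:
    tm = 2 * sum(stem.count(b) for b in "AT") + 4 * sum(stem.count(b) for b in "GC")
    score = 0
    score += 2 if 19 <= len(stem) <= 21 else 0   # preferred stem length
    score += 2 if 4 <= len(loop) <= 9 else 0     # preferred loop length
    score += 2 if 0.30 <= gc <= 0.55 else 0      # moderate GC content
    score += 1 if 50 <= tm <= 65 else 0          # workable melting temperature
    score += 1 if stem.startswith("G") else 0    # position-specific base
    return score

def scan(seq, stem_len=19, loop_len=6):
    """Score every stem+loop window; return candidates, best first."""
    out = []
    for i in range(len(seq) - stem_len - loop_len + 1):
        stem = seq[i:i + stem_len]
        loop = seq[i + stem_len:i + stem_len + loop_len]
        out.append((score_candidate(stem, loop), stem, loop))
    return sorted(out, reverse=True)

print(scan("GATCGGAAGCTTGGCGTAACTAGATCTTGAGACAAATGGC")[:3])
```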
