Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
Network meta-analysis (NMA) – a statistical technique that allows multiple treatments to be compared simultaneously in the same meta-analysis – has become increasingly popular in the medical literature in recent years. The statistical methodology underpinning this technique and the software tools for implementing it are evolving. Both commercial and freely available statistical software packages have been developed to facilitate NMA computations, with varying degrees of functionality and ease of use. This paper introduces the reader to three freely available R packages, namely gemtc, pcnetmeta, and netmeta. Each automates the process of performing NMA so that users can carry out the analysis with minimal computational effort. We present, compare, and contrast the availability and functionality of the important features of NMA in these three packages so that clinical investigators and researchers can determine which R package best suits their analysis needs. Four summary tables detailing (i) data input and network plotting, (ii) modeling options, (iii) assumption checking and diagnostic testing, and (iv) inference and reporting tools are provided, along with an analysis of a previously published dataset to illustrate the outputs available from each package. We demonstrate that each of the three packages provides a useful set of tools, and that combined they provide users with nearly all the functionality that might be desired when conducting an NMA.
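The pairwise building block that these packages generalize to whole treatment networks is inverse-variance pooling. A minimal Python sketch with invented trial data (gemtc, pcnetmeta, and netmeta of course implement full Bayesian or frequentist NMA, not this toy):

```python
import math

def pool_fixed_effect(estimates, variances):
    """Inverse-variance fixed-effect pooling of study-level effect estimates."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, se

# Three hypothetical A-vs-B trials: log odds ratios and their variances.
lor = [0.20, 0.35, 0.10]
var = [0.04, 0.09, 0.05]
est, se = pool_fixed_effect(lor, var)
```

Precise trials (small variance) dominate the pooled estimate; an NMA repeats this logic across every edge of the treatment network while enforcing consistency.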

2.
macroeco is a Python package that supports the analysis of empirical macroecological patterns and the comparison of these patterns to theoretical predictions. Here we describe the use of macroeco and the various functions that it contains. We also highlight a unique high‐level interface included with the package, MacroecoDesktop, that allows non‐programmers to access the functionality of macroeco. MacroecoDesktop takes simple text‐based metadata and parameter files as inputs and generates both tabular and graphical outputs, supporting users in creating reproducible workflows that follow the principles of simplicity, provenance, and automation. Both macroeco and MacroecoDesktop provide case studies for developers of analytically‐focused scientific software packages who wish to better support the reproducible use of their tools.
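The text-file-driven workflow can be pictured with a short sketch; the section name and keys below are hypothetical, not the actual MacroecoDesktop parameter schema:

```python
import configparser

# A hypothetical MacroecoDesktop-style parameter file; the real format may
# differ, this only illustrates the reproducible text-based-workflow idea.
param_text = """
[SAD]
analysis = species_abundance_distribution
data_path = census_table.csv
model = logseries
"""

config = configparser.ConfigParser()
config.read_string(param_text)
run = config["SAD"]  # every analysis is fully specified by this plain file
```

Because the entire analysis is declared in a plain-text file, rerunning it later (provenance) requires no code changes.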

3.
BEAST 2: A Software Platform for Bayesian Evolutionary Analysis (total citations: 1; self-citations: 0; citations by others: 1)
We present a new open source, extensible and flexible software platform for Bayesian evolutionary analysis called BEAST 2. This software platform is a re-design of the popular BEAST 1 platform to correct structural deficiencies that became evident as the BEAST 1 software evolved. Key among those deficiencies was the lack of post-deployment extensibility. BEAST 2 now has a fully developed package management system that allows third party developers to write additional functionality that can be directly installed to the BEAST 2 analysis platform via a package manager without requiring a new software release of the platform. This package architecture is showcased with a number of recently published new models encompassing birth-death-sampling tree priors, phylodynamics and model averaging for substitution models and site partitioning. A second major improvement is the ability to read/write the entire state of the MCMC chain to/from disk, allowing it to be easily shared between multiple instances of the BEAST software. This facilitates checkpointing and better support for multi-processor and high-end computing extensions. Finally, the functionality in new packages can be easily added to the user interface (BEAUti 2) by a simple XML template-based mechanism because BEAST 2 has been re-designed to provide greater integration between the analysis engine and the user interface so that, for example, BEAST and BEAUti use exactly the same XML file format.
This is a PLOS Computational Biology Software Article.
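The read/write-state idea — serialize everything the sampler needs, including the random number generator, so that a fresh instance continues the identical chain — can be sketched with a toy Metropolis sampler. This illustrates checkpointing in general, not BEAST 2's actual state-file format:

```python
import json
import math
import random

def metropolis_step(x, rng):
    # One Metropolis step targeting a standard normal density.
    prop = x + rng.uniform(-1.0, 1.0)
    accept = math.exp(min(0.0, 0.5 * (x * x - prop * prop)))
    return prop if rng.random() < accept else x

rng = random.Random(42)
x = 0.0
for _ in range(100):
    x = metropolis_step(x, rng)

# Checkpoint: serialize the complete sampler state (parameter + RNG state).
blob = json.dumps({"x": x, "rng": rng.getstate()})

# Resume in a "fresh instance" from the serialized state.
saved = json.loads(blob)
rng2 = random.Random()
rng2.setstate((saved["rng"][0], tuple(saved["rng"][1]), saved["rng"][2]))
x2 = saved["x"]

# The resumed chain reproduces the original chain exactly.
assert metropolis_step(x2, rng2) == metropolis_step(x, rng)
```

Capturing the RNG state is the crucial detail: without it, a resumed chain diverges from the original and checkpointed runs are not reproducible.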

4.
Metabolic flux analysis (MFA) combines experimental measurements and computational modeling to determine biochemical reaction rates in live biological systems. Advancements in analytical instrumentation, such as nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry (MS), have facilitated chemical separation and quantification of isotopically enriched metabolites. However, no software packages have been previously described that can integrate isotopomer measurements from both MS and NMR analytical platforms and have the flexibility to estimate metabolic fluxes from either isotopic steady-state or dynamic labeling experiments. By applying physiologically relevant cardiac and hepatic metabolic models to assess NMR isotopomer measurements, we herein test and validate new modeling capabilities of our enhanced flux analysis software tool, INCA 2.0. We demonstrate that INCA 2.0 can simulate and regress steady-state 13C NMR datasets from perfused hearts with an accuracy comparable to other established flux assessment tools. Furthermore, by simulating the infusion of three different 13C acetate tracers, we show that MFA based on dynamic 13C NMR measurements can more precisely resolve cardiac fluxes compared to isotopically steady-state flux analysis. Finally, we show that estimation of hepatic fluxes using combined 13C NMR and MS datasets improves the precision of estimated fluxes by up to 50%. Overall, our results illustrate how the recently added NMR data modeling capabilities of INCA 2.0 can enable entirely new experimental designs that lead to improved flux resolution and can be applied to a wide range of biological systems and measurement time courses.
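At its core, flux estimation is a regression of measured labeling against model predictions. A deliberately oversimplified sketch with invented numbers and a single flux parameter entering linearly (real 13C MFA, as in INCA, solves nonlinear isotopomer balance equations):

```python
def fit_flux(pred_basis, measured):
    # Least-squares estimate of f in: measured_i ~ f * pred_basis_i
    num = sum(p * m for p, m in zip(pred_basis, measured))
    den = sum(p * p for p in pred_basis)
    return num / den

# Hypothetical predicted enrichment per unit flux, and noisy measurements.
basis = [0.2, 0.4, 0.6, 0.8]
meas = [0.11, 0.19, 0.32, 0.41]

f_hat = fit_flux(basis, meas)
residuals = [m - f_hat * p for p, m in zip(basis, meas)]
```

The residual pattern is what goodness-of-fit tests in flux software examine; combining NMR and MS measurements simply adds more rows to this regression, which is why the paper reports tighter flux confidence intervals.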

5.
We describe methods for interactive visualization and analysis of density maps available in the UCSF Chimera molecular modeling package. The methods enable segmentation, fitting, coarse modeling, measuring and coloring of density maps for elucidating structures of large molecular assemblies such as virus particles, ribosomes, microtubules, and chromosomes. The methods are suitable for density maps with resolutions in the range spanned by electron microscope single particle reconstructions and tomography. All of the tools described are simple, robust and interactive, involving computations taking only seconds. An advantage of the UCSF Chimera package is its integration of a large collection of interactive methods. Interactive tools are sufficient for performing simple analyses and also serve to prepare input for and examine results from more complex, specialized, and algorithmic non-interactive analysis software. While both interactive and non-interactive analyses are useful, we discuss only interactive methods here.

6.
7.
Previous studies have reported that some important loci are missed in single-locus genome-wide association studies (GWAS), especially because of the large phenotypic error in field experiments. To solve this issue, multi-locus GWAS methods have been recommended. However, only a few software packages for multi-locus GWAS are available. Therefore, we developed an R package named mrMLM v4.0.2. This software integrates the mrMLM, FASTmrMLM, FASTmrEMMA, pLARmEB, pKWmEB, and ISIS EM-BLASSO methods developed by our lab. There are four components in mrMLM v4.0.2: dataset input, parameter setting, software running, and result output. The fread function in the data.table package is used to quickly read datasets, especially big datasets, and the doParallel package is used to conduct parallel computation on multiple CPUs. In addition, the graphical user interface software mrMLM.GUI v4.0.2, built upon Shiny, is also available. To confirm the correctness of the aforementioned programs, all the methods in mrMLM v4.0.2 and three widely used methods were used to analyze real and simulated datasets. The results confirm the superior performance of mrMLM v4.0.2 over other methods currently available. False positive rates are effectively controlled, albeit with a less stringent significance threshold. mrMLM v4.0.2 is publicly available at BioCode (https://bigd.big.ac.cn/biocode/tools/BT007077) or R (https://cran.r-project.org/web/packages/mrMLM.GUI/index.html) as open-source software.
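The parallel per-marker scan that doParallel provides in R looks, in spirit, like the following Python sketch; the association statistic and genotype data are toy stand-ins, not mrMLM's actual tests:

```python
from concurrent.futures import ThreadPoolExecutor

def marker_score(genotypes, phenotype):
    # Toy association score: absolute covariance between marker and trait.
    n = len(phenotype)
    gm = sum(genotypes) / n
    pm = sum(phenotype) / n
    return abs(sum((g - gm) * (p - pm) for g, p in zip(genotypes, phenotype)) / n)

# Hypothetical genotype matrix (rows = markers, coded 0/1/2) and trait values.
markers = [[0, 1, 2, 1], [2, 2, 0, 0], [1, 1, 1, 1]]
trait = [4.0, 5.0, 7.0, 5.5]

# Score every marker concurrently, mirroring a doParallel foreach loop.
with ThreadPoolExecutor(max_workers=4) as pool:
    scores = list(pool.map(lambda g: marker_score(g, trait), markers))
best = max(range(len(scores)), key=scores.__getitem__)
```

Because each marker is scored independently, the scan is embarrassingly parallel, which is what makes multi-CPU execution worthwhile on genome-scale marker sets.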

8.
In an era of rapid global change, our ability to understand and predict Earth's natural systems is lagging behind our ability to monitor and measure changes in the biosphere. Bottlenecks to informing models with observations have reduced our capacity to fully exploit the growing volume and variety of available data. Here, we take a critical look at the information infrastructure that connects ecosystem modeling and measurement efforts, and propose a roadmap to community cyberinfrastructure development that can reduce the divisions between empirical research and modeling and accelerate the pace of discovery. A new era of data‐model integration requires investment in accessible, scalable, and transparent tools that integrate the expertise of the whole community, including both modelers and empiricists. This roadmap focuses on five key opportunities for community tools: the underlying foundations of community cyberinfrastructure; data ingest; calibration of models to data; model‐data benchmarking; and data assimilation and ecological forecasting. This community‐driven approach is a key to meeting the pressing needs of science and society in the 21st century.

9.

Background  

New mathematical models of complex biological structures and computer simulation software allow modelers to simulate and analyze biochemical systems in silico and to generate mathematical predictions. Because of this predictive ability, these models and software have the potential to complement laboratory investigations and help refine, or even develop, new hypotheses. However, the existing mathematical modeling techniques and simulation tools are often difficult for laboratory biologists to use without training in higher-level mathematics, limiting their use to trained modelers.

10.
We present the ggtreeExtra package for visualizing heterogeneous data with a phylogenetic tree in a circular or rectangular layout (https://www.bioconductor.org/packages/ggtreeExtra). The package supports more data types and visualization methods than other tools. It uses the grammar of graphics syntax to present data on a tree with richly annotated layers and allows evolutionary statistics inferred by commonly used software to be integrated and visualized with external data. ggtreeExtra is a universal tool for tree data visualization. It extends the applications of phylogenetic trees in different disciplines by making more domain-specific data available for visualization and interpretation in an evolutionary context.

11.
Over the past 40 years, actigraphy has been used to study rest-activity patterns in circadian rhythm and sleep research. Furthermore, considering its simplicity of use, there is a growing interest in the analysis of large population-based samples using actigraphy. Here, we introduce pyActigraphy, a comprehensive toolbox for data visualization and analysis including multiple sleep detection algorithms and rest-activity rhythm variables. This open-source Python package implements methods to read multiple data formats, quantify various properties of rest-activity rhythms, visualize sleep agendas, automatically detect rest periods and perform more advanced signal processing analyses. The development of this package aims to pave the way towards the establishment of a comprehensive open-source software suite, supported by a community of both developers and researchers, that would provide all the necessary tools for in-depth and large-scale actigraphy data analyses.
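The simplest form of automatic rest-period detection — flagging sustained runs of low activity counts — can be sketched as follows; pyActigraphy's published algorithms are considerably more sophisticated, and the threshold and data here are arbitrary:

```python
def rest_periods(counts, threshold=10, min_len=3):
    # Return (start, end) index pairs of runs where activity stays below threshold.
    periods, start = [], None
    for i, c in enumerate(counts + [threshold]):  # sentinel closes a final run
        if c < threshold and start is None:
            start = i
        elif c >= threshold and start is not None:
            if i - start >= min_len:
                periods.append((start, i))
            start = None
    return periods

# Hypothetical epoch-by-epoch activity counts from a wrist actigraph.
activity = [50, 3, 2, 4, 1, 60, 55, 0, 1, 2, 70]
periods = rest_periods(activity)
```

The `min_len` guard drops brief stillness (e.g. sitting) so that only sustained low-activity runs are treated as candidate rest periods.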

12.

Purpose

Life cycle assessment (LCA) software packages have proliferated and evolved as LCA has developed and grown. There are now a multitude of LCA software packages that must be critically evaluated by users. Prior to conducting a comparative LCA study on different concrete materials, it is necessary to examine a variety of software packages for this specific purpose. The paper evaluates five LCA tools in the context of the LCA of seven concrete mix designs (conventional concrete, concrete with fly ash, slag, silica fume or limestone as cement replacement, recycled aggregate concrete, and photocatalytic concrete).

Methods

Three key evaluation criteria required to assess the quality of analysis are adequate flexibility, sophistication and complexity of analysis, and usefulness of outputs. The quality of life cycle inventory (LCI) data included in each software package is also assessed for its reliability, completeness, and correlation to the scope of LCA of concrete products in Canada. A questionnaire is developed for evaluating LCA software packages and is applied to five LCA tools.

Results and discussion

The result is the selection of a software package for the specific context of LCA of concrete materials in Canada, which will be used to complete a full LCA study. The software package with the highest score is software package C (SP-C), with 44 out of a possible 48 points. Its main advantage is that it allows for the user to have a high level of control over the system being modeled and the calculation methods used.
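A questionnaire of weighted criteria reduces to a weighted sum per package. A sketch with invented weights and scores (the paper's actual sub-criteria and point values are not reproduced here):

```python
# Hypothetical criterion weights and 0-4 sub-scores, in the spirit of the
# paper's questionnaire; the real instrument totals 48 points.
criteria = {"flexibility": 2, "analysis_depth": 1, "output_quality": 1}

def total_score(scores, weights):
    # Weighted sum of per-criterion scores.
    return sum(weights[c] * s for c, s in scores.items())

packages = {
    "SP-C": {"flexibility": 4, "analysis_depth": 4, "output_quality": 3},
    "SP-A": {"flexibility": 2, "analysis_depth": 3, "output_quality": 4},
}
ranking = sorted(packages, key=lambda p: -total_score(packages[p], criteria))
```

Making the weights explicit is the point of such a questionnaire: a different project can reuse the same sheet with weights that reflect its own priorities.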

Conclusions

This comparative study highlights the importance of selecting a software package that is appropriate for a specific research project. The ability to accurately model the chosen functional unit and system boundary is an important selection criterion. This study demonstrates a method to enable a critical and rigorous comparison without excessive and redundant duplication of efforts.

13.
A key benefit of long-read nanopore sequencing technology is the ability to detect modified DNA bases, such as 5-methylcytosine. The lack of R/Bioconductor tools for the effective visualization of nanopore methylation profiles between samples from different experimental groups led us to develop the NanoMethViz R package. Our software can handle methylation output generated from a range of different methylation callers and manages large datasets using a compressed data format. To fully explore the methylation patterns in a dataset, NanoMethViz allows plotting of data at various resolutions. At the sample level, we use dimensionality reduction to look at the relationships between methylation profiles in an unsupervised way. We visualize methylation profiles of classes of features, such as genes or CpG islands, by scaling them to relative positions and aggregating their profiles. At the finest resolution, we visualize methylation patterns across individual reads along the genome using spaghetti plots and heatmaps, allowing users to explore particular genes or genomic regions of interest. In summary, our software makes the handling of methylation signal more convenient, expands the visualization options for nanopore data and works seamlessly with existing methylation analysis tools available in the Bioconductor project. Our software is available at https://bioconductor.org/packages/NanoMethViz.
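The feature-scaling step — mapping positions within each gene to a relative 0-1 coordinate and averaging methylation in bins — can be sketched in a few lines; the data are toy values, and NanoMethViz performs this in R on real methylation calls:

```python
def aggregate_profiles(features, n_bins=4):
    # features: list of (positions, methylation) pairs, one per gene.
    # Each gene is rescaled to [0, 1], then values are averaged per bin.
    sums, counts = [0.0] * n_bins, [0] * n_bins
    for positions, meth in features:
        start, end = min(positions), max(positions)
        span = max(end - start, 1)
        for pos, m in zip(positions, meth):
            b = min(int((pos - start) / span * n_bins), n_bins - 1)
            sums[b] += m
            counts[b] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

# Two hypothetical genes with per-site methylation fractions.
gene_a = ([100, 150, 200, 300], [0.9, 0.8, 0.4, 0.1])
gene_b = ([5000, 5100, 5400], [1.0, 0.6, 0.2])
profile = aggregate_profiles([gene_a, gene_b])
```

Rescaling lets genes of very different lengths contribute to one average profile, which is what makes class-level plots (all genes, all CpG islands) possible.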

14.
15.
Structural modeling of macromolecular complexes greatly benefits from interactive visualization capabilities. Here we present the integration of several modeling tools into UCSF Chimera. These include comparative modeling by MODELLER, simultaneous fitting of multiple components into electron microscopy density maps by IMP MultiFit, computing of small-angle X-ray scattering profiles and fitting of the corresponding experimental profile by IMP FoXS, and assessment of amino acid sidechain conformations based on rotamer probabilities and local interactions by Chimera.

16.
The current global challenges that threaten biodiversity are immense and rapidly growing. These biodiversity challenges demand approaches that meld bioinformatics, large-scale phylogeny reconstruction, use of digitized specimen data, and complex post-tree analyses (e.g. niche modeling, niche diversification, and other ecological analyses). Recent developments in phylogenetics coupled with emerging cyberinfrastructure and new data sources provide unparalleled opportunities for mobilizing and integrating massive amounts of biological data, driving the discovery of complex patterns and new hypotheses for further study. These developments are not trivial in that biodiversity data on the global scale now being collected and analyzed are inherently complex. The ongoing integration and maturation of biodiversity tools discussed here is transforming biodiversity science, enabling what we broadly term “next-generation” investigations in systematics, ecology, and evolution (i.e., “biodiversity science”). New training that integrates domain knowledge in biodiversity and data science skills is also needed to accelerate research in these areas. Integrative biodiversity science is crucial to the future of global biodiversity. We cannot simply react to continued threats to biodiversity, but via the use of an integrative, multifaceted, big data approach, researchers can now make biodiversity projections to provide crucial data not only for scientists, but also for the public, land managers, policy makers, urban planners, and agriculture.

17.
Biology is advanced by producing structural models of biological systems, such as protein complexes. Some systems are recalcitrant to traditional structure determination methods. In such cases, it may still be possible to produce useful models by integrative structure determination, which depends on the simultaneous use of multiple types of data. An ensemble of models that are sufficiently consistent with the data is produced by a structural sampling method guided by a data‐dependent scoring function. The variation in the ensemble of models quantifies the uncertainty of the structure, generally resulting from the uncertainty in the input information and actual structural heterogeneity in the samples used to produce the data. Here, we describe how to generate, assess, and interpret ensembles of integrative structural models using our open-source Integrative Modeling Platform (https://integrativemodeling.org).
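The principle that ensemble variation quantifies structural uncertainty can be illustrated with per-coordinate spread across models; the "structures" below are toy coordinate vectors, not IMP output:

```python
import math

def ensemble_spread(models):
    # models: list of equal-length coordinate vectors (toy "structures").
    # Per-coordinate standard deviation across the ensemble = local precision.
    n = len(models)
    spread = []
    for coords in zip(*models):
        mean = sum(coords) / n
        spread.append(math.sqrt(sum((c - mean) ** 2 for c in coords) / n))
    return spread

# Three hypothetical models accepted as consistent with the data.
ensemble = [[1.0, 5.0], [1.2, 7.0], [0.8, 6.0]]
spread = ensemble_spread(ensemble)  # small spread = well-determined coordinate
```

A coordinate that the data constrain tightly varies little across accepted models; a loosely constrained one varies widely, and that spread is what should be reported alongside the model.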

18.

Background  

Spectral processing and post-experimental data analysis are the major tasks in NMR-based metabonomics studies. While there are commercial and freely licensed software tools available to assist with these tasks, researchers usually have to use multiple software packages for their studies because packages generally focus on specific tasks. It would be beneficial to have a highly integrated platform in which these tasks can be completed within one package. Moreover, with an open source architecture, newly proposed algorithms or methods for spectral processing and data analysis can be implemented much more easily and accessed freely by the public.

19.
Quantitative uncertainty analysis has become a common component of risk assessments. In risk assessment models, the most robust method for propagating uncertainty is Monte Carlo simulation. Many software packages available today offer Monte Carlo capabilities while requiring minimal learning time, computational time, and/or computer memory. This paper presents an evaluation of six software packages in the context of risk assessment: Crystal Ball, @Risk, Analytica, Stella II, PRISM, and Susa-PC. Crystal Ball and @Risk are spreadsheet-based programs; Analytica and Stella II are multi-level, influence-diagram-based programs designed for the construction of complex models; PRISM and Susa-PC are both public-domain programs designed for incorporating uncertainty and sensitivity into any model written in Fortran. Each software package was evaluated on the basis of five criteria, with each criterion having several sub-criteria. A ‘User Preferences Table’ was also developed for an additional comparison of the software packages. The evaluations were based on nine weeks of experimentation with the packages, including use of the associated user manuals and tests of the software on example problems. The results of these evaluations indicate that Stella II has the most extensive modeling capabilities and can handle linear differential equations. Crystal Ball has the best input scheme for entering uncertain parameters and the best reference materials. @Risk offers a slightly better standard output scheme and requires a little less learning time. Susa-PC has the most options for detailed statistical analysis of the results, such as multiple options for sensitivity analysis and sophisticated options for inputting correlations. Analytica is a versatile, menu- and graphics-driven package, while PRISM is a more specialized and less user-friendly program.
When choosing between software packages for uncertainty and sensitivity analysis, the choice largely depends on the specifics of the problem being modeled. However, for risk assessment problems that can be implemented on a spreadsheet, Crystal Ball is recommended because it offers the best input options, a good output scheme, adequate uncertainty and sensitivity analysis, superior reference materials, and an intuitive spreadsheet basis while requiring very little memory.
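The core loop shared by all six packages — sample uncertain inputs, evaluate the model, summarize the output distribution — fits in a few lines; the lognormal dose and uniform potency below are invented for illustration:

```python
import random
import statistics

def risk_model(dose, potency):
    # Toy deterministic model: risk = exposure dose x potency factor.
    return dose * potency

rng = random.Random(0)
# Monte Carlo propagation: draw inputs from their uncertainty distributions,
# run the model once per draw, and summarize the resulting risk distribution.
samples = [
    risk_model(rng.lognormvariate(0.0, 0.5), rng.uniform(0.01, 0.03))
    for _ in range(10_000)
]
median = statistics.median(samples)
p95 = sorted(samples)[int(0.95 * len(samples))]
```

Reporting percentiles of the output distribution (rather than a single point estimate) is exactly what makes Monte Carlo propagation the robust choice the abstract describes.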

20.
In this paper a tool, MENU, is presented, with which demonstration packages can be easily constructed. The teacher designs the set-up of the package by editing a demonstration specification file, containing both commands that tell MENU to display frames to the end-user or to execute tasks, and the text of the frames. The text contains explanations for the end-user together with the options they can choose; MENU takes care that the corresponding actions are executed. Two image analysis packages, one about CT and one about gated cardiac bloodpool scintigraphy, are presented as examples of the use of MENU. It is concluded that with MENU, (existing) programs can be turned into demonstration packages very easily and efficiently. MENU proves to be a worthwhile tool for educational purposes.
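The idea of a demonstration specification file can be illustrated with a hypothetical mini-format; MENU's actual command syntax is not reproduced here:

```python
# A made-up specification format: FRAME lines introduce a screen for the
# end-user, OPTION lines list the numbered choices they can pick.
spec = """\
FRAME Welcome to the CT demo
OPTION 1 Show slice
OPTION 2 Quit
"""

def parse_spec(text):
    frames, current = [], None
    for line in text.splitlines():
        kind, _, rest = line.partition(" ")
        if kind == "FRAME":
            current = {"title": rest, "options": []}
            frames.append(current)
        elif kind == "OPTION" and current is not None:
            num, _, label = rest.partition(" ")
            current["options"].append((int(num), label))
    return frames

frames = parse_spec(spec)
```

Separating the demonstration's content into a plain editable file is what lets a teacher restructure the package without touching the underlying programs.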


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号