Similar Documents
20 similar documents found (search time: 31 ms)
1.

Background  

Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke.

2.
Modelling in systems biology often involves the integration of component models into larger composite models. How to do this systematically and efficiently is a significant challenge: coupling of components can be unidirectional or bidirectional, and of variable strengths. We adapt the waveform relaxation (WR) method for parallel computation of ODEs as a general methodology for computing systems of linked submodels. Four test cases are presented: (i) a cascade of unidirectionally and bidirectionally coupled harmonic oscillators, (ii) deterministic and stochastic simulations of calcium oscillations, (iii) single cell calcium oscillations showing complex behaviour such as periodic and chaotic bursting, and (iv) a multicellular calcium model for a cell plate of hepatocytes. We conclude that WR provides a flexible means to deal with multitime-scale computation and model heterogeneity. Global solutions over time can be captured independently of the solution techniques for the individual components, which may be distributed in different computing environments.
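The waveform-relaxation idea in this abstract can be illustrated with a minimal sketch: each submodel is integrated over the whole time window against the other submodel's previously computed waveform, and the sweep repeats until the waveforms stop changing. The function names and the toy coupled-decay system below are illustrative, not from the paper.

```python
def euler_solve(f, y0, other_wave, dt, n):
    """Integrate dy/dt = f(y, y_other) with forward Euler over n steps,
    treating the other submodel's waveform as a known, fixed input."""
    ys = [y0]
    for k in range(n):
        ys.append(ys[-1] + dt * f(ys[-1], other_wave[k]))
    return ys

def waveform_relaxation(f1, f2, y1_0, y2_0, dt, n, sweeps=50, tol=1e-10):
    # Initial guess: hold each waveform constant at its initial value.
    w1, w2 = [y1_0] * (n + 1), [y2_0] * (n + 1)
    for _ in range(sweeps):
        # Each submodel could use its own solver and even its own machine;
        # only the waveforms are exchanged between sweeps.
        new_w1 = euler_solve(f1, y1_0, w2, dt, n)
        new_w2 = euler_solve(f2, y2_0, w1, dt, n)
        err = max(abs(a - b) for a, b in zip(new_w1 + new_w2, w1 + w2))
        w1, w2 = new_w1, new_w2
        if err < tol:   # waveforms have stopped changing
            break
    return w1, w2

# Two bidirectionally (weakly) coupled linear decays as a toy system.
w1, w2 = waveform_relaxation(
    lambda y, other: -y + 0.1 * other,
    lambda y, other: -y + 0.1 * other,
    1.0, 0.5, dt=0.01, n=100)
```

For weakly coupled subsystems this Gauss-Jacobi-style sweep converges quickly; strongly coupled components generally need more sweeps or windowing.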

3.
Cactus Tools for Grid Applications
Cactus is an open source problem solving environment designed for scientists and engineers. Its modular structure facilitates parallel computation across different architectures and collaborative code development between different groups. The Cactus Code originated in the academic research community, where it has been developed and used over many years by a large international collaboration of physicists and computational scientists. We discuss here how the intensive computing requirements of physics applications now using the Cactus Code encourage the use of distributed and metacomputing, and detail how its design makes it an ideal application test-bed for Grid computing. We describe the development of tools, and the experiments which have already been performed in a Grid environment with Cactus, including distributed simulations, remote monitoring and steering, and data handling and visualization. Finally, we discuss how Grid portals, such as those already developed for Cactus, will open the door to global computing resources for scientific users.

4.
Metaheuristics are gaining increasing recognition in many research areas, computational systems biology among them. Recent advances in metaheuristics can be helpful in locating the vicinity of the global solution in reasonable computation times, with Differential Evolution (DE) being one of the most popular methods. However, for most realistic applications, DE still requires excessive computation times. With the advent of Cloud Computing, effortless access to a large number of distributed resources has become more feasible, and new distributed frameworks, like Spark, have been developed to deal with large-scale computations on commodity clusters and cloud resources. In this paper we propose a parallel implementation of an enhanced DE using Spark. The proposal drastically reduces the execution time by including a selected local search and exploiting the available distributed resources. The performance of the proposal has been thoroughly assessed using challenging parameter estimation problems from the domain of computational systems biology. Two different platforms have been used for the evaluation: a local cluster and the Microsoft Azure public cloud. Additionally, it has also been compared with other parallel approaches: another cloud-based solution (a MapReduce implementation) and a traditional HPC solution (an MPI implementation).
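As a rough illustration of the underlying method, here is a minimal single-machine sketch of classic DE/rand/1/bin; the paper's Spark-based parallelism and enhanced local search are not reproduced, and all names and parameter values are illustrative.

```python
import random

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=200, seed=1):
    """Classic DE/rand/1/bin with bound clamping and greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: three distinct members other than the target.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)   # force at least one mutated component
            trial = [
                min(max(pop[a][j] + F * (pop[b][j] - pop[c][j]),
                        bounds[j][0]), bounds[j][1])
                if (rng.random() < CR or j == jrand) else pop[i][j]
                for j in range(dim)
            ]
            tc = cost(trial)
            if tc <= costs[i]:           # greedy selection
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

# Toy surrogate for a parameter-estimation objective: a shifted sphere.
best_x, best_f = differential_evolution(
    lambda v: sum((vi - 1.0) ** 2 for vi in v),
    bounds=[(-5.0, 5.0)] * 3)
```

In an island-model parallelization such as the Spark one described above, each worker would run a loop like this on its own subpopulation and periodically exchange migrants.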

5.
Evolving in sync with the computation revolution over the past 30 years, computational biology has emerged as a mature scientific field. While the field has made major contributions toward improving scientific knowledge and human health, individual computational biology practitioners at various institutions often languish in career development. As optimistic biologists passionate about the future of our field, we propose solutions for both eager and reluctant individual scientists, institutions, publishers, funding agencies, and educators to fully embrace computational biology. We believe that in order to pave the way for the next generation of discoveries, we need to improve recognition for computational biologists and better align pathways of career success with pathways of scientific progress. With 10 outlined steps, we call on all adjacent fields to move away from the traditional individual, single-discipline investigator research model and embrace multidisciplinary, data-driven, team science.

Do you want to attract computational biologists to your project or to your department? Despite the major contributions of computational biology, those attempting to bridge the interdisciplinary gap often languish in career advancement, publication, and grant review. Here, sixteen computational biologists around the globe present "A field guide to cultivating computational biology," focusing on solutions.

Biology in the digital era requires computation and collaboration. A modern research project may include multiple model systems, use multiple assay technologies, collect varying data types, and require complex computational strategies, which together make effective design and execution difficult or impossible for any individual scientist. While some labs, institutions, funding bodies, publishers, and other educators have already embraced a team science model in computational biology and thrived [17], others who have not yet fully adopted it risk severely lagging behind the cutting edge. We propose a general solution: “deep integration” between biology and the computational sciences. Many different collaborative models can yield deep integration, and different problems require different approaches (Fig 1).

Fig 1. Supporting interdisciplinary team science will accelerate biological discoveries. Scientists who have little exposure to different fields build silos, in which they perform science without external input. To solve hard problems and to extend your impact, collaborate with diverse scientists, communicate effectively, recognize the importance of core facilities, and embrace research parasitism. In biologically focused parasitism, wet lab biologists use existing computational tools to solve problems; in computationally focused parasitism, primarily dry lab biologists analyze publicly available data. Both strategies maximize the use and societal benefit of scientific data.

In this article, we define computational science extremely broadly to include all quantitative approaches such as computer science, statistics, machine learning, and mathematics. We also define biology broadly, including any scientific inquiry pertaining to life and its many complications.
A harmonious deep integration between biology and computer science requires action. We outline 10 immediate calls to action in this article and aim our message directly at individual scientists, institutions, funding agencies, and publishers, in an attempt to shift perspectives and enable action toward accepting and embracing computational biology as a mature, necessary, and inevitable discipline (Box 1).

Box 1. Ten calls to action for individual scientists, funding bodies, publishers, and institutions to cultivate computational biology. Many actions require increased funding support, while others require a perspective shift. For those actions that require funding, we believe convincing the community of need is the first step toward agencies and systems allocating sufficient support.
  1. Respect collaborators’ specific research interests and motivations
     Problem: Researchers face conflicts when their goals do not align with those of collaborators. For example, projects with routine analyses provide little benefit for computational biologists.
     Solution: Explicit discussion about interests, expertise, and goals at project onset.
     Opportunity: Clearly defined expectations identify gaps and provide commitment to mutual benefit.
  2. Seek necessary input during project design and throughout the project life cycle
     Problem: Modern research projects require multiple experts spanning the project’s complexity.
     Solution: Engage complementary scientists with necessary expertise throughout the entire project life cycle.
     Opportunity: Better designed and controlled studies with higher likelihood for success.
  3. Provide and preserve budgets for computational biologists’ work
     Problem: The perception that analysis is “free” leads to collaborator budget cuts.
     Solution: When budget cuts are necessary, ensure that they are spread evenly.
     Opportunity: More accurate, reproducible, and trustworthy computational analyses.
  4. Downplay publication author order as an evaluation metric for computational biologists
     Problem: Computational biologist roles on publications are poorly understood and undervalued.
     Solution: Journals provide more equitable opportunities; funding bodies and institutions improve understanding of the importance of team science; scientists educate each other.
     Opportunity: Engage more computational biologist collaborators and provide opportunities for more high-impact work.
  5. Value software as an academic product
     Problem: Software is relatively undervalued and can end up poorly maintained and supported, wasting the time put into its creation.
     Solution: Scientists cite software, and funding bodies provide more software funding opportunities.
     Opportunity: More high-quality, maintainable biology software will save time, reduce reimplementation, and increase analysis reproducibility.
  6. Establish academic structures and review panels that specifically reward team science
     Problem: Current mechanisms do not consistently reward multidisciplinary work.
     Solution: Separate evaluation structures to better align peer review to reward indicators of team science.
     Opportunity: More collaboration to attack complex multidisciplinary problems.
  7. Develop and reward cross-disciplinary training and mentoring
     Problem: Academic labs and institutions are often insufficiently equipped to provide training to tackle the next generation of biological problems, which require computational skills.
     Solution: Create better training programs aligned to necessary on-the-job skills with an emphasis on communication, encourage wet/dry co-mentorship, and engage younger students to pursue computational biology.
     Opportunity: Interdisciplinary students uncover important insights in their own data.
  8. Support computing and experimental infrastructure to empower computational biologists
     Problem: Individual computational labs often fund suboptimal cluster computing systems and lack access to data generation facilities.
     Solution: Institutions can support centralized compute and engage core facilities to provide data services.
     Opportunity: Time and cost savings for often overlooked administrative tasks.
  9. Provide incentives and mechanisms to share open data to empower discovery through reanalysis
     Problem: Data are often siloed and have untapped potential.
     Solution: Provide institutional data storage with standardized identifiers, and provide separate funding mechanisms and publishing venues for data reuse.
     Opportunity: Foster a new breed of researchers, “research parasites,” who will integrate multimodal data and enhance mechanistic insights.
  10. Consider infrastructural, ethical, and cultural barriers to clinical data access
     Problem: Identifiable health data, which include sensitive information that must be kept hidden, are distributed and disorganized, and thus underutilized.
     Solution: Leadership must enforce policies to share deidentified data with interoperable metadata identifiers.
     Opportunity: Derive new insights from multimodal data integration and build datasets with increased power to make biological discoveries.

6.
Patient-specific biomechanical modeling of atherosclerotic arteries has the potential to aid clinicians in characterizing lesions and determining optimal treatment plans. To attain high levels of accuracy, recent models use medical imaging data to determine plaque component boundaries in three dimensions, and fluid–structure interaction is used to capture mechanical loading of the diseased vessel. As the plaque components and vessel wall are often highly complex in shape, constructing a suitable structured computational mesh is very challenging and can require a great deal of time. Models based on unstructured computational meshes require relatively less time to construct and are capable of accurately representing plaque components in three dimensions. These models unfortunately require additional computational resources and computing time for accurate and meaningful results. A two-stage modeling strategy based on unstructured computational meshes is proposed to achieve a reasonable balance between meshing difficulty and computational resource and time demand. In this method, a coarse-grained simulation of the full arterial domain is used to guide and constrain a fine-scale simulation of a smaller region of interest within the full domain. Results for a patient-specific carotid bifurcation model demonstrate that the two-stage approach can afford a large savings in both time for mesh generation and time and resources needed for computation. The effects of solid and fluid domain truncation were explored, and were shown to minimally affect accuracy of the stress fields predicted with the two-stage approach.

7.
Reverse computation is presented here as an important future direction in addressing the challenge of fault tolerant execution on very large cluster platforms for parallel computing. As the scale of parallel jobs increases, traditional checkpointing approaches suffer scalability problems ranging from computational slowdowns to high congestion at the persistent stores for checkpoints. Reverse computation can overcome such problems and is also better suited for parallel computing on newer architectures with smaller, cheaper or energy-efficient memories and file systems. Initial evidence for the feasibility of reverse computation in large systems is presented with detailed performance data from a particle (ideal gas) simulation scaling to 65,536 processor cores and 950 accelerators (GPUs). Reverse computation is observed to deliver very large gains relative to checkpointing schemes when nodes rely on their host processors/memory to tolerate faults at their accelerators. A comparison between reverse computation and checkpointing with measurements such as cache miss ratios, TLB misses and memory usage indicates that reverse computation is hard to ignore as a future alternative to be pursued in emerging architectures.
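The core trick behind reverse computation can be sketched in a few lines: constructive updates (like `+=`) are undone by running inverse code, and only destructive updates (like taking a maximum) need a few saved bits, so rollback avoids full-state checkpoints. The toy class below is illustrative, not the paper's simulator.

```python
class ReversibleCounter:
    """Toy optimistic-simulation state: events are rolled back by inverse
    code plus a tiny undo record, instead of restoring a full checkpoint."""
    def __init__(self):
        self.count = 0
        self.peak = 0

    def event(self, delta):
        self.count += delta            # constructive: invertible for free
        if self.count > self.peak:     # destructive: save old value + 1 bit
            record = (delta, 1, self.peak)
            self.peak = self.count
        else:
            record = (delta, 0, None)
        return record                  # a few bytes, not a state snapshot

    def undo(self, record):
        delta, clobbered, old_peak = record
        if clobbered:                  # restore the clobbered maximum
            self.peak = old_peak
        self.count -= delta            # run the inverse of the += update

sim = ReversibleCounter()
log = [sim.event(d) for d in (3, -1, 5)]   # forward execution
for rec in reversed(log):                  # rollback, newest event first
    sim.undo(rec)
```

The undo log here grows by a constant per event, which is the memory-footprint advantage over periodically serializing the whole state.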

8.

Background  

An increasing number of scientific research projects require access to large-scale computational resources. This is particularly true in the biological field, whether to facilitate the analysis of large high-throughput data sets, or to perform large numbers of complex simulations – a characteristic of the emerging field of systems biology.

9.
A mechanistic understanding of robust self-assembly and repair capabilities of complex systems would have enormous implications for basic evolutionary developmental biology as well as for transformative applications in regenerative biomedicine and the engineering of highly fault-tolerant cybernetic systems. Molecular biologists are working to identify the pathways underlying the remarkable regenerative abilities of model species that perfectly regenerate limbs, brains, and other complex body parts. However, a profound disconnect remains between the deluge of high-resolution genetic and protein data on pathways required for regeneration, and the desired spatial, algorithmic models that show how self-monitoring and growth control arise from the synthesis of cellular activities. This barrier to progress in the understanding of morphogenetic controls may be breached by powerful techniques from the computational sciences: using non-traditional modeling approaches to reverse-engineer systems such as planaria, flatworms with a complex body plan and nervous system that are able to regenerate any body part after traumatic injury. Currently, the involvement of experts from outside of molecular genetics is hampered by the specialist literature of molecular developmental biology: impactful collaborations across such different fields require that review literature be available that presents the key functional capabilities of important biological model systems while abstracting away from the often irrelevant and confusing details of specific genes and proteins. To facilitate modeling efforts by computer scientists, physicists, engineers, and mathematicians, we present a different kind of review of planarian regeneration. Focusing on the main patterning properties of this system, we review what is known about the signal exchanges that occur during regenerative repair in planaria and the cellular mechanisms that are thought to underlie them.
By establishing an engineering-like style for reviews of the molecular developmental biology of biomedically important model systems, significant fresh insights and quantitative computational models will be developed by new collaborations between biology and the information sciences.

10.
The emerging field of systems biology seeks to develop novel approaches to integrate heterogeneous data sources for effective analysis of complex living systems. Systemic studies of mitochondria have generated a large number of proteomic data sets in numerous species, including yeast, plant, mouse, rat, and human. Beyond component identification, mitochondrial proteomics is recognized as a powerful tool for diagnosing and characterizing complex diseases associated with these organelles. Various proteomic techniques for isolation and purification of proteins have been developed; each tailored to preserve protein properties relevant to study of a particular disease type. Examples of such techniques include immunocapture, which minimizes loss of posttranslational modification, 4-iodobutyltriphenylphosphonium labeling, which quantifies protein redox states, and surface-enhanced laser desorption ionization-time-of-flight mass spectrometry, which allows sequence-specific binding. With the rapidly increasing number of discovered molecular components, computational models are also being developed to facilitate the organization and analysis of such data. Computational models of mitochondria have been accomplished with top-down and bottom-up approaches and have been steadily improved in size and scope. Results from top-down methods tend to be more qualitative but are unbiased by prior knowledge about the system. Bottom-up methods often require the incorporation of a large amount of existing data but provide more rigorous and quantitative information, which can be used as hypotheses for subsequent experimental studies. Successes and limitations of the studies reviewed here provide opportunities and challenges that must be addressed to facilitate the application of systems biology to larger systems.

Keywords: constraint-based modeling; kinetics-based modeling; data integration; standards; bioinformatics

11.
Ma B, Nussinov R. Physical Biology 2004, 1(3-4): P23-P26
Computations are being integrated into biological research at an increasingly fast pace. This has not only changed the way in which biological information is managed; it has also changed the way in which experiments are planned in order to obtain information from nature. Can experiments and computations be full partners? Computational chemistry has expanded over the years, proceeding from computations of a hydrogen molecule toward the challenging goal of systems biology, which attempts to handle the entire living cell. Applying theories from ab initio quantum mechanics to simplified models, the virtual worlds explored by computations provide replicas of real-world phenomena. At the same time, the virtual worlds can affect our perception of the real world. Computational biology targets a world of complex organization, for which a unified theory is unlikely to exist. A computational biology model, even if it has a clear physical or chemical basis, may not reduce to physics and chemistry. At the molecular level, computational biology and experimental biology have already been partners, mutually benefiting from each other. For the perception to become reality, computation and experiment should be united as full partners in biological research.

12.
Systems biology is a rapidly expanding field of research and is applied in a number of biological disciplines. In animal sciences, omics approaches are increasingly used, yielding vast amounts of data, but systems biology approaches to extract understanding from these data of biological processes and animal traits are not yet frequently used. This paper aims to explain what systems biology is and which areas of animal sciences could benefit from systems biology approaches. Systems biology aims to understand whole biological systems working as a unit, rather than investigating their individual components. Therefore, systems biology can be considered a holistic approach, as opposed to reductionism. The recently developed 'omics' technologies enable biological sciences to characterize the molecular components of life with ever increasing speed, yielding vast amounts of data. However, biological functions do not follow from the simple addition of the properties of system components, but rather arise from the dynamic interactions of these components. Systems biology combines statistics, bioinformatics and mathematical modeling to integrate and analyze large amounts of data in order to extract a better understanding of the biology from these huge data sets and to predict the behavior of biological systems. A 'system' approach and mathematical modeling in biological sciences are not new in themselves, as they were used in biochemistry, physiology and genetics long before the name systems biology was coined. However, the present combination of mass biological data and of computational and modeling tools is unprecedented and truly represents a major paradigm shift in biology. Significant advances have been made using systems biology approaches, especially in the field of bacterial and eukaryotic cells and in human medicine. Similarly, progress is being made with 'system approaches' in animal sciences, providing exciting opportunities to predict and modulate animal traits.

13.
Both distributed systems and multicore systems are difficult programming environments. Although the expert programmer may be able to carefully tune these systems to achieve high performance, the non-expert may struggle. We argue that high level abstractions are an effective way of making parallel computing accessible to the non-expert. An abstraction is a regularly structured framework into which a user may plug in simple sequential programs to create very large parallel programs. By virtue of a regular structure and declarative specification, abstractions may be materialized on distributed, multicore, and distributed multicore systems with robust performance across a wide range of problem sizes. In previous work, we presented the All-Pairs abstraction for computing on distributed systems of single CPUs. In this paper, we extend All-Pairs to multicore systems, and introduce the Wavefront and Makeflow abstractions, which represent a number of problems in economics and bioinformatics. We demonstrate good scaling of both abstractions up to 32 cores on one machine and hundreds of cores in a distributed system.
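A minimal sequential sketch of the Wavefront abstraction mentioned above: the recurrence R[i][j] = F(R[i-1][j], R[i][j-1], R[i-1][j-1]) is evaluated by sweeping anti-diagonals, and all cells on one diagonal are mutually independent, which is exactly the parallelism the abstraction exploits. The function names and the toy recurrence are illustrative, not from the paper.

```python
def wavefront(n, m, f, top, left, corner):
    """Fill R[i][j] = f(R[i-1][j], R[i][j-1], R[i-1][j-1]) by sweeping
    anti-diagonals (i + j = d); all cells on one diagonal are independent,
    so a parallel runtime may compute each diagonal concurrently."""
    R = [[None] * (m + 1) for _ in range(n + 1)]
    R[0][0] = corner
    for j in range(1, m + 1):
        R[0][j] = top[j - 1]      # boundary row
    for i in range(1, n + 1):
        R[i][0] = left[i - 1]     # boundary column
    for d in range(2, n + m + 1):
        for i in range(max(1, d - m), min(n, d - 1) + 1):
            j = d - i
            R[i][j] = f(R[i - 1][j], R[i][j - 1], R[i - 1][j - 1])
    return R

# Toy recurrence: counting monotone lattice paths (binomial coefficients).
R = wavefront(3, 3, lambda up, left, diag: up + left, [1, 1, 1], [1, 1, 1], 1)
```

In the actual abstraction the user supplies only the sequential function `f`; the runtime handles the diagonal scheduling and data movement.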

14.
Buiu C, Arsene O, Cipu C, Patrascu M. Bio Systems 2011, 103(3): 442-447
A P system represents a distributed and parallel bio-inspired computing model in which basic data structures are multi-sets or strings. Numerical P systems have been recently introduced and they use numerical variables and local programs (or evolution rules), usually in a deterministic way. They may find interesting applications in areas such as computational biology, process control or robotics. The first simulator of numerical P systems (SNUPS) has been designed, implemented and made available to the scientific community by the authors of this paper. SNUPS allows a wide range of applications, from modeling and simulation of ordinary differential equations, to the use of membrane systems as computational blocks of cognitive architectures, and as controllers for autonomous mobile robots. This paper describes the functioning of a numerical P system and presents an overview of SNUPS capabilities together with an illustrative example. Availability: SNUPS is freely available to researchers as a standalone application and may be downloaded from a dedicated website, http://snups.ics.pub.ro/, which includes a user manual and sample membrane structures.
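A much-simplified, deterministic sketch of one numerical P system step (not SNUPS itself): each program computes a production value from its input variables, the value is distributed to target variables in proportion to repartition coefficients, and consumed variables are reset to zero. The data layout below is an assumption made for illustration.

```python
def np_step(variables, programs):
    """One deterministic step of a (single-membrane) numerical P system.
    Each program is (inputs, production_fn, [(target, coeff), ...]):
    the production value is split among targets proportionally to coeff,
    and the consumed input variables are reset to zero."""
    produced = {v: 0.0 for v in variables}
    consumed = set()
    for inputs, fn, repartition in programs:
        value = fn(*[variables[x] for x in inputs])
        total = sum(c for _, c in repartition)
        for target, c in repartition:
            produced[target] += value * c / total
        consumed.update(inputs)
    new_vars = dict(variables)
    for v in consumed:           # consumed variables are zeroed...
        new_vars[v] = 0.0
    for v, add in produced.items():
        new_vars[v] += add       # ...then receive their repartition share
    return new_vars

# One program: 2*x1 + y is produced and split 1:3 between x1 and y.
state = {"x1": 2.0, "y": 4.0}
prog = [(("x1", "y"), lambda x, y: 2 * x + y, [("x1", 1), ("y", 3)])]
state = np_step(state, prog)
```

Iterating `np_step` yields the variable trajectories that, in applications like robot control, serve as the controller outputs.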

15.
A major goal of systems biology is to understand how organism-level behavior arises from a myriad of molecular interactions. Often this involves complex sets of rules describing interactions among a large number of components. As an alternative, we have developed a simple, macro-level model to describe how chronic temperature stress affects reproduction in C. elegans. Our approach uses fundamental engineering principles, together with a limited set of experimentally derived facts, and provides quantitatively accurate predictions of performance under a range of physiologically relevant conditions. We generated detailed time-resolved experimental data to evaluate the ability of our model to describe the dynamics of C. elegans reproduction. We find considerable heterogeneity in responses of individual animals to heat stress, which can be understood as modulation of a few processes and may represent a strategy for coping with the ever-changing environment. Our experimental results and model provide quantitative insight into the breakdown of a robust biological system under stress and suggest, surprisingly, that the behavior of complex biological systems may be determined by a small number of key components.

16.
Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element to teach genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.

17.
The recent revolution in digital technologies and information processing methods presents important opportunities to transform the way optical imaging is performed, particularly toward improving the throughput of microscopes while at the same time reducing their relative cost and complexity. Lensfree computational microscopy is rapidly emerging toward this end, and by discarding lenses and other bulky optical components of conventional imaging systems, and relying on digital computation instead, it can achieve both reflection and transmission mode microscopy over a large field-of-view within compact, cost-effective and mechanically robust architectures. Such high throughput and miniaturized imaging devices can provide a complementary toolset for telemedicine applications and point-of-care diagnostics by facilitating complex and critical tasks such as cytometry and microscopic analysis of e.g., blood smears, Pap tests and tissue samples. In this article, the basics of these lensfree microscopy modalities will be reviewed, and their clinically relevant applications will be discussed.

18.
The analysis of molecular motion starting from extensive sampling of molecular configurations remains an important and challenging task in computational biology. Existing methods require a significant amount of time to extract the most relevant motion information from such data sets. In this work, we provide a practical tool for molecular motion analysis. The proposed method builds upon the recent ScIMAP (Scalable Isomap) method, which, by using proximity relations and dimensionality reduction, has been shown to reliably extract from simulation data a few parameters that capture the main, linear and/or nonlinear, modes of motion of a molecular system. The results we present in the context of protein folding reveal that the proposed method characterizes the folding process essentially as well as ScIMAP. At the same time, by projecting the simulation data and computing proximity relations in a low-dimensional Euclidean space, it renders such analysis computationally practical. In many instances, the proposed method reduces the computational cost from several CPU months to just a few CPU hours, making it possible to analyze extensive simulation data in a matter of a few hours using only a single processor. These results establish the proposed method as a reliable and practical tool for analyzing motions of considerably large molecular systems and proteins with complex folding mechanisms.
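To give a flavor of distance-only projection into a low-dimensional Euclidean space, here is a FastMap-style one-axis embedding computed purely from pairwise distances; this is a stand-in illustration, not the paper's actual algorithm, and all names and the toy data are hypothetical.

```python
import math

def fastmap_axis(points, dist, a, b):
    """Project each point onto the line through pivot points a and b using
    only pairwise distances (the law of cosines), so no explicit coordinates
    of the original high-dimensional space are required."""
    dab = dist(points[a], points[b])
    return [
        (dist(points[a], p) ** 2 + dab ** 2 - dist(points[b], p) ** 2)
        / (2 * dab)
        for p in points
    ]

# Toy "conformations" as 2-D coordinates with Euclidean distance; in a real
# application dist would be e.g. an RMSD between molecular configurations.
euclid = lambda p, q: math.dist(p, q)
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (1.0, 1.0)]
coords = fastmap_axis(pts, euclid, a=0, b=2)
```

Repeating this with residual distances yields additional axes, giving a cheap low-dimensional Euclidean embedding in which proximity relations can be computed quickly.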

19.
20.
Computational systems biology is an emerging multidisciplinary field that aims to integrate massive data sets in order to build the complex networks of interactions within biological systems. Integrating data and building models requires the development of suitable mathematical methods and software tools, which is also the main task of computational systems biology. Models of biological systems help us understand, at the systems level, the intrinsic functions and properties of organisms. At the same time, the application of biological network models in drug development is receiving increasing attention from pharmaceutical companies and drug discovery institutions, for example in predicting specific drug targets and in assessing drug toxicity. This article briefly introduces the common networks and computational models of computational systems biology and the research methods used to build such models, and describes their role in modeling and analysis as well as the open problems and challenges they face.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号