Similar Articles
20 similar articles found.
1.
In an era of rapid global change, our ability to understand and predict Earth's natural systems is lagging behind our ability to monitor and measure changes in the biosphere. Bottlenecks to informing models with observations have reduced our capacity to fully exploit the growing volume and variety of available data. Here, we take a critical look at the information infrastructure that connects ecosystem modeling and measurement efforts, and propose a roadmap to community cyberinfrastructure development that can reduce the divisions between empirical research and modeling and accelerate the pace of discovery. A new era of data-model integration requires investment in accessible, scalable, and transparent tools that integrate the expertise of the whole community, including both modelers and empiricists. This roadmap focuses on five key opportunities for community tools: the underlying foundations of community cyberinfrastructure; data ingest; calibration of models to data; model-data benchmarking; and data assimilation and ecological forecasting. This community-driven approach is key to meeting the pressing needs of science and society in the 21st century.

2.
What is a good (useful) mathematical model in animal science? For models constructed for prediction purposes, the question of model adequacy (usefulness) has traditionally been tackled by statistical analysis applied to observed experimental data relative to model-predicted variables. However, little attention has been paid to analytic tools that exploit the mathematical properties of the model equations. For example, in the context of model calibration, before attempting a numerical estimation of the model parameters, we might want to know whether we have any chance of success in estimating a unique best value of the model parameters from the available measurements. This question of uniqueness is referred to as structural identifiability: a mathematical property defined on the sole basis of the model structure within a hypothetical ideal experiment determined by a setting of model inputs (stimuli) and observable variables (measurements). Structural identifiability analysis applied to dynamic models described by ordinary differential equations (ODEs) is common practice in control engineering and system identification. This analysis demands mathematical technicalities that are beyond the typical academic background of animal science, which might explain the lack of pervasiveness of identifiability analysis in animal science modelling. To fill this gap, in this paper we address the analysis of structural identifiability from a practitioner's perspective by capitalizing on the use of dedicated software tools. Our objectives are (i) to provide a comprehensive explanation of the notion of structural identifiability for the animal science modelling community, (ii) to assess the relevance of identifiability analysis in animal science modelling and (iii) to motivate the community to use identifiability analysis in modelling practice (when the identifiability question is relevant). We focus our study on ODE models. Using illustrative examples that include published mathematical models describing lactation in cattle, we show how structural identifiability analysis can contribute to advancing mathematical modelling in animal science towards the production of useful models and, moreover, highly informative experiments via optimal experiment design. Rather than attempting to impose systematic identifiability analysis on the modelling community during model development, we wish to open a window towards the discovery of a powerful tool for model construction and experiment design.
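To make the identifiability notion concrete, here is a minimal sketch of our own (not from the paper), assuming Python with SymPy: a deliberately non-identifiable decay model in which the observed output depends on two rate parameters only through their product, so even ideal, noise-free data cannot recover them individually.

```python
# Hypothetical model: dx/dt = -(k1*k2)*x, x(0) = x0, with y(t) = x(t) observed.
import sympy as sp

t = sp.symbols("t", nonnegative=True)
k1, k2, x0 = sp.symbols("k1 k2 x0", positive=True)
x = sp.Function("x")

sol = sp.dsolve(sp.Eq(x(t).diff(t), -k1 * k2 * x(t)), x(t), ics={x(0): x0})
y = sol.rhs
print(y)  # x0*exp(-k1*k2*t): k1 and k2 enter the output only as their product

# Two parameter pairs with the same product produce identical outputs, so
# (k1, k2) is structurally non-identifiable while the product k1*k2 is.
assert sp.simplify(y.subs({k1: 2, k2: 3}) - y.subs({k1: 6, k2: 1})) == 0
```

Dedicated identifiability software automates this kind of conclusion for much larger ODE systems, which is the practitioner route the paper advocates.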

3.
Dynamic Global Vegetation Models (DGVMs) provide a state-of-the-art process-based approach to study the complex interplay between vegetation and its physical environment. For example, they help to predict how terrestrial plants interact with climate, soils, disturbance and competition for resources. We argue that there is untapped potential for the use of DGVMs in ecological and ecophysiological research. One fundamental barrier to realizing this potential is that many researchers with relevant expertise (ecology, plant physiology, soil science, etc.) lack access to the technical resources, or awareness of the research potential, of DGVMs. Here we present the Land Sites Platform (LSP): new software that facilitates single-site simulations with the Functionally Assembled Terrestrial Ecosystem Simulator, an advanced DGVM coupled with the Community Land Model. The LSP includes a Graphical User Interface and an Application Programming Interface, which improve the user experience and lower the technical thresholds for installing these model architectures and setting up model experiments. The software is distributed via version-controlled containers; researchers and students can run simulations directly on their personal computers or servers, with relatively low hardware requirements, and on different operating systems. Version 1.0 of the LSP supports site-level simulations. We provide input data for 20 established geo-ecological observation sites in Norway and workflows to add generic sites from public global datasets. The LSP makes standard model experiments with default data easily achievable (e.g., for educational or introductory purposes) while retaining flexibility for more advanced scientific uses. We further provide tools to visualize the model input and output, including simple examples to relate predictions to local observations. The LSP improves access to land surface and DGVM modelling as a building block of community cyberinfrastructure that may inspire new avenues for mechanistic ecosystem research across disciplines.

4.
Background: Mapping the distribution of schistosomiasis is essential to determine where control programs should operate, but because it is impractical to assess infection prevalence in every potentially endemic community, model-based geostatistics (MBG) is increasingly being used to predict prevalence and determine intervention strategies.

Conclusions/Significance: Using the current predictive map for Ghana as a spatial decision support tool by aggregating prevalence estimates to the district level was clearly not adequate for guiding the national program, but the alternative of assessing each school in potentially endemic areas of Ghana or elsewhere is not at all feasible; modelling must be a tool complementary to empiric assessments. Thus for practical usefulness, predictive risk mapping should not be thought of as a one-time exercise but must, as in the current study, be an iterative process that incorporates empiric testing and model refining to create updated versions that meet the needs of disease control operational managers.

5.
Resistance to pesticides is an increasing problem in agriculture. Despite practices such as phased use and cycling of ‘orthogonally resistant' agents, resistance remains a major risk to national and global food security. To combat this problem, there is a need both for new approaches to pesticide design and for novel chemical entities themselves. As summarized in this opinion article, a technique termed ‘proteochemometric modelling' (PCM), from the field of chemoinformatics, could aid in the quantification and prediction of resistance that acts via point mutations in the target proteins of an agent. The technique combines information from both the chemical and biological domains to generate bioactivity models across large numbers of ligands as well as protein targets. PCM has previously been validated in prospective, experimental work in medicinal chemistry, and it draws on the growing amount of bioactivity information available in the public domain. Here, two potential applications of proteochemometric modelling to agrochemical data are described, based on previously published examples from the medicinal chemistry literature.
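The defining PCM step is easy to show in code. The sketch below is a generic illustration (not from the article), assuming scikit-learn; the descriptor blocks and activity labels are random placeholders for real chemical fingerprints, protein (or mutant) descriptors and measured bioactivities.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_pairs = 200
ligand_desc = rng.random((n_pairs, 64))    # placeholder ligand descriptors
protein_desc = rng.random((n_pairs, 32))   # placeholder target/mutant descriptors
activity = rng.random(n_pairs) * 5 + 4     # placeholder bioactivity labels

# The PCM idea: one model over the concatenated ligand+protein space, so
# activity can be predicted for new ligand-target (e.g. point-mutant) pairs.
X = np.hstack([ligand_desc, protein_desc])
model = RandomForestRegressor(n_estimators=200, random_state=0)
print(cross_val_score(model, X, activity, cv=5, scoring="r2"))
# On random placeholders the scores hover around zero; with real descriptor
# and bioactivity data this is where predictive signal would appear.
```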

6.
Studies in animal science assessing nutrient and energy efficiency or determining nutrient requirements benefit from exact measurements of body composition or body nutrient contents. These are acquired by standardized dissection or by grinding the body followed by wet chemical analysis, respectively. The two methods do not yield the same type of information, but both are destructive. Harnessing human medical imaging techniques for animal science can enable repeated measurements of individuals over time and reduce the number of animals required for research. Among imaging techniques, dual-energy X-ray absorptiometry (DXA) is particularly promising. However, the measurements obtained with DXA do not perfectly match dissections or chemical analyses, requiring adjustment of the DXA values via calibration equations. Several calibration regressions have been published, but comparative studies of those regression equations, and of whether they are applicable to different data sets, are pending. Thus, it is currently not clear whether existing regression equations can be directly used to convert DXA measurements into chemical values or whether each individual DXA device will require its own calibration. Our study builds prediction equations that relate body composition to the content of single nutrients in growing entire male pigs (BW range 20–100 kg) as determined by both DXA and chemical analyses, with R² ranging between 0.89 for ash and 0.99 for water and CP. Moreover, we show that the chemical composition of the empty body can be satisfactorily determined by DXA scans of carcasses, with the prediction error ranging between 4.3% for CP and 12.6% for ash. Finally, we compare existing prediction equations for pigs of a similar range of BWs with the equations derived from our DXA measurements and evaluate their fit with our chemical analysis data. We found that existing equations for absolute contents that were built using the same DXA beam technology predicted our data more precisely than equations based on different technologies or on percentages of fat and lean mass. This indicates that the creation of generic regression equations that yield reliable estimates of body composition in pigs of different growth stages, sexes and genetic breeds could be achievable in the near future. DXA may be a promising tool for high-throughput phenotyping for genetic studies, because it efficiently measures body composition in a large number and wide array of animals.
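For readers unfamiliar with such calibration equations, the sketch below shows their general form with invented numbers, assuming NumPy and scikit-learn; the study's actual equations relate specific DXA readouts to chemically determined contents of water, CP, fat and ash.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
dxa_lean_kg = rng.uniform(15, 70, 40)                      # hypothetical DXA lean mass
chem_cp_kg = 0.23 * dxa_lean_kg + rng.normal(0, 0.5, 40)   # hypothetical chemical CP

X = dxa_lean_kg.reshape(-1, 1)
reg = LinearRegression().fit(X, chem_cp_kg)
print(f"CP = {reg.coef_[0]:.3f} * DXA_lean + {reg.intercept_:.3f}, "
      f"R^2 = {reg.score(X, chem_cp_kg):.3f}")

# Prediction error in the sense used above: relative deviation from chemistry
pred = reg.predict(X)
print("mean % error:", np.mean(np.abs(pred - chem_cp_kg) / chem_cp_kg) * 100)
```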

7.
Modeling forest ecosystems is a landmark challenge in science, due to the complexity of the processes involved and their importance in predicting future planetary conditions. While there are a number of open-source forest biogeochemistry models, few papers exist detailing the software development approach used to develop these models. This has left many forest biogeochemistry models large, opaque and/or difficult to use, typically implemented in compiled languages for speed. Here, we present a forest biogeochemistry model from the SORTIE-PPA class of models, PPA-SiBGC. Our model is based on the perfect plasticity approximation with simple biogeochemistry compartments and uses empirical vegetation dynamics rather than detailed prognostic processes to drive the estimation of carbon and nitrogen fluxes. This allows our model to be used with traditional forest inventory data, making it widely applicable and simple to parameterize. We detail the conceptual design of the model as well as the software implementation in the R language for statistical computing. Our aim is to provide a useful tool for the biogeochemistry modeling community that demonstrates the importance of vegetation dynamics in biogeochemical models.
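The sketch below is a generic Python illustration (the actual PPA-SiBGC implementation is in R, and all pool sizes and rates here are invented) of the bookkeeping such an empirically driven model performs: observed inventory growth increments drive the carbon pools instead of prognostic physiological processes.

```python
pools = {"live_c": 50.0, "soil_c": 80.0}      # Mg C/ha, hypothetical starting pools
LITTERFALL_FRAC = 0.03                        # fraction of live C shed per year
SOIL_DECAY_K = 0.02                           # fraction of soil C decomposed per year

def step_year(pools, inventory_growth_c):
    """Advance pools one year from an observed (empirical) growth increment."""
    litter = pools["live_c"] * LITTERFALL_FRAC
    resp = pools["soil_c"] * SOIL_DECAY_K      # heterotrophic respiration
    pools["live_c"] += inventory_growth_c - litter
    pools["soil_c"] += litter - resp
    return inventory_growth_c - resp           # net C balance for the year

for growth in [2.1, 1.8, 2.4]:                 # Mg C/ha/yr from plot remeasurements
    print(f"net flux {step_year(pools, growth):+.2f}", pools)
```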

8.
We present the software CDpal that is used to analyze thermal and chemical denaturation data to obtain information on protein stability. The software uses standard assumptions and equations applied to two-state and various types of three-state denaturation models in order to determine thermodynamic parameters. It can analyze denaturation monitored by both circular dichroism and fluorescence spectroscopy and is extremely flexible in terms of input format. Furthermore, it is intuitive and easy to use because of the graphical user interface and extensive documentation. As illustrated by the examples herein, CDpal should be a valuable tool for analysis of protein stability.
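As background for what such fitting involves, here is a hedged sketch of the standard two-state thermal denaturation model (linear folded/unfolded baselines, van't Hoff equilibrium; all parameter values invented), assuming NumPy and SciPy. This shows the standard assumptions the abstract mentions, not CDpal's internal code.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # gas constant, J/(mol*K)

def two_state(T, yf, mf, yu, mu, Tm, dHm):
    """Observed signal as a baseline-weighted mix of folded/unfolded states."""
    K = np.exp(-dHm * (1.0 - T / Tm) / (R * T))   # unfolding equilibrium constant
    frac_u = K / (1.0 + K)                        # fraction unfolded; 0.5 at T = Tm
    return (yf + mf * T) * (1 - frac_u) + (yu + mu * T) * frac_u

T = np.linspace(280, 360, 81)
truth = two_state(T, -20, 0.01, -2, 0.005, 330, 3.0e5)
obs = truth + np.random.default_rng(2).normal(0, 0.2, T.size)  # noisy CD-like trace

popt, _ = curve_fit(two_state, T, obs, p0=[-20, 0, -2, 0, 325, 2e5])
print(f"fitted Tm = {popt[4]:.1f} K, dHm = {popt[5]:.0f} J/mol")
```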

9.
Determining small molecule–target protein interactions is essential for chemical proteomics. One of the most important keys to exploring biological systems in the chemical proteomics field is finding first-class molecular tools. Chemical probes can provide great spatiotemporal control for elucidating the biological functions of proteins as well as for interrogating biological pathways. The invention of bioorthogonal chemistry has revolutionized the field of chemical biology by providing superior chemical tools, and it has been widely used for investigating the dynamics and function of biomolecules in live conditions. Among some 20 different bioorthogonal reactions, tetrazine ligation has been spotlighted as the most advanced bioorthogonal chemistry because of its extremely fast kinetics and higher specificity compared with the others. Therefore, tetrazine ligation has tremendous potential to enhance proteomic research. This review highlights the current status of the tetrazine ligation reaction as a molecular tool for chemical proteomics.

10.
The creation of kernel classification models to categorize unknown data samples at massive scale is an extremely advantageous tool for the scientific community. Excel2SVM, a stand-alone Python mathematical analysis tool, bridges the gap between researchers and computer science with a simple graphical user interface that allows users to examine data and perform maximal-margin classification. The ability to train support vector machines and classify unknown data files is harnessed in this fast and efficient software, granting researchers full access to this complicated, high-level algorithm. Excel2SVM offers the ability to convert data to the proper sparse format and supports a variety of kernel functions along with cost factors/modes, grids, cross-validation, and several other functions. The program works with any type of quantitative data, making Excel2SVM a convenient tool for analyzing a wide variety of input. The software is free and available at www.bioinformatics.org/excel2svm. A link to the software may also be found at www.kernel-machines.org. The software provides a useful graphical user interface that has been shown to produce kernel models with accurate results and data classification through a decision boundary.
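The sketch below shows, with toy data and scikit-learn, the maximal-margin workflow such a GUI wraps: kernel and cost-factor selection by cross-validated grid search, then classification of an unlabeled sample. It illustrates the underlying algorithm, not Excel2SVM's own code.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)              # two toy classes

# Grid over kernel and cost factor C with cross-validation, mirroring the
# kernel/cost/grid/cross-validation options described above.
grid = GridSearchCV(SVC(), {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, f"CV accuracy = {grid.best_score_:.2f}")
print("new sample classified as:", grid.predict([[0.5, 0.5]])[0])
```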

11.
Aim: The aim of this work was to design and evaluate a software tool for analysis of a patient's respiration, with the goal of optimizing the effectiveness of motion management techniques during radiotherapy imaging and treatment.

Materials and methods: A software tool was developed to analyse patient respiratory data files (.vxp) created by the Varian Real-Time Position Management (RPM) system. The software, called RespAnalysis, was created in MATLAB and provides four modules, one each for determining respiration characteristics, providing breathing coaching (biofeedback training), comparing pre- and post-training characteristics and performing a fraction-by-fraction assessment. The modules analyse respiratory traces to determine signal characteristics and specifically use a Sample Entropy algorithm as the key means to quantify breathing irregularity. Simulated respiratory signals, as well as 91 patient RPM traces, were analysed with RespAnalysis to test the viability of using Sample Entropy to predict breathing regularity.

Results: Retrospective assessment of patient data demonstrated that the Sample Entropy metric was a predictor of periodic irregularity in respiration data; however, it was found to be insensitive to amplitude variation. Additional waveform statistics assessing the distribution of signal amplitudes over time, coupled with the Sample Entropy method, were found to be useful in assessing breathing regularity.

Conclusions: The RespAnalysis software tool presented in this work uses the Sample Entropy method to analyse patient respiratory data recorded for motion management purposes in radiation therapy. This is applicable during treatment simulation and during subsequent treatment fractions, providing a way to quantify breathing irregularity as well as assess the need for breathing coaching. It was demonstrated that the Sample Entropy metric correlated with the irregularity of the patient's respiratory motion in terms of periodicity, whilst other metrics, such as percentage deviation of inhale/exhale peak positions, provided insight into respiratory amplitude regularity.
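Since Sample Entropy is the key metric here, a compact implementation of the standard algorithm (Richman and Moorman's SampEn; a generic Python sketch, not RespAnalysis's MATLAB code) makes its behaviour easy to see: a noisy trace scores higher than a clean periodic one.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): -log of the conditional probability that windows matching
    for m points (Chebyshev distance <= r) also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def matches(length):
        w = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(w[:, None] - w[None, :]), axis=2)
        return np.sum(d <= r) - len(w)     # ordered pairs, self-matches excluded

    return -np.log(matches(m + 1) / matches(m))

t = np.linspace(0, 60, 1200)
regular = np.sin(2 * np.pi * t / 4)        # ideal 4-second breathing period
noisy = regular + 0.5 * np.random.default_rng(4).standard_normal(t.size)
print(sample_entropy(regular), sample_entropy(noisy))  # noisy trace scores higher
```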

12.
Recent years have seen a sharp increase in the development of deep learning and artificial intelligence-based molecular informatics. There has been a growing interest in applying deep learning to several subfields, including the digital transformation of synthetic chemistry, extraction of chemical information from the scientific literature, and AI in natural product-based drug discovery. The application of AI to molecular informatics is still constrained by the fact that most of the data used for training and testing deep learning models are not available as FAIR and open data. As open science practices continue to grow in popularity, initiatives which support FAIR and open data as well as open-source software have emerged. It is becoming increasingly important for researchers in the field of molecular informatics to embrace open science and to submit data and software to open repositories. With the advent of open-source deep learning frameworks and cloud computing platforms, academic researchers are now able to deploy and test their own deep learning models with ease. With the development of new and faster hardware for deep learning and the increasing number of initiatives towards digital research data management infrastructures, as well as a culture promoting open data, open source, and open science, AI-driven molecular informatics will continue to grow. This review examines the current state of open data and open algorithms in molecular informatics, as well as ways in which they could be improved in the future.

13.
Biology is an information-driven science. Large-scale data sets from genomics, physiology, population genetics and imaging are driving research at a dizzying rate. Simultaneously, interdisciplinary collaborations among experimental biologists, theorists, statisticians and computer scientists have become the key to making effective use of these data sets. However, too many biologists have trouble accessing and using these electronic data sets and tools effectively. A 'cyberinfrastructure' is a combination of databases, network protocols and computational services that brings people, information and computational tools together to perform science in this information-driven world. This article reviews the components of a biological cyberinfrastructure, discusses current and pending implementations, and notes the many challenges that lie ahead.

14.
With the species composition and/or functioning of many ecosystems currently changing due to anthropogenic drivers, it is important to understand and, ideally, predict how changes in one part of an ecosystem will affect another. Here we assess whether vegetation composition or soil chemistry best predicts the soil microbial community. The above- and below-ground communities and soil chemical properties were studied along a successional gradient from dwarf shrubland (moorland) to deciduous woodland (Betula dominated). The vegetation and soil chemistry were recorded and the soil microbial community (SMC) assessed using phospholipid fatty acid (PLFA) analysis and multiplex terminal restriction fragment length polymorphism (M-TRFLP). Vegetation composition and soil chemistry were used to predict the SMC using co-correspondence analysis and canonical correspondence analysis, and the predictive power of the two analyses was compared. The vegetation composition predicted the soil microbial community at least as well as the soil chemical data. Removing rare plant species from the data set did not improve the predictive power of the vegetation data. The predictive power of the soil chemistry improved when only selected soil variables were used, but which soil variables gave the best prediction varied between the different soil microbial communities being studied (PLFA or bacterial/fungal/archaeal TRFLP). Vegetation composition may represent a more stable ‘summary' of the effects of multiple drivers over time and may thus be a better predictor of the soil microbial community than one-off measurements of soil properties.

15.
Plains Anthropologist, 2013, 58(90): 269–278
Abstract

Amateur and professional archeologists alike have called again and again for more public education, particularly as destruction of archeological sites has accelerated. Several programs focusing on education through certification programs are reviewed. Problems of amateur participation, relevance to the community and university, and publication are discussed. It is concluded that a well-planned program of amateur education and certification can be an effective tool in the creation of a public ethic of conservation. Through such a program, archeology can benefit amateurs and professionals, museums and schools, communities and individuals. In short, it offers the possibility of a true "public archeology."

16.

Background

The 1980s marked the occasion when Geographical Information System (GIS) technology was broadly introduced into the geo-spatial community through the establishment of a strong GIS industry. This technology quickly disseminated across many countries, and has now become established as an important research, planning and commercial tool for a wider community that includes organisations in the public and private health sectors. The broad acceptance of GIS technology and the nature of its functionality have meant that numerous datasets have been created over the past three decades. Most of these datasets have been created independently, and without any structured documentation systems in place. However, search and retrieval systems can only work if there is a mechanism for the existence of datasets to be discovered, and this is where proper metadata creation and management can greatly help. This situation must be addressed through support mechanisms such as Web-based portal technologies, metadata editor tools, automation, metadata standards and guidelines, and collaborative efforts with relevant individuals and organisations. Engagement with data developers or administrators should also include a strategy of identifying the benefits associated with metadata creation and publication.

Findings

The establishment of numerous Spatial Data Infrastructures (SDIs), and other Internet resources, is a testament to the recognition of the importance of supporting good data management and sharing practices across the geographic information community. These resources extend to health informatics in support of research, public services and teaching and learning. This paper identifies many of the resources available to the UK academic health informatics community. It also reveals the reluctance of many spatial data creators across the wider UK academic community to use these resources to create and publish metadata, or to deposit their data in repositories for sharing. The Go-Geo! service is introduced as an SDI developed to provide UK academia with the necessary resources to address the concerns surrounding metadata creation and data sharing. The Go-Geo! portal, the Geodoc metadata editor tool, the ShareGeo spatial data repository, and a range of other support resources are described in detail.

Conclusions

This paper describes a variety of resources available to the health research and public health sectors for managing and sharing their data. The Go-Geo! service is one resource which offers an SDI for the eclectic range of disciplines using GIS in UK academia, including health informatics. The benefits of data management and sharing are immense, and in these times of cost constraints, these resources can be seen as solutions that find cost savings which can be reinvested in more research.

17.
Nutrient and/or organic pollution are key factors influencing stream health, so research has frequently focused on chemical rather than biological analysis. The objective of the present study was to diagnose the chemical and biological health of an urban stream using a chemical multi-metric model, the nutrient pollution index (NPI), and a biological multi-metric fish model, the index of biotic integrity (IBI). A seven-year dataset, from 2008 to 2014, was used for the health assessments. The nutrient regime (TN, TP), sestonic chlorophyll, total suspended solids (TSS), electrical conductivity, BOD and COD indicated a typically polluted stream with large temporal and spatial fluctuations due to variations in the Asian monsoon rains. Analysis of fish trophic and tolerance guilds showed that omnivorous and tolerant species dominated the community, and their proportions were directly determined by water chemistry (nutrients and organic matter). The chemical NPI model indicated a "poor to very poor" health condition by the model's criteria, in agreement with the biological fish IBI model. The degradation of stream health was mainly due to massive effluents from wastewater treatment plants. Overall, our data suggest that multi-metric chemical and biological models may be used as key tools for diagnosing the health condition of an urban stream.
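To make the "multi-metric" construction concrete, the sketch below scores raw metrics against reference thresholds and sums them, using the common 1/3/5 scoring scheme. Every threshold and value is an invented placeholder, not a criterion from the study.

```python
def score(value, poor, fair, higher_is_better=True):
    """Map a raw metric to a 1/3/5 score against two hypothetical thresholds."""
    if higher_is_better:
        return 5 if value >= fair else 3 if value >= poor else 1
    return 5 if value <= fair else 3 if value <= poor else 1

site = {"TN_mg_L": 6.2, "TP_mg_L": 0.35, "BOD_mg_L": 5.1}   # hypothetical chemistry
npi = (score(site["TN_mg_L"], poor=5.0, fair=2.0, higher_is_better=False)
       + score(site["TP_mg_L"], poor=0.3, fair=0.1, higher_is_better=False)
       + score(site["BOD_mg_L"], poor=4.0, fair=2.0, higher_is_better=False))
print("NPI score:", npi, "->", "poor" if npi <= 5 else "fair" if npi <= 11 else "good")
```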

18.
The main goal of this study was to predict, using GIS tools and ecological niche modelling, the potentially suitable ecological niche of representatives of the cosmopolitan genus Sirthenea, and to define the conditions of that niche. Among all known genera of the subfamily Peiratinae, only Sirthenea occurs on almost all continents and in almost all zoogeographical regions. Our research was based on 521 unique occurrence localities and a set of environmental variables covering the whole world. Based on the occurrence localities, as well as climatic variables, a digital elevation model, and terrestrial ecoregions and biomes, information about the genus's ecological preferences is given. Potentially suitable ecological niches were modelled using Maxent software, which allowed for the creation of a map of the potential distribution and for determining climatic preferences. The analysis of climatic preferences suggested that representatives of the genus are linked mainly to tropical and temperate climates. An analysis of ecoregions also showed that they prefer areas with tree vegetation, such as the tropical and subtropical moist broadleaf forest biomes as well as the temperate broadleaf and mixed forest biomes. Therefore, on the basis of museum data on species occurrence and the ecological niche modelling method, we provide new and valuable information on the potentially suitable habitat and possible distribution range of the genus Sirthenea, along with its climatic preferences.
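Maxent itself is a dedicated package, so the sketch below is only a conceptual stand-in: a presence-background classifier in "climate space", assuming scikit-learn, with randomly generated values in place of real raster extractions at the 521 localities.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
# Hypothetical climate at occurrence points: mean temp (C), annual precip (mm)
presence = rng.normal([25.0, 1500.0], [3.0, 300.0], (521, 2))
# Background points sampled across the modelled climate space
background = np.column_stack([rng.uniform(-10, 35, 5000), rng.uniform(0, 4000, 5000)])

X = np.vstack([presence, background])
y = np.array([1] * len(presence) + [0] * len(background))
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Relative suitability for two candidate locations (warm-wet vs cold-dry)
print(clf.predict_proba([[26.0, 1700.0], [2.0, 300.0]])[:, 1])
```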

19.
Introduction: After several decades of development, meta-analysis has become a pillar of evidence-based medicine. However, heterogeneity remains a threat to the validity and quality of such studies. Currently, Q and its descendant I² (I-squared) are widely used as the tools for heterogeneity evaluation. The core mission of this kind of test is to identify data sets from similar populations and exclude those from different populations. Although Q and I² are used as the default tools for heterogeneity testing, the work we present here demonstrates that the robustness of these two tools is questionable.

Conclusions: Every day, meta-analysis studies containing flawed data analysis emerge and are passed on to clinical practitioners as "updated evidence". Using this kind of evidence, built on heterogeneous data sets, leads to wrong conclusions, creates chaos in clinical practice and weakens the foundation of evidence-based medicine. We suggest a stricter application of meta-analysis: it should only be applied to synthesize trials with small sample sizes. We call for the tools of evidence-based medicine to keep up to date with cutting-edge technologies in data science. Clinical research data should be made publicly available whenever a relevant article is published, so that the research community can conduct in-depth data mining, which in many instances is a better alternative to meta-analysis.
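For reference, the two statistics under scrutiny are straightforward to compute. This sketch uses the standard fixed-effect formulas, Q = Σ w_i (θ_i − θ̂)² with w_i = 1/v_i, and I² = max(0, (Q − df)/Q) × 100, on invented effect sizes and variances, assuming NumPy and SciPy.

```python
import numpy as np
from scipy.stats import chi2

theta = np.array([0.42, 0.38, 0.55, 0.10, 0.47])  # hypothetical study effect sizes
var = np.array([0.02, 0.03, 0.05, 0.02, 0.04])    # hypothetical within-study variances

w = 1.0 / var                                     # inverse-variance weights
theta_hat = np.sum(w * theta) / np.sum(w)         # fixed-effect pooled estimate
Q = np.sum(w * (theta - theta_hat) ** 2)          # Cochran's Q
df = len(theta) - 1
I2 = max(0.0, (Q - df) / Q) * 100.0               # percent variation beyond chance
p = chi2.sf(Q, df)                                # Q ~ chi-square(df) under homogeneity

print(f"Q = {Q:.2f} (p = {p:.3f}), I^2 = {I2:.1f}%")
```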

20.
Systematic extraction of relevant biological facts from massive scientific knowledge sources is emerging as a significant task for the science community. Its success depends on several key factors, including the precision of a given search, the time of its accomplishment, and how effectively the mined information is communicated to the users. GeneCite, a stand-alone Java-based high-throughput data mining tool, is designed to carry out these tasks for several important knowledge sources simultaneously, allowing users to integrate the results and interpret their biological significance in a time-efficient manner. GeneCite provides an integrated high-throughput search platform serving as an information retrieval (IR) tool for probing the online literature database (PubMed) and the sequence-tagged sites database (UniSTS), respectively. It also operates as a data retrieval (DR) tool to mine an archive of biological pathways integrated into the software itself. Furthermore, GeneCite supports a retrieved-data management system (DMS) showcasing the final output in a spreadsheet format. Each cell of the output file holds a real-time connection (hyperlink) to the corresponding online archive, reachable at the user's convenience. The software is free and currently available online at www.bioinformatics.org and www.wrair.army.mil/Resources.
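As a minimal illustration of the information-retrieval step such a tool performs against PubMed, the sketch below queries NCBI's public E-utilities (a real, documented API) from Python; GeneCite itself is Java, and the query term is a made-up example.

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": "BRCA1 AND breast cancer",   # hypothetical example query
    "retmax": 20,
    "retmode": "json",
}
result = requests.get(ESEARCH, params=params, timeout=30).json()["esearchresult"]
print(result["count"], "articles; first PMIDs:", result["idlist"][:5])
```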
