Similar Literature
20 similar documents found (search time: 31 ms)
1.
Farmers have been slow to adopt decision support system (DSS) models and their outputs, mainly owing to (i) the complexity of the data involved, which most potential users are unable to collect and process; and (ii) an inability to integrate these models into realistic representations of farmers' informational environments. This situation raises questions about the way farm management researchers have modelled information and information management, and especially about the quality of the information as assessed by farmers. We consider that to review advisory procedures we need to understand how farmers select and use farm management-related information, rather than focusing on decisions made in particular situations. The aim of this study was to build a conceptual model of the farmer-targeted farm management-related information system. This model was developed using data collected on commercial beef cattle farms. The design structure and operational procedures are based on (i) data categories representing the diversity of informational activity; and (ii) selected criteria for supporting decisions. The model is composed of two subsystems, each composed of two units. First, an organizational subsystem organizes, finalizes and monitors informational activity. Second, a processing subsystem builds and exploits the informational resources. This conceptual model makes it possible to describe and understand the diverse range of farmers' informational activity by taking into account both the flow of information and the way farmers make sense of that information. This model could serve as a component of biodecisional DSS models for assigning information in the decision-making process. The next task will be to take into account the broad range of farmers' perceptions of management situations in DSS models.

2.
Bayesian hierarchical models have been applied in clinical trials to allow information sharing across subgroups. Traditional Bayesian hierarchical models do not classify subgroups; thus, information is shared across all subgroups. When the differences between subgroups are large, this suggests that the subgroups belong to different clusters. In that case, placing all subgroups in one pool and borrowing information across all of them can result in substantial bias under strong borrowing, or a lack of efficiency gain under weak borrowing. To resolve this difficulty, we propose a hierarchical Bayesian classification and information sharing (BaCIS) model for the design of multigroup phase II clinical trials with binary outcomes. We introduce subgroup classification into the hierarchical model: subgroups are classified into two clusters on the basis of their outcomes, mimicking the hypothesis testing framework. Information sharing then takes place within subgroups in the same cluster, rather than across all subgroups. This method can be applied to the design and analysis of multigroup clinical trials with binary outcomes. Compared to traditional hierarchical models, the BaCIS model yields better operating characteristics under various scenarios.
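A minimal numerical sketch of the two-step idea (classify, then borrow only within a cluster), assuming hypothetical subgroup counts, a flat Beta(1, 1) prior, and crude moment-based shrinkage in place of the paper's full hierarchical MCMC:

```python
import numpy as np
from scipy.stats import beta

# Hypothetical subgroup data: responders x out of n patients per subgroup.
x = np.array([3, 4, 12, 14])
n = np.array([20, 20, 20, 20])
p0 = 0.2  # null response rate used to split subgroups into two clusters

# Step 1 (classification): posterior Pr(p_i > p0) under a flat Beta(1, 1)
# prior; subgroups with high posterior probability form the "active" cluster.
post_gt_p0 = 1 - beta.cdf(p0, 1 + x, 1 + n - x)
cluster = (post_gt_p0 > 0.5).astype(int)

# Step 2 (information sharing): borrow strength only within each cluster,
# here via shrinkage toward the cluster mean (the prior weight of 20 is
# an arbitrary assumption for illustration).
for c in (0, 1):
    idx = cluster == c
    if not idx.any():
        continue
    pooled = x[idx].sum() / n[idx].sum()
    w = n[idx] / (n[idx] + 20.0)
    shrunk = w * (x[idx] / n[idx]) + (1 - w) * pooled
    print(f"cluster {c}: subgroups {np.where(idx)[0]}, estimates {np.round(shrunk, 3)}")
```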

3.
Individual variation is an inherent aspect of animal populations, and understanding the mechanisms shaping resource use patterns within populations is crucial to comprehending how individuals partition resources. Theory predicts that differences in prey preferences among consumers and/or differences in the likelihood of adding new resources to their diets are key mechanisms underlying intrapopulation variation in resource use. We developed network models based on optimal diet theory that simulate how individuals consume resources under varying scenarios of individual variation in prey preferences and in the willingness to consume alternative resources. We then investigated how the structure of individual–resource networks generated under each model compared to the structure of observed networks representing five classical examples of individual diet variation. Our results support the notion that, for the studied populations, individual variation in prey preferences is the major factor explaining patterns in individual–resource networks. In contrast, variation in the willingness to add prey does not seem to play an important role in shaping patterns of resource use. Individual differences in prey preferences in the studied populations may be generated by complex behavioral rules related to cognitive constraints and experience. Our approach provides a pathway for mapping foraging models onto network patterns, which may make it possible to determine the mechanisms leading to variation in resource use within populations.
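A small simulation in this spirit, with invented parameters: each individual draws noisy preference ranks around a shared population ranking, and a "willingness" probability governs how far down its own ranking it will feed. The diet-overlap summary is illustrative, not a statistic from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
n_ind, n_res, n_events = 30, 10, 50

# Scenario knobs (assumed values): spread of individual preference ranks and
# the willingness to feed on lower-ranked, alternative resources.
pref_sd, willingness = 1.5, 0.3

base_rank = np.arange(n_res, dtype=float)   # shared population-level ranking
adj = np.zeros((n_ind, n_res), dtype=int)   # individual-resource network
for i in range(n_ind):
    order = np.argsort(base_rank + rng.normal(0, pref_sd, n_res))
    for _ in range(n_events):
        k = 0
        while k < n_res - 1 and rng.uniform() < willingness:
            k += 1                          # move one step down the ranking
        adj[i, order[k]] += 1

# Simple network summary: pairwise diet overlap between individuals.
binary = (adj > 0).astype(int)
shared = binary @ binary.T
overlap = shared / np.maximum(binary.sum(1)[:, None], 1)
print("mean pairwise diet overlap:", overlap[np.triu_indices(n_ind, 1)].mean().round(2))
```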

4.
MedScan, a natural language processing engine for MEDLINE abstracts
MOTIVATION: The importance of extracting biomedical information from scientific publications is well recognized. A number of information extraction systems for the biomedical domain have been reported, but none of them has become widely used in practical applications. Most proposals to date make rather simplistic assumptions about the syntax of natural language. There is an urgent need for a system that has broad coverage and performs well in real-text applications. RESULTS: We present a general biomedical domain-oriented NLP engine called MedScan that efficiently processes sentences from MEDLINE abstracts and produces a set of regularized logical structures representing the meaning of each sentence. The engine utilizes a specially developed context-free grammar and lexicon. Preliminary evaluation of the system's performance, accuracy, and coverage yielded encouraging results. Approaches for further increasing the coverage and reducing the parsing ambiguity of the engine, as well as its application to information extraction, are discussed.
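A toy illustration of the underlying idea (parsing with a domain context-free grammar), using NLTK and an invented five-rule grammar; MedScan's actual grammar, lexicon, and logical-structure output are far richer than this sketch:

```python
import nltk

# Invented miniature grammar in the spirit of a biomedical CFG; the real
# MedScan grammar and lexicon are much larger and produce logical forms.
grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> Protein
VP -> V NP
V -> 'activates' | 'inhibits'
Protein -> 'IL-2' | 'STAT3'
""")
parser = nltk.ChartParser(grammar)
for tree in parser.parse("IL-2 activates STAT3".split()):
    print(tree)  # the parse tree from which a logical structure could be built
```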

5.
Coalescent theory is routinely used to estimate past population dynamics and demographic parameters from genealogies. While early work in coalescent theory considered only simple demographic models, theoretical advances have allowed increasingly complex demographic scenarios to be considered. The success of this approach has led to coalescent-based inference methods being applied to populations with rapidly changing dynamics, including pathogens such as RNA viruses. However, fitting epidemiological models to genealogies via coalescent models remains challenging, because pathogen populations often exhibit complex, nonlinear dynamics and are structured by multiple factors. Moreover, it often becomes necessary to consider stochastic variation in population dynamics when fitting such complex models to real data. Using recently developed structured coalescent models that accommodate complex population dynamics and population structure, we develop a statistical framework for fitting stochastic epidemiological models to genealogies. By combining particle filtering methods with Bayesian Markov chain Monte Carlo methods, we are able to fit a wide class of stochastic, nonlinear epidemiological models with different forms of population structure to genealogies. We demonstrate our framework using two structured epidemiological models: a model with disease progression between multiple stages of infection and a two-population model reflecting spatial structure. We apply the multi-stage model to HIV genealogies and show that the proposed method can be used to estimate the stage-specific transmission rates and prevalence of HIV. Finally, using the two-population model we explore how much information about population structure is contained in genealogies and what sample sizes are necessary to reliably infer parameters such as migration rates.
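A compact sketch of the particle MCMC machinery (particle marginal Metropolis-Hastings wrapped around a bootstrap particle filter), assuming a simple stochastic SIR model and fake Poisson prevalence observations; in the paper the particle weights come from a structured coalescent likelihood of the genealogy rather than this stand-in observation model:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

def step(S, I, beta_, gamma, N):
    # Tau-leaping approximation to stochastic SIR dynamics over one time unit.
    new_inf = rng.poisson(np.clip(beta_ * S * I / N, 0, None))
    new_rec = rng.poisson(np.clip(gamma * I, 0, None))
    S_next = np.maximum(S - new_inf, 0)
    I_next = np.maximum(I + np.minimum(new_inf, S) - new_rec, 0)
    return S_next, I_next

def pf_loglik(y, beta_, gamma, N=1000, J=300):
    # Bootstrap particle filter estimate of log p(y | beta, gamma).
    S, I = np.full(J, float(N - 5)), np.full(J, 5.0)
    ll = 0.0
    for obs in y:
        S, I = step(S, I, beta_, gamma, N)
        logw = poisson.logpmf(obs, np.maximum(I, 1e-9))
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())
        idx = rng.choice(J, J, p=w / w.sum())  # multinomial resampling
        S, I = S[idx], I[idx]
    return ll

# Fake prevalence series; particle marginal Metropolis-Hastings over
# (beta, gamma) with a log-scale random walk and a flat prior.
y = rng.poisson([8, 15, 30, 55, 80, 90, 70, 45, 25, 12])
theta = np.array([1.5, 0.5])
ll, chain = pf_loglik(y, *theta), []
for _ in range(500):
    prop = theta * np.exp(0.1 * rng.standard_normal(2))
    ll_prop = pf_loglik(y, *prop)
    # Hastings correction for the multiplicative proposal equals
    # log(prod(prop) / prod(theta)).
    if np.log(rng.uniform()) < ll_prop - ll + np.log(prop.prod() / theta.prod()):
        theta, ll = prop, ll_prop
    chain.append(theta.copy())
print("posterior mean (beta, gamma):", np.mean(chain, axis=0).round(3))
```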

6.
Unlike other living creatures, humans can adapt to uncertainty. They can form hypotheses about situations marked by uncertainty and can anticipate their actions by planning. They can expect the unexpected and take precautions against it. In numerous experiments, we have investigated the manner in which humans deal with these demands. In these experiments, we used computer-simulated scenarios representing, for example, a small town, an ecological or economic system, or a political system such as a Third World country. Within these computer-simulated scenarios, the subjects had to look for information, plan actions, form hypotheses, and so on.

7.
An optimization framework based on hybrid models is presented for preparative chromatographic processes. The first step in the hybrid model strategy involves the experimental determination of the parameters of the physical model, which consists of the full general rate model coupled with the kinetic form of the steric mass action isotherm. These parameters are then used to carry out a set of simulations with the physical model to obtain data on the functional relationship between various objective functions and decision variables. The resulting data are then used to estimate the parameters of neural-network-based empirical models. These empirical models are developed to enable the exploration of a wide variety of design scenarios without additional computational requirements. The resulting empirical models are then used with a sequential quadratic programming optimization algorithm to maximize the objective function, production rate times yield (in the presence of solubility and purity constraints), for binary and ternary model protein systems. The use of hybrid empirical models to represent complex preparative chromatographic systems significantly reduces the computational time required for simulation and optimization. In addition, it allows both multivariable optimization and rapid exploration of different scenarios for optimal design.
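A minimal sketch of the hybrid strategy, with an invented analytic function standing in for the expensive general rate model: sample it, fit neural-network surrogates, then run SQP (scipy's SLSQP) on the surrogates under a purity constraint. All functions and numbers here are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in for the physical (general rate) model: maps two decision
# variables (gradient slope, column load) to objective and purity.
def physical_model(x):
    slope, load = x
    obj = load * (1 - 0.5 * load) * np.exp(-((slope - 0.4) ** 2) / 0.1)
    purity = 0.99 - 0.3 * load + 0.05 * slope
    return obj, purity

# 1) Run the "physical model" on a set of sample points.
X = rng.uniform([0.1, 0.1], [1.0, 1.0], size=(200, 2))
Y = np.array([physical_model(x) for x in X])

# 2) Fit neural-network surrogates for the objective and the purity constraint.
f_obj = MLPRegressor((32, 32), max_iter=5000, random_state=0).fit(X, Y[:, 0])
f_pur = MLPRegressor((32, 32), max_iter=5000, random_state=0).fit(X, Y[:, 1])

# 3) Sequential quadratic programming (SLSQP) on the cheap surrogates,
#    maximizing the objective subject to purity >= 0.95.
res = minimize(lambda x: -f_obj.predict([x])[0], x0=[0.5, 0.5],
               method="SLSQP", bounds=[(0.1, 1.0), (0.1, 1.0)],
               constraints=[{"type": "ineq",
                             "fun": lambda x: f_pur.predict([x])[0] - 0.95}])
print("optimum:", res.x.round(3), "surrogate objective:", round(-res.fun, 4))
```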

8.
Software design is an often neglected issue in ecological modelling, even though bad software design often becomes a hindrance to re-using, sharing, and even understanding an ecological model. In this paper, the methodology of agile software design was applied to the domain of ecological models, yielding principles for a universal design of ecological models. To exemplify this design, the open-source software Universal Simulator was constructed using C++ and XML and is provided as a resource for inspiration.

9.
As design for recycling becomes more broadly applied in material and product design, analytical tools to quantify the environmental implications of design choices will become a necessity. Currently, few systematic methods exist to measure and direct the metallurgical alloy design process toward alloys that can best be produced from scrap. This is due, in part, to the difficulty of evaluating such a context-dependent property as the recyclability of an alloy, which depends on the types of scrap available to producers, the compositional characteristics of those scraps, their yield, and the alloy specification itself. This article explores the use of a chance-constrained optimization model, similar to models used in operational planning in secondary production today, to (1) characterize the challenge of developing recycling-friendly alloys given the contextual sensitivity of recycling, (2) demonstrate how such models can be used to evaluate the potential scrap usage of alloys, and (3) explore the value of sensitivity analysis information for proactively identifying effective alloy modifications that can drive increased potential scrap use. These objectives are demonstrated through two cases involving the production of a broad range of alloys from representative scraps drawn from three classes of industrial end uses.
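A toy version of such a model, assuming one controlled element (Fe), invented scrap data, and a normal approximation that turns the chance constraint into a deterministic equivalent; real formulations cover many elements and scrap classes:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical scraps: cost, mean Fe content, and std of Fe content (wt%);
# the last entry stands in for clean primary metal.
cost = np.array([0.9, 0.6, 0.4, 2.0])
fe_mu = np.array([0.30, 0.55, 0.80, 0.02])
fe_sd = np.array([0.05, 0.10, 0.15, 0.00])
fe_max, service = 0.40, 0.95          # alloy spec and required service level
z = norm.ppf(service)

def cc_fe(x):
    # Deterministic equivalent of Pr(sum x_i * Fe_i <= fe_max) >= service,
    # assuming independent, normally distributed scrap compositions.
    return fe_max - (fe_mu @ x + z * np.sqrt((fe_sd ** 2) @ (x ** 2)))

res = minimize(lambda x: cost @ x, x0=np.full(4, 0.25), method="SLSQP",
               bounds=[(0, 1)] * 4,
               constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1},
                            {"type": "ineq", "fun": cc_fe}])
print("blend:", res.x.round(3), "cost:", round(res.fun, 3))
```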

10.
Species distribution models (SDMs) use spatial environmental data to make inferences about species' range limits and habitat suitability. Conceptually, these models aim to determine and map components of a species' ecological niche through space and time, and they have become important tools in pure and applied ecology and evolutionary biology. Most approaches are correlative in that they statistically link spatial data to species distribution records. An alternative strategy is to explicitly incorporate the mechanistic links between the functional traits of organisms and their environments into SDMs. Here, we review how the principles of biophysical ecology can be used to link spatial data to the physiological responses and constraints of organisms. This provides a mechanistic view of the fundamental niche, which can then be mapped onto the landscape to infer range constraints. We show how physiologically based SDMs can be developed for different organisms in different environmental contexts. Mechanistic SDMs have different strengths and weaknesses from correlative approaches, and there are many exciting and unexplored prospects for integrating the two. As physiological knowledge becomes better integrated into SDMs, we will make more robust predictions of range shifts in novel or non-equilibrium contexts such as invasions, translocations, climate change and evolutionary shifts.
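A minimal sketch of the mechanistic idea, with invented physiological limits and a random synthetic landscape: cells are "suitable" wherever the organism's assumed thermal and moisture constraints are all met, giving a fundamental-niche map rather than a statistical fit:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical gridded environment: max temperature (deg C) and moisture index.
tmax = rng.normal(28, 6, size=(50, 50))
moisture = rng.uniform(0, 1, size=(50, 50))

# Assumed physiological limits (e.g. from lab thermal-tolerance trials).
CT_max, CT_min, moisture_min = 38.0, 5.0, 0.25

# Mechanistic "fundamental niche" map: all constraints satisfied at once.
suitable = (tmax < CT_max) & (tmax > CT_min) & (moisture > moisture_min)
print(f"fraction of landscape suitable: {suitable.mean():.2f}")
```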

11.
The Association for Environmental Health and Sciences Foundation has been collecting information on state-by-state petroleum cleanup levels (CULs) for soil since 1990, with the most recent survey in 2012. These data form the basis for this analysis, including a comparison of the CULs to U.S. Environmental Protection Agency (USEPA) regulatory values. The results illustrate the evolving complexity of state regulatory approaches to petroleum mixtures; benzene, toluene, ethylbenzene, and xylenes; and carcinogenic polycyclic aromatic hydrocarbons, as well as the use of multiple exposure scenarios and pathways to regulate petroleum in soil. Different fractionation approaches in use by various states and the USEPA are discussed, their strengths and limitations are reviewed, and their implications for site CULs are evaluated. Because of an increasing array of scenarios and pathways, CUL ranges have widened over time. As the regulatory environment for petroleum releases becomes more complex, it is increasingly important to develop a conceptual site model for fate, transport, land use assumptions, and exposure pathways at petroleum-contaminated sites to enable selection of the most appropriate CULs available.

12.
Aim, Scope and Background. The data-intensive nature of life cycle assessment (LCA), even for non-complex products, quickly leads to the use of methods of representing data in forms other than written characters. Until now, traditional representations of life cycle inventory (LCI) data and environmental impact analysis (EIA) results have usually been based on 2D and 3D variants of simple tables, bar charts, pie charts and x/y graphs. However, these representation methods do not sufficiently address aspects such as representing life cycle inventory information at a glance, filtering out data while summarizing the filtered data (so as to reduce the information load), and representing data errors and uncertainty.

Main Features. This new information representation approach, with its glyph-based visualization method, addresses the specific problems outlined above, encountered when analyzing LCA- and EIA-related information. In particular, support for multi-dimensional information representation, reduction of information load, and explicit data feature propagation are provided on an interactive, computer-aided basis.

Results. Three-dimensional, interactive geometric objects, so-called OM-glyphs, were used in the visualization method introduced to represent LCA-related information in a multi-dimensional information space. This representation is defined by control parameters, which in turn represent spatial, geometric and retinal properties of glyphs and glyph formations. All valid analysis scenarios can be visualized; these consist of combinations of items for the material and energy inventories, environmental items, life cycle phases and products, or their parts and components. Individual visualization scenarios, once computed and rendered on a computer screen, can then be interactively modified in terms of viewpoint, size, spatial location and the level of detail represented, as needed. This helps to increase the speed, efficiency and quality of the assessment, while at the same time considerably reducing mental load, owing to the more structured manner in which information is presented to the human expert.

Conclusions. The previous paper in this series discussed the motivation for a new approach to efficient information visualization in LCA and introduced the essential basic principles. This second paper offers more insight into, and discussion of, the technical details and the framework developed. Examples are given to aid understanding of the visualization method presented. Their main purpose is to demonstrate and make transparent the mapping of LCA-related data and their contexts to glyph parameters. Those glyph parameters, in turn, are used to generate a novel form of information representation that is transparent, clear and compact, features that cannot be achieved with traditional representation schemes.

Outlook. Final technical details of this approach and its framework will be presented and discussed in the next paper, along with theoretical and practical issues related to applying this visualization method to the computed life cycle inventory data of an actual industrial product.
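As a rough 2D stand-in for the glyph idea (the original uses interactive 3D glyphs), here is a sketch that maps invented LCI records to marker position, size and colour with matplotlib; the data and the mapping are assumptions for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)

# Invented LCI records: value and uncertainty per (life cycle phase, impact).
phases = np.repeat(np.arange(4), 3)    # x-position <- life cycle phase
impacts = np.tile(np.arange(3), 4)     # y-position <- impact category
values = rng.uniform(1, 10, 12)        # glyph size   <- inventory value
uncert = rng.uniform(0.1, 0.9, 12)     # glyph colour <- data uncertainty

plt.scatter(phases, impacts, s=60 * values, c=uncert, cmap="viridis", alpha=0.8)
plt.colorbar(label="relative data uncertainty")
plt.xlabel("life cycle phase")
plt.ylabel("impact category")
plt.title("2D stand-in for the OM-glyph mapping")
plt.show()
```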

14.
As the extent of human genetic variation becomes more fully characterized, the research community is faced with the challenging task of using this information to dissect the heritable components of complex traits. Genomewide association studies offer great promise in this respect, but their analysis poses formidable difficulties. In this article, we describe a computationally efficient approach to mining genotype-phenotype associations that scales to the size of the data sets currently being collected in such studies. We use discrete graphical models as a data-mining tool, searching for single- or multilocus patterns of association around a causative site. The approach is fully Bayesian, allowing us to incorporate prior knowledge on the spatial dependencies around each marker due to linkage disequilibrium, which considerably reduces the number of possible graphical structures. A Markov chain Monte Carlo scheme is developed that yields samples from the posterior distribution of graphs conditional on the data, from which probabilistic statements about the strength of any genotype-phenotype association can be made. Using data simulated under scenarios that vary in marker density, genotype relative risk of a causative allele, and mode of inheritance, we show that the proposed approach has better localization properties and leads to lower false-positive rates than single-locus analyses. Finally, we present an application of our method to a quasi-synthetic data set in which data from the CYP2D6 region are embedded within simulated data on 100K single-nucleotide polymorphisms. Analysis is quick (<5 min), and we are able to localize the causative site to a very short interval.
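The single-locus building block of such an approach can be sketched as a Bayes-factor scan: compare a model with separate case/control allele frequencies against one with a shared frequency, under flat Beta(1, 1) priors. The simulated data and the 1.5x risk-allele inflation are assumptions; the paper's method goes much further, exploring multilocus graph structures by MCMC:

```python
import numpy as np
from scipy.special import betaln

rng = np.random.default_rng(11)

def log_ml(k, n):
    # Log marginal likelihood of k risk alleles among n, flat Beta(1, 1) prior.
    return betaln(1 + k, 1 + n - k) - betaln(1, 1)

# Simulated genotypes: 500 cases and 500 controls, 20 SNPs, SNP 9 causative.
n_ind, p, causal = 500, 20, 9
maf = rng.uniform(0.1, 0.5, p)
maf_case = maf.copy()
maf_case[causal] = min(maf[causal] * 1.5, 0.95)   # assumed risk-allele inflation
G_case = rng.binomial(2, maf_case, size=(n_ind, p))
G_ctrl = rng.binomial(2, maf, size=(n_ind, p))

# Single-locus log Bayes factors: separate case/control allele frequencies
# versus one shared frequency.
k_case, k_ctrl = G_case.sum(0), G_ctrl.sum(0)
n_all = 2 * n_ind
lbf = (log_ml(k_case, n_all) + log_ml(k_ctrl, n_all)
       - log_ml(k_case + k_ctrl, 2 * n_all))
print("top marker:", lbf.argmax(), "log BF:", lbf.max().round(1))
```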

15.
Stable isotope mixing models (SIMMs) are an important tool for studying species' trophic ecology. These models are dependent on, and sensitive to, the choice of trophic discrimination factors (TDFs), which represent the offset in stable isotope delta values between a consumer and its food source when they are at equilibrium. Ideally, controlled feeding trials should be conducted to determine the appropriate TDF for each consumer, tissue type, food source, and isotope combination used in a study. In reality, however, this is often neither feasible nor practical. In the absence of species-specific information, many researchers either default to an average TDF value for the major taxonomic group of their consumer, or choose the nearest phylogenetic neighbour for which a TDF is available. Here, we present the SIDER package for R, which uses a phylogenetic regression model based on a compiled dataset to impute (estimate) the TDF of a consumer. Using Bayesian inference, we incorporate information on the tissue type and feeding ecology of the consumer, both of which are known to affect TDFs. Presently, our approach can estimate TDFs for two commonly used isotopes (nitrogen and carbon) for species of mammals and birds with or without previous TDF information. The estimated posterior probability provides both a mean and a variance, reflecting the uncertainty of the estimate, and can subsequently be used in the current suite of SIMM software. SIDER allows users to place a greater degree of confidence in their choice of TDF and its associated uncertainty, thereby leading to more robust predictions about trophic relationships in cases where study-specific data from feeding trials are unavailable. The underlying database can readily be updated to incorporate more stable isotope tracers, replicates and taxonomic groups, further increasing confidence in dietary estimates from stable isotope mixing models as this information becomes available.
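SIDER itself is an R package built on a Bayesian phylogenetic regression; purely as an illustration of the borrowing idea, here is a crude Python stand-in that weights donor species' TDFs by phylogenetic distance (all numbers and the exponential weighting are invented, and this is not SIDER's actual model):

```python
import numpy as np

# Hypothetical TDF observations (delta15N offsets, permil) for four mammal
# species, and patristic distances from a focal species lacking trial data.
tdf_known = np.array([3.4, 2.9, 3.8, 2.5])
phylo_dist = np.array([0.10, 0.25, 0.40, 0.80])

# Weight each donor species by exp(-lambda * distance), then report a
# weighted mean and spread as a rough imputed TDF with uncertainty.
lam = 3.0
w = np.exp(-lam * phylo_dist)
w /= w.sum()
mean = w @ tdf_known
sd = np.sqrt(w @ (tdf_known - mean) ** 2)
print(f"imputed TDF: {mean:.2f} +/- {sd:.2f} permil")
```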

17.
Effective precision livestock farming requires the availability of reliable and up-to-date data characterizing the animals and environment of interest. This work presents the design, architecture, and implementation of a wireless acoustic sensor network for monitoring goat farms on a 24/7 basis. In addition, we define a hierarchical organization of the relevant sound classes that exhaustively covers the encountered goat vocalizations. Moreover, we developed an annotation tool tailored to the specifics of the problem at hand, i.e., a large, real-world data environment, which meaningfully assists the annotation of goat vocalizations via a suitable sound classification module. On top of that, a mobile phone application was developed that enables authorized users to remotely access information describing the situation on the farm site. Importantly, this non-invasive monitoring framework was installed at four sites in Northern Italy, taking into account their diverse characteristics.
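The abstract does not specify the classifier, so here is a hedged baseline sketch: mean MFCC features (via librosa) feeding a random forest, with synthesized tones standing in for recorded goat vocalizations and noise standing in for background sound:

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

sr = 16000
rng = np.random.default_rng(0)

def embed(y):
    # Mean MFCC vector as a compact clip embedding (a common baseline; the
    # deployed system's actual features and model are not given in the abstract).
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

def synth(f0):
    # Stand-in clips: harmonic tone ~ "vocalization", pure noise ~ "background".
    t = np.linspace(0, 1, sr, endpoint=False)
    clip = (np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(sr)
            if f0 else rng.standard_normal(sr))
    return clip.astype(np.float32)

X = np.stack([embed(synth(f)) for f in [150, 180, 200, 0, 0, 0]])
labels = ["goat", "goat", "goat", "background", "background", "background"]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict([embed(synth(170))]))  # expect "goat"
```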

19.
Life cycle assessment (LCA) will always involve some subjectivity and uncertainty. This is especially true when the analysis concerns new technologies. Dealing with uncertainty explicitly can generate richer information and minimize some of the mismatches between results currently encountered in the literature. As a way of analyzing future fuel cell vehicles and their potential new fuels, the Fuel Upstream Energy and Emission Model (FUEEM), developed at the University of California, Davis, pioneered two ways of incorporating uncertainty into the analysis. First, the model works with probabilistic curves as inputs and with Monte Carlo simulation techniques to propagate the uncertainties. Second, the project involved the interested parties in the entire process, not only in the critical review phase. The objective of this paper is to present, as a case study, the tools and methodologies developed to capture the knowledge held by interested parties and to deal with their potentially conflicting interests. The calculation methodology, the scenarios, and all assumed probabilistic curves were derived from a consensus of an international expert network discussion, using existing data from the literature along with new information collected from companies. The main part of the expert discussion process uses a variant of the Delphi technique, focusing on group learning through information feedback. A qualitative analysis indicates that a higher level of credibility and a higher quality of information can be achieved through a more participatory process. The FUEEM method works well with technical information and in establishing a reasonable set of simple scenarios; however, for complex combinations of scenarios it will require some improvement. The time spent in the process was the major drawback of the method, and some alternatives for sharing this time cost are suggested.
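The Monte Carlo half of the approach is straightforward to sketch: draw each uncertain input from its probability curve, push every draw through the model, and report percentiles. All distributions and values below are invented placeholders, not FUEEM's actual expert-elicited curves:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical upstream fuel-cycle inputs expressed as probability curves.
extraction_eff = rng.triangular(0.88, 0.93, 0.97, N)  # well-to-refinery efficiency
refinery_eff = rng.normal(0.87, 0.02, N)
carbon_int = rng.lognormal(np.log(73), 0.05, N)       # g CO2 per MJ fuel burned

# Propagate: well-to-wheel emissions per MJ of delivered fuel.
wtw = carbon_int / (extraction_eff * refinery_eff)

lo, med, hi = np.percentile(wtw, [5, 50, 95])
print(f"WTW CO2: median {med:.1f}, 90% interval [{lo:.1f}, {hi:.1f}] g/MJ")
```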

20.
Goal, Scope and Background. The main focus of OMNIITOX is on characterisation models for toxicological impacts in a life cycle assessment (LCA) context. The OMNIITOX information system (OMNIITOX IS) is being developed primarily to facilitate characterisation modelling and the calculation of characterisation factors, providing users with information necessary for environmental management and control of industrial systems. Modelling and implementing operational characterisation models of ecotoxic and human-toxic impacts requires data and modelling approaches that often originate from disciplines related to regulatory chemical risk assessment (RA). Hence, there is a need for a concept model of the data and modelling approaches that can be exchanged between these different natural-system modelling contexts.

Methods. The concept modelling methodology applied in the OMNIITOX project builds on database design principles and ontological principles, in a consensus-based and iterative process involving participants from the LCA, RA and environmental informatics disciplines.

Results. The OMNIITOX concept model focuses on the core concepts of substance, nature framework, load, indicator, and mechanism, with supplementary concepts supporting them. These refer to the modelled cause, the effect, and the relation between them, aspects inherent in all models used in the disciplines within the scope of OMNIITOX. This structure makes it possible to compare models on a fundamental level, provides a language for communicating information between the disciplines, and supports assessing whether data and modelling approaches of various levels of detail and complexity can be transparently reused.

Conclusions. Experience from applying the concept model so far shows that it improves the structuring of all information needed to describe characterisation models transparently. From a user perspective, the OMNIITOX concept model aids in understanding the applicability and use of a characterisation model and how to interpret model outputs.

Recommendations and Outlook. The concept model provides a tool for structured characterisation modelling, model comparison, model implementation, model quality management, and model usage. Moreover, it could be used for structuring any natural-environment cause-effect model concerning impact categories other than toxicity.
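To make the core concepts concrete, here is one possible rendering of them as plain data types; this is an illustrative sketch only, not the OMNIITOX IS schema, and all field names and units are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Substance:
    name: str
    cas: str

@dataclass
class Load:                  # the modelled cause, e.g. an emission
    substance: Substance
    compartment: str         # part of the "nature framework"
    amount_kg: float

@dataclass
class Indicator:             # the modelled effect
    name: str
    unit: str

@dataclass
class Mechanism:             # the relation between cause and effect
    load: Load
    indicator: Indicator
    characterisation_factor: float

    def impact(self) -> float:
        return self.load.amount_kg * self.characterisation_factor

zn = Substance("zinc", cas="7440-66-6")
m = Mechanism(Load(zn, "freshwater", 1.0),
              Indicator("freshwater ecotoxicity", "CTU/kg"), 5.2e3)
print(m.impact())
```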
