Similar Articles

20 similar articles found.
1.

Background

Appropriate definition of neural network architecture prior to data analysis is crucial for successful data mining. This can be challenging when the underlying model of the data is unknown. The goal of this study was to determine whether optimizing neural network architecture using genetic programming as a machine learning strategy would improve the ability of neural networks to model and detect nonlinear interactions among genes in studies of common human diseases.

Results

Using simulated data, we show that a genetic programming optimized neural network approach is able to model gene-gene interactions as well as a traditional back propagation neural network. Furthermore, the genetic programming optimized neural network is better than the traditional back propagation neural network approach in terms of predictive ability and power to detect gene-gene interactions when non-functional polymorphisms are present.

Conclusion

This study suggests that a machine learning strategy for optimizing neural network architecture may be preferable to traditional trial-and-error approaches for the identification and characterization of gene-gene interactions in common, complex human diseases.
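The architecture search described above can be illustrated with a toy sketch (this is not the authors' GPNN implementation): a small genetic search over the hidden-layer size and learning rate of a one-hidden-layer numpy MLP, evaluated on simulated two-locus XOR-style epistasis with two non-functional loci. The data simulation, population size, and mutation operators are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated two-locus epistasis: phenotype is the XOR of two binary genotypes,
# plus two non-functional (noise) loci as extra inputs.
X = rng.integers(0, 2, size=(400, 4)).astype(float)
y = (X[:, 0].astype(int) ^ X[:, 1].astype(int)).astype(float)

def train_mlp(n_hidden, lr, X, y, epochs=300):
    """Train a one-hidden-layer sigmoid MLP with plain gradient descent;
    returns training accuracy as a (noisy) fitness value."""
    sig = lambda z: 1 / (1 + np.exp(-np.clip(z, -30, 30)))
    W1 = rng.normal(0, 1, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 1, n_hidden); b2 = 0.0
    for _ in range(epochs):
        H = sig(X @ W1 + b1)
        p = sig(H @ W2 + b2)
        err = p - y                              # cross-entropy gradient wrt output logit
        gW2 = H.T @ err / len(y); gb2 = err.mean()
        dH = np.outer(err, W2) * H * (1 - H)     # backprop into hidden layer
        gW1 = X.T @ dH / len(y); gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return ((sig(sig(X @ W1 + b1) @ W2 + b2) > 0.5) == y).mean()

def evolve(pop_size=6, generations=5):
    """Toy genetic search over (hidden units, learning rate)."""
    pop = [(int(rng.integers(1, 9)), 10 ** rng.uniform(-1, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda g: -train_mlp(*g, X, y))
        parents = scored[: pop_size // 2]        # keep the fitter half
        children = [(max(1, h + int(rng.integers(-2, 3))),
                     lr * 10 ** rng.uniform(-0.3, 0.3)) for h, lr in parents]
        pop = parents + children                 # mutated offspring replace the rest
    best = max(pop, key=lambda g: train_mlp(*g, X, y))
    return best, train_mlp(*best, X, y)

best, acc = evolve()
print("best (hidden units, lr):", best, "accuracy:", acc)
```

Because each fitness evaluation retrains from random weights, fitness is noisy; a real genetic programming search over full architectures (as in the study) would also evolve connectivity, not just layer width.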

2.
Influential concepts in neuroscientific research cast the brain as a predictive machine that revises its predictions when they are violated by sensory input. This relates to the predictive coding account of perception, but also to learning. Learning from prediction errors has been suggested to take place in the hippocampal memory system as well as in the basal ganglia. The present fMRI study used an action-observation paradigm to investigate the contributions of the hippocampus, caudate nucleus and midbrain dopaminergic system to different types of learning: learning in the absence of prediction errors, learning from prediction errors, and responding to the accumulation of prediction errors in unpredictable stimulus configurations. We analysed the regions of interest's BOLD responses to these different types of learning, implementing a bootstrapping procedure to correct for false positives. We found both the caudate nucleus and the hippocampus to be activated by perceptual prediction errors. The hippocampal responses seemed to relate to the associative mismatch between a stored representation and current sensory input. Moreover, its response was significantly influenced by the average information, or Shannon entropy, of the stimulus material. In accordance with earlier results, the habenula was activated by perceptual prediction errors. Lastly, we found that the substantia nigra was activated by the novelty of sensory input. In sum, we established that the midbrain dopaminergic system, the hippocampus, and the caudate nucleus were to different degrees significantly involved in three different types of learning: acquisition of new information, learning from prediction errors, and responding to unpredictable stimulus developments. We relate learning from perceptual prediction errors to the concept of predictive coding and related information-theoretic accounts.
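The "average information, or Shannon entropy, of the stimulus material" mentioned above is a standard quantity; a minimal sketch of its computation for a discrete stimulus sequence (the sequences here are hypothetical):

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Average information (bits per event) of a discrete stimulus sequence:
    H = -sum(p_i * log2(p_i)) over the empirical symbol frequencies."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A fully predictable stream carries 0 bits per event;
# a balanced two-symbol stream carries 1 bit per event.
print(shannon_entropy("AAAA"))
print(shannon_entropy("ABAB"))
```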

3.
4.
As a peer-assisted learning process, minilectures on physiology were conducted by students. During this process, students lecture to their colleagues in the presence of faculty staff members. These lectures were evaluated by faculty staff and students simultaneously. The aim of this study was to compare feedback from faculty members and students on 66 minilectures conducted by students. Their perceptions of different qualities of the lectures were assessed using a questionnaire. There were significant correlations between students and faculty members for many qualities of the lecture, including the speed of the lecture, retaining attention, clear introduction, and the overall quality of the lecture. However, ratings for gesture, eye contact, language usage, illustration usage, audiovisuals, voice usage, and important points stressed were significantly different between students and faculty members. Multiple regression analysis was performed to assess the degree of effect of different aspects of a lecture on its overall quality. Aspects such as gesture, eye contact, and language usage showed very low β-values, suggesting a poor contribution of these factors to the overall quality of the lecture for both students and faculty members. The speed of the lecture, retaining attention, and clear introduction were qualities that faculty members and students rated equally, and these were the main contributors to the overall quality of the lecture. Awareness of the possible discrepancy between ratings given by faculty members and students may be important when interpreting the evaluation results of formal lectures by these two groups.
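The β-values referred to above are standardized regression coefficients; a minimal sketch of how such a multiple regression is computed (the ratings below are simulated stand-ins, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical ratings for 66 lectures: three aspect scores and an overall score,
# constructed so that "retaining attention" drives overall quality and "gesture" barely does.
n = 66
attention = rng.normal(0, 1, n)
speed = rng.normal(0, 1, n)
gesture = rng.normal(0, 1, n)
overall = 0.8 * attention + 0.4 * speed + 0.05 * gesture + rng.normal(0, 0.3, n)

def standardized_betas(X, y):
    """OLS coefficients on z-scored variables = standardized beta weights."""
    Xz = (X - X.mean(0)) / X.std(0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

betas = standardized_betas(np.column_stack([attention, speed, gesture]), overall)
print(np.round(betas, 2))   # attention large, speed moderate, gesture near zero
```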

5.
Qualitative reasoning has been successfully used for ecological modelling, particularly when numerical data are not available. However, in order to further explore the potential of this modelling approach, it is important to discuss how to incorporate numerical data, if available, and to develop means to evaluate conceptual aspects and model outputs. This paper describes a study on qualitative model evaluation, in which numerical data about water quality are used to define different scenarios in a water basin, so that the outputs of simulations with the model can be compared to the actual system. The model was evaluated by independent experts, concerning its conceptual and operational aspects, and with respect to its predictive capability. The model was considered valid for the intended use, which is to increase the understanding of non-expert water managers.

6.
Abstract

An accurate and rapid toxic gas concentration prediction model plays an important role in emergency response to sudden gas leaks. However, it is difficult for existing dispersion models to meet accuracy and efficiency requirements at the same time. Although some researchers have developed forecasting models with traditional machine learning, such as back propagation (BP) neural networks and support vector machines (SVMs), the predictions obtained from such models still need to be improved in terms of accuracy. New prediction models based on deep learning are therefore proposed in this paper. Deep learning has clear advantages over traditional machine learning in prediction and classification. Deep belief networks (DBNs) as well as convolutional neural networks (CNNs) are used to build the new dispersion models. Both models are compared with the Gaussian plume model, a computational fluid dynamics (CFD) model, and models based on traditional machine learning in terms of accuracy, prediction time, and computation time. The experimental results show that the CNN model performs best across all evaluation indexes.
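The Gaussian plume model used as a baseline above has a well-known closed form; a minimal sketch follows. The emission rate, wind speed, stack height, and dispersion coefficients below are illustrative numbers only (in practice σy and σz come from stability-class curves at a given downwind distance).

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration with ground reflection.

    Q: emission rate (g/s), u: wind speed (m/s), H: effective stack height (m),
    y: crosswind offset (m), z: height (m),
    sigma_y, sigma_z: dispersion coefficients (m) at the downwind distance of interest.
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # image source = ground reflection
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline ground-level concentration with illustrative parameters.
c = gaussian_plume(Q=100.0, u=5.0, y=0.0, z=0.0, H=50.0, sigma_y=36.0, sigma_z=18.0)
print(c)
```

Its cheapness is exactly why it serves as the speed baseline: one closed-form evaluation per receptor, versus a full CFD solve.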

7.
Binomial tests are commonly used in sensory difference and preference testing under the assumptions that choices are independent and choice probabilities do not vary from trial to trial. This paper addresses violations of the latter assumption (often referred to as overdispersion) and accounts for variation in inter-trial choice probabilities following the Beta distribution. Such variation could arise as a result of differences in test substrate from trial to trial, differences in sensory acuity among subjects or the existence of latent preference segments. In fact, it is likely that overdispersion occurs ubiquitously in product testing. Using the Binomial model for data in which there is inter-trial variation may lead to seriously misleading conclusions from a sensory difference or preference test. A simulation study in this paper based on product testing experience showed that when using a Binomial model for overdispersed Binomial data, Type I error may be 0.44 for a Binomial test specification corresponding to a level of 0.05. Underestimation of Type I error using the Binomial model may seriously undermine legal claims of product superiority in situations where overdispersion occurs. The Beta-Binomial (BB) model, an extension of the Binomial distribution, was developed to fit overdispersed Binomial data. Procedures for estimating and testing the parameters as well as testing for goodness of fit are discussed. Procedures for determining sample size and for calculating estimate precision and test power based on the BB model are given. Numerical examples and simulation results are also given in the paper. The BB model should improve the validity of sensory difference and preference testing.

8.
The relationships between GC data of volatiles and sensory scores of the highest grade of soy sauce (Tokusen shoyu) were analyzed by multivariate analyses. Samples belonging to four brands could be unambiguously classified into the correct brands, and the statistical distance among them suggested a close relation with sensory evaluation. Precise predictive equations for the aroma quality were calculated from GC data and sensory evaluation by multiple regression analysis. Eight principal components were extracted from 39 GC peaks as significant factors constituting soy sauce aroma. The second PC alone explains 58% of the total variation in the sensory score. Discriminant functions based on the 8 PCs can clearly classify all samples.
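Extracting principal components from a samples-by-peaks matrix, as done above, can be sketched with an SVD-based PCA (the matrix below is a random stand-in generated from a few latent "aroma factors", not the study's GC data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand-in for a samples x GC-peaks matrix (24 samples, 39 peaks),
# generated from 3 latent factors plus a little noise.
latent = rng.normal(size=(24, 3))
loadings = rng.normal(size=(3, 39))
peaks = latent @ loadings + 0.1 * rng.normal(size=(24, 39))

# PCA via SVD of the column-centred data matrix.
Xc = peaks - peaks.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()     # variance fraction per principal component
scores = Xc @ Vt.T                  # PC scores per sample (inputs for discriminant analysis)

print(np.round(explained[:5], 3))   # the 3 planted factors dominate
```

The PC scores, rather than the raw 39 peaks, are then what discriminant functions or regression equations are fitted to, as in the study.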

9.

Aim

Global-scale maps of the environment are an important source of information for researchers and decision makers. Often, these maps are created by training machine learning algorithms on field-sampled reference data using remote sensing information as predictors. Since field samples are often sparse and clustered in geographic space, model prediction requires a transfer of the trained model to regions where no reference data are available. However, recent studies question the feasibility of predictions far beyond the location of training data.

Innovation

We propose a novel workflow for spatial predictive mapping that leverages recent developments in this field and combines them in innovative ways with the aim of improved model transferability and performance assessment. We demonstrate, evaluate and discuss the workflow with data from recently published global environmental maps.

Main conclusions

Reducing predictors to those relevant for spatial prediction leads to an increase in model transferability and map accuracy without a decrease in prediction quality in areas with high sampling density. Still, reliable gap-free global predictions were not possible, highlighting that global maps and their evaluation are hampered by the limited availability of reference data.

10.
The aim of this study was to compare experts to naïve practitioners in rating the beauty and the technical quality of a Tai Chi sequence observed in video-clips (of high and middle level performances). Our hypotheses were: i) movement evaluation will correlate with the level of skill expressed in the kinematics of the observed action, but ii) only experts will be able to unravel the technical component from the aesthetic component of the observed action. The judgments delivered indicate that both expert and non-expert observers are able to discern a good from a mediocre performance; however, as expected, only experts discriminate the technical from the aesthetic component of the action evaluated, and do this independently of the level of skill shown by the model (high or middle level performances). Furthermore, the judgments delivered were strongly related to the kinematic variables measured in the observed model, indicating that observers rely on specific movement kinematics (e.g. movement amplitude, jerk and duration) for action evaluation. These results provide evidence of the complementary functional role of visual and motor action representation in movement evaluation and underline the role of expertise in judging the aesthetic quality of movements.

11.
Null models have proven to be an important quantitative tool in the search for ecological processes driving local diversity and species distribution. However, there remains an important concern that different processes, such as environmental conditions and biotic interactions, may produce similar patterns in species distributions. In this paper we present an analytical protocol for incorporating habitat suitability as an occupancy criterion in null models. Our approach involves modeling species presence or absence as a function of environmental conditions, and using the estimated site-specific probabilities of occurrence as the likelihood of species occupancy of a site during the generation of "null communities". We validated this approach by showing that type I error is not affected by the use of probabilities as a site occupancy criterion and is robust against a variety of predictive performances of the species-environmental models. We describe the expected differences when contrasting classical and environmentally constrained null models, and illustrate our approach with a data set of Dutch dune hunting spider assemblages. Overall, an environmentally constrained approach to null models will provide a more robust evaluation of species associations by facilitating the distinction between mutually exclusive processes that may shape species distributions and community assembly.
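The core of the protocol above — using site-specific occurrence probabilities when generating null communities — can be sketched as follows. The suitability matrix and the co-occurrence statistic below are toy assumptions, not the spider data.

```python
import numpy as np

rng = np.random.default_rng(3)

def null_communities(p_occ, n_null=999):
    """Generate null presence/absence communities where the chance that species s
    occupies site i equals its modelled habitat-suitability probability p_occ[i, s],
    instead of a uniform occupancy probability as in classical null models."""
    n_sites, n_species = p_occ.shape
    return rng.random((n_null, n_sites, n_species)) < p_occ

# Hypothetical suitability matrix: 5 sites x 3 species.
p_occ = np.array([[0.9, 0.1, 0.5],
                  [0.8, 0.2, 0.5],
                  [0.2, 0.7, 0.5],
                  [0.1, 0.8, 0.5],
                  [0.5, 0.5, 0.5]])

nulls = null_communities(p_occ)

# Toy co-occurrence statistic: number of sites shared by species 0 and 1,
# compared against its environmentally constrained null distribution.
observed_shared = 1
null_shared = (nulls[:, :, 0] & nulls[:, :, 1]).sum(axis=1)
p_value = (null_shared <= observed_shared).mean()   # one-sided test for segregation
print(p_value)
```

Because species 0 and 1 prefer opposite sites in this toy matrix, low co-occurrence is expected under the environmental null, illustrating how the constrained model separates environmental filtering from biotic interaction.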

12.
MOTIVATION: The Predictive Toxicology Challenge (PTC) was initiated to stimulate the development of advanced techniques for predictive toxicology models. The goal of this challenge was to compare different approaches for the prediction of rodent carcinogenicity, based on the experimental results of the US National Toxicology Program (NTP). RESULTS: 111 sets of predictions for 185 compounds were evaluated on quantitative and qualitative scales to select the most predictive models and those with the highest toxicological relevance. The accuracy of the submitted predictions was between 25 and 79%. An evaluation of the most accurate models by toxicological experts showed that it is still hard for domain experts to interpret the submitted models and to relate them to toxicological knowledge. AVAILABILITY: PTC details and data can be found at: http://www.predictive-toxicology.org/ptc/.

13.
MOTIVATION: The development of in silico models to predict chemical carcinogenesis from molecular structure would help greatly to prevent environmentally caused cancers. The Predictive Toxicology Challenge (PTC) competition was organized to test the state-of-the-art in applying machine learning to form such predictive models. RESULTS: Fourteen machine learning groups generated 111 models. The use of Receiver Operating Characteristic (ROC) space allowed the models to be uniformly compared regardless of the error cost function. We developed a statistical method to test whether a model performs significantly better than random in ROC space. Using this test as the criterion, five models performed better than random guessing at a significance level p of 0.05 (not corrected for multiple testing). Statistically, the best predictor was the Viniti model for female mice, with a p value below 0.002. The toxicologically most interesting models were Leuven2 for male mice and Kwansei for female rats. These models performed well in the statistical analysis and lie in the middle of ROC space, i.e. distant from extreme cost assumptions. These predictive models were also independently judged by domain experts to be among the three most interesting, and are believed to include a small but significant amount of empirically learned toxicological knowledge. AVAILABILITY: PTC details and data can be found at: http://www.predictive-toxicology.org/ptc/.
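The paper develops its own statistical test for beating random guessing in ROC space; as a simple stand-in for the idea, the sketch below uses a permutation test on the height above the ROC diagonal (TPR − FPR), with toy predictions rather than PTC submissions.

```python
import numpy as np

rng = np.random.default_rng(11)

def roc_point(y_true, y_pred):
    """(TPR, FPR) of binary predictions; a random guesser sits on the diagonal."""
    tpr = np.mean(y_pred[y_true == 1])
    fpr = np.mean(y_pred[y_true == 0])
    return tpr, fpr

def better_than_random_p(y_true, y_pred, n_perm=5000):
    """Permutation p-value for the statistic TPR - FPR (distance above the
    ROC diagonal); under random guessing its expectation is 0."""
    tpr, fpr = roc_point(y_true, y_pred)
    stat = tpr - fpr
    perm = np.empty(n_perm)
    for i in range(n_perm):
        p_tpr, p_fpr = roc_point(y_true, rng.permutation(y_pred))
        perm[i] = p_tpr - p_fpr
    return stat, (perm >= stat).mean()

# Toy carcinogenicity predictions for 60 compounds, ~70% correct.
y_true = rng.integers(0, 2, 60)
y_pred = np.where(rng.random(60) < 0.7, y_true, 1 - y_true)
stat, p = better_than_random_p(y_true, y_pred)
print(stat, p)
```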

14.
Abstract

Quantitative risk assessment (QRA) approaches systematically evaluate the likelihood, impacts, and risk of adverse events. QRA using fault tree analysis (FTA) is based on the assumptions that failure events have crisp probabilities and are statistically independent. The crisp probabilities of the events are often absent, which leads to data uncertainty, while the independence assumption leads to model uncertainty. Experts' knowledge can be utilized to obtain unknown failure data; however, this process is itself subject to issues such as imprecision, incompleteness, and lack of consensus. For this reason, to minimize the overall uncertainty in QRA, in addition to addressing the uncertainties in the knowledge, it is equally important to combine the opinions of multiple experts and update prior beliefs based on new evidence. In this article, a novel methodology is proposed for QRA by combining fuzzy set theory and evidence theory with Bayesian networks to describe the uncertainties, aggregate experts' opinions, and update prior probabilities when new evidence becomes available. Additionally, sensitivity analysis is performed to identify the most critical events in the FTA. The effectiveness of the proposed approach has been demonstrated via application to a practical system.
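One small piece of such a methodology — aggregating experts' fuzzy opinions into a crisp event probability — can be sketched with triangular fuzzy numbers and centroid defuzzification. The opinions and weights below are hypothetical; the paper's full method additionally uses evidence theory and Bayesian updating.

```python
def aggregate_experts(opinions, weights=None):
    """Weighted average of triangular fuzzy numbers (a, m, b) elicited from
    experts for a basic event's failure possibility."""
    n = len(opinions)
    weights = weights if weights is not None else [1 / n] * n
    return tuple(sum(w * o[i] for o, w in zip(opinions, weights)) for i in range(3))

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number (a, m, b) -> crisp value."""
    a, m, b = tfn
    return (a + m + b) / 3

# Three hypothetical expert opinions on one basic event; expert 2 weighted higher.
opinions = [(0.02, 0.05, 0.10), (0.03, 0.06, 0.09), (0.01, 0.04, 0.08)]
crisp = defuzzify(aggregate_experts(opinions, weights=[0.3, 0.4, 0.3]))
print(round(crisp, 4))
```

The resulting crisp value would then enter the fault tree (or Bayesian network) as the basic-event probability.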

15.
Improving predictions of restoration outcomes is increasingly important to resource managers for accountability and adaptive management, yet there is limited guidance for selecting a predictive model from the multitude available. The goal of this article was to identify an optimal predictive framework for restoration ecology using 11 modeling frameworks (including machine learning, inferential, and ensemble approaches) and three data groups (field data, geographic data [GIS], and a combination thereof). We test this approach with a dataset from a large postfire sagebrush reestablishment project in the Great Basin, U.S.A. Predictive power varied among models and data groups, ranging from 58% to 79% accuracy. Finer-scale field data generally had the greatest predictive power, although GIS data were present in the best models overall. An ensemble prediction computed from the 10 models parameterized to field data was well above average for accuracy but was outperformed by others that prioritized model parsimony by selecting predictor variables based on rankings of their importance among all candidate models. The variation in predictive power among a suite of modeling frameworks underscores the importance of a model comparison and refinement approach that evaluates multiple models and data groups, and selects variables based on their contribution to predictive power. The enhanced understanding of factors influencing restoration outcomes accomplished by this framework has the potential to aid the adaptive management process for improving future restoration outcomes.
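The ensemble idea above — combining several imperfect models' predictions — can be sketched with a majority vote over simulated classifiers (an odd number is used here to avoid ties; the data and per-model accuracy are illustrative, not the sagebrush dataset):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical restoration outcomes (1 = success) and predictions from
# 11 independently erring models, each ~70% accurate.
y = rng.integers(0, 2, 500)
models = np.where(rng.random((11, 500)) < 0.7, y, 1 - y)

individual = (models == y).mean(axis=1)                   # accuracy of each model
ensemble = ((models.mean(axis=0) > 0.5).astype(int) == y).mean()  # majority vote
print(individual.mean(), ensemble)
```

When model errors are independent, the vote is markedly more accurate than the average member; in the study the ensemble was nonetheless beaten by parsimonious single models, a reminder that correlated errors and variable selection matter as much as averaging.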

16.
Active learning and research-oriented activities have been increasingly used in smaller, specialized science courses. Application of this type of scientific teaching to large-enrollment introductory courses has, however, been a major challenge. The general microbiology lecture/laboratory course described here was designed to incorporate published active-learning methods. Three major case studies are used as platforms for active learning. Themes from the case studies are integrated into lectures and laboratory experiments, and into in-class and online discussions and assignments. Students are stimulated to apply facts to problem-solving and to learn research skills such as data analysis, writing, and working in teams. This course is feasible only because of its organizational framework, which makes use of teaching teams (made up of faculty, graduate assistants, and undergraduate assistants) and Web-based technology. Technology is a mode of communication, but also a system of course management. The relevance of this model to other biology courses led to assessment and evaluation, including an analysis of student responses to the new course, class performance, a university course evaluation, and retention of course learning. The results are indicative of an increase in student engagement in research-oriented activities and an appreciation of real-world context by students.

17.
A self-optimizing RBF neural network time-series model for predicting chlorophyll-a concentration
仝玉华, 周洪亮, 黄浙丰, 张宏建. 《生态学报》 (Acta Ecologica Sinica), 2011, 31(22): 6788-6795
Algal bloom formation is complex, nonlinear, and time-varying, and its accurate prediction remains an international challenge. Taking the Yuqiao Reservoir in Tianjin as the study site, and using routinely monitored aquatic ecological data from January 2000 to December 2003 (sampling interval: 10 days), we propose an intelligent, self-optimizing RBF neural network prediction model combined with time-series methods to predict chlorophyll-a concentration, a key indicator of algal blooms. We investigated the self-optimization of the training sample size and of the RBF network spread parameter (SPREAD), as well as the feasibility of using the model for short-term trend prediction of chlorophyll-a concentration in the reservoir. The results show that prediction performance varies with the SPREAD value and the sample size; the model automatically finds the optimal SPREAD value, and at least about two years of training samples are required for good predictions. The best performance was obtained with a sample size of 105 and a SPREAD value of 10, yielding high accuracy and a correlation coefficient R of 0.982 between predicted and measured values. The method provides a useful reference for algal bloom early warning in reservoirs.
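The "self-optimizing" step — picking the RBF spread by validation error on a lagged time series — can be sketched as below. The synthetic seasonal series, two-lag embedding, and candidate SPREAD grid are illustrative assumptions, not the Yuqiao Reservoir data or the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(9)

def rbf_fit_predict(X_tr, y_tr, X_te, spread):
    """Gaussian RBF network with one centre per training point; a tiny ridge
    term keeps the interpolation matrix well conditioned."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / spread ** 2)
    w = np.linalg.solve(gram(X_tr, X_tr) + 1e-6 * np.eye(len(X_tr)), y_tr)
    return gram(X_te, X_tr) @ w

# Hypothetical stand-in for a 10-day chlorophyll-a series: a seasonal cycle
# (~36.5 samples per year) plus observation noise, ~4 years long.
t = np.arange(146.0)
chl = 10 + 8 * np.sin(2 * np.pi * t / 36.5) + rng.normal(0, 0.5, t.size)

# Two-lag time-series embedding: predict chl[t] from (chl[t-2], chl[t-1]).
X_all = np.column_stack([chl[:-2], chl[1:-1]])
y_all = chl[2:]
X_tr, y_tr, X_te, y_te = X_all[:105], y_all[:105], X_all[105:], y_all[105:]

# "Self-optimization": keep the SPREAD value with the lowest validation error.
mse, spread = min((np.mean((rbf_fit_predict(X_tr, y_tr, X_te, s) - y_te) ** 2), s)
                  for s in (0.5, 1, 2, 5, 10, 20))
print("best SPREAD:", spread, "validation MSE:", round(mse, 3))
```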

18.
Many characteristics of sensorimotor control can be explained by models based on optimization and optimal control theories. However, most previous models assume that the central nervous system has access to precise knowledge of the sensorimotor system and its interacting environment. This viewpoint is difficult to justify theoretically and has not been convincingly validated by experiments. To address this problem, this paper presents a new computational mechanism for sensorimotor control from the perspective of adaptive dynamic programming (ADP), which shares some features of reinforcement learning. The ADP-based model for sensorimotor control suggests that a command signal for human movement is derived directly from real-time sensory data, without the need to identify the system dynamics. An iterative learning scheme based on the proposed ADP theory is developed, along with rigorous convergence analysis. Interestingly, the computational model advocated here is able to reproduce the motor learning behavior observed in experiments where a divergent force field or velocity-dependent force field was present. In addition, this modeling strategy provides a clear way to perform stability analysis of the overall system. Hence, we conjecture that human sensorimotor systems use an ADP-type mechanism to control movements and to achieve successful adaptation to uncertainties present in the environment.

19.
A short review of model selection techniques for radiation epidemiology
A common type of statistical challenge, widespread across many areas of research, involves the selection of a preferred model to describe the main features and trends in a particular data set. The objective of model selection is to balance the quality of fit to data against the complexity and predictive ability of the model achieving that fit. Several model selection techniques, including two information criteria, which aim to determine which set of model parameters the data best support, are reviewed here. The techniques rely on computing the probabilities of the different models, given the data, rather than considering the allowed values of the fitted parameters. Such information criteria have only recently been applied to the field of radiation epidemiology, even though they have longer traditions of application in other areas of research. The purpose of this review is to make two information criteria more accessible by fully detailing how to calculate them in a practical way and how to interpret the resulting values. This aim is supported with the aid of some examples involving the computation of risk models for radiation-induced solid cancer mortality fitted to the epidemiological data from the Japanese A-bomb survivors. These examples illustrate that the Bayesian information criterion is particularly useful in concluding that the weight of evidence is in favour of excess relative risk models that depend on age at exposure and of excess relative risk models that depend on attained age.
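The two information criteria reviewed above have simple closed forms, AIC = 2k − 2 ln L and BIC = k ln n − 2 ln L; a minimal sketch with simulated Gaussian data (not the A-bomb survivor data) compares a one-parameter against a two-parameter model:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L (stronger penalty for n > 7)."""
    return k * math.log(n) - 2 * loglik

# Gaussian data; model 1 fits only the mean (variance fixed at the true value),
# model 2 fits both mean and variance.
x = rng.normal(3.0, 1.0, 200)
n = len(x)

def gauss_loglik(x, mu, var):
    return -0.5 * n * math.log(2 * math.pi * var) - ((x - mu) ** 2).sum() / (2 * var)

ll1 = gauss_loglik(x, x.mean(), 1.0)        # k = 1 parameter
ll2 = gauss_loglik(x, x.mean(), x.var())    # k = 2 parameters (MLE variance)
print("model 1: AIC", aic(ll1, 1), "BIC", bic(ll1, 1, n))
print("model 2: AIC", aic(ll2, 2), "BIC", bic(ll2, 2, n))
```

The extra parameter always raises the likelihood (ll2 ≥ ll1 by construction, since the MLE variance maximizes it), so the criteria exist precisely to ask whether that gain exceeds the complexity penalty.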

20.
In this study, the nerve conduction study (NCS) waveform assignment performance of algorithms used in a commercial electrodiagnostic instrument was compared against three neurophysiology experts for motor, F-wave, and sensory parameters. Assignments were made on a common set of waveforms, thereby eliminating a source of variability present in earlier studies that relied on re-testing patients. The performance of the algorithms was comparable to that of the experts, as quantified by the intraclass correlation coefficient and Bland–Altman analyses. The observed algorithm-expert agreement was higher than previously reported estimates, suggesting that the approach of scoring a common set of waveforms may provide a more accurate measure of algorithm performance.
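The Bland–Altman analysis used above reduces to a mean difference (bias) and its 95% limits of agreement; a minimal sketch with simulated latencies (illustrative numbers, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(8)

def bland_altman(a, b):
    """Bias and 95% limits of agreement (bias ± 1.96 * SD of the differences)
    between two raters measuring the same items."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical onset latencies (ms): algorithm vs expert on 40 shared waveforms,
# with a tiny systematic offset and small random disagreement.
expert = rng.normal(3.5, 0.4, 40)
algorithm = expert + rng.normal(0.02, 0.05, 40)

bias, (lo, hi) = bland_altman(algorithm, expert)
print(round(bias, 3), round(lo, 3), round(hi, 3))
```

Narrow limits of agreement around a near-zero bias are what "comparable to the experts" means operationally in this analysis.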


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号