Similar Articles
 20 similar articles found (search time: 46 ms)
1.
In animal foraging, the optimal search strategy in an unknown environment varies depending on the context, such as the resource density and season. When food is distributed sparsely and uniformly, superdiffusive walks outperform normal-diffusive walks. However, superdiffusive walks are no longer advantageous when random walkers forage in resource-rich environments. It is not currently clear whether a relationship exists between an agent's use of local information to make subjective inferences about global food distribution and the optimal random walk strategy. Therefore, I investigated how flexible exploration is achieved if an agent alters its directional rule based on the local resource distribution. In the proposed model, the agent, a Brownian-like walker, estimates whether an abundant or sparse area is nearby using local resource patterns and then makes a decision by altering its movement rules. I show that the agent can behave like a non-Brownian walker if it interacts with a prey distribution. The agent can adaptively switch between diffusive properties depending on the resource density. This leads to a more effective resource-searching performance than a simple random-walk model. These results demonstrate that optimal searching is a context-dependent process.
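The idea of density-dependent switching can be illustrated with a minimal, hypothetical sketch (not the paper's actual model): a 1-D walker that takes short, Brownian-like steps where resources are locally abundant and heavy-tailed, Lévy-like steps where they are sparse. The density threshold, the exponent `mu`, and all function names are assumptions for illustration only.

```python
import random

def step_length(local_density, mu=2.0, max_step=50.0):
    """Draw one step: short steps in rich patches, heavy-tailed steps in sparse ones."""
    if local_density > 0.5:        # assumed threshold: rich patch, stay local
        return 1.0
    # sparse patch: power-law tail, P(l) ~ l^(-mu), truncated at max_step
    u = 1.0 - random.random()      # u in (0, 1] avoids division by zero
    return min(max_step, u ** (-1.0 / (mu - 1.0)))

def walk(density_at, n_steps=1000):
    """Simulate the walk; density_at(x) is the agent's local density estimate."""
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        direction = random.choice([-1, 1])
        x += direction * step_length(density_at(x))
        path.append(x)
    return path
```

Run on a uniform sparse environment (`walk(lambda x: 0.1)`), the path shows occasional long relocations; on a rich one (`walk(lambda x: 0.9)`), it stays diffusive.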

2.
Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.
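To make the underlying statistical problem concrete, here is a minimal sketch of maximum-likelihood fitting of a two-component Poisson mixture by EM. The soft assignment in the E-step plays the role the abstract attributes to competition via lateral inhibition, and the M-step's weighted averaging loosely parallels Hebbian updating with scaling. The equal-weight simplification and all names are illustrative assumptions, not the paper's implementation.

```python
import math

def poisson_logpmf(k, lam):
    """Log probability of count k under a Poisson(lam) distribution."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def em_poisson_mixture(data, lams, n_iter=50):
    """EM for a Poisson mixture with fixed equal mixing weights."""
    for _ in range(n_iter):
        # E-step: responsibilities = softmax of per-component log-likelihoods
        resp = []
        for k in data:
            ls = [poisson_logpmf(k, l) for l in lams]
            m = max(ls)
            ws = [math.exp(l - m) for l in ls]
            z = sum(ws)
            resp.append([w / z for w in ws])
        # M-step: each rate becomes the responsibility-weighted mean count
        lams = [sum(r[j] * k for r, k in zip(resp, data)) /
                max(1e-12, sum(r[j] for r in resp))
                for j in range(len(lams))]
    return lams
```

With well-separated counts, the rates converge to the per-cluster means, which is the maximum-likelihood solution the neural rules are shown to approximate.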

3.
Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable “feeling of knowing” or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. 
This ability cannot be reduced to simple heuristics; rather, it appears to be a core property of the learning process.
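A minimal sketch of the core idea, tracking a transition probability with a Beta posterior whose width provides a graded confidence signal, might look as follows. The class name and the log-precision confidence measure are illustrative assumptions, and the sketch omits the changing-environment (volatility) component of the actual task.

```python
import math

class TransitionLearner:
    """Estimate P(B follows A) with a Beta(a, b) posterior."""
    def __init__(self, a=1.0, b=1.0):
        self.a, self.b = a, b        # Beta(1, 1) = uniform prior

    def update(self, followed_by_b):
        """Observe one A -> ? transition."""
        if followed_by_b:
            self.a += 1
        else:
            self.b += 1

    def estimate(self):
        """Posterior mean of the transition probability."""
        return self.a / (self.a + self.b)

    def confidence(self):
        """Assumed confidence readout: log-precision of the posterior."""
        n = self.a + self.b
        var = self.a * self.b / (n ** 2 * (n + 1))
        return -0.5 * math.log(var)
```

As in the reported findings, `confidence()` grows with the number of observations during stable periods, because the posterior variance shrinks.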

4.
Functional explanations of behaviour often propose optimal strategies for organisms to follow. These 'best' strategies can be difficult to perform given biological constraints such as neural architecture and physiology. Instead, simple heuristics or 'rules-of-thumb' that approximate these optimal strategies may be followed. From a modelling perspective, rules-of-thumb are also useful tools for considering how group behaviour is shaped by the behaviours of individuals. Using simple rules-of-thumb reduces the complexity of these models, but care must be taken to use rules that are biologically relevant. Here, we investigate the similarity between the outputs of a two-player dynamic foraging game (which generated optimal but complex solutions) and a computational simulation of the behaviours of the two members of a foraging pair who instead followed a rule-of-thumb approximation of the game's output. The original game generated complex results, and we demonstrate here that the simulations following the much-simplified rules-of-thumb also generate complex results, suggesting that the rules-of-thumb were sufficient to make some of the model outcomes unpredictable. There was some agreement between the two modelling techniques, but differences arose – particularly when pair members were not identical in how they gained and lost energy. We argue that exploring how rules-of-thumb perform in comparison to their optimal counterparts is an important exercise for biologically validating the output of agent-based models of group behaviour.

5.
A fundamental and frequently overlooked aspect of animal learning is its reliance on compatibility between the learning rules used and the attentional and motivational mechanisms directing them to process the relevant data (called here data-acquisition mechanisms). We propose that this coordinated action, which may first appear fragile and error prone, is in fact extremely powerful, and critical for understanding cognitive evolution. Using basic examples from imprinting and associative learning, we argue that by coevolving to handle the natural distribution of data in the animal's environment, learning and data-acquisition mechanisms are tuned jointly so as to facilitate effective learning using relatively little memory and computation. We then suggest that this coevolutionary process offers a feasible path for the incremental evolution of complex cognitive systems, because it can greatly simplify learning. This is illustrated by considering how animals and humans can use these simple mechanisms to learn complex patterns and represent them in the brain. We conclude with some predictions and suggested directions for experimental and theoretical work.

6.
The notion that cooperation can help a group of agents solve problems more efficiently than if those agents worked in isolation is prevalent in computer science and business circles. Here we consider a primordial form of cooperation – imitative learning – that allows an effective exchange of information between agents, which are viewed as the processing units of a social intelligence system or collective brain. In particular, we use agent-based simulations to study the performance of a group of agents in solving a cryptarithmetic problem. An agent can either perform local random moves to explore the solution space of the problem or imitate a model agent – the best-performing agent in its influence network. There is a trade-off between the number of agents and the imitation probability, and for the optimal balance between these parameters we observe a thirtyfold reduction in the computational cost of finding the solution of the cryptarithmetic problem as compared with independent search. If those parameters are chosen far from the optimal setting, however, imitative learning can greatly impair the performance of the group.
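A much-simplified sketch of this imitation scheme, on a toy bitstring search rather than a cryptarithmetic problem, might look as follows. Every name and parameter value here is an illustrative assumption, not the paper's setup.

```python
import random

def imitative_search(n_agents=10, p_imitate=0.3, n_bits=16, seed=1, max_steps=10000):
    """Agents seek an all-ones target; with probability p_imitate an agent
    copies one bit from the current best agent (the 'model'), otherwise it
    makes a blind local move (resampling one bit at random)."""
    rng = random.Random(seed)
    agents = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_agents)]
    score = lambda a: sum(a)           # fitness = number of correct (1) bits
    for t in range(1, max_steps + 1):
        best = max(agents, key=score)
        if score(best) == n_bits:
            return t                   # steps taken ~ computational cost
        for a in agents:
            i = rng.randrange(n_bits)
            if rng.random() < p_imitate:
                a[i] = best[i]         # imitate the model agent
            else:
                a[i] = rng.randint(0, 1)  # local random exploration
    return max_steps
```

Sweeping `p_imitate` from 0 upward typically shows the trade-off described: moderate imitation cuts the search cost, while excessive imitation collapses diversity and can trap the whole group.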

7.
The quality of a chosen partner can be one of the most significant factors affecting an animal's long-term reproductive success. We investigate optimal mate choice rules in an environment where there is both local variation in the quality of potential mates within each local mating pool and spatial (or temporal) variation in the average quality of the pools themselves. In such a situation, a robust rule that works well across a variety of environments will confer a significant reproductive advantage. We formulate a full Bayesian model for updating information in such a varying environment and derive the form of the rule that maximizes expected reward in a spatially varying environment. We compare the theoretical performance of our optimal learning rule against both fixed threshold rules and simpler near-optimal learning rules and show that learning is most advantageous when both the local and environmental variances are large. We consider how optimal simple learning rules might evolve and compare their evolution with that of fixed threshold rules using genetic algorithms as minimal models of the relevant genetics. Our analysis points up the variety of ways in which a near-optimal rule can be expressed. Finally, we describe how our results extend to the case of temporally varying environments.

8.
Do we expect natural selection to produce rational behaviour?
We expect that natural selection should result in behavioural rules which perform well; however, animals (including humans) sometimes make bad decisions. Researchers account for these with a variety of explanations; we concentrate on two of them. One explanation is that the outcome is a side effect; what matters is how a rule performs (in terms of reproductive success). Several rules may perform well in the environment in which they have evolved, but their performance may differ in a 'new' environment (e.g. the laboratory). Some rules may perform very badly in this environment. We use the debate about whether animals follow the matching law rather than maximizing their gains as an illustration. Another possibility is that we were wrong about what is optimal. Here, the general idea is that the setting in which optimal decisions are investigated is too simple and may not include elements that add extra degrees of freedom to the situation.

9.
Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network’s spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network’s behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. 
We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms.

10.
R is an increasingly preferred software environment for data analytics and statistical computing among scientists and practitioners. Packages markedly extend R’s utility and ameliorate inefficient solutions to data science problems. We outline 10 simple rules for finding relevant packages and determining which package is best for your desired use. We begin in Rule 1 with tips on how to consider your purpose, which will guide your search to follow, where, in Rule 2, you’ll learn best practices for finding and collecting options. Rules 3 and 4 will help you navigate packages’ profiles and explore the extent of their online resources, so that you can be confident in the quality of the package you choose and assured that you’ll be able to access support. In Rules 5 and 6, you’ll become familiar with how the R Community evaluates packages and learn how to assess the popularity and utility of packages for yourself. Rules 7 and 8 will teach you how to investigate and track package development processes, so you can further evaluate their merit. We end in Rules 9 and 10 with more hands-on approaches, which involve digging into package code.

11.
By formulating Helmholtz's ideas about perception, in terms of modern-day theories, one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts: using constructs from statistical physics, the problems of inferring the causes of sensory input and learning the causal structure of their generation can be resolved using exactly the same principles. Furthermore, inference and learning can proceed in a biologically plausible fashion. The ensuing scheme rests on Empirical Bayes and hierarchical models of how sensory input is caused. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of cortical organisation and responses. In this paper, we show these perceptual processes are just one aspect of emergent behaviours of systems that conform to a free energy principle. The free energy considered here measures the difference between the probability distribution of environmental quantities that act on the system and an arbitrary distribution encoded by its configuration. The system can minimise free energy by changing its configuration to affect the way it samples the environment or change the distribution it encodes. These changes correspond to action and perception respectively and lead to an adaptive exchange with the environment that is characteristic of biological systems. This treatment assumes that the system's state and structure encode an implicit and probabilistic model of the environment. We will look at the models entailed by the brain and how minimisation of its free energy can explain its dynamics and structure.

12.
We have previously tried to explain perceptual inference and learning under a free-energy principle that pursues Helmholtz’s agenda to understand the brain in terms of energy minimization. It is fairly easy to show that making inferences about the causes of sensory data can be cast as the minimization of a free-energy bound on the likelihood of sensory inputs, given an internal model of how they were caused. In this article, we consider what would happen if the data themselves were sampled to minimize this bound. It transpires that the ensuing active sampling or inference is mandated by ergodic arguments based on the very existence of adaptive agents. Furthermore, it accounts for many aspects of motor behavior, from retinal stabilization to goal-seeking. In particular, it suggests that motor control can be understood as fulfilling prior expectations about proprioceptive sensations. This formulation can explain why adaptive behavior emerges in biological agents and suggests a simple alternative to optimal control theory. We illustrate these points using simulations of oculomotor control and then apply the same principles to cued and goal-directed movements. In short, the free-energy formulation may provide an alternative perspective on motor control that places it in an intimate relationship with perception.

13.
In mammals, goal-directed and planning processes support the flexible behaviour needed to face new situations that cannot be tackled by more efficient but rigid habitual behaviours. Within the Bayesian modelling approach to brain and behaviour, models have been proposed that perform planning as probabilistic inference, but this approach encounters a crucial problem: explaining how such inference might be implemented in the brain's spiking networks. Recently, the literature has proposed models that address this problem with recurrent spiking neural networks able to internally simulate state trajectories, the core function underlying planning. However, these models have important limitations that make them biologically implausible: their world model is trained 'off-line', before the target tasks are solved, and it is trained with supervised learning procedures that are biologically and ecologically implausible. Here we propose two novel hypotheses on how the brain might overcome these problems, and operationalise them in a novel architecture pivoting on a spiking recurrent neural network. The first hypothesis allows the architecture to learn the world model in parallel with its use for planning: to this purpose, a new arbitration mechanism decides when to explore, to learn the world model, or when to exploit it, to plan, based on the entropy of the world model itself. The second hypothesis allows the architecture to learn the world model with an unsupervised learning process, by observing the effects of actions. The architecture is validated by reproducing and accounting for the learning profiles and reaction times of human participants learning to solve a visuomotor task that is new to them. Overall, the architecture represents the first instance of a model bridging probabilistic planning and spiking processes that has a degree of autonomy analogous to that of real organisms.

14.
Accurate estimates of animal abundance are essential for guiding effective management, and poor survey data can produce misleading inferences. Aerial surveys are an efficient survey platform, capable of collecting wildlife data across large spatial extents in short timeframes. However, these surveys can yield unreliable data if not carefully executed. Despite a long history of aerial survey use in ecological research, problems common to aerial surveys have not yet been adequately resolved. Through an extensive review of the aerial survey literature over the last 50 years, we evaluated how common problems encountered in the data (including nondetection, counting error, and species misidentification) can manifest, the potential difficulties conferred, and the history of how these challenges have been addressed. Additionally, we used a double‐observer case study focused on waterbird data collected via aerial surveys and an online group (flock) counting quiz to explore the potential extent of each challenge and possible resolutions. We found that nearly three quarters of the aerial survey methodology literature focused on accounting for nondetection errors, while issues of counting error and misidentification were less commonly addressed. Through our case study, we demonstrated how these challenges can prove problematic by detailing the extent and magnitude of potential errors. Using our online quiz, we showed that aerial observers typically undercount group size and that the magnitude of counting errors increases with group size. Our results illustrate how each issue can act to bias inferences, highlighting the importance of considering individual methods for mitigating potential problems separately during survey design and analysis. 
We synthesized the information gained from our analyses to evaluate strategies for overcoming the challenges of using aerial survey data to estimate wildlife abundance, such as digital data collection methods, pooling species records by family, and ordinal modeling using binned data. Recognizing conditions that can lead to data collection errors and having reasonable solutions for addressing errors can allow researchers to allocate resources effectively to mitigate the most significant challenges for obtaining reliable aerial survey data.

15.
Abstract: Satellite tracking is currently used to make inferences about avian populations. The cost of transmitters and the logistical challenges of working with some species can limit sample size and the strength of inferences. Therefore, careful study design, including consideration of sample size, is important. We used simulations to examine how sample size, population size, and population variance affect the probability of making reliable inferences from a sample and the precision of estimates of population parameters. For populations of >100 individuals, a sample of >20 birds was needed to make reliable inferences about questions with simple outcomes (i.e., 2 possible outcomes). Sample size demands increased rapidly for more complex problems: for example, in a problem with 3 outcomes, a sample of >75 individuals is needed for proper inference to the population. Combining data from satellite telemetry studies with data from surveys or other types of sampling may improve inference strength.
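The kind of simulation described can be sketched as follows for the simplest case, estimating how often the most common outcome in a sample matches the most common outcome in the population. The population size, proportions, and simulation counts below are illustrative assumptions, not the study's actual settings.

```python
import random

def prob_correct_majority(pop_size, n_outcomes, true_props, sample_size,
                          n_sims=2000, seed=7):
    """Estimate P(sample mode == population mode) by Monte Carlo."""
    rng = random.Random(seed)
    # Build a finite population matching the assumed proportions.
    pop = []
    for outcome, p in enumerate(true_props):
        pop += [outcome] * round(p * pop_size)
    true_mode = max(set(pop), key=pop.count)
    hits = 0
    for _ in range(n_sims):
        sample = rng.sample(pop, sample_size)   # without replacement
        counts = [sample.count(k) for k in range(n_outcomes)]
        if counts.index(max(counts)) == true_mode:
            hits += 1
    return hits / n_sims
```

Comparing `sample_size=5` against `sample_size=20` for a 70/30 two-outcome population reproduces the qualitative point: small samples noticeably raise the chance of drawing the wrong conclusion.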

16.
With the growing uncertainty and complexity of the manufacturing environment, most scheduling problems have been proven to be NP-complete, which can degrade the performance of conventional operations research (OR) techniques. This article presents a system-attribute-oriented knowledge-based scheduling system (SAOSS) with inductive learning capability. Drawing on its rich heritage from artificial intelligence (AI), SAOSS takes a multialgorithm paradigm which makes it more intelligent and flexible, and better suited than other approaches to tackling complicated, dynamic scheduling problems. SAOSS employs an efficient and effective inductive learning method, the continuous iterative dichotomiser 3 (CID3) algorithm, to induce decision rules for scheduling by converting the corresponding decision trees into hidden layers of a self-generated neural network. Connection weights between hidden units imply the scheduling heuristics, which are then formulated into scheduling rules. An FMS scheduling problem is given for illustration. The scheduling results show that the system-attribute-oriented knowledge-based approach is capable of addressing dynamic scheduling problems.

17.
For current computational intelligence techniques, a major challenge is how to learn new concepts in a changing environment. Traditional learning schemes cannot adequately address this problem due to the lack of a dynamic data selection mechanism. In this paper, inspired by the human learning process, a novel classification algorithm based on an incremental semi-supervised support vector machine (SVM) is proposed. Through analysis of the prediction confidence of samples and of the data distribution in a changing environment, a “soft-start” approach, a data selection mechanism, and a data cleaning mechanism are designed, completing the construction of our incremental semi-supervised learning system. Notably, the careful design of the proposed algorithm effectively reduces its computational complexity. In addition, a detailed analysis is carried out for the possible appearance of new labeled samples during the learning process. The results show that our algorithm does not rely on a model of the sample distribution, has an extremely low rate of introducing wrongly semi-labeled samples, and can effectively use unlabeled samples to enrich the classifier's knowledge and improve its accuracy. Moreover, our method also has outstanding generalization performance and the ability to overcome concept drift in a changing environment.

18.
Studies of sequential decision-making in humans frequently find suboptimal performance relative to an ideal actor that has perfect knowledge of the model of how rewards and events are generated in the environment. Rather than being suboptimal, we argue that the learning problem humans face is more complex, in that it also involves learning the structure of reward generation in the environment. We formulate the problem of structure learning in sequential decision tasks using Bayesian reinforcement learning, and show that learning the generative model for rewards qualitatively changes the behavior of an optimal learning agent. To test whether people exhibit structure learning, we performed experiments involving a mixture of one-armed and two-armed bandit reward models, where structure learning produces many of the qualitative behaviors deemed suboptimal in previous studies. Our results demonstrate humans can perform structure learning in a near-optimal manner.
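One way to sketch structure learning over bandit reward models is Bayesian model comparison between an "independent arms" and a "coupled arms" generative model, using Beta-Bernoulli marginal likelihoods. This is an illustrative simplification, not the paper's full Bayesian reinforcement-learning formulation; the coupling assumed here (arm 2's success rate equals one minus arm 1's) is a hypothetical structure chosen for the example.

```python
import math

def log_marginal(s, f):
    """Log marginal likelihood of s successes and f failures under a
    uniform Beta(1, 1) prior on the success rate: B(s+1, f+1)."""
    return math.lgamma(s + 1) + math.lgamma(f + 1) - math.lgamma(s + f + 2)

def p_coupled(s1, f1, s2, f2):
    """Posterior probability (equal model priors) that the two arms share a
    coupled structure (rate2 = 1 - rate1) versus being independent."""
    log_indep = log_marginal(s1, f1) + log_marginal(s2, f2)
    # Under coupling, arm 2's failures count as evidence for arm 1's rate.
    log_coup = log_marginal(s1 + f2, f1 + s2)
    m = max(log_indep, log_coup)
    w_indep = math.exp(log_indep - m)
    w_coup = math.exp(log_coup - m)
    return w_coup / (w_indep + w_coup)
```

Anticorrelated outcomes (one arm mostly succeeding while the other mostly fails) drive `p_coupled` above 0.5, shifting the agent toward the coupled generative model, which is the qualitative signature of structure learning.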

19.
Many real-world optimization problems are dynamic: unlike static problems, they require an algorithm to adaptively track changing optima over time rather than converge once on a single global optimum. This paper proposes a novel comprehensive learning artificial bee colony optimizer (CLABC) for optimization in dynamic environments, which employs a pool of foraging strategies to balance the exploration-exploitation tradeoff. The main motive of CLABC is to enrich artificial bee foraging behaviors in the ABC model by combining Powell’s pattern search method, a life-cycle model, and a crossover-based social learning strategy. CLABC is a more colony-realistic model in which bees can reproduce and die dynamically throughout the foraging process, so the population size varies as the algorithm runs. The experiments for evaluating CLABC are conducted on the dynamic moving peaks benchmark. Furthermore, the proposed algorithm is applied to a real-world application of dynamic RFID network optimization. Statistical analysis of all these cases highlights the significant performance improvement due to the beneficial combination and demonstrates the performance superiority of the proposed algorithm.

20.
Scientists studying how languages change over time often make an analogy between biological and cultural evolution, with words or grammars behaving like traits subject to natural selection. Recent work has exploited this analogy by using models of biological evolution to explain the properties of languages and other cultural artefacts. However, the mechanisms of biological and cultural evolution are very different: biological traits are passed between generations by genes, while languages and concepts are transmitted through learning. Here we show that these different mechanisms can have the same results, demonstrating that the transmission of frequency distributions over variants of linguistic forms by Bayesian learners is equivalent to the Wright–Fisher model of genetic drift. This simple learning mechanism thus provides a justification for the use of models of genetic drift in studying language evolution. In addition to providing an explicit connection between biological and cultural evolution, this allows us to define a ‘neutral’ model that indicates how languages can change in the absence of selection at the level of linguistic variants. We demonstrate that this neutral model can account for three phenomena: the s-shaped curve of language change, the distribution of word frequencies, and the relationship between word frequencies and extinction rates.
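The claimed equivalence can be illustrated with a neutral Wright–Fisher simulation: each generation of n "learners" adopts a linguistic variant with probability equal to its current frequency, which is exactly binomial resampling. The function name and parameter values are illustrative assumptions.

```python
import random

def wright_fisher_drift(n=100, x0=50, generations=200, seed=3):
    """Neutral Wright-Fisher drift on a two-variant population of size n.

    Each generation, n new learners independently adopt variant 1 with
    probability equal to its current frequency - the same resampling that
    iterated transmission by unbiased Bayesian learners performs."""
    rng = random.Random(seed)
    x = x0                       # current count of variant 1
    trajectory = [x]
    for _ in range(generations):
        p = x / n
        x = sum(rng.random() < p for _ in range(n))  # Binomial(n, p) draw
        trajectory.append(x)
        if x in (0, n):          # extinction or fixation of the variant
            break
    return trajectory
```

With no selection term, a variant's frequency performs a random walk that eventually fixes or goes extinct, which is the 'neutral' baseline the abstract uses to explain language change without selection on variants.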


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号