Similar Literature
 A total of 20 similar documents were found (search time: 31 ms)
1.
We study self-organized cooperation between heterogeneous robotic swarms. The robots of each swarm play distinct roles based on their different characteristics. We investigate how simple local interactions between the robots of the different swarms can let the swarms cooperate to solve complex tasks. We focus on an indoor navigation task, in which we use a swarm of wheeled robots, called foot-bots, and a swarm of flying robots that can attach to the ceiling, called eye-bots. The task of the foot-bots is to move back and forth between a source and a target location. The role of the eye-bots is to guide the foot-bots: they choose positions on the ceiling and from there give local directional instructions to foot-bots passing by. To obtain efficient paths for foot-bot navigation, eye-bots need, on the one hand, to choose good positions and, on the other, to learn the right instructions to give. We investigate each of these aspects. Our solution is based on a process of mutual adaptation, in which foot-bots execute instructions given by eye-bots, and eye-bots observe the behavior of foot-bots to adapt their positions and the instructions they give. Our approach is inspired by the pheromone-mediated navigation of ants, as eye-bots serve as stigmergic markers for foot-bot navigation. Through simulation, we show how this system is able to find efficient paths in complex environments and to display different kinds of complex and scalable self-organized behaviors, such as shortest-path finding and automatic traffic spreading.

2.
In heterogeneous distributed computing systems such as cloud computing, mapping tasks to resources is a major issue that can strongly affect system performance. Because of heterogeneous and dynamic features and the dependencies among requests, task scheduling is known to be an NP-complete problem. In this paper, we propose a hybrid heuristic method (HSGA), based on a genetic algorithm, to find a suitable schedule for a workflow graph that converges quickly while optimizing makespan, load balancing on resources, and speedup ratio. First, the HSGA algorithm prioritizes the tasks of a complex graph according to their impact on other tasks, based on the graph topology; this technique efficiently reduces the completion time of the application. Then, it merges the Best-Fit and Round Robin methods to build a good initial population so that a satisfactory solution is obtained quickly, and it applies suitable operations, such as mutation, to steer the algorithm toward an optimized solution. The algorithm evaluates solutions using parameters that are meaningful in a cloud environment. Finally, the proposed algorithm yields better results than the other algorithms studied as the number of tasks in the application graph increases.
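The scheduling approach this abstract describes — a genetic algorithm whose initial population is seeded with a greedy Best-Fit schedule and then refined by crossover and mutation against a makespan objective — can be illustrated with a deliberately simplified sketch. This is not the paper's HSGA: it schedules independent tasks on identical machines rather than a workflow DAG, and it ignores load balancing and speedup. All function names and parameter values are illustrative.

```python
import random

def makespan(assignment, task_times, n_machines):
    """Completion time of the most heavily loaded machine."""
    loads = [0.0] * n_machines
    for task, machine in enumerate(assignment):
        loads[machine] += task_times[task]
    return max(loads)

def best_fit_seed(task_times, n_machines):
    """Greedy seed: assign each task to the currently least-loaded machine."""
    loads = [0.0] * n_machines
    assignment = []
    for t in task_times:
        m = loads.index(min(loads))
        assignment.append(m)
        loads[m] += t
    return assignment

def ga_schedule(task_times, n_machines, pop_size=20, generations=100, seed=0):
    """GA over task-to-machine assignments, seeded with one greedy schedule."""
    rng = random.Random(seed)
    pop = [best_fit_seed(task_times, n_machines)]
    pop += [[rng.randrange(n_machines) for _ in task_times]
            for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=lambda a: makespan(a, task_times, n_machines))
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(task_times))
            child = p1[:cut] + p2[cut:]          # one-point crossover
            if rng.random() < 0.2:               # mutation: reassign one task
                child[rng.randrange(len(child))] = rng.randrange(n_machines)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: makespan(a, task_times, n_machines))
```

For example, `ga_schedule([4, 3, 2, 7, 5, 1], 2)` returns an assignment whose makespan is 11, the optimum for that instance (total work 22 split over 2 machines).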

3.
Zhang W  Yano K  Karube I 《Bio Systems》2007,88(1-2):35-55
Evolutionary molecular design based on genetic algorithms (GAs) has been demonstrated to be a flexible and efficient optimization approach with the potential to locate global optima. Its efficacy and efficiency depend largely on the operations and control parameters of the GAs. Accordingly, we have explored new operations and probed for good parameter settings through simulations. The findings have been evaluated in a helical peptide design according to the "Parameter setting by analogy" strategy; highly helical peptides have been successfully obtained with a population of only 16 peptides and 5 iterative cycles. The results indicate that new operations such as multi-step crossover-mutation are able to improve the explorative efficiency and to reduce the sensitivity to crossover and mutation rates (CR-MR). The efficiency of the peptide design was further improved by running the GAs at the good CR-MR setting determined through simulation. These results suggest that probing the operations and parameter settings through simulation, in combination with the "Parameter setting by analogy" strategy, provides an effective framework for improving the efficiency of the approach. We conclude that this framework will be useful for practical peptide design and for gaining a better understanding of evolutionary molecular design.

4.
The goal of many shotgun proteomics experiments is to determine the protein complement of a complex biological mixture. For many mixtures, most methodological approaches fall significantly short of this goal. Existing solutions to this problem typically subdivide the task into two stages: first identifying a collection of peptides with a low false discovery rate, and then inferring from the peptides a corresponding set of proteins. In contrast, we formulate protein identification as a single optimization problem, which we solve using machine learning methods. This approach is motivated by the observation that the peptide- and protein-level tasks are cooperative, and the solution to each can be improved by using information about the solution to the other. The resulting algorithm directly controls the relevant error rate, can incorporate a wide variety of evidence and, for complex samples, provides 18-34% more protein identifications than current state-of-the-art approaches.

5.

Crowdsourcing

Crowdsourcing is the practice of obtaining needed ideas, services, or content by requesting contributions from a large group of people. Amazon Mechanical Turk is a web marketplace for crowdsourcing microtasks, such as answering surveys and image tagging. We explored the limits of crowdsourcing by using Mechanical Turk for a more complicated task: analysis and creation of wind simulations.

Harnessing Crowdworkers for Engineering

Our investigation examined the feasibility of using crowdsourcing for complex, highly technical tasks, to determine whether the benefits of crowdsourcing could be harnessed to contribute accurately and effectively to solving complex real-world engineering problems. Of course, untrained crowds cannot be used as a mere substitute for trained expertise. Rather, we sought to understand how crowd workers can be used as a large pool of labor for a preliminary analysis of complex data.

Virtual Wind Tunnel

We compared the skill of anonymous crowd workers from Amazon Mechanical Turk with that of civil engineering graduate students in making a first pass at analyzing wind simulation data. In the first phase, we posted analysis questions to Amazon crowd workers and to two groups of civil engineering graduate students. In a second phase of our experiment, crowd workers and students were instructed to create simulations on our Virtual Wind Tunnel website to solve a more complex task.

Conclusions

With a sufficiently comprehensive tutorial and compensation similar to typical crowdsourcing wages, we were able to enlist crowd workers to effectively complete longer, more complex tasks with competence comparable to that of graduate students who had more comprehensive, expert-level knowledge. Furthermore, more complex tasks require increased communication with the workers; as tasks become more complex, the employment relationship becomes more akin to outsourcing than to crowdsourcing. Through this investigation, we were able to stretch and explore the limits of crowdsourcing as a tool for solving complex problems.

6.
Jay F  François O  Blum MG 《PloS one》2011,6(1):e16227

Background

The mainland of the Americas is home to a remarkable diversity of languages, and the relationships between genes and languages have attracted considerable attention in the past. Here we investigate to which extent geography and languages can predict the genetic structure of Native American populations.

Methodology/Principal Findings

Our approach is based on a Bayesian latent cluster regression model in which cluster membership is explained by geographic and linguistic covariates. After correcting for geographic effects, we find that the inclusion of linguistic information improves the prediction of individual membership in genetic clusters. We further compare the predictive power of Greenberg's and The Ethnologue classifications of Amerindian languages. We report that The Ethnologue classification provides a better genetic proxy than Greenberg's classification at both the stock and the group levels. Although high predictive values can be achieved from The Ethnologue classification, we nevertheless emphasize that the Choco, Chibchan, and Tupi linguistic families do not exhibit a univocal correspondence with genetic clusters.

Conclusions/Significance

The Bayesian latent class regression model described here is efficient at predicting population genetic structure using geographic and linguistic information in Native American populations.

7.
Li X  Rao S  Wang Y  Gong B 《Nucleic acids research》2004,32(9):2685-2694
Current applications of microarrays focus on precise classification or discovery of biological types, for example tumor versus normal phenotypes in cancer research. Several challenging scientific tasks of the post-genomic epoch, such as hunting for the genes underlying complex diseases from genome-wide gene expression profiles and thereby building the corresponding gene networks, have been largely overlooked because of the lack of an efficient analysis approach. We have therefore developed an innovative ensemble decision approach that can efficiently perform multiple gene mining tasks. An application of this approach to two publicly available data sets (colon data and leukemia data) identified 20 highly significant colon cancer genes and 23 highly significant molecular signatures for refining the acute leukemia phenotype, most of which have been verified either by biological experiments or by alternative analysis approaches. Furthermore, the globally optimal gene subsets identified by the novel approach have so far achieved the highest accuracy for classification of colon cancer tissue types. This analysis strategy offers promise for advancing microarray technology as a means of deciphering the genetic complexities of complex diseases.

8.
The relationship between team size and productivity is a question of broad relevance across economics, psychology, and management science. For complex tasks, however, where both the potential benefits and costs of coordinated work increase with the number of workers, neither theoretical arguments nor empirical evidence consistently favor larger vs. smaller teams. Experimental findings, meanwhile, have relied on small groups and highly stylized tasks, hence are hard to generalize to realistic settings. Here we narrow the gap between real-world task complexity and experimental control, reporting results from an online experiment in which 47 teams of size ranging from n = 1 to 32 collaborated on a realistic crisis mapping task. We find that individuals in teams exerted lower overall effort than independent workers, in part by allocating their effort to less demanding (and less productive) sub-tasks; however, we also find that individuals in teams collaborated more with increasing team size. Directly comparing these competing effects, we find that the largest teams outperformed an equivalent number of independent workers, suggesting that gains to collaboration dominated losses to effort. Importantly, these teams also performed comparably to a field deployment of crisis mappers, suggesting that experiments of the type described here can help solve practical problems as well as advancing the science of collective intelligence.

9.
This paper describes and explains design patterns for software that supports how analysts can efficiently inspect and classify camera trap images for wildlife-related ecological attributes. Broadly speaking, a design pattern identifies a commonly occurring problem and a general reusable design approach to solve that problem. A developer can then use that design approach to create a specific software solution appropriate to the particular situation under consideration. In particular, design patterns for camera trap image analysis by wildlife biologists address solutions to commonly occurring problems they face while inspecting a large number of images and entering ecological data describing image attributes. We developed design patterns for image classification based on our understanding of biologists' needs, acquired over 8 years during development and application of the freely available Timelapse image analysis system. For each design pattern presented, we describe the problem, a design approach that solves that problem, and a concrete example of how Timelapse addresses it. Our design patterns offer both general and specific solutions related to maintaining data consistency, efficiency in image inspection, methods for navigating between images, efficiency in data entry (including highly repetitious data entry), and sorting and filtering images into sequences, episodes, and subsets. These design patterns can inform the design of other camera trap systems and can help biologists assess how competing software products address their project-specific needs, along with determining an efficient workflow.

10.
Studies of animal impulsivity generally find steep subjective devaluation, or discounting, of delayed rewards – often on the order of a 50% reduction in value in a few seconds. Because such steep discounting is highly disfavored in evolutionary models of time preference, we hypothesize that discounting tasks provide a poor measure of animals’ true time preferences. One prediction of this hypothesis is that estimates of time preferences based on these tasks will lack external validity, i.e. fail to predict time preferences in other contexts. We examined choices made by four rhesus monkeys in a computerized patch-leaving foraging task interleaved with a standard intertemporal choice task. Monkeys were significantly more patient in the foraging task than in the intertemporal choice task. Patch-leaving behavior was well fit by parameter-free optimal foraging equations but poorly fit by the hyperbolic discount parameter obtained from the intertemporal choice task. Day-to-day variation in time preferences across the two tasks was uncorrelated with each other. These data are consistent with the conjecture that seemingly impulsive behavior in animals is an artifact of their difficulty understanding the structure of intertemporal choice tasks, and support the idea that animals are more efficient rate maximizers in the multi-second range than intertemporal choice tasks would suggest.
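The tension this abstract describes — steep hyperbolic discounting versus long-run rate maximization — can be made concrete with a toy calculation using Mazur's standard hyperbolic form V = A/(1 + kD). The amounts, delays, k value, and time overhead below are illustrative, not values from the study.

```python
def hyperbolic_value(amount, delay, k):
    """Mazur's hyperbolic discount: subjective value = amount / (1 + k * delay)."""
    return amount / (1.0 + k * delay)

def reward_rate(amount, delay, overhead):
    """Rate maximization: reward per unit of total time (delay + travel/inter-trial time)."""
    return amount / (delay + overhead)

# Illustrative options: a small-sooner and a large-later reward.
small_sooner = (2.0, 1.0)   # (amount, delay in seconds)
large_later = (3.0, 6.0)
k = 0.5                     # steep discounting, of the kind often measured in animals
overhead = 12.0             # travel / inter-trial time shared by both options

# A steep hyperbolic discounter picks the small-sooner reward...
impulsive_choice = hyperbolic_value(*small_sooner, k) > hyperbolic_value(*large_later, k)
# ...while a long-run rate maximizer (the foraging-optimal policy) picks large-later.
patient_choice = reward_rate(*large_later, overhead) > reward_rate(*small_sooner, overhead)
```

With these numbers the discounted values are 1.33 vs. 0.75 (small-sooner wins) while the long-run rates are 0.154 vs. 0.167 rewards per second (large-later wins) — the same animal can look impulsive on one measure and patient on the other.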

11.
A protein-protein docking procedure traditionally consists of two successive tasks: a search algorithm generates a large number of candidate conformations mimicking the complex formed in vivo between two proteins, and a scoring function is used to rank them in order to extract a native-like one. We have already shown that, using Voronoi constructions and a well-chosen set of parameters, an accurate scoring function can be designed and optimized. However, to enable large-scale in silico exploration of the interactome, a near-native solution has to be found among the ten best-ranked solutions, which no existing scoring function can yet guarantee. In this work, we introduce a new procedure for conformation ranking. We previously developed a set of scoring functions whose learning was performed using a genetic algorithm; these functions were used to assign a rank to each possible conformation. We now refine this ranking using different classifiers (decision trees, rules, and support vector machines) in a collaborative filtering scheme. The newly obtained scoring function is evaluated using 10-fold cross-validation and compared to the functions obtained using either genetic algorithms or collaborative filtering alone. This new approach was successfully applied to the CAPRI scoring ensembles. We show that for 10 of 12 targets we are able to find a near-native conformation among the 10 best-ranked solutions, and for 6 of them the selected near-native conformation is of high accuracy. Finally, we show that this function dramatically enriches the 100 best-ranked conformations in near-native structures.

12.
The aim of this methods paper is to describe how to implement a neuroimaging technique to examine complementary brain processes engaged by two similar tasks. Participants' behavior during task performance in an fMRI scanner can then be correlated with brain activity using the blood-oxygen-level-dependent signal. We measure behavior so that we can sort correct trials, in which the subject performed the task correctly, and then examine the brain signals related to correct performance. Conversely, if subjects do not perform the task correctly and these error trials are included in the same analysis as the correct trials, we would be introducing trials that do not reflect correct performance. In many cases these errors can themselves be correlated with brain activity. We describe two complementary tasks that are used in our lab to examine the brain during suppression of automatic responses: the Stroop1 and anti-saccade tasks. The emotional Stroop paradigm instructs participants to report either the superimposed emotional 'word' across the affective faces or the facial 'expressions' of the face stimuli1,2. When the word and the facial expression refer to different emotions, a conflict arises between what must be said and what is automatically read. The participant has to resolve the conflict between two simultaneously competing processes: word reading and facial-expression recognition. Our urge to read a word leads to strong 'stimulus-response (SR)' associations; inhibiting these strong SRs is difficult, and participants are prone to making errors. Overcoming this conflict and directing attention away from the face or the word requires the subject to inhibit bottom-up processes, which typically direct attention to the more salient stimulus.
Similarly, in the anti-saccade task3,4,5,6, an instruction cue directs attention to a peripheral stimulus location, but the eye movement must be made to the mirror-opposite position. Again, we measure behavior by recording participants' eye movements, which allows the behavioral responses to be sorted into correct and error trials7 that can then be correlated with brain activity. Neuroimaging thus allows researchers to measure the different behaviors of correct and error trials, which are indicative of different cognitive processes, and to pinpoint the different neural networks involved.

13.
Various authors have suggested similarities between tool use in early hominins and chimpanzees. This has been particularly evident in studies of nut-cracking, which is considered the most complex skill exhibited by wild apes and has also been interpreted as a precursor of more complex stone-flaking abilities. It has been argued that there is no major qualitative difference between what a chimpanzee does when it cracks a nut and what early hominins did when they detached a flake from a core. In this paper, similarities and differences between the skills involved in stone-flaking and nut-cracking are explored through an experimental protocol with human subjects performing both tasks. We suggest that a ‘functional’ approach to percussive action, based on the distinction between functional parameters that characterize each task and parameters that characterize the agent's actions and movements, is a fruitful method for understanding the constraints that must be mastered to perform each task successfully and, subsequently, the nature of the skill involved in both tasks.

14.
Handedness in wild chimpanzees
The debate over nonhuman primate precursors to human handedness is unsettled, mainly due to a lack of data, particularly on apes. Handedness in wild chimpanzees at the Taï National Park, Côte d'Ivoire, has been monitored in four tasks. For the simple unimanual tasks, reaching and grooming, adults use both hands equally (ambidextrous), while for the more complex unimanual wadge-dipping and the complex bimanual nut-cracking, adults are highly lateralized. These results support the hypothesis that lateralization increases with the complexity of the task. The lateralization remains constant for years for each task but may vary within an individual across tasks. For nut-cracking, females are more lateralized than males. The ontogeny of handedness for nut-cracking shows many variations in the tendency to use one hand and in the side preferred until, at about 10 years of age, the individual achieves its adult handedness. No population bias toward one side exists in Taï chimpanzees. No heritability of handedness between mother and offspring was observed. Human and chimpanzee handedness are compared.

15.
Inference by exclusion, the ability to base choices on the systematic exclusion of alternatives, has been studied in many nonhuman species over the past decade. However, the majority of the methodologies employed so far are hard to integrate into a comparative framework, as they rarely control for the effect of neophilia. Here, we present an improved approach that takes neophilia into account, using an abstract two-choice task on a touch screen that is equally feasible for a large variety of species. To test this approach we chose Goffin cockatoos (Cacatua goffini), a highly explorative Indonesian parrot species that has recently been reported to show sophisticated cognitive skills in the technical domain. Our results indicate that Goffin cockatoos are able to solve such abstract two-choice tasks using inference by exclusion, but they also highlight the importance of other response strategies.

16.
When organisms perform a single task, selection leads to phenotypes that maximize performance at that task. When organisms need to perform multiple tasks, a trade-off arises because no phenotype can optimize all tasks. Recent work addressed this question, and assumed that the performance at each task decays with distance in trait space from the best phenotype at that task. Under this assumption, the best-fitness solutions (termed the Pareto front) lie on simple low-dimensional shapes in trait space: line segments, triangles and other polygons. The vertices of these polygons are specialists at a single task. Here, we generalize this finding by considering performance functions of general form, not necessarily functions that decay monotonically with distance from their peak. We find that, except for performance functions with highly eccentric contours, simple shapes in phenotype space are still found, but with mildly curving edges instead of straight ones. In a wide range of systems, complex data on multiple quantitative traits, which might be expected to fill a high-dimensional phenotype space, is predicted instead to collapse onto low-dimensional shapes; phenotypes near the vertices of these shapes are predicted to be specialists, and can thus suggest which tasks may be at play.
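A minimal sketch of the Pareto-front idea in the original distance-decaying setting: with two tasks whose performances fall off with squared distance from each task's best phenotype (its "archetype"), the non-dominated phenotypes are exactly those lying on the segment between the two archetypes. The sample points and archetype positions below are illustrative, not from the paper.

```python
def performances(x, archetypes):
    """Performance at each task decays with squared distance from that task's archetype."""
    return [-((x[0] - a[0]) ** 2 + (x[1] - a[1]) ** 2) for a in archetypes]

def pareto_front(points, archetypes):
    """Keep the points not dominated in all task performances by any other point."""
    perfs = [performances(p, archetypes) for p in points]
    front = []
    for i, pi in enumerate(perfs):
        dominated = any(
            all(qj >= pj for qj, pj in zip(perfs[j], pi)) and perfs[j] != pi
            for j in range(len(points)) if j != i
        )
        if not dominated:
            front.append(points[i])
    return front

archetypes = [(0.0, 0.0), (1.0, 0.0)]            # best phenotype for each of two tasks
points = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0),    # on the segment between the archetypes
          (0.5, 0.8), (2.0, 0.0), (-1.0, 0.0)]   # off the segment or beyond a vertex
front = pareto_front(points, archetypes)
```

Here `front` keeps only the three on-segment points: the vertices (the two task specialists) and the generalist midway between them; the off-segment and beyond-vertex points are dominated.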

17.
Cognitive functions rely on the extensive use of information stored in the brain, and searching for the information relevant to solving a given problem is a very complex task. Human cognition largely relies on biological search engines, and we assume that to study cognitive function we need to understand how these brain search engines work. The approach we favor is to study multi-modular network models able to solve particular problems that involve searching for information. The building blocks of these multimodular networks are the context-dependent memory models we have been using for almost 20 years. These models work by associating an output with the Kronecker product of an input and a context. Input, context, and output are vectors that represent cognitive variables. Our models constitute a natural extension of the traditional linear associator. We show that coding the information in vectors that are processed through association matrices allows for direct contact between these memory models and some procedures that are now classical in the Information Retrieval field. One essential feature of context-dependent models is that they are based on the thematic packing of information, whereby each context points to a particular set of related concepts. The thematic packing can be extended to multimodular networks involving input-output contexts in order to accomplish more complex tasks. Contexts act as passwords that elicit the appropriate memory to deal with a query. We also show toy versions of several ‘neuromimetic’ devices that solve cognitive tasks as diverse as decision making and word-sense disambiguation. The functioning of these multimodular networks can be described as dynamical systems at the level of cognitive variables.
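The core mechanism — associating each output with the Kronecker product of an input and a context, so that the context acts as a password selecting which memory the same input retrieves — can be sketched in a few lines. The vectors below are toy one-hot "cognitive variables" chosen so retrieval is exact; none of the names or dimensions come from the paper.

```python
def kron(u, v):
    """Kronecker product of two vectors, flattened to a single list."""
    return [ui * vj for ui in u for vj in v]

def outer(u, v):
    """Rank-one association matrix u v^T, as a list of rows."""
    return [[ui * vj for vj in v] for ui in u]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Toy cognitive variables: orthonormal one-hot vectors for clarity.
inp = [1.0, 0.0]                                       # the same input in both cases
ctx_work, ctx_home = [1.0, 0.0], [0.0, 1.0]            # two contexts ("passwords")
out_work, out_home = [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]  # distinct outputs

# Context-dependent memory: each output is associated with the Kronecker
# product of the input and its context, all summed into one matrix.
M = mat_add(outer(out_work, kron(inp, ctx_work)),
            outer(out_home, kron(inp, ctx_home)))

# Retrieval: the same input elicits a different output under each context.
recalled_work = mat_vec(M, kron(inp, ctx_work))
recalled_home = mat_vec(M, kron(inp, ctx_home))
```

With orthonormal contexts the cross-terms vanish exactly, which is the "thematic packing" property: each context keys a disjoint slice of the shared association matrix. With a context vector set to all ones, M reduces to a traditional linear associator, which is the sense in which these models extend it.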

18.
Rodents have traditionally been used as a standard animal model in laboratory experiments involving a myriad of sensory, cognitive, and motor tasks. Higher cognitive functions that require precise control over sensorimotor responses, such as decision-making and attentional modulation, however, are typically assessed in nonhuman primates. Despite the richness of primate behavior, which allows multiple variants of these functions to be studied, the rodent model remains an attractive, cost-effective alternative to primate models. Furthermore, the ability to fully automate operant conditioning in rodents adds unique advantages over the labor-intensive training of nonhuman primates for studying a broad range of these complex functions. Here, we introduce a protocol for operantly conditioning rats to perform working memory tasks. During critical epochs of the task, the protocol ensures that the animal's overt movement is minimized by requiring the animal to 'fixate' until a Go cue is delivered, akin to nonhuman primate experimental design. A simple two-alternative forced-choice task is implemented to demonstrate performance. We discuss the application of this paradigm to other tasks.

19.
MOTIVATION: An important challenge in the use of large-scale gene expression data for biological classification arises when the expression dataset being analyzed involves multiple classes. Key issues that must be addressed under such circumstances are the efficient selection of good predictive gene groups from datasets that are inherently 'noisy', and the development of new methodologies that can enhance the successful classification of these complex datasets. METHODS: We have applied genetic algorithms (GAs) to the problem of multi-class prediction. A GA-based gene selection scheme is described that automatically determines the members of a predictive gene group, as well as the optimal group size, that maximizes classification success using a maximum likelihood (MLHD) classification method. RESULTS: The GA/MLHD-based approach achieves higher classification accuracies than other published predictive methods on the same multi-class test dataset. It also permits substantial feature reduction in classifier gene sets without compromising predictive accuracy. We propose that GA-based algorithms may represent a powerful new tool for the analysis and exploration of complex multi-class gene expression data. AVAILABILITY: Supplementary information, data sets and source code are available at http://www.omniarray.com/bioinformatics/GA.

20.
Grid computing uses distributed, interconnected computers and resources collectively to achieve higher-performance computing and resource sharing. Task scheduling is one of the core steps needed to efficiently exploit the capabilities of a Grid environment. Recently, heuristic algorithms have been successfully applied to task scheduling on computational Grids. In this paper, the Gravitational Search Algorithm (GSA), one of the latest population-based metaheuristic algorithms, is used for task scheduling on computational Grids. The proposed method employs GSA to find the best solution with the minimum makespan and flowtime. We compare this approach with the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) methods. The results demonstrate that the benefits of GSA are its speed of convergence and its capability to obtain feasible schedules.
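A compact sketch of how a gravitational search algorithm can be applied to scheduling: agents are continuous points that decode to task-to-machine assignments, better (lower-makespan) agents receive larger masses, and every agent is pulled toward the heavier ones while the gravitational constant decays. This is a simplified single-objective (makespan-only) variant with illustrative parameters, not the paper's implementation.

```python
import math
import random

def makespan(position, task_times, n_machines):
    """Decode a continuous position into a task-to-machine assignment; return its makespan."""
    loads = [0.0] * n_machines
    for x, t in zip(position, task_times):
        loads[int(x) % n_machines] += t
    return max(loads)

def gsa_schedule(task_times, n_machines, n_agents=15, iters=80, seed=0):
    """Minimize makespan with a basic gravitational search over agent positions."""
    rng = random.Random(seed)
    dim = len(task_times)
    X = [[rng.uniform(0, n_machines) for _ in range(dim)] for _ in range(n_agents)]
    V = [[0.0] * dim for _ in range(n_agents)]
    best, best_fit = None, float("inf")
    for it in range(iters):
        fits = [makespan(x, task_times, n_machines) for x in X]
        for x, f in zip(X, fits):
            if f < best_fit:
                best, best_fit = list(x), f
        worst, least = max(fits), min(fits)
        # Better (lower-makespan) agents get larger normalized masses.
        raw = [(worst - f) / (worst - least + 1e-12) + 1e-12 for f in fits]
        total = sum(raw)
        M = [r / total for r in raw]
        G = 10.0 * math.exp(-5.0 * it / iters)   # gravitational constant decays over time
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                dist = math.dist(X[i], X[j]) + 1e-9
                for d in range(dim):
                    acc[d] += rng.random() * G * M[j] * (X[j][d] - X[i][d]) / dist
            for d in range(dim):
                V[i][d] = rng.random() * V[i][d] + acc[d]
                X[i][d] = min(max(X[i][d] + V[i][d], 0.0), n_machines - 1e-9)
    return best, best_fit
```

For example, `gsa_schedule([4, 3, 2, 7, 5, 1], 2)` searches for a two-machine schedule whose makespan approaches the optimum of 11 for that instance; flowtime could be added as a second fitness term in the same way.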
