Similar Documents
20 similar documents found (search time: 15 ms).
1.
Evolutionary game theory is the basis of replicator systems and has applications ranging from animal behavior and human language to ecosystems and other hierarchical network systems. Most studies in evolutionary game dynamics have focused on a single game, but in many situations several games are played simultaneously. We construct a replicator equation with plural games by assuming that a player's reward is a simple summation of the rewards from each game. Even if the games have different numbers of strategies, their combined dynamics can be described by a single replicator equation. Here we show that when players play several games at the same time, the fate of a single game cannot be determined without knowing the structure of all the other games. Most strikingly, even if a single game has an ESS (evolutionarily stable strategy), the relative frequencies of strategies in that game do not always converge to the ESS point when other games are played simultaneously.
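As an illustration (the notation below is ours, not necessarily the paper's), consider two games with payoff matrices $A$ and $B$ played simultaneously. If a player's type is the pair $(i,j)$ of strategies used in the two games and rewards simply add, the combined dynamics can be written as a single replicator equation:

$$\dot{x}_{ij} = x_{ij}\Big[(Ay)_i + (Bz)_j - \bar{\pi}\Big], \qquad y_i = \sum_j x_{ij}, \quad z_j = \sum_i x_{ij}, \quad \bar{\pi} = \sum_{k,l} x_{kl}\big[(Ay)_k + (Bz)_l\big].$$

The marginal dynamics of either game are coupled to the other through the joint frequencies $x_{ij}$, which is why an ESS of one game in isolation need not attract the combined system.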

2.
The complexity and requirements of web applications are increasing to meet more sophisticated business models (web services and cloud computing, for instance). For this reason, characteristics such as performance, scalability, and security are addressed in web server cluster design. Because of rising energy costs and environmental concerns, energy consumption in this type of system has become a major issue. This paper presents energy-reduction techniques that combine a load forecasting method with DVFS (Dynamic Voltage and Frequency Scaling) and dynamic configuration (turning servers on and off) in a soft real-time web server cluster. Our system reduces energy consumption while keeping users satisfied with respect to request deadlines being met. The results show that prediction capabilities increase the QoS (Quality of Service) of the system while maintaining or improving the energy savings of state-of-the-art power management mechanisms. To validate this predictive policy, a web application running a real workload profile was deployed on an Apache server cluster testbed running Linux.
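A minimal sketch of this kind of predictive policy follows; the forecasting model, headroom factor, capacity figures, and frequency levels are illustrative assumptions, not the authors' implementation.

```python
# Sketch: forecast next-interval load, then pick how many servers to keep on
# and at which DVFS frequency, so that the forecast load fits within capacity.

FREQS_GHZ = [1.0, 1.5, 2.0, 2.5]       # assumed available DVFS levels
REQS_PER_GHZ = 100.0                   # assumed per-server capacity per GHz
HEADROOM = 1.2                         # spare capacity to protect soft deadlines

def forecast(history, window=5):
    """Toy load forecast: moving average of recent request rates."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def plan(history, n_servers):
    """Return (servers_on, frequency) able to serve the forecast load."""
    load = forecast(history) * HEADROOM
    for servers_on in range(1, n_servers + 1):
        for f in FREQS_GHZ:                       # fewest servers, lowest frequency
            if servers_on * f * REQS_PER_GHZ >= load:
                return servers_on, f
    return n_servers, FREQS_GHZ[-1]               # saturated: everything on, max freq

if __name__ == "__main__":
    history = [220, 260, 300, 340, 380]           # requests/s in past intervals
    print(plan(history, n_servers=8))             # e.g. (2, 2.0)
```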

3.
Server scalability is more important than ever in today's client/server-dominated network environments. Recently, researchers have begun to consider cluster-based computers using commodity hardware as an alternative to expensive specialized hardware for building scalable Web servers. In this paper, we present performance results comparing two cluster-based Web servers based on different server architectures: OSI layer-two dispatching (LSMAC) and OSI layer-three dispatching (LSNAT). Both cluster-based server systems were implemented as application-space programs running on commodity hardware, in contrast to other, similar solutions that require specialized hardware or software. We point out the advantages and disadvantages of both systems. We also identify when servers should be clustered and when clustering will not improve performance. This revised version was published online in July 2006 with corrections to the Cover Date.
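A toy contrast of the two dispatching styles compared in the paper; the real systems operate on live network traffic, and the field names and addresses below are illustrative only.

```python
from dataclasses import dataclass, replace
from itertools import cycle

@dataclass
class Packet:
    dst_mac: str     # Ethernet destination
    dst_ip: str      # IP destination
    dst_port: int

BACKENDS = [
    {"mac": "aa:aa:aa:aa:aa:01", "ip": "10.0.0.1"},
    {"mac": "aa:aa:aa:aa:aa:02", "ip": "10.0.0.2"},
]
rr = cycle(BACKENDS)   # simple round-robin server selection

def dispatch_layer2(pkt: Packet) -> Packet:
    """Layer-two style: rewrite only the destination MAC; the IP header is untouched
    because each backend is configured with the shared cluster IP."""
    backend = next(rr)
    return replace(pkt, dst_mac=backend["mac"])

def dispatch_layer3(pkt: Packet) -> Packet:
    """Layer-three (NAT) style: rewrite the destination IP; replies are typically
    translated back on the return path through the dispatcher."""
    backend = next(rr)
    return replace(pkt, dst_ip=backend["ip"])

if __name__ == "__main__":
    client_pkt = Packet("aa:aa:aa:aa:aa:00", "192.0.2.10", 80)  # cluster address
    print(dispatch_layer2(client_pkt))
    print(dispatch_layer3(client_pkt))
```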

4.
SUMMARY: The Structure Prediction Meta Server offers a convenient way for biologists to use the various high-quality structure prediction servers available worldwide. The meta server translates the results obtained from remote services into a uniform format, which is then used to request a jury prediction from the remote consensus server Pcons. AVAILABILITY: The structure prediction meta server is freely available at http://BioInfo.PL/meta/; however, some remote servers have restrictions for non-academic users, which the meta server respects. SUPPLEMENTARY INFORMATION: Results of several sessions of the CAFASP and LiveBench programs for assessing the performance of fold-recognition servers, carried out via the meta server, are available at http://BioInfo.PL/services.html.
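The meta-server pattern described above can be sketched as follows; the server names, raw output fields, and the naive voting step are hypothetical stand-ins (the real jury prediction is produced by Pcons).

```python
# Sketch of the meta-server pattern: collect predictions from several remote
# services, normalize them to one record format, then ask a consensus ("jury")
# step to rank them. Server names, fields, and scoring are hypothetical.

from dataclasses import dataclass
from typing import List

@dataclass
class Prediction:          # uniform internal format
    server: str
    template: str          # identifier of the proposed template/fold
    score: float           # server-specific confidence, rescaled to 0..1

def normalize(server: str, raw: dict) -> Prediction:
    """Translate one server's raw output into the uniform record."""
    return Prediction(server=server,
                      template=raw["hit"],
                      score=min(1.0, raw["confidence"] / raw["max_confidence"]))

def jury(predictions: List[Prediction]) -> str:
    """Naive consensus: the template proposed most often (weighted by score) wins --
    a stand-in for what a real consensus server such as Pcons does."""
    votes = {}
    for p in predictions:
        votes[p.template] = votes.get(p.template, 0.0) + p.score
    return max(votes, key=votes.get)

if __name__ == "__main__":
    raw_results = {
        "serverA": {"hit": "1abc", "confidence": 8.0, "max_confidence": 10.0},
        "serverB": {"hit": "1abc", "confidence": 40.0, "max_confidence": 100.0},
        "serverC": {"hit": "2xyz", "confidence": 9.0, "max_confidence": 10.0},
    }
    preds = [normalize(s, r) for s, r in raw_results.items()]
    print(jury(preds))   # "1abc" (0.8 + 0.4 outweighs 0.9)
```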

5.
An Internet hosting center hosts services on its server ensemble. The center must allocate servers dynamically amongst services to maximize revenue earned from hosting fees. The finite server ensemble, unpredictable request arrival behavior and server reallocation cost make server allocation optimization difficult. Server allocation closely resembles honeybee forager allocation amongst flower patches to optimize nectar influx. The resemblance inspires a honeybee biomimetic algorithm. This paper describes details of the honeybee self-organizing model in terms of information flow and feedback, analyzes the homology between the two problems and derives the resulting biomimetic algorithm for hosting centers. The algorithm is assessed for effectiveness and adaptiveness by comparative testing against benchmark and conventional algorithms. Computational results indicate that the new algorithm is highly adaptive to widely varying external environments and quite competitive against benchmark assessment algorithms. Other swarm intelligence applications are briefly surveyed, and some general speculations are offered regarding their various degrees of success.
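The abstract does not give the algorithm itself, so the sketch below is only one plausible reading of the forager/waggle-dance analogy: each hosted service advertises in proportion to its recent profitability, and servers occasionally reallocate toward stronger advertisements. All parameters are illustrative.

```python
import random

# One plausible reading of the forager/"waggle dance" analogy (see note above);
# not the paper's algorithm, and all numbers are illustrative.

random.seed(1)

def reallocate(alloc, revenue_per_request, queue_len, p_switch=0.1):
    """alloc: dict service -> number of servers; returns an updated allocation."""
    # "Dance floor": advertisement strength = a simple recent-profitability signal.
    advert = {s: revenue_per_request[s] * queue_len[s] for s in alloc}
    if sum(advert.values()) == 0:
        return dict(alloc)                        # nothing worth advertising
    services, weights = zip(*advert.items())
    new_alloc = dict.fromkeys(alloc, 0)
    for service, count in alloc.items():
        for _ in range(count):
            target = service
            if random.random() < p_switch:        # a "forager" reconsiders its patch
                target = random.choices(services, weights=weights)[0]
            new_alloc[target] += 1
    return new_alloc

if __name__ == "__main__":
    print(reallocate({"gold": 5, "silver": 5},
                     revenue_per_request={"gold": 1.0, "silver": 0.2},
                     queue_len={"gold": 40, "silver": 10}))
```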

6.
A simple distributed processing system named "Peach" was developed to meet the rising computational demands of modern structural biology (and other) laboratories without additional expense, by using existing hardware resources more efficiently. A central server distributes jobs to idle workstations in such a way that each computer is used maximally but intermittent interactive users are not disturbed. Compared with other distributed systems, Peach is simple, easy to install, easy to administer, easy to use, scalable, and robust. While it was designed to queue and distribute large numbers of small tasks to participating computers, it can also be used to send single jobs automatically to the fastest currently available computer or to survey the activity of an entire laboratory's computers. Tests of robustness and scalability are reported, as are three specific electron cryomicroscopy applications where Peach enabled projects that would otherwise have required an expensive, dedicated cluster.
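A minimal sketch of the central-dispatcher idea described above; this is not Peach's actual protocol, and the load threshold and job format are assumptions.

```python
import queue

# Sketch: jobs queue at a central server and are handed only to workstations
# that are currently idle (no interactive user, low load average).

IDLE_LOAD = 0.2   # assumed load average below which a host counts as "idle"

class Dispatcher:
    def __init__(self):
        self.jobs = queue.Queue()

    def submit(self, cmd):
        self.jobs.put(cmd)

    def poll(self, host, load_avg, interactive_user):
        """Called periodically by each workstation; returns a job or None."""
        if interactive_user or load_avg > IDLE_LOAD or self.jobs.empty():
            return None           # leave the machine to its user
        return self.jobs.get()

if __name__ == "__main__":
    d = Dispatcher()
    for i in range(3):
        d.submit(f"process_image frame_{i}.mrc")
    print(d.poll("ws01", load_avg=0.05, interactive_user=False))  # gets a job
    print(d.poll("ws02", load_avg=1.30, interactive_user=False))  # None: busy
    print(d.poll("ws03", load_avg=0.01, interactive_user=True))   # None: in use
```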

7.
Understanding human institutions, animal cultures and other social systems requires flexible formalisms that describe how their members change them from within. We introduce a framework for modelling how agents change the games they participate in. We contrast this between-game ‘institutional evolution’ with the more familiar within-game ‘behavioural evolution’. We model institutional change by following small numbers of persistent agents as they select and play a changing series of games. Starting from an initial game, a group of agents trace trajectories through game space by navigating to increasingly preferable games until they converge on ‘attractor’ games. Agents use their ‘institutional preferences’ for game features (such as stability, fairness and efficiency) to choose between neighbouring games. We use this framework to pose a pressing question: what kinds of games does institutional evolution select for, and what is in the attractors? After computing institutional change trajectories over the two-player space, we find that attractors have disproportionately fair outcomes, even though the agents who produce them are strictly self-interested and indifferent to fairness. This seems to occur because game fairness co-occurs with the self-serving features these agents do actually prefer. We thus present institutional evolution as a mechanism for encouraging the spontaneous emergence of cooperation among small groups of inherently selfish agents, without spatial structure, reputation, repetition, or other more familiar mechanisms. Game space trajectories provide a flexible, testable formalism for modelling the interdependencies of behavioural and institutional evolutionary processes, as well as a mechanism for the evolution of cooperation.
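A stylized sketch of the trajectory idea: a 2x2 game is a point in game space, agents score neighbouring games with a preference function, and they move until no neighbour is preferred (an attractor). The neighbourhood and the particular preference score below are simplifications of ours, not the paper's.

```python
import itertools

# Stylized "institutional evolution" as hill-climbing through a space of 2x2 games.

def neighbours(game):
    """Games reachable by nudging one payoff entry by +/-1 (payoffs kept in 0..3)."""
    for idx, delta in itertools.product(range(4), (-1, 1)):
        g = list(game)
        g[idx] += delta
        if 0 <= g[idx] <= 3:
            yield tuple(g)

def preference(game):
    """Toy 'institutional preference': agents like high payoffs (efficiency)
    and a low temptation to defect (a crude stability proxy)."""
    r, s, t, p = game            # reward, sucker, temptation, punishment
    return (r + p) - max(0, t - r)

def evolve(game):
    while True:
        best = max(neighbours(game), key=preference)
        if preference(best) <= preference(game):
            return game          # attractor reached
        game = best

if __name__ == "__main__":
    pd = (2, 0, 3, 1)            # Prisoner's Dilemma ordering t > r > p > s
    print(evolve(pd))            # climbs to a game where temptation no longer exceeds reward
```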

8.
To secure interactive multimedia applications in wireless LANs (WLANs), it is pertinent to implement real-time cryptographic services. In this paper we evaluate the use of software-based encryption algorithms implemented in the layered service provider as defined by WinSock 2 for Windows 95/NT. Our measurements show that software implementations of various encryptors can sustain the throughput requirements of interactive multimedia applications for WLANs, such as telephone-quality audio, video conferencing, and MPEG video. We present a design methodology that includes guidelines for secure multimedia system design in terms of the encryption method chosen as a function of the required application throughput, system configuration, protocol-layer overhead, and wireless LAN throughput. This revised version was published online in July 2006 with corrections to the Cover Date.

9.
Given the cost of memories and the very large storage and bandwidth requirements of large-scale multimedia databases, hierarchical storage servers (which consist of disk-based secondary storage and tape-library-based tertiary storage) are becoming increasingly popular. Such server applications rely upon tape libraries to store all media, exploiting their excellent storage capacity and cost-per-MB characteristics. They also rely upon disk arrays, exploiting their high bandwidth, to satisfy a very large number of requests. Given typical access patterns and server configurations, the tape drives are fully utilized uploading data for requests that fall through to the tertiary level. Such upload operations consume significant secondary-storage device and bus bandwidth. In addition, with present technology (and trends) the disk array can serve fewer requests to continuous objects than it can store, mainly due to I/O and/or backplane bus bandwidth limitations. In this work we address comprehensively the performance of these hierarchical continuous-media storage servers by looking at all three main system resources: the tape drive bandwidth, the secondary-storage bandwidth, and the host's RAM. We provide techniques which, while fully utilizing the tape drive bandwidth (an expensive resource), introduce bandwidth savings that allow the secondary storage devices to serve more requests, without increasing demands on the host's RAM space. Specifically, we consider the issue of elevating continuous data from its permanent place in tertiary storage for display purposes. We develop algorithms for sharing the responsibility for playback between the secondary and tertiary devices and for placing the blocks of continuous objects on tapes, and show how they achieve the above goals. We study these issues for different commercial tape library products with different bandwidths and tape capacities, and in environments with and without the multiplexing of tape libraries.

10.
This paper describes a novel technique for establishing a virtual file system that allows data to be transferred user-transparently and on demand across the computing and storage servers of a computational grid. Its implementation is based on extensions to the Network File System (NFS) that are encapsulated in software proxies. A key differentiator between this approach and previous work is the way in which file servers are partitioned: while conventional file systems share a single (logical) server across multiple users, the virtual file system employs multiple proxy servers that are created, customized, and terminated dynamically, for the duration of a computing session, on a per-user basis. Furthermore, the solution does not require modifications to standard NFS clients and servers. The described approach has been deployed in the context of the PUNCH network-computing infrastructure, and is unique in its ability to integrate unmodified, interactive applications (even commercial ones) and existing computing infrastructure into a network computing environment. Experimental results show that: (1) the virtual file system performs well in comparison to native NFS in a local-area setup, with mean overheads of 1% and 18% for single-client execution of the Andrew benchmark in two representative computing environments; (2) the average overhead for eight clients can be reduced to within 1% of native NFS with the use of concurrent proxies; (3) the wide-area performance is within 1% of the local-area performance for a typical compute-intensive PUNCH application (SimpleScalar), while for the I/O-intensive application Andrew the wide-area performance is 5.5 times worse than the local-area performance.

11.
Harrison F, El Mouden C. PLoS ONE, 2011, 6(11): e27623
In recent years, significant advances have been made in understanding the adaptive (ultimate) and mechanistic (proximate) explanations for the evolution and maintenance of cooperation. Studies of cooperative behaviour in humans invariably use economic games. These games have provided important insights into the mechanisms that maintain economic and social cooperation in our species. However, they usually rely on the division of monetary tokens that are given to participants by the investigator. The extent to which behaviour in such games reflects behaviour in the real world of biological markets, where money must be earned and behavioural strategies incur real costs and benefits, is unclear. To provide new data on the potential scale of this problem, we investigated whether people behaved differently in two standard economic games (a public goods game and a dictator game) when they had to earn their monetary endowments through the completion of dull or physically demanding tasks, compared with simply being given the endowment. The requirement for endowments to be 'earned' through labour did not affect behaviour in the dictator game. However, the requirement to complete a dull task reduced cooperation in the public goods game among the subset of participants who were not familiar with game theory. There has been some effort to test whether the conclusions drawn from standard, token-based cooperation games adequately reflect cooperative behaviour 'in the wild'. However, given the almost total reliance on such games to study cooperation, more exploration of this issue would be welcome. Our data are not unduly worrying, but they do suggest that further exploration is needed if we are to make general inferences about human behaviour from the results of structured economic games.
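For reference, in a standard linear public goods game (our notation, not the paper's) each of $N$ players receives an endowment $e$, contributes $c_i \le e$ to a common pool, and earns

$$\pi_i = e - c_i + \frac{m}{N}\sum_{j=1}^{N} c_j, \qquad 1 < m < N,$$

so each contributed unit returns only $m/N < 1$ to the contributor but benefits the group, which is why contributions are read as a measure of cooperation.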

12.

Purpose

The purpose of this work was to develop an indicator framework for the environmental sustainability benchmarking of products produced by the metallurgical industry. Sustainability differentiation has become an important issue for companies throughout the value chain. Differentiation is sometimes not attainable because of the use of average data, a lack of comparative data, certain issues being overshadowed by others, and the very narrow palette of indicators that dominates current sustainability assessments. There is a need for detailed and credible analyses that show the current status and point out where improvements can be made. The indicator framework is developed to give a comprehensive picture of eco-efficiency and to provide methods that enable relevant comparisons, as well as tools for communicating the results. In this way, the methodology presented in this study aims to make differentiation easier and thus help companies drive development toward more sustainable solutions.

Methods

The framework is based on the existing indicator framework Gaia Biorefiner, which is primarily intended for bio-based products. In this work, the framework was further developed for application in the metallurgical industry. The indicator framework is built by first identifying the issues that are critical to the environment and to today's global challenges, and on which the activities of the metallurgical industry may have an impact. Based on these issues, suitable indicators are chosen if they exist and built if they do not. The idea is that all indicators in a group form a whole, showing areas of innovation while refraining from aggregation and weighting, which often compromise a comprehensive and objective view. Both qualitative and quantitative indicators are included. The indicators are constructed following the criteria set by the EU and the OECD for building indicators. Each indicator also has a benchmark, and the rules for building the benchmark are connected to the indicators. Suitable data sources and criteria for the benchmark and the indicators are gathered from the literature, publicly available databases, and commercial LCA software. The use of simulation tools for attaining more reliable data is also studied.

Results and discussion

The result is a visual framework consisting of ten indicator groups with one to five indicators each, for a total of 31 indicators. These are visualized in a sustainability indicator “flower.” The flower can be further opened up to study each indicator and the reasons behind the results. The sustainability benchmark follows a methodology based on the use of baseline data and sustainability criteria or limits. A simulation approach was included in the methodology to address the problems of data scarcity and data reliability. The status of the environment, current production technologies, location-specific issues, and process-specific issues all affect the result, and the aim of finding relevant comparisons that support sustainability differentiation is addressed by a scalable scoping system.

Conclusions

A new framework and its concise visualization have been built for assessing the eco-efficiency of products from the metallurgical industry, in a way that aims to meet the needs of the industry. Since there is a baseline against which each indicator can be benchmarked, a sustainability indicator “flower” can be derived; this is one of the key innovations of the methodology. The approach goes beyond the usual quantification, as it is also scalable and linked to technology and its fundamental parameters. Part 2, “A case study from the copper industry,” tests and illustrates the methodology.

13.
Novotny M, Madsen D, Kleywegt GJ. Proteins, 2004, 54(2): 260-270
When a new protein structure has been determined, comparison with the database of known structures enables classification of its fold as new or as belonging to a known class of proteins. This in turn may provide clues about the function of the protein. A large number of fold comparison programs have been developed, but they have never been subjected to a comprehensive and critical comparative analysis. Here we describe an evaluation of 11 publicly available, web-based servers for automatic fold comparison. Both their functionality (e.g., user interface, presentation, and annotation of results) and their performance (i.e., how well they recognize established structural similarities) were assessed. The servers were subjected to a battery of performance tests covering a broad spectrum of folds as well as special cases, such as multidomain proteins, Cα-only models, new folds, and NMR-based models. The CATH structural classification system was used as a reference. These tests revealed the strong and weak sides of each server. On the whole, CE, DALI, MATRAS, and VAST showed the best performance, but none of the servers achieved a 100% success rate. Where no structurally similar proteins are found by any individual server, it is recommended to try one or two other servers before any conclusions concerning the novelty of a fold are put on paper.

14.
Revision bingo     
Bingo can be a versatile and engaging tool for spicing up end-of-module revision and other contexts in which course content is being reviewed. Instead of numbers being called out, students are given a verbal clue that fits one of the answers on their playing grid. Students who get five correct answers in any straight line (including either of the major diagonals) win the game. The interactive and light-hearted medium of a bingo game can provide motivation for study and enhance student learning. Protein revision bingo is included as an example.
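A toy check of the winning rule described above (five marked squares in any straight line on a 5x5 card, including either major diagonal):

```python
def has_bingo(marked):
    """marked: 5x5 list of booleans, True where the clue was answered correctly."""
    n = 5
    rows = any(all(marked[r][c] for c in range(n)) for r in range(n))
    cols = any(all(marked[r][c] for r in range(n)) for c in range(n))
    diag = all(marked[i][i] for i in range(n))
    anti = all(marked[i][n - 1 - i] for i in range(n))
    return rows or cols or diag or anti

if __name__ == "__main__":
    card = [[False] * 5 for _ in range(5)]
    for i in range(5):
        card[i][i] = True          # correct answers along the main diagonal
    print(has_bingo(card))         # True
```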

15.
In this article, we present a game-theory-based framework, named games network, for modeling biological interactions. After introducing the theory, we describe more precisely the methodology for modeling biological interactions. We then apply it to the plasminogen activator system (PAs), which is a signal transduction pathway involved in cancer cell migration. The games network theory extends game theory by including the locality of interactions. Each game in a games network represents local interactions between biological agents. The PAs system is implicated in cytoskeleton modifications via regulation of actin and microtubules, which in turn favors cell migration. The games network model has given us a better understanding of the regulation involved in the PAs system.

16.
Chappell JM, Iqbal A, Abbott D. PLoS ONE, 2012, 7(1): e29015
The framework for playing quantum games in an Einstein-Podolsky-Rosen (EPR) type setting is investigated using the mathematical formalism of geometric algebra (GA). The main advantage of this framework is that the players' strategy sets remain identical to the ones in the classical mixed-strategy version of the game, and hence the quantum game becomes a proper extension of the classical game, avoiding a criticism of other quantum game frameworks. We produce a general solution for two-player games, and as examples, we analyze the games of Prisoners' Dilemma and Stag Hunt in the EPR setting. The use of GA allows a quantum-mechanical analysis without the use of complex numbers or the Dirac bra-ket notation, and hence is more accessible to the non-physicist.

17.
Development of high-performance distributed applications, called metaapplications, is extremely challenging because of their complex runtime environment coupled with their requirements for high performance and Quality of Service (QoS). Such applications typically run on a set of heterogeneous machines with dynamically varying loads, connected by heterogeneous networks possibly supporting a wide variety of communication protocols. In spite of the size and complexity of such applications, they must provide the high performance and QoS mandated by their users. To achieve the goal of high performance, they need to utilize their computational and communication resources adaptively. Apart from the requirements of adaptive resource utilization, such applications have a third kind of requirement related to remote-access QoS. Different clients, although accessing a single server resource, may have differing QoS requirements for their remote connections. A single server resource may also need to provide different QoS for different clients, depending on issues such as the amount of trust between the server and a given client. These QoS requirements can be encapsulated under the abstraction of remote access capabilities. Metaapplications need to address all three of the above requirements in order to achieve high performance and satisfy user expectations of QoS. This paper presents Open HPC++, a programming environment for high-performance applications running in a complex and heterogeneous run-time environment. Open HPC++ provides application-level tools and mechanisms to satisfy application requirements of adaptive resource utilization and remote access capabilities. Open HPC++ is designed along the lines of CORBA and uses an Object Request Broker (ORB) to support seamless communication between distributed application components. To provide adaptive utilization of communication resources, it uses the principle of open implementation to open up the communication mechanisms of its ORB. By virtue of its open architecture, the ORB supports multiple, possibly custom, communication protocols, along with automatic and user-controlled protocol selection at run time. An extension of the same mechanism is used to support the concept of remote access capabilities. To support adaptive utilization of computational resources, Open HPC++ also provides a flexible yet powerful set of load-balancing mechanisms that can be used to implement custom load-balancing strategies. The paper also presents performance evaluations of Open HPC++ adaptivity and load-balancing mechanisms. This revised version was published online in July 2006 with corrections to the Cover Date.

18.
A number of biological data resources (i.e., databases and data-analysis tools) are searchable and usable online thanks to the Internet and World Wide Web (WWW) servers. The output of a web server is easy for a human to browse. However, it is laborious and sometimes impossible to write a computer program that finds a useful data resource, sends a proper query, and processes the output. This is a serious obstacle to the integration of distributed, heterogeneous data resources. To solve this issue, we have implemented a SOAP (Simple Object Access Protocol) server and web services that provide a program-friendly interface. The web services are accessible at http://www.xml.nig.ac.jp/.

19.

Background

Phylogenetic trees are complex data forms that need to be graphically displayed to be human-readable. Traditional techniques of plotting phylogenetic trees focus on rendering a single static image, but increases in the production of biological data and large-scale analyses demand scalable, browsable, and interactive trees.

Methodology/Principal Findings

We introduce TreeVector, a Scalable Vector Graphics (SVG)- and Java-based method that allows trees to be integrated and viewed seamlessly in standard web browsers with no extra software required, and to be modified and linked using standard web technologies. There are now many bioinformatics servers and databases with a range of dynamic processes and updates to cope with the increasing volume of data. TreeVector is designed as a framework to integrate with these processes and produce user-customized phylogenies automatically. We also address the strengths of phylogenetic trees as part of a linked-in browsing process rather than as an end graphic for print.

Conclusions/Significance

TreeVector is fast and easy to use and is available to download precompiled, but it is also open source. It can also be run from the web server listed below or from the user's own web server. It has already been deployed on two recognized and widely used database web sites.
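TreeVector itself is Java- and SVG-based; as a language-neutral illustration of the underlying idea (serializing a tree directly to SVG elements rather than rendering a raster image), here is a minimal sketch with a hard-coded toy tree. It is not TreeVector code.

```python
# Minimal illustration of writing a tree as SVG text: leaves get evenly spaced
# y positions, internal nodes sit at the mean of their children, and each edge
# becomes a <line> element.

def layout(node, depth=0, leaves=None, pos=None):
    """node: (name, [children]); returns {id(node): (x, y, name)} positions."""
    leaves = [] if leaves is None else leaves
    pos = {} if pos is None else pos
    name, children = node
    if not children:
        y = 30 + 25 * len(leaves)
        leaves.append(name)
    else:
        ys = [layout(c, depth + 1, leaves, pos)[id(c)][1] for c in children]
        y = sum(ys) / len(ys)
    pos[id(node)] = (30 + 60 * depth, y, name)
    return pos

def to_svg(tree):
    pos = layout(tree)
    parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="320" height="160">']
    def walk(node):
        x, y, name = pos[id(node)]
        for child in node[1]:
            cx, cy, _ = pos[id(child)]
            parts.append(f'<line x1="{x}" y1="{y}" x2="{cx}" y2="{cy}" stroke="black"/>')
            walk(child)
        if not node[1]:
            parts.append(f'<text x="{x + 4}" y="{y + 4}">{name}</text>')
    walk(tree)
    parts.append("</svg>")
    return "\n".join(parts)

if __name__ == "__main__":
    tree = ("root", [("A", []), ("anc", [("B", []), ("C", [])])])
    print(to_svg(tree))   # paste into a .svg file and open it in a browser
```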

20.
The Wii Fit is a form of interactive gaming designed to elicit health and fitness benefits to replace sedentary gaming. This study was designed to determine the effectiveness of Wii Fit fitness games. The purpose of the study was to determine the %VO2max and energy expenditure elicited by different Wii Fit games at different levels, including the step and hula games. Eight healthy young women completed a preliminary trial to determine VO2max and later played the Wii Fit during two separate, counterbalanced trials. During each session, subjects played Wii Fit game levels for 10 minutes per level. One session involved beginning and intermediate hula, and the other session involved beginning and intermediate step. VO2 was measured continuously via metabolic cart, and rating of perceived exertion (RPE) was assessed at the end of each game level. The lowest %VO2max, kcal/min, and RPE occurred during the beginning step game, and the highest values occurred during the intermediate hula game. The respiratory exchange ratio was significantly higher in the intermediate hula game than in the beginning hula game but was not significantly different between step game levels. The intermediate hula and step games produced the greatest energy expenditure, equivalent to walking at a speed of >5.63 km/h (>3.5 miles/h). This is the first study to determine the percentage of VO2max and caloric expenditure elicited by different Wii Fit video games at different game levels in adults. The findings suggest that the Wii Fit can be used as an effective activity for promoting physical health in this population.
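For orientation, energy expenditure and relative intensity follow from measured oxygen uptake via the usual approximation of roughly 5 kcal per litre of O2; the numbers below are illustrative, not values reported in the study.

$$\text{EE} \approx \dot{V}\mathrm{O}_2 \times 5\ \tfrac{\text{kcal}}{\text{L}}, \qquad \%\dot{V}\mathrm{O}_2\text{max} = \frac{\dot{V}\mathrm{O}_2}{\dot{V}\mathrm{O}_2\text{max}} \times 100.$$

For example, $\dot{V}\mathrm{O}_2 = 1.2$ L·min$^{-1}$ gives EE $\approx 6$ kcal·min$^{-1}$, and with $\dot{V}\mathrm{O}_2\text{max} = 2.4$ L·min$^{-1}$ that corresponds to 50% of $\dot{V}\mathrm{O}_2$max.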
