Similar Articles
20 similar articles found.
1.
Quantum phase estimation is one of the key algorithms in the field of quantum computing, but until now only approximate expressions have been derived for the probability of error. We revisit these derivations and find that, by ensuring symmetry in the error definitions, an exact formula can be found. This approach may also have value in solving other related problems in quantum computing where an expected error is calculated. Expressions for two special cases of the formula are also developed: in the limit as the number of qubits in the quantum computer approaches infinity, and in the limit as the number of extra qubits added to improve reliability goes to infinity. The formula is useful in validating computer simulations of the phase estimation procedure and in avoiding overestimation of the number of qubits required to achieve a given reliability, and thus brings improved precision to the design of quantum computers.
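The exact formula itself is not reproduced in the abstract. For context, here is a minimal sketch of the standard approximate bound that such work refines (the textbook result for phase estimation with t = n + p counting qubits; the function names are illustrative):

```python
import math

def extra_qubits(epsilon: float) -> int:
    """Extra counting qubits p needed so that textbook QPE yields the best
    n-bit phase estimate with failure probability at most epsilon, via
    p = ceil(log2(2 + 1/(2*epsilon)))."""
    return math.ceil(math.log2(2.0 + 1.0 / (2.0 * epsilon)))

def error_bound(p: int) -> float:
    """Standard upper bound P(error) <= 1 / (2*(2**p - 2)), valid for p >= 2."""
    return 1.0 / (2.0 * (2 ** p - 2))

for eps in (0.1, 0.01, 0.001):
    p = extra_qubits(eps)
    print(f"eps = {eps}: p = {p} extra qubits, bound = {error_bound(p):.4g}")
```

Because this bound is only an upper limit, designs based on it can overprovision qubits; an exact error formula tightens exactly this step.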

2.
Industrial ecologists study phenomena at several distinct scales, and linking the resulting insights could advance the field. The disciplines of ecology and economics have each attempted, with partial success, to accomplish this by building a behavioral micro foundation, and industrial ecology should do the same. These fields all study evolving systems made up of autonomous individuals who operate in a largely self-interested manner, exhibit diverse behaviors, and self-organize many higher-level structures, such as communities or sectors, in a bottom-up fashion. Industrial ecologists should explicitly attempt to integrate empirical and normative views about agency, and more carefully distinguish between two types of agents: firms and individual humans.

3.
A cluster is a group of computers, connected to each other, that acts as a single system to provide users with computing resources; each computer is a node of the cluster. With the rapid development of computer technology, cluster computing, with its high performance-to-cost ratio, has been widely applied in distributed parallel computing. For the large-scale data of a group enterprise, a heterogeneous data integration model was built in a cluster environment based on cluster computing, XML technology, and ontology theory. The model provides users with unified, transparent access interfaces. Building on cluster computing, this work solves the heterogeneous data integration problem by means of ontology and XML technology, and achieves better application results than a traditional data integration model. It was also shown that the model improves the computing capacity of the system at a high performance-to-cost ratio, and it is thus hoped to support the decision-making of enterprise managers.

4.
This paper provides an overview of methods and current applications of distributed computing in bioinformatics. Distributed computing is a strategy of dividing a large workload among multiple computers to reduce processing time, or to make use of resources such as programs and databases that are not available on all computers. Participating computers may be connected either through a local high-speed network or through the Internet.

5.
Finding an effective method to quantify species compositional changes in time and space has been an important task for ecologists and biogeographers. Recently, exploring regional floristic patterns using data derived from satellite imagery, such as the normalized difference vegetation index (NDVI), has drawn considerable research interest among ecologists. Studies have shown that NDVI can be a fairly good surrogate for primary productivity. In this study, we used plant distribution data from North and South Carolina to investigate the correlations between species composition and NDVI within defined ecoregions, using the Mantel test and the multi-response permutation procedure (MRPP). Our analytical approach generated compositional dissimilarity matrices by computing pairwise beta diversities of the 145 counties in the two states for the species distribution data, and Euclidean distances for the NDVI time-series data. We argue that beta diversity measurements take pairwise dissimilarities into consideration explicitly and can provide more spatial correlation information than uni- or multi-dimensional regressions. Our results showed a significant positive correlation between the species compositional dissimilarity matrices and the NDVI distance matrices. We also found, for the first time, that the strength of the correlation increased at lower taxonomic ranks. The same trends held when variability in phenological patterns was incorporated into the NDVI. Our findings suggest that remotely sensed NDVI can be viable for monitoring species compositional changes at regional scales.
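A minimal sketch of the core procedure the abstract describes, comparing a beta-diversity matrix against an NDVI distance matrix with a Mantel test (the data and matrix names are toy stand-ins, not the paper's):

```python
import numpy as np

def mantel(A, B, n_perm=999, seed=0):
    """Mantel test: Pearson r between the upper triangles of two square
    distance matrices, with a one-sided p-value obtained by permuting
    the row/column labels of B."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(A, k=1)
    r_obs = np.corrcoef(A[iu], B[iu])[0, 1]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(A.shape[0])
        if np.corrcoef(A[iu], B[np.ix_(p, p)][iu])[0, 1] >= r_obs:
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)

# Toy stand-ins: 12 "counties" with 24 NDVI composites each; the
# beta-diversity matrix is built to correlate with NDVI by construction.
rng = np.random.default_rng(1)
ndvi = rng.random((12, 24))
D_ndvi = np.linalg.norm(ndvi[:, None] - ndvi[None, :], axis=2)
D_beta = D_ndvi + rng.normal(0, 0.2, D_ndvi.shape)
D_beta = (D_beta + D_beta.T) / 2
np.fill_diagonal(D_beta, 0)
print(mantel(D_beta, D_ndvi))  # expect large r and small p for this toy
```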

6.
Michael Conrad unveiled many of the fundamental characteristics of biological computing. These characteristics underlie the behavioral variability and adaptability of biological systems, and include the ability of biological information processing to exploit quantum features at the atomic level, the powerful 3-D pattern-recognition capabilities of macromolecules, computational efficiency, and the ability to support biological function. Among many other things, Conrad formalized and explicated the underlying principles of biological adaptability, characterized the differences between biological and digital computing in terms of a fundamental tradeoff between adaptability and programmability of information processing, and discussed the challenges of interfacing digital computers with human society. This paper is about the encounter of biological and digital computing. The focus is on the nature of the biological information processing infrastructure of organizations and how it can be extended effectively with digital computing. To achieve this goal, however, we need to properly embed digital computing into the information-processing aspects of human and social behavior and intelligence, which are fundamentally biological. Conrad's legacy provides a firm, strong, and inspiring foundation for this endeavor.

7.
Considerable recent research effort has gone into studying how dispersal might affect the diversity of local communities. While this general topic has received attention from theoretical and empirical ecologists alike, the research focus has differed between the two groups: theoretical ecologists have explored the role of dispersal in the maintenance of diversity within local communities, whereas empirical ecologists have sought to quantify the role of dispersal in limiting local diversity. We argue that there is no necessary relationship between these two components of diversity, and we therefore need to develop empirical approaches to quantify the dispersal-maintained component of diversity as well as the dispersal-limited component. We develop one such approach in this paper, based on a quantitative partitioning of the natural regeneration within intact communities onto different sources of recruits (local community vs. dispersal across different spatial or temporal scales).

8.
Laboratory-based researchers have increasingly reaped the benefits of entering data directly into a computer, with those studying behaviour often using specially designed keyboards. However, many ecologists and ethologists doing fieldwork in remote places have been reluctant to abandon paper checksheets because of worries about unreliability, lack of electrical supply and the sheer weight of computer equipment, adding to more general drawbacks such as the need for considerable expertise in purpose-built hardware and software. Having used commercially available hand-held computers extensively for our own fieldwork on baboons in Africa, we are confident that these worries are unfounded. As some researchers have already discovered, field computerization is not something to be distrusted; in fact, it offers several important benefits.

9.

The original computers were people using algorithms to get mathematical results such as rocket trajectories. Since the invention of the digital computer, brains have been widely understood through analogies with computers and, more recently, with artificial neural networks; these analogies have both strengths and drawbacks. We define and examine a new kind of computation better adapted to biological systems, called biological computation, a natural adaptation of mechanistic physical computation. Nervous systems are of course biological computers, and we focus on some edge cases of biological computing: hearts and flytraps. The heart has about the computing power of a slug, and much of its computing happens outside its forty thousand neurons. The flytrap has about the computing power of a lobster ganglion. This account advances fundamental debates in neuroscience by illustrating ways that classical computability theory can miss the complexities of biology. By this reframing of computation, we make way for resolving the disconnect between human and machine learning.

10.
One of the distinct characteristics of computing platforms shared by multiple users, such as clusters and computational grids, is heterogeneity within each computer and/or among computers. Temporal heterogeneity refers to variation, along the time dimension, of the computing power available for a task on a computer, while spatial heterogeneity refers to variation among computers. When minimizing the average parallel execution time of a target task on a spatially heterogeneous computing system, it is not optimal to distribute the task linearly in proportion to the average computing power available on each computer. In this paper, the effects of temporal and spatial heterogeneity on the performance of a target task are analyzed in terms of the mean and standard deviation of parallel execution time. Based on the analysis, an approach to load balancing for minimizing the average parallel execution time of a target task is described. The proposed approach, whose validity has been verified through simulation, considers temporal and spatial heterogeneity in addition to the average computing power of each computer.
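The abstract does not give the paper's allocation rule; the sketch below illustrates the underlying idea under stated assumptions (per-node power drawn from a normal distribution with node-specific mean and standard deviation, and the makespan estimated by Monte Carlo). All names and numbers are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical node profiles: mean available power mu_i (spatial
# heterogeneity) and its standard deviation sigma_i (temporal heterogeneity).
mu = np.array([4.0, 2.0, 1.0])
sigma = np.array([2.0, 0.2, 0.1])
W = 100.0  # total work to distribute

def mean_parallel_time(alloc, n_samples=4000, seed=0):
    """Monte Carlo estimate of E[max_i alloc_i / power_i], the average
    parallel execution time of the distributed task."""
    rng = np.random.default_rng(seed)
    power = np.maximum(rng.normal(mu, sigma, size=(n_samples, len(mu))), 1e-3)
    return (alloc / power).max(axis=1).mean()

naive = W * mu / mu.sum()  # allocation proportional to average power
res = minimize(mean_parallel_time, naive, bounds=[(0.0, W)] * len(mu),
               constraints={"type": "eq", "fun": lambda a: a.sum() - W})
print("naive:    ", naive.round(1), mean_parallel_time(naive))
print("optimized:", res.x.round(1), mean_parallel_time(res.x))
```

On inputs like these the optimizer shifts work away from the high-variance node, illustrating why a split based on average power alone is suboptimal.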

11.
Current interest in behavioural syndromes, or 'animal personalities', reinforces a need for behavioural ecologists to adopt a multivariate view of phenotypes. Fortunately, many of the methodological and theoretical issues currently being dealt with by behavioural ecologists in the context of behavioural syndromes have previously been investigated by researchers in other areas of evolutionary ecology. As a result of these previous efforts, behavioural syndrome researchers have considerable theory and a wide range of tools already available to them. Here, we discuss aspects of quantitative genetics useful for understanding the multivariate phenotype, as well as the relevance of quantitative genetics to behavioural syndrome research. These methods not only allow proper characterization of the multivariate behavioural phenotype and genotype (including behaviours within, among and independent of behavioural syndrome structures) but also allow predictions as to how populations may respond to selection on behaviours within syndromes. An application of a quantitative genetics framework to behavioural syndrome research also clarifies and refines the questions that should be asked.
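The standard prediction tool here is the multivariate breeder's equation, Δz̄ = Gβ, where G is the additive genetic variance-covariance matrix among behaviours and β is the vector of selection gradients. A worked toy example with hypothetical numbers:

```python
import numpy as np

# Hypothetical G matrix for two syndrome behaviours (boldness, aggression):
# the positive off-diagonal covariance encodes the syndrome itself.
G = np.array([[0.40, 0.25],
              [0.25, 0.30]])
beta = np.array([0.5, 0.0])  # direct selection on boldness only

delta_zbar = G @ beta
print(delta_zbar)  # [0.2, 0.125]: aggression shifts too, via the covariance
```

The correlated response in the unselected behaviour is exactly the kind of syndrome-level prediction the abstract refers to.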

12.
The statistical tools available to ecologists are becoming increasingly sophisticated, allowing more complex, mechanistic models to be fitted to ecological data. Such models have the potential to provide new insights into the processes underlying ecological patterns, but the inferences made are limited by the information in the data. Statistical nonestimability of model parameters due to insufficient information in the data is a problem too often ignored by ecologists employing complex models. Here, we show how a new statistical computing method called data cloning can be used to inform study design by assessing the estimability of parameters under different spatial and temporal scales of sampling. A case study of parasite transmission from farmed to wild salmon highlights that assessing the estimability of ecologically relevant parameters should be a key step when designing studies in which fitting complex mechanistic models is the end goal.
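The data-cloning diagnostic replicates the data set K times and checks whether the posterior variance of each parameter shrinks like 1/K; a variance that plateaus flags a nonestimable parameter. A minimal analytic illustration with a deliberately non-identifiable Gaussian model (not the paper's salmon model):

```python
import numpy as np

tau2, n = 100.0, 20  # prior variance for a and b; number of observations

def cloned_var_a(K):
    """Posterior variance of 'a' under K-fold cloning for y ~ N(a + b, 1)
    with independent priors a, b ~ N(0, tau2): only a + b is identifiable,
    so the precision gained from cloning falls along the (1, 1) direction."""
    precision = np.eye(2) / tau2 + K * n * np.ones((2, 2))
    return np.linalg.inv(precision)[0, 0]

for K in (1, 10, 100, 1000):
    print(K, round(cloned_var_a(K), 3))
# The variance plateaus near tau2/2 = 50 instead of shrinking like 1/K:
# data cloning correctly flags 'a' as nonestimable.
```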

13.
The rise of the individual-based model in ecology
Recent advances of three different kinds are driving a change in the way that modelling is being done in ecology. First, the theory of chaos tells us that short-term predictions of nonlinear systems will be difficult, and long-term predictions will be impossible. The grave implications this has for ecology are only just beginning to be understood. Second, ecologists have started to recognize the importance of local interactions between individuals in ecological systems. And third, improvements in computer power and software are making computers more inviting as a primary tool for modelling. The combination of these factors may have far-reaching consequences for ecological theory.

14.
Many research institutions are deploying computing clusters based on a shared/buy-in paradigm. Such clusters combine shared computers, which are free to be used by all users, and buy-in computers, which are purchased by users for semi-exclusive use. The purpose of this paper is to characterize the typical behavior and performance of a shared/buy-in computing cluster, using data traces from the Shared Computing Cluster (SCC) at Boston University, which runs under this paradigm, as a case study. Among our main findings, we show that the semi-exclusive policy, which allows any SCC user to use idle buy-in resources for a limited time, increases the utilization of buy-in resources by 17.4%, thus significantly improving the performance of the system as a whole. We find that jobs allowed to run on idle buy-in resources arrive more frequently and run for a shorter time than other jobs. Finally, we identify the run-time limit (i.e., the maximum time during which a job is allowed to use resources) and the type of parallel environment as two factors that have a significant impact on the different performance experienced by shared and buy-in jobs.

15.
Advances in molecular biology research on DNA computers
张治洲, 赵健, 贺林 《遗传学报》2003, 30(9): 886-892
Research on DNA (deoxyribonucleic acid) computers is a new field. As the name suggests, it encompasses both DNA research and computer research, and therefore also the study of how DNA technology and computer technology can be integrated. In 1994, Adleman reported in Science the first result of DNA computation; in 2001, Benenson et al. reported in Nature a programmable, test-tube DNA computer with Turing-machine functionality, composed of DNA molecules and corresponding enzyme molecules, marking a major advance in DNA computer research. The most distinctive features of DNA computers are massively parallel computation and potentially enormous data storage capacity. DNA computer research now involves many fields, including biology, mathematics, physics, chemistry, computer science, and automation engineering, and represents a revolution in the concept of computation. It has also greatly accelerated research on techniques for manipulating DNA molecules, especially at the nanometer scale. This review summarizes and comments on the related research progress in terms of the basic principles of DNA computers, their forms of application, and their important relationship to genomics research.

16.
Matrix projection models are among the most widely used tools in plant ecology. However, the way in which plant ecologists use and interpret these models differs from the way in which they are presented in the broader academic literature. In contrast to calls from earlier reviews, most studies of plant populations are based on fewer than five matrices and present simple metrics such as deterministic population growth rates. However, plant ecologists have also cautioned against literal interpretation of model predictions. Although academic studies have emphasized testing quantitative model predictions, such forecasts are not the way in which plant ecologists find matrix models most useful. Improving forecasting ability would necessitate increased model complexity and longer studies. Therefore, in addition to longer-term studies with better links to environmental drivers, priorities for research include critically evaluating relative/comparative uses of matrix models and asking how we can use many short-term studies to understand long-term population dynamics.

17.
Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization, and can essentially be formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve parameter estimation for fractional-order chaotic systems. QPPSO exploits the parallel character of quantum computing, which exponentially increases the computation performed in each generation. The behavior of particles in quantum space is governed by a quantum evolution equation, which consists of the current rotation angle, the individual optimal quantum rotation angle, and the global optimal quantum rotation angle. Numerical simulations on several typical fractional-order systems, and comparisons with typical existing algorithms, show the effectiveness and efficiency of the proposed algorithm.
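For readers unfamiliar with the formulation, the sketch below shows how parameter estimation reduces to optimization, using a plain classical PSO and a simple integer-order chaotic map as a cheap stand-in; the paper's quantum-parallel variant and fractional-order dynamics are not reproduced here:

```python
import numpy as np

def trajectory(r, x0=0.3, T=8):
    """Logistic map x_{t+1} = r*x_t*(1-x_t): a cheap stand-in for the
    fractional-order chaotic system whose parameter we estimate."""
    x = np.empty(T)
    x[0] = x0
    for t in range(T - 1):
        x[t + 1] = r * x[t] * (1.0 - x[t])
    return x

obs = trajectory(3.77)  # synthetic observations; true parameter r = 3.77
cost = lambda r: float(np.sum((trajectory(r) - obs) ** 2))

def pso(cost, lo, hi, n=40, iters=150, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization over a 1-D interval [lo, hi]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n)
    v = np.zeros(n)
    pbest = x.copy()
    pcost = np.array([cost(xi) for xi in x])
    g = pbest[pcost.argmin()]  # global best position
    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(xi) for xi in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()]
    return g, pcost.min()

print(pso(cost, 3.5, 4.0))  # should land near r = 3.77
```

The short trajectory keeps this toy objective tractable; chaotic sensitivity makes the real cost landscape extremely rugged, which is precisely the difficulty QPPSO targets.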

18.
N. G. Rambidi, Bio Systems, 1992, 27(4): 219-222
A new class of computing and information processing devices may result from major principles of information processing at the molecular level. Non-discrete biomolecular computers based on these principles seem to be capable of solving problems of high computational complexity. One possible way to implement these devices is based on biochemical nonlinear dynamical systems. Means and ways to realize biomolecular computers are discussed.

19.
ABSTRACT: BACKGROUND: Accurate and efficient RNA secondary structure prediction remains an important open problem in computational molecular biology. Historically, advances in computing technology have enabled faster and more accurate RNA secondary structure predictions. Previous parallelized prediction programs achieved significant improvements in runtime, but their implementations were not portable from niche high-performance computers or easily accessible to most RNA researchers. With the increasing prevalence of multi-core desktop machines, a new parallel prediction program is needed to take full advantage of today's computing technology. FINDINGS: We present here the first implementation of RNA secondary structure prediction by thermodynamic optimization for modern multi-core computers. We show that GTfold predicts secondary structure in less time than UNAfold and RNAfold, without sacrificing accuracy, on machines with four or more cores. CONCLUSIONS: GTfold supports advances in RNA structural biology by reducing the timescales for secondary structure prediction. The difference will be particularly valuable to researchers working with lengthy RNA sequences, such as RNA viral genomes.
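GTfold's free-energy minimization is beyond a short sketch, but the underlying O(n^3) dynamic program has the same shape as the classic Nussinov base-pair-maximization recurrence below, whose independent anti-diagonal cells are what multi-core implementations parallelize (a simplified stand-in, not GTfold's algorithm):

```python
# Nussinov base-pair maximization: a simplified stand-in for the
# thermodynamic DP in tools like GTfold, UNAfold, and RNAfold.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=3):
    """N[i][j] = max base pairs in seq[i..j], O(n^3) time. Cells on one
    span (anti-diagonal) are mutually independent, which is the natural
    axis along which to split work across cores."""
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):  # independent cells: parallelizable loop
            j = i + span
            best = N[i + 1][j]  # case: base i left unpaired
            if (seq[i], seq[j]) in PAIRS:
                best = max(best, N[i + 1][j - 1] + 1)  # case: i pairs with j
            for k in range(i + min_loop + 1, j):  # case: i pairs with k < j
                if (seq[i], seq[k]) in PAIRS:
                    best = max(best, N[i + 1][k - 1] + 1 + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]

print(nussinov("GGGAAAUCC"))  # toy hairpin: 3 base pairs
```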

20.
Stable isotope analyses have emerged as an insightful tool for ecologists, with quantitative methods being developed to analyse data at the population, community and food-web levels. In parallel, functional ecologists have developed metrics to quantify the multiple facets of functional diversity in an n-dimensional space based on functional traits. Here, we transferred and adapted metrics developed by functional ecologists into a set of four isotopic diversity metrics (isotopic divergence, dispersion, evenness and uniqueness) complementary to the existing metrics. Specifically, these new metrics are mathematically independent of the number of organisms analysed and account for the abundance of organisms. They can also be calculated with more than two stable isotopes. In addition, we provide a procedure for calculating the level of isotopic overlap (similarity and turnover) between two groups of organisms. These metrics have been implemented as new functions in R, made freely available to users, and we illustrate their application using stable isotope values from a freshwater fish community. Transferring the framework developed initially for measuring functional diversity to stable isotope ecology will allow more efficient assessment of changes in the multiple facets of isotopic diversity following anthropogenic disturbances.
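The authors' implementation is in R; as a language-neutral illustration, here is one plausible reading of an abundance-weighted dispersion metric in isotope space (the exact definitions are in the paper and its R functions, which this sketch does not reproduce):

```python
import numpy as np

def isotopic_dispersion(iso, abundances):
    """Abundance-weighted mean Euclidean distance of organisms to the
    weighted centroid of isotope space (e.g. d13C x d15N). Published
    versions typically also rescale, e.g. by the maximum distance, so
    treat this as an illustrative variant."""
    w = np.asarray(abundances, float)
    w = w / w.sum()
    X = np.asarray(iso, float)
    centroid = (w[:, None] * X).sum(axis=0)
    dist = np.linalg.norm(X - centroid, axis=1)
    return float((w * dist).sum())

# Toy fish community: rows are species, columns are d13C and d15N values.
iso = [[-24.1, 10.2], [-22.5, 12.8], [-26.0, 9.1], [-23.3, 11.5]]
print(isotopic_dispersion(iso, abundances=[40, 10, 5, 20]))
```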

