Similar Articles
20 similar articles found (search time: 46 ms)
1.
In recent years, as interdisciplinary research between computer science and other fields has developed, computational fluid dynamics (CFD) numerical simulation has been increasingly applied to microclimate studies of urban environments, offering a new way to study how green space can deliver its cooling effect more effectively within a limited area. This article reviews applications of CFD numerical simulation to the temperature and humidity effects of urban green space at different scales and to the evaluation of outdoor thermal comfort. On this basis, it summarizes current problems and shortcomings and proposes three directions for future research, as a reference for urban green-space microclimate studies: 1) multi-platform and cross-scale research; 2) integrated cross-analysis of microclimate indicators; 3) timely updating of well-matched simulation models.

2.
Gregory R, Paton R, Saunders J, Wu QH. Bio Systems 2004, 76(1-3): 121-131.
Large simulations of bacterial colonies require huge amounts of computational time; the only way to achieve the necessary level of performance is with parallel computers and a suitably designed implementation that maps the problem onto the hardware. For real problems this mapping can be non-trivial, requiring careful consideration of the constraints in both the system being modelled and the hardware that executes the model. Here we describe an implementation of a system for modelling bacterial evolution that encompasses many physical scales. The system is composed entirely of individual entities playing out a complex series of interactions; these individuals exist both at the scale of the bacterial population and at the gene-product scale. This paper reports that it is possible to map such a dynamic problem onto fixed resources, for the most part by making use of the implicit multiplexing of resources provided by the OS and by partitioning the problem to reduce communication time. Through this an efficient simulation can be created, making maximal use of the available hardware without constraining the model to require excessively specific resources.
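As a concrete illustration of the partitioning strategy this abstract alludes to, the Python sketch below bins agents into fixed per-process spatial domains so that most interactions stay process-local and communication is confined to domain boundaries. The 1-D block decomposition, function names and parameters are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def partition_agents(x_positions, n_procs, domain_length):
    """Assign each agent to the process that owns its spatial block.

    A generic 1-D block decomposition: most neighbour interactions then
    stay within one process, and only boundary agents need communication.
    """
    block = domain_length / n_procs
    owner = np.minimum((x_positions // block).astype(int), n_procs - 1)
    return owner  # owner[i] = rank that simulates agent i

# Example: 10,000 bacteria on a 100-unit domain, split across 8 processes.
x = np.random.default_rng(1).uniform(0.0, 100.0, size=10_000)
owner = partition_agents(x, n_procs=8, domain_length=100.0)
```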

3.
Recently much effort has been spent on providing a shared address space abstraction on clusters of small-scale symmetric multiprocessors. However, advances in technology will soon make it possible to construct these clusters with larger-scale cc-NUMA nodes, connected with non-coherent networks that offer latencies and bandwidth comparable to the interconnection networks used in hardware cache-coherent systems. The shared memory abstraction can be provided on these systems in software across nodes and in hardware within nodes. Recent simulation results have demonstrated that certain features of modern system area networks can be used to greatly reduce shared virtual memory (SVM) overheads [5,19]. In this work we leverage these results and use detailed system emulation to investigate building future software shared memory clusters. We use an existing, large-scale hardware cache-coherent system with 64 processors to emulate a complete future cluster, port our existing infrastructure (communication layer and shared memory protocol) onto this system, and study the behavior of a set of real applications. We present results for both 32- and 64-processor system configurations. We find that: (i) system emulation is invaluable in quantifying potential benefits from changes in the technology of commodity components; more importantly, it reveals potential problems in future systems that are easily overlooked in simulation studies, so system emulation should be used along with other modeling techniques (e.g., simulation, implementation) to investigate future trends; (ii) current SVM protocols can only partially take advantage of faster interconnects and wider nodes due to operating system and architectural implications. We quantify the related issues and identify the areas where more research is required for future SVM clusters.

4.
5.
This review discusses the many roles atomistic computer simulations of macromolecular (for example, protein) receptors and their associated small-molecule ligands can play in drug discovery, including the identification of cryptic or allosteric binding sites, the enhancement of traditional virtual-screening methodologies, and the direct prediction of small-molecule binding energies. The limitations of current simulation methodologies, including the high computational costs and approximations of molecular forces required, are also discussed. With constant improvements in both computer power and algorithm design, the future of computer-aided drug design is promising; molecular dynamics simulations are likely to play an increasingly important role.

6.
Protein dynamics simulations from nanoseconds to microseconds.
There have been a number of advances in atomic resolution simulations of biomolecules during the past few years. These have arisen partly from improvements to computer power and partly from algorithmic improvements. There have also been advances in measuring time-dependent fluctuations in proteins using NMR spectroscopy, revealing the importance of fluctuations in the microsecond to millisecond time range. Progress has also been made in measuring how far the simulations are able to represent the accessible phase space that is available to the protein in its native state, in solution, at room temperature. Another area of development is the simulation of protein unfolding at atomic resolution.

7.
8.
Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems, but software simulation cannot rapidly generate output spikes for large-scale networks. An alternative approach, hardware implementation, makes it possible to generate independent spikes precisely and to output spike waves simultaneously in real time, provided the spiking neural network can take full advantage of the inherent parallelism of the hardware. In this work we introduce a configurable FPGA-oriented hardware platform for spiking neural network simulation. We aim to combine the speed of dedicated hardware with the programmability of software, so that neuroscientists can put together sophisticated computational experiments with their own models. A feed-forward hierarchical network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity in visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger-scale models based on this framework can be used to replicate the actual architecture of visual cortex, leading to more detailed predictions and insights into visual perception phenomena.
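As a software reference point for the per-neuron computation that such an FPGA platform replicates in parallel, the sketch below advances a population of leaky integrate-and-fire (LIF) neurons one time step at a time: each neuron's update is independent within a step, which is what makes the problem map well onto hardware parallelism. The neuron model and constants are generic illustrations, not the paper's model.

```python
import numpy as np

def lif_step(v, i_syn, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Advance all membrane potentials one time step; return spike flags."""
    v = v + dt / tau * (-v + i_syn)   # leaky integration
    spikes = v >= v_thresh            # elementwise: one 'lane' per neuron
    v = np.where(spikes, v_reset, v)  # reset the neurons that fired
    return v, spikes

v = np.zeros(1024)                    # 1024 neurons updated in lockstep
rng = np.random.default_rng(0)
for t in range(100):
    v, spikes = lif_step(v, i_syn=rng.uniform(0.0, 2.0, size=v.shape))
```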

9.
Structure-based drug design is a creative process with several features that make it closer to human reasoning than to machine automation. Very often, however, user intervention is limited to preparing the input and analysing the output of a computer simulation. In some cases, allowing human intervention directly in the process could improve the quality of the results by bringing the researcher's intuition directly into the simulation. Haptic technology has previously been explored as a useful way to interact with a chemical system, but the need for expensive hardware and the lack of accessible software have limited its use to date. Here we report the implementation of a haptic-based molecular mechanics environment aimed at interactive drug design and ligand optimization, using an easily accessible software/hardware combination.

10.
In this paper, we present a mathematical foundation, including a convergence analysis, for cascade-architecture neural networks. Our analysis shows that convergence of the cascade architecture is assured because it satisfies the Liapunov criteria in an added-hidden-unit domain rather than in the time domain. From this analysis, a mathematical foundation for the cascade correlation learning algorithm can be derived; furthermore, the analysis shows the cascade correlation scheme to be a special case, from which we propose an efficient hardware learning algorithm called Cascade Error Projection (CEP). CEP provides efficient learning in hardware and is faster to train because part of the weights are obtained deterministically, and learning the remaining weights from the inputs to the hidden unit is performed as single-layer perceptron learning with the previously determined weights kept frozen. In addition, one can start with zero weight values (rather than random finite weight values) when the learning of each layer commences. Further, unlike the cascade correlation algorithm (where a pool of candidate hidden units is added), only a single hidden unit is added at a time, so simplicity in hardware implementation is also achieved. Finally, 5- to 8-bit parity and chaotic time-series prediction problems are investigated; the simulation results demonstrate that 4-bit or greater weight quantization is sufficient for training a neural network with CEP. It is also demonstrated that the technique can compensate for lower-bit weight resolution by incorporating additional hidden units, although generalization may suffer somewhat with lower-bit weight quantization.
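The constructive loop described above can be sketched in a few lines of Python: one hidden unit is added at a time, its input weights are then frozen, and only the output weights are re-learned as a single-layer problem starting from zero. In this sketch the deterministic derivation of the new unit's input weights is replaced by a simple least-squares projection of the residual, which is a stand-in assumption, not the CEP formula from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def add_hidden_unit(X, residual):
    """Derive input weights for a new hidden unit from the current residual.

    CEP obtains part of the weights deterministically; this least-squares
    projection of the residual is a simple stand-in (assumption).
    """
    w, *_ = np.linalg.lstsq(X, residual, rcond=None)
    return w

def train_output_layer(H, y, lr=0.1, epochs=200):
    """Single-layer (perceptron-style) learning of the output weights,
    with all hidden-unit input weights kept frozen."""
    v = np.zeros(H.shape[1])           # start from zero weights, as in CEP
    for _ in range(epochs):
        pred = sigmoid(H @ v)
        v += lr * H.T @ (y - pred)     # delta rule on the output layer only
    return v

def cascade_train(X, y, n_units=4):
    H = np.ones((X.shape[0], 1))       # bias column
    frozen = []
    for _ in range(n_units):           # one hidden unit added at a time
        residual = y - sigmoid(H @ train_output_layer(H, y))
        w = add_hidden_unit(X, residual)
        frozen.append(w)               # input weights frozen once set
        H = np.hstack([H, sigmoid(X @ w)[:, None]])
    return frozen, train_output_layer(H, y)
```

On small problems such as the n-bit parity tasks mentioned in the abstract, X would hold the binary input patterns and y the parity labels.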

11.
Molecular dynamics simulations of membrane proteins are making rapid progress, because of new high-resolution structures, advances in computer hardware and atomistic simulation algorithms, and the recent introduction of coarse-grained models for membranes and proteins. In addition to several large ion channel simulations, recent studies have explored how individual amino acids interact with the bilayer or snorkel/anchor to the headgroup region, and it has been possible to calculate water/membrane partition free energies. This has resulted in a view of bilayers as being adaptive rather than purely hydrophobic solvents, with important implications, for example, for the interaction between lipids and arginines in the charged S4 helix of voltage-gated ion channels. However, several studies indicate that typical current simulations fall short of exhaustive sampling, and that even simple protein-membrane interactions require at least ca. 1 μs to fully sample their dynamics. One new way this is being addressed is coarse-grained models that enable mesoscopic simulations on the multi-μs scale. These have been used to model interactions, self-assembly and membrane perturbations induced by proteins. While they cannot replace all-atom simulations, they are a potentially useful technique for initial insertion, placement, and low-resolution refinement.

12.
In heterogeneous environments, dynamic scheduling algorithms are a powerful tool for improving the performance of scientific applications via load balancing. However, these scheduling techniques employ heuristics that require prior knowledge of the workload obtained via profiling, resulting in higher overhead as problem sizes and numbers of processors increase. In addition, load imbalance may appear only at run time, making profiling work tedious and sometimes even obsolete. Recently, the integration of dynamic loop scheduling algorithms into a number of scientific applications has proven effective. This paper reports on performance improvements obtained by integrating Adaptive Weighted Factoring, a recently proposed dynamic loop scheduling technique that addresses these concerns, into two scientific applications: computational field simulation on unstructured grids, and N-body simulations. The reported experimental results confirm the benefits of the methodology and emphasize its high potential for future integration into other scientific applications that exhibit substantial performance degradation due to load imbalance.
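For readers unfamiliar with the factoring family of schedulers the paper builds on, the Python sketch below shows the basic rule: each scheduling "batch" splits roughly half the remaining iterations among the workers, and weighted factoring scales each worker's chunk by a per-worker weight; the adaptive variant (AWF) then updates those weights from execution rates measured at run time. The weight handling below is a simplified stand-in, not the paper's formula.

```python
import math

def weighted_factoring_chunks(n_iters, weights):
    """Yield (worker, start, size) chunks for n_iters loop iterations."""
    total_w = sum(weights)
    remaining, start = n_iters, 0
    while remaining > 0:
        batch = max(1, remaining // 2)   # half the remaining work per batch
        for worker, w in enumerate(weights):
            size = min(remaining, max(1, math.ceil(batch * w / total_w)))
            yield worker, start, size
            start += size
            remaining -= size
            if remaining == 0:
                break

# Example: 4 workers, the first twice as fast as the others.
for worker, start, size in weighted_factoring_chunks(1000, [2.0, 1.0, 1.0, 1.0]):
    pass  # dispatch iterations [start, start+size) to `worker`
```

Chunks start large (low scheduling overhead) and shrink toward the end of the loop (fine-grained balancing), which is the property that makes factoring-style schedules robust to load imbalance.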

13.
The development, construction and operation of an open-air fumigation system for exposing young forest trees to controlled concentrations of sulphur dioxide and ozone are described. A computer simulation of gas dispersion was used to design an array of pipework sources which minimized spatial variability in exposure concentrations. Five fumigation plots were constructed using the design and were used to fumigate trees during a 7 year study known as the Liphook Forest Fumigation Project. Rates of gas release were controlled by a small computer to follow predetermined patterns of sulphur dioxide concentration and to maintain an elevation above ambient ozone concentration. Effective control of exposure was demonstrated, and examples of experimentally produced concentration frequency distributions are provided. The advantages and shortcomings of the system are discussed with recommendations for future improvements.

14.

The performance of a supercomputer installation (Cray-1S with Amdahl front-end) is compared with a ‘desk-top’ computer (IBM PC-AT with 80287 Numeric Data Processor) using both execution time and total job turnaround time for a set of benchmark programs which includes a Monte Carlo simulation. The effects of compiler efficiency and of optimisation for system-specific hardware on execution time are discussed. We propose the use of ‘turnaround time’ as a criterion for computer system performance, and show that, judged by this criterion, ‘desk-top’ computing can provide a significant fraction of the power of a networked supercomputer installation.

15.
We have developed a software package called Osprey for the calculation of optimal oligonucleotides for DNA sequencing and for the creation of microarrays based on either PCR products or directly spotted oligomers. It incorporates a novel use of position-specific scoring matrices for the sensitive and specific identification of secondary binding sites anywhere in the target sequence; run on accelerated hardware, this is faster and more efficient than the traditional pairwise alignments used in most oligo-design software. Osprey consists of a module for target-site selection based on user input, novel utilities for dealing with problematic sequences such as repeats, and a common code base for the identification of optimal oligonucleotides from the target list. Overall, these improvements provide a program that, without major increases in run time, reflects current DNA thermodynamics models, improves specificity, and reduces the user's data preprocessing and parameterization requirements. Using a TimeLogic™ hardware accelerator, we report up to a 50-fold reduction in search time versus a linear search strategy. Target sites may be derived from computer analysis of DNA sequence assemblies in the case of sequencing efforts, or from genome or EST analysis in the case of microarray development, in both prokaryotes and eukaryotes.
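The PSSM-based site search can be pictured as follows: build a matrix that scores each base at each position of a candidate oligo, then slide it along the target sequence and flag every window whose total score clears a threshold. The scoring construction and threshold in this Python sketch are generic illustrations, not Osprey's actual model.

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def pssm_from_oligo(oligo, match=2.0, mismatch=-1.0):
    """Build a trivial PSSM that rewards the oligo's base at each position."""
    m = np.full((len(oligo), 4), mismatch)
    for i, b in enumerate(oligo):
        m[i, BASES[b]] = match
    return m

def scan(target, pssm, threshold):
    """Score every window of the target; report potential binding sites."""
    k = pssm.shape[0]
    hits = []
    for i in range(len(target) - k + 1):
        score = sum(pssm[j, BASES[target[i + j]]] for j in range(k))
        if score >= threshold:
            hits.append((i, score))
    return hits

# Both exact sites and near-matches (possible secondary binding) are found.
print(scan("ACGTACGTTTACGAACGT", pssm_from_oligo("ACGT"), threshold=5.0))
```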

16.
The last decade saw a proliferation of research into the design of neurocomputers. Although such work still continues, much of it never gets beyond the prototype-machine stage. In this paper, we argue that, on the whole, neurocomputers are no longer viable; like, say, database computers before them, their time has passed before they became a common reality. We consider the implementation of hardware neural networks, from the level of arithmetic up to complete individual processors and parallel processors, and show that current trends in computer architecture and implementation do not support a case for custom neurocomputers. We argue that in the future, neural-network processing ought to be mostly restricted to general-purpose processors or to processors designed for other widely used applications. There are just one or two, rather narrow, exceptions to this.

17.
Radiation therapy plays an increasingly important role in the management of cancer. Currently, more than 50% of all cancer patients can expect to receive radiotherapy during the course of their disease, either as primary management (radical or adjuvant radiotherapy) or for symptom control (palliative radiotherapy). Radiation oncology is a unique branch of medicine, drawing on clinical knowledge as well as medical physics, and in recent years the specialty has become increasingly bound up with technological advances. This growing emphasis on technology, together with other important changes in the health-care economic environment, now places the specialty of radiation oncology in a precarious position. New treatment technologies are evolving at a rate unprecedented in radiation therapy, paralleled by improvements in computer hardware and software. These techniques allow assessment of changes in the tumour volume and its location during the course of therapy (interfraction motion), so that re-planning can adjust for such changes in an adaptive radiotherapy process. If radiation oncologists become simply the guardians of a single therapeutic modality, they may find that time marches on and, while the techniques live on, the specialty may not. This article discusses these threats to the field and examines strategies by which it may evolve, diversify, and thrive.

18.
The first aim of simulation in a virtual environment is to help biologists better understand the simulated system, at a cost significantly lower than that of in vivo experiments. However, the inherent complexity of biological systems makes them hard to simulate on non-parallel architectures: models may be composed of sub-models and take several scales into account, and the number of simulated entities may be quite large. Today, graphics cards are used for general-purpose computing, which has been made easier thanks to frameworks such as CUDA and OpenCL. Parallelizing models may nevertheless not be easy: parallel programming skills are often required, and several hardware architectures may be used to execute the models. In this paper, we present the software architecture we built in order to implement various models able to simulate multi-cellular systems. The architecture is modular and implements data structures adapted to graphics-processing-unit architectures, allowing efficient simulation of biological mechanisms.
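A common way to make agent state GPU-friendly, in the spirit of the data structures the abstract mentions, is a structure-of-arrays (SoA) layout: one contiguous array per attribute, so that threads processing neighbouring cells read neighbouring memory (coalesced access). The class below sketches this in Python/NumPy as a stand-in for a CUDA/OpenCL implementation; all names and the update rule are illustrative assumptions, not the paper's API.

```python
import numpy as np

class CellPopulation:
    """Structure-of-arrays container for a population of simulated cells."""
    def __init__(self, n):
        # One array per attribute (SoA), not one object per cell (AoS):
        # each elementwise update maps to a one-thread-per-cell GPU kernel.
        self.pos = np.zeros((n, 3), dtype=np.float32)
        self.vel = np.zeros((n, 3), dtype=np.float32)
        self.age = np.zeros(n, dtype=np.float32)
        self.alive = np.ones(n, dtype=bool)

    def step(self, dt):
        # Update only the live cells; masks translate to per-thread guards.
        self.pos[self.alive] += dt * self.vel[self.alive]
        self.age[self.alive] += dt

pop = CellPopulation(100_000)
pop.step(dt=0.1)
```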

19.
Neural networks are usually considered naturally parallel computing models, but the number of operators and the complex connection graph of standard neural models cannot be handled directly by digital hardware devices. In particular, several works show that programmable digital hardware is a real opportunity for flexible hardware implementations of neural networks. Yet many area and topology problems arise when standard neural models are implemented on programmable circuits such as FPGAs, so that rapid improvements in FPGA technology cannot be fully exploited. Neural-network hardware implementations therefore need to reconcile simple hardware topologies with complex neural architectures. The theoretical and practical framework developed here allows this combination by applying principles of configurable hardware to neural computation: Field Programmable Neural Arrays (FPNAs) lead to powerful neural architectures that are easy to map onto FPGAs, thanks to a simplified topology and an original data-exchange scheme. This paper shows how FPGAs have led to the definition of the FPNA computation paradigm, and then how FPNAs contribute to current and future FPGA-based neural implementations by solving the general problems raised by implementing complex neural networks on FPGAs.

20.
Pig breeders in the past have adapted their breeding goals to the needs of producers, processors and consumers, and have made remarkable genetic improvements in the traits of interest. However, it is becoming more and more challenging to meet market needs and the expectations of consumers and of citizens in general. In view of current and future trends, the breeding goals have to include several additional traits and new phenotypes. These phenotypes include (a) vitality from birth to slaughter, (b) uniformity at different levels of production, (c) robustness, (d) welfare and health and (e) phenotypes to reduce the carbon footprint. Advances in management, genomics, statistical models and other technologies provide opportunities for recording these phenotypes. These developments also make it possible to use the new phenotypes effectively for faster genetic improvement toward the newly adapted breeding goals.
