Found 20 similar documents; search took 46 ms
1.
Yingchong Situ Lixia Liu Chandra S. Martha Matthew E. Louis Zhiyuan Li Ahmed H. Sameh Gregory A. Blaisdell Anastasios S. Lyrintzis 《Cluster computing》2013,16(1):157-170
High-fidelity computational fluid dynamics (CFD) tools, such as the large eddy simulation technique, have become feasible in aiding the field of computational aeroacoustics (CAA) to compute noise on petascale computing platforms. CAA poses significant challenges for researchers because the computational schemes used in the CFD tools should have high accuracy, good spectral resolution, and low dispersion and diffusion errors. A high-order compact finite difference scheme, which is implicit in space, can be used for such simulations because it fulfills the requirements for CAA. Usually, this method is parallelized using a transposition scheme; however, that approach has a high communication overhead. In this paper, we discuss the use of a parallel tridiagonal linear system solver based on the truncated SPIKE algorithm for reducing the communication overhead in our large eddy simulations. We present theoretical performance analysis and report experimental results collected on two parallel computing platforms.
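The paper's truncated SPIKE solver itself is beyond a short sketch, but the local building block it parallelizes is a serial tridiagonal solve. Below is a minimal pure-Python Thomas algorithm of the kind each SPIKE partition applies to its interior system; the function name and array layout are illustrative, not the paper's code:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system by the Thomas algorithm.
    a = sub-diagonal (a[0] unused), b = main diagonal,
    c = super-diagonal (c[-1] unused), d = right-hand side."""
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # pivot after elimination
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n                            # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# [2 1 0; 1 2 1; 0 1 2] x = [4, 8, 8] has solution x = [1, 2, 3]
print(thomas_solve([0, 1, 1], [2, 2, 2], [1, 1, 0], [4, 8, 8]))
```

SPIKE's contribution is to let P partitions run this O(n) solve concurrently and then couple them through a small reduced system, which is what cuts the communication cost relative to a transposition scheme.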
2.
Shakti Mehrotra Om Prakash B. N. Mishra B. Dwevedi 《Plant Cell, Tissue and Organ Culture》2008,95(1):29-35
This study presents an ANN-based computational scheme for modeling physical, chemical, and biological parameters at the flask level, as a step toward mass multiplication of plants through micropropagation in larger-volume bioreactors. The optimal small-scale culture environment for the Glycyrrhiza plant was predicted with a neural-network approach, using as inputs the pH and volume of growth medium per culture flask, incubation-room temperature, and month of inoculation, together with inoculum properties (inoculum size, fresh weight, and number of explants per flask). Such a study could serve as a model system for the commercial propagation of economically important plants in bioreactors using tissue-culture techniques. The ANN was trained using the MATLAB neural network toolbox. A feed-forward back-propagation network was created with a seven-element input vector, a single hidden layer of seven nodes, and one output unit; the 'tansig' and 'purelin' transfer functions were adopted for the hidden and output layers, respectively. Four training functions (traingda, trainrp, traincgf, and traincgb) were selected to train four networks, which were then examined against the available dataset. Network efficiency was assessed by comparing network outputs with empirical data from detailed tissue-culture experiments, designated as the target set (mean fresh-weight biomass per culture flask after 40 days of in vitro culture). Training initialization was judged by comparing the mean square error at epoch zero for each trained network; the lowest initial error was observed with trainrp, followed by traincgb and traincgf. A comparison of the experimental target range from wet-lab practice against each trained network's output range showed that the 'trainrp' network deviated least from the empirical target range, whereas the 'traincgb' network performed worst, with an output range exceeding the target and ultimately yielding meaningless results.
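The network topology described in this entry (seven inputs, seven 'tansig' hidden nodes, one 'purelin' output) can be sketched outside MATLAB as a plain forward pass; 'tansig' corresponds to tanh and 'purelin' to the identity. The weights and the sample input below are random illustrative stand-ins, not trained values from the study:

```python
import math
import random

random.seed(0)

N_IN, N_HID = 7, 7  # seven input elements, seven hidden nodes (as in the abstract)

# Randomly initialised weights stand in for a trained network.
w_hid = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
b_hid = [random.uniform(-1, 1) for _ in range(N_HID)]
w_out = [random.uniform(-1, 1) for _ in range(N_HID)]
b_out = random.uniform(-1, 1)

def forward(x):
    """Forward pass: 'tansig' (tanh) hidden layer, 'purelin' (identity) output."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_hid, b_hid)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# One culture-flask observation: pH, medium volume (ml), temperature (C), month,
# inoculum size, fresh weight (g), explants per flask -- values are illustrative.
sample = [5.8, 40.0, 25.0, 6.0, 1.0, 0.5, 3.0]
print(forward(sample))
```

In the study itself the back-propagation training (traingda, trainrp, traincgf, traincgb) is what distinguishes the four networks; the forward topology is identical across them.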
3.
B. Borisch Chappuis H. Müller J. Stutte M. M. Hey K. Hübner H. K. Müller-Hermelink 《Virchows Archiv. B, Cell pathology including molecular pathology》1989,58(1):199-205
Fourteen examples of non-Hodgkin’s lymphoma (NHL) and four of Hodgkin’s disease in patients with AIDS as well as lymph nodes
exhibiting changes related to the lymphadenopathy syndrome (LAS) from 11 HIV-positive individuals were studied for the presence
of Epstein-Barr virus (EBV) genome both by in situ DNA hybridization and blotting techniques. Both methods were performed
using formalin-fixed paraffin-embedded material. All the NHLs were of high malignancy and all but one were of the B-cell type.
Of the four examples of Hodgkin’s disease, two were lymphocytic predominant, one of mixed cellularity and one of the nodular
sclerosing variety. The lymph nodes of patients with LAS were mostly stage I with marked follicular hyperplasia. In 7 of the
14 NHLs the presence of EBV-DNA was clearly demonstrated by dot-blotting and by in situ hybridization. All lymph nodes from
the patients with LAS and AIDS-related Hodgkin’s disease were negative for EBV by dot-blot and in situ hybridization assays.
We conclude that EBV plays a role in the development of AIDS-related lymphomas, but the fact that half these lymphomas are
EBV-negative suggests that other mechanisms such as polyclonal stimulation of B-cells by HIV products may also be important.
This study was supported by the DFG, SFB 172 to BBC and HKMH, and by BMFT grant 01KI 88061 to BBC.
4.
Load balancing in a workstation-based cluster system has been investigated extensively, mainly focusing on the effective usage
of global CPU and memory resources. However, if a significant portion of applications running in the system is I/O-intensive,
traditional load balancing policies can cause system performance to decrease substantially. In this paper, two I/O-aware load-balancing
schemes, referred to as IOCM and WAL-PM, are presented to improve the overall performance of a cluster system with a general
and practical workload including I/O activities. The proposed schemes dynamically detect I/O load imbalance of nodes in a
cluster, and determine whether to migrate some I/O load from overloaded nodes to other less- or under-loaded nodes. The current
running jobs are eligible to be migrated in WAL-PM only if overall performance improves. Besides balancing I/O load, the scheme
judiciously takes into account both CPU and memory load sharing in the system, thereby maintaining the same level of performance
as existing schemes when I/O load is low or well balanced. Extensive trace-driven simulations for both synthetic and real
I/O-intensive applications show that: (1) Compared with existing schemes that only consider CPU and memory, the proposed schemes
improve the performance with respect to mean slowdown by up to a factor of 20; (2) When compared to the existing approaches
that only consider I/O with non-preemptive job migrations, the proposed schemes achieve improvements in mean slowdown by up
to a factor of 10; (3) Under CPU-memory intensive workloads, our scheme improves the performance over the existing approaches
that only consider I/O by up to 47.5%.
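The core decision of such I/O-aware schemes, detecting I/O imbalance and migrating load from overloaded nodes to less-loaded ones, can be sketched as follows. The node records, the `threshold` parameter, and the pick-least-loaded rule are illustrative assumptions, not the paper's IOCM/WAL-PM logic:

```python
def pick_migration_target(nodes, threshold=1.5):
    """Sketch of one I/O-aware balancing step: if some node's I/O load exceeds
    `threshold` times the cluster mean, propose moving load from the most
    overloaded node to the least-loaded one."""
    mean_io = sum(n["io"] for n in nodes) / len(nodes)
    overloaded = [n for n in nodes if n["io"] > threshold * mean_io]
    if not overloaded:
        return None  # I/O load is balanced; fall back to CPU/memory sharing
    source = max(overloaded, key=lambda n: n["io"])
    target = min(nodes, key=lambda n: n["io"])
    return source["name"], target["name"]

nodes = [{"name": "n0", "io": 90}, {"name": "n1", "io": 10}, {"name": "n2", "io": 20}]
print(pick_migration_target(nodes))  # n0 exceeds 1.5x the mean; n1 is least loaded
```

The `None` branch reflects the property claimed in the abstract: when I/O load is low or well balanced, the scheme degrades gracefully to ordinary CPU and memory load sharing.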
Xiao Qin received the BSc and MSc degrees in computer science from Huazhong University of Science and Technology in 1992 and 1999,
respectively. He received the PhD degree in computer science from the University of Nebraska-Lincoln in 2004. Currently, he
is an assistant professor in the department of computer science at the New Mexico Institute of Mining and Technology. His
research interests include parallel and distributed systems, storage systems, real-time computing, performance evaluation,
and fault-tolerance. He served on program committees of international conferences like CLUSTER, ICPP, and IPCCC. During 2000–2001,
he was on the editorial board of The IEEE Distributed System Online. He is a member of the IEEE.
Hong Jiang received the B.Sc. degree in Computer Engineering in 1982 from Huazhong University of Science and Technology, Wuhan, China;
the M.A.Sc. degree in Computer Engineering in 1987 from the University of Toronto, Toronto, Canada; and the PhD degree in
Computer Science in 1991 from the Texas A&M University, College Station, Texas, USA. Since August 1991 he has been at the
University of Nebraska-Lincoln, Lincoln, Nebraska, USA, where he is Associate Professor and Vice Chair in the Department of
Computer Science and Engineering. His present research interests are computer architecture, parallel/distributed computing,
computer storage systems and parallel I/O, performance evaluation, middleware, networking, and computational engineering.
He has over 70 publications in major journals and international Conferences in these areas and his research has been supported
by NSF, DOD and the State of Nebraska. Dr. Jiang is a Member of ACM, the IEEE Computer Society, and the ACM SIGARCH and ACM
SIGCOMM.
Yifeng Zhu received the B.E. degree in Electrical Engineering from Huazhong University of Science and Technology in 1998 and the M.S.
degree in computer science from University of Nebraska Lincoln (UNL) in 2002. Currently he is working towards his Ph.D. degree
in the department of computer science and engineering at UNL. His main fields of research interests are parallel I/O, networked
storage, parallel scheduling, and cluster computing. He is a student member of IEEE.
David Swanson received a Ph.D. in physical (computational) chemistry at the University of Nebraska-Lincoln (UNL) in 1995, after which he
worked as an NSF-NATO postdoctoral fellow at the Technical University of Wroclaw, Poland, in 1996, and subsequently as a National
Research Council Research Associate at the Naval Research Laboratory in Washington, DC, from 1997–1998. In early 1999 he returned
to UNL where he has coordinated the Research Computing Facility and currently serves as an Assistant Research Professor in
the Department of Computer Science and Engineering. The Office of Naval Research, the National Science Foundation, and the
State of Nebraska have supported his research in areas such as large-scale parallel simulation and distributed systems.
5.
We present gmblock, a block-level storage sharing system over Myrinet which uses an optimized I/O path to transfer data directly
between the storage medium and the network, bypassing the host CPU and main memory bus of the storage server. It is device
driver independent and retains the protection and isolation features of the OS. We evaluate the performance of a prototype
gmblock server and find that: (a) the proposed techniques eliminate memory and peripheral bus contention, increasing remote
I/O bandwidth significantly, in the order of 20–200% compared to an RDMA-based approach, (b) the impact of remote I/O to local
computation becomes negligible, (c) the performance characteristics of RAID storage combined with limited NIC resources reduce
performance. We introduce synchronized send operations to improve the degree of disk to network I/O overlapping. We deploy
the OCFS2 shared-disk filesystem over gmblock and show gains for various application benchmarks, provided I/O scheduling can
eliminate the disk bottleneck due to concurrent access.
6.
Performance Evaluation of the Quadrics Interconnection Network (cited 1 time: 0 self-citations, 1 by others)
Fabrizio Petrini Eitan Frachtenberg Adolfy Hoisie Salvador Coll 《Cluster computing》2003,6(2):125-142
7.
Navjot S. Sodhi Tien Ming Lee Cagan H. Sekercioglu Edward L. Webb Dewi M. Prawiradilaga David J. Lohman Naomi E. Pierce Arvin C. Diesmos Madhu Rao Paul R. Ehrlich 《Biodiversity and Conservation》2010,19(4):1175-1188
Garnering support from local people is critical for maintaining ecologically viable and functional protected areas. However,
empirical data illustrating local people’s awareness of the importance of nature’s services are limited, possibly impeding effective ecosystem (environmental) services-based conservation efforts. Using data from five protected forests in four developing
Southeast Asian countries, we provide evidence that local people living near parks value a wide range of environmental services,
including cultural, provisioning, and regulating services, provided by the forests. Local people with longer residency valued
environmental services more. Educated as well as poor people valued forest ecosystem services more. Conservation education
has some influence on people’s environmental awareness. For conservation endeavors to be successful, large-scale transmigration
programs should be avoided and local people must be provided with alternative sustenance opportunities and basic education
in addition to environmental outreach to reduce their reliance on protected forests and to enhance conservation support.
8.
Btihaj Ajana 《Journal of bioethical inquiry》2010,7(2):237-258
In recent years, there has been a growing interest in finding stronger means of securitising identity against the various
risks presented by the mobile globalised world. Biometric technology has featured quite prominently on the policy and security
agenda of many countries. It is being promoted as the solution du jour for protecting and managing the uniqueness of identity
in order to combat identity theft and fraud, crime and terrorism, illegal work and employment, and to efficiently govern various
domains and services including asylum, immigration and social welfare. In this paper, I shall interrogate the ways in which
biometrics is about the uniqueness of identity and what kind of identity biometrics is concerned with. I argue that in posing
such questions at the outset, we can start delimiting the distinctive bioethical stakes of biometrics beyond the all-too-familiar
concerns of privacy, data protection and the like. I take my cue mostly from Cavarero’s Arendt-inspired distinction between the
“what” and the “who” elements of a person, and from Ricoeur’s distinction between the “idem” and “ipse” versions of identity.
By engaging with these philosophical distinctions and concepts, and with particular reference to the example of asylum policy,
I seek to examine and emphasise an important ethical issue pertaining to the practice of biometric identification. This issue
relates mainly to the paradigmatic shift from the biographical story (which for so long has been the means by which an asylum
application is assessed) to bio-digital samples (that are now the basis for managing and controlling the identities of asylum
applicants). The purging of identity from its narrative dimension lies at the core of biometric technology’s overzealous aspiration
to accuracy, precision and objectivity, and raises one of the most pressing bioethical questions vis-à-vis the realm of identification.
9.
Robert I Colautti Sarah A Bailey Colin D. A. van Overdijk Keri Amundsen Hugh J. MacIsaac 《Biological invasions》2006,8(1):45-59
Biological invasions by nonindigenous species (NIS) can have adverse effects on economically important goods and services,
and sometimes result in an ‘invisible tax’ on natural resources (e.g. reduced yield). The combined economic costs of NIS may
be significant, with implications for environmental policy and resource management; yet economic impact assessments are rare
at a national scale. Impacts of nuisance NIS may be direct (e.g. loss of hardwood trees) or indirect (e.g. alteration of ecosystem
services provided by growing hardwoods). Moreover, costs associated with these effects may be accrued to resources and services
with clear ‘market’ values (e.g. crop production) and to those with more ambiguous, ‘non-market’ values (e.g. aesthetic value
of intact forest). We characterised and projected economic costs associated with nuisance NIS in Canada, through a combination
of case-studies and an empirical model derived from 21 identified effects of 16 NIS. Despite a severe dearth of available
data, characterised costs associated with ten NIS in Canadian fisheries, agriculture and forestry totalled $187 million Canadian
(CDN) per year. These costs were dwarfed by the ‘invisible tax’ projected for sixteen nuisance NIS found in Canada, which
was estimated at between $13.3 and $34.5 billion CDN per year. Canada remains highly vulnerable to new nuisance NIS, but available
manpower and financial resources appear insufficient to deal with this problem.
An erratum to this article is available at .
10.
A Distributed Multi-Storage Resource Architecture and I/O Performance Prediction for Scientific Computing (cited 1 time: 0 self-citations, 1 by others)
I/O intensive applications have posed great challenges to computational scientists. A major problem of these applications is that users have to sacrifice performance requirements in order to satisfy storage capacity requirements in a conventional computing environment. Further performance improvement is impeded by the physical nature of these storage media even when state-of-the-art I/O optimizations are employed. In this paper, we present a distributed multi-storage resource architecture, which can satisfy both performance and capacity requirements by employing multiple storage resources. Compared to a traditional single storage resource architecture, our architecture provides a more flexible and reliable computing environment. This architecture can bring new opportunities for high performance computing as well as inherit state-of-the-art I/O optimization approaches that have already been developed. It provides application users with high-performance storage access even when they do not have the availability of a single large local storage archive at their disposal. We also develop an Application Programming Interface (API) that provides transparent management and access to various storage resources in our computing environment. Since I/O usually dominates the performance in I/O intensive applications, we establish an I/O performance prediction mechanism which consists of a performance database and a prediction algorithm to help users better evaluate and schedule their applications. A tool is also developed to help users automatically generate performance data stored in databases. The experiments show that our multi-storage resource architecture is a promising platform for high performance distributed computing.
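A performance database plus prediction algorithm of the kind described can be sketched minimally as a history of observed transfers per storage resource and a bandwidth-based time estimate. The class name, interface, and mean-bandwidth estimator are assumptions for illustration, not the paper's design:

```python
class IOPerfPredictor:
    """Toy performance database: record observed transfers per storage
    resource, then predict the completion time of a new request from the
    mean observed bandwidth of that resource."""

    def __init__(self):
        self.samples = {}  # resource name -> list of (bytes, seconds)

    def record(self, resource, nbytes, seconds):
        self.samples.setdefault(resource, []).append((nbytes, seconds))

    def predict(self, resource, nbytes):
        obs = self.samples.get(resource)
        if not obs:
            return None  # no history for this resource yet
        bandwidth = sum(b / s for b, s in obs) / len(obs)  # mean bytes/second
        return nbytes / bandwidth

p = IOPerfPredictor()
p.record("local_disk", 100_000_000, 2.0)    # 50 MB/s observed
p.record("local_disk", 200_000_000, 4.0)    # 50 MB/s observed
print(p.predict("local_disk", 50_000_000))  # predicts 1.0 second
```

A real predictor would also model per-request startup latency and load, but the shape, a database of past observations feeding a simple estimator, matches the mechanism the abstract describes.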
11.
Silvio Gianinazzi Armelle Gollotte Marie-Noëlle Binet Diederik van Tuinen Dirk Redecker Daniel Wipf 《Mycorrhiza》2010,20(8):519-530
The beneficial effects of arbuscular mycorrhizal (AM) fungi on plant performance and soil health are essential for the sustainable
management of agricultural ecosystems. Nevertheless, since the ‘first green revolution’, less attention has been given to
beneficial soil microorganisms in general and to AM fungi in particular. Human society benefits from a multitude of resources
and processes from natural and managed ecosystems, to which AM make a crucial contribution. These resources and processes,
which are called ecosystem services, include products like food and processes like nutrient transfer. Many people have been
under the illusion that these ecosystem services are free, invulnerable and infinitely available; taken for granted as public
benefits, they lack a formal market and are traditionally absent from society’s balance sheet. In 1997, a team of researchers
from the USA, Argentina and the Netherlands put an average price tag of US $33 trillion a year on these fundamental ecosystem
services. The present review highlights the key role that the AM symbiosis can play as an ecosystem service provider to guarantee
plant productivity and quality in emerging systems of sustainable agriculture. The appropriate management of ecosystem services
rendered by AM will impact on natural resource conservation and utilisation with an obvious net gain for human society.
12.
There are many bioinformatics tools that deal with input/output (I/O) issues by using filing systems from the most common operating systems, such as Linux or MS Windows. However, as data volumes increase, there is a need for more efficient disk access, ad hoc memory management and specific page-replacement policies. We propose a device driver that can be used by multiple applications. It keeps the application code unchanged, providing a non-intrusive and flexible strategy for I/O calls that may be adopted in a straightforward manner. With our approach, database developers can define their own I/O management strategies. We used our device driver to manage Basic Local Alignment Search Tool (BLAST) I/O calls. Based on preliminary experimental results with National Center for Biotechnology Information (NCBI) BLAST, this approach can provide database management systems-like data management features, which may be used for BLAST and many other computational biology applications.
13.
J. A. Buso L. S. Boiteux G. C. C. Tai S. J. Peloquin 《TAG. Theoretical and applied genetics. Theoretische und angewandte Genetik》2000,101(1-2):139-145
Diploid potato clones with 2n-pollen formation by first-division restitution without crossing-over (FDR-NCO) are ideal testers
to estimate the breeding value of elite 4x cultivars by virtue of transmitting their genotypes practically intact to their progenies. This characteristic facilitates
genetic analysis, since meiotic recombination would take place only in the 4x parent and not in the diploid parent. We evaluated (under short-day conditions) families from complete factorial crosses
between four 4x cultivars and five 2x(FDR-NCO) clones. Families were compared with two standard 4x cultivars (’Bintje’ and ’Delta’) for total tuber yield (TTY), commercial yield (CY), haulm maturity (HM), plant vigor (PV),
plant-top uniformity (PU), eye depth (ED), number of tubers per hill (NTH), and the CY/TTY index (CTI). For TTY, the contrasts
family group (310 g/hill) vs ’Delta’ (430 g/hill) and the family group vs ’Bintje’ (210 g/hill) were significant. Only 25%
of the families were different from ’Delta’ and 20% of them outyielded ’Bintje’. For CY, differences were observed between
families (240 g/hill) vs ’Delta’ (340 g/hill) and families vs ’Bintje’ (150 g/hill). The two best families had 53% CY over
’Bintje’. Surprisingly, only one family had a higher NTH than ’Bintje’. No differences were observed for HM. Seventy-five percent and 30% of the families had an ED similar to ’Delta’ (ED = 2) and ’Bintje’ (ED = 1), respectively. A multivariate analysis
indicated that 63% of the data variability could be explained by two factors. TTY, CY, and PV had high loading on the first
factor, whereas ED, PU and HM had high loading on the second factor; CTI and NTH had equal sizes on both factors. High TTY
and PV were associated with high NTH and CTI. Deep eye, PU, and late maturity were associated with high NTH and reduced CTI.
The distributions of factor scores of the entries indicated that some 2x parents had strong influences (irrespective of the direction of their effects) on the crosses. Six crosses due to two 2x males were in the ’Bintje’ quarter with negative scores for both factors (implying low TTY, poor vigor, and low NTH). Also
three crosses due to another 2x clone were distributed in the quarter of positive factor 1 and negative factor 2. These crosses plus another one were in
the same quarter of ’Delta’ (implying high yields, low ED, low PU, and early maturity). The FDR-NCO clones provide a homogeneous
sample of heterozygous 2n-gametes allowing the unique opportunity to estimate the relative contribution of the random meiotic
products (from the 4x parents) and the ’somatic’ 2x genome for the phenotypic expression of quantitative traits. The interesting result was that measurable effects (favorable
or not) on the data variability were mainly determined by the genomic contribution of the haploid-species hybrids. Three out
of five 2x-male parents showed rather strong effects on progenies. No such effects were observed on the four 4x-female parents.
Received: 3 September 1999 / Accepted: 24 November 1999
14.
While aggregating the throughput of existing disks on cluster nodes is a cost-effective approach to alleviate the I/O bottleneck
in cluster computing, this approach suffers from potential performance degradations due to contentions for shared resources
on the same node between storage data processing and user task computation. This paper proposes to judiciously utilize the storage redundancy, in the form of the mirroring inherent in a RAID-10-style file system, to alleviate this performance degradation.
More specifically, a heuristic scheduling algorithm is developed, motivated from the observations of a simple cluster configuration,
to spatially schedule write operations on the nodes with less load among each mirroring pair. The duplication of modified
data to the mirroring nodes is performed asynchronously in the background. The read performance is improved by two techniques:
doubling the degree of parallelism and hot-spot skipping. A synthetic benchmark is used to evaluate these algorithms in a
real cluster environment and the proposed algorithms are shown to be very effective in performance enhancement.
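The write-scheduling heuristic described above, sending each synchronous write to the less-loaded node of a mirroring pair and duplicating to its mirror asynchronously in the background, can be sketched as follows; the data layout and names are illustrative, not the paper's algorithm:

```python
def schedule_writes(pairs, load):
    """For each (primary, mirror) pair, direct the immediate write to the
    node with less load; the other node receives the copy asynchronously.
    `load` maps node name -> current load metric (lower is better)."""
    plan = []
    for primary, mirror in pairs:
        first = primary if load[primary] <= load[mirror] else mirror
        second = mirror if first == primary else primary
        plan.append({"write_now": first, "mirror_async": second})
    return plan

pairs = [("a0", "a1"), ("b0", "b1")]
load = {"a0": 7, "a1": 2, "b0": 1, "b1": 9}
print(schedule_writes(pairs, load))
```

Reads get the complementary benefit mentioned in the abstract: with both copies available, the degree of read parallelism doubles and hot-spot nodes can simply be skipped in favor of their mirrors.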
Yifeng Zhu received his B.Sc. degree in Electrical Engineering in 1998 from Huazhong University of Science and Technology, Wuhan, China;
the M.S. and Ph.D. degree in Computer Science from University of Nebraska – Lincoln in 2002 and 2005 respectively. He is an
assistant professor in the Electrical and Computer Engineering department at University of Maine. His main research interests
are cluster computing, grid computing, computer architecture and systems, and parallel I/O storage systems. Dr. Zhu is a Member
of ACM, IEEE, the IEEE Computer Society, and the Francis Crowe Society.
Hong Jiang received the B.Sc. degree in Computer Engineering in 1982 from Huazhong University of Science and Technology, Wuhan, China;
the M.A.Sc. degree in Computer Engineering in 1987 from the University of Toronto, Toronto, Canada; and the PhD degree in
Computer Science in 1991 from the Texas A&M University, College Station, Texas, USA. Since August 1991 he has been at the
University of Nebraska-Lincoln, Lincoln, Nebraska, USA, where he is Professor and Vice Chair in the Department of Computer
Science and Engineering. His present research interests are computer architecture, parallel/distributed computing, cluster
and Grid computing, computer storage systems and parallel I/O, performance evaluation, real-time systems, middleware, and
distributed systems for distance education. He has over 100 publications in major journals and international Conferences in
these areas and his research has been supported by NSF, DOD and the State of Nebraska. Dr. Jiang is a Member of ACM, the IEEE
Computer Society, and the ACM SIGARCH.
Xiao Qin received the BS and MS degrees in computer science from Huazhong University of Science and Technology in 1992 and 1999, respectively.
He received the PhD degree in computer science from the University of Nebraska-Lincoln in 2004. Currently, he is an assistant
professor in the department of computer science at the New Mexico Institute of Mining and Technology. He had served as a subject
area editor of IEEE Distributed System Online (2000–2001). His research interests are in parallel and distributed systems, storage systems, real-time computing, performance
evaluation, and fault-tolerance. He is a member of the IEEE.
Dan Feng received the Ph.D degree from Huazhong University of Science and Technology, Wuhan, China, in 1997. She is currently a professor
of School of Computer, Huazhong University of Science and Technology, Wuhan, China. She is the principal scientist of the
the National Grand Fundamental Research 973 Program of China “Research on the organization and key technologies of the Storage
System on the next generation Internet.” Her research interests include computer architecture, storage system, parallel I/O,
massive storage and performance evaluation.
David Swanson received a Ph.D. in physical (computational) chemistry at the University of Nebraska-Lincoln (UNL) in 1995, after which he
worked as an NSF-NATO postdoctoral fellow at the Technical University of Wroclaw, Poland, in 1996, and subsequently as a National
Research Council Research Associate at the Naval Research Laboratory in Washington, DC, from 1997–1998. In 1999 he returned
to UNL where he directs the Research Computing Facility and currently serves as an Assistant Research Professor in the Department
of Computer Science and Engineering. The Office of Naval Research, the National Science Foundation, and the State of Nebraska
have supported his research in areas such as large-scale scientific simulation and distributed systems.
15.
Protein–protein interactions (PPIs) are of fundamental importance in biology and biomedicine. Identifying and characterizing
protein interactions based on various genomic and proteomic data has become a canonical problem in computational biology.
Approaching this task as a binary classification problem, we propose a hierarchical Bayesian probit-based framework, incorporating
multiple sources of relational protein data as covariates, for modeling binary network topology. More importantly, this model
has two distinctive features: (1) capturing the latent characteristics of nodes in the network by an eigenmodel, and (2) accounting
for and correcting the link uncertainty in the training data, a well-known critical issue with protein interactions generated
by high-throughput technology. We evaluate and compare the predictive performance of the proposed model with three submodels
without one or both of these features. Results from two yeast functional subnetworks have demonstrated that both the latent
eigenmodel and accounting for link uncertainty are important for better predictions, and the latter can yield substantial
improvement in predictive precision.
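The probit-with-eigenmodel idea can be written down compactly: the probability of an edge between proteins i and j is the standard normal CDF of a covariate term plus a latent bilinear term. The sketch below shows only this link function under illustrative names; the paper's hierarchical Bayesian inference and link-uncertainty correction are not reproduced here:

```python
import math

def probit_edge_prob(x_ij, beta, u_i, u_j, lam):
    """P(edge i~j) = Phi(beta . x_ij + sum_k lam[k] * u_i[k] * u_j[k]),
    where x_ij are relational covariates and u_i, u_j are latent node
    factors from the eigenmodel. Phi is the standard normal CDF."""
    eta = sum(b * x for b, x in zip(beta, x_ij))             # covariate effect
    eta += sum(l * a * b for l, a, b in zip(lam, u_i, u_j))  # eigenmodel term
    return 0.5 * (1.0 + math.erf(eta / math.sqrt(2.0)))      # Phi(eta)

# With eta = 0 the edge probability is exactly 0.5; a positive latent
# affinity between similar nodes pushes it above 0.5.
print(probit_edge_prob([0.0], [1.0], [1.0], [1.0], [0.0]))
print(probit_edge_prob([1.0], [1.0], [1.0], [1.0], [1.0]))
```

The eigenmodel term is what lets two nodes with similar latent factors attract (or repel) each other beyond what the observed covariates explain.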
16.
Parallel file systems have been developed in recent years to ease the I/O bottleneck of high-end computing systems. These advanced file systems offer several data layout strategies in order to meet the performance goals of specific I/O workloads. However, while a layout policy may perform well on one I/O workload, it may not perform as well for another. Peak I/O performance is rarely achieved due to the complex data access patterns, and data access is application dependent. In this study, a cost-intelligent data access strategy based on the application-specific optimization principle is proposed. This strategy improves the I/O performance of parallel file systems. We first present examples to illustrate the performance differences under different data layouts. We then develop a cost model that estimates the completion time of data accesses under various data layouts, so that the layout can be matched to the application. Static layout optimization can be used for applications with dominant data access patterns, and dynamic layout selection with hybrid replications can be used for applications with complex I/O patterns. Theoretical analysis and experimental testing have been conducted to verify the proposed cost-intelligent layout approach. Analytical and experimental results show that the proposed cost model is effective and the application-specific data layout approach can provide up to a 74% performance improvement for data-intensive applications.
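A toy version of such a layout cost model weighs per-server startup cost against parallel transfer time; the cheaper layout wins for a given access pattern. All constants, layout names, and the two-layout menu below are illustrative assumptions, not the paper's model:

```python
def layout_cost(layout, request_bytes, n_servers):
    """Estimated completion time of one access under a given data layout:
    a fixed startup cost per server contacted, plus transfer time split
    across the contacted servers. Constants are illustrative."""
    STARTUP = 0.001    # seconds per server contacted
    BANDWIDTH = 100e6  # bytes/second per server
    if layout == "one_server":
        servers = 1
    elif layout == "striped":
        servers = n_servers
    else:
        raise ValueError(layout)
    return servers * STARTUP + request_bytes / (servers * BANDWIDTH)

def pick_layout(request_bytes, n_servers=8):
    """Choose the cheaper layout for this request size."""
    return min(("one_server", "striped"),
               key=lambda l: layout_cost(l, request_bytes, n_servers))

print(pick_layout(4_096))        # small request: per-server startup dominates
print(pick_layout(512_000_000))  # large request: parallel transfer wins
```

This captures the abstract's point that no single layout dominates: small random accesses favor a single server, large sequential accesses favor striping, and a cost model can pick per application.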
17.
To be an effective platform for high-performance distributed applications, off-the-shelf Object Request Broker (ORB) middleware, such as CORBA, must preserve communication-layer quality-of-service (QoS) properties both vertically (i.e., network interface ↔ application layer) and horizontally (i.e., end-to-end). However, conventional network interfaces, I/O subsystems, and middleware interoperability protocols are not well suited for applications with stringent throughput, latency, and jitter requirements. It is essential, therefore, to develop vertically and horizontally integrated ORB endsystems that can be (1) configured flexibly to support high-performance network interfaces and I/O subsystems and (2) used transparently by performance-sensitive applications. This paper provides three contributions to research on high-performance I/O support for QoS-enabled ORB middleware. First, we outline the key research challenges faced by high-performance ORB endsystem developers. Second, we describe how our real-time I/O (RIO) subsystem and pluggable protocol framework enable ORB endsystems to preserve high-performance network interface QoS up to applications running on off-the-shelf hardware and software. Third, we illustrate empirically how highly optimized ORB middleware can be integrated with a real-time I/O subsystem to reduce latency bounds on communication between high-priority clients without unduly penalizing low-priority and best-effort clients. Our results demonstrate that it is possible to develop ORB endsystems that are both highly flexible and highly efficient.
This revised version was published online in July 2006 with corrections to the Cover Date.
18.
Smooth and coordinated motion requires precisely timed muscle activation patterns, which, due to biophysical limitations, must be predictive and executed in a feed-forward manner. In a previous study, we tested Kawato’s original proposition that the cerebellum implements an inverse controller by mapping a multizonal microcomplex’s (MZMC) biophysics to a joint’s inverse transfer function and showing that inferior olivary neurons may use their intrinsic oscillations to mirror a joint’s oscillatory dynamics. Here, to continue validating our mapping, we propose that climbing fiber input into the deep cerebellar nucleus (DCN) triggers rebounds, primed by Purkinje cell inhibition, implementing gain on the inferior olive’s signal to mirror the spinal cord reflex’s gain and thereby achieve inverse control. We used biophysical modeling to show that Purkinje cell inhibition and climbing fiber excitation interact multiplicatively to set the DCN’s rebound strength: the former primes the cell for rebound by deinactivating its T-type Ca2+ channels, and the latter triggers the channels by rapidly depolarizing the cell. We combined this result with our control theory mapping to predict how experimentally injecting current into the DCN will affect overall motor output performance, and found that injecting current will proportionally scale the output and unmask the joint’s natural response, observed as motor output ringing at the joint’s natural frequency. Experimental verification of this prediction will lend support to the MZMC as a joint’s inverse controller and to the role we assigned to the underlying biophysical principles that enable it.
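The multiplicative interaction described above can be illustrated with a toy model; the function names, the saturating form, and all quantities here are assumptions for illustration, not the study's fitted biophysics:

```python
import math

# Toy sketch of the multiplicative rebound mechanism: Purkinje cell inhibition
# deinactivates T-type Ca2+ channels (a priming factor that saturates with
# inhibition strength), and climbing-fiber excitation triggers them; rebound
# strength is modeled as the product of the two factors.

def t_type_availability(inhibition):
    """Fraction of T-type channels deinactivated by prior hyperpolarization."""
    return 1.0 - math.exp(-inhibition)

def rebound_strength(inhibition, cf_excitation):
    """Multiplicative gain: priming factor times triggering depolarization."""
    return t_type_availability(inhibition) * cf_excitation
```

In this sketch either factor alone produces no rebound, and for a fixed level of priming the rebound scales linearly with the climbing-fiber trigger, which is the gain-setting behavior the abstract describes.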
19.
Thomas E. Graedel Elizabeth Saxton 《The International Journal of Life Cycle Assessment》2002,7(4):219-224
Superior environmental performance has not traditionally been a goal of managers of telecommunications facilities, but there is now considerable pressure to go ‘beyond compliance,’ even for facilities that have been in existence for many years. In this regard, significant environmental improvement can be achieved by taking a life-cycle perspective. We have conducted a streamlined life-cycle assessment of two existing telecommunications facilities, one providing installation and maintenance services and the other network management services. With the results of this assessment as a basis, we propose a number of generic steps that can be taken to improve the environmental performance of most existing telecommunications facilities.
20.
As the speed of mass spectrometers, the sophistication of sample fractionation, and the complexity of experimental designs increase, the volume of tandem mass spectra requiring reliable automated analysis continues to grow. Software tools that quickly, effectively, and robustly determine the peptide associated with each spectrum with high confidence are sorely needed. Currently available tools that postprocess the output of sequence-database search engines use three techniques to distinguish correct peptide identifications from incorrect ones: statistical significance re-estimation, supervised machine-learning scoring and prediction, and combining or merging of search engine results. We present a unifying framework that encompasses each of these techniques in a single model-free machine-learning framework that can be trained in an unsupervised manner. The predictor is trained on the fly for each new set of search results without user intervention, making it robust across different instruments, search engines, and search engine parameters. We demonstrate the performance of the technique using mixtures of known proteins, and by using shuffled databases to estimate false discovery rates, on data acquired on three different instruments with two different ionization technologies. We show that this approach outperforms machine-learning techniques applied to a single search engine’s output, and demonstrate that combining search engine results provides additional benefit. We show that the performance of the commercial Mascot tool can be bested by the machine-learning combination of the two open-source tools X!Tandem and OMSSA, and that using all three search engines boosts performance further still. The Peptide identification Arbiter by Machine Learning (PepArML) unsupervised, model-free combining framework can be easily extended to support an arbitrary number of additional searches, search engines, or specialized peptide–spectrum match metrics for each spectrum data set. PepArML is open-source and is available from .
Electronic supplementary material The online version of this article (doi: ) contains supplementary material, which is available to authorized users.
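False-discovery-rate estimation with shuffled (decoy) databases, as mentioned above, is commonly computed along these lines; this is a generic target-decoy sketch, not PepArML's exact procedure:

```python
# Generic target-decoy FDR estimation: among identifications scoring at or
# above a threshold, the FDR is estimated as the ratio of decoy hits
# (matches to the shuffled database) to target hits.

def fdr_at_threshold(scores, threshold):
    """scores: list of (score, is_decoy) pairs for peptide-spectrum matches."""
    targets = sum(1 for s, is_decoy in scores if s >= threshold and not is_decoy)
    decoys = sum(1 for s, is_decoy in scores if s >= threshold and is_decoy)
    return decoys / targets if targets else 0.0

def threshold_for_fdr(scores, max_fdr):
    """Lowest score cutoff whose estimated FDR stays within max_fdr."""
    for t in sorted({s for s, _ in scores}):
        if fdr_at_threshold(scores, t) <= max_fdr:
            return t
    return None
```

Raising the score threshold excludes decoy matches faster than target matches, so the estimated FDR falls; a combining framework can apply the same estimate to its merged, rescored identifications.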