Similar Documents (20 results)
1.
Rapid prototyping of distributed systems can be achieved by integrating commercial off-the-shelf (COTS) components. With components as the building blocks, it is important to predict the performance of the system based on the performance of the individual components. In this paper, performance prediction of a system consisting of a small number of components is investigated under different inter-component communication patterns and different numbers of threads provided by the components. Based on the experimental results, it can be inferred that the proposed composition rules provide a reasonably accurate prediction of the performance of a system composed of these components.
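As a hedged illustration of the kind of composition rule described above (the paper's actual rules are not reproduced here), the following Python sketch predicts the throughput and response time of components invoked synchronously in series; the service times, thread counts, and arrival rate are hypothetical values.

```python
# Illustrative sketch only: generic composition rules for a two-component,
# synchronous call chain. Not the paper's published rules; all parameters
# (service_time, threads, arrival_rate) are hypothetical.

def component_capacity(service_time_s: float, threads: int) -> float:
    """Maximum sustainable request rate (req/s) of one component."""
    return threads / service_time_s

def predict_pipeline(components, arrival_rate: float):
    """Predict throughput and response time when components are called in series."""
    # Throughput is capped by the slowest (least-capacity) component.
    capacity = min(component_capacity(s, t) for s, t in components)
    throughput = min(arrival_rate, capacity)
    # Ignoring queueing, end-to-end response time is the sum of service times.
    response_time = sum(s for s, _ in components)
    return throughput, response_time

# Example: client -> A (20 ms, 4 threads) -> B (50 ms, 2 threads)
print(predict_pipeline([(0.020, 4), (0.050, 2)], arrival_rate=100.0))
```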

2.
MOTIVATION: The human genome project and the development of new high-throughput technologies have created unparalleled opportunities to study the mechanisms of disease, monitor disease progression and evaluate effective therapies. Gene expression profiling is a critical tool to accomplish these goals. The use of nucleic acid microarrays to assess the gene expression of thousands of genes simultaneously has seen phenomenal growth over the past five years. Although commercial sources of microarrays exist, investigators wanting more flexibility in the genes represented on the array will turn to in-house production. The creation and use of cDNA microarrays is a complicated process that generates an enormous amount of information. Effective data management of this information is essential to efficiently access, analyze, troubleshoot and evaluate the microarray experiments. RESULTS: We have developed a distributable software package designed to track and store the various pieces of data generated by a cDNA microarray facility. This includes the clone collection storage data, annotation data, workflow queues, microarray data, data repositories, sample submission information, and project/investigator information. This application was designed using a 3-tier client-server model. The data access layer (1st tier) contains the relational database system tuned to support a large number of transactions. The data services layer (2nd tier) is a distributed COM server with full database transaction support. The application layer (3rd tier) is an Internet-based user interface that contains both client- and server-side code for dynamic interactions with the user. AVAILABILITY: This software is freely available to academic institutions and non-profit organizations at http://www.genomics.mcg.edu/niddkbtc.
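The 3-tier separation described above can be sketched roughly as follows. This is a hypothetical Python illustration of the layering only (the actual package uses a relational database, a distributed COM server, and a web front end); the table, class, and function names are invented.

```python
import sqlite3

# Tier 1: data access layer -- owns the relational store (hypothetical schema).
class CloneStore:
    def __init__(self, path="microarray.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS clones (clone_id TEXT PRIMARY KEY, plate TEXT, annotation TEXT)"
        )

    def insert_clone(self, clone_id, plate, annotation):
        self.conn.execute("INSERT INTO clones VALUES (?, ?, ?)", (clone_id, plate, annotation))

# Tier 2: data services layer -- wraps business operations in transactions.
class CloneService:
    def __init__(self, store: CloneStore):
        self.store = store

    def register_clones(self, records):
        try:
            for rec in records:
                self.store.insert_clone(*rec)
            self.store.conn.commit()      # all-or-nothing transaction semantics
        except Exception:
            self.store.conn.rollback()
            raise

# Tier 3: application layer -- a thin handler that only formats requests and responses.
def handle_submission(service: CloneService, form_rows):
    service.register_clones(form_rows)
    return {"status": "ok", "inserted": len(form_rows)}

print(handle_submission(CloneService(CloneStore(":memory:")), [("IMAGE:12345", "P1", "EST")]))
```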

3.
When users’ tasks in a distributed heterogeneous computing environment (e.g., a cluster of heterogeneous computers) are allocated resources, the total demand placed on some system resources by the tasks, for a given interval of time, may exceed the availability of those resources. In such a case, some tasks may receive degraded service or be dropped from the system. One part of a measure to quantify the success of a resource management system (RMS) in such a distributed environment is the collective value of the tasks completed during an interval of time, as perceived by the user, application, or policy maker. The Flexible Integrated System Capability (FISC) measure presented here is a measure for quantifying this collective value. The FISC measure is a flexible multi-dimensional measure such that any task attribute can be inserted; it may include priorities, versions of a task or data, deadlines, situational mode, security, application- and domain-specific QoS, and task dependencies. For an environment where it is important to investigate how well data communication requests are satisfied, the data communication requests satisfied can be the basis of the FISC measure instead of tasks completed. The motivation behind the FISC measure is to determine the performance of resource management schemes when tasks have multiple attributes that need to be satisfied. The goal of this measure is to compare the results of different resource management heuristics that are trying to achieve the same performance objective but with different approaches. (An illustrative sketch of such a multi-attribute value measure follows this entry.)

This research was supported by the DARPA/ITO Quorum Program, by the DARPA/ISO BADD Program and the Office of Naval Research under ONR grant number N00014-97-1-0804, by the DARPA/ITO AICE program under contract numbers DABT63-99-C-0010 and DABT63-99-C-0012, and by the Colorado State University George T. Abell Endowment. Intel and Microsoft donated some of the equipment used in this research.

Jong-Kook Kim is pursuing a Ph.D. degree from the School of Electrical and Computer Engineering at Purdue University (expected in August 2004). Jong-Kook received his M.S. degree in electrical and computer engineering from Purdue University in May 2000. He received his B.S. degree in electronic engineering from Korea University, Seoul, Korea in 1998. He has presented his work at several international conferences and has been a reviewer for numerous conferences and journals. His research interests include heterogeneous distributed computing, computer architecture, performance measures, resource management, evolutionary heuristics, and power-aware computing. He is a student member of the IEEE, IEEE Computer Society, and ACM.

Debra Hensgen is a member of the Research and Evaluation Team at OpenTV in Mountain View, California. OpenTV produces middleware for set-top boxes in support of interactive television. She received her Ph.D. in the area of Distributed Operating Systems from the University of Kentucky. Prior to moving to private industry, she worked, as an Associate Professor in the systems area, with students and colleagues to design and develop tools and systems for resource management, network re-routing algorithms and systems that preserve quality of service guarantees, and visualization tools for performance debugging of parallel and distributed systems. She has published numerous papers concerning her contributions to the Concurra toolkit for automatically generating safe, efficient concurrent code, the Graze parallel processing performance debugger, the SAAM path information base, and the SmartNet and MSHN Resource Management Systems.

Taylor Kidd is currently a Software Architect for Vidiom Systems in Portland, Oregon. His current work involves the writing of multi-company industrial specifications and the architecting of software systems for the digital cable television industry. He has been involved in the establishment of international specifications for digital interactive television in both Europe and the US. Prior to his current position, Dr. Kidd was a researcher for the US Navy as well as an Associate Professor at the Naval Postgraduate School. Dr. Kidd received his Ph.D. in Electrical Engineering in 1991 from the University of California, San Diego.

H. J. Siegel was appointed the George T. Abell Endowed Chair Distinguished Professor of Electrical and Computer Engineering at Colorado State University (CSU) in August 2001, where he is also a Professor of Computer Science. In December 2002, he became the first Director of the CSU Information Science and Technology Center (ISTeC). ISTeC is a university-wide organization for promoting, facilitating, and enhancing CSU’s research, education, and outreach activities pertaining to the design and innovative application of computer, communication, and information systems. From 1976 to 2001, he was a professor at Purdue University. He received two BS degrees from MIT, and the MA, MSE, and PhD degrees from Princeton University. His research interests include parallel and distributed computing, heterogeneous computing, robust computing systems, parallel algorithms, parallel machine interconnection networks, and reconfigurable parallel computer systems. He has co-authored over 300 published papers on parallel and distributed computing and communication, is an IEEE Fellow, is an ACM Fellow, was a Coeditor-in-Chief of the Journal of Parallel and Distributed Computing, and was on the Editorial Boards of both the IEEE Transactions on Parallel and Distributed Systems and the IEEE Transactions on Computers. He was Program Chair/Co-Chair of three major international conferences, General Chair/Co-Chair of four international conferences, and Chair/Co-Chair of five workshops. He has been an international keynote speaker and tutorial lecturer, and has consulted for industry and government.

David St. John is Chief Information Officer for WeatherFlow, Inc., a weather services company specializing in coastal weather observations and forecasts. He received a master’s degree in Engineering from the University of California, Irvine. He spent several years as the head of staff on the Management System for Heterogeneous Networks project in the Computer Science Department of the Naval Postgraduate School. His current relationship with cluster computing is as a user of the Regional Atmospheric Modeling System (RAMS), a numerical weather model developed at Colorado State University. WeatherFlow runs RAMS operationally on a Linux-based cluster.

Cynthia Irvine is a Professor of Computer Science at the Naval Postgraduate School in Monterey, California. She received her Ph.D. from Case Western Reserve University and her B.A. in Physics from Rice University. She joined the faculty of the Naval Postgraduate School in 1994. Previously she worked in industry on the development of high assurance secure systems. In 2001, Dr. Irvine received the Naval Information Assurance Award. Dr. Irvine is the Director of the Center for Information Systems Security Studies and Research at the Naval Postgraduate School. She has served on special panels for NSF, DARPA, and OSD. In the area of computer security education, Dr. Irvine has most recently served as the general chair of the Third World Conference on Information Security Education and the Fifth Workshop on Education in Computer Security. She co-chaired the NSF workshop on Cyber-security Workforce Needs Assessment and Educational Innovation and was a participant in the Computing Research Association/NSF sponsored Grand Challenges in Information Assurance meeting. She is a member of the editorial board of the Journal of Information Warfare and has served as a reviewer and/or program committee member of a variety of security-related conferences. She has written over 100 papers and articles and has supervised the work of over 80 students. Professor Irvine is a member of the ACM, the AAS, a life member of the ASP, and a Senior Member of the IEEE.

Timothy E. Levin is a Research Associate Professor at the Naval Postgraduate School. He has spent over 18 years working in the design, development, evaluation, and verification of secure computer systems, including operating systems, databases and networks. His current research interests include high assurance system design and analysis, development of models and methods for the dynamic selection of QoS security attributes, and the application of formal methods to the development of secure computer systems.

Viktor K. Prasanna received his BS in Electronics Engineering from Bangalore University and his MS from the School of Automation, Indian Institute of Science. He obtained his Ph.D. in Computer Science from the Pennsylvania State University in 1983. Currently, he is a Professor in the Department of Electrical Engineering as well as in the Department of Computer Science at the University of Southern California, Los Angeles. He is also an associate member of the Center for Applied Mathematical Sciences (CAMS) at USC. He served as the Division Director for the Computer Engineering Division during 1994–98. His research interests include parallel and distributed systems, embedded systems, configurable architectures and high performance computing. Dr. Prasanna has published extensively and consulted for industries in the above areas. He has served on the organizing committees of several international meetings in VLSI computations, parallel computation, and high performance computing. He is the Steering Co-chair of the International Parallel and Distributed Processing Symposium [merged from the IEEE International Parallel Processing Symposium (IPPS) and the Symposium on Parallel and Distributed Processing (SPDP)] and is the Steering Chair of the International Conference on High Performance Computing (HiPC). He serves on the editorial boards of the Journal of Parallel and Distributed Computing and the Proceedings of the IEEE. He is the Editor-in-Chief of the IEEE Transactions on Computers. He was the founding Chair of the IEEE Computer Society Technical Committee on Parallel Processing. He is a Fellow of the IEEE.

Richard F. Freund is the originator of GridIQ’s network scheduling concepts, which arose from mathematical and computing approaches he developed for the Department of Defense in the early 1980s. Dr. Freund has over twenty-five years of experience in computational mathematics, algorithm design, high performance computing, distributed computing, network planning, and heterogeneous scheduling. Since 1989, Dr. Freund has published over 45 journal articles in these fields. He has also been an editor of special editions of IEEE Computer and the Journal of Parallel and Distributed Computing. In addition, he is a founder of the Heterogeneous Computing Workshop, held annually in conjunction with the International Parallel Processing Symposium. Dr. Freund is the recipient of many awards, which include the prestigious Department of Defense Meritorious Civilian Service Award in 1984 and the Lauritsen-Bennet Award from the Space and Naval Warfare Systems Command in San Diego, California.
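The sketch promised above: a minimal, hypothetical multi-attribute value measure in the spirit of FISC, where each completed task contributes a value scaled by priority, deadline satisfaction, and the data version used. The attributes and weights are illustrative assumptions, not the published formulation.

```python
# Hypothetical multi-attribute value measure in the spirit of FISC; the
# attributes, weights, and scaling below are illustrative assumptions only.
def task_value(priority, met_deadline, version_quality, security_ok=True):
    value = priority                         # base worth as perceived by the policy maker
    value *= 1.0 if met_deadline else 0.25   # degraded credit for late completion
    value *= version_quality                 # 1.0 = full-precision data, <1.0 = reduced version
    return value if security_ok else 0.0     # tasks violating security constraints earn nothing

def collective_value(completed_tasks):
    """Sum of per-task values over an evaluation interval."""
    return sum(task_value(**t) for t in completed_tasks)

interval = [
    {"priority": 8, "met_deadline": True,  "version_quality": 1.0},
    {"priority": 3, "met_deadline": False, "version_quality": 0.5},
]
print(collective_value(interval))   # lets RMS heuristics be compared on the same workload
```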

4.
Iterative applications are known to run as slowly as their slowest computational component. This paper introduces malleability, a new dynamic reconfiguration strategy to overcome this limitation. Malleability is the ability to dynamically change the data size and number of computational entities in an application. Malleability can be used by middleware to autonomously reconfigure an application in response to dynamic changes in resource availability in an architecture-aware manner, allowing applications to optimize the use of multiple processors and diverse memory hierarchies in heterogeneous environments. The modular Internet Operating System (IOS) was extended to reconfigure applications autonomously using malleability. Two different iterative applications were made malleable. The first, used in astronomical modeling and representative of maximum-likelihood applications, was made malleable in the SALSA programming language. The second models the diffusion of heat over a two-dimensional object and is representative of applications such as partial differential equations and some types of distributed simulations. Versions of the heat application were made malleable both in SALSA and MPI. Algorithms for concurrent data redistribution are given for each type of application. Results show that using malleability for reconfiguration is 10 to 100 times faster on the tested environments. The algorithms are also shown to be highly scalable with respect to the quantity of data involved. While previous work has shown the utility of dynamically reconfigurable applications using only computational component migration, malleability is shown to provide up to a 15% speedup over component migration alone on a dynamic cluster environment. This work is part of an ongoing research effort to enable applications to be highly reconfigurable and autonomously modifiable by middleware in order to efficiently utilize distributed environments. Grid computing environments are becoming increasingly heterogeneous and dynamic, placing new demands on applications’ adaptive behavior. This work shows that malleability is a key aspect in enabling effective dynamic reconfiguration of iterative applications in these environments.
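As a minimal sketch of the data redistribution that malleability implies for the heat-diffusion example (not IOS or SALSA code), the following re-splits the rows of a 2-D grid when the number of workers changes; the row-wise granularity and function names are assumptions made for illustration.

```python
import numpy as np

def split_rows(grid, n_workers):
    """Partition a 2-D grid row-wise into n_workers roughly equal blocks."""
    return np.array_split(grid, n_workers, axis=0)

def reconfigure(blocks, new_n_workers):
    """Malleable reconfiguration: gather current blocks, re-split for the new worker count."""
    whole = np.vstack(blocks)                 # collect the distributed data
    return split_rows(whole, new_n_workers)   # redistribute at the new granularity

grid = np.zeros((1000, 1000))
blocks = split_rows(grid, 4)                  # initially 4 workers
blocks = reconfigure(blocks, 6)               # two nodes become available -> 6 workers
print([b.shape for b in blocks])
```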

5.
We present a decentralized algorithm for online clustering analysis used for anomaly detection in self-monitoring distributed systems. In particular, we demonstrate the monitoring of a network of printing devices that can perform the analysis without the use of external computing resources (i.e. in-network analysis). We also show how to ensure the robustness of the algorithm, in terms of anomaly detection accuracy, in the face of failures of the network infrastructure on which the algorithm runs. Further, we evaluate the tradeoff in terms of overhead necessary for ensuring this robustness and present a method to reduce this overhead while maintaining the detection accuracy of the algorithm.
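A hedged illustration of online clustering for anomaly detection, not the paper's exact algorithm: each device maintains cluster centroids incrementally and flags an observation that is far from every centroid. In the decentralized setting described above, devices would additionally exchange and merge centroid summaries with their neighbors.

```python
import numpy as np

class OnlineClusterer:
    """Incremental (streaming) clustering; a point far from every centroid is anomalous."""
    def __init__(self, radius=2.0):
        self.radius = radius                  # assumed distance threshold
        self.centroids, self.counts = [], []

    def observe(self, x):
        x = np.asarray(x, dtype=float)
        if self.centroids:
            d = [np.linalg.norm(x - c) for c in self.centroids]
            i = int(np.argmin(d))
            if d[i] <= self.radius:                          # absorb into nearest cluster
                self.counts[i] += 1
                self.centroids[i] += (x - self.centroids[i]) / self.counts[i]
                return False                                 # normal observation
        self.centroids.append(x.copy()); self.counts.append(1)
        return True                                          # new cluster -> potential anomaly

node = OnlineClusterer(radius=2.0)
for reading in [[1, 1], [1.2, 0.9], [0.8, 1.1], [9, 9]]:
    print(reading, "anomaly" if node.observe(reading) else "normal")
```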

6.
The use of human pluripotent stem cells, including embryonic and induced pluripotent stem cells, in therapeutic applications will require the development of robust, scalable culture technologies for undifferentiated cells. Advances made in large-scale cultures of other mammalian cells will facilitate expansion of undifferentiated human embryonic stem cells (hESCs), but challenges specific to hESCs will also have to be addressed, including development of defined, humanized culture media and substrates, monitoring spontaneous differentiation and heterogeneity in the cultures, and maintaining karyotypic integrity in the cells. This review will describe our current understanding of environmental factors that regulate hESC self-renewal and efforts to provide these cues in various scalable bioreactor culture systems.

7.
Frequent itemset mining is widely used as a fundamental data mining technique. Recently, a number of MapReduce-based frequent itemset mining methods have been proposed to overcome the limits on data size and mining speed of sequential mining methods. However, the existing MapReduce-based methods still do not scale well due to high workload skewness, large intermediate data, and large network communication overhead. In this paper, we propose BIGMiner, a fast and scalable MapReduce-based frequent itemset mining method. BIGMiner generates equal-sized sub-databases called transaction chunks and performs support counting only based on transaction chunks and bitwise operations without generating and shuffling intermediate data. As a result, BIGMiner achieves very high scalability due to no workload skewness, no intermediate data, and small network communication overhead. Through extensive experiments using large-scale datasets of up to 6.5 billion transactions, we have shown that BIGMiner consistently and significantly outperforms the state-of-the-art methods without any memory problems.
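The support-counting idea described above (fixed-size transaction chunks plus bitwise operations) can be sketched as follows; this is a single-machine simplification, not BIGMiner's MapReduce implementation. Each item gets a bit vector over a chunk's transactions, and the support of an itemset within the chunk is the popcount of the AND of its items' vectors.

```python
from functools import reduce

def chunk_bitmaps(chunk):
    """Build one bit vector per item over the transactions of a chunk."""
    bitmaps = {}
    for pos, transaction in enumerate(chunk):
        for item in transaction:
            bitmaps[item] = bitmaps.get(item, 0) | (1 << pos)
    return bitmaps

def support(itemset, bitmaps):
    """Support of an itemset within the chunk = popcount of the ANDed bit vectors."""
    anded = reduce(lambda a, b: a & b, (bitmaps.get(i, 0) for i in itemset))
    return bin(anded).count("1")

chunk = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
bm = chunk_bitmaps(chunk)
print(support({"a", "c"}, bm))   # -> 3; per-chunk counts would be summed across chunks
```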

8.
The Collaborative Computing Transport Layer (CCTL) is a communication substrate consisting of a suite of group communication protocols. The design of CCTL supports the needs of distributed collaborative applications. CCTL is based on a two-level group hierarchy that naturally matches the structure of many collaborative applications and that allows several implementation optimizations. Logical interconnections among processes, called channels, define an efficient, light-weight group mechanism, providing a variety of communication services such as reliability and message ordering. Related channels are associated with a heavy-weight group, called a session, that provides group management services, such as membership, for its associated channels. Sessions and channels run different protocol stacks, allowing a flexible and useful separation of group management semantics and communication service quality. This also allows the efficient reuse of existing group management services when introducing new communication services. This revised version was published online in July 2006 with corrections to the Cover Date.
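A minimal sketch, with assumed names rather than CCTL's API, of the two-level grouping described above: a heavyweight session owns membership once, while each lightweight channel carries its own delivery properties and reuses the session's membership.

```python
from dataclasses import dataclass, field

@dataclass
class Channel:
    """Lightweight group: a logical interconnection with its own protocol stack."""
    name: str
    qos: tuple                      # e.g. ("reliable", "ordered") or ("unreliable",)
    session: "Session"

    def multicast(self, msg):
        # Delivery properties come from this channel's stack; membership from the session.
        for member in self.session.members:
            print(f"[{self.name}/{'+'.join(self.qos)}] -> {member}: {msg}")

@dataclass
class Session:
    """Heavyweight group: owns membership management for all of its channels."""
    members: set = field(default_factory=set)
    channels: dict = field(default_factory=dict)

    def join(self, member):
        self.members.add(member)     # one membership change serves every channel

    def open_channel(self, name, qos):
        self.channels[name] = Channel(name, qos, self)
        return self.channels[name]

s = Session()
s.join("alice"); s.join("bob")
whiteboard = s.open_channel("whiteboard", ("reliable", "ordered"))
audio = s.open_channel("audio", ("unreliable",))
whiteboard.multicast("draw circle")
```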

9.
The state of development of Distributed Database Systems (DDB) is briefly surveyed and a basic taxonomy of the products, prototypes and architectures is presented. The applicability of the approach to applications arising in primary medical care networks is assessed by looking in some detail at three particular offerings in this field. Some insights into the implications for the future direction of DDB research and development are gained from this.

10.
The Saccharomyces cerevisiae prion [URE3] is the infectious amyloid form of the Ure2p protein. [URE3] provides a useful model system for studying amyloid formation and stability in vivo. When grown in the presence of a good nitrogen source, [URE3] cells are able to take up ureidosuccinate, an intermediate in uracil biosynthesis, while cells lacking the [URE3] prion cannot. This ability to take up ureidosuccinate has been commonly used to assay for the presence of [URE3]. However, this assay has a number of practical limitations, affecting the range of experiments that can be performed with [URE3]. Here, we describe recently developed alternative selection methods for the presence or absence of [URE3]. They make use of the Ure2p-regulated DAL5 promoter in conjunction with ADE2, URA3, kanMX, and CAN1 reporter genes, and allow for higher stringency in selection both for and against [URE3], nonselective assay of prion variants, and direct transformation of prion filaments. We discuss advantages and limitations of each of these assays.

11.
Scientific data analytics in high-performance computing environments has been evolving along with the advancement of computing capabilities. With the onset of exascale computing, the increasing gap between compute performance and I/O bandwidth has rendered the traditional post-simulation processing a tedious process. Despite the challenges due to increased data production, there exists an opportunity to benefit from “cheap” computing power to perform query-driven exploration and visualization during simulation time. To accelerate such analyses, applications traditionally augment, post-simulation, raw data with large indexes, which are then repeatedly utilized for data exploration. However, the generation of current state-of-the-art indexes involves a compute- and memory-intensive processing, thus rendering them inapplicable in an in situ context. In this paper we propose DIRAQ, a parallel in situ, in-network data encoding and reorganization technique that enables the transformation of simulation output into a query-efficient form, with negligible runtime overhead to the simulation run. DIRAQ’s effective core-local, precision-based encoding approach incorporates an embedded compressed index that is 3–6× smaller than current state-of-the-art indexing schemes. Its data-aware index adjustment improves performance of group-level index layout creation by up to 35% and reduces the size of the generated index by up to 27%. Moreover, DIRAQ’s in-network index merging strategy enables the creation of aggregated indexes that speed up spatial-context query responses by up to 10× versus alternative techniques. DIRAQ’s topology-, data-, and memory-aware aggregation strategy results in efficient I/O and yields overall end-to-end encoding and I/O time that is less than that required to write the raw data with MPI collective I/O.
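A hedged sketch of precision-based indexing in the spirit described above, not DIRAQ's actual encoding: values are bucketed by keeping only the high-order bits of their floating-point representation, and an inverted index from bucket to positions answers range queries coarsely before the raw data is touched. The bit counts and structures are illustrative assumptions (positive values only, so bucket order follows numeric order).

```python
import struct
from collections import defaultdict

def bucket(value, keep_bits=16):
    """Quantize a positive double by keeping the high-order bits of its IEEE-754 encoding."""
    raw = struct.unpack("<Q", struct.pack("<d", value))[0]
    return raw >> (64 - keep_bits)

def build_index(values, keep_bits=16):
    """Inverted index: truncated-precision key -> positions in the simulation output."""
    index = defaultdict(list)
    for pos, v in enumerate(values):
        index[bucket(v, keep_bits)].append(pos)
    return index

def range_query(index, lo, hi, keep_bits=16):
    """Candidate positions whose buckets overlap [lo, hi]; exact filtering would follow."""
    lo_b, hi_b = bucket(lo, keep_bits), bucket(hi, keep_bits)
    return sorted(p for b, ps in index.items() if lo_b <= b <= hi_b for p in ps)

data = [0.1, 0.5, 1.7, 2.2, 2.25, 9.0]
idx = build_index(data)
print(range_query(idx, 2.0, 3.0))   # -> candidate positions for values in [2.0, 3.0]
```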

12.
A fault detection service for wide area distributed computations
The potential for faults in distributed computing systems is a significant complicating factor for application developers. While a variety of techniques exist for detecting and correcting faults, the implementation of these techniques in a particular context can be difficult. Hence, we propose a fault detection service designed to be incorporated, in a modular fashion, into distributed computing systems, tools, or applications. This service uses well-known techniques based on unreliable fault detectors to detect and report component failure, while allowing the user to trade off timeliness of reporting against false positive rates. We describe the architecture of this service, report on experimental results that quantify its cost and accuracy, and describe its use in two applications: monitoring the status of system components of the GUSTO computational grid testbed, and as part of the NetSolve network-enabled numerical solver. This revised version was published online in July 2006 with corrections to the Cover Date.
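The unreliable failure detectors mentioned above can be illustrated with a simple heartbeat monitor (a sketch, not the service's actual protocol): a component is suspected once no heartbeat has arrived within a timeout, and lengthening the timeout trades slower detection for fewer false positives.

```python
import time

class HeartbeatDetector:
    """Unreliable failure detector: suspicion may be wrong, but it is still useful."""
    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s      # larger timeout -> fewer false positives, slower detection
        self.last_seen = {}

    def heartbeat(self, component):
        self.last_seen[component] = time.monotonic()

    def suspected(self):
        now = time.monotonic()
        return [c for c, t in self.last_seen.items() if now - t > self.timeout_s]

detector = HeartbeatDetector(timeout_s=0.2)
detector.heartbeat("solver-1")
detector.heartbeat("solver-2")
time.sleep(0.3)
detector.heartbeat("solver-2")          # solver-2 is still alive
print(detector.suspected())             # -> ['solver-1'] (possibly a false positive)
```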

13.
FPGA based distributed self healing architecture for reusable systems
Creating an environment of “no doubt” for computing systems is critical for supporting next generation science, engineering, and commercial applications. With reconfigurable devices such as Field Programmable Gate Arrays (FPGAs), designers are provided with a seductive tool to use as a basis for sophisticated but highly reliable platforms. Reconfigurable computing platforms potentially offer enhanced reliability and recovery from catastrophic failures through partial and dynamic reconfiguration, and they eliminate the need for the redundant hardware resources typically used by existing fault-tolerant systems. We propose a two-level self-healing methodology to offer 100% availability for mission-critical systems with comparatively less hardware overhead and performance degradation. Our proposed system first undertakes healing at the node level. If node-level healing fails to rectify the system, network-level healing is then undertaken. We have designed a system based on Xilinx Virtex-5 FPGAs and Cirronet wireless mesh nodes to demonstrate autonomous wireless healing capability among networked node devices. Our prototype is a proof-of-concept work which demonstrates the feasibility of using FPGAs to provide maximum computational availability in a critical self-healing distributed architecture.

14.
Distributed systems provide geographically distributed resources for large-scale applications while managing large volumes of data. In this context, replication of data in several sites of the system is an effective solution for achieving good performance. A number of data replication strategies have been proposed in the literature. Data popularity is one of the most important parameters taken into consideration by these strategies. It analyzes the history of the data access pattern and provides predictions of future data requests. However, measuring data popularity is a challenging task because there are several factors that contribute to the evaluation of data popularity. In this paper, a new adaptive measurement for data popularity in distributed systems is proposed. The proposed measurement covers all factors taken into consideration by previous work in the literature. It also takes into consideration new factors to deal with the dynamic nature of the system, so that it can adapt to any access pattern. We show that the exploitation of our measurement improves the performance of replication strategies, while offering the possibility of using the data popularity parameter in new contexts in replication management.
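As a hedged illustration of an adaptive popularity measurement, not the paper's formula: the sketch below keeps an exponentially decayed access count per data item, so recent accesses weigh more than old ones and the score adapts as the access pattern shifts. The half-life is an assumed tuning parameter.

```python
import math, time

class PopularityTracker:
    """Exponentially decayed access counter: an adaptive proxy for data popularity."""
    def __init__(self, half_life_s=3600.0):
        self.decay = math.log(2) / half_life_s   # assumed tuning parameter
        self.score, self.stamp = {}, {}

    def record_access(self, data_id, now=None):
        now = time.time() if now is None else now
        old = self.score.get(data_id, 0.0)
        dt = now - self.stamp.get(data_id, now)
        self.score[data_id] = old * math.exp(-self.decay * dt) + 1.0
        self.stamp[data_id] = now

    def popularity(self, data_id, now=None):
        now = time.time() if now is None else now
        dt = now - self.stamp.get(data_id, now)
        return self.score.get(data_id, 0.0) * math.exp(-self.decay * dt)

t = PopularityTracker(half_life_s=3600)
for ts in (0, 60, 120):            # three accesses within two minutes
    t.record_access("chunk-42", now=ts)
print(round(t.popularity("chunk-42", now=7200), 3))  # decayed score two hours later
```

A replication strategy would then place extra replicas of the items with the highest current scores.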

15.
MOTIVATION: The diverse microarray datasets that have become available over the past several years represent a rich opportunity and challenge for biological data mining. Many supervised and unsupervised methods have been developed for the analysis of individual microarray datasets. However, integrated analysis of multiple datasets can provide a broader insight into genetic regulation of specific biological pathways under a variety of conditions. RESULTS: To aid in the analysis of such large compendia of microarray experiments, we present Microarray Experiment Functional Integration Technology (MEFIT), a scalable Bayesian framework for predicting functional relationships from integrated microarray datasets. Furthermore, MEFIT predicts these functional relationships within the context of specific biological processes. All results are provided in the context of one or more specific biological functions, which can be provided by a biologist or drawn automatically from catalogs such as the Gene Ontology (GO). Using MEFIT, we integrated 40 Saccharomyces cerevisiae microarray datasets spanning 712 unique conditions. In tests based on 110 biological functions drawn from the GO biological process ontology, MEFIT provided a 5% or greater performance increase for 54 functions, with a 5% or more decrease in performance in only two functions.
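A minimal sketch, under a naive-Bayes (conditional independence) assumption, of combining evidence from several microarray datasets into a probability that a gene pair is functionally related within one biological process. The prior and likelihoods are invented for illustration and are not MEFIT's trained values.

```python
import math

def posterior_related(evidence, prior=0.05):
    """
    evidence: list of (P(observed correlation | related), P(observed correlation | unrelated)),
    one pair per dataset, combined under a conditional-independence assumption.
    """
    log_odds = math.log(prior / (1 - prior))
    for p_given_related, p_given_unrelated in evidence:
        log_odds += math.log(p_given_related / p_given_unrelated)
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical likelihoods for one gene pair, evaluated within one GO process
# (e.g. a ribosome-related function); each tuple comes from one dataset.
per_dataset = [(0.8, 0.3), (0.6, 0.4), (0.9, 0.2)]
print(round(posterior_related(per_dataset), 3))
```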

16.
Voltage imaging enables monitoring neural activity at sub-millisecond and sub-cellular scale, unlocking the study of subthreshold activity, synchrony, and network dynamics with unprecedented spatio-temporal resolution. However, high data rates (>800MB/s) and low signal-to-noise ratios create bottlenecks for analyzing such datasets. Here we present VolPy, an automated and scalable pipeline to pre-process voltage imaging datasets. VolPy features motion correction, memory mapping, automated segmentation, denoising and spike extraction, all built on a highly parallelizable, modular, and extensible framework optimized for memory and speed. To aid automated segmentation, we introduce a corpus of 24 manually annotated datasets from different preparations, brain areas and voltage indicators. We benchmark VolPy against ground truth segmentation, simulations and electrophysiology recordings, and we compare its performance with existing algorithms in detecting spikes. Our results indicate that VolPy’s performance in spike extraction and scalability are state-of-the-art.

17.
18.
Spheroids, a widely used three-dimensional (3D) culture model, are standard in hepatocyte culture as they preserve long-term hepatocyte functionality and enhance survivability. In this study, we investigated the effects of three operation modes of 3D culture, namely static, orbital shaking, and vertical bidirectional flow using spheroid forming units (SFUs), on hepatic differentiation and drug metabolism, to propose the best mode for mass production of functionally enhanced spheroids. Spheroids in SFUs exhibited increased hepatic gene expression, albumin secretion, and cytochrome P450 3A4 (CYP3A4) activity during the differentiation period (12 days). The advantages of SFUs include facilitated mass production and a relatively earlier peak of CYP3A4 activity. However, CYP3A4 activity was not well maintained under dimethyl sulfoxide (DMSO)-free conditions (13–18 days), dramatically reducing drug metabolism capability. Continued shear stimulation without differentiation stimuli under assay conditions markedly attenuated CYP3A4 activity, an effect that was less severe in static conditions. In this condition, SFU spheroids exhibited dedifferentiation characteristics, such as increased expression of proliferation and Notch signaling genes. We found that the dedifferentiation could be overcome by using the serum-free medium formulation. Therefore, we suggest that SFUs represent the best option for the mass production of functionally improved spheroids, and that serum-free conditions should be maintained during drug metabolism analysis.

19.
Large amounts of monitoring data can be collected from distributed systems as observables with which to analyze system behaviors. However, without reasonable models to characterize systems, we can hardly interpret such monitoring data effectively for system management. In this paper, a new concept named flow intensity is introduced to measure the intensity with which internal monitoring data reacts to the volume of user requests in distributed transaction systems. We propose a novel approach to automatically model and search relationships between the flow intensities measured at various points across the system. If the modeled relationships hold all the time, they are regarded as invariants of the underlying system. Experimental results from a real system demonstrate that such invariants widely exist in distributed transaction systems. Further, we discuss how such invariants can be used to characterize complex systems and support autonomic system management. (A minimal sketch of this invariant idea follows this entry.)

Guofei Jiang received the B.S. and Ph.D. degrees in electrical and computer engineering from Beijing Institute of Technology, China, in 1993 and 1998, respectively. During 1998–2000, he was a postdoctoral fellow in computer engineering at Dartmouth College, NH. He is currently a research staff member with the Robust and Secure Systems Group at NEC Laboratories America in Princeton, NJ. During 2000–2004, he was a research scientist in the Institute for Security Technology Studies at Dartmouth College. His current research focus is on distributed systems, dependable and secure computing, and system and information theory. He has published over 50 technical papers in these areas. He is an associate editor of IEEE Security and Privacy magazine and has served on the program committees of many conferences.

Haifeng Chen received the BEng and MEng degrees, both in automation, from Southeast University, China, in 1994 and 1997 respectively, and the PhD degree in computer engineering from Rutgers University, New Jersey, in 2004. He has worked as a researcher in the Chinese national research institute of power automation. He is currently a research staff member at NEC Laboratories America, Princeton, NJ. His research interests include data mining, autonomic computing, pattern recognition and robust statistics.

Kenji Yoshihira received the B.E. in EE at the University of Tokyo in 1996 and designed processor chips for enterprise computers at Hitachi Ltd. for five years. He served as CTO at Investoria Inc. in Japan, developing an Internet service system for financial information distribution, through 2002, and received the M.S. in CS at New York University in 2004. He is currently a research staff member with the Robust and Secure Systems Group at NEC Laboratories America, Inc. in NJ. His current research focus is on distributed systems and autonomic computing.
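The sketch referred to above, under a simple linear-model assumption (the actual work models relationships more generally): fit a relationship between two flow intensities from measurements taken during normal operation, then flag windows whose residual is too large to be consistent with the learned invariant.

```python
import numpy as np

def learn_invariant(x, y):
    """Fit y ~ a*x + b between two flow-intensity series measured during normal operation."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    return a, b, 3.0 * np.std(resid)        # tolerance band: assumed 3-sigma threshold

def invariant_holds(a, b, tol, x_now, y_now):
    return abs(y_now - (a * x_now + b)) <= tol

# Training data: requests/second at the web tier vs. SQL queries/second at the database tier.
np.random.seed(0)
web = np.array([10, 20, 30, 40, 50], dtype=float)
sql = 2.0 * web + np.random.normal(0, 0.5, size=web.size)   # roughly 2 queries per request
a, b, tol = learn_invariant(web, sql)
print(invariant_holds(a, b, tol, 35, 70.5))   # consistent with the learned invariant
print(invariant_holds(a, b, tol, 35, 95.0))   # violation -> possible fault or change
```

In practice many such pairwise invariants would be extracted and tracked simultaneously across the system.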

20.
In this note we outline some recent results on the development of a statistical testing methodology for inverse problems involving partial differential equation models. Applications to several problems from biology are presented. The statistical tests, which are in the spirit of analysis of variance (ANOVA), are based on asymptotic distributional results for estimators and residuals in a least squares approach.

Research supported in part under grants NSF MCS 8504316, NASA NAG-1-517, and AFOSR F-49620-86-C-0111. Part of this research was carried out while the first author was a visiting scientist at the Institute for Computer Applications in Science and Engineering (ICASE), NASA Langley Research Center, Hampton, VA, which is operated under NASA contracts NASI-18107 and NASI-18605.
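As a hedged illustration of the kind of ANOVA-type statistic involved (notation assumed; the note's precise results are not reproduced here), a least-squares model-comparison test restricts the PDE parameter under the null hypothesis and compares residual sums of squares:

```latex
% Illustrative model-comparison test (assumed notation, not the note's own):
% J_n(q) is the least-squares cost for the PDE solution u(t_i; q) against data y_i.
\begin{align*}
J_n(q) &= \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - u(t_i; q)\bigr)^2, \qquad
\hat{q}_n = \arg\min_{q \in Q} J_n(q), \quad
\hat{q}_n^{H} = \arg\min_{q \in Q_H} J_n(q), \\
T_n &= n\,\frac{J_n(\hat{q}_n^{H}) - J_n(\hat{q}_n)}{J_n(\hat{q}_n)},
\end{align*}
% where Q_H \subset Q encodes the null hypothesis (r constraints). Under the null
% hypothesis and suitable regularity conditions, T_n is asymptotically \chi^2_r,
% so large values of T_n lead to rejection.
```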
