Similar Articles
 20 similar articles found (search time: 406 ms)
1.
Diagnostic surgical pathology, or tissue-based diagnosis, remains the most reliable and specific diagnostic medical procedure. The development of whole-slide scanners permits the creation of virtual slides and work on so-called virtual microscopes. In addition to interactive work on virtual slides, approaches have been reported that introduce automated virtual microscopy, which is composed of several tools focusing on quite different tasks. These include evaluation of image quality and image standardization, analysis of potentially useful thresholds for object detection and identification (segmentation), dynamic segmentation procedures, adjustable magnification to optimize feature extraction, and texture analysis including image transformation and evaluation of elementary primitives. Grid technology seems to possess all the features needed to efficiently target and control these specific tasks of image information and detection in order to obtain a detailed and accurate diagnosis. Grid technology is based upon so-called nodes that are linked together and share certain communication rules using open standards. Their number and functionality can vary according to the needs of a specific user at a given point in time. When implementing automated virtual microscopy with Grid technology, all five Grid functions have to be taken into account, namely 1) computation services, 2) data services, 3) application services, 4) information services, and 5) knowledge services. Although all mandatory tools of automated virtual microscopy can be implemented in a closed or standardized open system, Grid technology offers a new dimension to acquire, detect, classify, and distribute medical image information, and to assure quality in tissue-based diagnosis.
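One of the steps named in this abstract, deriving a segmentation threshold from the image itself, can be sketched as follows. The paper does not specify an algorithm; a simple iterative mean-split (Ridler-Calvard style) threshold is used here purely as an illustration, and all names are ours.

```python
import numpy as np

# Illustrative threshold selection for object detection: split pixels at t,
# then recompute t as the midpoint of the two class means until stable.
def iterative_threshold(img, tol=0.5):
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        new_t = 0.5 * (lo.mean() + hi.mean())
        if abs(new_t - t) < tol:
            return new_t
        t = new_t

# Bimodal toy "image": dark background (10) and bright objects (200).
pixels = np.concatenate([np.full(500, 10.0), np.full(100, 200.0)])
t = iterative_threshold(pixels)  # converges to the midpoint of the modes
```

In a real whole-slide pipeline the same idea would run per tile, with the dynamic segmentation step adapting the threshold locally.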

2.
Cactus Tools for Grid Applications
Cactus is an open source problem solving environment designed for scientists and engineers. Its modular structure facilitates parallel computation across different architectures and collaborative code development between different groups. The Cactus Code originated in the academic research community, where it has been developed and used over many years by a large international collaboration of physicists and computational scientists. We discuss here how the intensive computing requirements of physics applications now using the Cactus Code encourage the use of distributed computing and metacomputing, and detail how its design makes it an ideal application test-bed for Grid computing. We describe the development of tools, and the experiments which have already been performed in a Grid environment with Cactus, including distributed simulations, remote monitoring and steering, and data handling and visualization. Finally, we discuss how Grid portals, such as those already developed for Cactus, will open the door to global computing resources for scientific users.

3.
4.
For several applications and algorithms used in applied bioinformatics, a bottleneck in computational time may arise when scaling up to analyses of large datasets and databases. Re-codification, algorithm modification, or sacrifices in sensitivity and accuracy may be necessary to accommodate the limited computational capacity of single workstations. Grid computing offers an alternative model for solving massive computational problems by parallel execution of existing algorithms and software implementations. We present the implementation of a Grid-aware model for solving computationally intensive bioinformatic analyses, exemplified by a blastp sliding-window algorithm for whole-proteome sequence similarity analysis, and evaluate its performance in comparison with a local cluster and a single workstation. Our strategy involves temporary installations of the BLAST executable and databases on remote nodes at submission, accommodating dynamic Grid environments by avoiding the need for predefined runtime environments (preinstalled software and databases at specific Grid nodes). Importantly, the implementation is generic: the BLAST executable can be replaced by other software tools to facilitate analyses suitable for parallelisation. This model should be of general interest in applied bioinformatics. Scripts and procedures are freely available from the authors.
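The job-partitioning idea behind such a Grid-aware model can be sketched as splitting a proteome into fixed-size chunks, so that each remote node runs BLAST on one chunk against a temporarily installed database. Function and variable names below are ours, not from the authors' scripts.

```python
# Sketch: partition (identifier, sequence) records into job-sized chunks,
# one chunk per Grid job submission.
def chunk_sequences(records, chunk_size):
    return [records[i:i + chunk_size]
            for i in range(0, len(records), chunk_size)]

# Toy proteome of 10 sequences, split into jobs of at most 4 sequences.
proteome = [("prot%d" % i, "MKVL" * 25) for i in range(10)]
jobs = chunk_sequences(proteome, 4)  # 3 jobs: sizes 4, 4, 2
```

Each chunk would then be shipped to a node together with the BLAST executable and database, which is what makes the scheme independent of preinstalled runtime environments.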

5.
File and Object Replication in Data Grids
Data replication is a key issue in a Data Grid and can be managed in different ways and at different levels of granularity: for example, at the file level or object level. In the High Energy Physics community, Data Grids are being developed to support the distributed analysis of experimental data. We have produced a prototype data replication tool, the Grid Data Mirroring Package (GDMP) that is in production use in one physics experiment, with middleware provided by the Globus Toolkit used for authentication, data movement, and other purposes. We present here a new, enhanced GDMP architecture and prototype implementation that uses Globus Data Grid tools for efficient file replication. We also explain how this architecture can address object replication issues in an object-oriented database management system. File transfer over wide-area networks requires specific performance tuning in order to gain optimal data transfer rates. We present performance results obtained with GridFTP, an enhanced version of FTP, and discuss tuning parameters.

6.
ABCGrid: Application for Bioinformatics Computing Grid
We have developed a package named Application for Bioinformatics Computing Grid (ABCGrid). ABCGrid was designed for biology laboratories to use heterogeneous computing resources and access bioinformatics applications from one master node. ABCGrid is easy to install and maintain without sacrificing robustness or high performance. We implement a mechanism to install and update all applications and databases on worker nodes automatically, reducing the workload of manual maintenance, and we use a backup-task method and a self-adaptive job-dispatch approach to improve performance. Currently, ABCGrid integrates NCBI_BLAST, Hmmpfam and CE, running on a number of computing platforms including UNIX/Linux, Windows and Mac OS X. AVAILABILITY: The source code, executables and documents can be downloaded from http://abcgrid.cbi.pku.edu.cn

7.
The Rutgers Computational Grid (RCG) project is aimed at providing high throughput performance to Rutgers University faculty and students. The RCG employs dual-processor PCs, with Pentium II and III processors, as computational nodes, running the Red Hat Linux operating system. The Load Sharing Facility (LSF) scheduling system from Platform Computing is used for job control and monitoring. The nodes are grouped into subclusters physically located in several departments and controlled by a single master node through LSF. The hardware and software used in RCG are described. Utilization and performance issues, including parallel performance, are discussed based on the experience of the first two years of RCG operation.

8.
In this article, we present a graph theoretic method to visualize and analyze the system behavior under different operating conditions. The system attributes (or variables) are the nodes in the graphs, and partial correlation between a pair of attributes defines the distance between corresponding nodes, resulting in a fully connected graph. Then, the redundant links are reduced using the Pathfinder Network Scaling technique to uncover the latent network structure. We use a simulated biological reactor dataset in normal and faulty operation to validate our method. The method is general and can be used to analyze several different systems.
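The distance construction described above can be sketched as follows. This is our own construction, not the paper's code: partial correlations between attributes are estimated via the precision matrix, and 1 - |rho| then serves as an edge distance in the fully connected attribute graph that Pathfinder Network Scaling would subsequently prune.

```python
import numpy as np

# Partial correlation via the precision (inverse covariance) matrix:
# rho_ij = -P_ij / sqrt(P_ii * P_jj).
def partial_correlation(X):
    """X: samples x attributes. Returns the partial-correlation matrix."""
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    rho = -prec / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                # 200 samples of 4 attributes
dist = 1.0 - np.abs(partial_correlation(X))  # fully connected distance matrix
```

Strongly (partially) correlated attributes end up close together; Pathfinder scaling would then keep only links that lie on shortest paths.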

9.
MOTIVATION: Since the newly developed Grid platform has been considered a powerful tool for sharing resources in the Internet environment, it is of interest to demonstrate an efficient methodology for processing massive biological data on Grid environments at low cost. This paper presents an efficient and economical Grid-based method to predict the secondary structures of all proteins in a given organism, which normally requires a long computation time through sequential execution, by processing a large amount of protein sequence data simultaneously. From the prediction results, a genome-scale protein fold space can be pursued. RESULTS: Using the improved Grid platform, genome-scale secondary structure prediction and protein topology derived from the new scoring scheme are presented for four different model proteomes. This protein fold space was compared with structures from the Protein Data Bank database, and it showed a similarly aligned distribution. Therefore, the fold space approach based on this new scoring scheme could be a guideline for predicting folding families in a given organism.

10.
The National Fusion Collaboratory project seeks to enable fusion scientists to exploit Grid capabilities in support of experimental science. To this end we are exploring the concept of a collaborative control room that harnesses Grid and collaborative technologies to provide an environment in which remote experimental devices, codes, and expertise can interact in real time during an experiment. This concept has the potential to make fusion experiments more efficient by enabling researchers to perform more analysis and by engaging more expertise from a geographically distributed team of scientists and resources. As the realities of software development, talent distribution, and budgets increasingly encourage pooling resources and specialization, we see such environments as a necessary tool for future science. In this paper, we describe an experimental mock-up of a remote interaction with the DIII-D control room. The collaborative control room was demonstrated at SC03 and later reviewed at an international ITER Grid Workshop. We describe how the combined effect of various technologies—collaborative, visualization, and Grid—can be used effectively in experimental science. Specifically, we describe the Access Grid, experimental data presentation tools, and agreement-based resource management and workflow systems enabling time-bounded end-to-end application execution. We also report on FusionGrid services whose use during the fusion experimental cycle became possible for the first time thanks to this technology, and we discuss its potential use in future fusion experiments.

11.
Computational Grids [17,25] have become an important asset in large-scale scientific and engineering research. By providing a set of services that allow a widely distributed collection of resources to be tied together into a relatively seamless computing framework, teams of researchers can collaborate to solve problems that they could not have attempted before. Unfortunately, the task of building Grid applications remains extremely difficult because there are few tools available to support developers. To build reliable and re-usable Grid applications, programmers must be equipped with a programming framework that hides the details of most Grid services and offers the developer a consistent, non-complex model in which applications can be composed from well tested, reliable sub-units. This paper describes experiences with using a software component framework for building Grid applications. The framework, which is based on the DOE Common Component Architecture (CCA) [1,2,3,8], allows individual components to export function/service interfaces that can be remotely invoked by other components. The framework also provides a simple messaging/event system for asynchronous notification between application components. The paper also describes how the emerging Web-services [52] model fits with a component-oriented application design philosophy. To illustrate the connection between Web services and Grid application programming we describe a simple design pattern for application factory services which can be used to simplify the task of building reliable Grid programs. Finally, we address several issues of Grid programming that are better understood from the perspective of Peer-to-Peer (P2P) systems. In particular, we describe how models for collaboration and resource sharing fit well with many Grid application scenarios.

12.
Cross-referencing experimental data with our current knowledge of signaling network topologies is one central goal of mathematical modeling of cellular signal transduction networks. We present a new methodology for data-driven interrogation and training of signaling networks. While most published methods for signaling network inference operate on Bayesian, Boolean, or ODE models, our approach uses integer linear programming (ILP) on interaction graphs to encode constraints on the qualitative behavior of the nodes. These constraints are posed by the network topology, and their formulation as ILP allows us to predict the possible qualitative changes (up, down, no effect) of the activation levels of the nodes for a given stimulus. We provide four basic operations to detect and remove inconsistencies between measurements and predicted behavior: (i) find a topology-consistent explanation for responses of signaling nodes measured in a stimulus-response experiment (if none exists, find the closest explanation); (ii) determine a minimal set of nodes that need to be corrected to make an inconsistent scenario consistent; (iii) determine the optimal subgraph of the given network topology which can best reflect measurements from a set of experimental scenarios; (iv) find possibly missing edges whose inclusion would most improve the consistency of the graph with respect to a set of experimental scenarios. We demonstrate the applicability of the proposed approach by interrogating a manually curated interaction graph model of EGFR/ErbB signaling against a library of high-throughput phosphoproteomic data measured in primary hepatocytes. Our methods detect interactions that are likely to be inactive in hepatocytes and provide suggestions for new interactions that, if included, would significantly improve the goodness of fit. Our framework is highly flexible, and the underlying model requires only easily accessible biological knowledge. All related algorithms were implemented in SigNetTrainer, a freely available toolbox, making the approach appealing for various applications.
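The consistency notion behind these operations can be illustrated on a tiny interaction graph. The paper encodes such constraints as an ILP; the sketch below instead brute-forces all qualitative labelings in {-1, 0, +1}, which shows the same idea at toy scale. All names are ours.

```python
from itertools import product

# A labeling is consistent if it agrees with the fixed (measured or
# stimulated) nodes and every non-zero node is explained by some
# predecessor: lab[u] * edge_sign == lab[v].
def consistent_labelings(edges, nodes, fixed):
    preds = {n: [(u, s) for (u, v, s) in edges if v == n] for n in nodes}
    solutions = []
    for combo in product((-1, 0, 1), repeat=len(nodes)):
        lab = dict(zip(nodes, combo))
        if any(lab[n] != val for n, val in fixed.items()):
            continue
        if all(lab[n] == 0 or not preds[n]
               or any(lab[u] * s == lab[n] for u, s in preds[n])
               for n in nodes):
            solutions.append(lab)
    return solutions

# Stimulus raises A; A inhibits B (sign -1): consistent scenarios have
# B going down or staying flat, never up.
sols = consistent_labelings([("A", "B", -1)], ["A", "B"], {"A": 1})
```

An ILP formulation expresses the same constraints with integer variables so that a solver can handle networks with dozens of nodes, where enumeration would be infeasible.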

13.
We study evolutionary dynamics in a population whose structure is given by two graphs: the interaction graph determines who plays with whom in an evolutionary game; the replacement graph specifies the geometry of evolutionary competition and updating. First, we calculate the fixation probabilities of frequency dependent selection between two strategies or phenotypes. We consider three different update mechanisms: birth-death, death-birth and imitation. Then, as a particular example, we explore the evolution of cooperation. Suppose the interaction graph is a regular graph of degree h, the replacement graph is a regular graph of degree g and the overlap between the two graphs is a regular graph of degree l. We show that cooperation is favored by natural selection if b/c>hg/l. Here, b and c denote the benefit and cost of the altruistic act. This result holds for death-birth updating, weak selection and large population size. Note that the optimum population structure for cooperators is given by maximum overlap between the interaction and the replacement graph (g=h=l), which means that the two graphs are identical. We also prove that a modified replicator equation can describe how the expected values of the frequencies of an arbitrary number of strategies change on replacement and interaction graphs: the two graphs induce a transformation of the payoff matrix.
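The threshold b/c > hg/l stated in this abstract is easy to evaluate directly; the snippet below just restates it in code (for death-birth updating under weak selection and large population size).

```python
# Critical benefit-to-cost ratio for cooperation to be favored:
# h = interaction-graph degree, g = replacement-graph degree,
# l = overlap-graph degree.
def critical_bc_ratio(h, g, l):
    return h * g / l

# When the two graphs are identical (g = h = l = k), the threshold
# reduces to the familiar b/c > k condition.
identical = critical_bc_ratio(5, 5, 5)   # threshold 5.0
partial = critical_bc_ratio(6, 4, 2)     # smaller overlap raises the bar
```

Since l <= min(h, g), the ratio hg/l is minimized (cooperation is easiest) exactly when the overlap is maximal, matching the abstract's observation.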

14.
Software Component Frameworks are well known in the commercial business application world, and this technology is now being explored with great interest as a way to build large-scale scientific applications on parallel computers. In the case of Grid systems, the current architectural model is based on the emerging web services framework. In this paper we describe progress that has been made on the Common Component Architecture model (CCA) and discuss its successes and limitations when applied to problems in Grid computing. Our primary conclusion is that a component model fits very well with a services-oriented Grid, but the model of composition must allow for a very dynamic (both in space and in time) control of composition. We note that this adds a new dimension to conventional service workflow and extends the “Inversion of Control” aspects of most component systems. Dennis Gannon is a professor of Computer Science at Indiana University. He received his Ph.D. in Computer Science from the University of Illinois in 1980 and his Ph.D. in Mathematics from the University of California in 1974. From 1980 to 1985, he was on the faculty at Purdue University. His research interests include software tools for high performance distributed systems and problem solving environments for scientific computation. Sriram Krishnan received his Ph.D. in Computer Science from Indiana University in 2004. He is currently in the Grid Development Group at the San Diego Supercomputer Center, where he is working on designing a Web services based architecture for biomedical applications that is secure and scalable, and is conducive to the creation of complex workflows. He received his undergraduate degree in Computer Engineering from the University of Mumbai, India. Liang Fang is a Ph.D. student in Computer Science at Indiana University. His research interests include Grid computing, Web services, portals, and their security and scalability issues. He is a Research Assistant in Computer Science at Indiana University, currently responsible for investigating authorization and other security solutions for the Linked Environments for Atmospheric Discovery (LEAD) project. Gopi Kandaswamy is a Ph.D. student in the Computer Science Department at Indiana University, where he is currently a Research Assistant. His research interests include Web services and workflow systems for the Grid. Yogesh Simmhan received his B.E. degree in Computer Science from Madras University, India in 2000, and is a doctoral candidate in Computer Science at Indiana University. He is currently working as a Research Assistant at Indiana University, investigating data management issues in the LEAD project. His interests lie in data provenance for workflow systems and its use in data quality estimation. Aleksander Slominski is a Ph.D. student in Computer Science at Indiana University. His research interests include Grid and Web Services, streaming XML pull parsing and performance, Grid security, asynchronous messaging, events and notification brokers, component technologies, and workflow composition. He is currently working as a Research Assistant investigating the creation and execution of dynamic workflows using the Grid Process Execution Language (GPEL), based on WS-BPEL.

15.
Immunohistochemical detection of the distribution of neuroglobin in the rat central nervous system
Objective: To investigate the distribution of the neuroglobin (NGB) gene in the central nervous system. Methods: The immunohistochemical ABC method was used to study the distribution and localization of NGB protein in the adult rat brain. Results: NGB protein was very widely expressed in the adult rat brain. Its distribution included the cerebral cortex, hippocampus, some nuclei of the thalamus and hypothalamus, the pons, and the cerebellum; NGB-immunoreactive material was localized in the cytoplasm of neurons. Conclusion: NGB protein is very widely expressed in the rat brain, suggesting that the NGB gene may play an important role in the functional activity of the central nervous system.

16.
We have recently developed monolayer purification as a rapid and convenient technique to produce specimens of His-tagged proteins or macromolecular complexes for single-particle electron microscopy (EM) without biochemical purification. Here, we introduce the Affinity Grid, a pre-fabricated EM grid featuring a dried lipid monolayer that contains Ni-NTA lipids (lipids functionalized with a nickel-nitrilotriacetic acid group). The Affinity Grid, which can be stored for several months under ambient conditions, further simplifies and extends the use of monolayer purification. After characterizing the Affinity Grid, we used it to isolate, within minutes, ribosomal complexes from Escherichia coli cell extracts containing His-tagged rpl3, the human homolog of the E. coli 50 S subunit rplC. Ribosomal complexes with or without associated mRNA could be prepared depending on the way the sample was applied to the Affinity Grid. Vitrified Affinity Grid specimens could be used to calculate three-dimensional reconstructions of the 50 S ribosomal subunit as well as the 70 S ribosome and 30 S ribosomal subunit from images of the same sample. We established that Affinity Grids are stable for some time in the presence of glycerol and detergents, which allowed us to isolate His-tagged aquaporin-9 (AQP9) from detergent-solubilized membrane fractions of Sf9 insect cells. The Affinity Grid can thus be used to prepare single-particle EM specimens of soluble complexes and membrane proteins.

17.
We present a rectangle-based segmentation algorithm that sets up a graph and performs a graph cut to separate an object from the background. Typically, graph-based algorithms distribute the graph's nodes uniformly and equidistantly on the image, and a smoothness term is added to force the cut to prefer a particular shape. This strategy does not allow the cut to prefer a certain structure, especially when areas of the object are indistinguishable from the background. We solve this problem by referring to a rectangular shape of the object when sampling the graph nodes, i.e., the nodes are distributed non-uniformly and non-equidistantly on the image. This strategy is useful precisely when areas of the object are indistinguishable from the background. For evaluation, we focus on vertebra images from Magnetic Resonance Imaging (MRI) datasets to support the time-consuming manual slice-by-slice segmentation performed by physicians. The ground truth of the vertebra boundaries was manually extracted by two clinical experts (neurological surgeons) with several years of experience in spine surgery and afterwards compared with the automatic segmentation results of the proposed scheme, yielding an average Dice Similarity Coefficient (DSC) of 90.97±2.2%.
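The evaluation metric reported above, the Dice Similarity Coefficient, can be computed directly from two binary masks; the masks below are toy data, not from the MRI study.

```python
import numpy as np

# DSC = 2|A ∩ B| / (|A| + |B|) for boolean segmentation masks a, b.
def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

manual = np.array([[0, 1, 1], [0, 1, 0]])  # expert ground truth (3 pixels)
auto   = np.array([[0, 1, 0], [0, 1, 0]])  # automatic result (2 pixels)
score = dice(manual, auto)  # 2*2 / (3+2) = 0.8
```

A DSC of 1.0 means perfect overlap; the paper's reported average of about 0.91 between expert and automatic masks is typically considered good agreement for vertebra segmentation.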

18.
Abstract: Net additions to stock (NAS) is an indicator based on economy-wide material flow accounting and analysis. NAS, a measure of the physical growth rate of an economy, can be used to estimate future waste flows. It is calculated using two methods: the indirect method is a simple difference between all input and output flows, whereas the direct method involves measuring the amounts of materials added to particular categories of physical stock and the amounts of waste flows from these stocks.
The study described in this article had one leading objective: to make available direct NAS data for the Czech Republic, which could later be used for predicting future waste flows. Two additional objectives emerged from the first: (1) to develop a method for direct NAS calculation from the data available in the Czech Republic; (2) to calculate NAS directly, compare the results with those achieved by indirect NAS calculation, and discuss the identified differences.
The NAS for the Czech Republic calculated by the direct method equals approximately 65 million tonnes on average for 2000–2002 and is approximately 27% lower than the NAS acquired by the indirect method of calculation. The actual values of directly calculated NAS and its uncertainties suggest that the indirect NAS is more likely to be an overestimation than an underestimation. Durables account for about 2% of the total direct NAS, whereas the rest is attributed to infrastructure and buildings. The direct NAS is dominated by nonmetal construction commodities such as building stone and bricks, which equal approximately 89% of the total direct NAS.
Calculation of NAS by the direct method has proved to be feasible in the Czech Republic. Moreover, uncertainties related to direct NAS are lower than those related to indirectly acquired NAS.
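The two accounting routes described in the abstract can be written down directly; the numbers below are illustrative only, not the actual Czech figures.

```python
# Indirect method: economy-wide balance, total material inputs minus
# total outputs.
def nas_indirect(inputs, outputs):
    return sum(inputs) - sum(outputs)

# Direct method: materials added to specific stock categories minus
# waste flows leaving those stocks.
def nas_direct(additions, waste):
    return sum(additions.values()) - sum(waste.values())

additions = {"buildings": 50.0, "infrastructure": 35.0, "durables": 2.0}
waste = {"construction_demolition": 22.0}
direct = nas_direct(additions, waste)     # 65.0 (million tonnes, invented)
indirect = nas_indirect([120.0], [31.0])  # 89.0, i.e. above the direct value
```

The gap between the two results in this toy example mirrors the paper's finding that the indirect route tends to come out higher than the direct one.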

19.
Dynamic changes of neuroglobin expression in the hippocampus of mice after transient forebrain ischemia
We studied the dynamic changes and significance of neuroglobin (NGB) expression at different time points after transient forebrain ischemia-reperfusion in mice. A C57BL/6 mouse ischemia-reperfusion model was established by occluding both common carotid arteries; RT-PCR and Western blotting were used to measure the dynamic changes of NGB expression in hippocampal tissue at the transcriptional and translational levels. At the mRNA level, compared with the sham-operated control group (100±0.00), NGB expression began to rise at 6 h after reperfusion (132.59±28.26, P<0.05), peaked at 24 h (157.36±13.85, P<0.001), began to decline at 48 h (146.55±23.17, P<0.01), and had largely returned to normal at 72 h (118.42±34.23, P>0.05). At the protein level, compared with the sham-operated control group (100±0.00), expression rose slightly at 6 h (111.46±23.54, P>0.05), peaked at 24 h (141.25±32.12, P<0.01), began to decline at 48 h (138.02±19.68, P<0.05), and had largely returned to normal at 72 h (119.29±35.18, P>0.05). These results suggest that NGB mRNA and protein expression increase at each time point after cerebral ischemia-reperfusion, possibly as a stress response, but that the increase is short-lived (within 48 h).

20.
Though biomedical research often draws on knowledge from a wide variety of fields, few visualization methods for biomedical data incorporate meaningful cross-database exploration. A new approach is offered for visualizing and exploring a query-based subset of multiple heterogeneous biomedical databases. Databases are modeled as an entity-relation graph containing nodes (database records) and links (relationships between records). Users specify a keyword search string to retrieve an initial set of nodes, and then explore intra- and interdatabase links. Results are visualized with user-defined semantic substrates to take advantage of the rich set of attributes usually present in biomedical data. Comments from domain experts indicate that this visualization method is potentially advantageous for biomedical knowledge exploration.
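The query-then-explore model in this abstract can be sketched in a few lines: records are nodes, relationships are links, a keyword query seeds the node set, and one-hop link-following reveals cross-database neighbors. The records and identifiers below are invented for illustration.

```python
# Keyword search over record texts yields the initial node set.
def keyword_seeds(records, keyword):
    return {rid for rid, text in records.items() if keyword in text.lower()}

# Follow intra- and interdatabase links one hop outward from the seeds.
def expand(seeds, links):
    out = {v for u, v in links if u in seeds}
    out |= {u for u, v in links if v in seeds}
    return out - seeds

records = {"gene:tp53": "TP53 tumor suppressor",
           "pub:123": "Paper on TP53 regulation",
           "prot:p53": "Cellular tumor antigen p53"}
links = [("gene:tp53", "pub:123"), ("gene:tp53", "prot:p53")]

seeds = keyword_seeds(records, "tp53")  # gene and publication records
neighbors = expand(seeds, links)        # the linked protein record
```

In the paper's visualization, the seeds and their neighbors would then be laid out on user-defined semantic substrates rather than returned as plain sets.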


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号