Similar articles
 20 similar articles found (search time: 15 ms)
1.
Diagnostic surgical pathology, or tissue-based diagnosis, still remains the most reliable and specific diagnostic medical procedure. The development of whole-slide scanners permits the creation of virtual slides and work on so-called virtual microscopes. In addition to interactive work on virtual slides, approaches have been reported that introduce automated virtual microscopy, which is composed of several tools focusing on quite different tasks. These include evaluation of image quality and image standardization, analysis of potentially useful thresholds for object detection and identification (segmentation), dynamic segmentation procedures, adjustable magnification to optimize feature extraction, and texture analysis including image transformation and evaluation of elementary primitives. Grid technology seems to possess all the features needed to efficiently target and control the specific tasks of image information and detection in order to obtain a detailed and accurate diagnosis. Grid technology is based upon so-called nodes that are linked together and share certain communication rules using open standards. Their number and functionality can vary according to the needs of a specific user at a given point in time. When implementing automated virtual microscopy with Grid technology, all five Grid functions have to be taken into account, namely 1) computation services, 2) data services, 3) application services, 4) information services, and 5) knowledge services. Although all mandatory tools of automated virtual microscopy can be implemented in a closed or standardized open system, Grid technology offers a new dimension to acquire, detect, classify, and distribute medical image information, and to assure quality in tissue-based diagnosis.

2.
M. Puech, F. Giroud. Cytometry 1999, 36(1):11-17
BACKGROUND: DNA image analysis is frequently performed in clinical practice as a prognostic tool and to improve diagnosis. The precision of prognosis and diagnosis depends on the accuracy of analysis and particularly on the quality of image analysis systems. It has been reported that image analysis systems used for DNA quantification differ widely in their characteristics (Thunissen et al.: Cytometry 27: 21-25, 1997). This induces inter-laboratory variations when the same sample is analysed in different laboratories. In microscopic image analysis, the principal instrumentation errors arise from the optical and electronic parts of systems. They bring about problems of instability, non-linearity, and shading and glare phenomena. METHODS: The aim of this study is to establish tools and standardised quality control procedures for microscopic image analysis systems. Specific reference standard slides have been developed to control instability, non-linearity, shading and glare phenomena and segmentation efficiency. RESULTS: Several systems have been controlled with these tools and quality control procedures. Interpretation criteria and accuracy limits for these quality control procedures are proposed according to the conclusions of a European project called the PRESS project (Prototype Reference Standard Slide). Beyond these limits, tested image analysis systems are not qualified to perform precise DNA analysis. CONCLUSIONS: The different procedures presented in this work determine whether an image analysis system is qualified to deliver sufficiently precise DNA measurements for cancer case analysis. If a controlled system falls beyond the defined limits, recommendations are given to find a solution to the problem.
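A minimal sketch of what one such instrument check might look like: estimating shading (uneven illumination) from a blank-field frame as the spread of the background relative to its mean. The function name, synthetic frame, and any acceptance limit are illustrative assumptions, not the PRESS project's actual procedure.

```python
import numpy as np

def shading_index(blank_field):
    """Percent deviation of the local background from the image mean.

    A large value indicates uneven illumination (shading), which would
    distort densitometric DNA measurements.
    """
    img = np.asarray(blank_field, dtype=float)
    return 100.0 * (img.max() - img.min()) / img.mean()

# Synthetic blank-field frame with a mild left-to-right gradient.
field = 200.0 + np.linspace(0.0, 10.0, 64)[None, :] * np.ones((64, 1))
idx = shading_index(field)
```

A laboratory would compare `idx` against a site-defined accuracy limit and flag the system as unqualified when the limit is exceeded.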

3.
With the proliferation of quad/multi-core microprocessors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, could deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consumption of high-performance grid services. Spurious and exponential increases in the size of datasets are constant concerns in the medical and pharmaceutical industries due to the constant discovery and publication of large sequence databases. Traditional algorithms are not modeled for handling large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with those from running the same test dataset on a virtual Grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work), all interactions were made local within a team transparently. This paper is a proof of concept of an experimental mini-Grid test-bed compared with running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple-virtual-node setup and present our findings.
This paper is an extension of our previous research on improving the BLAST application framework using dynamic Grids on virtualization platforms such as VirtualBox.

4.
Assessment of fluoroscopic image quality has not kept pace with technological developments in interventional imaging equipment. Access to ‘for presentation’ data on these systems has motivated this investigation into a novel quantitative method of measuring image quality. We have developed a statistical algorithm as an alternative to subjective assessment using threshold contrast-detail detectability techniques. Using sets of uniformly exposed fluoroscopy frames, the algorithm estimates the minimum contrast necessary for conspicuity of a range of virtual target object areas A. Pixel mean value distributions in a central image region are Gaussian, with standard deviation σ. Pixel binning produces background distributions with area A. For 95% confidence of conspicuity, a target object must exhibit a minimum contrast of 3.29σ. A range of threshold contrasts is calculated for a range of virtual areas. Analysis of a few seconds of fluoroscopy data is performed remotely and no test object is required. In this study, Threshold Index and Contrast Detail curves were calculated for different incident air kerma rates at the detector, different levels of electronic magnification and different types of image processing. A limited number of direct comparisons were made with subjective assessments using the Leeds TO.10 test object. The results obtained indicate that the statistical algorithm is not only more sensitive to changes in detector dose rate and magnification, but also to levels of image processing, including edge enhancement. Threshold Index curves thus produced could be used as an interventional system optimisation tool and to objectively compare image quality between vendor systems.
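The binning-and-threshold statistics described above can be sketched as follows. The frame is synthetic and the function name and normalisation by the mean are assumptions, but the 3.29σ criterion and the way binning reduces the background spread follow directly from the Gaussian model in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniformly exposed frame: Gaussian pixel noise around a flat background.
mean_level, sigma = 1000.0, 20.0
frame = rng.normal(mean_level, sigma, size=(256, 256))

def threshold_contrast(frame, bin_size):
    """Minimum contrast (as a fraction of the background mean) needed for
    95% confident conspicuity of a square target of side `bin_size`,
    using the 3.29-sigma criterion on the binned background distribution."""
    h, w = frame.shape
    h, w = h - h % bin_size, w - w % bin_size
    binned = frame[:h, :w].reshape(h // bin_size, bin_size,
                                   w // bin_size, bin_size).mean(axis=(1, 3))
    return 3.29 * binned.std() / frame.mean()

c1 = threshold_contrast(frame, 1)   # single-pixel target
c4 = threshold_contrast(frame, 4)   # 16-pixel target: lower threshold
```

Larger virtual areas average away noise, so the threshold contrast falls as the target area grows, which is the familiar shape of a contrast-detail curve.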

5.
In critical care, tight control of blood glucose levels has been shown to lead to better clinical outcomes. The need to develop new protocols for tight glucose control, as well as the opportunity to optimize a variety of other drug therapies, has led to a resurgence of model-based medical decision support in this area. One still-valid hindrance to developing new model-based protocols using so-called virtual patients, retrospective clinical data, and Monte Carlo methods is the large amount of computational time and resources needed. This paper develops fast analytical methods for an insulin-glucose system model that are generalizable to other similar systems. Exploiting the structure and partial solutions of a subset of the model is the key to finding accurate, fast solutions to the full model. This approach successfully reduced computing time by factors of 5600 to 144,000, depending on the numerical error management method, for large (50-164 patients) virtual trials and Monte Carlo analysis. It thus allows new model-based or model-derived protocols to be rapidly developed via extensive simulation. The new method is rigorously compared to existing standard numerical solutions and is found to be highly accurate, to within 0.2%.
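The source of the speedup, replacing iterative numerical integration with closed-form solutions on part of the model, can be illustrated on a single linear clearance compartment. This is a deliberately simplified stand-in, not the authors' full insulin-glucose model; parameter values are illustrative.

```python
import numpy as np

# One linear compartment: dG/dt = -p_G * G, closed form G(t) = G0 * exp(-p_G * t).
p_G, G0, T = 0.01, 10.0, 60.0   # illustrative per-minute clearance, initial level, horizon

def euler(dt):
    """Explicit-Euler reference solution: thousands of steps per evaluation."""
    g = G0
    for _ in range(int(T / dt)):
        g += dt * (-p_G * g)
    return g

exact = G0 * np.exp(-p_G * T)   # analytical solution: a single evaluation
approx = euler(0.01)            # numerical solution: 6000 steps
rel_err = abs(approx - exact) / exact
```

In a Monte Carlo virtual trial the right-hand side is evaluated millions of times, so swapping the loop for the one-line closed form is where order-of-magnitude speedups come from.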

6.
7.
The previous generation of image analysis machines was capable of processing and analyzing binary, i.e., black-and-white, images and making measurements and decisions thereon. The Magiscan 2, one of the new generation of computers, is capable of analyzing gray-level images in a variety of sophisticated ways. Its uses in medical image analysis are presented, as are the techniques used to analyze images in general. The two principal current medical uses are the automatic karyotyping of chromosomes and the automatic screening of cervical smears. Other applications discussed include three-dimensional reconstruction of structures from two-dimensional sections and the possibility of developing expert systems for medical diagnostics.

8.
ConfocalVR is a virtual reality (VR) application created to improve the ability of researchers to study the complexity of cell architecture. Confocal microscopes take pictures of fluorescently labeled proteins or molecules at different focal planes to create a stack of two-dimensional images throughout the specimen. Current software applications reconstruct the three-dimensional (3D) image and render it as a two-dimensional projection onto a computer screen, where users need to rotate the image to expose the full 3D structure. This process is mentally taxing, breaks down if you stop the rotation, and does not take advantage of the eye's full field of view. ConfocalVR exploits consumer-grade VR systems to fully immerse the user in the 3D cellular image. In this virtual environment, the user can (1) adjust image viewing parameters without leaving the virtual space, (2) reach out and grab the image to quickly rotate and scale it to focus on key features, and (3) interact with other users in a shared virtual space, enabling real-time collaborative exploration and discussion. We found that immersive VR technology allows the user to rapidly understand cellular architecture and protein or molecule distribution. We note that it is impossible to understand the value of immersive visualization without experiencing it firsthand, so we encourage readers to get access to a VR system, download this software, and evaluate it for yourselves. The ConfocalVR software is available for download at http://www.confocalvr.com and is free for nonprofits.
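For contrast with the screen-based rendering the abstract criticizes, the conventional 2D view of a confocal stack is a one-line reduction over the focal axis. The stack here is synthetic and the names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic confocal stack: 8 focal planes of a 32x32 field, with one
# bright fluorescently "labeled" voxel buried in a middle plane.
stack = rng.uniform(0.0, 0.2, size=(8, 32, 32))
stack[4, 10, 20] = 1.0

# Maximum-intensity projection: the flat 2D image a screen-based viewer
# shows, which discards depth and forces the user to rotate to recover it.
mip = stack.max(axis=0)
```

The bright voxel survives the projection, but its depth (plane 4) is lost, which is exactly the information an immersive 3D view preserves.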

9.
Background: Quantitative systems pharmacology (QSP) is an emerging discipline that integrates diverse data to quantitatively explore the interactions between drugs and multi-scale systems including small compounds, nucleic acids, proteins, pathways, cells, organs and disease processes. Results: Various computational methods for QSP have been developed, such as ADME/T evaluation, molecular modeling, logical modeling, network modeling, pathway analysis, multi-scale systems pharmacology platforms and virtual patients. We review the major progress and broad applications in medical guidance, drug discovery and exploration of the pharmacodynamic material basis and mechanisms of traditional Chinese medicine. Conclusion: QSP has seen significant achievements in recent years and is a promising approach for quantitative evaluation of drug efficacy and systematic exploration of drug mechanisms of action.

10.
The malaria hypothesis, which addresses a strong selective pressure on human genes resulting from a chain of processes that originated with the practice of agriculture, is an example of an evolutionary consequence of niche construction. This scenario has led us to formulate the following questions: Are the genetic adaptations of populations with a history of contact with malaria reflected in the local medical systems? Likewise, could environmental changes (deforestation) and the incidence of malaria result in an adaptive response in these local health care systems? We collected secondary data for the entire African continent from different databases and secondary sources and measured the response of health care systems as the variation in the richness of antimalarial medicinal plants. Our results did not indicate a cause-and-effect relationship between the tested variables and the medical systems, but a subsequent analysis of variance showed an increase in the mean of medicinal plants in regions with a higher incidence of malaria prior to disease control measures. We suggest that this response had a greater impact on local medical knowledge than other variables, such as genetic frequency and deforestation.

11.
Implementing an accurate face recognition system requires images with different variations, and if the database is large, we suffer from problems such as storage cost and low speed of recognition algorithms. On the other hand, in some applications only one image per person is available for training the recognition model. In this article, we propose a neural network model, inspired by a bidirectional analysis-and-synthesis brain network, which can learn a nonlinear mapping between image space and component space. Using a deep neural network model, we have tried to separate pose components from identity components. After setting these components apart, we can use them to synthesize virtual images of test data under different pose and lighting conditions. These virtual images are used to train a neural network classifier. The results showed that training the neural classifier with virtual images gives better performance than training the classifier with frontal-view images.

12.
The vast amount of data produced by today’s medical imaging systems has led medical professionals to turn to novel technologies in order to efficiently handle their data and exploit the rich information present in them. In this context, artificial intelligence (AI) is emerging as one of the most prominent solutions, promising to revolutionise everyday clinical practice and medical research. The pillar supporting the development of reliable and robust AI algorithms is the appropriate preparation of the medical images to be used by AI-driven solutions. Here, we provide a comprehensive guide to the steps necessary to prepare medical images prior to developing or applying AI algorithms. The main steps involved in a typical medical image preparation pipeline include: (i) image acquisition at clinical sites, (ii) image de-identification to remove personal information and protect patient privacy, (iii) data curation to control image and associated-information quality, (iv) image storage, and (v) image annotation. A plethora of open-access tools exists to perform each of the aforementioned tasks, and these are reviewed here. Furthermore, we detail medical image repositories covering different organs and diseases. Such repositories are constantly growing and being enriched with the advent of big data. Lastly, we offer directions for future work in this rapidly evolving field.
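Step (ii), de-identification, usually amounts to stripping protected tags from the image header before the file leaves the clinical site. A minimal dict-based sketch, independent of any particular DICOM toolkit; the tag list and audit flag are hypothetical site choices, not a standard profile.

```python
# Hypothetical header for a medical image; the keys stand in for header
# attributes, not a complete or authoritative tag dictionary.
header = {
    "PatientName": "DOE^JANE",
    "PatientBirthDate": "19700101",
    "Modality": "MR",
    "PixelSpacing": (0.5, 0.5),
}

# Assumption: a site-defined list of protected-health-information tags.
PHI_TAGS = {"PatientName", "PatientBirthDate"}

def deidentify(hdr):
    """Return a copy with protected tags removed and a flag recording the edit."""
    clean = {k: v for k, v in hdr.items() if k not in PHI_TAGS}
    clean["Deidentified"] = True
    return clean

anon = deidentify(header)
```

Real pipelines work from a standardised tag profile and also scrub burned-in text in the pixel data, but the remove-and-record pattern is the same.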

13.
The morphology of roots and root systems influences the efficiency with which plants acquire nutrients and water, anchor themselves and provide stability to the surrounding soil. Plant genotype and the biotic and abiotic environment significantly influence root morphology, growth and, ultimately, crop yield. The challenge for researchers interested in phenotyping root systems is, therefore, not just to measure roots and link their phenotype to the plant genotype, but also to understand how the growth of roots is influenced by their environment. This review discusses progress in quantifying root system parameters (e.g. in terms of size, shape and dynamics) using imaging and image analysis technologies, and also discusses their potential for providing a better understanding of root:soil interactions. Significant progress has been made in image acquisition techniques; however, trade-offs exist between sample throughput, sample size, image resolution and information gained. All of these factors impact downstream image analysis processes. While there have been significant advances in computational power, limitations still exist in the statistical processes involved in image analysis. Utilizing and combining different imaging systems, integrating measurements and image analysis where possible, and amalgamating data will allow researchers to gain a better understanding of root:soil interactions.

14.
Wavelet Transform and Its Applications in Medical Image Processing
The quality of a medical image directly affects a physician's diagnosis and treatment, so processing medical images effectively with digital image processing techniques has become a major focus of medical image processing research and development. The wavelet transform is an extension and development of the Fourier transform and has broad application prospects in medical imaging. This paper introduces the general form of the two-dimensional discrete wavelet transform and, building on image decomposition and reconstruction, systematically describes how the time-frequency properties of the wavelet transform and multiresolution analysis can be used for deeper processing of medical images, such as denoising, enhancement and edge extraction, to effectively improve image quality.
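The decompose-threshold-reconstruct scheme this entry describes can be sketched with a hand-rolled single-level 1-D Haar transform. A real application would use a full 2-D multilevel wavelet library; the signal, noise level, and threshold below are illustrative.

```python
import numpy as np

def haar1d(x):
    """Single-level 1-D Haar transform: (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def ihaar1d(a, d):
    """Inverse of haar1d: perfect reconstruction from (a, d)."""
    out = np.empty(a.size * 2)
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

def denoise(x, thresh):
    """Zero small detail coefficients (hard threshold), then reconstruct."""
    a, d = haar1d(x)
    d = np.where(np.abs(d) > thresh, d, 0.0)
    return ihaar1d(a, d)

rng = np.random.default_rng(2)
clean = np.repeat([1.0, 4.0, 2.0, 3.0], 16)       # piecewise-constant "signal"
noisy = clean + rng.normal(0.0, 0.1, clean.size)  # additive Gaussian noise
out = denoise(noisy, thresh=0.5)
```

Noise spreads into many small detail coefficients while structure concentrates in a few large ones, so thresholding the details suppresses noise with little damage to edges; image denoising applies the same idea to 2-D subbands.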

15.
Microbes frequently live within multicellular, solid-surface-attached assemblages termed biofilms. These microbial communities have architectural features that contribute to population heterogeneity and consequently to emergent cell functions. Therefore, three-dimensional (3D) features of biofilm structure are important for understanding the physiology and ecology of these microbial systems. This paper details several protocols for scanning electron microscopy and confocal laser scanning microscopy (CLSM) of biofilms grown on polystyrene pegs in the Calgary Biofilm Device (CBD). Furthermore, a procedure is described for image processing of CLSM data stacks using amira™, a virtual reality tool, to create surface- and/or volume-rendered 3D visualizations of biofilm microorganisms. The combination of microscopy with microbial cultivation in the CBD, an apparatus designed for high-throughput susceptibility testing, allows for structure-function analysis of biofilms under multivariate growth and exposure conditions.

16.
Virtualization technology reduces the costs of server installation, operation, and maintenance, and it can simplify the development of distributed systems. Currently, there are various virtualization technologies such as Xen, KVM, and VMware, and these technologies individually support various virtualization functions on heterogeneous platforms. It is therefore important to be able to integrate and manage these heterogeneous virtualized resources in order to develop distributed systems based on current virtualization techniques. This paper presents an integrated management system that is able to provide information on the usage of heterogeneous virtual resources and also to control them. The main focus of the system is to abstract various virtual resources and to reconfigure them flexibly. To this end, an integrated management system has been developed and implemented based on a libvirt-based virtualization API and the Data Distribution Service (DDS).
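The abstraction-and-reconfiguration idea can be sketched as a thin uniform interface over heterogeneous hypervisor back-ends, which is the role a libvirt-style API plays. Class and method names here are illustrative, not the paper's actual design.

```python
# Minimal sketch: one uniform handle per VM, one manager aggregating them,
# regardless of which hypervisor (Xen, KVM, VMware, ...) hosts each VM.

class VirtualResource:
    """Uniform handle for a VM independent of the underlying hypervisor."""
    def __init__(self, name, backend, vcpus):
        self.name, self.backend, self.vcpus = name, backend, vcpus
        self.running = False

    def start(self):
        self.running = True

    def reconfigure(self, vcpus):
        # Flexible reconfiguration is the system's stated main focus.
        self.vcpus = vcpus

class ResourceManager:
    """Integrated view over resources from heterogeneous platforms."""
    def __init__(self):
        self.pool = []

    def register(self, res):
        self.pool.append(res)

    def usage(self):
        """Aggregate usage information for all managed virtual resources."""
        return {r.name: (r.backend, r.vcpus, r.running) for r in self.pool}

mgr = ResourceManager()
mgr.register(VirtualResource("vm-a", "kvm", 2))
mgr.register(VirtualResource("vm-b", "xen", 4))
mgr.pool[0].start()
mgr.pool[1].reconfigure(vcpus=8)
```

In the real system the `start`/`reconfigure` calls would be dispatched through the libvirt API and the usage information distributed over DDS rather than returned from an in-process dict.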

17.
BACKGROUND: The photobleaching fluorescence resonance energy transfer (pbFRET) technique is a spectroscopic method to measure proximity relations between fluorescently labeled macromolecules using digital imaging microscopy. To calculate the energy transfer values, one has to determine the bleaching time constants in pixel-by-pixel fashion from the image series recorded on the donor-only and donor-and-acceptor double-labeled samples. Because of the large number of pixels and the time-consuming calculations, this procedure should be assisted by powerful image data processing software. No commercially available software is able to fulfill these requirements. METHODS: New evaluation software was developed to analyze pbFRET data for the Windows platform in National Instruments LabVIEW 6.1. This development environment contains a mathematical virtual instrument package, in which the Levenberg-Marquardt routine is also included. As a reference experiment, the FRET efficiency between the two chains (beta2-microglobulin and heavy chain) of major histocompatibility complex (MHC) class I glycoproteins and the FRET between MHC I and MHC II molecules were determined in the plasma membrane of JY human B lymphoma cells. RESULTS: The bleaching time constants calculated on a pixel-by-pixel basis can be displayed as a color-coded map or as a histogram from the raw image format. CONCLUSION: In this report we introduce a new version of pbFRET analysis and data processing software that is able to generate a full analysis pattern of donor photobleaching image series under various conditions.
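The pixel-by-pixel time-constant fit can be sketched in log space, where the single-exponential fit reduces to linear least squares (the software described above uses a Levenberg-Marquardt fit on the raw decay instead). The array shapes and the noise-free test stack are illustrative; the closing formula E = 1 − T_D/T_DA is the usual pbFRET relation under the assumption that FRET slows donor bleaching.

```python
import numpy as np

def bleach_time_constants(series, times):
    """Pixel-by-pixel bleaching time constants from a donor image series.

    series: (T, H, W) stack of donor intensities during photobleaching.
    Fits log I = log I0 - t / tau at every pixel by linear least squares.
    """
    T, H, W = series.shape
    logs = np.log(series.reshape(T, -1))           # (T, H*W)
    A = np.vstack([times, np.ones_like(times)]).T  # design matrix [t, 1]
    slope, _ = np.linalg.lstsq(A, logs, rcond=None)[0]
    return (-1.0 / slope).reshape(H, W)            # tau = -1 / slope

t = np.linspace(0.0, 10.0, 6)
tau_true = 4.0
# Noise-free synthetic donor series: every pixel decays with tau = 4.
stack = 100.0 * np.exp(-t / tau_true)[:, None, None] * np.ones((1, 8, 8))
tau_map = bleach_time_constants(stack, t)

# Hypothetical donor-only vs donor+acceptor constants: FRET slows donor
# bleaching, so T_DA > T_D, and the efficiency is E = 1 - T_D / T_DA.
E = 1.0 - 4.0 / 5.0
```

Applied to the two recorded series, the resulting tau maps feed directly into the color-coded efficiency map and histogram the abstract mentions.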

18.
Artificial intelligence (AI) has recently become a very popular buzzword as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, like radiology, pathology or oncology, have seized the opportunity, and considerable research and development efforts have been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks, such as diagnosis, segmentation, or classification, the key to safe and efficient use of clinical AI applications relies, in part, on informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss new trends and future research directions. This will help the reader to understand how AI methods are becoming a ubiquitous tool in any medical image analysis workflow, and pave the way for the clinical implementation of AI-based solutions.

19.
Oligonucleotide microarrays, also called "DNA chips," are currently made by a light-directed chemistry that requires a large number of photolithographic masks for each chip. Here we describe a maskless array synthesizer (MAS) that replaces the chrome masks with virtual masks generated on a computer, which are relayed to a digital micromirror array. A 1:1 reflective imaging system forms an ultraviolet image of the virtual mask on the active surface of the glass substrate, which is mounted in a flow-cell reaction chamber connected to a DNA synthesizer. Programmed chemical coupling cycles follow light exposure, and these steps are repeated with different virtual masks to grow the desired oligonucleotides in a selected pattern. This instrument has been used to synthesize oligonucleotide microarrays containing more than 76,000 features measuring 16 μm². The oligonucleotides were synthesized at high repetitive yield and, after hybridization, could readily discriminate single-base-pair mismatches. The MAS is adaptable to the fabrication of DNA chips containing probes for thousands of genes, as well as to any other solid-phase combinatorial chemistry to be performed in high-density microarrays.
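A virtual mask is, in essence, just a binary image: 1 where the micromirror directs UV light so coupling is allowed this cycle, 0 elsewhere. A tiny sketch of generating one; the array size and feature layout are illustrative, not the instrument's actual format.

```python
import numpy as np

ROWS, COLS = 8, 8   # illustrative micromirror grid, far smaller than real hardware

def virtual_mask(lit_features):
    """Binary mask: 1 = mirror tilted to expose the feature, 0 = dark."""
    mask = np.zeros((ROWS, COLS), dtype=np.uint8)
    for r, c in lit_features:
        mask[r, c] = 1
    return mask

# One coupling cycle: expose only the features whose next base is 'A'.
cycle_a = virtual_mask([(0, 0), (0, 1), (3, 4)])
```

Each synthesis cycle pairs one such mask with one chemical coupling step, so a full chip design compiles down to an ordered list of these binary arrays.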

20.
The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of the robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One approach to addressing these limitations is pattern recognition, which was originally developed for remote sensing and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories and to provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations for making the best use of pattern recognition techniques in imaging assays.
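The train-to-recognize-patterns approach can be sketched end to end with a toy feature extractor and a nearest-centroid classifier. The features, classes, and synthetic data are illustrative stand-ins for the much richer feature sets and classifiers real pattern-recognition packages provide.

```python
import numpy as np

rng = np.random.default_rng(3)

def texture_features(img):
    """Tiny illustrative feature vector: mean, variance, and mean
    gradient magnitude -- stand-ins for real texture feature sets."""
    gy, gx = np.gradient(img.astype(float))
    return np.array([img.mean(), img.var(), np.hypot(gx, gy).mean()])

# Two synthetic "phenotypes": smooth low-noise images vs rough high-contrast ones.
smooth = [rng.normal(0.5, 0.01, (16, 16)) for _ in range(10)]
rough = [rng.normal(0.5, 0.3, (16, 16)) for _ in range(10)]

X = np.array([texture_features(i) for i in smooth + rough])
y = np.array([0] * 10 + [1] * 10)

# "Training" is just one centroid per class; prediction picks the nearest.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
accuracy = (pred == y).mean()
```

The point of the paradigm is exactly this division of labour: the biologist supplies labeled examples, and the generic features-plus-classifier pipeline replaces hand-tuned, assay-specific image processing.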

