Similar articles (20 results)
1.

Background

Automated image analysis on virtual slides is evolving rapidly and will play an important role in the future of digital pathology. Because of the image size, the computational cost of processing whole slide images (WSIs) at full resolution is immense. Moreover, image analysis requires well-focused images at high magnification.

Methods

We present a system that merges virtual microscopy techniques, open source image analysis software, and distributed parallel processing. We have integrated the parallel processing framework JPPF, so that batch processing can be performed in a distributed, parallel fashion. All resulting metadata and image data are collected and merged. As an example, the system is applied to the specific task of image sharpness assessment. ImageJ is an open source image editing and processing framework developed at the NIH; its large user community contributes image processing algorithms, wrapped as plug-ins, across a wide field of life science applications. We developed an ImageJ plug-in that supports both basic interactive virtual microscopy and batch processing. For sharpness inspection we employ an approach based on non-overlapping tiles. Compute nodes retrieve image tiles of moderate size from the streaming server and compute a focus measure: each tile is divided into small sub-images on which an edge-based sharpness criterion is calculated and used for classification. The results are aggregated into a sharpness map.
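As a rough illustration of the tile and sub-image scheme, the sketch below computes a simple edge-based focus measure per sub-image and aggregates it for one tile; the sub-image size, gradient-based criterion, and threshold are assumptions, not the plug-in's actual parameters.

```python
import numpy as np

def tile_sharpness(tile, sub=64, thresh=5.0):
    """Edge-based sharpness of one grayscale tile, evaluated per sub-image.

    Hypothetical parameters: `sub` is the sub-image edge length and `thresh`
    the gradient-magnitude level above which a sub-image counts as sharp.
    """
    h, w = tile.shape
    scores = []
    for y in range(0, h - sub + 1, sub):
        for x in range(0, w - sub + 1, sub):
            block = tile[y:y + sub, x:x + sub].astype(float)
            gy, gx = np.gradient(block)
            scores.append(np.mean(np.hypot(gx, gy)))  # mean edge strength
    scores = np.array(scores)
    # overall focus measure and fraction of sub-images classified as sharp
    return scores.mean(), float(np.mean(scores > thresh))
```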

Results

Based on this system we calculate a sharpness measure and classify virtual slides into one of four categories: excellent, okay, review, and defective. Generating a scaled sharpness map enables the user to evaluate the sharpness of WSIs and shows overall quality at a glance, thus reducing tedious assessment work.

Conclusions

Using sharpness assessment as an example, the introduced system can be used to process and analyze whole slide images, and to parallelize that analysis, on the basis of open source software.

2.

Background

Currently available microscope slide scanners produce whole slide images at various resolutions from histological sections. Nevertheless, the acquisition area, and hence the visualization of large tissue samples, is limited by the standardized size of the glass slides used daily in pathology departments. The proposed solution was developed to build composite virtual slides from images of large tumor fragments.

Materials and methods

Images of HES- or immunostained histological sections of carefully labeled fragments from a representative slice of breast carcinoma were acquired with a digital slide scanner at a magnification of 20×. The tiling program involves three steps: straightening of the tissue fragment images using a polynomial interpolation method, and building and assembling strips of contiguous tissue sample whole slide images in the x and y directions. The final image is saved in the pyramidal BigTIFF file format. The program has been tested on several tumor slices, and a correlation-based quality control was performed on five artificially cut images.
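A minimal sketch of the correlation quality control mentioned above might look as follows; it simply computes the Pearson correlation between a native image and its reconstruction as a percentage, in line with how the Results figure reads, although the exact metric used by the authors is an assumption.

```python
import numpy as np

def correlation_qc(native, reconstructed):
    """Pearson correlation (as a percentage) between a native image and its
    reconstruction; both are grayscale arrays of identical shape."""
    a = native.astype(float).ravel()
    b = reconstructed.astype(float).ravel()
    return 100.0 * np.corrcoef(a, b)[0, 1]
```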

Results

Sixty tumor slices from twenty surgical specimens, cut into two to twenty-six pieces each, were reconstructed. For quality control, the correlation coefficients between native and reconstructed images yielded a median of 98.71%.

Conclusions

The proposed method is efficient and able to adapt itself to daily work conditions of classical pathology laboratories.

3.

Introduction

Data processing is one of the biggest problems in metabolomics, given the high number of samples analyzed and the need for multiple software packages for each step of the processing workflow.

Objectives

To merge the steps required for metabolomics data processing into a single platform.

Methods

KniMet is a workflow for the processing of mass spectrometry-metabolomics data based on the KNIME Analytics platform.

Results

The approach includes key steps to follow in metabolomics data processing: feature filtering, missing value imputation, normalization, batch correction and annotation.
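A minimal, generic sketch of such a processing chain on a samples-by-features intensity table is shown below; KniMet itself is a KNIME workflow, so this Python/pandas version is illustrative only, and the thresholds, imputation rule, normalization choice, and the assumed 'batch' index level are placeholders. Annotation against a compound library is omitted.

```python
import pandas as pd

def process_feature_table(df: pd.DataFrame, max_missing: float = 0.5) -> pd.DataFrame:
    """Illustrative metabolomics feature-table processing (samples x features).

    Assumes the sample index has a 'batch' level.
    """
    # feature filtering: drop features missing in too many samples
    df = df.loc[:, df.isna().mean() <= max_missing]
    # missing value imputation: half of each feature's minimum
    df = df.fillna(df.min() / 2)
    # normalization: total-intensity normalization per sample
    df = df.div(df.sum(axis=1), axis=0)
    # simple batch correction: median-center each feature within its batch,
    # then restore the overall feature medians
    df = df.groupby(level="batch").transform(lambda x: x - x.median()) + df.median()
    return df
```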

Conclusion

KniMet provides the user with a local, modular and customizable workflow for the processing of both GC–MS and LC–MS open profiling data.

4.

Background

The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications.

Results

We present a high performance computing (HPC) solution to this problem. It involves decomposing the spatial 3D image into segments that are assigned to individual processors and matched to the 3D torus architecture of the IBM Blue Gene/L machine, with communication between segments restricted to nearest neighbors. On a 2 GHz Intel CPU, 3D median filtering of a typical 256-megabyte dataset takes two and a half hours, whereas on 1024 Blue Gene nodes the same task completes in 18.8 seconds, a 478× speedup.
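The sketch below illustrates the block decomposition idea on a single machine: each block is median-filtered with a halo wide enough that the stitched result matches a global filter; on Blue Gene/L the halo data would come from nearest-neighbor communication rather than array slicing. The block and filter sizes are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def blockwise_median(volume, block=64, size=3):
    """Median-filter a 3D volume block by block with an overlapping halo."""
    halo = size // 2
    out = np.empty_like(volume)
    for z in range(0, volume.shape[0], block):
        for y in range(0, volume.shape[1], block):
            for x in range(0, volume.shape[2], block):
                # extend each block by the halo, clipped at the volume boundary
                z0, y0, x0 = max(z - halo, 0), max(y - halo, 0), max(x - halo, 0)
                z1 = min(z + block + halo, volume.shape[0])
                y1 = min(y + block + halo, volume.shape[1])
                x1 = min(x + block + halo, volume.shape[2])
                filt = median_filter(volume[z0:z1, y0:y1, x0:x1], size=size)
                # copy back only the interior (non-halo) part of the block
                out[z:z + block, y:y + block, x:x + block] = \
                    filt[z - z0:z - z0 + block,
                         y - y0:y - y0 + block,
                         x - x0:x - x0 + block]
    return out
```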

Conclusion

Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets.

5.

Background

Automated image analysis, measurements of virtual slides, and open access electronic measurement user systems require standardized image quality assessment in tissue-based diagnosis.

Aims

To describe the theoretical background and the practical experiences in automated image quality estimation of colour images acquired from histological slides.

Theory, material and measurements

Digital images acquired from histological slides should present textures and objects that permit automated analysis of the image information. The quality of digitized images can be estimated by spatially independent and local filter operations that assess homogeneous brightness, low peak-to-noise ratio (full range of available grey values), maximum gradients, an equalized grey value distribution, and the existence of grey value thresholds. Transformation of the red-green-blue (RGB) space into the hue-saturation-intensity (HSI) space permits the detection of colour and intensity maxima/minima. The feature distance of the original image to its standardized counterpart is an appropriate measure to quantify the actual image quality. These measures have been applied to a series of H&E stained, fluorescent (DAPI, Texas Red, FITC), and immunohistochemically stained (PAP, DAB) slides. More than 5,000 slides have been measured and partly analyzed in a time series.
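For reference, one common RGB-to-HSI conversion is sketched below; several variants exist, and the exact formulation used in the study is not stated here, so this is only a generic illustration.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image with float channels in [0, 1] to (H, S, I).

    One standard formulation; a small epsilon guards the divisions.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - np.minimum(np.minimum(r, g), b) / (intensity + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b <= g, theta, 2.0 * np.pi - theta)   # hue in radians
    return hue, saturation, intensity
```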

Results

Analysis of H&E stained slides revealed low shading corrections (10%) and moderate grey value standardization (10 – 20%) in the majority of cases. Immunohistochemically stained slides displayed greater shading and grey value correction. Fluorescence-stained slides often exhibited high brightness. Images requiring only low standardization corrections possess at least 5 different statistically significant thresholds, which are useful for object segmentation. Fluorescent images of good quality possess only a single intensity maximum, in contrast to good images obtained from H&E stained slides, which present with 2 – 3 intensity maxima.

Conclusion

Evaluation of image quality and creation of formally standardized images should be performed prior to automatic analysis of digital images acquired from histological slides. Spatially dependent and local filter operations, as well as analysis of the RGB and HSI spaces, are appropriate methods to reproduce the evaluated formal image quality.

6.

Introduction

Untargeted metabolomics is a powerful tool for biological discoveries. To analyze the complex raw data, significant advances in computational approaches have been made, yet it is not clear how exhaustive and reliable the data analysis results are.

Objectives

Assessment of the quality of raw data processing in untargeted metabolomics.

Methods

Five published untargeted metabolomics studies were reanalyzed.

Results

Omissions of at least 50 relevant compounds from the original results as well as examples of representative mistakes were reported for each study.

Conclusion

Incomplete raw data processing shows unexplored potential of current and legacy data.

7.

Background

Cervical cancer is the fifth most common cancer among women and the third leading cause of cancer death in women worldwide. Brachytherapy is the most effective treatment for cervical cancer. For brachytherapy, computed tomography (CT) imaging is necessary because it conveys the tissue density information needed for dose planning. However, the metal artifacts caused by brachytherapy applicators remain a challenge for the automatic processing of image data in image-guided procedures and for accurate dose calculations. Therefore, an effective metal artifact reduction (MAR) algorithm for cervical CT images is in high demand.

Methods

A novel residual-learning method based on a convolutional neural network (RL-ARCNN) is proposed to reduce metal artifacts in cervical CT images. In the first step, a dataset is generated by simulating various metal artifacts; this dataset, which includes artifact-inserted, artifact-free, and artifact-residual images, is used to train the CNN. Numerous image patches are extracted from the dataset to train the deep residual-learning artifact-reduction network. Afterwards, the trained model can be applied for MAR on cervical CT images.
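A minimal sketch of the residual-learning idea is given below: the network predicts the artifact component and subtracts it from the input. The layer count, channel width, and patch size are assumptions for illustration; they are not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class ResidualMARSketch(nn.Module):
    """Toy residual-learning CNN: predicts the artifact residual of a CT patch."""

    def __init__(self, depth: int = 8, channels: int = 64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        residual = self.body(x)   # predicted artifact component
        return x - residual       # artifact-reduced patch

# training sketch: learn to map artifact-inserted patches to artifact-free ones
model = ResidualMARSketch()
artifact_patches = torch.randn(4, 1, 64, 64)   # placeholder artifact-insert patches
clean_patches = torch.randn(4, 1, 64, 64)      # placeholder artifact-free patches
loss = nn.MSELoss()(model(artifact_patches), clean_patches)
```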

Results

The proposed method provides a good MAR result, with a PSNR of 38.09 on the test set of simulated artifact images. The PSNR of residual learning (38.09) is higher than that of ordinary learning (37.79), which shows that CNN-based residual images achieve favorable artifact reduction. Moreover, for a 512 × 512 image, the average artifact-removal time is less than 1 s.

Conclusions

RL-ARCNN shows that residual learning with a CNN markedly reduces metal artifacts, improving the visualization of critical structures and the confidence of radiation oncologists in target delineation. Metal artifacts are eliminated efficiently without sinogram data or complicated post-processing.

8.

Background

Images of frozen-hydrated (vitrified) virus particles taken close to focus in an electron microscope contain structural signals at high spatial frequencies. These images have very low contrast due to the high levels of noise present, which makes particle selection, classification, and orientation determination very difficult. The final purpose of classification is to improve the signal-to-noise ratio of the image representing each class, usually the class average. In this paper, the proposed method is based on wavelet filtering and multi-resolution processing for the classification and reconstruction of this very noisy data; a multivariate statistical analysis (MSA) is used for the classification.
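The sketch below shows a generic wavelet soft-thresholding denoiser of the kind alluded to above; the wavelet family, decomposition depth, and universal-threshold rule are assumptions rather than the paper's exact filter.

```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db4", levels=3):
    """Soft-threshold the detail coefficients of a 2D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # noise estimate from the finest-scale diagonal details (robust MAD estimator)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(image.size))   # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in level)
        for level in coeffs[1:]
    ]
    rec = pywt.waverec2(denoised, wavelet)
    return rec[:image.shape[0], :image.shape[1]]          # crop padding, if any
```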

Results

The MSA classification method is noise dependent. A set of 2600 projections from a 3D map of a herpes simplex virus, to which noise was added, was classified by MSA. The classification shows the power of wavelet filtering in enhancing the quality of class averages (used in 3D reconstruction) compared to Fourier band-pass filtering. A 3D reconstruction of a recombinant virus (VP5-VP19C) is presented as an application of multi-resolution processing for classification and reconstruction.

Conclusion

The wavelet filtering and multi-resolution processing method proposed in this paper offers a new way of processing very noisy images obtained from electron cryo-microscopes. Multi-resolution processing and filtering improve the speed and accuracy of classification, which is vital for the 3D reconstruction of biological objects. The VP5-VP19C recombinant virus reconstruction presented here demonstrates the power of this method; without this processing, it is not possible to obtain the correct 3D map of this virus.

9.

Background

The finite element method (FEM) is a powerful mathematical tool to simulate and visualize the mechanical deformation of tissues and organs during medical examinations or interventions. It remains a challenge to build an FEM mesh directly from a volumetric image, partly because the regions (or structures) of interest (ROIs) may be irregular and fuzzy.

Methods

A software package, ImageParser, was developed to generate an FEM mesh from 3D tomographic medical images. The software uses a semi-automatic method to detect ROIs from the image context, including neighboring tissues and organs, completes the segmentation of different tissues, and meshes the organ into elements.
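The exact meshing scheme of ImageParser is not described here; as a loose illustration of going from a segmented volume to finite elements, the sketch below turns each voxel of a thresholded ROI into an eight-node hexahedral element with shared corner nodes.

```python
import numpy as np

def voxels_to_hex_mesh(volume, threshold):
    """Turn voxels above `threshold` into 8-node hexahedral elements.

    Deliberately simplified: each selected voxel becomes one element whose
    nodes are the voxel's eight corners; corners shared by neighbours are merged.
    """
    nodes = {}          # (i, j, k) corner -> node id
    elements = []
    corner_offsets = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
                      (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
    for idx in zip(*np.nonzero(volume > threshold)):
        elem = []
        for off in corner_offsets:
            corner = tuple(i + o for i, o in zip(idx, off))
            if corner not in nodes:
                nodes[corner] = len(nodes)
            elem.append(nodes[corner])
        elements.append(elem)
    coords = np.array(sorted(nodes, key=nodes.get), dtype=float)  # node coordinates
    return coords, np.array(elements)                              # connectivity table
```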

Results

The ImageParser is shown to build up an FEM model for simulating the mechanical responses of the breast based on 3-D CT images. The breast is compressed by two plate paddles under an overall displacement as large as 20% of the initial distance between the paddles. The strain and tangential Young's modulus distributions are specified for the biomechanical analysis of breast tissues.

Conclusion

ImageParser can successfully extract the geometry of ROIs from a complex medical image and generate the FEM mesh with user-defined segmentation information.

10.

Background

With the improvements in biosensors and high-throughput image acquisition technologies, life science laboratories are able to perform an increasing number of experiments that generate large numbers of images at different imaging modalities and scales. This stresses the need for computer vision methods that automate image classification tasks.

Results

We illustrate the potential of our image classification method in cell biology by evaluating it on four datasets of images related to protein distributions or subcellular localizations and to red blood cell shapes. Accuracy results are quite good without any specific pre-processing or incorporation of domain knowledge. The method is implemented in Java and available upon request for evaluation and research purposes.

Conclusion

Our method is directly applicable to any image classification problem. We foresee the use of this automatic approach as a baseline method and first attempt on various biological image classification problems.

11.

Introduction

In NMR-based metabolomics, processing of 1D spectra often requires an expert eye to disentangle intertwined peaks.

Objectives

The objective of NMRProcFlow is to assist the expert in this task as effectively as possible, without requiring programming skills.

Methods

NMRProcFlow was developed to be a graphical and interactive 1D NMR (1H & 13C) spectra processing tool.

Results

NMRProcFlow (http://nmrprocflow.org), dedicated to metabolic fingerprinting and targeted metabolomics, covers all spectra processing steps including baseline correction, chemical shift calibration and alignment.
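As an illustration of one of the listed steps, the sketch below applies asymmetric least squares (ALS) baseline correction to a 1D spectrum; this is a widely used generic approach, not necessarily the algorithm implemented in NMRProcFlow, and the smoothing and asymmetry parameters are placeholders.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares baseline estimate for a 1D spectrum `y`."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # transpose of the second-difference operator, used for the smoothness penalty
    D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(n, n - 2))
    w = np.ones(n)
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, n, n)
        z = spsolve((W + lam * D @ D.T).tocsc(), w * y)
        w = p * (y > z) + (1 - p) * (y < z)   # asymmetric weights
    return z

# usage: baseline-corrected spectrum
# corrected = spectrum - als_baseline(spectrum)
```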

Conclusion

Biologists and NMR spectroscopists can easily interact and develop synergies by visualizing the NMR spectra along with their corresponding experimental-factor levels, thus setting a bridge between experimental design and subsequent statistical analyses.

12.

Background

Applications in biomedical science and life science produce large data sets using increasingly powerful imaging devices and computer simulations. It is becoming increasingly difficult for scientists to explore and analyze these data using traditional tools. Interactive data processing and visualization tools can help scientists overcome these limitations.

Results

We show that new data processing tools and visualization systems can be used successfully in biomedical and life science applications. We present an adaptive high-resolution display system suitable for biomedical image data, algorithms for analyzing and visualizing protein surfaces and retinal optical coherence tomography data, and visualization tools for 3D gene expression data.

Conclusion

We demonstrated that interactive processing and visualization methods and systems can support scientists in a variety of biomedical and life science application areas concerned with massive data analysis.

13.

Background

Cord blood lipids are potential disease biomarkers. We aimed to determine if their concentrations were affected by delayed blood processing.

Method

Refrigerated cord blood from six healthy newborns was centrifuged every 12 h for 4 days. Plasma lipids were analysed by liquid chromatography/mass spectrometry.

Results

Of 262 lipids identified, only eight varied significantly over time. These comprised three dihexosylceramides, two phosphatidylserines and two phosphatidylethanolamines whose relative concentrations increased and one sphingomyelin that decreased.

Conclusion

Delay in separation of plasma from refrigerated cord blood has minimal effect overall on the plasma lipidome.

14.
15.

Introduction

Mass spectrometry imaging (MSI) experiments result in complex multi-dimensional datasets, which require specialist data analysis tools.

Objectives

We have developed massPix—an R package for analysing and interpreting data from MSI of lipids in tissue.

Methods

massPix produces single ion images, performs multivariate statistics and provides putative lipid annotations based on accurate mass matching against generated lipid libraries.

Results

Classification of tissue regions with high spectral similarity can be carried out by principal components analysis (PCA) or k-means clustering.
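A minimal sketch of that classification step is given below in Python (massPix itself is an R package); the numbers of components and clusters are arbitrary assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# spectra: one row per pixel, one column per m/z bin (placeholder random data)
spectra = np.random.rand(5000, 300)

scores = PCA(n_components=5).fit_transform(spectra)
regions = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
# `regions` can be reshaped back to the image grid to visualise tissue regions
```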

Conclusion

massPix is an open-source tool for the analysis and statistical interpretation of MSI data, and is particularly useful for lipidomics applications.

16.

Background

The aim of this paper is to provide a general discussion, an algorithm, and actual working programs of the deformation method for fast simulation of biological tissue formed by fibers and fluid. To demonstrate the benefit for clinical applications software, we successfully used our computational program to deform 3D breast images acquired from patients with a 3D scanner in a real hospital environment.

Results

The method implements a quasi-static solution for elastic global deformations of objects. Each pair of surface vertices is connected and defines an elastic fiber. The set of all elastic fibers defines a mesh of smaller size than a volumetric mesh, allowing complex objects to be simulated with less computational effort. Behavior similar to that of the stress tensor is obtained through a volume-conservation equation that couples the 3D coordinates. We show the computational implementation of this approach step by step.
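The sketch below (echoing the 2D four-vertex rectangle example in the Conclusions) illustrates the pairwise-fiber idea with a simple quasi-static relaxation; the stiffness, step size, and prescribed displacements are illustrative, and the paper's volume-conservation constraint is omitted.

```python
import numpy as np
from itertools import combinations

def relax_fiber_mesh(vertices, fixed, k=1.0, step=0.05, n_iter=2000):
    """Quasi-static relaxation of a mesh where every vertex pair is an elastic fiber.

    `vertices`: (n, dim) initial coordinates; `fixed`: dict {index: prescribed position}.
    Rest lengths are taken from the initial configuration.
    """
    x = vertices.astype(float).copy()
    pairs = list(combinations(range(len(x)), 2))
    rest = {p: np.linalg.norm(x[p[0]] - x[p[1]]) for p in pairs}
    for _ in range(n_iter):
        forces = np.zeros_like(x)
        for i, j in pairs:
            d = x[j] - x[i]
            length = np.linalg.norm(d)
            if length > 0:
                f = k * (length - rest[(i, j)]) * d / length  # linear fiber force
                forces[i] += f
                forces[j] -= f
        x += step * forces
        for idx, pos in fixed.items():   # enforce prescribed displacements
            x[idx] = pos
    return x

# 2D rectangle with 4 vertices: compress the top edge downward by 20 %
rect = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
deformed = relax_fiber_mesh(rect, fixed={2: [1.0, 0.8], 3: [0.0, 0.8]})
```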

Conclusions

As an example, a 2D rectangle formed by only four vertices is solved and, for this simple geometry, all intermediate results are shown. In addition, actual implementations of these ideas in the form of working computer routines are provided for general 3D objects, including a clinical application.

17.

Background

The dynamic growing and shortening behaviors of microtubules are central to the fundamental roles played by microtubules in essentially all eukaryotic cells. Traditionally, microtubule behavior is quantified by manually tracking individual microtubules in time-lapse images under various experimental conditions. Manual analysis is laborious, approximate, and often offers limited analytical capability in extracting potentially valuable information from the data.

Results

In this work, we present computer vision and machine-learning based methods for extracting novel dynamics information from time-lapse images. Using actual microtubule data, we estimate statistical models of microtubule behavior that are highly effective in identifying common and distinct characteristics of microtubule dynamic behavior.

Conclusion

Computational methods provide powerful analytical capabilities in addition to traditional analysis methods for studying microtubule dynamic behavior. Novel capabilities, such as building and querying microtubule image databases, are introduced to quantify and analyze microtubule dynamic behavior.

18.

Background

Cephalometric analysis and the measurement of skull parameters using X-ray images play an important role in predicting and monitoring orthodontic treatment. Manual cephalometric analysis and measurement are considered tedious, time consuming, and subject to human error. Several systems have been developed to automate the cephalometric procedure; however, no clear insights have been reported about their reliability, performance, and usability. This study evaluates the reliability, performance, and usability (using the System Usability Scale, SUS) of the developed cephalometric system, which has not been reported in previous studies.

Methods

In this study a novel system named Ceph-X was developed to computerize the manual tasks of orthodontists during cephalometric measurement. Ceph-X is built with image processing techniques and three main models: an X-ray image enhancement model, a landmark-locating model, and a computation model. Ceph-X was then evaluated using X-ray images of 30 subjects (male and female) obtained from the University of Malaya hospital. Three orthodontic specialists were involved in evaluating accuracy (to avoid intra-examiner error) and performance, and 20 orthodontic specialists evaluated usability and user satisfaction using the SUS approach.
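For reference, the SUS score used in the usability evaluation follows a fixed formula, sketched below for a single ten-item questionnaire.

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items contribute (response - 1), even-numbered items
    contribute (5 - response); the sum is scaled by 2.5 to give 0-100.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# example: one specialist's questionnaire
print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # -> 90.0
```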

Results

Statistical comparison between the manual and automatic cephalometric approaches showed that Ceph-X achieved high accuracy of approximately 96.6%, with acceptable error variation of less than approximately 0.5 mm and 1°. The results showed that Ceph-X increased specialist performance and minimized the processing time needed to obtain cephalometric measurements of the human skull. Furthermore, the SUS analysis showed that Ceph-X received excellent usability feedback from users.

Conclusions

Ceph-X has proved its reliability, performance, and usability for use by orthodontists in cephalometric analysis, diagnosis, and treatment.

19.

Background

Significant advances in mobile sensing technologies have generated great interest in application development for the Internet of Things (IoT). With the advantages of contactless data retrieval and efficient data processing offered by intelligent IoT-based objects, versatile and innovative on-demand medical services have rapidly been developed and deployed. Critical characteristics of the data processing and operation must be thoroughly considered. To achieve efficient data retrieval and robust communication among IoT-based objects, strong security primitives are required to preserve data confidentiality and provide entity authentication.

Methods

A robust nursing-care support system is developed for efficient and secure communication among mobile bio-sensors, active intelligent objects, the IoT gateway and the backend nursing-care server in which further data analysis can be performed to provide high-quality and on-demand nursing-care service.
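As a loose illustration of entity authentication between a bio-sensor and the gateway, the sketch below uses an HMAC-based challenge-response over a pre-shared key; this is a generic pattern, not the protocol proposed in the paper, and the key setup and identifiers are assumptions.

```python
import hmac
import hashlib
import os

# pre-shared key between a bio-sensor and the IoT gateway (assumed setup)
KEY = os.urandom(32)

def sensor_respond(challenge, sensor_id, key=KEY):
    """Sensor proves knowledge of the shared key without revealing it."""
    return hmac.new(key, challenge + sensor_id, hashlib.sha256).digest()

def gateway_verify(challenge, sensor_id, response, key=KEY):
    """Gateway recomputes the tag and compares in constant time."""
    expected = hmac.new(key, challenge + sensor_id, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)                              # gateway-issued nonce
tag = sensor_respond(challenge, b"sensor-01")
print(gateway_verify(challenge, b"sensor-01", tag))     # True
```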

Results

We implemented the system on an IoT-based testbed, i.e. the Raspberry Pi 2 platform, to demonstrate the practicability of the proposed IoT-oriented nursing-care support system; a modest computation cost of 6.33 ms is required for a normal session of the proposed system. Based on the protocol analysis we conducted, the security robustness of the proposed nursing-care support system is guaranteed.

Conclusions

According to the protocol analysis and performance evaluation, the practicability of the proposed method is demonstrated. In brief, we can claim that our proposed system is very suitable for IoT-based environments and will be a highly competitive candidate for the next generation of nursing-care service systems.

20.
Guo S, Tang J, Deng Y, Xia Q. BMC Genomics. 2010;11(Z2):S13

Background

Starches are the main storage polysaccharides in plants and are distributed widely throughout plant organs, including seeds, roots, tubers, leaves, and stems. Currently, microscopic observation is one of the most important ways to investigate and analyze the structure of starches, with the position, shape, and size of the starch granules as the main measurements for quantitative analysis. Obtaining these measurements requires segmenting starch granules from the background. However, automatic segmentation of starch granules is still a challenging task because of limitations in imaging conditions and the complexity of overlapping granules.

Results

We propose a novel method to segment starch granules in microscopic images. In the proposed method, we first separate starch granules from the background using automatic thresholding and then roughly segment the image using the watershed algorithm. To reduce the oversegmentation of the watershed algorithm, we use the roundness of each segment and analyze the gradient vector field to find critical points that identify oversegments. Once oversegments are found, we extract features such as their position and intensity and use fuzzy c-means clustering to merge the oversegments into objects with similar features. Experimental results demonstrate that the proposed method successfully alleviates the oversegmentation of the watershed algorithm.
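A minimal sketch of the first two stages (automatic thresholding followed by a distance-transform watershed) is given below; the parameters are placeholders, granules are assumed brighter than the background, and the roundness/gradient-field-based merging of oversegments described above is not included.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_granules(gray):
    """Otsu thresholding followed by a distance-transform watershed."""
    mask = gray > threshold_otsu(gray)                  # granules vs. background
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=5, labels=mask)
    markers = np.zeros_like(gray, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # may still over-segment touching granules; merging is a separate step
    return watershed(-distance, markers, mask=mask)
```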

Conclusions

We present a new scheme for starch granule segmentation that aims to alleviate the oversegmentation of the watershed algorithm. We use shape information and the critical points of the gradient vector flow (GVF) of starch granules to identify oversegments, and use fuzzy c-means clustering based on prior knowledge to merge these oversegments into objects. Experimental results on twenty microscopic starch images demonstrate the effectiveness of the proposed scheme.
