Similar Documents
20 similar documents found.
1.

Background

The processing of images acquired through microscopy is a challenging task due to the large size of the datasets (several gigabytes) and the fast turnaround times required. If the throughput of the image processing stage can be significantly increased, it can have a major impact on microscopy applications.

Results

We present a high performance computing (HPC) solution to this problem. The spatial 3D image is decomposed into segments that are assigned to unique processors and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to nearest neighbors. On a 2 GHz Intel CPU, 3D median filtering of a typical 256-megabyte dataset takes two and a half hours, whereas on 1024 Blue Gene/L nodes the same task completes in 18.8 seconds, a 478× speedup.
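
The abstract gives no implementation details beyond the decomposition itself, so the following is only a minimal single-machine sketch in Python (the helper name blocked_median_filter is ours): each block is padded with a halo of boundary voxels, which is exactly the data a node would fetch from its nearest neighbors on the torus before filtering its own segment.

```python
import numpy as np
from scipy.ndimage import median_filter

def blocked_median_filter(volume, size=3, blocks=(2, 2, 2)):
    """Filter a 3D volume in independent blocks with halo overlap.

    Each block is padded by the filter radius (the 'halo'), filtered
    locally, then cropped -- mimicking nearest-neighbor exchange on a
    torus, where each node only needs its neighbors' boundary voxels.
    """
    halo = size // 2
    out = np.empty_like(volume)
    zs, ys, xs = (np.array_split(np.arange(n), b)
                  for n, b in zip(volume.shape, blocks))
    for z in zs:
        for y in ys:
            for x in xs:
                # Extend the block by the halo, clipped to volume bounds.
                lo = [max(z[0] - halo, 0), max(y[0] - halo, 0), max(x[0] - halo, 0)]
                hi = [min(z[-1] + halo + 1, volume.shape[0]),
                      min(y[-1] + halo + 1, volume.shape[1]),
                      min(x[-1] + halo + 1, volume.shape[2])]
                sub = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
                filt = median_filter(sub, size=size)
                # Copy back only the interior (non-halo) region.
                out[z[0]:z[-1] + 1, y[0]:y[-1] + 1, x[0]:x[-1] + 1] = \
                    filt[z[0] - lo[0]:z[0] - lo[0] + len(z),
                         y[0] - lo[1]:y[0] - lo[1] + len(y),
                         x[0] - lo[2]:x[0] - lo[2] + len(x)]
    return out
```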

Conclusion

Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct large-scale experiments with massive datasets that were previously impractical.

2.

Background

Automated image analysis, measurements of virtual slides, and open access electronic measurement user systems require standardized image quality assessment in tissue-based diagnosis.

Aims

To describe the theoretical background and the practical experiences in automated image quality estimation of colour images acquired from histological slides.

Theory, material and measurements

Digital images acquired from histological slides should present textures and objects that permit automated analysis of the image information. The quality of digitized images can be estimated by spatially independent and local filter operations that assess homogeneous brightness, low peak-to-noise ratio (use of the full range of available grey values), maximum gradients, an equalized grey value distribution, and the existence of grey value thresholds. Transforming the red-green-blue (RGB) space into the hue-saturation-intensity (HSI) space permits the detection of colour and intensity maxima/minima. The feature distance of the original image to its standardized counterpart is an appropriate measure to quantify the actual image quality. These measures have been applied to a series of H&E stained, fluorescent (DAPI, Texas Red, FITC), and immunohistochemically stained (PAP, DAB) slides. More than 5,000 slides have been measured, and a subset analyzed as a time series.
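
The RGB-to-HSI transformation mentioned above follows a standard geometric formula; the sketch below is a plain numpy rendering of it (the authors' exact conventions and scaling are not stated in the abstract).

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1]) to HSI channels.

    The hue/saturation/intensity decomposition lets colour maxima and
    minima be located independently of brightness.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    minimum = np.minimum(np.minimum(r, g), b)
    saturation = 1.0 - minimum / np.maximum(intensity, 1e-8)
    # Hue from the standard geometric formula on the RGB cube.
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b <= g, theta, 2 * np.pi - theta)
    return hue, saturation, intensity
```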

Results

Analysis of H&E stained slides revealed low shading corrections (10%) and moderate grey value standardization (10–20%) in the majority of cases. Immunohistochemically stained slides required greater shading and grey value correction. Fluorescently stained slides often exhibited excessive brightness. Images requiring only low standardization corrections possess at least 5 statistically significant thresholds, which are useful for object segmentation. Fluorescent images of good quality possess only a single intensity maximum, in contrast to good images obtained from H&E stained slides, which present with 2–3 intensity maxima.

Conclusion

Evaluation of image quality and creation of formally standardized images should be performed prior to automatic analysis of digital images acquired from histological slides. Spatially dependent and local filter operations, as well as analysis of the RGB and HSI spaces, are appropriate methods to reproduce the evaluated formal image quality.

3.

Background

With improvements in biosensors and high-throughput image acquisition technologies, life science laboratories are able to perform an increasing number of experiments that generate large numbers of images at different imaging modalities and scales. This stresses the need for computer vision methods that automate image classification tasks.

Results

We illustrate the potential of our image classification method in cell biology by evaluating it on four datasets of images related to protein distributions or subcellular localizations, and red blood cell shapes. Accuracy is good without any dataset-specific pre-processing or incorporation of domain knowledge. The method is implemented in Java and available upon request for evaluation and research purposes.

Conclusion

Our method is directly applicable to any image classification problem. We foresee the use of this automatic approach as a baseline method and first attempt on various biological image classification problems.

4.

Background

It is difficult for neurosurgeons to perceive the complex three-dimensional anatomical relationships in the sellar region.

Methods

The aim was to investigate the value of a virtual reality system for planning resection of sellar region tumors. The study included 60 patients with sellar tumors. All patients underwent computed tomography angiography and T1-weighted MRI (T1WI) with and without contrast enhancement. The CT and MRI scanning data were collected and imported into a Dextroscope imaging workstation, a virtual reality system that allows structures to be viewed stereoscopically. During preoperative assessment, typical images for each patient were chosen and printed out for use by the surgeons as references during surgery.

Results

All sellar tumor models clearly displayed bone, the internal carotid artery, the circle of Willis and its branches, the optic nerve and chiasm, the ventricular system, tumor, brain, soft tissue and adjacent structures. Depending on the location of the tumors, we simulated the single-nostril transsphenoidal approach, the pterional approach, and other approaches. Eleven surgeons who used the virtual reality models completed a survey questionnaire. Nine of the participants said that the virtual reality images were superior to other images, but should be used in combination with them.

Conclusions

The three-dimensional virtual reality models were helpful for individualized surgical planning in the sellar region. Virtual reality appears promising as a valuable tool for sellar region surgery.

5.
6.

Background

Applications in biomedical and life science produce large data sets using increasingly powerful imaging devices and computer simulations. It is becoming increasingly difficult for scientists to explore and analyze these data using traditional tools. Interactive data processing and visualization tools can help scientists overcome these limitations.

Results

We show that new data processing tools and visualization systems can be used successfully in biomedical and life science applications. We present an adaptive high-resolution display system suitable for biomedical image data, algorithms for analyzing and visualizing protein surfaces and retinal optical coherence tomography data, and visualization tools for 3D gene expression data.

Conclusion

We demonstrated that interactive processing and visualization methods and systems can support scientists in a variety of biomedical and life science application areas concerned with massive data analysis.

7.

Background

Non-proliferative diabetic retinopathy is the early stage of diabetic retinopathy. Automatic detection of non-proliferative diabetic retinopathy is significant for clinical diagnosis, early screening, and monitoring of disease progression.

Methods

This paper introduces the design and implementation of an automatic system for screening non-proliferative diabetic retinopathy based on color fundus images. First, the fundus structures, including blood vessels, optic disc and macula, are extracted and located. In particular, a new optic disc localization method using parabolic fitting is proposed, based on the physiological structure characteristics of the optic disc and blood vessels. Early lesions, such as microaneurysms, hemorrhages and hard exudates, are then detected based on their respective characteristics. An equivalent optical model simulating the human eye is designed based on the anatomical structure of the retina, and the main structures and early lesions are reconstructed in 3D space for better visualization. Finally, the severity of each image is graded according to the international criteria for diabetic retinopathy.
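
The abstract does not give the parabola's exact formulation. A plausible minimal reading, sketched below with a helper name of our choosing, is to fit a parabola to centerline points of the main vessel arcades and take its vertex as the disc estimate, since the temporal arcades roughly open away from the optic disc.

```python
import numpy as np

def locate_optic_disc(vessel_points):
    """Fit a parabola col = a*row^2 + b*row + c to main-arch vessel
    centerline points (row, col) and return the vertex as the optic
    disc estimate; the arcades roughly open away from the disc."""
    rows = vessel_points[:, 0].astype(float)
    cols = vessel_points[:, 1].astype(float)
    a, b, c = np.polyfit(rows, cols, 2)      # least-squares parabola
    vertex_row = -b / (2 * a)                # stationary point of the fit
    vertex_col = np.polyval([a, b, c], vertex_row)
    return vertex_row, vertex_col
```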

Results

The system has been tested on public databases and on images from hospitals. Experimental results demonstrate that the proposed system achieves high accuracy in detecting the main structures and early lesions. The severity classification results for non-proliferative diabetic retinopathy are also accurate.

Conclusions

Our system can assist ophthalmologists in clinical diagnosis, automatic screening, and monitoring of disease progression.

8.

Background

Many glass slide scanners are now on the market. The quality of the digital images they produce may differ, and pathologists who examine virtual slides on a monitor can only evaluate it subjectively. Objective comparison of the quality of digital slides captured by different devices requires assessment algorithms that can be executed automatically.

Methods

In this work such an algorithm is proposed and implemented. It is designed to compare the quality of virtual slides that show the same glass slide captured by two or more scanners. In the first step the method looks for the largest corresponding areas in the slides; this is realized by defining tissue boundaries and computing the relative scale factor. Then a number of smaller areas showing the same fragments of both slides are selected. The chosen fragments are analyzed using the Gray Level Co-occurrence Matrix (GLCM), from which Haralick features such as contrast and entropy are calculated. Based on results for sample images, the features appropriate for quality assessment are chosen. Aggregating the values from all selected fragments allows the quality of images captured by the tested devices to be compared.
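
As an illustration of the fragment analysis step, the sketch below computes a GLCM and two Haralick features (contrast and entropy) with scikit-image; the quantization level and offsets are arbitrary example choices, not the paper's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_quality_features(gray_patch, levels=64):
    """Compute GLCM-based texture features for one image fragment.

    gray_patch: 2-D uint8 array. Intensities are quantized to `levels`
    bins before building the co-occurrence matrix, which keeps the
    GLCM small and comparable across scanners.
    """
    q = (gray_patch.astype(np.uint16) * levels // 256).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    contrast = graycoprops(glcm, 'contrast').mean()
    # Entropy is not provided by graycoprops; compute it directly,
    # pooled over the two offsets.
    p = glcm[glcm > 0]
    entropy = -np.sum(p * np.log2(p))
    return {'contrast': contrast, 'entropy': entropy}
```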

Results

The described method was tested on two sets of ten virtual slides, acquired by scanning the same set of ten glass slides with two different devices. The first set was scanned and digitized using a robotic Axioscope2 microscope (Zeiss) equipped with an AxioCam HRc CCD camera; the second set was scanned by a DeskScan (Zeiss) with standard equipment. Before the captured virtual slides were analyzed, the images were stitched and converted using software that draws on advances in aerial and satellite imaging. The results of the experiment show that the calculated quality factors are higher for the virtual slides acquired with the first device (Axioscope2 with AxioCam).

Conclusions

The test results are consistent with the opinion of the pathologists who assessed the quality of virtual slides captured by these devices. This shows that the method has potential for automatic evaluation of virtual slide quality.

9.

Background

Currently available microscope slide scanners produce whole slide images at various resolutions from histological sections. Nevertheless, the acquisition area, and hence the visualization of large tissue samples, is limited by the standardized size of the glass slides used daily in pathology departments. The proposed solution was developed to build composite virtual slides from images of large tumor fragments.

Materials and methods

Images of HES or immunostained histological sections of carefully labeled fragments from a representative slice of breast carcinoma were acquired with a digital slide scanner at a magnification of 20×. The tiling program involves three steps: straightening the tissue fragment images using a polynomial interpolation method, then building and assembling strips of contiguous tissue sample whole slide images in the x and y directions. The final image is saved in the pyramidal BigTIFF file format. The program has been tested on several tumor slices, and a correlation-based quality control was performed on five artificially cut images.

Results

Sixty tumor slices from twenty surgical specimens, cut into two to twenty-six pieces, were reconstructed. For quality control, the correlation coefficients between native and reconstructed images yielded a median of 98.71%.

Conclusions

The proposed method is efficient and able to adapt to the daily working conditions of classical pathology laboratories.

10.

Background

To perform a three-dimensional (3-D) reconstruction of electron cryomicroscopy (cryo-EM) images of viruses, it is necessary to determine the similarity of image blocks of the two-dimensional (2-D) projections of the virus. The projections containing high resolution information are typically very noisy. Instead of the traditional Euclidean metric, this paper proposes a new method, based on the geodesic metric, to measure the similarity of blocks.
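
The abstract does not spell out how the geodesic metric is realized. A common construction, sketched below under that assumption, is to connect each image block to its nearest neighbors with Euclidean edge weights and take shortest-path distances through the graph, so that distance is measured along the patch manifold rather than straight through the ambient space.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def geodesic_block_distances(blocks, n_neighbors=8):
    """Approximate geodesic distances between image blocks.

    blocks: (n_blocks, block_size) array, each row a flattened patch.
    Builds a kNN graph with Euclidean edge weights; shortest-path
    distance through the graph approximates distance along the patch
    manifold, which is more robust to noise than raw Euclidean distance.
    """
    graph = kneighbors_graph(blocks, n_neighbors, mode='distance')
    # Symmetrize so the graph is undirected before Dijkstra.
    graph = 0.5 * (graph + graph.T)
    return shortest_path(graph, method='D', directed=False)
```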

Results

Our method is a 2-D image denoising approach. A data set of 2243 cytoplasmic polyhedrosis virus (CPV) capsid particle images in different orientations was used to test the proposed method. Compared with block-matching and three-dimensional filtering (BM3D), Stein's unbiased risk estimator (SURE), BayesShrink, and K-means singular value decomposition (K-SVD), the experimental results show that the proposed method can achieve a peak signal-to-noise ratio (PSNR) of 45.65. The method can remove noise from cryo-EM images and improve the accuracy of particle picking.

Conclusions

The main contribution of the proposed model is to apply the geodesic distance to measure the similarity of image blocks. We conclude that manifold learning methods can effectively eliminate noise from cryo-EM images and improve the accuracy of particle picking.

11.

Background and aims

Root hair growth and development are important features of plant response to varying soil conditions and of nutrient and water uptake. Most current methods of characterizing root hairs in the field are unreliable or inefficient. We describe a method to quantify root hair area in digital images, such as those collected in situ by minirhizotron systems.

Methods

This method uses the open-source software ImageJ and R and is partially automated using the code presented here. It requires manual tracing of a subset of root hair images (the training data set), to which a multivariate logistic regression is fit with each color channel of the image as an independent variable. The model is then applied to complete sets of selected root hair sections to estimate total root hair area.
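
The published workflow is in ImageJ and R; purely as an illustration of the same two-stage idea (fit on traced pixels, then predict on new sections), here is an equivalent Python sketch using scikit-learn, with helper names of our choosing.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_root_hair_model(image, labels):
    """Fit a per-pixel logistic regression on RGB channels.

    image:  (H, W, 3) float array of a traced training image.
    labels: (H, W) boolean array, True where pixels were manually
            traced as root hair (the training data set).
    """
    X = image.reshape(-1, 3)          # one row per pixel, RGB features
    y = labels.ravel().astype(int)
    return LogisticRegression(max_iter=1000).fit(X, y)

def estimate_hair_area(model, image, pixel_area=1.0):
    """Apply the fitted model to a new image section and sum the
    predicted root-hair pixels into a total area estimate."""
    proba = model.predict_proba(image.reshape(-1, 3))[:, 1]
    return (proba > 0.5).sum() * pixel_area
```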

Results

There was good agreement between the training data sets and the predictions of the regression models in castor (Ricinus communis L.), maize (Zea mays L.), and papaya (Carica papaya L.).

Conclusion

This method enables time-efficient and consistent quantification of root hairs using in situ root imaging systems that are already in wide use.

12.

Introduction

Data processing is one of the biggest problems in metabolomics, given the high number of samples analyzed and the need for multiple software packages for each step of the processing workflow.

Objectives

To merge into one platform the steps required for metabolomics data processing.

Methods

KniMet is a workflow for the processing of mass spectrometry metabolomics data, built on the KNIME Analytics Platform.

Results

The approach includes key steps to follow in metabolomics data processing: feature filtering, missing value imputation, normalization, batch correction and annotation.
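
KniMet itself is a KNIME workflow rather than a code library; purely to make the listed steps concrete, here is a minimal pandas sketch of the first three (feature filtering, missing value imputation, normalization) with arbitrary example thresholds; batch correction and annotation are omitted.

```python
import pandas as pd

def process_features(df, min_presence=0.5):
    """Illustrative metabolomics feature-table processing.

    df: samples x features intensity table. The steps mirror the
    pipeline named above; the thresholds are example choices only.
    """
    # Feature filtering: drop features detected in too few samples.
    keep = df.notna().mean() >= min_presence
    df = df.loc[:, keep]
    # Missing value imputation: half the feature's minimum intensity.
    df = df.apply(lambda col: col.fillna(col.min() / 2))
    # Normalization: total-intensity scaling per sample.
    df = df.div(df.sum(axis=1), axis=0)
    return df
```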

Conclusion

KniMet provides the user with a local, modular and customizable workflow for the processing of both GC–MS and LC–MS open profiling data.

13.

Introduction

Untargeted metabolomics is a powerful tool for biological discovery. Significant advances have been made in computational approaches to analyzing the complex raw data, yet it is not clear how exhaustive and reliable the results of this analysis are.

Objectives

Assessment of the quality of raw data processing in untargeted metabolomics.

Methods

Five published untargeted metabolomics studies were reanalyzed.

Results

For each study, omissions of at least 50 relevant compounds from the original results, as well as representative mistakes, were reported.

Conclusion

Incomplete raw data processing reveals unexplored potential in current and legacy data.

14.

Background

Images embedded in biomedical publications carry rich information that often concisely summarizes the key hypotheses adopted, methods employed, or results obtained in a published study. They therefore offer valuable clues for understanding the main content of a biomedical publication. Prior studies have pointed out the potential of mining images embedded in biomedical publications for automatically understanding and retrieving those images' associated source documents. Within the broad area of biomedical image processing, categorizing biomedical images is a fundamental step in building many advanced image analysis, retrieval, and mining applications. As in any automatic categorization effort, discriminative image features provide the most crucial aid in the process.

Method

We observe that many images embedded in biomedical publications carry versatile annotation text. Based on the locations of, and the spatial relationships between, these text elements in an image, we propose novel image features for image categorization that quantitatively characterize the spatial positions and distributions of text elements inside a biomedical image. We further adopt a sparse coding representation (SCR) based technique to categorize images embedded in biomedical publications by leveraging these newly proposed image features.
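
The paper's exact feature definitions are not given in this abstract; the sketch below shows one simple way such spatial text features could look, combining a coarse grid histogram of text-box centers with their mean position (the function name and grid size are our own choices).

```python
import numpy as np

def text_layout_features(text_boxes, image_shape, grid=(3, 3)):
    """Spatial features from text elements detected inside a figure.

    text_boxes:  list of (x, y, w, h) bounding boxes of text elements.
    image_shape: (height, width) of the figure.
    Returns a coarse grid histogram of text-box centers plus the mean
    center position -- one way to encode where annotation text sits.
    """
    h, w = image_shape
    hist = np.zeros(grid)
    centers = []
    for x, y, bw, bh in text_boxes:
        cx, cy = (x + bw / 2) / w, (y + bh / 2) / h   # normalized center
        centers.append((cx, cy))
        gi = min(int(cy * grid[0]), grid[0] - 1)
        gj = min(int(cx * grid[1]), grid[1] - 1)
        hist[gi, gj] += 1
    hist = hist.ravel() / max(len(text_boxes), 1)
    mean_center = np.mean(centers, axis=0) if centers else np.zeros(2)
    return np.concatenate([hist, mean_center])
```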

Results

We randomly selected 990 images in JPG format for use in our experiments, of which 310 were used as training samples and the rest as test cases. We first segmented the 310 sample images following our proposed procedure, producing a total of 1035 sub-images. We then manually labeled all these sub-images according to the two-level hierarchical image taxonomy proposed by [1]. Among our annotation results, 316 are microscopy images, 126 are gel electrophoresis images, 135 are line charts, 156 are bar charts, 52 are spot charts, 25 are tables, 70 are flow charts, and the remaining 155 images are of the type "others". A series of experimental results was obtained. First, the categorization results for each image type are presented, along with performance indices such as precision, recall, and F-score. Second, conventional image features and our proposed novel features are shown to yield different categorization performance, and the results are compared. Third, we compare the accuracy of a support vector machine classifier with that of our proposed sparse representation classifier. Finally, our approach is compared with three peer classification methods, and the experimental results confirm its markedly improved performance.

Conclusions

Compared with conventional image features that do not exploit the positions and distributions of text inside images embedded in biomedical publications, our proposed image features coupled with the SCR-based representation model exhibit superior performance for classifying biomedical images, as demonstrated in our comparative benchmark study.

15.

Introduction

In NMR-based metabolomics, the processing of 1D spectra often requires an expert eye to disentangle intertwined peaks.

Objectives

The objective of NMRProcFlow is to assist the expert in this task as effectively as possible, without requiring programming skills.

Methods

NMRProcFlow was developed to be a graphical and interactive 1D NMR (1H & 13C) spectra processing tool.

Results

NMRProcFlow (http://nmrprocflow.org), dedicated to metabolic fingerprinting and targeted metabolomics, covers all spectra processing steps including baseline correction, chemical shift calibration and alignment.

Conclusion

Biologists and NMR spectroscopists can easily interact and develop synergies by visualizing the NMR spectra along with their corresponding experimental factor levels, thus building a bridge between experimental design and subsequent statistical analyses.

16.

Background

The finite element method (FEM) is a powerful mathematical tool for simulating and visualizing the mechanical deformation of tissues and organs during medical examinations or interventions. It remains a challenge to build an FEM mesh directly from a volumetric image, partly because the regions (or structures) of interest (ROIs) may be irregular and fuzzy.

Methods

A software package, ImageParser, was developed to generate an FEM mesh from 3-D tomographic medical images. The software uses a semi-automatic method to detect ROIs within the image context, including neighboring tissues and organs, completes the segmentation of the different tissues, and meshes the organ into elements.
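
The abstract does not describe ImageParser's meshing algorithm; as a generic illustration of turning a segmented volume into elements, the sketch below converts every voxel of a binary ROI mask into an 8-node hexahedral element with shared nodes deduplicated. A real mesher would additionally smooth boundaries and grade element sizes.

```python
import numpy as np

def voxels_to_hex_mesh(mask, spacing=(1.0, 1.0, 1.0)):
    """Turn a binary 3-D segmentation mask into a hexahedral FEM mesh.

    Every True voxel becomes one 8-node brick element; shared corner
    nodes are deduplicated so neighboring elements stay connected.
    Returns (nodes, elements): node coordinates and per-element node
    indices. A deliberately simple stand-in for a full mesher.
    """
    node_ids, nodes, elements = {}, [], []
    corners = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
               (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]  # hex order
    for i, j, k in zip(*np.nonzero(mask)):
        elem = []
        for di, dj, dk in corners:
            key = (i + di, j + dj, k + dk)
            if key not in node_ids:
                node_ids[key] = len(nodes)
                nodes.append([key[0] * spacing[0],
                              key[1] * spacing[1],
                              key[2] * spacing[2]])
            elem.append(node_ids[key])
        elements.append(elem)
    return np.array(nodes), np.array(elements)
```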

Results

ImageParser is shown to build an FEM model for simulating the mechanical response of the breast based on 3-D CT images. The breast is compressed by two plate paddles under an overall displacement as large as 20% of the initial distance between the paddles. The strain and tangential Young's modulus distributions are presented for the biomechanical analysis of breast tissues.

Conclusion

ImageParser can successfully extract the geometry of ROIs from a complex medical image and generate an FEM mesh with user-defined segmentation information.

17.

Background

Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie at different depths of field, necessitating the capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depth of field (OEDoF) to address this issue.

Methods

The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good-contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast, followed by identification of foreground pixels. A composite image is then constructed using only these foreground pixels, which dramatically reduces the computational time.
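
The abstract names the modules but not their formulas; the sketch below improvises a minimal version of modules 2-4 under the common assumption that a local Laplacian response serves as the focus measure, merging only foreground pixels as OEDoF prescribes.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def oedof_composite(stack, foreground):
    """Object-based focus stacking sketch.

    stack:      (n, H, W) grayscale focal-plane images.
    foreground: (H, W) boolean mask of object pixels; only these are
                merged, which is the source of OEDoF's speedup.
    For each foreground pixel, pick the plane with the strongest local
    Laplacian response (a common focus measure; the paper's exact
    contrast criterion is not specified in the abstract).
    """
    focus = np.stack([uniform_filter(np.abs(laplace(img.astype(float))), size=7)
                      for img in stack])
    best = np.argmax(focus, axis=0)            # sharpest plane per pixel
    composite = stack[0].copy()                # background: any one plane
    rows, cols = np.nonzero(foreground)
    composite[rows, cols] = stack[best[rows, cols], rows, cols]
    return composite
```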

Results

We used 250 images obtained from 45 specimens of confirmed malaria infections to test the proposed algorithm. Composite images with all objects in focus were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm, while OEDoF required only a quarter of the processing time.

Conclusions

This work presents a modification of the extended depth of field approach for efficiently enhancing microscopic images. The selective object processing scheme used in OEDoF can significantly reduce overall processing time while maintaining the clarity of important image features. The empirical results from parasite-infected red cell images show that the proposed method efficiently and effectively produces in-focus composite images. With its speed improvement, OEDoF is suitable for processing large numbers of microscope images, e.g., as required for medical diagnosis.

18.

Introduction

Mass spectrometry imaging (MSI) experiments result in complex multi-dimensional datasets, which require specialist data analysis tools.

Objectives

We have developed massPix—an R package for analysing and interpreting data from MSI of lipids in tissue.

Methods

massPix produces single ion images, performs multivariate statistics and provides putative lipid annotations based on accurate mass matching against generated lipid libraries.

Results

Classification of tissue regions with high spectral similarity can be carried out by principal components analysis (PCA) or k-means clustering.
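
massPix is an R package, so the following is only a Python illustration of the classification step named above: reduce the pixel spectra with PCA, then label pixels with k-means.

```python
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_tissue_regions(spectra, n_components=10, n_clusters=4):
    """Group MSI pixels with similar spectra into tissue regions.

    spectra: (n_pixels, n_mz_bins) intensity matrix. PCA reduces the
    spectral dimension; k-means then labels each pixel, mirroring the
    classification step described above (massPix itself is an R tool).
    """
    scores = PCA(n_components=n_components).fit_transform(spectra)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(scores)
```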

Conclusion

massPix is an open-source tool for the analysis and statistical interpretation of MSI data, and is particularly useful for lipidomics applications.

19.
20.

Background

The accuracy and precision of liquid handling can be degraded by several causes, including wear or failure of parts, and human error. The last cause is crucial since point-of-care testing (POCT) devices may be used by inexperienced users or by patients themselves. It is therefore important to improve how users are informed of POCT device malfunctions due to damaged parts or human error.

Methods

In this paper, image-based failure monitoring of automated pipetting is introduced for POCT devices. In our previous work, an inexpensive, high-performance smartphone camera was employed to detect various malfunctions such as incorrect insertion of the tip, false positioning of the tip and pump, and improper operation of the pump; the image acquired from the camera was analyzed to detect these malfunctions. In this paper, the reagent volume in the tip is estimated by image processing in order to verify pump operation. First, the color component corresponding to the reagent's intrinsic color is extracted to identify the reagent area in the tip before binary image processing is applied. The extracted reagent area is projected horizontally, and the support length of the projection is calculated; as this support length is related to the reagent volume, it is referred to as the volume length. The relationship between the measured volume length and the previously measured solution mass was then investigated: since the tip is roughly conical, the filled volume (and hence mass) should scale with the cube of the volume length. If the mass of the solution can be predicted from the volume length, pump malfunctions can be detected.
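
As a concrete reading of this procedure, the sketch below extracts a color channel, binarizes, projects horizontally, and returns the support length; the channel index and threshold are arbitrary example values, not the paper's calibration. A linear fit of mass against the cube of this length would then flag pumping errors as deviations from the fit.

```python
import numpy as np

def volume_length(rgb_tip_image, channel=2, threshold=0.5):
    """Estimate the 'volume length' of reagent in a pipette tip image.

    Extract the color channel matching the reagent's intrinsic color
    (blue here, purely as an example), binarize, project horizontally,
    and measure the support length of the projection. Because the tip
    is roughly conical, the cube of this length should vary linearly
    with reagent mass.
    """
    comp = rgb_tip_image[..., channel].astype(float) / 255.0
    mask = comp > threshold                      # binary reagent area
    projection = mask.any(axis=1)                # horizontal projection
    rows = np.nonzero(projection)[0]
    return 0 if rows.size == 0 else rows[-1] - rows[0] + 1
```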

Results

The cube of the volume length obtained by the proposed image processing method showed a strongly linear relationship with the reagent mass injected into the tip by the pumping operation (R² = 0.996), indicating that the volume length can be used to estimate the reagent volume and thereby monitor the accuracy and precision of the pumping operation.

Conclusions

An inexpensive smartphone camera was sufficient to detect various malfunctions of a POCT device with a pumping operation. The proposed image processing can monitor the inaccuracy of the pumped volume within a limited range. Simple image processing, such as a fixed threshold and projections, was employed for cost optimization and system robustness; it nevertheless delivered promising results because the imaging conditions inside the device are highly controllable.
