Similar Documents
20 similar documents retrieved.
1.
Positron emission tomography (PET) images have been incorporated into the radiotherapy process as a powerful tool to assist in the contouring of lesions, leading to the emergence of a broad spectrum of automatic segmentation schemes for PET images (PET-AS). However, not all proposed PET-AS algorithms take into consideration the previous steps of image preparation. PET image noise has been shown to be one of the factors that most strongly affects segmentation. This study demonstrates a nonlinear filtering method based on spatially adaptive wavelet shrinkage using three-dimensional context modelling that considers the correlation of each voxel with its neighbours. This noise reduction method yields excellent edge-conservation properties. To evaluate the influence of this filter on the segmentation schemes, it was compared with a set of Gaussian filters (the most conventional choice) and with two previously optimised edge-preserving filters. Five segmentation schemes were used (those most commonly implemented in commercial software): fixed thresholding, adaptive thresholding, watershed, adaptive region growing and affinity propagation clustering. Segmentation results were evaluated using the Dice similarity coefficient and the classification error. A simple metric based on the measurement of the average edge width was also included to better characterise the blurring induced by the filters. The proposed noise reduction procedure improves segmentation results across all tested settings and was shown to be more stable in low-contrast and high-noise conditions. Thus, the denoising scheme reinforces the performance of the segmentation methods.
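As a rough illustration of the two evaluation measures named above, the sketch below computes the Dice similarity coefficient and one common form of classification error for a pair of binary masks. It is not the authors' code, and the exact normalisation they used for the classification error may differ.

import numpy as np

def dice_coefficient(seg, ref):
    # Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

def classification_error(seg, ref):
    # misclassified voxels (false positives + false negatives) relative to the reference volume
    seg, ref = seg.astype(bool), ref.astype(bool)
    return (np.logical_and(seg, ~ref).sum() + np.logical_and(~seg, ref).sum()) / ref.sum()

rng = np.random.default_rng(0)
a = rng.random((32, 32, 32)) > 0.5
b = rng.random((32, 32, 32)) > 0.5
print(dice_coefficient(a, b), classification_error(a, b))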

2.
MOTIVATION: We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data, but they can be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal, so it allows for nonzero correlation between the R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. In a final step, nonparametric (kernel) smoothing is applied to the posterior probabilities of component membership. The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers the segmentation and intensity simultaneously, not separately as in commonly used existing software, and it works with the red and green intensities in a bivariate framework rather than estimating them separately via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the use of the bivariate t distribution for the foreground intensity provides a model that is less sensitive to extreme observations; (5) as a consequence of the aforementioned properties, it allows segmentation to be undertaken for a wide range of spot shapes, including doughnut, sickle shape and artifacts. RESULTS: We apply our method for gridding, segmentation and estimation to real cDNA microarray images and artificial data. Our method provides better segmentation results in spot shapes as well as intensity estimation than the Spot and spotSegmentation R packages. It detected blank spots as well as bright artifacts in the real data, and estimated spot intensities with high accuracy for the synthetic data. AVAILABILITY: The algorithms were implemented in Matlab. The Matlab codes implementing both the gridding and segmentation/estimation are available upon request. SUPPLEMENTARY INFORMATION: Supplementary material is available at Bioinformatics online.
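The sketch below conveys only the general idea of EM-based mixture segmentation of spot pixels. It replaces the paper's bivariate gamma / bivariate t components with a univariate two-component Gaussian mixture, so it is a simplified stand-in rather than the published model.

import numpy as np

def em_two_gaussians(x, n_iter=50):
    # crude initialisation from intensity quantiles
    mu = np.quantile(x, [0.25, 0.75])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior probability of background / foreground membership
        dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means and variances
        nk = resp.sum(axis=0)
        pi, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var, resp

rng = np.random.default_rng(0)
intensities = np.concatenate([rng.gamma(2.0, 50.0, 800),        # background-like pixels
                              rng.normal(800.0, 120.0, 200)])   # foreground (spot) pixels
pi, mu, var, resp = em_two_gaussians(intensities)
is_spot = resp[:, np.argmax(mu)] > 0.5                          # posterior-based segmentation
print(pi, mu, is_spot.sum())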

3.
Fluoroscopic images exhibit severe signal-dependent quantum noise, due to the reduced X-ray dose involved in image formation, that is generally modelled as Poisson-distributed. However, image gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improve the quality of fluoroscopic images and their clinical information content. Simple average filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for both signal-dependent noise (AAS, BM3Dc, HHM, TLS) and independent additive noise (AV, BM3D, K-SVD) is presented. Simulated test images degraded by various levels of Poisson quantum noise and real clinical fluoroscopic images were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. Performance was evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restorations than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective in denoising both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to help the denoising algorithms perform more effectively.
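For reference, the quality metrics listed above can be computed along the following lines; this is a generic sketch (scikit-image is assumed for SSIM), not the evaluation code used in the study.

import numpy as np
from skimage.metrics import structural_similarity   # SSIM from scikit-image

def mse(ref, est):
    return np.mean((ref.astype(float) - est.astype(float)) ** 2)

def psnr(ref, est, peak):
    return 10.0 * np.log10(peak ** 2 / mse(ref, est))

def snr(ref, est):
    noise = ref.astype(float) - est.astype(float)
    return 10.0 * np.log10(np.sum(ref.astype(float) ** 2) / np.sum(noise ** 2))

ref = np.random.poisson(100, (128, 128)).astype(float)       # synthetic reference image
noisy = ref + np.random.normal(0.0, 5.0, ref.shape)          # degraded / denoised image stand-in
print(mse(ref, noisy), psnr(ref, noisy, peak=ref.max()), snr(ref, noisy),
      structural_similarity(ref, noisy, data_range=float(ref.max() - ref.min())))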

4.
Purpose: Arm artifact, a type of streak artifact frequently observed in computed tomography (CT) images obtained with arms-down positioning in polytrauma patients, is known to degrade image quality. This study aimed to develop a novel arm-artifact reduction algorithm (AAR) applied to projection data. Methods: A phantom resembling an adult abdomen with two arms was scanned using a 16-row CT scanner. The projection data were processed by AAR, and CT images were reconstructed. The artifact reduction for the same phantom was compared with that achieved by two recent iterative reconstruction (IR) techniques (IR1 and IR2) using a normalized artifact index (nAI) at two locations (ventral and dorsal side). Image blurring as a processing side effect was compared with IR2, a model-based IR, using a plastic needle phantom. Additionally, the projection data of two clinical cases were processed using AAR, and the image noise was evaluated. Results: AAR and IR2 significantly reduced nAI by 87.5% and 74.0%, respectively, at the ventral side and by 84.2% and 69.6%, respectively, at the dorsal side compared with the corresponding filtered back projection reconstructions (P < 0.01), whereas IR1 did not. The proposed algorithm largely maintained the original spatial resolution, whereas IR2 yielded apparent image blurring. The image noise in the clinical cases was also reduced significantly (P < 0.01). Conclusions: AAR was more effective than the latest IR techniques and is expected to improve the image quality of polytrauma CT imaging with arms-down positioning.

5.
A novel pre-treatment process for image segmentation, based on anisotropic diffusion and robust statistics, is presented in this paper. Image smoothing with edge preservation is shown to help upper limb segmentation (shoulder segmentation in particular) in MRI datasets. The anisotropic diffusion process is mainly controlled by an automated stopping function that depends on the values of the voxel gradients. Voxel gradients are divided into two classes: one for high values, corresponding to edge voxels or noisy voxels, and one for low values. The anisotropic diffusion process is also controlled by a threshold on voxel gradients that separates the two classes. A global estimation of this threshold parameter is classically used; in this paper, we propose a new method based on a local robust estimation, which allows better noise removal while preserving edges in the images. An entropy criterion is used to quantify the ability of the algorithm to remove noise at different signal-to-noise ratios in synthetic images. Another quantitative evaluation criterion, based on the Pratt Figure of Merit (FOM), is proposed to evaluate edge preservation and edge-location accuracy with respect to a manual segmentation. The results on synthetic data and shoulder MRI data show the advantages of the local model in terms of region homogeneity and edge location.
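The classical Perona-Malik scheme that underlies this kind of diffusion can be sketched as follows with a single global gradient threshold K; the paper's contribution, a local robust estimation of that threshold, is not reproduced here.

import numpy as np

def anisotropic_diffusion(img, n_iter=20, K=15.0, lam=0.2):
    # 2D Perona-Malik diffusion with an exponential stopping function
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / K) ** 2)          # small conduction across strong edges
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u           # differences towards the 4 neighbours
        ds = np.roll(u, 1, axis=0) - u            # (np.roll wraps at the borders; acceptable for a sketch)
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.pad(np.full((32, 32), 100.0), 16) + np.random.normal(0.0, 10.0, (64, 64))
smoothed = anisotropic_diffusion(noisy, n_iter=30, K=20.0)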

6.
Zuo XN  Xing XX 《PloS one》2011,6(10):e26703
The neuroimaging community usually employs spatial smoothing, e.g., Gaussian smoothing kernels, to denoise magnetic resonance imaging (MRI) data. Such isotropic diffusion (ISD) based smoothing is widely adopted for denoising owing to its easy implementation and efficient computation. Despite these advantages, Gaussian smoothing kernels tend to blur the edges, curvature and texture of images. Researchers have therefore proposed anisotropic diffusion (ASD) and non-local diffusion (NLD) kernels. We recently demonstrated the effect of these new filtering paradigms on preprocessing real degraded MRI images from three individual subjects. Here, to investigate the effects more systematically at a group level, we collected both structural and functional MRI data from 23 participants. We first evaluated the impact of the three smoothing strategies on brain extraction, segmentation and registration. Finally, we investigated how they affect subsequent mapping of the default network based on resting-state functional MRI (R-fMRI) data. Our findings suggest that NLD-based spatial smoothing may be more effective and reliable at improving the quality of both MRI data preprocessing and default network mapping. We thus suggest that NLD is a promising method for smoothing structural MRI images in R-fMRI pipelines.

7.
Improving gene quantification by adjustable spot-image restoration
MOTIVATION: One of the major factors that complicate the task of microarray image analysis is that microarray images are distorted by various types of noise. In this study a robust framework is proposed, designed to take into account the effect of noise in microarray images in order to assist the demanding task of microarray image analysis. The proposed framework incorporates into the microarray image processing pipeline a novel combination of spot-adjustable image analysis and processing techniques and consists of the following stages: (1) gridding for facilitating spot identification, (2) clustering (unsupervised discrimination between spot and background pixels) applied to the spot image for automatic local noise assessment, (3) modeling of a local image restoration process for spot image conditioning (adjustable Wiener restoration using an empirically determined degradation function), (4) automatic spot segmentation employing seeded region growing, (5) intensity extraction and (6) assessment of the reproducibility (real data) and the validity (simulated data) of the extracted gene expression levels. RESULTS: Both simulated and real microarray images were employed in order to assess the performance of the proposed framework against well-established methods implemented in publicly available software packages (Scanalyze and SPOT). On simulated images, the novel combination of techniques introduced in the proposed framework rendered the detection of spot areas and the extraction of spot intensities more accurate. Furthermore, on real images the proposed framework proved more stable across replicates. The results indicate that the proposed framework improves spot segmentation and, consequently, quantification of gene expression levels. AVAILABILITY: All algorithms were implemented in the Matlab (The Mathworks, Inc., Natick, MA, USA) environment. The codes that implement microarray gridding, adaptive spot restoration and segmentation/intensity extraction are available upon request. Supplementary results and the simulated microarray images used in this study are available for download from: ftp://users:bioinformatics@mipa.med.upatras.gr. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
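Stage (3), the adjustable Wiener restoration, can be illustrated very roughly as below: a spot patch is Wiener-filtered with the noise power estimated from pixels assumed to be background. The helper names and the use of scipy's adaptive Wiener filter are assumptions for illustration; the authors' empirically determined degradation function is not reproduced.

import numpy as np
from scipy.signal import wiener

def restore_spot(patch, bg_mask, window=5):
    # adaptive Wiener filtering with a locally estimated noise power
    noise_power = patch[bg_mask].var()
    return wiener(patch.astype(float), mysize=window, noise=noise_power)

yy, xx = np.mgrid[:21, :21]
spot = 200.0 * (((yy - 10) ** 2 + (xx - 10) ** 2) < 36)        # toy bright spot
patch = spot + np.random.normal(50.0, 10.0, spot.shape)        # noisy background added
restored = restore_spot(patch, bg_mask=(spot == 0))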

8.
MOTIVATION: To develop a highly accurate, practical and fast automated segmentation algorithm for three-dimensional images containing biological objects; to test the algorithm on images of the Drosophila brain; and to identify, count and determine the locations of neurons in the images. RESULTS: A new adjustable-threshold algorithm was developed to efficiently segment fluorescently labeled objects contained within three-dimensional images obtained from laser scanning confocal microscopy or two-photon microscopy. The test segmentation with Drosophila brain images showed that the algorithm is extremely accurate and provides detailed information about the locations of neurons in the Drosophila brain. The centroid of each object (the nucleus of each neuron) was also recorded in an algebraic matrix that describes the locations of the neurons. AVAILABILITY: Interested parties should send their request for the NeuronMapper(TM) program with the segmentation algorithm to artemp@bcm.tmc.edu.
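A very reduced sketch of this kind of 3D segmentation (global threshold plus connected-component labelling, with one centroid recorded per object) is shown below; the published algorithm adjusts its threshold adaptively, which is not reproduced here.

import numpy as np
from scipy import ndimage as ndi

def segment_nuclei(volume, threshold):
    mask = volume > threshold
    labels, n = ndi.label(mask)                                   # 3D connected components
    centroids = ndi.center_of_mass(mask, labels, range(1, n + 1))
    return labels, np.asarray(centroids)                          # (n, 3) matrix of z, y, x positions

stack = np.random.poisson(5, (30, 128, 128)).astype(float)        # synthetic confocal stack
stack[10:15, 40:50, 40:50] += 60.0                                # one bright "nucleus"
labels, centroids = segment_nuclei(stack, threshold=30.0)
print(len(centroids), centroids)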

9.
The purpose of this study was to quantify the effect of soft tissue artifact during three-dimensional motion capture and to assess the effectiveness of an optimization method in reducing this effect. Four subjects were captured performing upper-arm internal-external rotation with retro-reflective marker sets attached to their upper extremities. A mechanical arm, with the same marker set attached, replicated the tasks the human subjects performed. Artificial sinusoidal noise was then added to the recorded mechanical arm data to simulate soft tissue artifact. All data were processed by an optimization model. The results from both the human and the mechanical arm kinematic data demonstrate that soft tissue artifact can be reduced by an optimization model, although this error cannot be completely eliminated. The soft tissue artifact from human subjects and the simulated soft tissue artifact from artificial sinusoidal noise were shown to be considerably different. It was therefore concluded that the kinematic noise caused by skin movement artifact during upper-arm internal-external rotation does not follow a sinusoidal pattern and cannot be effectively eliminated by an optimization model.

10.
This paper discusses two problems related to three-dimensional object recognition. The first is segmentation and the selection of a candidate object in the image; the second is the recognition of a three-dimensional object from different viewing positions. Regarding segmentation, it is shown how globally salient structures can be extracted from a contour image based on geometrical attributes, including smoothness and contour length. This computation is performed by a parallel network of locally connected neuron-like elements. With respect to the effect of viewing, it is shown how the problem can be overcome by using the linear combinations of a small number of two-dimensional object views. In both problems the emphasis is on methods that are relatively low level in nature. Segmentation is performed using a bottom-up process, driven by the geometry of image contours. Recognition is performed without using explicit three-dimensional models, but by the direct manipulation of two-dimensional images.

11.
One route to breast cancer diagnosis is to take radiographic (X-ray) images (termed mammograms) of suspect patients, which physicians then use to identify potentially abnormal areas through visual inspection. When digital mammograms are available, computer-aided diagnosis may help the physician reach a more accurate decision. This implies automatic detection of abnormal areas using segmentation, followed by tumor classification. This work describes an approach to the classification of digital mammograms. Patches around tumors are manually extracted to segment the abnormal areas from the rest of the image, which is considered background. The mammogram images are filtered using Gabor wavelets, and directional features are extracted at different orientations and frequencies. Principal Component Analysis is employed to reduce the dimensionality of the filtered and unfiltered high-dimensional data. Proximal Support Vector Machines are used for the final classification. Superior mammogram image classification performance is attained when Gabor features are extracted instead of using the original mammogram images. The robustness of Gabor features for digital mammogram images distorted by Poisson noise at different intensity levels is also addressed.
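A hedged sketch of this feature pipeline is given below: Gabor responses at several orientations and frequencies, PCA for dimensionality reduction, and a linear classifier. scikit-learn has no proximal SVM, so LinearSVC is used as a stand-in, and the patch data and labels are synthetic placeholders.

import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

def gabor_features(patch, frequencies=(0.1, 0.2, 0.3), n_theta=4):
    feats = []
    for f in frequencies:
        for k in range(n_theta):
            real, imag = gabor(patch, frequency=f, theta=k * np.pi / n_theta)
            mag = np.hypot(real, imag)                   # magnitude of the Gabor response
            feats += [mag.mean(), mag.std()]
    return np.array(feats)

rng = np.random.default_rng(1)
patches = rng.random((40, 32, 32))                       # placeholder mammogram patches
labels = rng.integers(0, 2, 40)                          # placeholder normal/abnormal labels
X = PCA(n_components=10).fit_transform(np.array([gabor_features(p) for p in patches]))
clf = LinearSVC().fit(X, labels)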

12.
Scientific visualization refers to the theories, techniques and methods that use computer graphics and image processing to convert data produced during, or as the result of, scientific computation into graphics or images, display them on screen and process them interactively. This paper describes the main workflow for acquiring three-dimensional images of live rat pancreatic beta cells with deconvolution fluorescence microscopy and visualizing them, together with two commonly used visualization algorithms; both methods were applied to the acquired three-dimensional images to analyse and study the spatial distribution of secretory vesicles within the cells. The results show that when only two-dimensional slices of the three-dimensional image are examined, important information in the volume can be missed, whereas scientific visualization allows the spatial distribution of secretory vesicles in living cells to be observed directly in three dimensions, together with their release tendency and overall distribution, thereby providing important information for cell-biology research.

13.
The paper presents a new approach for medical image segmentation. Exudates are a visible sign of diabetic retinopathy, which is the major cause of vision loss in patients with diabetes. If the exudates extend into the macular area, blindness may occur. Automated detection of exudates will assist ophthalmologists in early diagnosis. This segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to the image segmentation after the clustering has been optimized by the Pillar algorithm, by analogy with pillars that are constructed so that they can withstand pressure. The improved Pillar algorithm can optimize K-means clustering for image segmentation in terms of both precision and computation time. The proposed approach is evaluated by comparison with standard K-means and Fuzzy C-means on a medical image. Using this method, identification of dark spots in the retina becomes easier; the proposed algorithm is applied to diabetic retinal images of all stages to identify hard and soft exudates, whereas the existing Pillar K-means is more appropriate for brain MRI images. The proposed system helps doctors identify the problem at an early stage and can suggest a better drug for preventing further retinal damage.
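As a rough sketch of this idea, the code below clusters pixel colours with K-means after seeding the initial centres with a simple farthest-point heuristic, which is an approximation in the spirit of the Pillar algorithm rather than the published procedure; the image is a random placeholder.

import numpy as np
from sklearn.cluster import KMeans

def farthest_point_seeds(pixels, k):
    # start from the brightest pixel, then repeatedly pick the pixel farthest from all seeds
    seeds = [pixels[np.argmax(pixels.sum(axis=1))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pixels - s, axis=1) for s in seeds], axis=0)
        seeds.append(pixels[np.argmax(d)])
    return np.array(seeds)

def segment_retina(image, k=4):
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    km = KMeans(n_clusters=k, init=farthest_point_seeds(pixels, k), n_init=1).fit(pixels)
    return km.labels_.reshape(image.shape[:2])

rgb = np.random.randint(0, 256, (64, 64, 3))      # placeholder fundus image
label_map = segment_retina(rgb, k=4)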

14.
《IRBM》2022,43(6):640-657
Objectives: Image segmentation plays an important role in the analysis and understanding of cellular processes. However, this task becomes difficult when there is intensity inhomogeneity between regions, and it is even more challenging in the presence of noise and clustered cells. The goal of the paper is to propose an image segmentation framework that tackles these problems. Material and methods: A new method composed of two steps is proposed: first, segment the image using a B-spline level set with the Region-Scalable Fitting (RSF) active contour model; second, apply the watershed algorithm based on new object markers to refine the segmentation and separate clustered cells. The major contributions of the paper are: 1) use of a continuous formulation of the level set in the B-spline basis; 2) development of the energy function and its derivative by introducing the RSF model to deal with intensity inhomogeneity; 3) for the watershed, a relevant choice of markers that considers the cell properties. Results: Experiments are performed on widely used synthetic images, in addition to simulated and real biological images, both without and with additive noise. They attest to the high segmentation quality of the proposed method in terms of quantitative and qualitative evaluation. Conclusion: The proposed method is able to tackle many difficulties at the same time: overlapping intensities, noise, different cell sizes and clustered cells. It provides an efficient tool for image segmentation, especially of biological images.
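Only the second stage is sketched below: a marker-controlled watershed on a binary cell mask, with markers taken from distance-transform maxima. This marker choice is a common default and an assumption here; the paper derives its markers from cell properties after the level-set step.

import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def split_clustered_cells(mask, min_distance=10):
    distance = ndi.distance_transform_edt(mask)                   # distance to background
    blobs, _ = ndi.label(mask)
    coords = peak_local_max(distance, min_distance=min_distance, labels=blobs)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)      # one marker per cell centre
    return watershed(-distance, markers, mask=mask)

yy, xx = np.mgrid[:100, :100]
mask = (((yy - 50) ** 2 + (xx - 35) ** 2) < 400) | (((yy - 50) ** 2 + (xx - 65) ** 2) < 400)
labels = split_clustered_cells(mask, min_distance=15)             # two touching discs get separated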

15.
Background: Analyzing MR scans of low-grade glioma with highly accurate segmentation has enormous potential in neurosurgery for diagnosis and therapy planning. Low-grade gliomas are mainly distinguished by their infiltrating character and irregular contours, which make the analysis, and therefore the segmentation task, more difficult. Moreover, MRI images present constraints such as intensity variation and noise. Methods: To tackle these issues, a novel segmentation method built from local image properties is presented in this paper. Phase-based edge detection is estimated locally by the monogenic signal using quadrature filters. This way of detecting edges is, from a theoretical point of view, intensity invariant and responds well to MR images. To strengthen the tumor detection process, a region-based term is defined locally in order to achieve a local maximum-likelihood segmentation of the region of interest. A Gaussian probability distribution is used to model local image intensities. Results: The proposed model is evaluated using a set of real subjects and synthetic images derived from the Brain Tumor Segmentation challenge (BraTS 2015). In addition, the obtained results are compared with manual segmentations performed by two experts. Quantitative evaluations compare the proposed approach with four related existing methods. Conclusion: The comparison shows that the proposed method is more accurate than the four existing methods.

16.
In certain image acquisition processes, such as fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled by a Poisson distribution, and from a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model, and we adopt the contourlet transform, which provides a sparse representation of the directional components of images. We also apply hidden Markov models within a framework that neatly describes the spatial and interscale dependencies that characterize the transform coefficients of natural images. An effective denoising algorithm for Poisson-Gaussian noise is thus proposed using the contourlet transform, hidden Markov models and noise estimation in the transform domain. We supplement the algorithm with cycle spinning and Wiener filtering for further improvement. Finally, we show experimental results on simulations and fluorescence microscopy images that demonstrate the improved performance of the proposed approach.
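The mixed noise model assumed above can be simulated along the following lines; the gain and read-noise parameter names and values are illustrative, not taken from the paper.

import numpy as np

def add_poisson_gaussian_noise(image, gain=0.5, read_sigma=2.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    photon_part = gain * rng.poisson(image / gain)             # signal-dependent (Poisson) component
    read_noise = rng.normal(0.0, read_sigma, image.shape)      # signal-independent (Gaussian) component
    return photon_part + read_noise

clean = np.full((64, 64), 20.0)
noisy = add_poisson_gaussian_noise(clean)
print(noisy.mean(), noisy.var())        # variance is roughly gain*mean + read_sigma**2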

17.
《Médecine Nucléaire》2007,31(5):219-234
Scintigraphic images are strongly affected by Poisson noise. This article presents the results of a comparison between denoising methods for Poisson noise according to different criteria: the gain in signal-to-noise ratio, the preservation of resolution and contrast, and the visual quality. The wavelet techniques recently developed to denoise Poisson-noise-limited images are divided into two groups, based on (1) the Haar representation and (2) the transformation of Poisson noise into white Gaussian noise by the Haar–Fisz transform followed by denoising. In this study, three variants of the first group and three variants of the second, including the adaptive Wiener filter, four types of wavelet thresholding and the Bayesian method of Pizurica, were compared to Metz and Hanning filters and to Shine, a systematic noise elimination process. All these methods, except Shine, are parametric. For each of them, ranges of optimal parameter values were identified as a function of the aforementioned criteria. The intersection of these ranges for the wavelet methods without thresholding was empty, and these methods were therefore not compared further quantitatively. The thresholding techniques and Shine gave the best results in resolution and contrast. The largest improvement in signal-to-noise ratio was obtained by the filters. Ideally, these filters should be accurately defined for each image, which is difficult in the clinical context, and they generate oscillation artefacts. In addition, the wavelet techniques did not bring significant improvements and are rather slow. Therefore, Shine, which is fast and works automatically, appears to be an interesting alternative.
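The strategy of the second family (stabilise the Poisson noise, denoise as if Gaussian, then invert) can be illustrated with the Anscombe transform, a simpler relative of the Haar–Fisz transform used in the article; the sketch below is only meant to convey the idea.

import numpy as np
from scipy.ndimage import gaussian_filter

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)            # Poisson counts -> approx. unit-variance Gaussian

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0              # simple algebraic inverse (biased at low counts)

counts = np.random.poisson(np.full((128, 128), 10.0))    # Poisson-limited scintigraphic-like image
stabilised = anscombe(counts)
denoised = gaussian_filter(stabilised, sigma=1.0)         # any Gaussian-noise denoiser could be used here
estimate = inverse_anscombe(denoised)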

18.
Automated image detection and segmentation in blood smears.
S S Poon  R K Ward  B Palcic 《Cytometry》1992,13(7):766-774
A simple technique which automatically detects and then segments nucleated cells in Wright's Giemsa-stained blood smears is presented. Our method differs from others in 1) the simplicity of our algorithms; 2) the inclusion of touching (as well as nontouching) cells; and 3) the use of these algorithms to segment as well as to detect nucleated cells in conventionally prepared smears. Our method involves: 1) acquisition of spectral images; 2) preprocessing of the acquired images; 3) detection of single and touching cells in the scene; 4) segmentation of the cells into nuclear and cytoplasmic regions; and 5) postprocessing of the segmented regions. The first two steps of the algorithm are employed to obtain high-quality images, to remove random noise, and to correct aberration and shading effects. Spectral information of the image is used in step 3 to segment the nucleated cells from the rest of the scene. Using the initial cell masks, nucleated cells which are just touching are detected and separated. Simple features are then extracted and conditions applied such that single nucleated cells are finally selected. In step 4, the intensity variations of the cells are then used to segment the nucleus from the cytoplasm. The success rate in segmenting the nucleated cells is between 81 and 93%. The major errors in segmentation of the nucleus and the cytoplasm in the recognized nucleated cells are 3.5% and 2.2%, respectively.

19.
Lowering the cumulative radiation dose to a patient undergoing fluoroscopic examination requires efficient denoising algorithms. We propose a method that extensively utilizes the temporal dimension in order to maximize denoising efficiency. A set of subsequent images is processed and two estimates of the denoised image are calculated. One is based on a special implementation of an adaptive edge-preserving wavelet transform, while the other is based on the statistical intersection of confidence intervals (ICI) rule. The wavelet transform is thought to produce high-quality denoised images, and the ICI estimate can be used to further improve denoising performance around object edges. The estimates are fused to produce the final denoised image. We show that the proposed method performs very well and does not suffer from blurring in clinically important parts of the images. As a result, its application could allow a significant lowering of the single-frame fluoroscopic dose.

20.
The 3D spatial organization of genes and other genetic elements within the nucleus is important for regulating gene expression. Understanding how this spatial organization is established and maintained throughout the life of a cell is key to elucidating the many layers of gene regulation. Quantitative methods for studying nuclear organization will lead to insights into the molecular mechanisms that maintain gene organization as well as serve as diagnostic tools for pathologies caused by loss of nuclear structure. However, biologists currently lack automated and high-throughput methods for quantitative and qualitative global analysis of 3D gene organization. In this study, we use confocal microscopy and fluorescence in-situ hybridization (FISH) as a cytogenetic technique to detect and localize the presence of specific DNA sequences in 3D. FISH uses probes that bind to specific targeted locations on the chromosomes, appearing as fluorescent spots in 3D images obtained using fluorescence microscopy. In this article, we propose an automated algorithm for segmentation and detection of 3D FISH spots. The algorithm is divided into two stages: spot segmentation and spot detection. Spot segmentation consists of 3D anisotropic smoothing to reduce the effect of noise, top-hat filtering, and intensity thresholding, followed by 3D region-growing. Spot detection uses a Bayesian classifier with spot features such as volume, average intensity, texture, and contrast to detect and classify the segmented spots as either true or false spots. Quantitative assessment of the proposed algorithm demonstrates improved segmentation and detection accuracy compared to other techniques.
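The spot-segmentation stage described above can be approximated by the compact sketch below, where Gaussian smoothing stands in for the 3D anisotropic smoothing and connected-component labelling for the region growing; the thresholding rule and feature set are illustrative choices, not the authors'.

import numpy as np
from scipy import ndimage as ndi

def segment_fish_spots(volume, sigma=1.0, tophat_size=5, thresh_k=3.0):
    smoothed = ndi.gaussian_filter(volume.astype(float), sigma=sigma)
    tophat = ndi.white_tophat(smoothed, size=tophat_size)         # keep only small bright structures
    mask = tophat > tophat.mean() + thresh_k * tophat.std()
    labels, n = ndi.label(mask)
    features = [{"volume": int((labels == i).sum()),
                 "mean_intensity": float(volume[labels == i].mean())}
                for i in range(1, n + 1)]                         # per-spot features for a classifier
    return labels, features

stack = np.random.poisson(3, (20, 64, 64)).astype(float)
stack[8:11, 30:33, 30:33] += 40.0                                 # one synthetic FISH spot
labels, feats = segment_fish_spots(stack)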
