Similar articles
20 similar articles found (search time: 15 ms)
1.
Positron emission tomography (PET) images have been incorporated into the radiotherapy process as a powerful tool to assist in the contouring of lesions, leading to the emergence of a broad spectrum of automatic segmentation schemes for PET images (PET-AS). However, not all proposed PET-AS algorithms take into consideration the previous steps of image preparation. PET image noise has been shown to be one of the most relevant factors affecting segmentation tasks. This study demonstrates a nonlinear filtering method based on spatially adaptive wavelet shrinkage using three-dimensional context modelling that considers the correlation of each voxel with its neighbours. This noise reduction method yields excellent edge conservation properties. To evaluate the influence of this filter on segmentation schemes, it was compared with a set of Gaussian filters (the most conventional choice) and with two previously optimised edge-preserving filters. Five segmentation schemes, among those most commonly implemented in commercial software, were used: fixed thresholding, adaptive thresholding, watershed, adaptive region growing and affinity propagation clustering. Segmentation results were evaluated using the Dice similarity coefficient and classification error. A simple metric based on the measurement of the average edge width was also included to better characterise the blurring induced by the filters. The proposed noise reduction procedure improved segmentation results across all tested settings and was shown to be more stable in low-contrast, high-noise conditions. Thus, the capacity of the segmentation method is reinforced by the denoising scheme used.
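The two overlap metrics named above, the Dice similarity coefficient and the classification error, are straightforward to compute on binary masks; a minimal sketch (function names are ours, not the paper's):

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    intersection = np.logical_and(seg, ref).sum()
    return 2.0 * intersection / (seg.sum() + ref.sum())

def classification_error(seg, ref):
    """Fraction of voxels labelled differently from the reference mask."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return np.logical_xor(seg, ref).mean()

# Toy example: a segmentation that misses one voxel of a 3-voxel lesion
ref = np.array([0, 1, 1, 1, 0])
seg = np.array([0, 1, 1, 0, 0])
print(dice_coefficient(seg, ref))      # 2*2/(2+3) = 0.8
print(classification_error(seg, ref))  # 1/5 = 0.2
```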

2.
Micro-CT provides high-resolution 3D imaging of micro-architecture in a non-invasive way, making it a significant tool in biomedical research and preclinical applications. Due to the limited power of the micro-focus X-ray tube, photon starvation occurs and noise is inevitable in the projection images, degrading spatial resolution, contrast and image details. In this paper, we propose a conditional generative adversarial network (C-GAN) denoising algorithm in the projection domain for micro-CT imaging. The noise statistics are used directly, and a novel variance loss is developed to suppress blurring during the denoising procedure. C-GAN is employed as the framework for the denoising task, with a fully convolutional network serving as the generator to guarantee pixel-wise accuracy. During the alternating training of the generator and the discriminator, the network learns the noise distribution automatically. Moreover, residual learning and a skip-connection architecture are applied for faster network training and further feature fusion. To evaluate denoising performance, a mouse lung, a milkvetch root and a bamboo stick were imaged by micro-CT. Compared with BM3D, CNN-MSE and CNN-VGG, the proposed method suppresses noise effectively and recovers image details without introducing artifacts or blurring, demonstrating that it is feasible, efficient and practical.

3.
《Médecine Nucléaire》2007,31(5):219-234
Scintigraphic images are strongly affected by Poisson noise. This article presents the results of a comparison between denoising methods for Poisson noise according to different criteria: the gain in signal-to-noise ratio, the preservation of resolution and contrast, and visual quality. The wavelet techniques recently developed to denoise Poisson-noise-limited images fall into two groups, based on: (1) the Haar representation; (2) the transformation of Poisson noise into white Gaussian noise by the Haar–Fisz transform followed by denoising. In this study, three variants of the first group and three variants of the second, including the adaptive Wiener filter, four types of wavelet thresholding and the Bayesian method of Pizurica, were compared to Metz and Hanning filters and to Shine, a systematic noise elimination process. All these methods, except Shine, are parametric. For each of them, ranges of optimal parameter values were identified as a function of the aforementioned criteria. The intersection of these ranges for the wavelet methods without thresholding was empty, so those methods were not compared quantitatively any further. The thresholding techniques and Shine gave the best results in resolution and contrast. The largest improvement in signal-to-noise ratio was obtained with the filters. Ideally, these filters should be accurately tuned for each image, which is difficult in the clinical context; moreover, they generate oscillation artefacts. The wavelet techniques, for their part, brought no significant improvement and are rather slow. Therefore, Shine, which is fast and fully automatic, appears to be an interesting alternative.
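The second group of methods rests on variance stabilization: transforming Poisson counts so the noise becomes approximately unit-variance Gaussian, denoising, then inverting. The Haar–Fisz transform used in the article is one such stabilizer; the closely related Anscombe transform conveys the idea in two lines (a generic sketch, not the article's code):

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform: Poisson counts -> approx. unit-variance Gaussian."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased at very low counts; exact unbiased inverses exist)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(0)
counts = rng.poisson(lam=50.0, size=100_000)
stabilized = anscombe(counts)
# After stabilization the noise variance is close to 1 regardless of lambda
print(round(float(stabilized.var()), 2))
```

A Gaussian denoiser can then be applied to `stabilized` before mapping back with `inverse_anscombe`.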

4.
Lowering the cumulative radiation dose to a patient undergoing fluoroscopic examination requires efficient denoising algorithms. We propose a method that extensively exploits the temporal dimension to maximise denoising efficiency. A set of subsequent images is processed and two estimates of the denoised image are calculated: one based on a special implementation of an adaptive edge-preserving wavelet transform, the other on the statistical intersection-of-confidence-intervals (ICI) rule. The wavelet transform produces high-quality denoised images, and the ICI estimate can be used to further improve denoising performance around object edges. The two estimates are fused to produce the final denoised image. We show that the proposed method performs very well and does not suffer from blurring in clinically important parts of the images. As a result, its application could allow a significant lowering of the fluoroscope's single-frame dose.

5.
One of the most commonly used methods for protein separation is 2-DE. After 2-DE gel scanning, images with a plethora of spot features emerge that are usually contaminated by inherent noise. The objective of the denoising process is to remove noise to the extent that the true spots are recovered correctly and accurately, i.e. without introducing distortions that lead to the detection of false spot features. In this paper we propose and justify the use of the contourlet transform as a tool for denoising 2-DE gel images. We compare its effectiveness with state-of-the-art methods such as wavelet-based multiresolution image analysis and spatial filtering. We show that contourlets not only achieve better average S/N performance than wavelets and spatial filters, but also better preserve spot boundaries and faint spots and alter the intensities of informative spot features less, leading to more accurate spot volume estimation and more reliable spot detection, operations that are essential to differential expression proteomics for biomarker discovery.

6.
Introduction
Medical images are usually affected by biological and physical artifacts or noise, which reduces image quality, hampers visual analysis and interpretation, and thus leads to higher doses and an increased repetition rate of radiographs.
Objectives
This study aims to assess image quality during CT abdomen and brain examinations using filtering techniques, and to estimate the radiogenic risk associated with these examinations.
Materials and Methods
The data were collected from the Radiology Department at Royal Care International (RCI) Hospital, Khartoum, Sudan. The study included 100 abdominal CT images and 100 brain CT images selected from adult patients. The filters applied were the mean, Gaussian, median and minimum filters. Image quality after denoising was measured with the mean squared error (MSE), peak signal-to-noise ratio (PSNR) and structural similarity index metric (SSIM).
Results
The image quality parameters improved after application of the filters. The median filter gave the best image quality as measured by PSNR and SSIM, and is thus considered the best of the applied filters for noise removal.
Discussion
The noise removed by the different filters applied to the CT images yielded high-quality images that effectively reveal the important details without increasing the patients' risk from higher doses.
Conclusions
Filtering and image reconstruction techniques not only reduce the dose, and thus the radiation risk, but also enable high-quality imaging that allows better diagnosis.
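The MSE and PSNR figures of merit used in this study are simple to reproduce; the sketch below applies a 3x3 median filter, the study's best performer, to a synthetic image and checks that PSNR rises (all parameter choices are illustrative, not the study's):

```python
import numpy as np
from scipy import ndimage

def mse(a, b):
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    m = mse(reference, test)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))   # smooth ramp "image"
noisy = clean + rng.normal(0, 25, clean.shape)      # additive Gaussian noise
denoised = ndimage.median_filter(noisy, size=3)     # 3x3 median filter

print(psnr(clean, denoised) > psnr(clean, noisy))   # denoising should raise PSNR
```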

7.
To address the shortcomings of the traditional hybrid model of shock filtering and anisotropic diffusion, a new method for image enhancement and denoising is proposed. The method introduces an improved shock-filter term and an image-detail fidelity term simultaneously into the enhancement-and-denoising equation, so that the magnitude of the response adapts to the structural information of the image. Experiments show that the proposed method achieves good enhancement and denoising results: biomedical images are not only smoothed well but also edge-enhanced, while preserving as much image structure and detail as possible, and the computation time is greatly reduced. (Translated from the Chinese abstract.)
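This method couples a shock-filter term with anisotropic diffusion; the diffusion half can be illustrated with a classic Perona-Malik step (a generic sketch, not the authors' improved model; parameters are illustrative):

```python
import numpy as np

def perona_malik_step(img, kappa=20.0, dt=0.15):
    """One explicit Perona-Malik diffusion step: smooths flat regions while
    preserving edges whose gradient magnitude exceeds roughly kappa.
    Borders are periodic via np.roll (fine for this toy example)."""
    dn = np.roll(img, 1, axis=0) - img
    ds = np.roll(img, -1, axis=0) - img
    de = np.roll(img, -1, axis=1) - img
    dw = np.roll(img, 1, axis=1) - img
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductance
    return img + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

rng = np.random.default_rng(2)
step_edge = np.hstack([np.zeros((32, 16)), np.full((32, 16), 100.0)])
noisy = step_edge + rng.normal(0, 5, step_edge.shape)
out = noisy.copy()
for _ in range(20):
    out = perona_malik_step(out)
# Noise in the flat halves shrinks while the 0 -> 100 edge survives
print(out[:, :8].std() < noisy[:, :8].std())
```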

8.
This study investigates the inter-trial variability of saccade trajectories observed in five rhesus macaques (Macaca mulatta). For each time point during a saccade, the inter-trial variance of eye position and its covariance with eye end position were evaluated. Data were modeled by a superposition of three noise components due to 1) planning noise, 2) signal-dependent motor noise, and 3) signal-dependent premotor noise entering within an internal feedback loop. Both planning noise and signal-dependent motor noise (together called accumulating noise) predict a simple S-shaped variance increase during saccades, which was not sufficient to explain the data. Adding noise within an internal feedback loop enabled the model to mimic variance/covariance structure in each monkey, and to estimate the noise amplitudes and the feedback gain. Feedback noise had little effect on end point noise, which was dominated by accumulating noise. This analysis was further extended to saccades executed during inactivation of the caudal fastigial nucleus (cFN) on one side of the cerebellum. Saccades ipsiversive to an inactivated cFN showed more end point variance than did normal saccades. During cFN inactivation, eye position during saccades was statistically more strongly coupled to eye position at saccade end. The proposed model could fit the variance/covariance structure of ipsiversive and contraversive saccades. Inactivation effects on saccade noise are explained by a decrease of the feedback gain and an increase of planning and/or signal-dependent motor noise. The decrease of the fitted feedback gain is consistent with previous studies suggesting a role for the cerebellum in an internal feedback mechanism. Increased end point variance did not result from impaired feedback but from the increase of accumulating noise. 
The effects of cFN inactivation on saccade noise indicate that they cannot be explained entirely by the cFN's direct connections to the saccade-related premotor centers in the brainstem.

9.
Introduction
To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising.
Methods & materials
Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We varied the convolutional kernel size and the number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth images were produced by applying the contrast-limited adaptive histogram equalization (CLAHE) method to the input images. The network models were trained so that the quality of the output image, generated from the unprocessed input image, approached that of the ground-truth image. For the image denoising evaluation, noisy input images were used for training.
Results
More than 6 convolutional layers with convolutional kernels >5 × 5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions and rCNNs with >12 convolutions achieved real-time processing at 30 frames per second (fps) with acceptable image quality.
Conclusions
The suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer.

10.
Due to the sensitivity of biological samples to radiation damage, the low-dose imaging conditions used for electron microscopy result in extremely noisy images. The processes of digitization, image alignment, and 3D reconstruction also introduce additional sources of noise into the final 3D structure. In this paper, we investigate the effectiveness of a bilateral denoising filter in various biological electron microscopy applications. In contrast to conventional low-pass filters, which inevitably smooth out noise and structural features simultaneously, the bilateral filter holds a distinct advantage in effectively suppressing noise without blurring high-resolution details. Accordingly, we have applied this technique to individual micrographs, entire 3D reconstructions, segmented proteins, and tomographic reconstructions.
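A brute-force bilateral filter is compact enough to write out in full; each output pixel is a weighted average in which the weights fall off with both spatial distance and intensity difference, which is why edges survive (a didactic sketch with illustrative parameters; production EM software uses far faster approximations):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Brute-force bilateral filter: neighbours are weighted by spatial
    distance (sigma_s) and intensity difference (sigma_r), so smoothing
    stops at strong edges."""
    img = img.astype(float)
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            range_w = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            weights = spatial * range_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

rng = np.random.default_rng(3)
edge = np.hstack([np.zeros((16, 8)), np.full((16, 8), 100.0)])
noisy = edge + rng.normal(0, 5, edge.shape)
smooth = bilateral_filter(noisy)
print(smooth[:, :8].std() < noisy[:, :8].std())   # noise reduced in the flat half
```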

11.

Objective

Dynamic positron emission tomography (PET), which reveals information about both the spatial distribution and temporal kinetics of a radiotracer, enables quantitative interpretation of PET data. Model-based interpretation of dynamic PET images by means of parametric fitting, however, is often a challenging task due to high levels of noise, thus necessitating a denoising step. The objective of this paper is to develop and characterize a denoising framework for dynamic PET based on non-local means (NLM).

Theory

NLM denoising computes weighted averages of voxel intensities assigning larger weights to voxels that are similar to a given voxel in terms of their local neighborhoods or patches. We introduce three key modifications to tailor the original NLM framework to dynamic PET. Firstly, we derive similarities from less noisy later time points in a typical PET acquisition to denoise the entire time series. Secondly, we use spatiotemporal patches for robust similarity computation. Finally, we use a spatially varying smoothing parameter based on a local variance approximation over each spatiotemporal patch.
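The weighting at the heart of NLM can be made concrete; this toy version computes patch distances over the full temporal axis of a (T, H, W) series, echoing the spatiotemporal-patch idea, but it is a simplified illustration rather than the authors' implementation (all names and parameters are hypothetical):

```python
import numpy as np

def nlm_denoise_frame(series, t, patch=1, search=3, h=10.0):
    """Denoise frame t of a (T, H, W) series with non-local means whose
    patch similarity is computed over the whole temporal axis."""
    T, H, W = series.shape
    pad = np.pad(series, ((0, 0), (patch, patch), (patch, patch)), mode="reflect")
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            ref = pad[:, i:i + 2*patch + 1, j:j + 2*patch + 1]  # spatiotemporal patch
            num = den = 0.0
            for di in range(max(0, i - search), min(H, i + search + 1)):
                for dj in range(max(0, j - search), min(W, j + search + 1)):
                    cand = pad[:, di:di + 2*patch + 1, dj:dj + 2*patch + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch dissimilarity
                    w = np.exp(-d2 / (h * h))         # similar patches get large weights
                    num += w * series[t, di, dj]
                    den += w
            out[i, j] = num / den
    return out

rng = np.random.default_rng(4)
truth = np.tile(np.hstack([np.zeros(8), np.full(8, 50.0)]), (12, 16, 1))  # 12 frames
noisy = truth + rng.normal(0, 8, truth.shape)
den = nlm_denoise_frame(noisy, t=0)
print(np.mean((den - truth[0])**2) < np.mean((noisy[0] - truth[0])**2))
```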

Methods

To assess the performance of our denoising technique, we performed a realistic simulation on a dynamic digital phantom based on the Digimouse atlas. For experimental validation, we denoised PET images from a mouse study and a hepatocellular carcinoma patient study. We compared the performance of NLM denoising with four other denoising approaches – Gaussian filtering, PCA, HYPR, and conventional NLM based on spatial patches.

Results

The simulation study revealed significant improvement in bias-variance performance achieved using our NLM technique relative to all the other methods. The experimental data analysis revealed that our technique leads to clear improvement in contrast-to-noise ratio in Patlak parametric images generated from denoised preclinical and clinical dynamic images, indicating its ability to preserve image contrast and high intensity details while lowering the background noise variance.

12.
This paper discusses the suitability, in terms of noise reduction, of various methods which can be applied to an image type often used in radiation therapy: the portal image. Among these methods, the analysis focuses on those operating in the wavelet domain. Wavelet-based methods tested on natural images – such as the thresholding of the wavelet coefficients, the minimization of the Stein unbiased risk estimator on a linear expansion of thresholds (SURE-LET), and the Bayes least-squares method using as a prior a Gaussian scale mixture (BLS-GSM method) – are compared with other methods that operate on the image domain – an adaptive Wiener filter and a nonlocal mean filter (NLM). For the assessment of the performance, the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), the Pearson correlation coefficient, and the Spearman rank correlation (ρ) coefficient are used. The performance of the wavelet filters and the NLM method are similar, but wavelet filters outperform the Wiener filter in terms of portal image denoising. It is shown how BLS-GSM and NLM filters produce the smoothest image, while keeping soft-tissue and bone contrast. As for the computational cost, filters using a decimated wavelet transform (decimated thresholding and SURE-LET) turn out to be the most efficient, with calculation times around 1 s.
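Decimated wavelet thresholding, the cheapest family in this comparison, can be illustrated with a one-level Haar transform and soft thresholding of the detail bands (a minimal sketch, not SURE-LET or BLS-GSM; the threshold value is illustrative):

```python
import numpy as np

def haar_dwt_1level(img):
    """One-level 2-D Haar transform (image dimensions must be even)."""
    a = (img[0::2] + img[1::2]) / 2.0          # row pairs: average
    d = (img[0::2] - img[1::2]) / 2.0          # row pairs: detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt_1level(ll, lh, hl, hh):
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.zeros((a.shape[0] * 2, a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

def soft(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(5)
clean = np.tile(np.linspace(0, 100, 32), (32, 1))
noisy = clean + rng.normal(0, 10, clean.shape)
ll, lh, hl, hh = haar_dwt_1level(noisy)
thr = 10.0   # of the order of the noise level
rec = haar_idwt_1level(ll, soft(lh, thr), soft(hl, thr), soft(hh, thr))
print(np.mean((rec - clean)**2) < np.mean((noisy - clean)**2))
```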

13.
In certain image acquisition processes, such as fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled by a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model, and adopt the contourlet transform, which provides a sparse representation of the directional components in images. We also apply hidden Markov models within a framework that neatly describes the spatial and interscale dependencies of the transform coefficients of natural images. We thus propose an effective denoising algorithm for Poisson-Gaussian noise using the contourlet transform, hidden Markov models and noise estimation in the transform domain, supplemented by cycle spinning and Wiener filtering for further improvement. We finally show experimental results on simulations and fluorescence microscopy images that demonstrate the improved performance of the proposed approach.

14.
As a powerful diagnostic tool, optical coherence tomography (OCT) has been widely used in various clinical settings. However, because of the low-coherence interferometric imaging procedure, OCT images are susceptible to inherent speckle noise that may contaminate subtle structural information. Many supervised learning-based models have achieved impressive speckle reduction when trained on large numbers of noisy-clean paired OCT images, but such pairs are not commonly available in clinical practice. In this article, we conducted a comparative study of the denoising performance of different deep neural networks under an unsupervised Noise2Noise (N2N) strategy, trained only on noisy OCT samples. Four representative network architectures (a U-shaped model, a multi-information-stream model, a straight-information-stream model and a GAN-based model) were investigated on an OCT image dataset acquired from healthy human eyes. All four unsupervised N2N models delivered denoised OCT images comparable with those of supervised learning models, illustrating the effectiveness of unsupervised N2N models for denoising OCT images. Furthermore, U-shaped models and GAN-based models using a UNet generator are the two preferred architectures for reducing speckle noise of OCT images while preserving the fine structural information of retinal layers under unsupervised N2N conditions.

15.
In susceptibility-weighted imaging (SWI), the high resolution required to obtain a proper contrast generation leads to a reduced signal-to-noise ratio (SNR). The application of a denoising filter to produce images with higher SNR and still preserve small structures from excessive blurring is therefore extremely desirable. However, as the distributions of magnitude and phase noise may introduce biases during image restoration, the application of a denoising filter is non-trivial. Taking advantage of the potential multispectral nature of MR images, a multicomponent approach using a Non-Local Means (MNLM) denoising filter may perform better than a component-by-component image restoration method. Here we present a new MNLM-based method (Multicomponent-Imaginary-Real-SWI, hereafter MIR-SWI) to produce SWI images with high SNR and improved conspicuity. Both qualitative and quantitative comparisons of MIR-SWI with the original SWI scheme and previously proposed SWI restoring pipelines showed that MIR-SWI fared consistently better than the other approaches. Noise removal with MIR-SWI also provided improvement in contrast-to-noise ratio (CNR) and vessel conspicuity at higher factors of phase mask multiplications than the one suggested in the literature for SWI vessel imaging. We conclude that a proper handling of noise in the complex MR dataset may lead to improved image quality for SWI data.

16.
The purpose of this study was to examine the dependence of image texture features on MR acquisition parameters and reconstruction using a digital MR imaging phantom. MR signal was simulated in a parallel imaging radiofrequency coil setting as well as a single element volume coil setting, with varying levels of acquisition noise, three acceleration factors, and four image reconstruction algorithms. Twenty-six texture features were measured on the simulated images, ground truth images, and clinical brain images. Subtle algorithm-dependent errors were observed on reconstructed phantom images, even in the absence of added noise. Sources of image error include Gibbs ringing at image edge gradients (tissue interfaces) and well-known artifacts due to high acceleration; two of the iterative reconstruction algorithms studied were able to mitigate these image errors. The difference of the texture features from ground truth, and their variance over reconstruction algorithm and parallel imaging acceleration factor, were compared to the clinical "effect size", i.e., the feature difference between high- and low-grade tumors on T1- and T2-weighted brain MR images of twenty glioma patients. The measured feature error (difference from ground truth) was small for some features, but substantial for others. The feature variance due to reconstruction algorithm and acceleration factor were generally smaller than the clinical effect size. Certain texture features may be preserved by MR imaging, but adequate precautions need to be taken regarding their validity and reliability. We present a general simulation framework for assessing the robustness and accuracy of radiomic textural features under various MR acquisition/reconstruction scenarios.

17.
Denoising is a fundamental early stage in 2-DE image analysis that strongly influences spot detection and pixel-based methods. A novel nonlinear adaptive spatial filter, the median-modified Wiener filter (MMWF), is here compared with five well-established denoising techniques (median, Wiener, Gaussian and polynomial Savitzky-Golay filters; wavelet denoising) to suggest, by means of fuzzy-set evaluation, the best denoising approach to use in practice. Although the median filter and wavelet denoising achieved the best performance on spike and Gaussian noise respectively, they are unsuitable for the simultaneous removal of different types of noise, because their best setting is noise-dependent. Conversely, MMWF, which ranked second in each single denoising category, was evaluated as the best filter for global denoising, since its best setting is invariant to the type of noise. In addition, the median filter eroded the edges of isolated spots and filled the space between close-set spots, whereas MMWF, thanks to a novel filter effect (the drop-off effect), does not suffer from the erosion problem, preserves the morphology of close-set spots, and avoids spot and spike fuzzification, an aberration encountered with the Wiener filter. In our tests, MMWF was assessed as the best choice when the goal is to minimise spot edge aberrations while removing spike and Gaussian noise.
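The MMWF idea, replacing the local mean inside an adaptive Wiener filter with a local median, can be sketched as follows; this is our reading of the construction, and the published filter may differ in detail:

```python
import numpy as np
from scipy import ndimage

def local_wiener(img, size=3, noise_var=None, center="mean"):
    """Local adaptive Wiener filter; with center='median' this mimics the
    median-modified Wiener filter (MMWF) idea of substituting the local
    median for the local mean (a sketch, not the published algorithm)."""
    img = img.astype(float)
    if center == "median":
        m = ndimage.median_filter(img, size=size)
    else:
        m = ndimage.uniform_filter(img, size=size)
    # Local variance around the local mean
    m2 = ndimage.uniform_filter(img**2, size=size)
    v = np.maximum(m2 - ndimage.uniform_filter(img, size=size)**2, 0.0)
    if noise_var is None:
        noise_var = v.mean()                    # Wiener's usual noise estimate
    gain = np.maximum(v - noise_var, 0.0) / np.maximum(v, 1e-12)
    return m + gain * (img - m)                 # flat areas -> m, edges -> img

rng = np.random.default_rng(6)
clean = np.tile(np.hstack([np.zeros(16), np.full(16, 80.0)]), (32, 1))
noisy = clean + rng.normal(0, 8, clean.shape)
mmwf = local_wiener(noisy, center="median")
print(np.mean((mmwf - clean)**2) < np.mean((noisy - clean)**2))
```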

18.
王小兵  孙久运 《生物磁学》2011,(20):3954-3957
Objective: Medical images are contaminated by noise to varying degrees during acquisition, storage and transmission, which greatly limits their use in clinical diagnosis. To remove noise from medical images effectively, a hybrid filtering algorithm is proposed. Methods: The algorithm first applies a morphological opening to the image contaminated with Gaussian and salt-and-pepper noise, then performs a two-dimensional wavelet decomposition of the opened image to obtain high- and low-frequency wavelet coefficients. The low-frequency coefficients are kept unchanged, the high-frequency coefficients are passed through a Wiener filter, and the wavelet coefficients are finally reconstructed. Results: The hybrid algorithm, wavelet threshold denoising, median filtering and Wiener filtering were each applied to medical images containing mixed noise; the PSNR of the images denoised by the hybrid algorithm was clearly higher than that of the other three methods. Conclusion: The proposed hybrid filtering algorithm is an effective method for removing noise from medical images. (Translated from the Chinese abstract.)
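A hedged sketch of this hybrid pipeline, morphological opening followed by Wiener filtering; for brevity the Wiener step is applied to the opened image directly rather than to the high-frequency wavelet coefficients, so this illustrates the idea, not the exact algorithm (note that opening removes only bright impulses; dark "pepper" would need a closing as well):

```python
import numpy as np
from scipy import ndimage, signal

def hybrid_denoise(img, open_size=3, wiener_size=5):
    """Grey-scale opening knocks out bright salt impulses, then local
    Wiener filtering suppresses the remaining Gaussian component."""
    opened = ndimage.grey_opening(img.astype(float), size=(open_size, open_size))
    return signal.wiener(opened, mysize=wiener_size)

rng = np.random.default_rng(7)
clean = np.tile(np.linspace(0, 100, 32), (32, 1))
noisy = clean + rng.normal(0, 8, clean.shape)
salt = rng.random(clean.shape) < 0.05
noisy[salt] = 255.0                              # salt impulses
out = hybrid_denoise(noisy)
print(np.mean((out - clean)**2) < np.mean((noisy - clean)**2))
```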

19.
In this paper, a new filtering method is presented to remove Rician noise from magnetic resonance images (MRI) acquired with a single-coil MRI acquisition system. The filter is based on a nonlocal neutrosophic set (NLNS) approach to Wiener filtering. A neutrosophic set (NS), part of neutrosophy theory, studies the origin, nature, and scope of neutralities, as well as their interactions with different ideational spectra. We apply the neutrosophic set to the image domain and define some concepts and operators for image denoising. First, the nonlocal mean is applied to the noisy MRI. The resulting image is transformed into the NS domain, described by three membership sets: true (T), indeterminacy (I) and false (F). The entropy of the neutrosophic set is defined and employed to measure the indeterminacy. The ω-Wiener filtering operation is applied to T and F to decrease the set indeterminacy and remove the noise. Experiments were conducted on simulated MR images from the BrainWeb database and on clinical MR images. The results show that the NLNS Wiener filter produces better denoising results, in both qualitative and quantitative measures, than other denoising methods such as the classical Wiener filter, the anisotropic diffusion filter, total variation minimization and the nonlocal means filter. The visual and diagnostic quality of the denoised image is well preserved.

20.
In this paper, a novel methodology for estimating the shape of human biconcave red blood cells (RBCs) from color scattering images is presented. The information retrieval process includes image normalization, feature extraction using two-dimensional discrete transforms such as the angular radial transform (ART), Zernike moments and a Gabor filter bank, and feature dimension reduction using both independent component analysis (ICA) and principal component analysis (PCA). A radial basis function neural network (RBF-NN) estimates the RBC geometrical properties. The proposed method is evaluated on both regression and identification tasks by processing images of a simulated device used to acquire the scattering patterns of moving RBCs. The simulated device consists of a tricolor light source (a light-emitting diode, LED) and RBCs moving in a thin glass channel. The evaluation database includes 23,625 scattering images, obtained by means of the boundary element method. The regression and identification accuracy of the actual RBC shape is estimated using three feature sets in the presence of additive white Gaussian noise from 60 to 10 dB SNR and systematic distortion, giving a mean error rate of less than 1% of the actual RBC shape and a mean identification rate of more than 99%.
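Of the pipeline's stages, the PCA dimension reduction step is the easiest to make concrete (a generic numpy sketch, not the paper's implementation):

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors (rows of X) onto the top-k principal
    components, the same kind of dimensionality reduction applied here
    before the RBF network."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data: rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

rng = np.random.default_rng(8)
# 200 feature vectors that really live on a 2-D subspace of R^10, plus noise
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(200, 10))
Z, axes = pca_reduce(X, k=2)
print(Z.shape)   # (200, 2)
```

The reduced vectors `Z` would then be fed to the regressor in place of the raw feature set.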

