Similar Articles

20 similar articles were retrieved.
1.
D. Koundal  S. Gupta  S. Singh 《IRBM》2018,39(1):43-53

Background

Neutrosophic based methods are becoming very popular in denoising of images due to the capability of handling indeterminacy. The main goal of denoising is to maintain balance between edge preservation and speckle reduction.

Methods

To achieve this balance, a neutrosophic total variation method using Nakagami statistics has been explored to develop an efficient speckle reduction method. The proposed Neutrosophic based Nakagami Total Variation (NNTV) method first transforms the image into the neutrosophic domain and then employs the neutrosophic filtering process for speckle reduction. NNTV quantifies the indeterminacy of the image by computing the entropy of the indeterminate set.

Results

The performance of the proposed method has been evaluated quantitatively by quality metrics on synthetic images, and qualitatively on real thyroid ultrasound images through visual examination by medical experts and a Mean Opinion Score.

Conclusion

The results show that the NNTV method performed better than other speckle reduction methods in terms of both speckle suppression and edge preservation.
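The total-variation idea underlying NNTV can be sketched without the neutrosophic machinery. Below is a minimal gradient-descent sketch of classic (ROF-style) TV denoising in NumPy; it is not the paper's NNTV method — the neutrosophic-domain transform, Nakagami statistics and entropy weighting are all omitted, and the function name and parameter values are illustrative only.

```python
import numpy as np

def tv_denoise(img, weight=0.1, n_iter=200, step=0.2):
    """Gradient-descent sketch of classic (ROF-style) total-variation
    denoising: approximately minimize 0.5*||u - img||^2 + weight*TV(u)
    using a smoothed TV gradient. NOT the paper's NNTV method."""
    u = img.astype(float).copy()
    eps = 1e-8
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u          # forward differences
        uy = np.roll(u, -1, axis=0) - u
        norm = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / norm, uy / norm            # normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += step * (weight * div - (u - img))   # descend the energy gradient
    return u

# toy example: a flat patch corrupted by additive Gaussian noise
rng = np.random.default_rng(0)
clean = np.ones((32, 32))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy, weight=0.5)
```

The fidelity term pulls the estimate toward the data while the divergence term penalizes gradient magnitude, which is what trades speckle suppression against edge preservation.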

2.
This paper discusses the suitability, in terms of noise reduction, of various methods which can be applied to an image type often used in radiation therapy: the portal image. Among these methods, the analysis focuses on those operating in the wavelet domain. Wavelet-based methods tested on natural images – such as the thresholding of the wavelet coefficients, the minimization of the Stein unbiased risk estimator on a linear expansion of thresholds (SURE-LET), and the Bayes least-squares method using as a prior a Gaussian scale mixture (BLS-GSM method) – are compared with other methods that operate on the image domain – an adaptive Wiener filter and a nonlocal means filter (NLM). For the assessment of the performance, the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), the Pearson correlation coefficient, and the Spearman rank correlation (ρ) coefficient are used. The performance of the wavelet filters and the NLM method is similar, but wavelet filters outperform the Wiener filter in terms of portal image denoising. It is shown how the BLS-GSM and NLM filters produce the smoothest images while keeping soft-tissue and bone contrast. As for the computational cost, filters using a decimated wavelet transform (decimated thresholding and SURE-LET) turn out to be the most efficient, with calculation times around 1 s.
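The coefficient-thresholding baseline in this comparison can be sketched compactly. Below is an illustrative single-level Haar decomposition with soft thresholding of the detail bands (pure NumPy; assumes even image sides). SURE-LET and BLS-GSM are considerably more elaborate, and the helper names here are made up for the example.

```python
import numpy as np

def haar2(x):
    """Single-level 2D Haar transform (sides of x must be even)."""
    a = (x[0::2] + x[1::2]) / 2.0              # row averages
    d = (x[0::2] - x[1::2]) / 2.0              # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0       # approximation band
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0       # detail bands
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def soft(c, t):
    """Soft-thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def haar_denoise(img, t=0.2):
    """Threshold the detail bands, keep the approximation band."""
    ll, lh, hl, hh = haar2(img)
    return ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))

rng = np.random.default_rng(1)
noisy = 1.0 + 0.3 * rng.standard_normal((32, 32))
denoised = haar_denoise(noisy)
```

Noise spreads roughly evenly across wavelet coefficients while image structure concentrates in a few large ones, which is why shrinking small detail coefficients suppresses noise.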

3.
Denoising is a fundamental early stage in 2‐DE image analysis that strongly influences spot detection and pixel‐based methods. A novel nonlinear adaptive spatial filter (median‐modified Wiener filter, MMWF) is here compared with five well‐established denoising techniques (median, Wiener, Gaussian, and polynomial Savitzky–Golay filters; wavelet denoising) to suggest, by means of fuzzy sets evaluation, the best denoising approach to use in practice. Although the median filter and wavelet denoising achieved the best performance in spike and Gaussian denoising respectively, they are unsuitable for the simultaneous removal of different types of noise, because their best setting is noise‐dependent. Conversely, MMWF, which arrived second in each single denoising category, was evaluated as the best filter for global denoising, since its best setting is invariant to the type of noise. In addition, the median filter eroded the edges of isolated spots and filled the space between close‐set spots, whereas MMWF, thanks to a novel filter effect (the drop‐off effect), does not suffer from this erosion problem, preserves the morphology of close‐set spots, and avoids spot and spike fuzzification, an aberration encountered with the Wiener filter. In our tests, MMWF was assessed as the best choice when the goal is to minimize spot edge aberrations while removing spike and Gaussian noise.
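The median baseline that MMWF builds on can be shown in a few lines. The sketch below is a plain 3×3 median filter in NumPy — not the median-modified Wiener combination itself — illustrating why the median is the reference filter for spike (impulse) noise.

```python
import numpy as np

def median3x3(x):
    """Plain 3x3 median filter with edge padding: the classic spike
    remover. The MMWF of the paper (median-modified Wiener) is NOT
    reproduced here; this only shows the baseline behaviour."""
    p = np.pad(x, 1, mode='edge')
    h, w = x.shape
    windows = np.stack([p[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

# isolated bright spikes on a flat patch are removed exactly,
# which a linear (mean/Wiener-type) filter can only spread out
img = np.full((16, 16), 0.5)
spiky = img.copy()
spiky[2, 3] = spiky[8, 8] = spiky[12, 5] = 1.0
restored = median3x3(spiky)
```

The trade-off the abstract describes is visible here: the same order statistic that deletes spikes also erodes thin bright structures, which is what MMWF is designed to avoid.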

4.
In susceptibility-weighted imaging (SWI), the high resolution required to obtain a proper contrast generation leads to a reduced signal-to-noise ratio (SNR). The application of a denoising filter to produce images with higher SNR and still preserve small structures from excessive blurring is therefore extremely desirable. However, as the distributions of magnitude and phase noise may introduce biases during image restoration, the application of a denoising filter is non-trivial. Taking advantage of the potential multispectral nature of MR images, a multicomponent approach using a Non-Local Means (MNLM) denoising filter may perform better than a component-by-component image restoration method. Here we present a new MNLM-based method (Multicomponent-Imaginary-Real-SWI, hereafter MIR-SWI) to produce SWI images with high SNR and improved conspicuity. Both qualitative and quantitative comparisons of MIR-SWI with the original SWI scheme and previously proposed SWI restoring pipelines showed that MIR-SWI fared consistently better than the other approaches. Noise removal with MIR-SWI also provided improvement in contrast-to-noise ratio (CNR) and vessel conspicuity at higher factors of phase mask multiplications than the one suggested in the literature for SWI vessel imaging. We conclude that a proper handling of noise in the complex MR dataset may lead to improved image quality for SWI data.
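The non-local means principle at the core of MNLM can be sketched for a single real-valued channel. The code below is a tiny illustrative NLM in NumPy; the multicomponent extension to complex MR data described in the abstract is not reproduced, and the parameter values are illustrative.

```python
import numpy as np

def nlm_denoise(img, patch=1, search=2, h=0.2):
    """Tiny single-channel non-local means sketch: each pixel becomes a
    weighted average of nearby pixels whose surrounding (2*patch+1)^2
    patches look similar. NOT the paper's MNLM for complex MR data."""
    pad = patch + search
    p = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = p[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            wsum = acc = 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = p[ni - patch:ni + patch + 1,
                             nj - patch:nj + patch + 1]
                    # patch similarity controls the averaging weight
                    w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                    wsum += w
                    acc += w * p[ni, nj]
            out[i, j] = acc / wsum
    return out

rng = np.random.default_rng(2)
noisy = 0.5 + 0.2 * rng.standard_normal((16, 16))
denoised = nlm_denoise(noisy)
```

Because weights come from patch similarity rather than spatial proximity alone, NLM averages over repeating structure and preserves small features better than a Gaussian blur.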

5.
In certain image acquisition processes, such as fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model and adopt the contourlet transform, which provides a sparse representation of the directional components in images. We also apply hidden Markov models within a framework that neatly describes the spatial and interscale dependencies characteristic of the transform coefficients of natural images. In this paper, an effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models and noise estimation in the transform domain. We supplement the algorithm by cycle spinning and Wiener filtering for further improvements. We finally show experimental results with simulations and fluorescence microscopy images which demonstrate the improved performance of the proposed approach.
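A standard first step for Poisson-dominated data, closely related to what this paper addresses, is variance stabilization. The sketch below shows the classic Anscombe transform only — the paper's contourlet transform and hidden Markov machinery are well beyond this example.

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform: maps Poisson(lam) counts
    to values with approximately unit variance, so that a Gaussian
    denoiser can be applied afterwards."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    # simple algebraic inverse (biased at very low counts)
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(3)
counts = rng.poisson(20.0, size=100000)   # photon-limited measurements
stabilized = anscombe(counts)
```

After stabilization the noise is approximately Gaussian with unit variance regardless of the local signal level, which is what lets signal-independent denoisers be reused on photon-limited data.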

6.
王小兵  孙久运 《生物磁学》2011,(20):3954-3957
Objective: Medical images are contaminated by noise to varying degrees during acquisition, storage and transmission, which greatly limits their use in clinical diagnosis and treatment. To remove noise from medical images effectively, a hybrid filtering algorithm is proposed. Methods: The algorithm first applies a morphological opening operation to an image containing Gaussian and salt-and-pepper noise, and then performs a two-dimensional wavelet decomposition of the opened image to obtain the high- and low-frequency wavelet coefficients. The low-frequency coefficients are left unchanged, the high-frequency coefficients are filtered with a Wiener filter, and the wavelet coefficients are finally reconstructed. Results: The hybrid filtering algorithm, wavelet threshold denoising, median filtering and Wiener filtering were each applied to medical images containing the mixed noise; the PSNR of images denoised by the hybrid algorithm was markedly higher than that of the other three methods. Conclusion: The hybrid filtering algorithm is an effective method for removing noise from medical images.
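The first stage of this hybrid pipeline, grayscale morphological opening, can be sketched directly in NumPy. This is only the opening step with a 3×3 structuring element (the wavelet and Wiener stages are not shown), and the helper names are made up for the example.

```python
import numpy as np

def erode3(x):
    """3x3 grayscale erosion (windowed minimum, edge-padded)."""
    p = np.pad(x, 1, mode='edge')
    h, w = x.shape
    return np.stack([p[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)]).min(axis=0)

def dilate3(x):
    """3x3 grayscale dilation (windowed maximum, edge-padded)."""
    p = np.pad(x, 1, mode='edge')
    h, w = x.shape
    return np.stack([p[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)]).max(axis=0)

def opening3(x):
    """Opening = erosion then dilation: removes bright (salt) spikes
    before the wavelet/Wiener stages. Dark (pepper) spikes would need
    the dual operation, closing."""
    return dilate3(erode3(x))

img = np.full((16, 16), 0.5)
spiky = img.copy()
spiky[3, 4] = spiky[9, 9] = spiky[13, 2] = 1.0   # bright spikes
opened = opening3(spiky)
```

Erosion deletes structures smaller than the structuring element and dilation restores the size of what survives, so isolated bright impulses vanish while larger regions keep their shape.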

7.
Introduction

Medical images are usually affected by biological and physical artifacts or noise, which reduces image quality, hinders visual analysis and interpretation, and thus requires higher doses and an increased radiograph repetition rate.

Objectives

This study aims to assess image quality during CT abdomen and brain examinations using filtering techniques, and to estimate the radiogenic risk associated with these examinations.

Materials and Methods

The data were collected from the Radiology Department at Royal Care International (RCI) Hospital, Khartoum, Sudan. The study included 100 abdominal CT images and 100 brain CT images selected from adult patients. The filters applied were the mean filter, Gaussian filter, median filter and minimum filter. Image quality after denoising was measured using the Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index Metric (SSIM).

Results

The results show that the image quality parameters improve after application of the filters. The median filter gave the greatest improvement in image quality as measured by PSNR and SSIM, and is thus considered the best of the applied filters for noise removal.

Discussion

The noise removed by the different filters applied to the CT images enhanced image quality, effectively revealing the important details of the images without increasing the patients' risk from higher doses.

Conclusions

Filtering and image reconstruction techniques not only reduce the dose, and thus the radiation risks, but also yield high-quality images which allow better diagnosis.
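Two of the quality metrics used in this study are simple to compute. The sketch below implements MSE and PSNR in NumPy (SSIM, the third metric, needs local-window statistics and is omitted); the default `data_range` of 255 assumes 8-bit images.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    Higher is better; identical images give infinity."""
    m = mse(ref, test)
    return float('inf') if m == 0 else 10.0 * np.log10(data_range ** 2 / m)
```

For example, an 8-bit image whose pixels are all off by 16 gray levels has an MSE of 256 and a PSNR of roughly 24 dB.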

8.
An image denoising algorithm based on wavelet neural networks
Biomedical images are inevitably corrupted by noise during acquisition, which makes noise removal an important research topic in biomedical image processing. This work introduces the wavelet neural network into the field of image denoising and optimizes the network learning process through several techniques, ultimately establishing a new image denoising algorithm. Experimental results show that the algorithm outperforms traditional methods such as median filtering in removing noise and is highly robust; at the same time, it preserves image detail as far as possible, giving good fidelity.

9.
《Médecine Nucléaire》2007,31(5):219-234
Scintigraphic images are strongly affected by Poisson noise. This article presents the results of a comparison between denoising methods for Poisson noise according to different criteria: the gain in signal-to-noise ratio, the preservation of resolution and contrast, and the visual quality. The wavelet techniques recently developed to denoise Poisson-noise-limited images are divided into two groups based on: (1) the Haar representation, and (2) the transformation of Poisson noise into white Gaussian noise by the Haar–Fisz transform followed by denoising. In this study, three variants of the first group and three variants of the second, including the adaptive Wiener filter, four types of wavelet thresholding and the Bayesian method of Pizurica, were compared to Metz and Hanning filters and to Shine, a systematic noise elimination process. All these methods, except Shine, are parametric. For each of them, ranges of optimal values for the parameters were identified as a function of the aforementioned criteria. The intersection of ranges for the wavelet methods without thresholding was empty, and these methods were therefore not compared further quantitatively. The thresholding techniques and Shine gave the best results in resolution and contrast. The largest improvement in signal-to-noise ratio was obtained by the filters. Ideally, these filters should be accurately defined for each image, which is difficult in the clinical context; moreover, they generate oscillation artefacts. In addition, the wavelet techniques did not bring significant improvements and are rather slow. Therefore, Shine, which is fast and works automatically, appears to be an interesting alternative.

10.
One of the most commonly used methods for protein separation is 2‐DE. After 2‐DE gel scanning, images with a plethora of spot features emerge that are usually contaminated by inherent noise. The objective of the denoising process is to remove noise to the extent that the true spots are recovered correctly and accurately, i.e. without introducing distortions that lead to the detection of false spot features. In this paper we propose and justify the use of the contourlet transform as a tool for 2‐DE gel image denoising. We compare its effectiveness with state‐of‐the‐art methods such as wavelet‐based multiresolution image analysis and spatial filtering. We show that contourlets not only achieve better average S/N performance than wavelets and spatial filters, but also better preserve spot boundaries and faint spots and alter less the intensities of informative spot features, leading to more accurate spot volume estimation and more reliable spot detection, operations that are essential to differential expression proteomics for biomarker discovery.

11.

Background

To perform a three-dimensional (3-D) reconstruction of electron cryomicroscopy (cryo-EM) images of viruses, it is necessary to determine the similarity of image blocks of the two-dimensional (2-D) projections of the virus. The projections containing high resolution information are typically very noisy. Instead of the traditional Euler metric, this paper proposes a new method, based on the geodesic metric, to measure the similarity of blocks.

Results

Our method is a 2-D image denoising approach. A data set of 2243 cytoplasmic polyhedrosis virus (CPV) capsid particle images in different orientations was used to test the proposed method. Relative to block-matching and 3D filtering (BM3D), Stein's unbiased risk estimator (SURE) shrinkage, BayesShrink and K-singular value decomposition (K-SVD), the experimental results show that the proposed method can achieve a peak signal-to-noise ratio (PSNR) of 45.65 dB. The method can remove the noise from the cryo-EM image and improve the accuracy of particle picking.

Conclusions

The main contribution of the proposed model is to apply the geodesic distance to measure the similarity of image blocks. We conclude that manifold learning methods can effectively eliminate the noise of the cryo-EM image and improve the accuracy of particle picking.

12.
Fluoroscopic images exhibit severe signal-dependent quantum noise, due to the reduced X-ray dose involved in image formation, that is generally modelled as Poisson-distributed. However, image gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improve the quality of fluoroscopic images and their clinical information content. Simple average filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for both signal-dependent noise (AAS, BM3Dc, HHM, TLS) and independent additive noise (AV, BM3D, K-SVD) is presented. Simulated test images degraded by various levels of Poisson quantum noise and real clinical fluoroscopic images were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. Performances of the algorithms were evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restorations than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective in denoising both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to support the denoising algorithms in performing more effectively.

13.
The low radiation conditions and the predominantly phase-object image formation of cryo-electron microscopy (cryo-EM) result in extremely high noise levels and low contrast in the recorded micrographs. The process of single particle or tomographic 3D reconstruction does not completely eliminate this noise and is even capable of introducing new sources of noise during alignment or when correcting for instrument parameters. The recently developed Digital Paths Supervised Variance (DPSV) denoising filter uses local variance information to control regional noise in a robust and adaptive manner. The performance of the DPSV filter was evaluated in this review qualitatively and quantitatively using simulated and experimental data from cryo-EM and tomography in two and three dimensions. We also assessed the benefit of filtering experimental reconstructions for visualization purposes and for enhancing the accuracy of feature detection. The DPSV filter eliminates high-frequency noise artifacts (density gaps), which would normally preclude the accurate segmentation of tomography reconstructions or the detection of alpha-helices in single-particle reconstructions. This collaborative software development project was carried out entirely by virtual interactions among the authors using publicly available development and file sharing tools.

14.
Software for the removal of noise from reaction curves using the principle of Fourier filtering has been written in BASIC to execute on a PC. The program inputs reaction traces which are subjected to a rotation-inversion process to produce functions suitable for Fourier analysis. Fourier transformation into the frequency domain is followed by multiplication of the transform by a rectangular filter function, to remove the noise frequencies. Inverse transformation then yields a noise-reduced reaction trace suitable for further analysis. The program is interactive at each stage and could easily be modified to remove noise from a range of input data types. Received on October 20, 1988; accepted on January 10, 1989
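The rectangular Fourier filtering described here translates directly into a few lines of NumPy. The sketch below omits the rotation-inversion preconditioning step of the original BASIC program, and uses a synthetic periodic trace purely for illustration.

```python
import numpy as np

def fourier_lowpass(trace, keep_frac=0.05):
    """Rectangular Fourier filter: transform the trace to the frequency
    domain, zero every coefficient above a cutoff, and transform back.
    The rotation-inversion preconditioning of the original program is
    omitted in this sketch."""
    spec = np.fft.rfft(trace)
    cutoff = max(1, int(len(spec) * keep_frac))
    spec[cutoff:] = 0.0                      # rectangular filter
    return np.fft.irfft(spec, n=len(trace))

rng = np.random.default_rng(4)
t = np.arange(512) / 512.0
clean = np.sin(2 * np.pi * 3 * t)            # smooth periodic "trace"
noisy = clean + 0.05 * rng.standard_normal(t.size)
smoothed = fourier_lowpass(noisy)
```

Because white noise spreads its energy evenly across all frequencies while a smooth trace concentrates in the lowest few, discarding the high-frequency bins removes most of the noise at little cost to the signal.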

15.
Micro-CT provides high-resolution 3D imaging of micro-architecture in a non-invasive way, which makes it a significant tool in biomedical research and preclinical applications. Due to the limited power of the micro-focus X-ray tube, photon starvation occurs and noise is inevitable in the projection images, resulting in degraded spatial resolution, contrast and image detail. In this paper, we propose a conditional generative adversarial network (C-GAN) denoising algorithm in the projection domain for micro-CT imaging. The noise statistics are exploited directly, and a novel variance loss is developed to suppress blurring effects during the denoising procedure. C-GAN is employed as the framework to implement the denoising task. To guarantee pixel-wise accuracy, a fully convolutional network serves as the generator. During the alternating training of the generator and the discriminator, the network learns the noise distribution automatically. Moreover, residual learning and skip-connection architecture are applied for faster network training and further feature fusion. To evaluate the denoising performance, a mouse lung, a milkvetch root and a bamboo stick were imaged by micro-CT in the experiments. Compared with BM3D, CNN-MSE and CNN-VGG, the proposed method suppresses noise effectively and recovers image details without introducing any artifacts or blurring. The results prove that our method is feasible, efficient and practical.

16.
In this work, we compare the merits of three temporal data deconvolution methods for use in the filtered backprojection algorithm for photoacoustic tomography (PAT). We evaluate the standard Fourier division technique, the Wiener deconvolution filter, and a Tikhonov L-2 norm regularized matrix inversion method. Our experiments were carried out on subjects of various appearances, namely a pencil lead, two man-made phantoms, an in vivo subcutaneous mouse tumor model, and a perfused and excised mouse brain. All subjects were scanned using an imaging system with a rotatable hemispherical bowl, into which 128 ultrasound transducer elements were embedded in a spiral pattern. We characterized the frequency response of each deconvolution method, compared the final image quality achieved by each deconvolution technique, and evaluated each method's robustness to noise. The frequency response was quantified by measuring the accuracy with which each filter recovered the ideal flat frequency spectrum of an experimentally measured impulse response. Image quality under the various scenarios was quantified by computing noise versus resolution curves for a point source phantom, as well as the full width at half maximum (FWHM) and contrast-to-noise ratio (CNR) of selected image features such as dots and linear structures in additional imaging subjects. It was found that the Tikhonov filter yielded the most accurate balance of lower and higher frequency content (as measured by comparing the spectra of deconvolved impulse response signals to the ideal flat frequency spectrum), achieved a competitive image resolution and contrast-to-noise ratio, and yielded the greatest robustness to noise. While the Wiener filter achieved a similar image resolution, it tended to underrepresent the lower frequency content of the deconvolved signals, and hence of the reconstructed images after backprojection. In addition, its robustness to noise was poorer than that of the Tikhonov filter. The performance of the Fourier filter was found to be the poorest of all three methods, based on the reconstructed images' lowest resolution (blurriest appearance), generally lowest contrast-to-noise ratio, and lowest robustness to noise. Overall, the Tikhonov filter was deemed to produce the most desirable image reconstructions.
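The relationship between the three filters compared here is easy to see in one dimension. The sketch below is a frequency-domain Tikhonov-regularized deconvolution; with the regularization weight driven to zero it degenerates into the unstable Fourier-division method. It assumes circular convolution, and the signal, kernel and parameter values are all illustrative, not the paper's experimental setup.

```python
import numpy as np

def tikhonov_deconvolve(signal, kernel, lam=1e-4):
    """Frequency-domain deconvolution with Tikhonov (L2) regularization:
    X = conj(H) * Y / (|H|^2 + lam). lam -> 0 recovers plain Fourier
    division, which explodes wherever |H| is small."""
    n = len(signal)
    H = np.fft.rfft(kernel, n)
    Y = np.fft.rfft(signal, n)
    return np.fft.irfft(np.conj(H) * Y / (np.abs(H) ** 2 + lam), n)

rng = np.random.default_rng(5)
n = 256
x = np.zeros(n)
x[60], x[150] = 1.0, 0.5                               # two point sources
kernel = np.zeros(n)
kernel[:9] = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2)
kernel /= kernel.sum()                                  # blur response
y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(kernel), n)
y += 0.001 * rng.standard_normal(n)                     # measurement noise
recovered = tikhonov_deconvolve(y, kernel)
```

The `lam` term caps the gain at frequencies where the kernel response is weak, which is exactly the noise-robustness advantage the study attributes to the Tikhonov filter over Fourier division.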

17.
Positron emission tomography (PET) images have been incorporated into the radiotherapy process as a powerful tool to assist in the contouring of lesions, leading to the emergence of a broad spectrum of automatic segmentation schemes for PET images (PET-AS). However, not all proposed PET-AS algorithms take into consideration the previous steps of image preparation. PET image noise has been shown to be one of the most relevant factors affecting segmentation tasks. This study demonstrates a nonlinear filtering method based on spatially adaptive wavelet shrinkage using three-dimensional context modelling that considers the correlation of each voxel with its neighbours. Using this noise reduction method, excellent edge preservation properties are obtained. To evaluate the influence of this filter on the segmentation schemes, it was compared with a set of Gaussian filters (the most conventional) and with two previously optimised edge-preserving filters. Five segmentation schemes were used (those most commonly implemented in commercial software): fixed thresholding, adaptive thresholding, watershed, adaptive region growing and affinity propagation clustering. Segmentation results were evaluated using the Dice similarity coefficient and classification error. A simple metric was also included to improve the characterisation of the filters used for induced blurring evaluation, based on the measurement of the average edge width. The proposed noise reduction procedure improves the results of segmentation throughout the performed settings and was shown to be more stable in low-contrast and high-noise conditions. Thus, the capacity of the segmentation method is reinforced by the denoising plan used.

18.
Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, judiciously integrating the merits of rough-fuzzy computing and multiresolution image analysis. The proposed method assumes that the major brain tissues, namely gray matter, white matter, and cerebrospinal fluid, have different textural properties. The dyadic wavelet analysis is used to extract the scale-space feature vector for each pixel, while rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method is introduced, based on the maximum relevance-maximum significance criterion, to select relevant and significant textural features for the segmentation problem, while a mathematical-morphology-based skull stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices.
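One of the validity indices used in this kind of evaluation, the Dice similarity coefficient, is a one-liner worth having on hand. The sketch below computes it for binary masks (the classification-error metric is not shown).

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). Ranges from 0 (disjoint) to 1 (identical)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                  # two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom
```

For example, masks `[1,1,0,0]` and `[1,0,0,0]` overlap in one pixel out of three labeled pixels total, giving a Dice score of 2/3.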

19.
Due to the sensitivity of biological samples to radiation damage, the low dose imaging conditions used for electron microscopy result in extremely noisy images. The processes of digitization, image alignment, and 3D reconstruction also introduce additional sources of noise in the final 3D structure. In this paper, we investigate the effectiveness of a bilateral denoising filter in various biological electron microscopy applications. In contrast to conventional low pass filters, which inevitably smooth out both noise and structural features simultaneously, we found that the bilateral filter holds a distinct advantage in being capable of effectively suppressing noise without blurring the high resolution details. Accordingly, we have applied this technique to individual micrographs, entire 3D reconstructions, segmented proteins, and tomographic reconstructions.

20.
Advances in three-dimensional (3D) electron microscopy (EM) and image processing are providing considerable improvements in the resolution of subcellular volumes, macromolecular assemblies and individual proteins. However, the recovery of high-frequency information from biological samples is hindered by specimen sensitivity to beam damage. Low dose electron cryo-microscopy conditions afford reduced beam damage but typically yield images with reduced contrast and low signal-to-noise ratios (SNRs). Here, we describe the properties of a new discriminative bilateral (DBL) filter that is based upon the bilateral filter implementation of Jiang et al. (Jiang, W., Baker, M.L., Wu, Q., Bajaj, C., Chiu, W., 2003. Applications of a bilateral denoising filter in biological electron microscopy. J. Struct. Biol. 128, 82-97.). In contrast to the latter, the DBL filter can distinguish between object edges and high-frequency noise pixels through the use of an additional photometric exclusion function. As a result, high-frequency noise pixels are smoothed, yet object edge detail is preserved. In the present study, we show that the DBL filter effectively reduces noise in low SNR single particle data as well as in cellular tomograms of stained plastic sections. The properties of the DBL filter are discussed in terms of its usefulness for single particle analysis and for pre-processing cellular tomograms ahead of image segmentation.
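The plain bilateral filter that the DBL filter extends can be sketched in a few lines. The code below is the standard bilateral idea only — the DBL filter's additional photometric exclusion function is not reproduced — and the parameter values are illustrative.

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.2):
    """Plain bilateral filter sketch: every pixel is averaged with its
    neighbours, weighted by spatial distance AND by intensity
    difference, so edges survive the smoothing. The DBL extension
    described above is not reproduced here."""
    p = np.pad(img, radius, mode='reflect')
    di, dj = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(di ** 2 + dj ** 2) / (2 * sigma_s ** 2))
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            win = p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # photometric term: large intensity jumps get near-zero weight
            w = spatial * np.exp(-(win - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            out[i, j] = (w * win).sum() / w.sum()
    return out

rng = np.random.default_rng(6)
step = np.zeros((16, 16))
step[:, 8:] = 1.0                        # a sharp edge
noisy = step + 0.05 * rng.standard_normal(step.shape)
filtered = bilateral(noisy)
```

Pixels across a strong edge differ too much in intensity to receive weight, so the edge stays sharp while noise within each flat region is averaged away — which is precisely why a plain low-pass filter is a poor substitute in low-SNR EM data.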
