Similar Articles
20 similar articles found (search took 609 ms)
1.
《Médecine Nucléaire》2007,31(5):219-234
Scintigraphic images are strongly affected by Poisson noise. This article presents the results of a comparison between denoising methods for Poisson noise according to different criteria: the gain in signal-to-noise ratio, the preservation of resolution and contrast, and the visual quality. The wavelet techniques recently developed to denoise Poisson-noise-limited images are divided into two groups based on: (1) the Haar representation; (2) the transformation of Poisson noise into white Gaussian noise by the Haar–Fisz transform followed by a denoising step. In this study, three variants of the first group and three variants of the second, including the adaptive Wiener filter, four types of wavelet thresholding and the Bayesian method of Pizurica, were compared to Metz and Hanning filters and to Shine, a systematic noise elimination process. All these methods, except Shine, are parametric. For each of them, ranges of optimal values for the parameters were highlighted as a function of the aforementioned criteria. The intersection of ranges for the wavelet methods without thresholding was empty, and these methods were therefore not compared further quantitatively. The thresholding techniques and Shine gave the best results in resolution and contrast. The largest improvement in signal-to-noise ratio was obtained by the filters. Ideally, these filters should be accurately defined for each image, which is difficult in the clinical context; moreover, they generate oscillation artefacts. In addition, the wavelet techniques did not bring significant improvements and are rather slow. Therefore, Shine, which is fast and works automatically, appears to be an interesting alternative.
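The Haar-based group above works by thresholding Haar wavelet coefficients. A minimal one-level sketch in plain NumPy (the fixed threshold and single decomposition level are illustrative simplifications, not the article's exact variants):

```python
import numpy as np

def haar_soft_denoise(x, thresh):
    """One-level Haar wavelet soft-thresholding (sketch).

    `x` must have even length. Detail coefficients are soft-thresholded;
    approximation coefficients are kept unchanged.
    """
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    # inverse one-level Haar transform
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2.0)
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y
```

With `thresh=0` the transform is perfectly inverted; a large threshold collapses each sample pair to its mean.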

2.
Positron emission tomography (PET) images have been incorporated into the radiotherapy process as a powerful tool to assist in the contouring of lesions, leading to the emergence of a broad spectrum of automatic segmentation schemes for PET images (PET-AS). However, not all proposed PET-AS algorithms take into consideration the previous steps of image preparation. PET image noise has been shown to be one of the most relevant factors affecting segmentation tasks. This study demonstrates a nonlinear filtering method based on spatially adaptive wavelet shrinkage using three-dimensional context modelling that considers the correlation of each voxel with its neighbours. Using this noise reduction method, excellent edge conservation properties are obtained. To evaluate the influence of this filter on the segmentation schemes, it was compared with a set of Gaussian filters (the most conventional) and with two previously optimised edge-preserving filters. Five segmentation schemes were used (those most commonly implemented in commercial software): fixed thresholding, adaptive thresholding, watershed, adaptive region growing and affinity propagation clustering. Segmentation results were evaluated using the Dice similarity coefficient and classification error. A simple metric, based on the measurement of the average edge width, was also included to improve the characterisation of the filters in terms of induced blurring. The proposed noise reduction procedure improves the segmentation results across all tested settings and was shown to be more stable under low-contrast and high-noise conditions. Thus, the capacity of the segmentation method is reinforced by the denoising scheme used.

3.
王小兵  孙久运 《生物磁学》2011,(20):3954-3957
Objective: Medical images are contaminated by noise to varying degrees during acquisition, storage and transmission, which greatly limits their use in clinical diagnosis and treatment. To remove medical image noise effectively, a hybrid filtering algorithm is proposed. Methods: The algorithm first applies a morphological opening to the image containing Gaussian and salt-and-pepper noise, then performs a two-dimensional wavelet decomposition of the opened image to obtain high-frequency and low-frequency wavelet coefficients. The low-frequency coefficients are kept unchanged, the high-frequency coefficients are passed through a Wiener filter, and the wavelet coefficients are finally reconstructed. Results: The hybrid filtering algorithm, wavelet threshold denoising, median filtering and Wiener filtering were each applied to medical images containing mixed noise; the PSNR of images denoised by the hybrid algorithm was clearly higher than that of the other three methods. Conclusion: The hybrid filtering algorithm is an effective method for removing noise from medical images.
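The hybrid pipeline can be sketched as below. For brevity this illustration applies the Wiener filter to the whole opened image rather than only to the high-frequency wavelet subbands, so it is a simplified stand-in for the algorithm, not the paper's exact method:

```python
import numpy as np
from scipy.ndimage import grey_opening
from scipy.signal import wiener

def hybrid_denoise(img, open_size=3, wiener_size=5):
    """Sketch of the hybrid pipeline: a morphological opening removes
    bright salt-and-pepper impulses, then an adaptive Wiener filter
    suppresses the remaining Gaussian noise. (The paper Wiener-filters
    only the high-frequency wavelet subbands; this sketch filters the
    full opened image instead.)"""
    img = np.asarray(img, dtype=float)
    opened = grey_opening(img, size=(open_size, open_size))
    return wiener(opened, mysize=(wiener_size, wiener_size))
```

Opening (erosion then dilation) eliminates isolated bright spikes that a linear Wiener filter would only smear.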

4.
Denoising is a fundamental early stage in 2-DE image analysis, strongly influencing spot detection and pixel-based methods. A novel nonlinear adaptive spatial filter (median-modified Wiener filter, MMWF) is here compared with five well-established denoising techniques (median, Wiener, Gaussian and polynomial Savitzky-Golay filters; wavelet denoising) to suggest, by means of fuzzy-sets evaluation, the best denoising approach to use in practice. Although the median filter and wavelet denoising achieved the best performance in spike and Gaussian denoising respectively, they are unsuitable for the simultaneous removal of different types of noise, because their best setting is noise-dependent. Conversely, MMWF, which arrived second in each single denoising category, was evaluated as the best filter for global denoising, since its best setting is invariant to the type of noise. In addition, the median filter eroded the edges of isolated spots and filled the space between close-set spots, whereas MMWF, owing to a novel filter effect (the drop-off effect), does not suffer from the erosion problem, preserves the morphology of close-set spots, and avoids spot and spike fuzzification, an aberration encountered with the Wiener filter. In our tests, MMWF was assessed as the best choice when the goal is to minimize spot edge aberrations while removing spike and Gaussian noise.
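A sketch of the MMWF idea, assuming the filter is the classical adaptive Wiener update with the local arithmetic mean replaced by the local median (the published definition may differ in detail):

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def mmwf(img, size=3, noise=None):
    """Median-modified Wiener filter (sketch, assumed formulation):
    out = median + gain * (img - median), where gain is the usual
    adaptive Wiener gain (1 - noise/local_var), clipped to zero in
    flat regions."""
    img = np.asarray(img, dtype=float)
    med = median_filter(img, size=size)
    local_mean = uniform_filter(img, size=size)
    local_sqr = uniform_filter(img * img, size=size)
    local_var = np.maximum(local_sqr - local_mean ** 2, 0.0)
    if noise is None:
        noise = local_var.mean()  # noise power estimate, as in scipy's wiener
    gain = np.where(local_var > noise,
                    1.0 - noise / np.maximum(local_var, 1e-12), 0.0)
    return med + gain * (img - med)
```

Anchoring the update to the median rather than the mean is what gives the filter its resistance to spikes near spot edges.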

5.
In this paper, a new filtering method is presented to remove Rician noise from magnetic resonance images (MRI) acquired with a single-coil MRI acquisition system. The filter is based on a nonlocal neutrosophic set (NLNS) approach to Wiener filtering. A neutrosophic set (NS), a part of neutrosophy theory, studies the origin, nature and scope of neutralities, as well as their interactions with different ideational spectra. Here, the neutrosophic set is applied to the image domain, and some concepts and operators are defined for image denoising. First, the nonlocal mean is applied to the noisy MRI. The resultant image is transformed into the NS domain, described using three membership sets: true (T), indeterminacy (I) and false (F). The entropy of the neutrosophic set is defined and employed to measure the indeterminacy. The ω-Wiener filtering operation is used on T and F to decrease the set indeterminacy and to remove the noise. The experiments were conducted on simulated MR images from the BrainWeb database and on clinical MR images. The results show that the NLNS Wiener filter produces better denoising results in terms of qualitative and quantitative measures compared with other denoising methods, such as the classical Wiener filter, the anisotropic diffusion filter, total variation minimization and the nonlocal means filter. The visual and diagnostic quality of the denoised image are well preserved.

6.
The light-growth response of the Phycomyces sporangiophore is a transient change of elongation rate in response to changes in ambient blue-light intensity. The white-noise method of nonlinear system identification (Wiener-Lee-Schetzen theory) has been applied to this response, and the results have been interpreted by system analysis methods in the frequency domain. Experiments were performed on the Phycomyces tracking machine. Gaussian white-noise stimulus patterns were applied to the logarithm of the light intensity. The log-mean intensity of the broadband blue illumination was 0.1 W m-2 and the standard deviation of the Gaussian white-noise was 0.58 decades. The results, in the form of temporal functions called Wiener kernels, represent the input-output relation of the light-growth response system. The transfer function, which was obtained as the Fourier transform of the first-order kernel, was analyzed in the frequency domain in terms of a dynamic model that consisted of a first-order high-pass filter, two second-order low-pass filters, a delay element, and a gain factor. Parameters in the model (cutoff frequencies, damping coefficients, latency, and gain constant) were evaluated by nonlinear least-squares methods applied to the complex-valued transfer function. Analysis of the second-order kernel in the frequency domain suggests that the residual nonlinearity of the system lies close to the input.
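The first-order Wiener kernel mentioned above is classically estimated by the Lee-Schetzen cross-correlation formula, h1(tau) = E[y(t) x(t-tau)] / sigma^2; a minimal discrete-time sketch:

```python
import numpy as np

def first_order_kernel(x, y, memory):
    """Estimate the first-order Wiener kernel h1(tau) of a system from a
    Gaussian white-noise input x and the measured output y, using the
    Lee-Schetzen cross-correlation formula."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    y = y - y.mean()          # remove the zeroth-order (mean) term
    sigma2 = x.var()          # white-noise input power
    n = len(x)
    h1 = np.empty(memory)
    for tau in range(memory):
        h1[tau] = np.dot(y[tau:], x[:n - tau]) / ((n - tau) * sigma2)
    return h1
```

For a linear FIR system driven by white noise, this cross-correlation recovers the impulse response directly.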

7.
OBJECTIVE: To develop a semiautomated, quantitative technique for the assessment of vascular density in immunohistochemically stained tissue sections using diaminobenzidine tetrahydrochloride (DAB) and hematoxylin as chromogens. STUDY DESIGN: A semiautomated thresholding technique was developed to quantitate vascular density in tissue sections stained with anti-CD31 (primary antibody). The immunohistochemically stained specimens were digitally imaged using a 24-bit color camera. The blue component of the RGB image was segmented using a variable high-pass filter. After thresholding, the segmented (CD31-positive) areas were quantified and vascular density determined. The validity of the method was verified by calculating the precision of the technique using the coefficient of repeatability and by quantifying its agreement with manual analysis according to the Bland-Altman approach. RESULTS: Vascular endothelial cells were specifically selected using anti-CD31 as the primary antibody and the appropriate horseradish peroxidase-conjugated secondary antibody. Using the semiautomated thresholding technique, the separation of DAB-stained tissue from non-DAB-stained tissue was achieved. The method possesses a low coefficient of repeatability (0.49%), agrees well with manual assessment (mean difference = -0.29 +/- 0.92%), is highly automated and is user friendly. CONCLUSION: A novel semiautomated technique for the quantification of vascular density was developed. It provides a method for reproducible measurement of immunostaining procedures (immunohistochemistry, immunocytochemistry and in situ hybridization) that use immunoperoxidase techniques with DAB as a chromogen.
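The thresholding step can be illustrated as below. The threshold direction (DAB-positive pixels having low blue values, since DAB absorbs strongly) and the fixed cutoff are assumptions for illustration; the paper segments the blue component with a variable high-pass filter:

```python
import numpy as np

def stained_fraction(rgb, threshold):
    """Segment putatively DAB-positive pixels by thresholding the blue
    channel of an RGB image (sketch; direction and cutoff are assumed)
    and return the stained area fraction plus the binary mask."""
    blue = rgb[..., 2].astype(float)
    mask = blue < threshold          # stained pixels assumed to be low-blue
    return mask.mean(), mask         # fraction of field that is "vascular"
```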

8.
In this work, we compare the merits of three temporal data deconvolution methods for use in the filtered backprojection algorithm for photoacoustic tomography (PAT). We evaluate the standard Fourier division technique, the Wiener deconvolution filter, and a Tikhonov L-2 norm regularized matrix inversion method. Our experiments were carried out on subjects of various appearances, namely a pencil lead, two man-made phantoms, an in vivo subcutaneous mouse tumor model, and a perfused and excised mouse brain. All subjects were scanned using an imaging system with a rotatable hemispherical bowl, into which 128 ultrasound transducer elements were embedded in a spiral pattern. We characterized the frequency response of each deconvolution method, compared the final image quality achieved by each deconvolution technique, and evaluated each method’s robustness to noise. The frequency response was quantified by measuring the accuracy with which each filter recovered the ideal flat frequency spectrum of an experimentally measured impulse response. Image quality under the various scenarios was quantified by computing noise versus resolution curves for a point source phantom, as well as the full width at half maximum (FWHM) and contrast-to-noise ratio (CNR) of selected image features such as dots and linear structures in additional imaging subjects. It was found that the Tikhonov filter yielded the most accurate balance of lower and higher frequency content (as measured by comparing the spectra of deconvolved impulse response signals to the ideal flat frequency spectrum), achieved a competitive image resolution and contrast-to-noise ratio, and yielded the greatest robustness to noise. While the Wiener filter achieved a similar image resolution, it tended to underrepresent the lower frequency content of the deconvolved signals, and hence of the reconstructed images after backprojection. In addition, its robustness to noise was poorer than that of the Tikhonov filter. 
The performance of the Fourier filter was found to be the poorest of all three methods, based on the reconstructed images’ lowest resolution (blurriest appearance), generally lowest contrast-to-noise ratio, and lowest robustness to noise. Overall, the Tikhonov filter was deemed to produce the most desirable image reconstructions.
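The frequency-domain deconvolution filters compared above can be sketched as follows; the Tikhonov variant here uses a first-difference penalty as one common choice of regularization operator, which may differ from the paper's exact operator:

```python
import numpy as np

def wiener_deconv(y, h, k):
    """Fourier division stabilized the Wiener way:
    X = Y * conj(H) / (|H|^2 + k), with k acting as a
    constant noise-to-signal power ratio."""
    Y = np.fft.fft(y)
    H = np.fft.fft(h, n=len(y))
    return np.real(np.fft.ifft(Y * np.conj(H) / (np.abs(H) ** 2 + k)))

def tikhonov_deconv(y, h, lam):
    """Tikhonov L2-regularized deconvolution with a first-difference
    penalty: X = Y * conj(H) / (|H|^2 + lam * |D|^2), D = FFT of [1, -1].
    Penalizing high frequencies harder than a flat Wiener constant is
    what preserves the low-frequency content better."""
    n = len(y)
    Y = np.fft.fft(y)
    H = np.fft.fft(h, n=n)
    D = np.fft.fft([1.0, -1.0], n=n)
    return np.real(np.fft.ifft(Y * np.conj(H) /
                               (np.abs(H) ** 2 + lam * np.abs(D) ** 2)))
```

Plain Fourier division is the k = 0 (lam = 0) limit, which is why it is the least robust to noise.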

9.
Purpose: Positron emission tomography (PET) images tend to be significantly degraded by the partial volume effect (PVE) resulting from the limited spatial resolution of the reconstructed images. Our purpose is to propose a partial volume correction (PVC) method to tackle this issue. Methods: In the present work, we explore a voxel-based PVC method under the least squares (LS) framework employing anatomical non-local means (NLMA) regularization. The well-known non-local means (NLM) filter exploits the high degree of information redundancy that typically exists in images, and is typically used to reduce image noise directly by replacing each voxel intensity with a weighted average of its non-local neighbours. Here we explore NLM as a regularization term within an iterative-deconvolution model to perform PVC. Further, an anatomically guided version of NLM is proposed that incorporates MRI information into NLM to improve resolution and suppress image noise. The proposed approach makes subtle use of the accompanying MRI information to define a more appropriate search space within the prior model. To optimize the regularized LS objective function, we used the Gauss-Seidel (GS) algorithm with the one-step-late (OSL) technique. Results: After introducing NLMA, both the visual and the quantitative results improved. On visual inspection, NLMA reduces noise compared with other PVC methods. This is also confirmed by the bias-noise curves: NLMA gives a better bias-noise trade-off than the non-MRI-guided PVC framework and the other PVC methods. Conclusions: Our method was evaluated on amyloid brain PET imaging using the BrainWeb phantom and in vivo human data, and was compared with other PVC methods. Overall, we demonstrated the value of introducing subtle MRI guidance into the regularization process, with the proposed NLMA method yielding promising visual as well as quantitative performance improvements.

10.
The Wiener filter is a standard means of optimizing the signal in sums of aligned, noisy images obtained by electron cryo-microscopy (cryo-EM). However, estimation of the resolution-dependent (“spectral”) signal-to-noise ratio (SSNR) from the input data has remained problematic, and error reduction due to specific application of the SSNR term within a Wiener filter has not been reported. Here we describe an adjustment to the Wiener filter for optimal summation of images of isolated particles surrounded by large regions of featureless background, as is typically the case in single-particle cryo-EM applications. We show that the density within the particle area can be optimized, in the least-squares sense, by scaling the SSNR term found in the conventional Wiener filter by a factor that reflects the fraction of the image field occupied by the particle. We also give related expressions that allow the SSNR to be computed for application in this new filter, by incorporating a masking step into a Fourier Ring Correlation (FRC), a standard resolution measure. Furthermore, we show that this masked FRC estimation scheme substantially improves on the accuracy of conventional SSNR estimation methods. We demonstrate the validity of our new approach in numeric tests with simulated data corresponding to realistic cryo-EM imaging conditions. This variation of the Wiener filter and accompanying derivation should prove useful for a variety of single-particle cryo-EM applications, including 3D reconstruction.
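Two ingredients of the approach can be sketched: a particle-fraction-scaled Wiener weight (assuming, as the abstract suggests, that the scaled weight takes the form f·SSNR / (f·SSNR + 1); the paper's derivation should be consulted for the exact expression) and a basic, unmasked Fourier Ring Correlation:

```python
import numpy as np

def scaled_wiener_weights(ssnr, particle_fraction):
    """Per-shell Wiener weights with the SSNR scaled by the fraction f of
    the image field occupied by the particle (assumed form of the
    adjustment): w = f*SSNR / (f*SSNR + 1). f = 1 recovers the
    conventional Wiener weight."""
    s = particle_fraction * np.asarray(ssnr, dtype=float)
    return s / (s + 1.0)

def frc(img1, img2, nbins=8):
    """Basic Fourier Ring Correlation between two 2-D images
    (no masking step, unlike the paper's masked FRC)."""
    F1 = np.fft.fftshift(np.fft.fft2(img1))
    F2 = np.fft.fftshift(np.fft.fft2(img2))
    ny, nx = np.asarray(img1).shape
    yy, xx = np.indices((ny, nx))
    r = np.hypot(yy - ny // 2, xx - nx // 2)
    bins = np.minimum((r / r.max() * nbins).astype(int), nbins - 1)
    out = np.empty(nbins)
    for b in range(nbins):
        m = bins == b
        num = np.real(np.sum(F1[m] * np.conj(F2[m])))
        den = np.sqrt(np.sum(np.abs(F1[m]) ** 2) * np.sum(np.abs(F2[m]) ** 2))
        out[b] = num / den if den > 0 else 0.0
    return out
```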

11.
A hybrid filtering algorithm for removing noise from medical images
Objective: Medical images are contaminated by noise to varying degrees during acquisition, storage and transmission, which greatly limits their use in clinical diagnosis and treatment. To remove medical image noise effectively, a hybrid filtering algorithm is proposed. Methods: The algorithm first applies a morphological opening to the image containing Gaussian and salt-and-pepper noise, then performs a two-dimensional wavelet decomposition of the opened image to obtain high-frequency and low-frequency wavelet coefficients. The low-frequency coefficients are kept unchanged, the high-frequency coefficients are passed through a Wiener filter, and the wavelet coefficients are finally reconstructed. Results: The hybrid filtering algorithm, wavelet threshold denoising, median filtering and Wiener filtering were each applied to medical images containing mixed noise; the PSNR of images denoised by the hybrid algorithm was clearly higher than that of the other three methods. Conclusion: The hybrid filtering algorithm is an effective method for removing noise from medical images.

12.
In this paper, a novel wavelet-transform-based blood vessel distortion measure (WBVDM) is proposed to assess the image quality of blood vessels in processed retinal images. Wavelet analysis of retinal images shows that different wavelet subbands carry different information about the blood vessels. The WBVDM is defined as the sum of the wavelet-weighted roots of the normalized mean square errors of the subbands, expressed as a percentage. The proposed WBVDM is compared with other wavelet-based distortion measures such as the wavelet mean square error (WMSE), the relative WMSE (RelWMSE) and the root of the normalized WMSE (RNWMSE). The results show that WBVDM performs better in capturing blood vessel distortion. For distortion in clinically non-significant regions, the proposed WBVDM shows a low value of 1.1676, compared with a large mean square error value of 7.9909. The evaluation of correlation using the Pearson linear correlation coefficient (PLCC) and the Spearman rank-order correlation coefficient (SROCC) shows a higher correlation between WBVDM and the subjective score. The experimental observations show that WBVDM is able to capture distortion in blood vessels more effectively while responding weakly to the distortion inherent in other retinal features.

13.
This paper presents a new ECG denoising approach based on noise reduction algorithms in the empirical mode decomposition (EMD) and discrete wavelet transform (DWT) domains. Unlike conventional EMD-based ECG denoising approaches, which discard a number of initial intrinsic mode functions (IMFs) containing the QRS complex as well as noise, we propose to perform windowing in the EMD domain in order to reduce the noise in the initial IMFs instead of discarding them completely, thus preserving the QRS complex and yielding a relatively cleaner ECG signal. The signal thus obtained is transformed into the DWT domain, where an adaptive soft-thresholding-based noise reduction algorithm is employed, given the advantageous properties of the DWT over the EMD in preserving energy in the presence of noise and in reconstructing the original ECG signal with better time resolution. Extensive simulations are carried out using the MIT-BIH arrhythmia database, and the performance of the proposed method is evaluated in terms of several standard metrics. The simulation results show that the proposed method is able to reduce noise from noisy ECG signals more accurately and consistently than some of the state-of-the-art methods.
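The DWT-stage soft thresholding can be sketched with the Donoho universal threshold, one common adaptive rule (the paper's adaptive rule may differ):

```python
import numpy as np

def universal_soft_threshold(detail):
    """Adaptive soft thresholding of one detail subband (sketch).
    The noise level sigma is estimated from the median absolute value of
    the detail coefficients (MAD / 0.6745), and the universal threshold
    sigma * sqrt(2 * log(N)) is applied with the soft rule."""
    d = np.asarray(detail, dtype=float)
    sigma = np.median(np.abs(d)) / 0.6745       # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(d))) # universal threshold
    return np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)
```

Noise-dominated coefficients are driven to zero while large coefficients (e.g. those carrying QRS energy) survive, shrunk by the threshold.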

14.
A method for the preparation of solvent-free, phospholipid-impregnated filters is described. Polycarbonate filters of 13 mm diameter, 5 μm thickness and 0.1 μm pore size were employed, and 20 to 48 nmol phospholipid per filter was incorporated. Passive permeation of polar substances across the filter was determined by a flow-dialysis procedure. The presence of phospholipid led to a decrease in passive permeability by 96 – 99%. Due to their size and stability, and a partially preferential lipid orientation, these phospholipid-impregnated filters may serve as an alternative type of model membrane.

15.
De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the limitations of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of the series with a background energy distribution established from a Monte-Carlo test. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process is more easily operable with the former. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be chosen carefully when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the deterministic sub-signal of the series with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by either the proposed method or WTD; however, such a series would exhibit purely random rather than autocorrelated behaviour, so de-noising is no longer needed.
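The Monte-Carlo background-energy idea can be sketched at a single (finest) Haar scale; the actual method works across decomposition levels:

```python
import numpy as np

def haar_detail_energy(x):
    """Fraction of total energy in the finest-scale Haar detail
    coefficients of a series of even length."""
    x = np.asarray(x, dtype=float)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.sum(d ** 2) / np.sum(x ** 2)

def noise_energy_bound(n, trials=500, q=0.95, seed=None):
    """Monte-Carlo background: the q-quantile of the finest-detail energy
    fraction for standard white noise of length n. A series whose detail
    energy falls well below this bound at a given scale is dominated by
    deterministic structure there, not noise (minimal sketch of the idea)."""
    rng = np.random.default_rng(seed)
    energies = [haar_detail_energy(rng.standard_normal(n))
                for _ in range(trials)]
    return np.quantile(energies, q)
```

For white noise the finest scale carries about half the energy; a smooth deterministic series concentrates almost nothing there.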

16.
The Gaussian derivative model for spatial vision: I. Retinal mechanisms
R A Young 《Spatial Vision》1987,2(4):273-293
Physiological evidence is presented that visual receptive fields in the primate eye are shaped like the sum of a Gaussian function and its Laplacian. A new 'difference-of-offset-Gaussians' or DOOG neural mechanism was identified, which provided a plausible neural mechanism for generating such Gaussian derivative-like fields. The DOOG mechanism and the associated Gaussian derivative model provided a better approximation to the data than did the Gabor or other competing models. A model-free Wiener filter analysis provided independent confirmation of these results. A machine vision system was constructed to simulate human foveal retinal vision, based on Gaussian derivative filters. It provided edge and line enhancement (deblurring) and noise suppression, while retaining all the information in the original image.
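The receptive-field profiles discussed above can be sketched as 1-D kernels: the second derivative of a Gaussian, and a difference-of-offset-Gaussians (DOOG) approximation to it (the widths and offsets here are illustrative, not fitted values from the paper):

```python
import numpy as np

def gaussian_second_derivative_kernel(sigma, radius):
    """1-D second-derivative-of-Gaussian kernel, the centre-surround
    profile used by the Gaussian derivative model:
    g''(x) = (x^2/sigma^4 - 1/sigma^2) * exp(-x^2 / (2*sigma^2))."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k = (x ** 2 / sigma ** 4 - 1.0 / sigma ** 2) * g
    return k - k.mean()   # zero-DC: flat regions give zero response

def doog_kernel(sigma, offset, radius):
    """Difference-of-offset-Gaussians sketch: a central Gaussian minus the
    average of two laterally shifted copies, which approximates the
    (negated) second derivative profile."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = lambda c: np.exp(-(x - c) ** 2 / (2.0 * sigma ** 2))
    k = g(0.0) - 0.5 * (g(-offset) + g(offset))
    return k - k.mean()
```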

17.
A method to denoise single-molecule fluorescence resonance energy transfer (smFRET) trajectories using wavelet detail thresholding and Bayesian inference is presented. Bayesian methods are developed to identify fluorophore photoblinks in the time trajectories. Simulated data are used to quantify the improvement in static and dynamic data analysis. Application of the method to experimental smFRET data shows that it distinguishes photoblinks from large shifts in smFRET efficiency while maintaining the important advantage of an unbiased approach. Known sources of experimental noise are examined and quantified as a means to remove their contributions via soft thresholding of wavelet coefficients. A wavelet decomposition algorithm is described, and thresholds are produced through knowledge of the noise parameters in the discrete-time photon signals. Reconstruction of the signals from the thresholded coefficients produces signals that contain noise arising only from unquantifiable parameters. The method is applied to simulated and observed smFRET data, and it is found that the denoised data retain their underlying dynamic properties, but with increased resolution.

18.
This paper, for the first time, designs a wavelet time-frequency filter based on the correlation between the wavelet transform coefficients of the averaged ERP and those of single-trial ERPs; it can extract the P3 wave of single-trial event-related potentials from interference such as eye movements and spontaneous EEG.

19.
The electrocardiogram (ECG) is a vital-sign monitoring measurement of cardiac activity. One of the main problems with biomedical signals such as the electrocardiogram is the separation of the desired signal from noise caused by power line interference, muscle artifacts, baseline wander and electrode artifacts. Different types of digital filters are used to separate signal components from unwanted frequency ranges. The adaptive filter is one of the primary filtering methods, because it does not need the statistical characteristics of the signal. In contrast with Fourier analysis and wavelet methods, a fully data-driven technique called empirical mode decomposition (EMD) is used; it is an adaptive method well suited to analysing biomedical signals. This paper presents an EMD-based two-weight adaptive filter structure to eliminate power line interference in ECG signals. Four possible methods are proposed, each with lower computational complexity than other methods. These filtering methods are fully signal-dependent and adaptive in nature, and hence best suited for denoising applications. Compared with the other proposed methods, the EMD-based direct subtraction method gives a better SNR irrespective of the noise level.
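A two-weight adaptive filter for power-line interference can be sketched in the classic Widrow noise-canceller form, with sine and cosine references at the mains frequency (shown without the EMD stage the paper adds):

```python
import numpy as np

def remove_powerline(signal, fs, f0=50.0, mu=0.01):
    """Two-weight LMS adaptive noise canceller for power-line interference
    (sketch). The reference inputs are a sine and a cosine at f0; the two
    weights adapt so that their combination tracks the interference's
    amplitude and phase, and the error signal is the cleaned ECG."""
    signal = np.asarray(signal, dtype=float)
    n = np.arange(len(signal))
    ref = np.stack([np.sin(2 * np.pi * f0 * n / fs),
                    np.cos(2 * np.pi * f0 * n / fs)])
    w = np.zeros(2)
    out = np.empty(len(signal))
    for i in range(len(signal)):
        est = w @ ref[:, i]            # current interference estimate
        e = signal[i] - est            # error = cleaned sample
        w += 2.0 * mu * e * ref[:, i]  # LMS weight update
        out[i] = e
    return out
```

Because only the f0 component correlates with the references, the canceller behaves as an adaptive notch and leaves the rest of the ECG spectrum untouched.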

20.
Advances in digital technologies have allowed us to generate more images than ever. Images of scanned documents are examples of such images and form a vital part of digital libraries and archives. Scanned degraded documents contain background noise and varying contrast and illumination; therefore, document image binarisation must be performed in order to separate the foreground from the background layers. Image binarisation is performed using either local adaptive thresholding or global thresholding, with local thresholding generally considered more successful. This paper presents a novel approach to global thresholding, in which a neural network is trained on local threshold values of an image in order to determine an optimum global threshold, which is then used to binarise the whole image. The proposed method is compared with five local thresholding methods, and the experimental results indicate that our method is computationally cost-effective and capable of binarising scanned degraded documents with superior results.
