Similar Articles (20 results)
1.
This paper presents a total variation (TV) regularized reconstruction algorithm for 3D positron emission tomography (PET). The proposed method first employs the Fourier rebinning algorithm (FORE) to rebin the 3D data into a stack of ordinary 2D sinogram data sets. The resulting 2D sinograms can then be reconstructed by conventional 2D reconstruction algorithms. Given the locally piecewise-constant nature of PET images, we introduce a total variation (TV) based reconstruction scheme. More specifically, we formulate the 2D PET reconstruction problem as an optimization problem whose objective function consists of the TV norm of the reconstructed image and a data fidelity term measuring the consistency between the reconstructed image and the sinogram. To solve the resulting minimization problem, we apply an efficient method called the Bregman operator splitting algorithm with variable step size (BOSVS). Experiments based on Monte Carlo simulated data and real data are conducted as validation. The results show that the proposed method is more accurate than conventional direct Fourier (DF) reconstruction (the bias of BOSVS is 70% of that of DF, and its variance is 80% of that of DF).
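The optimization problem described above (TV norm plus a data fidelity term) can be sketched as follows. This is only the objective, not the BOSVS solver itself; the system matrix `A` and sinogram `b` are placeholders:

```python
import numpy as np

def tv_norm(img):
    """Anisotropic total variation: sum of absolute finite differences."""
    dx = np.abs(np.diff(img, axis=0)).sum()
    dy = np.abs(np.diff(img, axis=1)).sum()
    return dx + dy

def objective(A, x, b, lam):
    """Data fidelity term plus TV regularization, as in the paper's
    formulation: 0.5*||A x - b||^2 + lam * TV(x)."""
    return 0.5 * np.linalg.norm(A @ x.ravel() - b) ** 2 + lam * tv_norm(x)
```

BOSVS would minimize this with operator splitting and a variable step size; the objective itself is all that is fixed by the abstract.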

2.
Although iterative reconstruction is widely applied in SPECT/PET, its introduction into clinical CT is quite recent: in the past, the demand for extensive computing power and long image reconstruction times limited the diffusion of this technique. Recently, Iterative Reconstruction in Image Space (IRIS) has been introduced on Siemens top-end CT scanners. This reconstruction method works on image data, avoiding the time-consuming loops over raw data; noise removal is obtained in subsequent iterative steps through a smoothing process. We evaluated image noise, low-contrast resolution, CT number linearity and accuracy, and transverse and z-axis spatial resolution using dedicated phantoms in single-source, dual-source and cardiac mode. We reconstructed images with a traditional filtered back-projection algorithm and with IRIS. The iterative procedure preserves spatial resolution and CT number accuracy and linearity while decreasing image noise. These preliminary results support the idea that dose reduction with preserved image quality is possible with IRIS, even though studies on patients are necessary to confirm these data.

3.
After a brief review of the history of time-of-flight (TOF) positron emission tomography (PET) instrumentation from the 1980s to the present, the principles of TOF PET are introduced, and the concept of time resolution and its effect on the TOF gain in signal-to-noise ratio (SNR) is discussed. The factors influencing the time resolution of a TOF PET scanner are presented, with a focus on the intrinsic properties of scintillators of particular interest for TOF PET. Finally, some open issues, challenges and achievements of today's TOF PET reconstruction are reviewed: the structure of the data organization, the choice between analytical and iterative methods, the recent experimental assessment of TOF image quality, and the most promising applications of TOF PET.
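The often-quoted rule of thumb for the TOF gain in SNR is roughly sqrt(2D / (c·Δt)) for an object of diameter D and coincidence timing resolution Δt, since c·Δt/2 is the localization uncertainty along the line of response. Exact gain expressions vary across the literature, so treat this as an approximation:

```python
import math

def tof_snr_gain(object_diameter_m, time_resolution_s, c=2.998e8):
    """Approximate TOF SNR gain: sqrt(2D / (c * dt)).
    c * dt / 2 is the TOF localization uncertainty along the LOR."""
    return math.sqrt(2.0 * object_diameter_m / (c * time_resolution_s))

# A 40 cm object imaged with 500 ps timing resolution:
print(round(tof_snr_gain(0.40, 500e-12), 2))  # -> 2.31
```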

4.
Purpose: To assess the influence of reconstruction algorithms and parameters on the PET image quality of brain phantoms, in order to optimize reconstruction for clinical PET brain studies on a new-generation PET/CT. Methods: The 3D Hoffman phantom, which simulates an 18F-fluorodeoxyglucose (FDG) distribution, was imaged on a Siemens Biograph mCT TrueV PET/CT with time-of-flight (TOF) and point spread function (PSF) modelling. Contrast-to-noise ratio (CNR), contrast and noise were studied for different reconstruction models: OSEM, OSEM + TOF, OSEM + PSF and OSEM + PSF + TOF. The 2D multi-compartment Hoffman phantom was filled to simulate the spatial distribution of four different tracers: FDG, 11C-flumazenil (FMZ), 11C-methionine (MET) and 6-18F-fluoro-L-dopa (FDOPA). The best algorithm for each tracer was selected by visual inspection, and maximization of the CNR determined the optimal parameters for each reconstruction. Results: In the 3D Hoffman phantom, both noise and contrast increased with an increasing number of iterations and decreased with increasing filter FWHM. OSEM + PSF + TOF reconstruction was generally superior to the other reconstruction models. Visual analysis of the 2D Hoffman brain phantom suggested that OSEM + PSF + TOF is the optimal algorithm for tracers with focal uptake, such as MET or FDOPA, and OSEM + TOF for tracers with diffuse cortical uptake (i.e. FDG and FMZ). Optimization of the CNR showed that OSEM + TOF reconstruction should be performed with 2 iterations and a filter FWHM of 3 mm, and OSEM + PSF + TOF reconstruction with 4 iterations and a 1 mm FWHM filter. Conclusions: The reconstruction algorithm and parameters have been optimized to take full advantage of a latest-generation PET scanner, with specific settings recommended for different brain PET radiotracers.
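The contrast-to-noise ratio used above to select optimal parameters is conventionally the difference of ROI and background means normalized by the background noise; how the ROI and background voxels are delineated is up to the analyst and not specified by the abstract:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-Noise Ratio: contrast between ROI and background
    means, normalized by the background standard deviation (noise)."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return abs(roi.mean() - background.mean()) / background.std()
```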

5.
Liu H  Wang S  Gao F  Tian Y  Chen W  Hu Z  Shi P 《PloS one》2012,7(3):e32224
In positron emission tomography (PET), an optimal estimate of the radioactivity concentration is obtained from the measured emission data under certain criteria. So far, all well-known statistical reconstruction algorithms require an exactly known system probability matrix a priori, and the quality of this system model largely determines the quality of the reconstructed images. In this paper, we propose an algorithm for PET image reconstruction for the real-world case where the PET system model is subject to uncertainties. The method casts PET reconstruction as a regularization problem, and the image estimate is obtained within an uncertainty-weighted least squares framework. The performance of our method is evaluated on Shepp-Logan simulated data and real phantom data, demonstrating significant improvements in image quality over ordinary least squares reconstruction.
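A minimal sketch of a weighted least squares estimate, where each row weight would encode confidence in the corresponding system-model element (the paper's actual uncertainty weighting scheme is not reproduced here):

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve min_x  sum_i w_i * ((A x - b)_i)^2 via the normal equations.
    Rows with larger model uncertainty get smaller weight w_i."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```

With all weights equal this reduces to the ordinary least squares solution the paper compares against.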

6.
X-ray phase-contrast imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues in comparison with conventional imaging methods. Nevertheless, the delivered dose reported in the literature on biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies, including the development and implementation of advanced image reconstruction procedures, is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. In addition to the usual sparsity-inducing and fidelity terms, this functional contains a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing norm of the patch basis-function coefficients, and a coefficient multiplying the norm of the differences between patches in the overlapping regions. The solution is found by applying iterative proximal gradient descent with FISTA acceleration. The gradient is computed by calculating the projection of the solution and its error back-projection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and noise, on synthetic data for which the solution is known a priori. We then apply the method to experimental data in the case of differential phase tomography. For this case we use an original approach consisting of vectorial patches, each patch having two components, one per gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well adapted to strongly reducing the required dose and the number of projections in medical tomography.
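The iterative proximal gradient descent with FISTA acceleration mentioned above can be sketched for the simplest case of a plain ℓ1 penalty; the dictionary-learning and patch-overlap terms of the actual functional are omitted:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (the sparsity-inducing term)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=100):
    """Proximal gradient descent with FISTA acceleration for
    min_x 0.5*||A x - b||^2 + lam*||x||_1  (a simplified stand-in for
    the paper's dictionary-learning functional)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # projection + error back-projection
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x
```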

7.
The large amount of image data necessary for high-resolution 3D reconstruction of macromolecular assemblies leads to significant increases in computational time. One of the most time-consuming operations is 3D density map reconstruction, and software optimization can greatly reduce the time required for any given structural study. The majority of algorithms proposed for improving the computational effectiveness of 3D reconstruction are based on a ray-by-ray projection of each image into the reconstructed volume. In this paper, we propose a novel fast implementation of the filtered back-projection algorithm based on a voxel-by-voxel principle. Our implementation has been exhaustively tested using both model and real data. We compared 3D reconstructions obtained by the new approach with results obtained by the filtered back-projection algorithm and the Fourier-Bessel algorithm commonly used for reconstructing icosahedral viruses. These computational experiments demonstrate the robustness, reliability, and efficiency of the approach.
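The voxel-by-voxel principle can be illustrated with a toy 2-D (unfiltered) back-projection: each voxel gathers its interpolated detector sample per view, instead of each ray spraying values into the volume. The parallel-beam detector geometry here is a simplifying assumption, and `np.interp` clamps coordinates outside the detector to the edge samples:

```python
import numpy as np

def voxel_backproject(sinogram, thetas):
    """Voxel-driven (unfiltered) back-projection, parallel-beam geometry.
    sinogram: (n_views, n_det); returns an (n_det, n_det) image."""
    n_views, n_det = sinogram.shape
    n = n_det
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n] - c            # voxel-centered coordinates
    det = np.arange(n_det) - (n_det - 1) / 2.0  # detector sample positions
    vol = np.zeros((n, n))
    for proj, th in zip(sinogram, thetas):
        t = xs * np.cos(th) + ys * np.sin(th)   # detector coordinate of voxel
        vol += np.interp(t.ravel(), det, proj).reshape(n, n)
    return vol / n_views
```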

8.
Introduction: The aim of this study was to determine the optimal image matrix and Gaussian filter FWHM for iterative reconstruction of PET images with point spread function (PSF) and time-of-flight (TOF) correction, based on measured recovery coefficient (RC) curves. The measured RC curves were compared with those from an older system which does not use PSF and TOF corrections. Materials and methods: The measurements were carried out on a NEMA IEC body phantom. We measured RC curves based on SUVmax and SUVA50 in source spheres of different diameters. The change in noise level for different reconstruction parameter settings and the relation between the RC curves and the administered activity were also evaluated. Results: With an increasing image matrix size and a decreasing FWHM of the post-reconstruction Gaussian filter, there was a significant increase in image noise and an overestimation of the SUV. A local increase in SUV, observed for certain filtrations and objects with a diameter below 13 mm, was caused by the PSF correction. Decreasing the administered activity, while maintaining the same acquisition and reconstruction conditions, also led to overestimated SUV readings and, additionally, to deteriorated reproducibility. Conclusion: This study proposes a suitable image matrix size and filtering for displaying PET images and measuring SUV. The benefits were demonstrated as improved image parameters for the newer instrument, even when relatively strong filtration of the reconstructed images was used.
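A recovery coefficient in the SUVmax sense is simply the hottest measured voxel in the sphere divided by the true activity concentration, so an RC curve is this ratio plotted against sphere diameter; a sketch:

```python
import numpy as np

def recovery_coefficient(measured_sphere, true_activity):
    """RC based on the hottest voxel (SUVmax analogue): measured maximum
    divided by the true activity concentration; RC -> 1 for large spheres,
    RC < 1 for small spheres due to partial volume effects."""
    return np.max(measured_sphere) / true_activity
```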

9.
Purpose: The Bayesian penalized-likelihood reconstruction algorithm (BPL), Q.Clear, uses a relative difference penalty as the regularization function to control image noise and the degree of edge preservation in PET images. The present study aimed to determine the effects of suppressing edge artifacts due to point spread function (PSF) correction using Q.Clear. Methods: The spheres of a cylindrical phantom contained 5.3 kBq/mL of [18F]FDG in a warm background, at sphere-to-background ratios (SBR) of 16, 8, 4 and 2; spheres containing 21.2 kBq/mL of [18F]FDG in a water (no-activity) background were also acquired. All data were acquired using a Discovery PET/CT 710 and were reconstructed using three-dimensional ordered-subset expectation maximization with time-of-flight (TOF) and PSF correction (3D-OSEM), and Q.Clear with TOF (BPL). We investigated β-values of 200-800 for BPL. The PET images were analyzed by visual assessment, and profile curves, edge variability and contrast recovery coefficients were measured. Results: The 38- and 27-mm spheres were surrounded by a higher radioactivity concentration when reconstructed with 3D-OSEM as opposed to BPL, which suppressed the edge artifacts. Images of 10-mm spheres had sharper overshoot at high SBR and in the no-activity background when reconstructed with BPL. Although the contrast recovery coefficients of 10-mm spheres in BPL decreased as a function of increasing β, a higher penalty parameter decreased the overshoot. Conclusions: BPL is a feasible method for suppressing the edge artifacts of PSF correction, although this depends on SBR and sphere size. Overshoot associated with BPL caused overestimation in small spheres at high SBR. A higher penalty parameter in BPL can suppress overshoot more effectively.
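The relative difference penalty underlying BPL has the well-known form (xj − xk)² / (xj + xk + γ|xj − xk|) for neighboring voxel values xj, xk, where γ plays the edge-preservation role described above; a direct transcription:

```python
def relative_difference_penalty(xj, xk, gamma):
    """Relative difference penalty between two neighboring voxel values,
    as used by BPL (Q.Clear). Larger gamma penalizes large differences
    less strongly, preserving edges."""
    return (xj - xk) ** 2 / (xj + xk + gamma * abs(xj - xk))
```

Note the penalty is relative: the same absolute difference is penalized less in high-activity regions, which is what distinguishes it from a quadratic prior.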

10.
This paper presents a computationally efficient algorithm for compressive sensing image reconstruction using second-degree total variation (HDTV2) regularization. First, an equivalent formulation of the HDTV2 functional is derived: under a spectral decomposition framework, it can be written as a weighted L1-L2 mixed norm of the second-degree image derivatives. Second, using this equivalent formulation, we introduce an efficient forward-backward splitting (FBS) scheme to solve the HDTV2-based image reconstruction model. Furthermore, from the viewpoint of averaged non-expansive operators, we give a detailed analysis of the convergence of the proposed FBS algorithm. Experiments on medical images demonstrate that the proposed method outperforms several fast algorithms for the TV and HDTV2 reconstruction models in terms of peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and convergence speed.

11.
A new algorithm for computing electron microscopy tomograms, combining iterative methods with dual-axis geometry, is presented. Initial modelling using test data shows several improvements over both the weighted back-projection and the simultaneous iterative reconstruction technique (SIRT), with increased stability and tomogram fidelity under high-noise conditions. Preliminary experimental dual-axis reconstructions confirm the viability of the new algorithm.

12.
Projection and back-projection are the most computationally intensive parts of computed tomography (CT) reconstruction and are essential targets for accelerating CT reconstruction algorithms. Compared to back-projection, parallelization efficiency in projection is severely limited by race conditions and a lack of thread synchronization. In this paper, a Fixed Sampling Number Projection (FSNP) strategy is proposed to ensure operation synchronization in ray-driven projection on the Graphics Processing Unit (GPU). Texture fetching is also utilized to further accelerate the interpolations in both projection and back-projection. We validate the performance of the FSNP approach using both simulated and real cone-beam CT data. Experimental results show that the proposed FSNP method, together with texture fetching, is 10-16 times faster than the conventional approach based on global memory, and thus leads to a more efficient iterative CT reconstruction algorithm.

13.
Purpose: To study the feasibility of using an iterative reconstruction algorithm to improve previously reconstructed CT images that are judged non-diagnostic on clinical review. A novel, rapidly converging iterative algorithm (RSEMD) has been developed that reduces noise compared with the standard filtered back-projection (FBP) algorithm. Materials and methods: The RSEMD method was tested on in-silico, Catphan®500, and anthropomorphic 4D XCAT phantoms. The method was applied to noisy CT images previously reconstructed with FBP to determine the improvements in SNR and CNR. To test the potential improvement in clinically relevant CT images, 4D XCAT phantom images were used to simulate a small, low-contrast lesion placed in the liver. Results: In all of the phantom studies the images proved to have higher resolution and lower noise compared with images reconstructed by conventional FBP. In general, the SNR and CNR values reached a plateau at around 20 iterations, with an improvement factor of about 1.5 in noisy CT images. Improvements in lesion conspicuity after the application of RSEMD have also been demonstrated. The results obtained with the RSEMD method are in agreement with other iterative algorithms employed either in image space or in hybrid reconstruction algorithms. Conclusions: In this proof-of-concept work, a rapidly converging iterative deconvolution algorithm with a novel resolution-subsets-based approach that operates on DICOM CT images has been demonstrated. The RSEMD method can be applied to sub-optimal routine-dose clinical CT images to improve image quality to potentially diagnostically acceptable levels.
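The RSEMD algorithm itself is not specified in this abstract. As a hedged stand-in, a plain Richardson-Lucy/EM deconvolution in image space (1-D, circular convolution, with a symmetric PSF so the adjoint equals the forward blur) illustrates the family of iterative image-space deconvolution methods it belongs to:

```python
import numpy as np

def em_deconvolve(blurred, psf_full, n_iter=50):
    """Richardson-Lucy / EM deconvolution, 1-D circular convolution.
    psf_full: full-length, circularly symmetric, normalized PSF
    (psf_full[1] == psf_full[-1], sum == 1), so adjoint == forward blur.
    NOT the RSEMD algorithm -- just the same algorithmic family."""
    H = np.fft.fft(psf_full)

    def blur(x):
        return np.real(np.fft.ifft(np.fft.fft(x) * H))

    estimate = np.full_like(blurred, max(blurred.mean(), 1e-12))
    for _ in range(n_iter):
        # multiplicative EM update: estimate *= adjoint(data / blur(estimate))
        estimate = estimate * blur(blurred / np.maximum(blur(estimate), 1e-12))
    return estimate
```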

14.
Due to its potential for compact imaging systems with magnified spatial resolution and contrast, cone-beam x-ray differential phase-contrast computed tomography (DPC-CT) has attracted significant interest. The currently proposed FDK reconstruction algorithm with the Hilbert imaginary filter induces severe cone-beam artifacts when the cone-beam angle becomes large. In this paper, we propose an algebraic iterative reconstruction (AIR) method for cone-beam DPC-CT and report experimental results. This approach treats the reconstruction process as the optimization of a discrete representation of the object function to satisfy a system of equations describing the cone-beam DPC-CT imaging modality. Unlike conventional iterative algorithms for absorption-based CT, it applies a derivative operation to the forward projections of the intermediate reconstructed image, to account for the differential nature of the DPC projections. The method is based on the algebraic reconstruction technique, reconstructs the image ray by ray, and is expected to provide better derivative estimates across iterations. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a mini-focus x-ray tube source. It is shown that the proposed method can reduce cone-beam artifacts and performs better than FDK at large cone-beam angles. This algorithm is of interest for future cone-beam DPC-CT applications.

15.
Purpose: Simulating low-dose computed tomography (CT) facilitates in-silico studies of the dose required for a diagnostic task. Conventionally, low-dose CT images are created by adding noise to the projection data. In practice, however, the raw data are often simply not available. This paper presents a new method for simulating patient-specific, low-dose CT images without the original projection data. Methods: The low-dose CT simulation comprised the following steps: (1) computation of a virtual sinogram from a high-dose CT image through a Radon transform; (2) simulation of a reduced-dose sinogram with the appropriate amount of noise; (3) subtraction of the high-dose virtual sinogram from the reduced-dose sinogram; (4) reconstruction of a noise volume via filtered back-projection; (5) addition of the noise image to the original high-dose image. The required scanner-specific parameters, such as the apodization window, bowtie filter, X-ray tube output parameter (reflecting the photon flux) and detector read-out noise, were retrieved from calibration images of a water cylinder. The low-dose simulation method was evaluated by comparing the noise characteristics of simulated images with experimentally acquired data. Results: The models used to recover the scanner-specific parameters fitted the calibration data accurately, and the parameter values were comparable to values reported in the literature. Finally, the simulated low-dose images accurately reproduced the noise characteristics of experimentally acquired low-dose volumes. Conclusion: The developed method faithfully simulates low-dose CT imaging for a specific scanner and filtered back-projection reconstruction, with the scanner-specific parameters estimated from calibration data.
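Step (2), injecting Poisson noise consistent with a reduced tube output, can be sketched as follows. The bowtie filter, apodization window and detector read-out noise of the full method are omitted, and `I0_high` (the assumed incident photon count per detector element) is a placeholder for the calibrated tube output parameter:

```python
import numpy as np

def simulate_reduced_dose_sinogram(sinogram, I0_high, dose_fraction, seed=None):
    """Simulate a reduced-dose sinogram from a (virtual) noiseless one.
    sinogram values are line integrals of attenuation; the transmitted
    photon counts follow Beer-Lambert + Poisson statistics."""
    rng = np.random.default_rng(seed)
    I0_low = I0_high * dose_fraction
    counts = rng.poisson(I0_low * np.exp(-sinogram))  # transmitted photons
    counts = np.maximum(counts, 1)                    # avoid log(0)
    return -np.log(counts / I0_low)                   # noisy line integrals
```

Subtracting the noiseless virtual sinogram from this output would then isolate the noise sinogram used in steps (3)-(5).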

16.
An approach to magnetic resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis, where the contribution of each image block to the transform depends in a nonlinear fashion on its distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed artifact removal step with gradient updates enforcing consistency with the acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering, and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared error (RMSE) reveals improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to these methods. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction, and outperform methods based on standard compressed sensing and ℓ1-regularized parallel imaging.

17.
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of preserving edge information and suppressing unfavorable staircase effects. However, conventional TGV regularization employs an ℓ1-based form, which is not the most direct way to maximize the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems, decoupled by variable splitting, admit explicit solutions via the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The accuracy and efficiency on simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
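A generalized p-shrinkage mapping (in the sense popularized by Chartrand) reduces to the ordinary soft-threshold operator at p = 1; whether this is the exact mapping used in the paper is an assumption:

```python
import numpy as np

def p_shrink(x, lam, p):
    """Generalized p-shrinkage: sign(x) * max(|x| - lam^(2-p) * |x|^(p-1), 0).
    p = 1 recovers ordinary soft thresholding; p < 1 shrinks large
    magnitudes less, promoting stronger sparsity."""
    x = np.asarray(x, dtype=float)
    mag = np.abs(x)
    with np.errstate(divide='ignore'):       # |x|^(p-1) at x = 0, p < 1
        shrunk = np.maximum(mag - lam ** (2 - p) * mag ** (p - 1), 0.0)
    return np.sign(x) * shrunk
```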

18.
Purpose: Positron emission tomography (PET) images tend to be significantly degraded by the partial volume effect (PVE) resulting from the limited spatial resolution of the reconstructed images. Our purpose is to propose a partial volume correction (PVC) method to tackle this issue. Methods: In the present work, we explore a voxel-based PVC method under the least squares (LS) framework employing anatomical non-local means (NLMA) regularization. The well-known non-local means (NLM) filter exploits the high degree of information redundancy that typically exists in images, and is commonly used to reduce image noise directly by replacing each voxel intensity with a weighted average of its non-local neighbors. Here we explore NLM as a regularization term within an iterative-deconvolution model to perform PVC. Further, an anatomically guided version of NLM is proposed that incorporates MRI information into NLM to improve resolution and suppress image noise. The proposed approach makes subtle use of the accompanying MRI information to define a more appropriate search space within the prior model. To optimize the regularized LS objective function, we used the Gauss-Seidel (GS) algorithm with the one-step-late (OSL) technique. Results: With NLMA, both the visual and quantitative results improved. On visual inspection, NLMA reduces noise compared with the other PVC methods; this is also confirmed by the bias-noise curves, where NLMA gives a better bias-noise trade-off than PVC frameworks without MRI guidance. Conclusions: Our method was evaluated on amyloid brain PET imaging using the BrainWeb phantom and in-vivo human data, and compared with other PVC methods. Overall, we demonstrated the value of introducing subtle MRI guidance into the regularization process, with the proposed NLMA method yielding promising visual as well as quantitative performance improvements.
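A minimal 1-D non-local means filter shows the weighted-average-of-similar-patches idea that the NLM/NLMA regularizer builds on; patch size, search window and the smoothing parameter `h` here are arbitrary illustrative choices:

```python
import numpy as np

def nlm_1d(signal, patch=1, search=5, h=0.1):
    """Minimal 1-D non-local means: each sample is replaced by a weighted
    average of nearby samples, weighted by the similarity of the patches
    surrounding them (not only by spatial proximity)."""
    n = len(signal)
    pad = np.pad(signal, patch, mode='edge')
    patches = np.array([pad[i:i + 2 * patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - search), min(n, i + search + 1)
        d2 = ((patches[lo:hi] - patches[i]) ** 2).mean(axis=1)
        w = np.exp(-d2 / h ** 2)               # similar patches -> large weight
        out[i] = (w * signal[lo:hi]).sum() / w.sum()
    return out
```

In NLMA, the patch-similarity weights would additionally be informed by the co-registered MRI rather than the noisy PET alone.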

20.
Purpose: Limited-angle CT imaging is an effective technique for reducing radiation. However, existing image reconstruction methods can effectively reduce streak artifacts but fail to suppress the artifacts around edges caused by incomplete projection data. A modified NLM (mNLM)-based reconstruction method is therefore proposed. Methods: Since the artifacts around edges are mainly local, it is possible to restore the true pixel values in artifact regions using pixels located in artifact-free regions. In each iteration, mNLM is performed on the image reconstructed by ART, followed by a positivity constraint. To address the problem that ART-mNLM may introduce undesirable information into the image, ART-TV is then used in the subsequent iterative process after ART-mNLM has run for a number of iterations. The proposed algorithm is named ART-mNLM/TV. Results: Simulation experiments were performed to validate the feasibility of the algorithm. When the scanning range is [0, 150°], our algorithm outperforms ART-NLM and ART-TV with more than 40% and 29% improvement in SNR, respectively, and more than 58% and 49% reduction in MAE. Reconstructions from real projection data also demonstrate the effectiveness of the presented algorithm. Conclusion: This paper uses mNLM, which benefits from the redundancy of information across the whole image, to recover the true values of pixels in artifact regions from artifact-free regions, so that artifacts around edges can be mitigated effectively. Experiments show that the proposed ART-mNLM/TV achieves better performance than traditional methods.
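The ART step with positivity constraint used inside ART-mNLM/TV can be sketched as classic Kaczmarz sweeps over the system rows; the mNLM and TV steps that follow each ART pass in the actual algorithm are not included here:

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=1.0, nonneg=True):
    """Algebraic reconstruction technique: Kaczmarz row-by-row projections,
    followed by the positivity constraint after each full sweep."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for ai, bi in zip(A, b):
            x += relax * (bi - ai @ x) / (ai @ ai) * ai  # project onto row
        if nonneg:
            np.maximum(x, 0.0, out=x)                    # positivity constraint
    return x
```

In ART-mNLM/TV, an mNLM (later TV) regularization step would be applied to `x` between sweeps.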

