Similar Documents (20 results)
1.

Background

Image registration produces a complete scene by aligning all of the acquired image sequences. A registration algorithm must tolerate as much intensity and geometric variation among images as possible. However, views captured from a real scene usually contain unexpected distortions, which generally arise from the optical characteristics of the image sensors or from the specific scenes and objects themselves.

Methods and Findings

This study proposes an analytic registration algorithm that accounts for deformation in scenic image applications. After important features are extracted with a wavelet-based edge correlation method, an analytic registration approach achieves deformable and accurate matching of the point sets. Finally, the registration accuracy is refined to subpixel precision with a feature-based Levenberg-Marquardt (FLM) method, which converges markedly faster than most other methods because of its feature-based formulation.

Conclusions

We validate the performance of the proposed method on synthetic and real image sequences acquired with a hand-held digital still camera (DSC), and compare it with an optical flow-based motion technique in terms of the squared sum of intensity differences (SSD) and the correlation coefficient (CC). The results indicate that the proposed method achieves satisfactory registration accuracy and quality for DSC images.
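A minimal sketch (not the authors' FLM implementation) of the general idea behind feature-based Levenberg-Marquardt refinement: given matched feature points, a parametric transform (here a 2D affine model, an assumption for brevity) is refined by nonlinear least squares over the point residuals rather than over pixel intensities, which keeps the problem small and convergence fast. SciPy's `least_squares` with `method="lm"` is used as the solver.

```python
# Sketch: Levenberg-Marquardt refinement of a 2D affine transform from matched points.
import numpy as np
from scipy.optimize import least_squares

def residuals(params, src, dst):
    # params = [a11, a12, a21, a22, tx, ty] of a 2D affine transform
    A = params[:4].reshape(2, 2)
    t = params[4:]
    return ((src @ A.T + t) - dst).ravel()

def refine_affine(src, dst, init=None):
    """Refine an affine transform from matched points (N x 2 arrays)."""
    x0 = np.array([1, 0, 0, 1, 0, 0], float) if init is None else init
    sol = least_squares(residuals, x0, args=(src, dst), method="lm")
    return sol.x[:4].reshape(2, 2), sol.x[4:]

# Example with synthetic matches
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (50, 2))
A_true = np.array([[1.02, 0.05], [-0.04, 0.98]])
dst = src @ A_true.T + np.array([3.0, -2.0]) + rng.normal(0, 0.3, (50, 2))
A_est, t_est = refine_affine(src, dst)
print(A_est, t_est)
```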

2.
Optical-CT dual-modality imaging requires a mapping between 2D fluorescence images and the 3D light flux on the body surface. In this paper, we propose an optical-CT dual-modality image mapping algorithm based on Digitally Reconstructed Radiography (DRR) registration. During registration, a series of DRR images is computed from the CT data with a ray casting algorithm. An improved HMNI similarity strategy based on the Hausdorff distance is then used to register the white-light optical images to the virtual DRR images. From the correspondence obtained by this registration and Lambert's cosine law applied to a pin-hole imaging model, the 3D light intensity distribution on the object surface can be recovered. The feasibility and effectiveness of the mapping algorithm are verified with irregular-phantom and mouse experiments.
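A minimal sketch (an assumption, not the paper's implementation) of the Hausdorff distance between two edge point sets, the kind of geometric similarity used above to compare a white-light optical image with a DRR rendered from CT; the registration would search for the pose that minimizes this distance.

```python
# Sketch: symmetric Hausdorff distance between two 2D point sets.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two (N, 2) point sets."""
    d_ab = directed_hausdorff(points_a, points_b)[0]
    d_ba = directed_hausdorff(points_b, points_a)[0]
    return max(d_ab, d_ba)

# Edge points extracted (e.g., with an edge detector) from the optical image
# and from a candidate DRR view.
optical_edges = np.array([[10, 12], [11, 15], [30, 40]], float)
drr_edges = np.array([[10, 13], [12, 16], [29, 41]], float)
print(hausdorff(optical_edges, drr_edges))
```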

3.
Introduction

Our markerless tumor tracking algorithm requires 4DCT data to train models. 4DCT cannot be used for markerless tracking for respiratory-gated treatment due to inaccuracies and a high radiation dose. We developed a deep neural network (DNN) to generate 4DCT from 3DCT data.

Methods

We used 2420 thoracic 4DCT datasets from 436 patients to train a DNN, designed to export 9 deformation vector fields (each field representing one-ninth of the respiratory cycle) from each CT dataset based on a 3D convolutional autoencoder with shortcut connections using deformable image registration. Then 3DCT data at exhale were transformed using the predicted deformation vector fields to obtain simulated 4DCT data. We compared markerless tracking accuracy between original and simulated 4DCT datasets for 20 patients. Our tracking algorithm used a machine learning approach with patient-specific model parameters. For the training stage, a pair of digitally reconstructed radiography images was generated using 4DCT for each patient. For the prediction stage, the tracking algorithm calculated tumor position using incoming fluoroscopic image data.

Results

Diaphragmatic displacement averaged over 40 cases for the original 4DCT was slightly higher (<1.3 mm) than that for the simulated 4DCT. Tracking positional errors (95th percentile of the absolute value of displacement, “simulated 4DCT” minus “original 4DCT”) averaged over the 20 cases were 0.56 mm, 0.65 mm, and 0.96 mm in the X, Y and Z directions, respectively.

Conclusions

We developed a DNN to generate simulated 4DCT data that are useful for markerless tumor tracking when original 4DCT is not available. Using this DNN would accelerate markerless tumor tracking and increase treatment accuracy in thoracoabdominal treatment.
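A minimal sketch (assuming the deformation vector field is given in voxel units on the same grid as the CT) of the warping step described above: an exhale 3DCT volume is resampled through a predicted deformation vector field to produce one simulated respiratory phase. The array shapes and the `dvf_phase3` field are illustrative only.

```python
# Sketch: warp a 3D CT volume with a deformation vector field (DVF).
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(volume, dvf):
    """volume: (Z, Y, X); dvf: (3, Z, Y, X) displacements in voxels."""
    z, y, x = np.meshgrid(
        np.arange(volume.shape[0]),
        np.arange(volume.shape[1]),
        np.arange(volume.shape[2]),
        indexing="ij",
    )
    coords = np.stack([z + dvf[0], y + dvf[1], x + dvf[2]])
    return map_coordinates(volume, coords, order=1, mode="nearest")

ct_exhale = np.random.rand(32, 64, 64).astype(np.float32)
dvf_phase3 = np.zeros((3,) + ct_exhale.shape, np.float32)  # one of 9 predicted fields
dvf_phase3[0] += 2.0  # sample 2 voxels away along z: a crude bulk shift for illustration
ct_phase3 = warp_volume(ct_exhale, dvf_phase3)
```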

4.
Biplane 2D-3D registration approaches have been used for measuring 3D, in vivo glenohumeral (GH) joint kinematics. Computed tomography (CT) has become the gold standard for reconstructing 3D bone models, as it provides high geometric accuracy and similar tissue contrast to video-radiography. Alternatively, magnetic resonance imaging (MRI) would not expose subjects to radiation and provides the ability to add cartilage and other soft tissues to the models. However, the accuracy of MRI-based 2D-3D registration for quantifying glenohumeral kinematics is unknown. We developed an automatic 2D-3D registration program that works with both CT- and MRI-based image volumes for quantifying joint motions. The purpose of this study was to use the proposed 2D-3D auto-registration algorithm to describe the humerus and scapula tracking accuracy of CT- and MRI-based registration relative to radiostereometric analysis (RSA) during dynamic biplanar video-radiography. The GH kinematic accuracy (RMS error) was 0.6–1.0 mm and 0.6–2.2° for the CT-based registration and 1.4–2.2 mm and 1.2–2.6° for MRI-based registration. Higher kinematic accuracy of CT-based registration was expected as MRI provides lower spatial resolution and bone contrast as compared to CT and suffers from spatial distortions. However, the MRI-based registration is within an acceptable accuracy for many clinical research questions.

5.
Peptide receptor radionuclide therapy (PRRT) is an effective molecular radiotherapy (MRT) treatment consisting of multiple administrations of a radiopharmaceutical labelled with 177Lu or 90Y. Through sequential functional imaging, a patient-specific 3D dosimetry can be derived. The multiple scans must first be co-registered to allow accurate absorbed dose calculations. The purpose of this study is to evaluate the impact of image registration algorithms on 3D absorbed dose calculation. A cohort of patients was extracted from the database of a clinical trial in PRRT. Each patient received a single administration of 177Lu-DOTATOC. All patients underwent 5 sequential SPECT/CT scans at 1 h, 4 h, 24 h, 40 h, and 70 h post-injection, which were subsequently registered using rigid and deformable algorithms. A similarity index was calculated to compare the rigid and deformable registration algorithms, and the 3D absorbed dose calculation was carried out with the Raydose Monte Carlo code. The similarity analysis demonstrated the superiority of the deformable registrations (p < .001). The average absorbed dose to the kidneys calculated using rigid image registration was consistently lower than that calculated using the deformable algorithm (90% of cases), with percentage differences in the range [−19; +4]%. Absorbed doses to lesions were also consistently lower (90% of cases) when calculated with rigid image registration, with differences in the range [−67.2; 100.7]%. Deformable image registration had a significant role in calculating the 3D absorbed dose to organs or lesions with volumes smaller than 100 mL. Image-based 3D dosimetry for 177Lu-DOTATOC PRRT is significantly affected by the type of algorithm used to register the sequential SPECT/CT scans.

6.
Background

Reliable image comparisons, based on fast and accurate deformable registration methods, are recognized as key steps in the diagnosis and follow-up of cancer as well as for radiation therapy planning or surgery. In the particular case of abdominal images, the images to compare often differ widely from each other due to organ deformation, patient motion, movements of the gastrointestinal tract or breathing. As a consequence, there is a need for registration methods that can cope with both local and global, large and highly non-linear deformations.

Method

Deformable registration of medical images traditionally relies on the iterative minimization of a cost function involving a large number of parameters. For complex deformations and large datasets, this process is computationally very demanding, leading to processing times that are incompatible with the clinical routine workflow. Moreover, the highly non-convex nature of these optimization problems leads to a high risk of convergence toward local minima. Recently, deep learning approaches using Convolutional Neural Networks (CNN) have led to major breakthroughs by providing computationally fast unsupervised methods for the registration of 2D and 3D images within seconds. Among the proposed approaches, the VoxelMorph learning-based framework pioneered learning, in an unsupervised way, the complex mapping between any pair of 2D or 3D images and the corresponding deformation field, parameterized using a CNN, by minimizing a standard intensity-based similarity metric over the whole learning database. VoxelMorph has so far only been evaluated on brain images. The present study evaluates this method in the context of inter-subject registration of abdominal CT images, which present a greater registration challenge than brain images due to greater anatomical variability and significant organ deformations.

Results

The performance of VoxelMorph was compared with the current top-performing non-learning-based deformable registration method “Symmetric Normalization” (SyN), implemented in ANTs, on two representative databases: LiTS and 3D-IRCADb-01. Three different experiments were carried out on 2D or 3D data, using atlas-based or pairwise registration and two different similarity metrics (MSE and CC). Accuracy of the registration was measured by the Dice score, which quantifies the volume overlap for the selected anatomical region. All three experiments show that the two deformable registration methods significantly outperform affine registration and that the accuracy of VoxelMorph is comparable to, or even better than, that of the reference non-learning-based registration method ANTs (SyN), with a drastically reduced computation time.

Conclusion

By substituting a time-consuming optimization problem with a trained registration function, VoxelMorph represents an outstanding achievement in learning-based registration: it performs accurate deformable registration on abdominal images while reducing the computation time from minutes to seconds and from seconds to milliseconds in comparison to ANTs (SyN) on a CPU.
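A minimal sketch of the Dice score used above to quantify volume overlap of a segmented anatomical region after registration (binary label maps assumed; this is not tied to the VoxelMorph or ANTs implementations).

```python
# Sketch: Dice similarity coefficient between two binary segmentations.
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    """Dice similarity coefficient between two boolean arrays."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

liver_fixed = np.zeros((64, 64, 64), bool)
liver_warped = np.zeros((64, 64, 64), bool)
liver_fixed[20:40, 20:40, 20:40] = True
liver_warped[22:42, 20:40, 20:40] = True
print(f"Dice = {dice(liver_fixed, liver_warped):.3f}")
```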

7.
Purpose

In this study, a 3D phase correlation algorithm was investigated to test feasibility for use in determining the anatomical changes that occur throughout a patient's radiotherapy treatment. The algorithm determines the transformations between two image volumes through analysis in the Fourier domain and has not previously been used in radiotherapy for 3D registration of CT and CBCT volumes.

Methods

Various known transformations were applied to a patient's prostate CT image volume to create 12 different test cases. The mean absolute error and standard deviation were determined by evaluating the difference between the known contours and those calculated from the registration process on a point-by-point basis. Similar evaluations were performed on images with increasing levels of noise added. The improvement in structure overlap offered by the algorithm in registering clinical CBCT to CT images was evaluated using the Dice Similarity Coefficient (DSC).

Results

A mean error of 2.35 (σ = 1.54) mm was calculated for the 12 deformations applied. When increasing levels of noise were introduced to the images, the mean errors were observed to rise up to a maximum increase of 1.77 mm. For CBCT to CT registration, maximum improvements in the DSC of 0.09 and 0.46 were observed for the bladder and rectum, respectively.

Conclusions

The Fourier-based 3D phase correlation registration algorithm investigated displayed promising results in CT to CT and CT to CBCT registration, offers potential in terms of efficiency and robustness to noise, and is suitable for use in radiotherapy for monitoring patient anatomy throughout treatment.
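A minimal, translation-only sketch of 3D phase correlation (the study handles more general transformations): the normalized cross-power spectrum of two volumes has an inverse FFT that peaks at their relative shift.

```python
# Sketch: estimate a 3D translation between two volumes by phase correlation.
import numpy as np

def phase_correlation_3d(fixed, moving, eps=1e-8):
    F = np.fft.fftn(fixed)
    M = np.fft.fftn(moving)
    R = M * np.conj(F)
    R /= np.abs(R) + eps                      # normalized cross-power spectrum
    corr = np.fft.ifftn(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks beyond half the extent to negative shifts
    return np.array([p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)])

vol = np.random.rand(32, 32, 32)
moved = np.roll(vol, shift=(3, -2, 5), axis=(0, 1, 2))
print(phase_correlation_3d(vol, moved))  # approximately [3, -2, 5]
```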

8.
Image registration, the process of optimally aligning homologous structures in multiple images, has recently been demonstrated to support automated pixel-level analysis of pedobarographic images and, subsequently, to extract unique and biomechanically relevant information from plantar pressure data. Recent registration methods have focused on robustness, with slow but globally powerful algorithms. In this paper, we present an alternative registration approach that affords both speed and accuracy, with the goal of making pedobarographic image registration more practical for near-real-time laboratory and clinical applications. The current algorithm first extracts centroid-based curvature trajectories from pressure image contours, and then optimally matches these curvature profiles using optimization based on dynamic programming. Special cases of disconnected images (that occur in high-arched subjects, for example) are dealt with by introducing an artificial spatially linear bridge between adjacent image clusters. Two registration algorithms were developed: a ‘geometric’ algorithm, which exclusively matched geometry, and a ‘hybrid’ algorithm, which performed subsequent pseudo-optimization. After testing the two algorithms on 30 control image pairs considered in a previous study, we found that, when compared with previously published results, the hybrid algorithm improved overlap ratio (p=0.010), but both current algorithms had slightly higher mean-squared error, presumably because they did not consider pixel intensity. Nonetheless, both algorithms greatly improved the computational efficiency (25±8 and 53±9 ms per image pair for geometric and hybrid registrations, respectively). These results imply that registration-based pixel-level pressure image analyses can, eventually, be implemented for practical clinical purposes.
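A toy sketch of matching two 1D curvature profiles with dynamic programming, in the form of classic dynamic time warping; this only illustrates the general idea of DP-based profile matching, not the authors' centroid-based curvature optimization or their bridging of disconnected regions.

```python
# Sketch: dynamic-programming (DTW) cost of aligning two 1D curvature profiles.
import numpy as np

def dtw(profile_a, profile_b):
    n, m = len(profile_a), len(profile_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(profile_a[i - 1] - profile_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

theta = np.linspace(0, 2 * np.pi, 200)
curv_a = np.sin(3 * theta)                  # curvature profile of contour A
curv_b = np.sin(3 * (theta + 0.1)) * 1.05   # slightly rotated/scaled contour B
print(dtw(curv_a, curv_b))
```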

9.
Respiratory motion blurs the standardized uptake value (SUV) and leads to a further signal reduction and changes in the SUV maxima. 4D PET can provide accurate tumor localization as a function of the respiratory phase in PET/CT imaging. We investigated thoracic tumor motion by respiratory 4D CT and assessed its deformation effect on the SUV changes in 4D PET imaging using clinical patient data. Twelve radiation oncology patients with thoracic cancer, including five lung cancer patients and seven esophageal cancer patients, were recruited to the present study. The 4D CT and PET image sets were acquired and reconstructed for 10 respiratory phases across the whole respiratory cycle. The optical flow method was applied to the 4D CT data to calculate the maximum displacements of the tumor motion in respiration. Our results show that increased tumor motion has a significant degree of association with the SUVmax loss for lung cancer. The results also show that the SUVmax loss has a higher correlation with tumors located in the lower lobe of the lung or in lower regions of the esophagus.
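A 2D slice-wise illustration (an assumption for brevity; the study applied optical flow to the full 3D/4D CT data) of estimating respiratory displacement between two breathing phases with OpenCV's Farneback optical flow.

```python
# Sketch: maximum in-plane displacement between two CT phases via optical flow.
import cv2
import numpy as np

def max_displacement(slice_phase0, slice_phase5):
    a = cv2.normalize(slice_phase0, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    b = cv2.normalize(slice_phase5, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    flow = cv2.calcOpticalFlowFarneback(a, b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)   # displacement in pixels
    return magnitude.max()

ct_phase0 = np.random.rand(256, 256).astype(np.float32)
ct_phase5 = np.roll(ct_phase0, 4, axis=0)      # crude stand-in for tumor motion
print(max_displacement(ct_phase0, ct_phase5))
```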

10.
Several Finite Element (FE) models of the pelvis have been developed to comprehensively assess the onset of pathologies and for clinical and industrial applications. However, because of the difficulties associated with the creation of subject-specific FE meshes from CT scans and MR images, most of the existing models rely on the data of one given individual. Moreover, although several fast and robust methods have been developed for automatically generating tetrahedral meshes of arbitrary geometries, hexahedral meshes are still preferred today because of their distinct advantages, but their generation remains an open challenge. Recently, approaches have been proposed for fast 3D reconstruction of bones based on X-ray imaging. In this study, we adapted such an approach for the fast and automatic generation of all-hexahedral subject-specific FE models of the pelvis based on the elastic registration of a generic mesh to the subject-specific target in conjunction with element regularity and quality correction. The technique was successfully tested on a database of 120 3D reconstructions of pelvises from biplanar X-ray images. For each patient, a full hexahedral subject-specific FE mesh was generated with an accurate surface representation.

11.
Purpose

The aim of this study is to present a short and comprehensive review of the methods of medical image registration, their conditions and applications in radiotherapy. A particular focus was placed on the methods of deformable image registration.

Methods

To structure and deepen the knowledge on medical image registration in radiotherapy, a medical literature analysis was made using the Google Scholar browser and the medical database of the PubMed library.

Results

A chronological review of image registration methods in radiotherapy based on 34 selected articles. Particular attention was given to showing: (i) potential regions of application of the different registration methods, (ii) the mathematical basis of the deformable methods and (iii) the methods of quality control for the registration process.

Conclusions

The primary aim of the medical image registration process is to connect the contents of images. What we want to achieve is complementary or extended knowledge that can be used for more precise localisation of pathogenic lesions and continuous improvement of patient treatment. Therefore, the choice of imaging mode is dependent on the type of clinical study. It is impossible to visualise all anatomical details or functional changes using a single modality machine. Therefore, fusion of various modality images is of great clinical relevance. A natural problem in analysing the fusion of medical images is geographical errors related to displacement. The images being registered are acquired not at the same time and, very often, at different respiratory phases.

12.
Early lung tumors usually appear as nodules on CT scans, and statistical studies indicate that 30% to 40% of such nodules are malignant. Early detection and classification of lung nodules are therefore crucial to the treatment of lung cancer. With the increasing prevalence of lung cancer, the large volume of CT images awaiting diagnosis is a heavy burden on doctors, who may miss or falsely detect abnormalities due to fatigue. Methods: In this study, we propose a novel lung nodule detection method based on the YOLOv3 deep learning algorithm that needs only one preprocessing step. To overcome the shortage of training data when starting a new Computer Aided Diagnosis (CAD) study, we first select a small number of diseased regions to simulate training with a limited dataset: 5 nodule patterns are selected and deformed into 110 nodules by random geometric transformation before being fused into 10 normal lung CT images using Poisson image editing. According to the experimental results, this Poisson fusion method achieves a detection rate of about 65.24% when testing on 100 new images. Secondly, 419 slices from the public RIDER database are used to train and test our YOLOv3 network. Lung nodule detection with YOLOv3 is 2–3 times faster than the mainstream algorithm, with a detection accuracy of 95.17%. Finally, the configuration of YOLOv3 is optimized on the training data sets. The results show that YOLOv3 offers both high speed and high accuracy in lung nodule detection and can process a large amount of CT image data within a short time, meeting the heavy demands of clinical practice. In addition, using Poisson image editing to generate data sets can reduce the need for raw training data and improve training efficiency.
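A minimal sketch of the data-augmentation idea above: a geometrically transformed nodule patch is fused into a normal lung CT slice with Poisson image editing, here via OpenCV's seamlessClone as one common implementation. File names, the target location and the transformation parameters are hypothetical.

```python
# Sketch: Poisson fusion of a deformed nodule patch into a normal lung CT slice.
import cv2
import numpy as np

nodule = cv2.imread("nodule_patch.png")      # small BGR patch containing a nodule
lung = cv2.imread("normal_lung_slice.png")   # BGR rendering of a normal CT slice

# random geometric transformation of the nodule patch (rotation + scale)
h, w = nodule.shape[:2]
M = cv2.getRotationMatrix2D((w / 2, h / 2), angle=30, scale=1.2)
nodule = cv2.warpAffine(nodule, M, (w, h))

mask = 255 * np.ones(nodule.shape[:2], np.uint8)   # blend the whole patch
center = (180, 220)                                # target location in the slice
fused = cv2.seamlessClone(nodule, lung, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("simulated_nodule_case.png", fused)
```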

13.
Purpose

To develop an automatic multimodal method for segmentation of parotid glands (PGs) from pre-registered computed tomography (CT) and magnetic resonance (MR) images and compare its results to the results of an existing state-of-the-art algorithm that segments PGs from CT images only.

Methods

Magnetic resonance images of the head and neck were registered to the accompanying CT images using two different state-of-the-art registration procedures. The reference domains of registered image pairs were divided into the complementary PG regions and backgrounds according to the manual delineation of PGs on CT images, provided by a physician. Patches of intensity values from both image modalities, centered around randomly sampled voxels from the reference domain, served as positive or negative samples in the training of the convolutional neural network (CNN) classifier. The trained CNN accepted a previously unseen (registered) image pair and classified its voxels according to the resemblance of its patches to the patches used for training. The final segmentation was refined using a graph-cut algorithm, followed by dilate-erode operations.

Results

Using the same image dataset, segmentation of PGs was performed with the proposed multimodal algorithm and with an existing monomodal algorithm, which segments PGs from CT images only. The mean Dice overlap coefficient achieved by the proposed algorithm was 78.8%, while the corresponding mean value for the monomodal algorithm was 76.5%.

Conclusions

Automatic PG segmentation on the planning CT image can be augmented with the MR image modality, leading to improved RT planning of head and neck cancer.

14.

Purpose

To develop a robust tool for quantitative in situ pathology that allows visualization of heterogeneous tissue morphology and segmentation and quantification of image features.

Materials and Methods

Tissue excised from a genetically engineered mouse model of sarcoma was imaged using a subcellular resolution microendoscope after topical application of a fluorescent anatomical contrast agent: acriflavine. An algorithm based on sparse component analysis (SCA) and the circle transform (CT) was developed for image segmentation and quantification of distinct tissue types. The accuracy of our approach was quantified through simulations of tumor and muscle images. Specifically, tumor, muscle, and tumor+muscle tissue images were simulated because these tissue types were most commonly observed in sarcoma margins. Simulations were based on tissue characteristics observed in pathology slides. The potential clinical utility of our approach was evaluated by imaging excised margins and the tumor bed in a cohort of mice after surgical resection of sarcoma.

Results

Simulation experiments revealed that SCA+CT achieved the lowest errors for larger nuclear sizes and for higher contrast ratios (nuclei intensity/background intensity). For imaging of tumor margins, SCA+CT effectively isolated nuclei from tumor, muscle, adipose, and tumor+muscle tissue types. Differences in density were correctly identified with SCA+CT in a cohort of ex vivo and in vivo images, thus illustrating the diagnostic potential of our approach.

Conclusion

The combination of a subcellular-resolution microendoscope, acriflavine staining, and SCA+CT can be used to accurately isolate nuclei and quantify their density in anatomical images of heterogeneous tissue.

15.
Rationale and objectives

Dedicated breast CT and PET/CT scanners provide detailed 3D anatomical and functional imaging data sets and are currently being investigated for applications in breast cancer management such as diagnosis, monitoring response to therapy and radiation therapy planning. Our objective was to evaluate the performance of the diffeomorphic demons (DD) non-rigid image registration method to spatially align 3D serial (pre- and post-contrast) dedicated breast computed tomography (CT), and longitudinally-acquired dedicated 3D breast CT and positron emission tomography (PET)/CT images.

Methods

The algorithmic parameters of the DD method were optimized for the alignment of dedicated breast CT images using training data and fixed. The performance of the method for image alignment was quantitatively evaluated using three separate data sets: (1) serial breast CT pre- and post-contrast images of 20 women, (2) breast CT images of 20 women acquired before and after repositioning the subject on the scanner, and (3) dedicated breast PET/CT images of 7 women undergoing neo-adjuvant chemotherapy acquired pre-treatment and after 1 cycle of therapy.

Results

The DD registration method outperformed no registration (p < 0.001) and conventional affine registration (p ≤ 0.002) for serial and longitudinal breast CT and PET/CT image alignment. In spite of the large size of the imaging data, the computational cost of the DD method was found to be reasonable (3–5 min).

Conclusions

Co-registration of dedicated breast CT and PET/CT images can be performed rapidly and reliably using the DD method. This is the first study evaluating the DD registration method for the alignment of dedicated breast CT and PET/CT images.
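A minimal sketch of diffeomorphic demons registration between two breast CT acquisitions of the same subject (e.g., before and after repositioning). SimpleITK is assumed as the toolkit; file names and parameter values are illustrative, not those optimized in the study.

```python
# Sketch: diffeomorphic demons registration of two breast CT volumes (SimpleITK assumed).
import SimpleITK as sitk

fixed = sitk.ReadImage("breast_ct_scan1.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("breast_ct_scan2.nii.gz", sitk.sitkFloat32)

demons = sitk.DiffeomorphicDemonsRegistrationFilter()
demons.SetNumberOfIterations(50)
demons.SetStandardDeviations(1.5)            # Gaussian smoothing of the field
displacement = demons.Execute(fixed, moving)

transform = sitk.DisplacementFieldTransform(displacement)
registered = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(registered, "breast_ct_scan2_registered.nii.gz")
```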

16.
WY Hsu. PLoS ONE 2012, 7(7): e40558

Background

A common registration problem in consumer-device applications is to align all the acquired image sequences into a complete scene. Image alignment requires a registration algorithm that compensates as much as possible for geometric variability among images. However, views captured from a real scene usually contain different distortions: some arise from the optical characteristics of the image sensors, and others are caused by the specific scenes and objects.

Methodology/Principal Findings

An image registration algorithm considering the perspective projection is proposed for the application of consumer devices in this study. It exploits a multiresolution wavelet-based method to extract significant features. An analytic differential approach is then proposed to achieve fast convergence of point matching. Finally, the registration accuracy is further refined to obtain subpixel precision by a feature-based modified Levenberg-Marquardt method. Due to its feature-based and nonlinear characteristic, it converges considerably faster than most other methods. In addition, vignette compensation and color difference adjustment are also performed to further improve the quality of registration results.

Conclusions/Significance

The performance of the proposed method is evaluated on synthetic and real images acquired with a hand-held digital still camera and compared with two registration techniques in terms of the squared sum of intensity differences (SSD) and the correlation coefficient (CC). The results indicate that the proposed method is promising in registration accuracy and quality, both of which are statistically significantly better than those of the other two approaches.
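A minimal sketch of the two evaluation metrics reported above, the squared sum of intensity differences (SSD) and the correlation coefficient (CC), computed between a registered image and its reference (the overlapping region is assumed to be already cropped to the same shape).

```python
# Sketch: SSD and CC between a reference image and a registered image.
import numpy as np

def ssd(reference, registered):
    return float(np.sum((reference.astype(float) - registered.astype(float)) ** 2))

def cc(reference, registered):
    a = reference.astype(float).ravel()
    b = registered.astype(float).ravel()
    return float(np.corrcoef(a, b)[0, 1])

ref = np.random.rand(256, 256)
reg = ref + np.random.normal(0, 0.05, ref.shape)
print(ssd(ref, reg), cc(ref, reg))
```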

17.
Histology volume reconstruction facilitates the study of 3D shape and volume change of an organ at the level of macrostructures made up of cells. It can also be used to investigate and validate novel techniques and algorithms in volumetric medical imaging and therapies. Creating 3D high-resolution atlases of different organs [1,2,3] is another application of histology volume reconstruction. This provides a resource for investigating tissue structures and the spatial relationship between various cellular features. We present an image registration approach for histology volume reconstruction, which uses a set of optical blockface images. The reconstructed histology volume represents a reliable shape of the processed specimen with no propagated post-processing registration error. The Hematoxylin and Eosin (H&E) stained sections of two mouse mammary glands were registered to their corresponding blockface images using boundary points extracted from the edges of the specimen in histology and blockface images. The accuracy of the registration was visually evaluated. The alignment of the macrostructures of the mammary glands was also visually assessed at high resolution. This study delineates the different steps of this image registration pipeline, ranging from excision of the mammary gland through to 3D histology volume reconstruction. While 2D histology images reveal the structural differences between pairs of sections, 3D histology volume provides the ability to visualize the differences in shape and volume of the mammary glands.
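A minimal sketch of boundary-point based alignment in the spirit of the pipeline above: a least-squares rigid (Kabsch) fit of histology-section boundary points to blockface boundary points. Point correspondences are assumed to be available; in practice an ICP-style matching step would supply them, and the authors' method may differ.

```python
# Sketch: least-squares rigid alignment of two corresponding 2D point sets.
import numpy as np

def rigid_align(src, dst):
    """Kabsch: rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))
    R = (U @ np.diag([1, d]) @ Vt).T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

histology_pts = np.random.rand(100, 2) * 50
theta = np.deg2rad(12)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
blockface_pts = histology_pts @ R_true.T + np.array([4.0, -1.5])
R, t = rigid_align(histology_pts, blockface_pts)   # recovers R_true and the translation
```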

18.
As biomedical images and volumes are being collected at an increasing speed, there is a growing demand for efficient means to organize spatial information for comparative analysis. In many scenarios, such as determining gene expression patterns by in situ hybridization, the images are collected from multiple subjects over a common anatomical region, such as the brain. A fundamental challenge in comparing spatial data from different images is how to account for the shape variations among subjects, which make direct image-to-image comparisons meaningless. In this paper, we describe subdivision meshes as a geometric means to efficiently organize 2D images and 3D volumes collected from different subjects for comparison. The key advantages of a subdivision mesh for this purpose are its light-weight geometric structure and its explicit modeling of anatomical boundaries, which enable efficient and accurate registration. The multi-resolution structure of a subdivision mesh also allows development of fast comparison algorithms among registered images and volumes.

19.
Purpose

An investigation was carried out into the effect of three image registration techniques on the diagnostic image quality of contrast-enhanced magnetic resonance angiography (CE-MRA) images.

Methods

Whole-body CE-MRA data from the lower legs of 27 patients recruited onto a study of asymptomatic atherosclerosis were processed using three deformable image registration algorithms. The resultant diagnostic image quality was evaluated qualitatively in a clinical evaluation by four expert observers, and quantitatively by measuring contrast-to-noise ratios and volumes of blood vessels, and assessing the techniques' ability to correct for varying degrees of motion.

Results

The first registration algorithm (‘AIR’) introduced significant stenosis-mimicking artefacts into the blood vessels' appearance, observed both qualitatively (clinical evaluation) and quantitatively (vessel volume measurements). The two other algorithms (‘Slicer’ and ‘SEMI’), based on the normalised mutual information (NMI) concept and designed specifically to deal with variations in signal intensity as found in contrast-enhanced image data, did not suffer from this serious issue but were rather found to significantly improve the diagnostic image quality both qualitatively and quantitatively, and demonstrated a significantly improved ability to deal with the common problem of patient motion.

Conclusions

This work highlights both the significant benefits to be gained through the use of suitable registration algorithms and the deleterious effects of an inappropriate choice of algorithm for contrast-enhanced MRI data. The maximum benefit was found in the lower legs, where the small arterial vessel diameters and propensity for leg movement during image acquisitions posed considerable problems in making accurate diagnoses from the un-registered images.
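A minimal sketch (a simple joint-histogram estimate, not the implementation used by the evaluated packages) of normalised mutual information, the similarity measure behind the ‘Slicer’ and ‘SEMI’ algorithms' robustness to contrast-related intensity changes.

```python
# Sketch: normalised mutual information (Studholme's NMI) from a joint histogram.
import numpy as np

def nmi(image_a, image_b, bins=64):
    hist_2d, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
    p_xy = hist_2d / hist_2d.sum()
    p_x = p_xy.sum(axis=1)
    p_y = p_xy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(p_x) + entropy(p_y)) / entropy(p_xy.ravel())

pre = np.random.rand(128, 128)
post = 0.5 * pre + 0.2 + 0.05 * np.random.rand(128, 128)  # contrast-enhanced look
print(nmi(pre, post))
```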

20.
IRBM 2014, 35(4): 202–213
Speckle has been widely considered a noisy feature in ultrasound images, thus it is intended to be suppressed and eliminated. On the other hand, speckle can be studied as a signal modeled by various statistical distributions or by analyzing its intensity with spatial relations in image space that characterize its nature, and hence, the nature of the underlying tissue. This knowledge can then be used in order to classify the different speckle regions into anatomical structures. In fact, speckle characterization in echocardiography and other ultrasonic images is important for motion tracking, tissue characterization, image segmentation, registration, and other medical applications for diagnosis, therapy planning and decision making. In this paper, we review and discuss various speckle characterization methods, which are often applied to confirm the speckle nature of the elements.
