Similar Articles
1.
MOTIVATION: The discrimination and measurement of fluorescent-labeled vesicles by microscopic analysis of fixed cells presents a challenge for biologists interested in quantifying the abundance, size and distribution of such vesicles in normal and abnormal cellular situations. In the specific application reported here, we were interested in quantifying changes to the population of a major organelle, the peroxisome, in cells from normal control patients and from patients with a defect in peroxisome biogenesis. In the latter, peroxisomes are present as larger vesicular structures with a more restricted cytoplasmic distribution. Existing image processing methods for extracting fluorescent cell puncta do not provide useful results, so new approaches are needed to deal with this task effectively. RESULTS: We present an effective implementation of the fuzzy c-means algorithm for extracting low-contrast puncta (spots) representing fluorescent-labeled peroxisomes. We use a quadtree partition to enhance the fuzzy c-means segmentation and to disregard regions that contain no target objects (peroxisomes), reducing the considerable time taken by the iterative fuzzy c-means process. Finally, we separate touching peroxisomes using an aspect-ratio criterion. The proposed approach has been applied to extract peroxisomes from several sets of color images, and the results are superior to those obtained from a number of standard spot-extraction techniques. AVAILABILITY: Image data and computer code written in Matlab are available upon request from the first author.
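The paper's Matlab code is available only on request; as an illustration of the core iteration the abstract builds on (without the quadtree speed-up or aspect-ratio post-processing), a minimal fuzzy c-means on 1-D intensities can be sketched as follows. All names here are our own, not the authors':

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, n_iter=50, seed=0):
    """Basic fuzzy c-means on 1-D intensity data (no quadtree speed-up)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m                               # fuzzified memberships
        centers = (um.T @ x) / um.sum(axis=0).reshape(-1, 1)
        d = np.abs(x - centers.T) + 1e-12         # pixel-to-center distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))        # standard membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers.ravel(), u

# Two well-separated intensity clusters, e.g. background vs. bright puncta
data = np.r_[np.full(50, 10.0), np.full(50, 200.0)]
centers, u = fuzzy_c_means(data)
```

With well-separated intensities the centers converge near 10 and 200, and each pixel's membership row sums to one.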

2.
Mutual information (MI)-based registration, which uses MI as the similarity measure, is a representative method in medical image registration. It has excellent robustness and accuracy, but suffers from a large amount of computation and a long processing time. In this paper, the centroid is obtained by computing the medical image moments. Using fuzzy c-means clustering, the coordinates of the medical image are divided into two clusters to fit a straight line, and the rotation angles of the reference and floating images are computed, respectively; this determines the initial values for registering the images. For the search over geometric transformation parameters, we introduce two new concepts, fuzzy distance and fuzzy signal-to-noise ratio (FSNR), and select FSNR as the similarity measure between the reference and floating images. In the experiments, the Simplex method is chosen as the multi-parameter optimiser. The experimental results show that the proposed method is simple to implement, has a low computational cost, registers quickly with good accuracy, and effectively avoids becoming trapped in local optima. It is suited to both mono-modality and multi-modality image registration.
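The FSNR measure proposed in this paper is not specified in the abstract, but the classical MI similarity it is compared against is standard: MI is computed from the joint intensity histogram of the two images. A minimal sketch (bin count and images are illustrative assumptions):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two images from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint probability
    px = pxy.sum(axis=1, keepdims=True)           # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)                     # perfectly aligned: high MI
mi_rand = mutual_information(img, rng.random((64, 64)))    # independent: near zero
```

An optimiser such as the Simplex method would maximise this quantity (or, in the paper, the FSNR) over the transformation parameters.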

3.
Purpose: To develop an automatic multimodal method for segmentation of parotid glands (PGs) from pre-registered computed tomography (CT) and magnetic resonance (MR) images, and to compare its results to those of an existing state-of-the-art algorithm that segments PGs from CT images only. Methods: Magnetic resonance images of the head and neck were registered to the accompanying CT images using two different state-of-the-art registration procedures. The reference domains of registered image pairs were divided into complementary PG regions and backgrounds according to the manual delineation of PGs on CT images provided by a physician. Patches of intensity values from both image modalities, centered around randomly sampled voxels from the reference domain, served as positive or negative samples in the training of a convolutional neural network (CNN) classifier. The trained CNN accepted a previously unseen (registered) image pair and classified its voxels according to the resemblance of their patches to the patches used for training. The final segmentation was refined using a graph-cut algorithm, followed by dilate-erode operations. Results: Using the same image dataset, segmentation of PGs was performed with the proposed multimodal algorithm and an existing monomodal algorithm that segments PGs from CT images only. The mean Dice overlap coefficient for the proposed algorithm was 78.8%, compared with 76.5% for the monomodal algorithm. Conclusions: Automatic PG segmentation on the planning CT image can be augmented with the MR image modality, leading to improved radiotherapy (RT) planning for head and neck cancer.
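The Dice overlap coefficient reported above is a standard measure of agreement between a computed segmentation and a manual reference. A short sketch of the computation (the masks here are synthetic examples, not the paper's data):

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    denom = seg.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True   # 36 "reference" pixels
b = np.zeros((10, 10), dtype=bool); b[4:8, 2:8] = True   # 24 "segmented" pixels
# overlap = 24 pixels, so Dice = 2*24 / (36+24) = 0.8
```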

4.
This paper describes a knowledge-based method for automatic segmentation of three-dimensional medical images, used to segment and analyze intracranial hemorrhage (ICH). First, CT films are digitized, and the digitized films are automatically classified according to the presence or absence of abnormality. Then, thresholding combined with the fuzzy c-means clustering algorithm partitions the image into multiple regions of uniform intensity. Finally, based on prior knowledge and predefined rules, a knowledge-based expert system labels each region as background, calcification, hematoma, skull or brainstem.

5.
Edge detection has beneficial applications in fields such as machine vision, pattern recognition and biomedical imaging. Edge detection highlights high-frequency components in an image and is a challenging task, all the more so for noisy images. This study focuses on fuzzy logic based edge detection in smooth and noisy clinical images. The proposed method employs a 3×3 mask guided by a fuzzy rule set for noisy images; for smooth clinical images, an extra contrast-adjustment mask is integrated with the edge detection mask to intensify the smooth images. The developed method was tested on noise-free, smooth and noisy images, and the results were compared with established edge detection techniques: Sobel, Prewitt, Laplacian of Gaussian (LOG), Roberts and Canny. When applied to a smooth clinical image of 270×290 pixels with 24 dB ‘salt and pepper’ noise, the developed technique detected very few (22) false edge pixels, compared to Sobel (1931), Prewitt (2741), LOG (3102), Roberts (1451) and Canny (1045). The developed method therefore offers an improved solution to the edge detection problem in smooth and noisy clinical images.
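The paper's exact 3×3 fuzzy rule set is not given in the abstract; one plausible toy version of the idea, a local gradient magnitude mapped through a piecewise-linear fuzzy membership function, can be sketched as follows (the thresholds `low` and `high` are illustrative assumptions):

```python
import numpy as np

def edge_map(img, low=10.0, high=40.0):
    """Toy fuzzy edge detector: 3x3-neighbourhood gradient magnitude mapped
    through a piecewise-linear membership (0 = smooth, 1 = definite edge)."""
    img = img.astype(float)
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]        # central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    # fuzzy "edge-ness": 0 below `low`, 1 above `high`, linear in between
    return np.clip((mag - low) / (high - low), 0.0, 1.0)

img = np.zeros((8, 8)); img[:, 4:] = 100.0        # vertical step edge
edges = edge_map(img)
```

Pixels beside the step get full edge membership, while flat regions stay at zero.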

6.
Deep learning is a powerful approach for distinguishing classes of images, and there is a growing interest in applying these methods to delimit species, particularly in the identification of mosquito vectors. Visual identification of mosquito species is the foundation of mosquito-borne disease surveillance and management, but can be hindered by cryptic morphological variation in mosquito vector species complexes such as the malaria-transmitting Anopheles gambiae complex. We sought to apply Convolutional Neural Networks (CNNs) to images of mosquitoes as a proof of concept to determine the feasibility of automatic classification of mosquito sex, genus, species and strains using whole-body, 2D images. We introduce a library of 1,709 images of adult mosquitoes collected from 16 colonies of mosquito vector species and strains originating from five geographic regions, including four cryptic species not readily distinguishable morphologically even by trained medical entomologists. We present a methodology for image processing, data augmentation, and training and validation of a CNN. Our best CNN configuration achieved high prediction accuracies of 96.96% for species identification and 98.48% for sex. Our results demonstrate that CNNs can delimit species with cryptic morphological variation, two strains of a single species, and specimens from a single colony stored using two different methods. We present visualizations of the CNN feature space and predictions to aid interpretation of our results, and we discuss applications of our findings to future malaria mosquito surveillance.

7.
Segmentation is an important step in the diagnosis of multiple sclerosis (MS). This paper presents a new approach to the fully automatic segmentation of MS lesions in Fluid Attenuated Inversion Recovery (FLAIR) Magnetic Resonance (MR) images. With the aim of increasing the contrast of FLAIR MR images with respect to MS lesions, the proposed method first estimates the fuzzy memberships of brain tissues (the cerebrospinal fluid (CSF), the normal-appearing brain tissue (NABT) and the lesions). The fuzzy regions of the membership functions are determined by maximizing fuzzy entropy with a genetic algorithm. Our experiments show that the intersection points of the obtained membership functions are not accurate enough to segment brain tissues directly. Therefore, by extracting structural similarity (SSIM) indices between the FLAIR MR image and its lesion-membership image, a new contrast-enhanced image is created in which MS lesions have high contrast against other tissues, and this image is used to segment the lesions. To evaluate the proposed method, similarity criteria over all slices from 20 MS patients are calculated and compared with other methods, including manual segmentation. The volume of segmented lesions is also computed and compared with the gold standard using the Intraclass Correlation Coefficient (ICC) and a paired-samples t test. The similarity index for patients with small, moderate and large lesion loads was 0.7261, 0.7745 and 0.8231, respectively, with an average of 0.7649 over all patients. The t test indicates no statistically significant difference between the automatic and manual segmentations. These validated results show that the approach is very promising.
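The SSIM index used above is normally computed over local windows; as a simplified, hedged sketch of the underlying formula, a single-window (global-statistics) version looks like this (the paper's windowed variant would apply the same formula per patch):

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Single-window SSIM from global statistics (a simplification of the
    usual locally-windowed SSIM; constants follow the common 0.01/0.03 choice)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
a = rng.random((32, 32)) * 255
# identical images give SSIM = 1; unrelated images give a value near 0
```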

8.
Inspection of insect sticky paper traps is an essential task for an effective integrated pest management (IPM) programme. However, identification and counting of the insect pests stuck on the traps is a very cumbersome task. Therefore, an efficient approach is needed to alleviate the problem and to provide timely information on insect pests. In this research, an automatic method for the multi-class recognition of small-size greenhouse insect pests on sticky paper trap images acquired by wireless imaging devices is proposed. The developed algorithm features a cascaded approach that uses a convolutional neural network (CNN) object detector and CNN image classifiers, separately. The object detector was trained for detecting objects in an image, and a CNN classifier was applied to further filter out non-insect objects from the detected objects in the first stage. The obtained insect objects were then further classified into flies (Diptera: Drosophilidae), gnats (Diptera: Sciaridae), thrips (Thysanoptera: Thripidae) and whiteflies (Hemiptera: Aleyrodidae), using a multi-class CNN classifier in the second stage. Advantages of this approach include flexibility in adding more classes to the multi-class insect classifier and sample control strategies to improve classification performance. The algorithm was developed and tested for images taken by multiple wireless imaging devices installed in several greenhouses under natural and variable lighting environments. Based on the testing results from long-term experiments in greenhouses, it was found that the algorithm could achieve average F1-scores of 0.92 and 0.90 and mean counting accuracies of 0.91 and 0.90, as tested on a separate 6-month image data set and on an image data set from a different greenhouse, respectively. 
The method proposed in this research resolves important problems in the automated recognition of insect pests and provides instantaneous information on insect pest occurrences in greenhouses, offering vast potential for developing more efficient IPM strategies in agriculture.

9.
In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: i) creating masks for water, non-forested areas, clouds, and cloud shadows; ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; iii) filtering the original training data through a number of classification algorithms using n-fold cross validation to eliminate mislabeled training samples; and finally, iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of misclassification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. Nevertheless, the relatively high accuracy, the little or no user input needed for processing, the speed of map production, and the simplicity of the approach make the new method especially practical for forest cover change analysis over very large regions.
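Step ii) above, selecting training pixels by their distance from the mean of the SWIR difference image, can be sketched under one plausible reading (global rather than local-window statistics, and illustrative multipliers `k` and `0.5`):

```python
import numpy as np

def training_pixels(diff, k=2.0):
    """Label pixels beyond mean + k*std of the SWIR difference image as
    candidate disturbance samples, and pixels near the mean as stable samples."""
    mu, sigma = diff.mean(), diff.std()
    disturbed = diff > mu + k * sigma             # far above the mean: changed
    stable = np.abs(diff - mu) < 0.5 * sigma      # confidently unchanged
    return disturbed, stable

rng = np.random.default_rng(2)
diff = rng.normal(0.0, 1.0, size=(100, 100))      # background: no change
diff[:5, :5] = 8.0                                # simulated disturbance patch
dist, stab = training_pixels(diff)
```

The simulated disturbance patch is picked up in full by the `disturbed` mask and excluded from the `stable` mask; in the paper these labels would then seed the supervised classifier.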

10.

Background

Cervical cancer is the fifth most common cancer among women and the third leading cause of cancer death in women worldwide. Brachytherapy is the most effective treatment for cervical cancer. For brachytherapy, computed tomography (CT) imaging is necessary since it conveys the tissue density information needed for dose planning. However, the metal artifacts caused by brachytherapy applicators remain a challenge for the automatic processing of image data in image-guided procedures and for accurate dose calculation. Therefore, an effective metal artifact reduction (MAR) algorithm for cervical CT images is in high demand.

Methods

A novel residual learning method based on a convolutional neural network (RL-ARCNN) is proposed to reduce metal artifacts in cervical CT images. First, a dataset is generated by simulating various metal artifacts; it includes artifact-inserted, artifact-free and artifact-residual images. Numerous image patches are extracted from this dataset to train the residual learning artifact reduction CNN (RL-ARCNN). The trained model can then be used for MAR on cervical CT images.

Results

The proposed method provides a good MAR result, with a PSNR of 38.09 on the test set of simulated artifact images. The PSNR of residual learning (38.09) is higher than that of ordinary learning (37.79), which shows that CNN-predicted residual images achieve favorable artifact reduction. Moreover, for a 512 × 512 image, the average artifact-removal time is less than 1 s.
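The PSNR values reported here follow the standard definition over the mean squared error; a short sketch (the test images are synthetic, not the paper's data):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 5.0          # constant error of 5 gray levels, so MSE = 25
value = psnr(ref, noisy)   # 10*log10(255^2 / 25) ≈ 34.15 dB
```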

Conclusions

The RL-ARCNN results indicate that residual learning with a CNN remarkably reduces metal artifacts, improving the visualization of critical structures and radiation oncologists' confidence in target delineation. Metal artifacts are removed efficiently without sinogram data or a complicated post-processing procedure.

11.
Spatially explicit data on heterogeneously distributed plant populations are difficult to quantify using either traditional field-based methods or remote sensing techniques alone. Unmanned Aerial Vehicles (UAVs) offer new means and tools for baseline monitoring of such populations. We tested vegetation classification of UAV-acquired photographs as a method to capture heterogeneously distributed plant populations, using Jacobaea vulgaris as a model species. Five sites, each containing 1–4 pastures with varying J. vulgaris abundance, were selected across Schleswig–Holstein, Germany. Surveys were conducted in July 2017, when J. vulgaris was at its flowering peak. We took aerial photographs at 50 m altitude using three digital cameras (RGB, red-edge and near-infrared). Orthomosaics were created before a pixel-based supervised classification. Classification results were evaluated for accuracy; reliability was assessed with field data collected for ground verification. An ANOVA tested the relationship between field-based abundance estimations and the supervised classifications. Overall accuracy of the classification was very high (90.6% ± 1.76 s.e.). Kappa coefficients indicated substantial agreement between field data and image classification (≥ 0.65). Field-based estimations were a good predictor of the supervised classifications (F = 7.91, df = 4, P = 0.007), resulting in similar rankings of J. vulgaris abundance. UAV-acquired images thus show potential as an objective method for data collection and species monitoring. However, our method was more time consuming than field-based estimations due to challenges in image processing. Nonetheless, the increasing availability of low-cost consumer-grade UAVs is likely to increase the use of UAVs in plant ecological studies.
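The kappa coefficient cited above measures agreement between the classified map and the ground data beyond chance. Its computation from a confusion matrix is standard (the matrix below is an invented example, not the study's data):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a confusion matrix (rows: reference, cols: map)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                # observed agreement
    pe = (cm.sum(axis=1) @ cm.sum(axis=0)) / n ** 2      # chance agreement
    return (po - pe) / (1.0 - pe)

cm = np.array([[45, 5],
               [10, 40]])
# po = 85/100, pe = (50*55 + 50*45)/10000 = 0.5, so kappa = 0.35/0.5 = 0.7
```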

12.
《IRBM》2022,43(5):405-413
Purpose: Leukaemia is conventionally diagnosed by observing peripheral blood and bone marrow smears under a microscope, supported by advanced laboratory tests. Image processing-based methods, which are simple, fast and cheap, can be used to detect and classify leukemic cells by processing and analysing images of microscopic smears. The proposed study aims to classify Acute Lymphoblastic Leukaemia (ALL) with Deep Learning (DL) based techniques. Procedures: The study used Deep Convolutional Neural Networks (DNNs) to classify ALL according to the WHO classification scheme, without any image segmentation or feature extraction, which involve intense computation. Images from the online image bank of the American Society of Haematology (ASH) were used for the classification. Findings: A classification accuracy of 94.12% was achieved in isolating B-cell and T-cell ALL images using a pretrained CNN (AlexNet) as well as LeukNet, a custom deep learning network designed in this work. The study also compared classification performance using three different training algorithms. Conclusions: The paper details the use of DNNs to classify ALL without any image segmentation or feature extraction. To the best of the authors' knowledge, classification of ALL into subtypes according to the WHO scheme using image processing techniques is not available in the literature. The present study considered the classification of ALL only; detection of other types of leukaemia can be attempted in future research.

13.
Aims: Desertification results in the ecological and biological diminution of the earth, and can occur naturally or be caused by anthropogenic activities. This process especially affects arid and semi-arid regions such as the Isfahan region, where the spread of desertification is reaching critical proportions. The aim of this study is to use remotely sensed data to review the trend of desertification in the north of Isfahan, Iran. Methods: Multi-temporal images were employed to evaluate the trend of desertification, specifically TM and ETM+ data from September 1990 and September 2001. Geometric and radiometric corrections were applied to each image prior to image processing and supervised classification, and vegetation indices were applied to produce a land use map of each image in nine classes. The land use classifications in the two maps were compared, and changes between land use classes were detected over the 11-year period using fuzzy and post-classification techniques. Important findings: The maps and their comparison with false color composite images showed the differences efficiently, and the fuzzy and post-classification method located the land use changes on the map. The fuzzy analysis identified 53% of the study region as changed and 47% as unchanged, verifying the expansion of desertification in the study areas. Because of poor land management, agricultural lands were converted to desert and abandoned areas, and some marginal pasture lands had to be converted to agricultural land, which constitutes spreading desertification according to the United Nations Conference on Desertification (UNCOD). Farmland and pastures have also been converted to urban and industrial areas, and rangelands have been spoiled by opencast mine excavations; with the mine margins eroding and their debris accumulating on the pasture lands, desertification has worsened. Three areas of less-elevated mountains have remained unchanged. This study confirms that anthropogenic activities have accelerated the desertification process and severely endangered the remaining areas.

14.
15.
This paper proposes a novel multi-label classification method for spacecraft electrical characteristics problems, which involve large amounts of unlabeled test data, high-dimensional features, long computing times and slow identification rates. First, fuzzy c-means (FCM) offline clustering and principal component feature extraction are applied for feature selection. Second, an approximate weighted proximal support vector machine (WPSVM) online classification algorithm is used to reduce the feature dimension and further improve the recognition rate for spacecraft electrical characteristics. Finally, a threshold-based data capture contribution method is proposed to guarantee the validity and consistency of the data selection. The experimental results indicate that the proposed method obtains better features of the spacecraft electrical characteristics, improves identification accuracy and effectively shortens the computing time.

16.

Background

Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie on different depths of field necessitating capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depths of field (OEDoF) to address this issue.

Methods

The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast followed by identification of foreground pixels. A composite image is constructed using only these foreground pixels, which dramatically reduces the computational time.

Results

We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. The resulting composite images with all in-focus objects were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required four times less processing time.

Conclusions

This work presents a modification of the extended depth of field approach for efficiently enhancing microscopic images. This selective object processing scheme used in OEDoF can significantly reduce the overall processing time while maintaining the clarity of important image features. The empirical results from parasite-infected red cell images revealed that our proposed method efficiently and effectively produced in-focus composite images. With the speed improvement of OEDoF, this proposed algorithm is suitable for processing large numbers of microscope images, e.g., as required for medical diagnosis.
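The "detail merging" idea in OEDoF, picking, per pixel, the focal plane with the strongest local contrast, can be illustrated with a much-simplified stack merge (absolute-Laplacian sharpness on synthetic slices; the real algorithm restricts this to foreground regions and uses its own contrast criterion):

```python
import numpy as np

def merge_focus(stack):
    """Pick, per pixel, the slice with the strongest local contrast
    (absolute Laplacian) -- a simplified all-in-focus merge."""
    stack = np.asarray(stack, dtype=float)
    sharp = np.zeros_like(stack)
    for i, img in enumerate(stack):
        lap = np.zeros_like(img)                   # 4-neighbour Laplacian
        lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                           img[1:-1, :-2] + img[1:-1, 2:] - 4 * img[1:-1, 1:-1])
        sharp[i] = np.abs(lap)
    best = sharp.argmax(axis=0)                    # per-pixel sharpest slice
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# slice 0 carries detail on the left, slice 1 on the right
a = np.zeros((9, 9)); a[4, 2] = 50.0
b = np.zeros((9, 9)); b[4, 6] = 50.0
merged = merge_focus(np.stack([a, b]))
```

The merged image keeps both details, one from each focal plane.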

17.
A common feature of morphogenesis is the formation of three-dimensional structures from the folding of two-dimensional epithelial sheets, aided by cell shape changes at the cellular level. Changes in cell shape must be studied in the context of cell-polarised biomechanical processes within the epithelial sheet. In epithelia with highly curved surfaces, finding single-cell alignment along a biological axis can be difficult to automate in silico. We present ‘Origami’, a MATLAB-based image analysis pipeline to compute direction-variant cell shape features along the epithelial apico-basal axis. Our automated method accurately computed direction vectors denoting the apico-basal axis in regions with opposing curvature in synthetic epithelia and fluorescence images of zebrafish embryos. As proof of concept, we identified different cell shape signatures in the developing zebrafish inner ear, where the epithelium deforms in opposite orientations to form different structures. Origami is designed to be user-friendly and is generally applicable to fluorescence images of curved epithelia.

18.
Primary crop losses in agriculture are due to leaf diseases that farmers fail to identify early; if diseases are not detected early and correctly, the farmer suffers huge losses. The detection of leaf diseases in tomato crops therefore plays a vital role in agriculture. Recent advances in computer vision and deep learning techniques have made disease prediction easy in agriculture. Front-side tomato leaf images are considered for this research due to their high exposure to diseases. The image segmentation process plays a significant role in identifying disease-affected areas on tomato leaf images. Therefore, this paper develops an efficient tomato crop leaf disease segmentation model using an enhanced radial basis function neural network (ERBFNN), enhanced with the modified sunflower optimization (MSFO) algorithm. Initially, noise in the images is removed by a Gaussian filter, followed by CLAHE (contrast-limited adaptive histogram equalization) based contrast enhancement and unsharp masking. Then, color features are extracted from each leaf image and passed to the segmentation stage to segment the diseased portion of the input image. The performance of the proposed ERBFNN approach is estimated using metrics such as accuracy, Jaccard coefficient (JC), Dice coefficient (DC), precision, recall, F-measure, sensitivity, specificity and mean intersection over union (MIoU), and is compared with the existing state-of-the-art methods of radial basis function (RBF) networks, fuzzy c-means (FCM) and region growing (RG). The experimental results show that the proposed ERBFNN segmentation model outperformed these methods and previous research work, with an accuracy of 98.92%.
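The unsharp-masking step in the preprocessing chain above adds a scaled high-frequency residual back to the image. A minimal sketch using a 3×3 box blur as a stand-in for the Gaussian blur (kernel and `amount` are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Unsharp masking with a 3x3 box blur: out = img + amount*(img - blur)."""
    img = img.astype(float)
    pad = np.pad(img, 1, mode="edge")
    # 3x3 box blur as the sum of the nine shifted windows
    blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - blur)            # boost high frequencies

img = np.zeros((6, 6)); img[:, 3:] = 100.0        # step edge
sharp = unsharp_mask(img)
```

The edge is exaggerated (overshoot above 100 on the bright side, undershoot below 0 on the dark side) while flat regions are unchanged.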

19.
Plant diseases cause significant food loss, and hence economic loss, around the globe. Automatic plant disease identification is therefore a primary task for applying proper treatments to control the spread of disease. The large variety of plant species and their dissimilar phytopathological symptoms call for supervised machine learning techniques for efficient and reliable disease identification and classification. With the development of deep learning strategies, the convolutional neural network (CNN) has paved the way for the classification of multiple plant diseases by extracting rich features. However, several characteristics of input images captured in real-world environments, viz. complex or indistinguishable backgrounds, the presence of multiple leaves alongside the diseased leaf, and small lesion areas, severely affect the robustness and accuracy of CNN modules. Available strategies usually apply standard CNN architectures to images captured in the laboratory, and the few studies that consider practical in-field leaf images are limited to a very small number of plant species. There is therefore a need for a robust CNN module that can recognize and classify the dissimilar leaf health conditions of non-identical plants from in-field RGB images. To achieve this goal, an attention dense learning (ADL) mechanism is proposed in this article by merging mixed sigmoid attention learning with the basic dense learning process of a deep CNN. The basic dense learning process derives new features at a higher layer considering all lower-layer features, which provides a fast and efficient training process. The attention learning process then amplifies the learning ability of the dense block by discriminating the meaningful lesion portions of the images from the background areas. Rather than adding an extra layer for attention learning, the proposed ADL block uses the output features from higher-layer dense learning as an attention mask for the lower layers. For an effective and fast classification process, five ADL blocks are stacked to build a new CNN architecture, named DADCNN-5, for classification robustness and higher testing accuracy. Initially, the proposed DADCNN-5 module is applied to the publicly available extended PlantVillage dataset to classify 38 different health conditions of 14 plant species from 54,305 images. A classification accuracy of 99.93% shows that the proposed CNN module can be used for successful leaf disease identification. The efficacy of the DADCNN-5 model is then checked through stringent experiments on a new real-world plant leaf database created by the authors, containing 10,851 real-world RGB leaf images of 17 plant species in 44 distinguished health conditions. Experimental outcomes reveal that the proposed DADCNN-5 outperforms existing machine learning and standard CNN architectures, achieving 97.33% accuracy, with sensitivity, specificity and false positive rate values of 96.57%, 99.94% and 0.063%, respectively. The module takes approximately 3235 min for the training process and achieves 99.86% training accuracy. Class activation mapping (CAM) visualizations depict that DADCNN-5 learns distinguishable features from semantically important regions (i.e., lesion regions) on the leaves. The robustness of DADCNN-5 is further established by experiments with augmented and noise-contaminated images from the practical database.
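The sigmoid attention masking described above, gating feature maps so that lesion-like regions pass through while background is suppressed, reduces to an elementwise product with a sigmoid-squashed mask. A toy numpy sketch (real ADL blocks operate on learned CNN feature tensors, not hand-set logits):

```python
import numpy as np

def sigmoid_attention(features, mask_logits):
    """Gate a feature map with a sigmoid attention mask: high-logit (lesion-like)
    regions are kept, low-logit (background) regions are suppressed."""
    gate = 1.0 / (1.0 + np.exp(-mask_logits))     # values in (0, 1)
    return features * gate

feat = np.ones((4, 4))
logits = np.full((4, 4), -10.0)
logits[1:3, 1:3] = 10.0                            # "lesion" at the centre
out = sigmoid_attention(feat, logits)
```

The central features survive almost unchanged while the border is driven toward zero, mimicking how the attention mask focuses learning on lesion regions.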

