Similar articles
20 similar articles found (search time: 171 ms)
1.
Plaque counting on plates is one of the most commonly used methods in both theoretical research and practical applications of microbiology. However, because plaques show little contrast against the plate background and several plaques often merge together, computer recognition produces large errors, so plaque counting is still performed manually. In this work, λ phage was used to infect E. coli host cells to obtain plaques, which were then captured as digital images. Representative regions of each image were extracted and segmented with the watershed algorithm, which splits merged plaques into individual ones; the plaques were then counted with a region-growing-based method. The counts agreed exactly with manual counting, showing that the new method can be used for automatic computer-based plaque counting.
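As a rough illustration of the pipeline described above (not the authors' implementation), touching plaques can be split with a distance-transform-seeded watershed and then counted; the `count_plaques` helper and the synthetic two-disc image are hypothetical:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def count_plaques(binary):
    # distance-transform peaks mark plaque centres even when plaques touch
    dist = ndi.distance_transform_edt(binary)
    coords = peak_local_max(dist, labels=binary.astype(int), min_distance=3)
    markers = np.zeros(binary.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    # watershed on the inverted distance map splits merged plaques apart
    labels = watershed(-dist, markers, mask=binary)
    return int(labels.max())

# two overlapping "plaques": a single connected blob, but two true plaques
yy, xx = np.mgrid[:40, :60]
blob = ((yy - 20)**2 + (xx - 20)**2 < 100) | ((yy - 20)**2 + (xx - 36)**2 < 100)
n = count_plaques(blob)
```

Plain connected-component labelling would report one object here; the watershed split recovers both plaques.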

2.
[Objective] Foreground-background segmentation of butterfly images with complex backgrounds is difficult. This study explores automatic butterfly image segmentation based on deep-learning salient object detection. [Methods] The F3Net salient object detection algorithm was trained on the DUTS-TR dataset to build a foreground-background prediction model, which was then applied to a dataset of butterfly images with complex backgrounds for automatic foreground-background segmentation. On this basis, transfer learning was used: keeping the ResNet backbone fixed, the model parameters were retrained and optimized on butterfly images and their foreground masks using the cross feature module, cascaded feedback decoder, and pixel-aware loss, yielding a better automatic segmentation model. Five other deep-learning saliency detection algorithms were also applied to the segmentation task and their performance compared with F3Net. [Results] All algorithms segmented butterfly foregrounds well; F3Net performed best, with S-measure, E-measure, F-measure, mean absolute error (MAE), precision, recall, and mean IoU of 0.940, 0.945, 0.938, 0.024, 0.929, 0.978, and 0.909, respectively. Transfer learning further improved these values to 0.961, 0.964, 0.963, 0.013, 0.965, 0.967, and 0.938. [Conclusion] The F3Net algorithm combined with transfer learning is the best of the segmentation methods examined. The proposed method can be used for automatic segmentation of insect images taken in field surveys and extends the range of applications of salient object detection.
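Several of the metrics quoted above (precision, recall, MAE, IoU) can be computed directly from binary masks; this is a generic sketch, not the paper's evaluation code:

```python
import numpy as np

def seg_metrics(pred, gt):
    """Precision, recall, MAE and IoU between predicted and ground-truth binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    precision = inter / pred.sum()
    recall = inter / gt.sum()
    mae = np.abs(pred.astype(float) - gt.astype(float)).mean()
    iou = inter / union
    return precision, recall, mae, iou

gt = np.zeros((4, 4)); gt[:2, :] = 1        # 8 true foreground pixels
pred = np.zeros((4, 4)); pred[:2, :2] = 1   # 4 predicted pixels, all correct
precision, recall, mae, iou = seg_metrics(pred, gt)
```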

3.
Stem and leaf segmentation of rice based on hyperspectral imaging and principal component analysis. Cited by: 2 (other citations: 2; self-citations: 0)
In phenotype measurement of individual rice plants, separating stems from leaves is an essential step toward accurate computation of green leaf area and other stem- and leaf-related phenotypic parameters. Traditional manual measurement is time-consuming, labor-intensive, and highly subjective, and segmentation of color images taken with ordinary cameras performs poorly. This study presents a method for automatically distinguishing the stems and leaves of individual potted rice plants using a visible/near-infrared hyperspectral imaging system. The image at each wavelength is first extracted from the raw binary data; principal component analysis is then applied to the images across all wavelengths and the dominant principal-component images are extracted; finally, stems and leaves are separated using digital image processing. Experimental results show that the system and method segment stems and leaves well for rice at the peak tillering stage, providing technical support for subsequent high-throughput, digital, non-destructive, and accurate extraction of rice stem and leaf phenotypic traits and further advancing plant phenomics.
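The PCA step can be sketched as follows: flatten the hyperspectral cube to (pixels × bands), eigendecompose the band covariance matrix, and project onto the leading eigenvectors to obtain principal-component images (a generic sketch; band count and shapes are made up):

```python
import numpy as np

def pc_images(cube, n_components=3):
    """cube: (H, W, B) hyperspectral stack -> (H, W, n) principal-component images."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)                       # centre each band
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(vals)[::-1]            # eigenvalues in decreasing order
    pcs = X @ vecs[:, order[:n_components]]   # project pixels onto leading components
    return pcs.reshape(h, w, n_components)

rng = np.random.default_rng(0)
cube = rng.random((8, 8, 10))                 # toy 10-band image
pcs = pc_images(cube, 2)
```

By construction the first component image captures at least as much variance as the second, which is why thresholding the leading components can separate stem from leaf tissue.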

4.
To segment the optic disc in fundus images for a fundus-image-based computer-aided diagnosis system, a method for optic disc localization and extraction based on the direction of the main retinal vessels is proposed. First, Otsu thresholding is applied to the R channel of the fundus image to obtain optic disc candidate regions. The main retinal vessels are then extracted from the H channel of the HSV color space and their direction is determined. On this basis, the optic disc center is located by finding the point with the highest response to a weighted matched filter in the direction map. Finally, this position is used to select the true optic disc from the candidate regions. Applied to 100 fundus images of varying color and brightness, the method achieved 98% accuracy with an average processing time of 1.3 s per image. The results show that the method is stable and reliable and can segment the optic disc quickly and effectively.
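The first step, Otsu thresholding of the R channel to obtain bright optic-disc candidates, can be sketched with scikit-image (a generic illustration, not the paper's code):

```python
import numpy as np
from skimage.filters import threshold_otsu

def disc_candidates(rgb):
    """Binary mask of bright pixels in the R channel, via Otsu's threshold."""
    r = rgb[..., 0]
    return r > threshold_otsu(r)

fundus = np.zeros((10, 10, 3), dtype=np.uint8)
fundus[2:5, 2:5, 0] = 220       # a bright 3x3 disc-like patch in the red channel
mask = disc_candidates(fundus)
```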

5.
A preliminary study of GIS-assisted classification of TM remote sensing images of coniferous and broadleaf mixed forests in Jilin. Cited by: 4 (other citations: 3; self-citations: 1)
To improve the accuracy of automatic classification of TM remote sensing images of forest areas, TM images of coniferous and broadleaf mixed forests in the Wangqing Forestry Bureau, Jilin Province were studied with the aid of GIS. The relationships between forest vegetation distribution and geographic factors such as DEM and aspect and environmental factors such as soil type were analyzed quantitatively and combined with qualitative analysis of a preliminary classification of the images to build a classification knowledge base, from which an expert system for automatic classification of coniferous and broadleaf mixed forests was constructed. Classification experiments showed that the system markedly reduces the influence of mixed pixels and terrain shadows; classification accuracy was 14.22% higher than that of unsupervised classification, with a Kappa index of 0.7556, sufficient to distinguish forest types. Introducing GIS data into the expert system and building an inference mechanism from prior knowledge can also solve the problem that cloud and cloud-shadow regions of remote sensing images cannot be classified because correct spectral values cannot be received there.
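The Kappa index quoted above measures chance-corrected agreement between the classified map and the reference data; a generic computation from a confusion matrix (the numbers below are hypothetical):

```python
import numpy as np

def cohen_kappa(confusion):
    """Kappa index from a confusion matrix (rows: reference classes, cols: predicted)."""
    c = confusion.astype(float)
    n = c.sum()
    p_observed = np.trace(c) / n                             # overall agreement
    p_expected = (c.sum(axis=0) * c.sum(axis=1)).sum() / n**2  # chance agreement
    return (p_observed - p_expected) / (1 - p_expected)

cm = np.array([[40, 10],
               [5, 45]])      # 85% raw agreement on a 2-class toy example
k = cohen_kappa(cm)
```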

6.
In this paper we propose an automatic retinal segmentation method to assess the projected area of macular edema (ME) on specific retinal layers in optical coherence tomography (OCT) images. Ten retinal layer boundaries are first segmented using a shortest-path-faster algorithm optimized with a weight matrix, which effectively reduces the algorithm's sensitivity to vessel shadows. However, the presence of ME leads to inaccurate segmentation of the edema region. We therefore extract the edema region in each OCT image with an intensity-threshold method and set the values in that region to zero, ensuring that the segmented boundaries pass through rather than around the edema region. A minimum-value projection is used to compute the projected area of ME on the different layers. To test our method, we used data collected from a Topcon OCT machine; the measured macular resolution was 11.7 μm in the axial direction and 46.8 μm in the B-scan direction. Compared with manual segmentation, the mean absolute error and standard deviation of the retinal layer boundary segmentation were 4.5 ± 3.2 μm. The proposed method thus provides an automatic, non-invasive, and quantitative tool for assessing edema.
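The layer-boundary search can be illustrated with a simple dynamic-programming shortest path through a cost image, moving at most one row per column; this is a stand-in sketch, not the weight-matrix shortest-path-faster algorithm described above:

```python
import numpy as np

def dp_boundary(cost):
    """Minimal-cost left-to-right path through `cost`, moving at most 1 row per column."""
    h, w = cost.shape
    acc = cost.astype(float).copy()          # accumulated cost to reach each pixel
    back = np.zeros((h, w), dtype=int)       # backpointers for path recovery
    for j in range(1, w):
        for i in range(h):
            lo, hi = max(0, i - 1), min(h, i + 2)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    path = [int(np.argmin(acc[:, -1]))]      # cheapest endpoint in the last column
    for j in range(w - 1, 0, -1):
        path.append(back[path[-1], j])
    return path[::-1]

cost = np.ones((5, 6))
cost[2, :] = 0.0                # a dark layer boundary along row 2
boundary = dp_boundary(cost)
```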

7.
Image segmentation based on the Snake model has been a research focus in image processing in recent years. The Snake model carries high-level prior knowledge while incorporating low-level image features, so it can be applied effectively to the segmentation of medical images despite their particular difficulties. This paper reviews the improved algorithms and evolved models based on the Snake model, with emphasis on the latest results, to clarify the lineage and future directions of Snake-based medical image segmentation.

8.
[Objective] Many insect pests attack the oil tea tree; larvae of the oil tea tussock moth Euproctis pseudoconspersa are among the most damaging. Automatic detection of these larvae requires segmenting their images, and the segmentation quality directly affects automatic recognition. [Methods] This paper proposes a segmentation algorithm for E. pseudoconspersa larva images based on the maximum difference between neighboring pixels and region merging. The method computes the differences of the three RGB components between adjacent pixels; if the maximum difference is 0, the adjacent pixels are merged to produce an initial segmentation, which is then merged further according to a merging criterion to obtain the final result. [Results] Experiments show that the algorithm quickly and effectively separates the larvae from the background. [Conclusion] Comparing the JSEG algorithm, K-means clustering, the fast geometric deformable model, and the proposed algorithm on larva images, the proposed method gives the best segmentation with a relatively short processing time.
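The initial merging rule described above — join adjacent pixels whose maximum difference over the three RGB components is 0 — amounts to connected-component labelling of constant-colour regions; a generic sketch (`initial_merge` is a hypothetical name, and the subsequent criterion-based merging is omitted):

```python
import numpy as np
from scipy import ndimage as ndi

def initial_merge(rgb):
    """Merge 4-neighbours whose max |dR|,|dG|,|dB| is zero, i.e. label
    connected regions of identical colour."""
    h, w, _ = rgb.shape
    _, ids = np.unique(rgb.reshape(-1, 3), axis=0, return_inverse=True)
    ids = ids.reshape(h, w)                  # per-pixel colour id
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for c in np.unique(ids):
        lab, n = ndi.label(ids == c)         # components of one colour at a time
        labels[lab > 0] = lab[lab > 0] + next_label
        next_label += n
    return labels

img = np.zeros((6, 6, 3), dtype=np.uint8)
img[2:4, 1:5] = (30, 180, 30)                # a uniform "larva" blob on a uniform background
segments = initial_merge(img)
```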

9.
Insect image segmentation methods and their applications. Cited by: 1 (other citations: 1; self-citations: 0)
WANG Jiang-Ning, JI Li-Qiang. Acta Entomologica Sinica, 2011, 54(2): 211-217
Automatic identification from images is a rapid way to identify insects, and image segmentation is a key step in it. A survey of recent domestic and international work shows that research on insect image segmentation is steadily increasing. With advances in computer imaging, insect image segmentation has absorbed many emerging techniques from the broader segmentation literature, such as level sets, edge flow, and intelligent segmentation combining shape, texture, and color (e.g., the JSEG method). Although many segmentation methods have been brought into insect image research, segmentation remains the key obstacle to the wide application of insect images. Our analysis finds that current methods often perform well on their own test sets but lack a common evaluation standard, so many of them are hard to generalize. To address these problems, a sound evaluation system for insect image segmentation is needed; we suggest building a unified insect image library and studying evaluation methods for insect image segmentation in depth, tasks that current research urgently needs to complete.

10.
An algorithm based on a locally adjusted active contour model is proposed to extract breast tumor boundaries in ultrasound images. Building on the Chan-Vese (CV) model, a local adjustment term is defined, and a level-set-based active contour model is used to extract the tumor boundary. Applied to boundary extraction in 89 clinical breast ultrasound images, the algorithm proved better suited than the CV model to segmenting ultrasound images with regional inhomogeneity and effectively extracted breast tumor boundaries.
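The CV model that this work builds on is available in scikit-image in a morphological form; a minimal sketch of level-set segmentation of a bright region (illustrative only — the locally adjusted term of the paper is not implemented here):

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0                 # bright "tumour" region on a dark background

# evolve a level set for 30 iterations from the default checkerboard init
ls = morphological_chan_vese(img, 30)
square = img > 0.5                      # ground-truth region for comparison
```

The returned level set partitions the image into two labels; which label covers the bright region depends on the initialization polarity.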

11.
Purpose: To develop an automatic multimodal method for segmentation of parotid glands (PGs) from pre-registered computed tomography (CT) and magnetic resonance (MR) images and compare its results to the results of an existing state-of-the-art algorithm that segments PGs from CT images only. Methods: Magnetic resonance images of head and neck were registered to the accompanying CT images using two different state-of-the-art registration procedures. The reference domains of registered image pairs were divided on the complementary PG regions and backgrounds according to the manual delineation of PGs on CT images, provided by a physician. Patches of intensity values from both image modalities, centered around randomly sampled voxels from the reference domain, served as positive or negative samples in the training of the convolutional neural network (CNN) classifier. The trained CNN accepted a previously unseen (registered) image pair and classified its voxels according to the resemblance of its patches to the patches used for training. The final segmentation was refined using a graph-cut algorithm, followed by the dilate-erode operations. Results: Using the same image dataset, segmentation of PGs was performed using the proposed multimodal algorithm and an existing monomodal algorithm, which segments PGs from CT images only. The mean value of the achieved Dice overlapping coefficient for the proposed algorithm was 78.8%, while the corresponding mean value for the monomodal algorithm was 76.5%. Conclusions: Automatic PG segmentation on the planning CT image can be augmented with the MR image modality, leading to an improved RT planning of head and neck cancer.
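The patch-sampling step for CNN training can be sketched as follows (a generic illustration; shapes, patch size, and the two-channel CT+MR stacking are assumptions, not the paper's code):

```python
import numpy as np

def sample_patches(ct, mr, pg_mask, n, size, rng):
    """Random multimodal patches centred on voxels of the reference domain;
    the label is whether the centre voxel lies inside the parotid-gland mask."""
    half = size // 2
    patches, labels = [], []
    h, w = ct.shape
    for _ in range(n):
        i = int(rng.integers(half, h - half))
        j = int(rng.integers(half, w - half))
        win = np.s_[i - half:i + half + 1, j - half:j + half + 1]
        patches.append(np.stack([ct[win], mr[win]]))   # 2-channel CT+MR patch
        labels.append(int(pg_mask[i, j]))
    return np.array(patches), np.array(labels)

rng = np.random.default_rng(42)
ct = rng.random((32, 32)); mr = rng.random((32, 32))  # toy registered slice pair
pg = np.zeros((32, 32), dtype=bool); pg[10:20, 10:20] = True
X, y = sample_patches(ct, mr, pg, n=16, size=9, rng=rng)
```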

12.
Background: In sparse-view CT imaging, strong streak artifacts may appear around bony structures, and they often compromise image readability. Compressed sensing (CS) or total variation (TV) minimization-based image reconstruction has reduced the streak artifacts to a great extent, but sparse-view CT imaging still suffers from residual streak artifacts. We introduce a new bone-induced streak artifact reduction method for CS-based image reconstruction. Methods: We first identify the high-intensity bony regions from the image reconstructed by the filtered backprojection (FBP) method, and we calculate the sinogram stemming from the bony regions only. Then we subtract the calculated sinogram, which stands for the bony regions, from the measured sinogram before performing the CS-based image reconstruction. The image reconstructed from the subtracted sinogram represents the soft tissues with few streak artifacts. To restore the original image intensity in the bony regions, we add the bony-region image, identified from the FBP image, to the soft-tissue image to form a combined image. Then we perform the CS-based image reconstruction again on the measured sinogram, using the combined image as the initial condition of the iteration. For experimental validation of the proposed method, we imaged a contrast phantom and a rat using a micro-CT and evaluated the reconstructed images with two figures of merit: relative mean square error and total variation caused by the streak artifacts. Results: The images reconstructed by the proposed method show smaller streak artifacts than those reconstructed by the original CS-based method when visually inspected. Quantitative image evaluation also shows that the proposed method outperforms the conventional CS-based method.
Conclusions: The proposed method can effectively suppress streak artifacts stemming from bony structures in sparse-view CT imaging.
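The bone-subtraction idea can be illustrated with scikit-image's radon/iradon transforms (FBP stands in for the CS reconstruction here; the phantom and geometry are made up):

```python
import numpy as np
from skimage.transform import radon, iradon

yy, xx = np.mgrid[:64, :64]
phantom = np.zeros((64, 64))
phantom[(yy - 32)**2 + (xx - 32)**2 < 36] = 2.0     # dense "bone" disc
phantom[(yy - 32)**2 + (xx - 18)**2 < 16] = 0.3     # soft-tissue disc

theta = np.linspace(0.0, 180.0, 30, endpoint=False)  # sparse views
sino = radon(phantom, theta=theta)                   # "measured" sinogram

bone = np.where(phantom > 1.0, phantom, 0.0)   # bony regions found on an FBP image
soft_sino = sino - radon(bone, theta=theta)    # subtract the bone projections
soft = iradon(soft_sino, theta=theta)          # soft tissue, bone streaks removed
combined = soft + bone                         # restore original bone intensity
```

Because the Radon transform is linear, subtracting the bone sinogram leaves exactly the soft-tissue projections, so the sparse-view streaks that remain are driven only by the weak soft-tissue contrast.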

13.
Head movement during a dynamic brain PET/CT imaging results in mismatch between CT and dynamic PET images. It can cause artifacts in CT-based attenuation corrected PET images, thus affecting both the qualitative and quantitative aspects of the dynamic PET images and the derived parametric images. In this study, we developed an automated retrospective image-based movement correction (MC) procedure. The MC method first registered the CT image to each dynamic PET frame, then re-reconstructed the PET frames with CT-based attenuation correction, and finally re-aligned all the PET frames to the same position. We evaluated the MC method's performance on the Hoffman phantom and dynamic FDDNP and FDG PET/CT images of patients with neurodegenerative disease or with poor compliance. Dynamic FDDNP PET/CT images (65 min) were obtained from 12 patients and dynamic FDG PET/CT images (60 min) were obtained from 6 patients. Logan analysis with cerebellum as the reference region was used to generate regional distribution volume ratio (DVR) for FDDNP scans before and after MC. For FDG studies, the image-derived input function was used to generate parametric images of the FDG uptake constant (Ki) before and after MC. The phantom study showed high accuracy of registration between PET and CT and improved PET images after MC. In the patient study, head movement was observed in all subjects, especially in late PET frames, with an average displacement of 6.92 mm. The z-direction translation (average maximum = 5.32 mm) and x-axis rotation (average maximum = 5.19 degrees) occurred most frequently. Image artifacts were significantly diminished after MC. There were significant differences (P<0.05) in the FDDNP DVR and FDG Ki values in the parietal and temporal regions after MC. In conclusion, MC applied to dynamic brain FDDNP and FDG PET/CT scans could improve the qualitative and quantitative aspects of images of both tracers.

14.
The Brown-Roberts-Wells (BRW) computer tomography (CT) stereotactic guidance system has been modified to accommodate magnetic resonance imaging (MRI). A smaller head ring, which fits in standard MRI head coils, is constructed of a non-ferromagnetic aluminum ring that is split to prevent eddy currents and anodized to prevent MRI image distortion and resolution degradation. A new localizing device has been designed in a box configuration, which allows BRW stereotactic coordinates to be calculated from coronal and sagittal MRI images, in addition to axial images. The system was tested utilizing a phantom and T1- and T2-weighted images. Using 5-mm MRI scan slices, targets were localized accurately to a 5-mm cube in three combined planes. Optimized calibration of both low field strength (0.3 T) and high field strength (1.5 T) MRI systems is necessary to obtain thin slice (5 mm) images with acceptable image resolution. To date, 10 patients have had MRI stereotactic localization of brain lesions that were better defined by MRI than CT.

15.
Purpose: Anti-scatter grids substantially suppress scatter, improving image contrast in radiography. However, their active use in cone-beam CT for the purpose of improving the contrast-to-noise ratio (CNR) has not been successful, mainly due to the increased noise related to the Poisson statistics of photons. This paper proposes a sparse-view scanning approach to address this issue. Methods: Compared to the conventional cone-beam CT imaging framework, the proposed method reduces the number of projections and increases the exposure of each projection to enhance image quality without an additional radiation dose to patients. For image reconstruction from sparse-view data, an adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) algorithm regularized by total-variation (TV) minimization was adopted. Contrast and CNR under various scattering conditions were evaluated in the projection domain in a simulation study using GATE. We then evaluated contrast, resolution, and image uniformity in the CT image domain with a Catphan phantom. A head phantom with soft-tissue structures was also employed to demonstrate a realistic application. A virtual-grid-based estimation and reduction of scatter was also implemented for comparison with the real anti-scatter grid. Results: In the projection-domain evaluation, contrast and CNR were higher with the anti-scatter grid than with the virtual grid. In the CT image domain, the proposed method produced substantially higher contrast and CNR for the low-contrast structures, with much improved image uniformity. Conclusions: We have shown that the proposed method can provide high-quality CBCT images, particularly with increased soft-tissue contrast, at a neutral dose for image guidance.

16.
Purpose: To study the feasibility of using an iterative reconstruction algorithm to improve previously reconstructed CT images which are judged to be non-diagnostic on clinical review. A novel, rapidly converging iterative algorithm (RSEMD) has been developed to reduce noise relative to the standard filtered back-projection (FBP) algorithm. Materials and methods: The RSEMD method was tested on in-silico, Catphan®500, and anthropomorphic 4D XCAT phantoms. The method was applied to noisy CT images previously reconstructed with FBP to determine improvements in SNR and CNR. To test the potential improvement in clinically relevant CT images, 4D XCAT phantom images were used to simulate a small, low-contrast lesion placed in the liver. Results: In all of the phantom studies the images proved to have higher resolution and lower noise than images reconstructed by conventional FBP. In general, the values of SNR and CNR reached a plateau at around 20 iterations, with an improvement factor of about 1.5 for noisy CT images. Improvements in lesion conspicuity after the application of RSEMD have also been demonstrated. The results obtained with the RSEMD method are in agreement with other iterative algorithms employed either in image space or in hybrid reconstruction algorithms. Conclusions: In this proof-of-concept work, a rapidly converging iterative deconvolution algorithm with a novel resolution-subsets-based approach that operates on DICOM CT images has been demonstrated. The RSEMD method can be applied to sub-optimal routine-dose clinical CT images to improve image quality to potentially diagnostically acceptable levels.
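The SNR and CNR figures of merit used above can be computed from region statistics; a generic sketch showing that noise reduction raises both (the masks, contrast, and noise levels below are made up):

```python
import numpy as np

def snr_cnr(img, roi, bg):
    """Mean/std-based SNR of a region of interest and CNR against a background region."""
    r, b = img[roi], img[bg]
    snr = r.mean() / r.std()
    cnr = abs(r.mean() - b.mean()) / np.sqrt((r.var() + b.var()) / 2)
    return snr, cnr

rng = np.random.default_rng(7)
base = rng.standard_normal((32, 32))
truth = np.ones((32, 32)); truth[8:16, 8:16] = 2.0   # low-contrast "lesion"
roi = truth == 2.0
bg = truth == 1.0
noisy = truth + 0.5 * base        # FBP-like noisy image
denoised = truth + 0.1 * base     # after iterative noise reduction

snr_n, cnr_n = snr_cnr(noisy, roi, bg)
snr_d, cnr_d = snr_cnr(denoised, roi, bg)
```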

17.
Using magnetic resonance imaging (MRI) as the sole imaging modality for patient modeling in radiation therapy (RT) is a challenging task due to the need to derive electron density information from MRI and construct a so-called pseudo-computed tomography (pCT) image. We have previously published a new method to derive pCT images from head T1-weighted (T1-w) MR images using a single-atlas propagation scheme followed by a post hoc correction of the mapped CT numbers using local intensity information. The purpose of this study was to investigate the performance of our method with head zero echo time (ZTE) MR images. To evaluate results, the mean absolute error in bins of 20 HU was calculated with respect to the true planning CT scan of the patient. We demonstrated that applying our method using ZTE MR images instead of T1-w improved the correctness of the pCT in case of bone resection surgery prior to RT (that is, an example of large anatomical difference between the atlas and the patient).

18.
In recent years one of the areas of interest in radiotherapy has been adaptive radiation therapy (ART), and the most efficient way of performing ART is deformable image registration (DIR). In this paper we use the distances between points of interest (POIs) in the computed tomography (CT) and cone-beam computed tomography (CBCT) acquisition images, together with the inverse consistency (IC) property, to validate the RayStation treatment planning system (TPS) DIR algorithm. This study was divided into two parts. Firstly, the distance accuracy of the TPS DIR algorithm was ascertained by placing POIs on anatomical features in the CT and CBCT images of five head and neck cancer patients. Secondly, a method was developed for studying the dosimetric implication of these distances by using the IC. This method compared the dose received by the structures in the CT and by the quadruply-deformed structures. The accuracy of the TPS was 1.7 ± 0.8 mm, and the distance obtained with the quadruply-deformed IC method was 1.7 ± 0.9 mm, i.e. the difference between the IC method multiplied by two and the TPS validation method was negligible. Moreover, the IC method shows very little variation in the dose-volume histograms when comparing the original and quadruply-deformed structures. This indicates that the algorithm is useful for planning adaptive radiation treatments using CBCT in head and neck cancer patients, although these variations must be taken into account when making a clinical decision to adapt a treatment plan.
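Inverse consistency can be checked by mapping points forward and back and measuring the residual displacement; a toy sketch with a pure translation (the real validation uses the TPS's deformable vector fields, not closed-form maps):

```python
import numpy as np

def ic_error(forward, inverse, pts):
    """Per-point inverse-consistency error: ||inverse(forward(p)) - p||."""
    return np.linalg.norm(inverse(forward(pts)) - pts, axis=1)

t = np.array([1.7, -0.8])                 # toy deformation: rigid translation (mm)
fwd = lambda p: p + t
inv_exact = lambda p: p - t               # perfect inverse -> zero IC error
inv_bad = lambda p: p - 0.9 * t           # a 10% under-correcting inverse

pts = np.array([[10.0, 20.0], [30.0, 5.0], [0.0, 0.0]])
err_exact = ic_error(fwd, inv_exact, pts)
err_bad = ic_error(fwd, inv_bad, pts)
```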

19.
Early lung tumors usually appear as nodules on CT scans, and statistical studies indicate that 30% to 40% of such nodules are malignant. Early detection and classification of lung nodules are therefore crucial to the treatment of lung cancer. With the increasing prevalence of lung cancer, the large volume of CT images awaiting diagnosis places a heavy burden on doctors, who may miss or falsely detect abnormalities due to fatigue. Methods: In this study, we propose a novel lung nodule detection method based on the YOLOv3 deep learning algorithm that needs only one preprocessing step. To overcome the shortage of training data when starting a new computer-aided diagnosis (CAD) study, we first pick a small number of diseased regions to simulate training on a limited dataset: 5 nodule patterns are selected and deformed into 110 nodules by random geometric transformations before being fused into 10 normal lung CT images using Poisson image editing. According to the experimental results, the Poisson fusion method achieves a detection rate of about 65.24% on 100 new test images. Second, 419 slices from the public RIDER database are used to train and test our YOLOv3 network. YOLOv3 shortens lung nodule detection time by a factor of 2-3 compared with the mainstream algorithm, with a detection accuracy of 95.17%. Finally, the configuration of YOLOv3 is optimized on the learning datasets. The results show that YOLOv3 detects lung nodules quickly and accurately and can process a large amount of CT image data within a short time to meet the huge demand of clinical practice. In addition, using Poisson image editing to generate datasets reduces the need for raw training data and improves training efficiency.
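The data-expansion step — deforming a few nodule patterns into many by random geometric transforms — can be sketched with simple rotations and flips (the Poisson fusion of the variants into host CT images is omitted, and the names below are hypothetical):

```python
import numpy as np

def random_transform(patch, rng):
    """One random geometric variant of a nodule patch: rot90 x k plus optional flip."""
    out = np.rot90(patch, k=int(rng.integers(4)))
    if rng.integers(2):
        out = np.fliplr(out)
    return out

rng = np.random.default_rng(3)
patterns = [np.arange(9).reshape(3, 3) + i for i in range(5)]   # 5 base "nodules"
# expand 5 patterns into 110 nodules, 22 random variants each
nodules = [random_transform(p, rng) for p in patterns for _ in range(22)]
```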

20.

Background

Despite its superb lateral resolution, flat-panel-detector (FPD) based tomosynthesis suffers from low contrast and inter-plane artifacts caused by incomplete cancellation of the projection components stemming from outside the focal plane. The incomplete cancellation of the projection components, mostly due to the limited scan angle in the conventional tomosynthesis scan geometry, often makes the image contrast too low to differentiate the malignant tissues from the background tissues with confidence.

Methods

In this paper, we propose a new method to suppress the inter-plane artifacts in FPD-based tomosynthesis. If 3D whole volume CT images are available before the tomosynthesis scan, the CT image data can be incorporated into the tomosynthesis image reconstruction to suppress the inter-plane artifacts, hence, improving the image contrast. In the proposed technique, the projection components stemming from outside the region-of-interest (ROI) are subtracted from the measured tomosynthesis projection data to suppress the inter-plane artifacts. The projection components stemming from outside the ROI are calculated from the 3D whole volume CT images which usually have lower lateral resolution than the tomosynthesis images. The tomosynthesis images are reconstructed from the subtracted projection data which account for the x-ray attenuation through the ROI. After verifying the proposed method by simulation, we have performed both CT scan and tomosynthesis scan on a phantom and a sacrificed rat using a FPD-based micro-CT.

Results

We measured the contrast-to-noise ratio (CNR) of the tomosynthesis images, an indicator of the residual inter-plane artifacts on the focal-plane image. In both the simulation and the experimental imaging studies of the contrast-evaluating phantom, CNRs were significantly improved by the proposed method. In the rat imaging as well, we observed better visual contrast in the tomosynthesis images reconstructed by the proposed method.

Conclusions

The proposed tomosynthesis technique can improve image contrast with the aid of 3D whole volume CT images. Even though local tomosynthesis needs an extra 3D CT scan, it may find clinical applications in special situations in which an extra 3D CT scan is already available or allowed.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号