Similar Literature
20 similar documents found (search time: 125 ms)
1.
This paper proposes a new method that combines maximum entropy with an improved PCNN (Pulse Coupled Neural Network), using maximum entropy to determine the number of iterations of the PCNN. The proposed method requires no manual selection of PCNN parameters, can automatically and effectively segment a variety of medical images, and uses maximum entropy to obtain the optimal segmentation result. The method is of significance for applying PCNN theory to medical image segmentation.
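Since the listing contains no code, a small Python sketch may clarify the maximum-entropy criterion. This is a stand-in, not the paper's PCNN: the list of thresholds below stands in for the network's firing threshold at successive iterations (the coupled neuron dynamics are omitted), and the rule simply keeps the iteration whose binary output has maximal Shannon entropy. All names and data are illustrative.

```python
import math

def shannon_entropy(binary):
    """Entropy of a binary segmentation: H = -p1*log(p1) - p0*log(p0)."""
    n = len(binary)
    p1 = sum(binary) / n
    p0 = 1.0 - p1
    h = 0.0
    for p in (p0, p1):
        if p > 0:
            h -= p * math.log(p)
    return h

def best_iteration_by_entropy(image, thresholds):
    """Pick the iteration (threshold) whose binary output maximises entropy.

    `thresholds` stands in for the PCNN firing threshold at each iteration;
    the real network's coupled dynamics are omitted in this sketch.
    """
    best_t, best_h = None, -1.0
    for t in thresholds:
        binary = [1 if px >= t else 0 for px in image]
        h = shannon_entropy(binary)
        if h > best_h:
            best_t, best_h = t, h
    return best_t, best_h

pixels = [10, 12, 11, 200, 210, 205, 90, 95]
t, h = best_iteration_by_entropy(pixels, thresholds=[50, 100, 150, 250])
```

The threshold producing the most balanced (highest-entropy) foreground/background split is selected automatically, which is the role maximum entropy plays in stopping the PCNN iterations.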

2.
In this paper, we propose an automatic retinal segmentation method to evaluate the projected area of macular edema (ME) on specific retinal layers in optical coherence tomography (OCT) images. Ten retinal layer boundaries are first segmented with an optimized weight-matrix-based shortest-path algorithm, which effectively reduces the algorithm's sensitivity to vessel shadows. The presence of ME, however, causes inaccurate segmentation of edema regions. We therefore extract the edema region in each OCT image with an intensity-threshold method and set the values within that region to zero, ensuring that the obtained boundaries pass through rather than detour around the edema region. Minimum-value projection is used to compute the projected area of ME at different layers. To test our method, we used data collected from a Topcon OCT machine; the measured resolution of the macular region was 11.7 μm axially and 46.8 μm in the B-scan direction. Compared with manual segmentation, the mean absolute error and standard deviation of retinal layer boundary segmentation were 4.5 ± 3.2 μm. The proposed method thus provides an automatic, non-invasive, and quantitative tool for evaluating edema.
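A minimal sketch of the boundary-tracing idea: treat a B-scan as a cost map (low cost where the boundary is likely) and find the minimal-cost left-to-right path. This toy dynamic program is only a stand-in for the paper's weight-matrix shortest-path method; the cost map and connectivity rule are assumptions for illustration.

```python
def trace_boundary(cost):
    """Minimal-cost path from the left to the right edge of a cost map.

    Each step moves one column right and at most one row up or down --
    a simplified stand-in for the shortest-path delineation of retinal
    layer boundaries in OCT B-scans.
    """
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    dp = [[INF] * cols for _ in range(rows)]
    back = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        dp[r][0] = cost[r][0]
    for c in range(1, cols):
        for r in range(rows):
            for dr in (-1, 0, 1):
                pr = r + dr
                if 0 <= pr < rows and dp[pr][c - 1] + cost[r][c] < dp[r][c]:
                    dp[r][c] = dp[pr][c - 1] + cost[r][c]
                    back[r][c] = pr
    end = min(range(rows), key=lambda r: dp[r][cols - 1])
    path = [end]
    for c in range(cols - 1, 0, -1):
        path.append(back[path[-1]][c])
    return path[::-1]

# A bright (low-cost) boundary running mostly along row 1:
cost_map = [
    [9, 9, 9, 9],
    [1, 1, 2, 1],
    [9, 9, 1, 9],
]
boundary = trace_boundary(cost_map)
```

The traced row index per column dips where the cost is lowest, which is how a weight matrix steers the boundary through, rather than around, low-contrast regions.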

3.
Efficiency evaluation of the implementation of the Taihu Lake water pollution control plan   Cited by: 2 (self: 2, others: 0)
贺桂珍, 吕永龙, 陈英旭. 《生态学报》(Acta Ecologica Sinica), 2008, 28(12): 6348-6354
Data envelopment analysis (DEA) is an effective method for evaluating the performance of non-profit organizations. This study applies DEA, from a performance-audit perspective, to assess the relative efficiency of the environmental protection bureaus (EPBs) of the six cities bordering Taihu Lake in implementing the Tenth Five-Year Plan for Taihu water pollution control over 2001-2005. The results show that 40% of the decision-making units were relatively efficient in terms of technical efficiency and 36.7% in terms of scale efficiency; overall, the bureaus' efficiency is unsatisfactory. Analysis of external factors indicates that population and economic development level have no significant influence on the bureaus' overall technical efficiency, whereas the number of enterprises shows a significant association with it. On this basis, recommendations for improving EPB efficiency are proposed.
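For intuition about what a DEA efficiency score means, here is the one-input/one-output special case, where DEA reduces to comparing output-to-input ratios against the best performer. The full CCR/BCC models used in this and the hospital study below solve linear programs over multiple inputs and outputs; the data here are hypothetical.

```python
def ratio_efficiency(inputs, outputs):
    """Single-input, single-output relative efficiency.

    Each decision-making unit's output/input ratio is scaled by the best
    observed ratio, so the frontier unit scores 1.0. This is only the
    one-dimensional special case of DEA, not the CCR/BCC linear programs
    the studies actually use.
    """
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical DMUs: staff count as the input, cases handled as the output.
eff = ratio_efficiency([10, 20, 40], [30, 80, 100])
```

A score of 1.0 marks a relatively efficient unit; lower scores show how far a unit lies from the efficiency frontier.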

4.
An algorithm is proposed for extracting breast tumor boundaries from ultrasound images using a locally adjusted active contour model. Building on the Chan-Vese (CV) model, a local adjustment term is defined, and a level-set-based active contour model is used to extract the tumor boundary. Applied to boundary extraction in 89 clinical breast ultrasound images, the algorithm proved better suited than the CV model to segmenting ultrasound images with regional inhomogeneity, and it effectively extracts breast tumor boundaries.

5.
Comparison of object-based classification of forest-area remote sensing images using different decision trees   Cited by: 1 (self: 0, others: 1)
陈丽萍, 孙玉军. 《生态学杂志》(Chinese Journal of Ecology), 2018, 29(12): 3995-4003
Geographic object-based image analysis (GEOBIA) has emerged as image resolution keeps increasing, and improving the accuracy and efficiency of high-resolution image classification is a key issue in image processing. In this study, objects produced by multi-scale segmentation of a QuickBird image were classified; the efficiency of the C5.0, C4.5, and CART decision-tree algorithms for object-based classification of a forest area was analyzed, and their accuracy was compared with that of the kNN algorithm. The image was segmented at multiple scales with eCognition, and the optimal scales were found to be 90 and 40. After separating vegetation from non-vegetation at scale 90, a total of 21 features (spectral, texture, and shape) of the vegetation classes were extracted at scale 40, and classification rules were built automatically by knowledge mining with C5.0, C4.5, and CART. The rules were then used to classify the vegetation area and the accuracies were compared. The results show that all decision-tree classifiers outperformed the traditional kNN method. C5.0 achieved the highest accuracy, with an overall accuracy of 90.0% and a Kappa coefficient of 0.87. Decision-tree algorithms can effectively improve tree-species classification accuracy in forest areas, with the Boosting algorithm of the C5.0 decision tree giving the clearest improvement.
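The core operation of a CART-style decision tree is picking the split that minimises weighted Gini impurity. The one-feature sketch below illustrates that criterion; C4.5/C5.0 use information-gain ratio instead, and in the study the candidate features are the 21 object features exported from eCognition. Feature values and class names here are invented.

```python
def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(values, labels):
    """Exhaustively choose the threshold minimising weighted Gini impurity.

    A one-feature sketch of the CART split criterion; real trees repeat
    this over every feature and recurse on the two children.
    """
    n = len(values)
    best_t, best_g = None, float("inf")
    for t in sorted(set(values))[:-1]:
        left = [labels[i] for i in range(n) if values[i] <= t]
        right = [labels[i] for i in range(n) if values[i] > t]
        g = (len(left) * gini(left) + len(right) * gini(right)) / n
        if g < best_g:
            best_t, best_g = t, g
    return best_t, best_g

# An NDVI-like object feature separating two hypothetical vegetation classes:
feature = [0.2, 0.3, 0.25, 0.7, 0.8, 0.75]
classes = ["shrub", "shrub", "shrub", "tree", "tree", "tree"]
split = best_split(feature, classes)
```

A split achieving zero weighted impurity separates the classes perfectly; rule induction stops refining a node once impurity cannot be reduced further.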

6.
A preliminary report on automatic counting of greenhouse whitefly   Cited by: 11 (self: 0, others: 11)
Computer vision techniques were applied to the automatic counting of greenhouse whitefly. Digital images of whitefly-infested leaves were captured in the field with a film camera and a home video camera. The whitefly images were segmented with Johannsen's entropy-based algorithm, and the number of individuals was obtained from the resulting binary images with a region-labelling algorithm; individuals touching one another on a leaf were separated using mathematical morphology. Statistics over 19 infested-leaf samples show a cumulative counting accuracy of 91.99% when counting directly from the segmented images, while the separation algorithm still needs improvement. The technique therefore has potential for wider use in ecological research and IPM practice: it would greatly reduce the workload of monitoring and surveying populations of small field insects while markedly improving efficiency.
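The counting step after segmentation is connected-component labelling. The sketch below counts 4-connected foreground regions in a tiny binary image; individuals touching each other (which the paper separates with mathematical morphology) would be counted as one blob here. The leaf mask is invented.

```python
from collections import deque

def count_blobs(binary):
    """Count 4-connected foreground regions in a binary image.

    A minimal region-labelling pass of the kind used to count whitefly
    individuals after entropy-based segmentation.
    """
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                blobs += 1                      # new region found
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:                    # flood-fill the region
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return blobs

leaf = [
    [0, 1, 1, 0, 0],
    [0, 1, 0, 0, 1],
    [0, 0, 0, 1, 1],
    [1, 0, 0, 0, 0],
]
count = count_blobs(leaf)
```

Each flood fill marks one insect-sized region, so the final count equals the number of separated individuals in the binary image.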

7.
Objective: To evaluate the operating efficiency of the pilot county-level public hospitals under comprehensive reform in Hubei Province and to offer policy suggestions for further reform. Methods: Data envelopment analysis with the CCR and BCC models was used to measure the 2014 operating efficiency of 73 county-level public hospitals in Hubei Province. Results: 15 hospitals (20.5%) were overall efficient, with a mean overall efficiency score of 0.814; 33 hospitals (45.2%) were technically efficient and 15 (20.5%) were scale efficient; 4 hospitals (5.5%) exhibited increasing returns to scale and 54 (74.0%) decreasing returns to scale. Conclusion: The overall efficiency of county-level public hospitals in Hubei Province needs improvement; in particular, hospital scale should be further controlled so as to raise overall operating efficiency.

8.
Objective: To combine MR brain tumor image segmentation with the method of moments in order to obtain the contours of specific organs and tissues. Methods: MR brain tumor images were segmented and the segmentation results were described with moments. After reviewing commonly used medical image segmentation methods, a deformable-model-based segmentation method was adopted and applied to the medical images following the corresponding theoretical model and implementation steps; finally, segmentation experiments on MR brain tumor images were programmed in Visual C++ 6.0. Results: The segmented contours show clear boundaries and low overall uncertainty, and the image features extracted with moment techniques are effective for content-based image retrieval. Conclusion: The segmentation method is practical and performs well, providing a useful tool for further analysis and study of MR brain tumor images.
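The moment description used after segmentation starts from raw image moments. The sketch below computes m_pq and the centroid of a binary region; real moment descriptors go on to central and invariant moments, and the mask here is a toy shape, not MR data.

```python
def raw_moment(img, p, q):
    """Raw image moment m_pq = sum over pixels of x^p * y^q * I(x, y)."""
    return sum((x ** p) * (y ** q) * img[y][x]
               for y in range(len(img))
               for x in range(len(img[0])))

def centroid(img):
    """Centre of mass (x_bar, y_bar) = (m10/m00, m01/m00) of a binary region."""
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

# A 2x2 square of foreground pixels occupying columns 1-2, rows 0-1:
mask = [
    [0, 1, 1],
    [0, 1, 1],
    [0, 0, 0],
]
cx, cy = centroid(mask)
```

Central moments are then taken about this centroid, which makes the resulting shape descriptors translation-invariant; that is what makes them usable for content-based retrieval of segmented tumors.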

9.
The retina is a layered structure, and changes in retinal layer thickness allow clinical prediction and diagnosis of several diseases. To segment the different retinal layers quickly and accurately, this paper proposes a random-forest layer-segmentation algorithm for retinal optical coherence tomography (OCT) images based on principal component analysis (PCA). The method uses PCA to resample the features collected for the random forest, retaining only the feature dimensions with large weights after resampling and thereby removing correlation and redundancy between feature dimensions. The results show that, with 29 feature dimensions in total, keeping the first 18 dimensions sped up training by 23.20% and keeping 14 dimensions sped it up by 42.38%, with little effect on segmentation accuracy; the experiments show that the method effectively improves the algorithm's efficiency.
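A minimal stand-in for the PCA step: power iteration recovers the leading eigenvector of the covariance matrix, after which features would be ranked by their loadings and low-weight dimensions dropped before training the forest. The 2-D point set is illustrative.

```python
def top_principal_component(data, iters=100):
    """Leading eigenvector of the sample covariance matrix via power iteration.

    A pure-Python sketch of the PCA computation; a real pipeline would keep
    several components and rank feature dimensions by their weights.
    """
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Points spread mainly along the (1, 1) direction:
pts = [(0, 0), (1, 1.1), (2, 1.9), (3, 3.2), (4, 4.0)]
pc1 = top_principal_component(pts)
```

For this data the first component points along the diagonal, showing that one direction captures nearly all the variance, which is exactly why the 29-dimensional feature set can be cut to 14-18 dimensions with little accuracy loss.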

10.
As an effective management tool, clinical pathways play an important role in ensuring medical safety and improving medical efficiency. Drawing on a hospital's practical experience in implementing clinical pathways, this paper describes strategies for building and continuously optimizing them, from the two aspects of organizational management and execution, with the aim of strengthening process quality control and raising execution efficiency, and discusses effective ways to improve clinical pathway management.

11.
Multiple sequence alignment (MSA) is one of the most fundamental problems in computational molecular biology. The running time of the best known scheme for finding an optimal alignment, based on dynamic programming, increases exponentially with the number of input sequences. Hence, many heuristics were suggested for the problem. We consider a version of the MSA problem where the goal is to find an optimal alignment in which matches are restricted to positions in predefined matching segments. We present several techniques for making the dynamic programming algorithm more efficient, while still finding an optimal solution under these restrictions. We prove that it suffices to find an optimal alignment of the predefined sequence segments, rather than single letters, thereby reducing the input size and thus improving the running time. We also identify "shortcuts" that expedite the dynamic programming scheme. Empirical study shows that, taken together, these observations lead to an improved running time over the basic dynamic programming algorithm by 4 to 12 orders of magnitude, while still obtaining an optimal solution. Under the additional assumption that matches between segments are transitive, we further improve the running time for finding the optimal solution by restricting the search space of the dynamic programming algorithm.
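The input-reduction idea can be shown with a toy two-sequence version: run the usual global-alignment dynamic program over predefined segments instead of single letters, so the DP table has one cell per segment pair rather than per letter pair. The scoring values are illustrative, and the real paper handles multiple sequences, not two.

```python
def align_segments(a, b, match=2, gap=-1):
    """Global DP alignment over predefined segments instead of letters.

    Treating each segment as one alignment unit shrinks the DP table from
    (total letters)^2 to (number of segments)^2 -- a two-sequence sketch
    of the paper's input-reduction argument, with made-up scores.
    """
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else gap)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[n][m]

seqs = (["ACG", "TTA", "GG"], ["ACG", "GG"])
score = align_segments(*seqs)
```

Here the 3x2 segment table replaces an 8x5 letter table; the saving grows with segment length, which is where the reported orders-of-magnitude speedups come from.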

12.
This paper presents a new module for heart sounds segmentation based on S-transform. The heart sounds segmentation process segments the PhonoCardioGram (PCG) signal into four parts: S1 (first heart sound), systole, S2 (second heart sound) and diastole. It can be considered one of the most important phases in the auto-analysis of PCG signals. The proposed segmentation module can be divided into three main blocks: localization of heart sounds, boundaries detection of the localized heart sounds and a classification block to distinguish between S1 and S2. An original localization method of heart sounds is proposed in this study. The method, named SSE, calculates the Shannon energy of the local spectrum calculated by the S-transform for each sample of the heart sound signal. The second block contains a novel approach for the boundaries detection of S1 and S2. The energy concentrations of the S-transform of localized sounds are optimized by using a window width optimization algorithm. Then the SSE envelope is recalculated and a local adaptive threshold is applied to refine the estimated boundaries. To distinguish between S1 and S2, a feature extraction method based on the singular value decomposition (SVD) of the S-matrix is applied in this study. The proposed segmentation module is evaluated at each block according to a database of 80 sounds, including 40 sounds with cardiac pathologies.
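The Shannon energy at the heart of the SSE envelope is -x²·log(x²). The paper applies it to the local S-transform spectrum of each sample; the sketch below applies it to the time samples directly, just to show why the measure emphasises medium-intensity components over both noise and isolated peaks. The example signal is invented.

```python
import math

def shannon_energy(signal):
    """Per-sample Shannon energy -x^2 * log(x^2) of a peak-normalised signal.

    The SSE method applies this to each sample's local S-transform spectrum;
    here it is applied to raw samples for illustration only.
    """
    peak = max(abs(x) for x in signal) or 1.0
    out = []
    for x in signal:
        v = (x / peak) ** 2
        out.append(-v * math.log(v) if v > 0 else 0.0)
    return out

env = shannon_energy([0.0, 0.1, 1.0, 0.5, 0.0])
```

Note that both silence (x = 0) and the full-scale peak (x = 1) map to zero energy, while the mid-level sample dominates the envelope; this attenuation of extremes is what makes Shannon energy a robust basis for localizing S1 and S2.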

13.
For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) or improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and a function consistency replacement algorithm is given to solve integration of the local function model. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP, and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining.

14.
The Lightweight Design of Low RCS Pylon Based on Structural Bionics   Cited by: 1 (self: 0, others: 1)
A concept of Specific Structure Efficiency (SSE) was proposed that can be used in the lightweight-effect evaluation of structures. The main procedures of bionic structure design were introduced systematically. The parameter relationship between the hollow stem of a plant and the minimum weight was deduced in detail. In order to improve the SSE of pylons, the structural characteristics of hollow stems were investigated and extracted. A bionic pylon was designed based on analogous biological structural characteristics. Using finite-element-method-based simulation, the displacements and stresses in the bionic pylon were compared with those of the conventional pylon. Results show that the SSE of the bionic pylon is improved obviously. Static, dynamic and electromagnetism tests were carried out on the conventional and bionic pylons, and the weight, stress, displacement and Radar Cross Section (RCS) of both pylons were measured. Experimental results illustrate that the SSE of the bionic pylon is markedly improved: its specific strength efficiency and specific stiffness efficiency are increased by 52.9% and 43.6% respectively, and its RCS is reduced significantly.
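The hollow-stem principle behind the design can be checked with elementary section properties: for equal cross-sectional area (equal mass per unit length), a tube has a much larger second moment of area than a solid rod, hence higher stiffness per weight. The diameters below are illustrative numbers, not values from the paper.

```python
import math

def section_inertia(D, d=0.0):
    """Second moment of area of a (hollow) circular section: pi*(D^4 - d^4)/64."""
    return math.pi * (D ** 4 - d ** 4) / 64

def section_area(D, d=0.0):
    """Cross-sectional area of a (hollow) circular section: pi*(D^2 - d^2)/4."""
    return math.pi * (D ** 2 - d ** 2) / 4

# Solid rod of diameter 2 vs a tube chosen to have the same area
# (outer diameter 3, inner diameter sqrt(5)), i.e. equal mass per length:
solid_I = section_inertia(2.0)
tube_D, tube_d = 3.0, math.sqrt(5.0)
stiffness_ratio = section_inertia(tube_D, tube_d) / solid_I
```

At equal mass the tube carries 3.5 times the bending stiffness of the rod, which is the quantitative reason hollow plant stems (and the bionic pylon modeled on them) achieve a higher specific structure efficiency.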

15.
Fragment assembly is one of the most important problems of sequence assembly. Algorithms for DNA fragment assembly using the de Bruijn graph have been widely used. These algorithms require a large amount of memory and running time to build the de Bruijn graph. Another drawback of the conventional de Bruijn approach is the loss of information. To overcome these shortcomings, this paper proposes a parallel strategy to construct the de Bruijn graph. Its main characteristic is to avoid the division of the de Bruijn graph. A novel fragment assembly algorithm based on our parallel strategy is implemented in the MapReduce framework. The experimental results show that the parallel strategy can effectively improve the computational efficiency and remove the memory limitations of the assembly algorithm based on the Euler superpath. This paper provides a useful attempt at the assembly of large-scale genome sequences using Cloud Computing.
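A single-machine toy of the underlying data structure: build a de Bruijn graph from read k-mers, then spell a contig by walking an Eulerian path (Hierholzer's algorithm). Deduplicating k-mers keeps the toy simple but discards coverage counts, an instance of the information loss the abstract mentions; the paper's contribution, constructing the graph in parallel under MapReduce without splitting it, is not shown here.

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """One edge per distinct k-mer; nodes are (k-1)-mers."""
    kmers = {read[i:i + k] for read in reads for i in range(len(read) - k + 1)}
    graph = defaultdict(list)
    for kmer in sorted(kmers):
        graph[kmer[:-1]].append(kmer[1:])
    return graph

def eulerian_contig(graph, start):
    """Spell a contig by consuming every edge once (Hierholzer's algorithm)."""
    graph = {u: list(vs) for u, vs in graph.items()}  # local mutable copy
    stack, path = [start], []
    while stack:
        u = stack[-1]
        if graph.get(u):
            stack.append(graph[u].pop())   # follow an unused edge
        else:
            path.append(stack.pop())       # dead end: emit node
    path.reverse()
    return path[0] + "".join(node[-1] for node in path[1:])

# Two overlapping reads reassemble into one contig:
contig = eulerian_contig(de_bruijn(["ACGT", "CGTA"], k=3), start="AC")
```

The two reads share the k-mer CGT, so their edges chain into a single path and the walk recovers the merged sequence.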

16.
ABSTRACT: BACKGROUND: Aligning short DNA reads to a reference sequence alignment is a prerequisite for detecting their biological origin and analyzing them in a phylogenetic context. With the PaPaRa tool we introduced a dedicated dynamic programming algorithm for simultaneously aligning short reads to reference alignments and corresponding evolutionary reference trees. The algorithm aligns short reads to phylogenetic profiles that correspond to the branches of such a reference tree. The algorithm needs to perform an immense number of pairwise alignments. Therefore, we explore vector intrinsics and GPUs to accelerate the PaPaRa alignment kernel. RESULTS: We optimized and parallelized PaPaRa on CPUs and GPUs. Via SSE 4.1 SIMD (Single Instruction, Multiple Data) intrinsics for x86 SIMD architectures and multi-threading, we obtained a 9-fold acceleration on a single core as well as linear speedups with respect to the number of cores. The peak CPU performance amounts to 18.1 GCUPS (Giga Cell Updates per Second) using all four physical cores on an Intel i7 2600 CPU running at 3.4 GHz. The average CPU performance (averaged over all test runs) is 12.33 GCUPS. We also used OpenCL to execute PaPaRa on a GPU SIMT (Single Instruction, Multiple Threads) architecture. A NVIDIA GeForce 560 GPU delivered peak and average performance of 22.1 and 18.4 GCUPS respectively. Finally, we combined the SIMD and SIMT implementations into a hybrid CPU-GPU system that achieved an accumulated peak performance of 33.8 GCUPS. CONCLUSIONS: This accelerated version of PaPaRa (available at www.exelixis-lab.org/software.html) provides a significant performance improvement that allows for analyzing larger datasets in less time. We observe that state-of-the-art SIMD and SIMT architectures deliver comparable performance for this dynamic programming kernel when the "competing programmer approach" is deployed. Finally, we show that overall performance can be substantially increased by designing a hybrid CPU-GPU system with appropriate load distribution mechanisms.

17.
Finding the common substructures shared by two proteins is considered one of the central issues in computational biology because of its usefulness in understanding the structure-function relationship and its application in drug and vaccine design. In this paper, we propose a novel algorithm called FAMCS (Finding All Maximal Common Substructures) for the common substructure identification problem. Our method works initially at the protein secondary structural element (SSE) level and starts with the identification of all structurally similar SSE pairs. These SSE pairs are then merged into sets using a modified Apriori algorithm, which tests the similarity of various sets of SSE pairs incrementally until all the maximal sets of SSE pairs that are deemed to be similar are found. The maximal common substructures of the two proteins are formed from these maximal sets. A refinement algorithm is also proposed to fine-tune the alignment from the SSE level to the residue level. Comparison of FAMCS with other methods on various proteins shows that FAMCS can address all four requirements and infer interesting biological discoveries.

18.
The problem of finding an optimal structural alignment for a pair of superimposed proteins is often amenable to the Smith-Waterman dynamic programming algorithm, which runs in time proportional to the product of lengths of the sequences being aligned. While the quadratic running time is acceptable for computing a single alignment of two fixed protein structures, the time complexity becomes a bottleneck when running the Smith-Waterman routine multiple times in order to find a globally optimal superposition and alignment of the input proteins. We present a subquadratic running time algorithm capable of computing an alignment that optimizes one of the most widely used measures of protein structure similarity, defined as the number of pairs of residues in two proteins that can be superimposed under a predefined distance cutoff. The algorithm presented in this article can be used to significantly improve the speed-accuracy tradeoff in a number of popular protein structure alignment methods.
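The similarity measure being optimized can be computed, for fixed superimposed coordinates, with the quadratic dynamic program that the paper's subquadratic algorithm replaces: the longest order-preserving matching of residues that lie within the distance cutoff. The coordinates below are invented and assumed already superimposed.

```python
import math

def close(p, q, cutoff=5.0):
    """True if two residue positions are within the distance cutoff."""
    return math.dist(p, q) <= cutoff

def max_superimposable_pairs(A, B, cutoff=5.0):
    """Quadratic DP baseline: the maximum number of residue pairs of two
    superimposed chains that can be aligned (order-preserving, no residue
    reused) while lying within `cutoff` of each other."""
    n, m = len(A), len(B)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            pair = dp[i - 1][j - 1] + (1 if close(A[i - 1], B[j - 1], cutoff) else 0)
            dp[i][j] = max(pair, dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

# Toy C-alpha coordinates for two short superimposed chains:
chain_a = [(0, 0, 0), (3, 0, 0), (9, 0, 0)]
chain_b = [(0, 1, 0), (20, 0, 0), (9, 1, 0)]
score = max_superimposable_pairs(chain_a, chain_b)
```

Because the superposition search re-evaluates this score for many candidate rigid-body transforms, shaving the per-evaluation cost below quadratic (the paper's contribution) directly speeds up the whole alignment method.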

19.

Background

Expectation maximization (EM) is one of the common approaches for image segmentation.

Methods

An improvement of the EM algorithm is proposed and its effectiveness for MRI brain image segmentation is investigated. To improve EM performance, the proposed algorithm incorporates neighbourhood information into the clustering process: an average image is first computed as the neighbourhood information and then incorporated into the clustering process. Optionally, user interaction is used to improve the segmentation results. Simulated and real MR volumes are used to compare the efficiency of the proposed improvement with the existing neighbourhood-based extensions of EM and FCM.

Results

The findings show that the proposed algorithm produces a higher similarity index.

Conclusions

Experiments demonstrate the effectiveness of the proposed algorithm compared with other existing algorithms at various noise levels.
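For reference, here is the plain EM baseline that such work extends, on a 1-D two-component Gaussian mixture; the paper's variant additionally mixes each pixel with its neighbourhood average before clustering, which is omitted in this sketch. The intensity data are invented.

```python
import math

def em_two_gaussians(data, iters=50):
    """Plain EM for a two-component 1-D Gaussian mixture.

    The neighbourhood-information extension described in the abstract is
    not included; this is only the standard baseline.
    """
    mu = [min(data), max(data)]            # simple spread-out initialisation
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(w)
            resp.append([wi / s for wi in w])
        # M-step: re-estimate weights, means and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)     # guard against variance collapse
    return mu

# Two well-separated intensity clusters (e.g. two tissue classes):
intensities = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9, 5.1]
means = em_two_gaussians(intensities)
```

On noisy MR data this per-pixel model fragments regions, which is exactly why neighbourhood-aware extensions like the one above are proposed.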

20.
Over the past few years, cluster/distributed computing has been gaining popularity. The proliferation of cluster/distributed computing is due to the improved performance and increased reliability of these systems. Many parallel programming languages and related parallel programming models have become widely accepted. However, one of the major shortcomings of running parallel applications on cluster/distributed computing environments is the high communication overhead incurred. To reduce the communication overhead, and thus the completion time of a parallel application, this paper describes a simple, efficient and portable Key Message (KM) approach to support parallel computing on cluster/distributed computing environments. To demonstrate the advantage of the KM approach, a prototype runtime system has been implemented and evaluated. Our preliminary experimental results show that the KM approach yields greater communication improvements for a parallel application as network background load increases or as the application's computation-to-communication ratio decreases.
