Similar Articles
1.
Current Particle Swarm Optimization (PSO) algorithms do not address problems with unknown dimensions, which arise in many applications that would benefit from the use of PSO. In this paper, we propose a new algorithm, called Dimension Adaptive Particle Swarm Optimization (DA-PSO), that can address problems with any number of dimensions. We also propose and compare three other PSO-based methods with DA-PSO. As an illustration, we apply our algorithms to the Weibull mixture model density estimation problem. DA-PSO achieves better objective function values than the other PSO-based algorithms on four simulated datasets and a real dataset. We also compare DA-PSO with the recursive Expectation-Maximization (EM) estimator, a non-PSO-based method, again obtaining very good results.
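For readers unfamiliar with the underlying mechanics, the sketch below shows a minimal global-best PSO loop in Python on a fixed-dimension sphere function. It is illustrative only: the paper's dimension-adaptive logic is not reproduced, and all parameter values are conventional defaults, not the authors' settings.

```python
# Minimal global-best PSO sketch (illustrative; DA-PSO's dimension
# adaptation is NOT implemented here).
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()             # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f                        # update personal bests
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

best_x, best_f = pso(lambda z: np.sum(z ** 2), dim=5)  # sphere test function
print(best_x, best_f)
```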

2.
The high-dimensional search space involved in markerless full-body articulated human motion tracking from multiple-view video sequences has led to a number of solutions based on metaheuristics, the most recent of which is Particle Swarm Optimization (PSO). However, classical PSO suffers from premature convergence and is easily trapped in local optima, significantly affecting tracking accuracy. To overcome these drawbacks, we have developed a method based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a non-linear 34-dimensional function optimization problem in which the fitness function quantifies the difference between the observed image and a projection of the model configuration. Both silhouette and edge likelihoods are used in the fitness function. Experiments on the Brown and HumanEva-II datasets demonstrate that H-MCPSO outperforms two leading alternative approaches, the Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support these claims.

3.
Particle Swarm Optimization (PSO) is a stochastic optimization approach that originated from simulations of bird flocking and has been successfully used as an optimization tool in many applications. Estimation of distribution algorithms (EDAs) are a class of evolutionary algorithms that perform a two-step process: building a probabilistic model from which good solutions may be generated, and then using this model to generate new individuals. Two distinct research trends that emerged in the past few years are the hybridization of PSO and EDA algorithms and the parallelization of EDAs to exploit the idea of exchanging probabilistic model information. In this work, we propose a cooperative PSO/EDA algorithm based on the exchange of heterogeneous probabilistic models. The model is heterogeneous because the cooperating PSO/EDA algorithms use different methods to sample the search space. Three different exchange approaches are tested and compared. In all of them, the amount of information exchanged is adapted based on the performance of the two cooperating swarms. The performance of the cooperative model is compared to existing state-of-the-art cooperative PSO approaches using a suite of well-known benchmark optimization functions.
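The "build a model, then sample from it" step that distinguishes an EDA from plain PSO can be sketched in a few lines. The following minimal Gaussian EDA iteration is an illustration under assumed settings (elite fraction, covariance regularizer), not the paper's cooperative PSO/EDA exchange scheme.

```python
# Minimal Gaussian EDA step: fit a model to elite solutions, resample.
import numpy as np

def eda_step(pop, fitness, rng, elite_frac=0.3):
    """Fit a Gaussian to the elite individuals and resample the population."""
    n = len(pop)
    elite = pop[np.argsort(fitness)[: max(2, int(elite_frac * n))]]
    mu = elite.mean(axis=0)
    cov = np.cov(elite, rowvar=False) + 1e-6 * np.eye(pop.shape[1])
    return rng.multivariate_normal(mu, cov, size=n)

rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, (40, 3))
for _ in range(100):
    fit = (pop ** 2).sum(axis=1)          # sphere function
    pop = eda_step(pop, fit, rng)
print(pop.mean(axis=0))                   # drifts toward the optimum at 0
```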

4.

Background  

Particle Swarm Optimization (PSO) is an established method for parameter optimization. It is a population-based adaptive optimization technique whose behavior is governed by several "strategy parameters". Choosing reasonable parameter values for PSO is crucial for its convergence behavior and depends on the optimization task. We present a method for parameter meta-optimization based on PSO and apply it to neural network training. The concept of Optimized Particle Swarm Optimization (OPSO) is to optimize the free parameters of PSO by having swarms within a swarm. We assessed the performance of the OPSO method on a set of five artificial fitness functions and compared it to two popular PSO implementations.

5.
Grid computing uses distributed, interconnected computers and resources collectively to achieve higher-performance computing and resource sharing. Task scheduling is one of the core steps in efficiently exploiting the capabilities of a Grid environment. Recently, heuristic algorithms have been successfully applied to task scheduling on computational Grids. In this paper, the Gravitational Search Algorithm (GSA), one of the latest population-based metaheuristics, is used for task scheduling on computational Grids. The proposed method employs GSA to find the best solution with minimum makespan and flowtime. We compare this approach with Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) methods. The results demonstrate that the benefits of GSA are its speed of convergence and its capability to obtain feasible schedules.
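As a rough illustration of the objective such a scheduler minimizes, the sketch below computes a weighted makespan/flowtime cost for a candidate task-to-machine assignment. The task lengths, machine speeds, and weighting are invented for illustration, not taken from the paper.

```python
# Sketch of a makespan/flowtime fitness for Grid task scheduling
# (illustrative data; not the paper's benchmark instances).
import numpy as np

def schedule_cost(assign, task_len, speed, w=0.5):
    """assign[i] = machine index of task i; returns weighted makespan+flowtime."""
    finish = np.zeros(len(speed))
    flowtime = 0.0
    for t, m in enumerate(assign):
        finish[m] += task_len[t] / speed[m]   # task completes on machine m
        flowtime += finish[m]                 # accumulate completion times
    makespan = finish.max()
    return w * makespan + (1 - w) * flowtime

rng = np.random.default_rng(2)
task_len = rng.uniform(10, 100, 20)           # 20 tasks
speed = np.array([1.0, 1.5, 2.0])             # 3 machines
assign = rng.integers(0, 3, 20)               # random candidate schedule
print(schedule_cost(assign, task_len, speed))
```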

6.
Early lung tumors typically appear as nodules on CT scans, of which 30% to 40% are malignant according to statistical studies. Early detection and classification of lung nodules are therefore crucial to the treatment of lung cancer. With the increasing prevalence of lung cancer, the large volume of CT images awaiting diagnosis places a huge burden on doctors, who may miss or falsely detect abnormalities due to fatigue. Methods: In this study, we propose a novel lung nodule detection method based on the YOLOv3 deep learning algorithm that requires only one preprocessing step. To overcome the problem of scarce training data when starting a new Computer-Aided Diagnosis (CAD) study, we first select a small number of diseased regions to simulate training on a limited dataset: 5 nodule patterns are selected and deformed into 110 nodules by random geometric transformations before being fused into 10 normal lung CT images using Poisson image editing. According to the experimental results, the Poisson fusion method achieves a detection rate of about 65.24% on 100 new test images. Second, 419 slices from the public RIDER database are used to train and test our YOLOv3 network. YOLOv3 shortens lung nodule detection time by 2-3 times compared with the mainstream algorithm, with a detection accuracy of 95.17%. Finally, the configuration of YOLOv3 is optimized on the learning datasets. The results show that YOLOv3 offers both high speed and high accuracy in lung nodule detection and can process a large amount of CT image data within a short time, meeting the huge demand of clinical practice. In addition, using Poisson image editing to generate datasets reduces the need for raw training data and improves training efficiency.
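The Poisson fusion step can be sketched with OpenCV's seamlessClone, which implements Poisson image editing. The patch, mask, and paste location below are synthetic stand-ins for real CT data, not the paper's pipeline.

```python
# Poisson image editing via OpenCV seamlessClone (synthetic stand-in data).
import cv2
import numpy as np

lung = np.full((256, 256, 3), 60, np.uint8)            # stand-in lung slice
nodule = np.zeros((40, 40, 3), np.uint8)
cv2.circle(nodule, (20, 20), 12, (180, 180, 180), -1)  # stand-in nodule patch
mask = np.zeros((40, 40), np.uint8)
cv2.circle(mask, (20, 20), 14, 255, -1)                # region to blend

center = (128, 128)                                    # paste location in lung
fused = cv2.seamlessClone(nodule, lung, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("fused_nodule.png", fused)
```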

7.
To address problems in the agricultural industry structure of Liaoning Province, we build a sustainable agricultural industry structure optimization model for the region based on a comprehensive examination of economic, ecological, and social factors, and solve the model with an improved multi-objective particle swarm optimization algorithm, providing a theoretical basis for adjusting the agricultural industry structure of Liaoning Province and similar regions.

8.
Task scheduling is one of the most challenging aspects of improving overall performance in cloud computing and optimizing cloud utilization and Quality of Service (QoS). This paper focuses on task scheduling optimization using a novel approach based on Dynamic dispatch Queues (TSDQ) and hybrid metaheuristic algorithms. We propose two hybrid metaheuristics: the first combines Fuzzy Logic with Particle Swarm Optimization (TSDQ-FLPSO), and the second combines Simulated Annealing with Particle Swarm Optimization (TSDQ-SAPSO). Several experiments were carried out in an open-source simulator (CloudSim) using synthetic and real datasets from real systems. The experimental results demonstrate the effectiveness of the proposed approach, with TSDQ-FLPSO providing the best results compared to TSDQ-SAPSO and other existing scheduling algorithms, especially on high-dimensional problems. The TSDQ-FLPSO algorithm shows a clear advantage in terms of waiting time, queue length, makespan, cost, resource utilization, degree of imbalance, and load balancing.

9.
《IRBM》2021,42(6):415-423
Objectives
Convolutional neural networks (CNNs) have established state-of-the-art performance in computer vision tasks such as object detection and segmentation. One of the major remaining challenges concerns their ability to capture consistent spatial and anatomically plausible attributes in medical image segmentation. To address this issue, many works advocate integrating prior information at the level of the loss function. However, prior-based losses often suffer from local solutions and training instability. CoordConv layers are extensions of convolutional layers in which the convolution is conditioned on spatial coordinates. The objective of this paper is to investigate CoordConv as a proficient substitute for convolutional layers in medical image segmentation tasks when training under prior-based losses.
Methods
This work introduces CoordConv-Unet, a novel structure that can accommodate training under anatomical prior losses. The proposed architecture plays a dual role relative to prior-constrained CNN learning: it either acts as a regularizer that stabilizes learning while maintaining system performance, or improves system performance by making learning more stable and helping it evade local minima.
Results
To validate the performance of the proposed model, experiments are conducted on two well-known public datasets from the Decathlon challenge: a mono-modal MRI dataset dedicated to segmentation of the left atrium, and a CT image dataset for segmenting the spleen, an organ characterized by varying size and mild convexity issues.
Conclusion
Results show that, despite the inadequacy of CoordConv when trained with the regular Dice baseline loss, the proposed CoordConv-Unet structure can significantly improve model performance when trained under anatomically constrained prior losses.
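A CoordConv layer is simple to express: a regular convolution whose input is augmented with normalized coordinate channels. The PyTorch sketch below is a minimal illustration under assumed channel sizes, not the CoordConv-Unet architecture itself.

```python
# Minimal CoordConv layer: Conv2d over input plus normalized x/y channels.
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, **kw):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, **kw)  # +2 coord channels

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, ys, xs], dim=1))

layer = CoordConv2d(1, 8, kernel_size=3, padding=1)
out = layer(torch.randn(2, 1, 64, 64))
print(out.shape)  # torch.Size([2, 8, 64, 64])
```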

10.
This work presents a computerized method to identify, detect, evaluate, and, by colored overlay, present gold particle pairs in electron microscopy (EM), even in wide-field views. Double gold immunolabeled specimens were analyzed in a LEO 912 electron microscope equipped with a 2k x 2k-pixel slow-scan cooled CCD camera connected to a computer running analySIS 3.1 PRO image processing software. Acquiring a high-resolution, high-dynamic-range image with this camera allowed correct segmentation of the gold particles, separating them from other cell structures and from the substrate. Particle identification was performed by a classification module of our own design. Based on shape and size, the computer recognized the group of small particles, classified them as either singular or clustered, and differentiated these from the single bigger type. The final image shows the particle types separated and colored, and indicates the total number of objects encountered in the specific region of interest. Moreover, a montage tool allowed us to obtain representative final images of large microscopic fields, which, when analyzed by the Gold Finder module, provided information on the distribution and localization of antigens comparable to that provided by wide-field light microscope images.
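The shape-and-size classification idea can be sketched with scikit-image connected-component analysis. The area and roundness thresholds below are invented for illustration and would need calibration to actual gold particle sizes; this is not the authors' analySIS module.

```python
# Sketch: classify segmented particles by size and roundness
# (thresholds are hypothetical, for illustration only).
import numpy as np
from skimage import measure

def classify_particles(binary, small_max=60, big_min=120, round_min=0.8):
    labels = measure.label(binary)
    out = {'small_single': [], 'small_cluster': [], 'big': []}
    for r in measure.regionprops(labels):
        roundness = 4 * np.pi * r.area / (r.perimeter ** 2 + 1e-9)
        if r.area >= big_min:
            out['big'].append(r.label)
        elif roundness >= round_min:
            out['small_single'].append(r.label)
        else:
            out['small_cluster'].append(r.label)   # irregular -> cluster
    return out

binary = np.zeros((64, 64), bool)
binary[10:16, 10:16] = True                        # small round-ish blob
binary[30:36, 20:34] = True                        # elongated cluster
binary[45:60, 45:60] = True                        # big particle
print({k: len(v) for k, v in classify_particles(binary).items()})
```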

11.
Computer-aided detection (CAD) technology has been developed and has demonstrated its potential to assist radiologists in detecting pulmonary nodules, especially at an early stage. In this paper, we present a novel scheme for automatic detection of pulmonary nodules in CT images based on a 3D tensor filtering algorithm and local image feature analysis. We first apply a series of preprocessing steps to segment the lung volume and generate isotropic volumetric CT data. Next, a unique 3D tensor filtering approach and local image feature analysis are used to detect nodule candidates. A 3D level set segmentation method then corrects and refines the boundaries of the nodule candidates. We extract features of the detected candidates and select the optimal features using a CFS (Correlation Feature Selection) subset evaluator attribute selection method. Finally, a random forest classifier is trained to classify the detected candidates. The performance of this CAD scheme is validated using two datasets: the LUNA16 (Lung Nodule Analysis 2016) database and the ANODE09 (Automatic Nodule Detection 2009) database. Under 10-fold cross-validation, the CAD scheme yielded a sensitivity of 79.3% at an average of 4 false positive detections per scan (FP/scan) on the former dataset, and a sensitivity of 84.62% at 2.8 FP/scan on the latter. Our detection results show that combining the 3D tensor filtering algorithm with local image feature analysis constitutes an effective approach to detecting pulmonary nodules.

12.
By introducing Particle Swarm Optimization (PSO) and Least Squares Support Vector Regression (LSSVR), we propose a soil fertility evaluation model based on PSO-LSSVR. Ten evaluation indicators were selected, including organic matter, total nitrogen, available phosphorus, available potassium, cation exchange capacity, pH, bulk density, clay content, water-stable aggregates, and dispersion rate, and a soil fertility evaluation model was built for the black soil of Jilin Province as a case study. The results were also compared with those of the matter-element extension method and an ordinary SVM model; the three methods agree on the evaluation of most samples, and for samples 2 and 13 the PSO-LSSVR model assigned grades IV and III respectively, in line with the actual situation. This indicates that PSO-LSSVR is an applicable soil fertility evaluation model that accurately reflects soil characteristics.
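For reference, an LSSVR model reduces to solving one linear system in its dual variables. The sketch below fits an RBF-kernel LSSVR on toy data; the kernel width and regularization constant are illustrative, and the PSO step that tunes them in the paper is omitted.

```python
# Compact LSSVR sketch: solve the standard LS-SVM linear system
# [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = 1.0, 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf(Xq, X, sigma) @ alpha + b

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, (50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
predict = lssvr_fit(X, y)
print(predict(np.array([[0.5]])))  # close to sin(0.5)
```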

13.
This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization.
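The modification is easy to express in code: the combiner is trained on the base classifiers' outputs concatenated with the raw input pattern. The sketch below uses assumed scikit-learn base classifiers (not the paper's neural networks), and for brevity skips the cross-validated generation of base outputs that a careful stacked-generalization setup requires.

```python
# Modified Stacked Generalization sketch: meta-features are
# base-classifier probabilities plus the raw input pattern.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

bases = [RandomForestClassifier(random_state=0),
         LogisticRegression(max_iter=1000)]
for clf in bases:
    clf.fit(X_tr, y_tr)

def meta_features(X):
    # base class-probability outputs + the original input pattern;
    # a rigorous setup would use out-of-fold base outputs to avoid leakage
    probs = [clf.predict_proba(X) for clf in bases]
    return np.hstack(probs + [X])

combiner = LogisticRegression(max_iter=1000).fit(meta_features(X_tr), y_tr)
print(combiner.score(meta_features(X_te), y_te))
```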

14.
Molecular biology makes it possible to express foreign genes in microorganisms, plants, and animals. To improve heterologous expression, it is important to optimize the codon usage of the sequence to adapt it to the host organism. In this paper, a novel method based on the Quantum-behaved Particle Swarm Optimization (QPSO) algorithm is developed to optimize the codon usage of synthetic genes. Compared to existing probability-based methods, QPSO generates better results when the DNA/RNA sequence length is less than 6 kb, which is the commonly used range. While software or web services based on probability methods may fail to exclude all defined restriction sites when there are many undesired sites in the sequence, our proposed method removes the undesired sites efficiently during the optimization process.
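A minimal QPSO iteration on a continuous test function is sketched below to illustrate the quantum-behaved update (local attractors, mean-best position, and a contraction-expansion coefficient); the paper's discrete codon encoding and restriction-site handling are not reproduced.

```python
# Minimal QPSO sketch on a continuous test function.
import numpy as np

def qpso(f, dim, n=30, iters=300, seed=5, lo=-5.0, hi=5.0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    for it in range(iters):
        g = pbest[np.argmin(pbest_f)]
        mbest = pbest.mean(axis=0)                 # mean best position
        beta = 1.0 - 0.5 * it / iters              # contraction-expansion coeff.
        phi = rng.random((n, dim))
        p = phi * pbest + (1 - phi) * g            # local attractors
        u = rng.random((n, dim))
        sign = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
        x = np.clip(p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u),
                    lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
    return pbest[np.argmin(pbest_f)], pbest_f.min()

print(qpso(lambda z: np.sum(z ** 2), dim=8))       # sphere test function
```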

15.
16.
Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine significant differences in performance. The results indicate an overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other approaches considered.
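As a reference point for the winning algorithm, the sketch below implements the classic DE/rand/1/bin scheme on a sphere benchmark function; F and CR are typical textbook defaults rather than the survey's tuned settings.

```python
# Minimal DE/rand/1/bin sketch.
import numpy as np

def de(f, dim, np_=30, iters=300, F=0.5, CR=0.9, lo=-5.0, hi=5.0, seed=4):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, (np_, dim))
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(iters):
        for i in range(np_):
            idx = [j for j in range(np_) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # rand/1 mutation
            cross = rng.random(dim) < CR                    # binomial crossover
            cross[rng.integers(dim)] = True                 # keep >=1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                                # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()

print(de(lambda z: np.sum(z ** 2), dim=10))                 # sphere test function
```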

17.
Objective
To study the diagnostic value of CT imaging in non-small cell lung cancer (NSCLC) and to establish a prognosis model combining clinical characteristics, so as to provide a reference for survival prediction in NSCLC patients.
Method
CT scan data from 200 NSCLC patients were taken as the research object. Through image segmentation, radiomics features were extracted from the CT images. The reliability and performance of prognosis models based on the optimal feature number of a specific algorithm were compared with those of models based on the globally optimal feature number.
Results
Among the prognosis models based on the optimal features of a specific algorithm, 30-RELF-NB (30 optimal features, RELF feature selection algorithm, and NB classifier) has the highest accuracy and AUC (area under the receiver operating characteristic curve). Among the prognosis models based on globally optimal features, 25-NB (25 globally optimal features with a naive Bayes classifier) has the highest accuracy and AUC. Compared with prediction models trained on features from a specific feature selection algorithm, the models based on globally optimal features have higher overall performance and stability.
Conclusion
The prognosis model based on globally optimal features established in this paper has good reliability and performance, and can be applied to CT radiomics of NSCLC.

18.
MOTIVATION: We present a new approach to the analysis of images from complementary DNA microarray experiments. Image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by a bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be a bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data, but they can be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal, so it allows for nonzero correlation between the R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. In a final step, nonparametric (kernel) smoothing is applied to the posterior probabilities of component membership. The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it handles segmentation and intensity estimation simultaneously, not separately as in commonly used existing software, and it treats the red and green intensities in a bivariate framework rather than estimating them separately via univariate methods; (3) the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log-normal) or t distributions; (4) the bivariate t distribution for the foreground intensity provides a model that is less sensitive to extreme observations; (5) as a consequence of these properties, it allows segmentation of a wide range of spot shapes, including doughnut and sickle shapes and artifacts.
RESULTS: We apply our method for gridding, segmentation, and estimation to real cDNA microarray images and artificial data. Our method provides better segmentation of spot shapes as well as better intensity estimation than the Spot and spotSegmentation R packages. It detected blank spots as well as bright artifacts in the real data, and estimated spot intensities with high accuracy for the synthetic data.
AVAILABILITY: The algorithms were implemented in Matlab. The Matlab code implementing both the gridding and the segmentation/estimation is available upon request.
SUPPLEMENTARY INFORMATION: Supplementary material is available at Bioinformatics online.
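The E-step of such a two-component mixture is short to write down: each pixel's posterior probability of foreground membership is a ratio of weighted densities. The sketch below uses SciPy's gamma and multivariate-t densities with invented parameter values and omits the M-step updates, so it illustrates only the membership computation, not the full fitting procedure.

```python
# E-step sketch for a gamma-t mixture over bivariate (R, G) intensities
# (parameter values are illustrative; the M-step is omitted).
import numpy as np
from scipy.stats import gamma, multivariate_t

def e_step(rg, pi_fg, gam_r, gam_g, t_mean, t_cov, t_df):
    # background: independent three-parameter gammas on R and G
    f_bg = gamma.pdf(rg[:, 0], *gam_r) * gamma.pdf(rg[:, 1], *gam_g)
    # foreground: bivariate t with unconstrained covariance
    f_fg = multivariate_t.pdf(rg, loc=t_mean, shape=t_cov, df=t_df)
    return pi_fg * f_fg / (pi_fg * f_fg + (1 - pi_fg) * f_bg)

rng = np.random.default_rng(6)
rg = np.vstack([rng.gamma(2.0, 50.0, (300, 2)),           # background pixels
                rng.normal([800, 900], 80, (100, 2))])    # foreground pixels
tau = e_step(rg, 0.25, (2.0, 0, 50.0), (2.0, 0, 50.0),
             [800, 900], np.diag([6400.0, 6400.0]), 10)
print(tau[:5], tau[-5:])  # low for background pixels, high for foreground
```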

19.
20.
In this paper, we present a semi-supervised approach for liver segmentation from computed tomography (CT) scans, based on the graph cut model integrated with domain knowledge. First, hard constraints are obtained from knowledge of the liver's characteristic appearance and anatomical location. Second, the energy function is constructed via a knowledge-based similarity measure, with a path-based spatial connectivity measure applied for robust regional properties. Finally, the image is interpreted as a graph, and the segmentation problem is cast as an optimal cut on it, which can be computed with an existing max-flow algorithm. The model is evaluated on the MICCAI 2007 liver segmentation challenge datasets and other CT volumes from the hospital. The experimental results show its effectiveness and efficiency.
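The final step can be sketched end to end on a toy image: unary terms link each pixel to the source/sink, pairwise terms enforce smoothness, and a max-flow/min-cut computation yields the labeling. The unary costs, smoothness weight, and use of networkx (rather than a specialized max-flow library) below are illustrative choices, not the paper's construction.

```python
# Binary graph-cut segmentation sketch via max-flow/min-cut
# (toy data; costs and weights are illustrative).
import networkx as nx
import numpy as np

img = np.array([[0.1, 0.2, 0.8],
                [0.2, 0.7, 0.9],
                [0.1, 0.8, 0.9]])        # toy intensities in [0, 1]
h, w = img.shape
lam = 0.3                                 # smoothness weight

G = nx.DiGraph()
for y in range(h):
    for x in range(w):
        p = (y, x)
        # t-links: cutting s->p labels p background, p->t labels p foreground
        G.add_edge('s', p, capacity=img[y, x])        # bright -> foreground
        G.add_edge(p, 't', capacity=1.0 - img[y, x])
        for q in [(y + 1, x), (y, x + 1)]:            # 4-neighbour smoothness
            if q[0] < h and q[1] < w:
                G.add_edge(p, q, capacity=lam)
                G.add_edge(q, p, capacity=lam)

cut_value, (fg, bg) = nx.minimum_cut(G, 's', 't')     # source side = foreground
labels = np.zeros((h, w), int)
for (y, x) in (fg - {'s'}):
    labels[y, x] = 1
print(labels)
```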
