Similar documents
20 similar documents found (search time: 15 ms)
1.
Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking both quantitatively and qualitatively.
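
The abstract does not spell out the multiplicative update it uses; as a point of reference, here is a minimal NumPy sketch of the standard Lee-Seung style non-negative coding update, assuming a squared-error fitting loss rather than the paper's M-estimation loss and treating the dictionary W as fixed:

```python
import numpy as np

def nonneg_coding(V, W, n_iter=200, eps=1e-10):
    """Solve min_H ||V - W H||_F^2 s.t. H >= 0 with multiplicative updates.

    V: (d, n) data columns (e.g., vectorized particle patches)
    W: (d, k) fixed non-negative dictionary (template) for one modality
    Returns H: (k, n) non-negative representation coefficients.
    """
    H = np.random.rand(W.shape[1], V.shape[1])
    for _ in range(n_iter):
        # Element-wise ratio update; keeps H non-negative by construction.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    return H

# toy usage: 64-dim patches, 10-atom dictionary, 5 particles
rng = np.random.default_rng(0)
W = rng.random((64, 10))
V = rng.random((64, 5))
H = nonneg_coding(V, W)
print(H.shape, (H >= 0).all())
```

The element-wise ratio never turns a non-negative entry negative, which is the property the abstract relies on when it mentions multiplicative updates for the coefficients.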

2.
A great deal of recent research has focused on the challenging task of selecting differentially expressed genes from microarray data ("gene selection"). Numerous gene selection algorithms have been proposed in the literature, but it is often unclear exactly how these algorithms respond to conditions like small sample sizes or differing variances. Choosing an appropriate algorithm can therefore be difficult in many cases. In this paper we propose a theoretical analysis of gene selection, in which the probability of successfully selecting differentially expressed genes, using a given ranking function, is explicitly calculated in terms of population parameters. The theory developed is applicable to any ranking function which has a known sampling distribution, or one which can be approximated analytically. In contrast to methods based on simulation, the approach presented here is computationally efficient and can be used to examine the behavior of gene selection algorithms under a wide variety of conditions, even when the number of genes involved runs into the tens of thousands. The utility of our approach is illustrated by comparing three widely-used gene selection methods.
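
As an illustration of computing a selection probability analytically rather than by simulation, the sketch below assumes a two-sample t statistic as the ranking function and approximates the selection threshold by the null-t quantile at rank top_k; this is a toy version of the idea, not the paper's exact derivation:

```python
from scipy import stats

def selection_probability(delta, sigma, n_per_group, n_genes, top_k):
    """Approximate probability that one differentially expressed gene,
    ranked by a two-sample t statistic, lands in the top_k of n_genes
    when the remaining genes are null. Illustrative approximation: the
    selection threshold is the null-t quantile at rank top_k."""
    df = 2 * n_per_group - 2
    alpha = top_k / n_genes                       # fraction of genes selected
    t_cut = stats.t.ppf(1 - alpha / 2, df)        # two-sided |t| threshold
    nc = delta / (sigma * (2 / n_per_group) ** 0.5)  # non-centrality of DE gene
    # P(|T_nc| > t_cut) under the noncentral t distribution
    return stats.nct.sf(t_cut, df, nc) + stats.nct.cdf(-t_cut, df, nc)

print(selection_probability(delta=1.0, sigma=1.0, n_per_group=5,
                            n_genes=20000, top_k=100))
```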

3.
4.
Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human-computer interaction. However, most existing systems are designed to detect and recognize horizontal (or near-horizontal) texts. Due to the increasing popularity of mobile-computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed for capturing the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with the competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol, which is more suitable for benchmarking algorithms for detecting texts of varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of varying orientations in complex natural scenes.

5.
In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
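
A hedged sketch of the described feature pipeline: marker-to-center distances and changes in marker position are summarized by mean, variance, and RMS over a sequence, then fed to a K-nearest-neighbor classifier. The feature layout and the random inputs are assumptions for illustration; real marker positions would come from the optical flow tracker:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def sequence_features(markers, face_center):
    """markers: (T, 8, 2) tracked marker positions over T frames,
    face_center: (2,). Returns mean/variance/RMS of the two distance
    features described in the abstract (layout is our assumption)."""
    dist = np.linalg.norm(markers - face_center, axis=2)   # (T, 8) marker-to-center distance
    delta = np.linalg.norm(markers - markers[0], axis=2)   # (T, 8) change from initial position
    feats = np.concatenate([dist, delta], axis=1)          # (T, 16)
    mean = feats.mean(axis=0)
    var = feats.var(axis=0)
    rms = np.sqrt((feats ** 2).mean(axis=0))
    return np.concatenate([mean, var, rms])                # (48,)

# toy usage with random "sequences" standing in for two expressions
rng = np.random.default_rng(1)
X = np.stack([sequence_features(rng.random((30, 8, 2)), np.array([0.5, 0.5]))
              for _ in range(20)])
y = np.repeat([0, 1], 10)
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict(X[:2]))
```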

6.
Beam tracking as a mitigation technique for the treatment of intra-fractionally moving organs requires prediction to overcome latencies in the adaptation process. We implemented and experimentally tested a prediction method for scanned carbon beam tracking. Beam tracking parameters, i.e. the shift of the Bragg peak position in 3D, are determined prior to treatment in 4D treatment planning and applied during treatment delivery depending on the motion state of the target as well as on the scanning spot in the target. Hence, prediction is required for the organ motion trajectory as well as for the scanning progress to achieve maximal performance. Prediction algorithms to determine beam displacements that overcome these latencies were implemented. Prediction times of 25 ms for target spot prediction were required for ~6 mm water-equivalent longitudinal beam shifts. The experimental tests proved the feasibility of the implemented prediction algorithm.
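
The abstract does not detail the predictor itself; a generic latency-compensation sketch, assuming a uniformly sampled 1-D motion trace and a 25 ms prediction horizon, is a straight-line extrapolation over the most recent samples:

```python
import numpy as np

def predict_ahead(samples, dt, horizon=0.025, window=5):
    """Linearly extrapolate a 1-D motion trace `horizon` seconds ahead.

    samples: recent position samples with uniform spacing dt (seconds).
    Fits a line to the last `window` samples and evaluates it at
    t_last + horizon. This is a generic latency-compensation sketch,
    not the predictor used in the cited work.
    """
    y = np.asarray(samples[-window:], dtype=float)
    t = np.arange(len(y)) * dt
    slope, intercept = np.polyfit(t, y, 1)
    return slope * (t[-1] + horizon) + intercept

# toy usage: sinusoidal breathing-like motion sampled at 100 Hz
t = np.arange(0, 2, 0.01)
trace = 5.0 * np.sin(2 * np.pi * 0.25 * t)   # mm
print(predict_ahead(trace, dt=0.01))
```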

7.
The three simplest projection-reconstruction algorithms, namely the Lowest-Value, Additive Back-Projection, and Hybrid Back-Projection/Lowest-Value algorithms, are analyzed. A new, equally simple algorithm that reconstructs the spectrum by utilizing the amplitude histogram at each reconstruction point is also explored. The algorithms are tested using simulated spectra. While all the algorithms considered can potentially result in a substantial reduction of the amount of data needed for reconstruction, they can suffer from a number of drawbacks. In particular, they often fail when the spectra are noisy and/or contain overlapping peaks. Compared to the existing algorithms, the new, histogram-based algorithm has the potential advantage of being able to deal with spectra containing peaks of opposite phase.
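
A compact sketch of the combination rules mentioned above, including the histogram-based rule, for a single reconstruction point; the nearest-grid projection lookup is a simplification of the real projection-reconstruction geometry:

```python
import numpy as np

def reconstruct_point(projections, angles, x, y, mode="lowest"):
    """Reconstruct one spectral point from 1-D projections.

    projections: list of 1-D arrays sampled on the same unit grid.
    angles: projection angles in radians. The amplitude contributed by
    each projection is its value at coordinate x*cos(a) + y*sin(a)
    (nearest-grid lookup). `mode` picks the combination rule: lowest
    value, additive sum, or the most populated amplitude-histogram bin.
    """
    values = []
    for proj, a in zip(projections, angles):
        idx = int(round(x * np.cos(a) + y * np.sin(a)))
        idx = np.clip(idx, 0, len(proj) - 1)
        values.append(proj[idx])
    values = np.array(values)
    if mode == "lowest":
        return values.min()
    if mode == "additive":
        return values.sum()
    # histogram-based: return the center of the most populated bin
    counts, edges = np.histogram(values, bins=10)
    k = counts.argmax()
    return 0.5 * (edges[k] + edges[k + 1])

# toy usage: three projections, each with a single "peak" at index 20
projs = [np.zeros(64) for _ in range(3)]
for p in projs:
    p[20] = 1.0
print(reconstruct_point(projs, [0.0, 0.05, 0.1], 20, 0, mode="lowest"))
```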

8.
Successful prediction of the beta-hairpin motif will be helpful for understanding fold recognition. Some algorithms have been proposed for the prediction of beta-hairpin motifs; however, the parameters used by these methods were primarily based on the amino acid sequence. Here, we propose a novel model for predicting beta-hairpin structure based on chemical shifts. First, we analyzed the statistical distribution of the chemical shifts of six nuclei in non-beta-hairpin and beta-hairpin motifs. Second, we used these chemical shifts as features in combination with three algorithms to predict beta-hairpin structure. The best prediction was achieved with the quadratic discriminant analysis algorithm: a sensitivity of 92%, a specificity of 94%, and a Matthews correlation coefficient of 0.85, which is clearly superior to the same method applied to the 20 amino acid compositions in three-fold cross-validation. Our findings show that chemical shifts are effective parameters for beta-hairpin prediction and suggest that quadratic discriminant analysis is a powerful algorithm for this task.
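
A small sketch of the classification stage on synthetic stand-in data: six chemical-shift features per example, quadratic discriminant analysis, and the three reported metrics estimated by three-fold cross-validation. The data here are random and only illustrate the workflow, not the published features:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import matthews_corrcoef, confusion_matrix
from sklearn.model_selection import cross_val_predict

# toy stand-in for chemical-shift features of six nuclei per fragment
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (100, 6)),    # non-beta-hairpin
               rng.normal(0.8, 1.0, (100, 6))])   # beta-hairpin
y = np.repeat([0, 1], 100)

pred = cross_val_predict(QuadraticDiscriminantAnalysis(), X, y, cv=3)
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("sensitivity", tp / (tp + fn))
print("specificity", tn / (tn + fp))
print("MCC", matthews_corrcoef(y, pred))
```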

9.
IRBM 2020, 41(4): 229-239
Feature selection algorithms are a cornerstone of machine learning: as the number of samples and sample properties grows, they identify the significant features. Their general purpose is to select the properties most relevant to the data classes and thereby increase classification performance, which means features can be selected on the basis of their classification performance. In this study, we developed a feature selection algorithm, P-Score, based on the classification performance of decision support vectors; the method can work according to two different selection criteria. We tested the classification performance of the features selected with P-Score using three different classifiers, and we compared P-Score against 13 feature selection algorithms from the literature. The results indicate that P-Score is a feature selection method that can be used in the field of machine learning.
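
The P-Score criterion itself is not specified in the abstract; as an illustrative stand-in, the sketch below ranks features by the cross-validated accuracy of a support vector classifier trained on each feature alone, which captures the idea of selecting features by their individual classification performance:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def per_feature_scores(X, y, cv=5):
    """Score each feature by the cross-validated accuracy of a classifier
    trained on that feature alone, then rank features by the score.
    The exact P-Score criterion is not given in the abstract, so this
    is only an illustrative stand-in for performance-based selection."""
    scores = [cross_val_score(SVC(kernel="linear"), X[:, [j]], y, cv=cv).mean()
              for j in range(X.shape[1])]
    return np.argsort(scores)[::-1], np.array(scores)

# toy usage: 2 informative features among 10
rng = np.random.default_rng(3)
X = rng.normal(size=(120, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
order, scores = per_feature_scores(X, y)
print("ranked features:", order[:5])
```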

10.
Identification of small nucleolar RNAs (snoRNAs) in genomic sequences has been challenging due to the relative paucity of sequence features. Many current prediction algorithms rely on detection of snoRNA motifs complementary to target sites in snRNAs and rRNAs. However, recent discovery of snoRNAs without apparent targets requires development of alternative prediction methods. We present an approach that combines rule-based filters and a Bayesian Classifier to identify a class of snoRNAs (H/ACA) without requiring target sequence information. It takes advantage of unique attributes of their genomic organization and improved species-specific motif characterization to predict snoRNAs that may otherwise be difficult to discover. Searches in the genomes of Caenorhabditis elegans and the closely related Caenorhabditis briggsae suggest that our method performs well compared to recent benchmark algorithms. Our results illustrate the benefits of training gene discovery engines on features restricted to particular phylogenetic groups and the utility of incorporating diverse data types in gene prediction.
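
A minimal sketch of the two-stage idea: a rule-based filter followed by a Bayesian classification step. The candidate attributes used here (candidate length plus motif scores for the H and ACA boxes) and their thresholds are illustrative assumptions, not the published feature set:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def rule_filter(cands):
    """Keep candidates whose structural attributes pass simple rules.
    Feature layout (length, H-box score, ACA-box score) and thresholds
    are illustrative assumptions, not the published rules."""
    length, h_box, aca_box = cands[:, 0], cands[:, 1], cands[:, 2]
    keep = (length > 100) & (length < 200) & (h_box > 0.5) & (aca_box > 0.5)
    return cands[keep], keep

# toy candidates: [length, H-box motif score, ACA-box motif score]
rng = np.random.default_rng(4)
cands = np.column_stack([rng.uniform(50, 250, 300),
                         rng.uniform(0, 1, 300),
                         rng.uniform(0, 1, 300)])
labels = rng.integers(0, 2, 300)                 # stand-in annotations

filtered, keep = rule_filter(cands)
clf = GaussianNB().fit(filtered, labels[keep])   # Bayesian scoring stage
print(clf.predict_proba(filtered[:3]))
```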

11.
Objective: Long non-coding RNAs play important roles in heredity, metabolism, and the regulation of gene expression. However, traditional experimental methods for resolving RNA tertiary structure are time-consuming, expensive, and technically demanding, and computational prediction of RNA tertiary structure has seen no breakthrough in the past decade. New prediction algorithms are therefore needed to predict RNA tertiary structure accurately; to this end, this work develops a base contact map prediction method that can improve the accuracy of RNA tertiary structure prediction. Methods: To exploit the physicochemical features of RNA, we apply deep learning algorithms combining multi-layer fully convolutional and recurrent neural networks to predict the contact probability between RNA bases, and use an attention mechanism to capture the interdependence between bases in the RNA sequence. Results: By combining multi-layer neural networks with the attention mechanism, the method effectively captures both local and global information in the RNA features, improving the robustness and generalization ability of the model. Evaluation shows that the proposed model achieves prediction accuracies of 0.84, 0.82, 0.82, and 0.75 for contact maps at the four standard cutoffs of sequence length L (L/10, L/5, L/2, and L), respectively. Conclusion: The attention-based deep learning prediction algorithm improves the accuracy of RNA base contact map prediction and can thereby assist RNA tertiary structure prediction.
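
A small sketch of the evaluation described in the results: precision of the top L/k predicted contacts for k = 10, 5, 2, 1, computed on the upper triangle of a symmetric contact matrix. The prediction and truth matrices below are synthetic and only demonstrate the metric:

```python
import numpy as np

def top_lk_precision(pred, truth, k):
    """Precision of the top L/k predicted base-base contacts.

    pred, truth: (L, L) symmetric matrices of contact probabilities and
    0/1 ground-truth contacts. Only the upper triangle is scored; the
    cutoffs L/10, L/5, L/2, L in the abstract correspond to k = 10, 5, 2, 1.
    """
    L = pred.shape[0]
    iu = np.triu_indices(L, k=1)
    order = np.argsort(pred[iu])[::-1][: max(L // k, 1)]
    return truth[iu][order].mean()

# toy usage with a noisy but informative score matrix
rng = np.random.default_rng(5)
L = 60
truth = (rng.random((L, L)) < 0.05).astype(int)
truth = np.triu(truth, 1); truth += truth.T
pred = truth * 0.8 + rng.random((L, L)) * 0.3
pred = (pred + pred.T) / 2
for k in (10, 5, 2, 1):
    print(f"top L/{k} precision:", top_lk_precision(pred, truth, k))
```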

12.
Prediction of both conserved and nonconserved microRNA targets in animals
MOTIVATION: MicroRNAs (miRNAs) are involved in many diverse biological processes and they may potentially regulate the functions of thousands of genes. However, one major issue in miRNA studies is the lack of bioinformatics programs to accurately predict miRNA targets. Animal miRNAs have limited sequence complementarity to their gene targets, which makes it challenging to build target prediction models with high specificity. RESULTS: Here we present a new miRNA target prediction program based on support vector machines (SVMs) and a large microarray training dataset. By systematically analyzing public microarray data, we have identified statistically significant features that are important to target downregulation. Heterogeneous prediction features have been non-linearly integrated in an SVM machine learning framework for the training of our target prediction model, MirTarget2. About half of the predicted miRNA target sites in human are not conserved in other organisms. Our prediction algorithm has been validated with independent experimental data for its improved performance on predicting a large number of miRNA down-regulated gene targets. AVAILABILITY: All the predicted targets were imported into an online database miRDB, which is freely accessible at http://mirdb.org.
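
A hedged sketch of the training setup: heterogeneous per-site features are integrated non-linearly by an RBF-kernel SVM that outputs a down-regulation probability. The feature columns below are synthetic placeholders for the real MirTarget2 features (seed pairing, site accessibility, conservation, and so on):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# synthetic stand-ins for per-site prediction features and labels
rng = np.random.default_rng(6)
X = rng.normal(size=(500, 8))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# non-linear integration of heterogeneous features, as described in the abstract
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X[:400], y[:400])
print("held-out accuracy:", model.score(X[400:], y[400:]))
print("down-regulation probabilities:", model.predict_proba(X[400:405])[:, 1])
```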

13.
Mourad R, Sinoquet C, Dina C, Leray P. PLoS ONE 2011, 6(12): e27320
Linkage disequilibrium study represents a major issue in statistical genetics, as it plays a fundamental role in gene mapping and helps us learn more about human history. The complex structure of linkage disequilibrium makes its exploratory data analysis essential yet challenging. Visualization methods, such as the triangular heat map implemented in Haploview, provide simple and useful tools to help understand complex genetic patterns, but remain insufficient to fully describe them. Probabilistic graphical models have been widely recognized as a powerful formalism allowing a concise and accurate modeling of dependences between variables. In this paper, we propose a method for short-range, long-range and chromosome-wide linkage disequilibrium visualization using forests of hierarchical latent class models. Thanks to its hierarchical nature, our method is shown to provide the geneticist with a compact view of both pairwise and multilocus linkage disequilibrium spatial structures. In addition, a multilocus linkage disequilibrium measure has been designed to evaluate linkage disequilibrium within the hierarchy's clusters. To learn the proposed model, a new scalable algorithm is presented. It constrains the dependence scope, relying on physical positions, and is able to deal with more than one hundred thousand single nucleotide polymorphisms. The proposed algorithm is fast and does not require phased genotype data.
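
For reference, the basic pairwise linkage disequilibrium measure r^2 between two biallelic SNPs can be computed from phased haplotypes as below; note the cited method goes further, modeling multilocus structure on unphased genotypes, so this is only the baseline quantity being visualized:

```python
import numpy as np

def r_squared(hapA, hapB):
    """Pairwise linkage disequilibrium r^2 between two biallelic SNPs.

    hapA, hapB: 0/1 arrays of the same length, one entry per haplotype.
    The cited method handles unphased genotypes and multilocus
    structure; this phased, pairwise computation is only a baseline.
    """
    pA, pB = hapA.mean(), hapB.mean()
    pAB = (hapA * hapB).mean()
    D = pAB - pA * pB
    return D * D / (pA * (1 - pA) * pB * (1 - pB))

rng = np.random.default_rng(7)
hapA = rng.integers(0, 2, 1000)
hapB = np.where(rng.random(1000) < 0.9, hapA, 1 - hapA)   # correlated SNP
print(r_squared(hapA, hapB))
```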

14.
Methods used for the detection and subtyping of Listeria monocytogenes
Listeria monocytogenes is an important foodborne pathogen responsible for non-invasive and invasive diseases in the elderly, pregnant women, neonates and immunocompromised populations. This bacterium has many similarities with other non-pathogenic Listeria species, which makes its detection from food and environmental samples challenging. Subtyping of L. monocytogenes strains can prove to be crucial in epidemiological investigations, source tracking contamination from food processing plants and determining evolutionary relationships between different strains. In recent years there has been a shift towards the use of molecular subtyping. This has led to the development of new subtyping techniques such as multi-locus variable number tandem repeat analysis (MLVA) and multi-locus sequence typing (MLST). This review focuses on the available methods for Listeria detection, including immuno-based techniques and the more recently developed molecular methods and analytical techniques such as matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS). It also includes a comparison and critical analysis of the available phenotypic and genotypic subtyping techniques that have been investigated for L. monocytogenes.

15.
Aim: The aim of this work is to present a method of beam weight and wedge angle optimization for patients with prostate cancer. Background: 3D-CRT is usually realized with forward planning based on a trial-and-error method. Several authors have published methods of beam weight optimization applicable to 3D-CRT, but none of these methods is in common use. Materials and methods: Optimization is based on the assumption that the best plan is achieved when the dose gradient at the ICRU point is equal to zero. Our optimization algorithm requires the beam quality index, depth of maximum dose, profiles of wedged fields, and the maximum dose to the femoral heads. The method was tested on 10 patients with prostate cancer treated with the 3-field technique. Optimized plans were compared with plans prepared by 12 experienced planners. The standard deviation of the dose in the target volume and the minimum and maximum doses were analyzed. Results: The quality of plans obtained with the proposed optimization algorithm was comparable to that of plans prepared by experienced planners. The mean difference in target dose standard deviation was 0.1% in favor of the plans prepared by planners for optimization of beam weights and wedge angles. Introducing a correction factor for the patient body outline in the dose gradient at the ICRU point improved dose distribution homogeneity: on average, a 0.1% lower standard deviation was achieved with the optimization algorithm. No significant difference in the mean dose-volume histogram for the rectum was observed. Conclusions: Optimization greatly shortens planning time. The average planning time was 5 minutes for forward planning and less than a minute for computer optimization.
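
A toy linear-algebra sketch of the zero-gradient idea: choose beam weights so that the weighted sum of per-beam dose gradients at the ICRU reference point is driven toward zero while the weights are normalized. The gradient numbers are invented for illustration; a real implementation would take them from the treatment planning system and also handle wedge angles:

```python
import numpy as np

def optimize_weights(gradients):
    """Find beam weights that drive the total dose gradient at the ICRU
    reference point toward zero, with weights normalized to sum to 1.

    gradients: (3, n_beams) array; column i is the dose gradient
    contributed by beam i at the ICRU point per unit weight. Solved as
    a soft least-squares problem; a sketch, not the published method.
    """
    n = gradients.shape[1]
    A = np.vstack([gradients, np.ones((1, n))])   # zero-gradient rows + normalization row
    b = np.array([0.0, 0.0, 0.0, 1.0])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# toy 3-field setup: two lateral fields with opposing lateral gradients
# and one anterior field (values are invented)
G = np.array([[ 1.0, -1.0,  0.1],    # x (lateral)
              [ 0.2,  0.2, -0.8],    # y (anterior-posterior)
              [ 0.0,  0.0,  0.0]])   # z (superior-inferior), negligible here
w = optimize_weights(G)
print("weights:", np.round(w, 3), "residual gradient:", np.round(G @ w, 3))
```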

16.
Target tracking with wireless sensor networks (WSNs) has been a hot research topic recently. Many works have been done to improve the algorithms for localization and prediction of a moving target with smart sensors. However, the results are frequently difficult to implement because of hardware limitations. In this paper, we propose a practical distributed sensor activation algorithm (DSA2) that enables reliable tracking with the simplest binary-detection sensors. In this algorithm, all sensors in the field are activated with a probability to detect targets or sleep to save energy, the schedule of which depends on their neighbor sensors’ behaviors. Extensive simulations are also shown to demonstrate the effectiveness of the proposed algorithm. Great improvement in terms of energy-quality tradeoff and excellent robustness of the algorithm are also emphasized in the simulations.
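
The exact DSA2 schedule is not given in the abstract; the sketch below only illustrates the neighbor-dependent idea, with each sleeping sensor waking with a probability that shrinks as more of its neighbors are already active:

```python
import numpy as np

def activation_step(positions, active, comm_range=15.0, target_active=2):
    """One round of probabilistic sensor activation.

    Each sleeping sensor counts active neighbors within comm_range and
    wakes with a probability that decreases as that count approaches
    target_active, trading detection quality against energy. This rule
    is an illustration of the neighbor-dependent idea, not DSA2 itself.
    """
    rng = np.random.default_rng()
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
    neighbors_active = ((dists < comm_range) & active[None, :]).sum(axis=1) - active
    p_wake = np.clip(1.0 - neighbors_active / target_active, 0.0, 1.0)
    return active | ((~active) & (rng.random(len(active)) < p_wake))

# toy field: 50 binary-detection sensors in a 100 m x 100 m area
rng = np.random.default_rng(8)
pos = rng.uniform(0, 100, size=(50, 2))
active = np.zeros(50, dtype=bool)
for _ in range(3):
    active = activation_step(pos, active)
print("active sensors:", active.sum())
```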

17.
Many studies argue that integrating multiple cues in an adaptive way increases tracking performance. However, how to define adaptiveness and how to realize it remain open issues. On the premise that the model with optimal discriminative ability is also optimal for tracking the target, this work realizes adaptiveness and robustness through the optimization of multi-cue integration models. Specifically, based on prior knowledge and the current observation, a set of discrete samples is generated to approximate the foreground and background distributions. With the goal of optimizing the classification margin, an objective function is defined, and the appearance model is optimized by introducing optimization algorithms. The proposed optimized appearance model framework is embedded into a particle filter for a field test, and it is demonstrated to be robust against various kinds of complex tracking conditions. This model is general and can be easily extended to other parameterized multi-cue models.

18.
Population-Based Reversible Jump Markov Chain Monte Carlo
We present an extension of population-based Markov chain Monte Carlo to the transdimensional case. A major challenge is that of simulating from high- and transdimensional target measures. In such cases, Markov chain Monte Carlo methods may not adequately traverse the support of the target; the simulation results will be unreliable. We develop population methods to deal with such problems, and give a result proving the uniform ergodicity of these population algorithms, under mild assumptions. This result is used to demonstrate the superiority, in terms of convergence rate, of a population transition kernel over a reversible jump sampler for a Bayesian variable selection problem. We also give an example of a population algorithm for a Bayesian multivariate mixture model with an unknown number of components. This is applied to gene expression data of 1000 data points in six dimensions and it is demonstrated that our algorithm outperforms some competing Markov chain samplers. In this example, we show how to combine the methods of parallel chains (Geyer, 1991), tempering (Geyer & Thompson, 1995), snooker algorithms (Gilks et al., 1994), constrained sampling and delayed rejection (Green & Mira, 2001).

19.
In recent years, more and more high-throughput data sources useful for protein complex prediction have become available (e.g., gene sequence, mRNA expression, and interactions). The integration of these different data sources can be challenging. Recently, it has been recognized that kernel-based classifiers are well suited for this task. However, the different kernels (data sources) are often combined using equal weights. Although several methods have been developed to optimize kernel weights, no large-scale example of an improvement in classifier performance has been shown yet. In this work, we employ an evolutionary algorithm to determine weights for a larger set of kernels by optimizing a criterion based on the area under the ROC curve. We show that setting the right kernel weights can indeed improve performance. We compare this to the existing kernel weight optimization methods (i.e., (regularized) optimization of the SVM criterion or aligning the kernel with an ideal kernel) and find that these do not result in a significant performance improvement and can even cause a decrease in performance. Results also show that an expert approach of assigning high weights to features with high individual performance is not necessarily the best strategy.
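
A hedged sketch of the approach: precomputed kernels are combined with non-negative weights, an SVM on the combined kernel is scored by held-out AUC, and a simple evolutionary loop searches the weight space. The mutation and selection operators here are minimal choices of our own, and scoring on a single held-out split (rather than cross-validation) keeps the example short:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def auc_of_weights(weights, kernels, y, train, test):
    """Combine precomputed kernels with non-negative weights and score a
    held-out AUC with an SVM, mirroring the abstract's criterion."""
    K = sum(w * k for w, k in zip(weights, kernels))
    clf = SVC(kernel="precomputed").fit(K[np.ix_(train, train)], y[train])
    scores = clf.decision_function(K[np.ix_(test, train)])
    return roc_auc_score(y[test], scores)

def evolve_weights(kernels, y, train, test, pop=12, gens=15, seed=0):
    """A simple (mu+lambda)-style evolutionary search over kernel weights;
    the specific operators are our own minimal choice, not the paper's."""
    rng = np.random.default_rng(seed)
    best_w, best_auc = None, -np.inf
    parents = rng.random((pop, len(kernels)))
    for _ in range(gens):
        children = np.abs(parents + rng.normal(scale=0.1, size=parents.shape))
        pool = np.vstack([parents, children])
        fitness = np.array([auc_of_weights(w, kernels, y, train, test) for w in pool])
        order = np.argsort(fitness)[::-1]
        parents = pool[order[:pop]]
        if fitness[order[0]] > best_auc:
            best_auc, best_w = fitness[order[0]], pool[order[0]]
    return best_w / best_w.sum(), best_auc

# toy data: two linear kernels built from different "data sources"
rng = np.random.default_rng(9)
X1, X2 = rng.normal(size=(200, 5)), rng.normal(size=(200, 3))
y = (X1[:, 0] + 0.1 * X2[:, 0] > 0).astype(int)
kernels = [X1 @ X1.T, X2 @ X2.T]
train, test = np.arange(150), np.arange(150, 200)
w, auc = evolve_weights(kernels, y, train, test)
print("weights:", np.round(w, 2), "AUC:", round(auc, 3))
```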

20.
Complex proteoforms contain various primary structural alterations resulting from variations in genes, RNA, and proteins. Top-down mass spectrometry is commonly used for analyzing complex proteoforms because it provides whole sequence information of the proteoforms. Proteoform identification by top-down mass spectral database search is a challenging computational problem because the types and/or locations of some alterations in target proteoforms are in general unknown. Although spectral alignment and mass graph alignment algorithms have been proposed for identifying proteoforms with unknown alterations, they are extremely slow to align millions of spectra against tens of thousands of protein sequences in high-throughput proteome-level analyses. Many software tools in this area combine efficient protein sequence filtering algorithms and spectral alignment algorithms to speed up database search. As a result, the performance of these tools heavily relies on the sensitivity and efficiency of their filtering algorithms. Here, we propose two efficient approximate spectrum-based filtering algorithms for proteoform identification. We evaluated the performances of the proposed algorithms and four existing ones on simulated and real top-down mass spectrometry data sets. Experiments showed that the proposed algorithms outperformed the existing ones for complex proteoform identification. In addition, combining the proposed filtering algorithms and mass graph alignment algorithms identified many proteoforms missed by ProSightPC in proteome-level proteoform analyses.
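
As a rough illustration of spectrum-based filtering (not the two algorithms proposed in the cited work), the sketch below scores each candidate protein by the number of observed fragment masses that fall within a tolerance of its theoretical prefix masses, and keeps the top candidates for the expensive alignment stage:

```python
import numpy as np

def count_matches(frag_masses, prefix_masses, tol=0.02):
    """Count observed fragment masses within `tol` Da of any theoretical
    prefix mass. A shared-mass count like this is a common filtering
    score; it is only a stand-in for the paper's filters."""
    prefix_masses = np.sort(prefix_masses)
    idx = np.searchsorted(prefix_masses, frag_masses)
    hits = 0
    for m, i in zip(frag_masses, idx):
        lo = prefix_masses[max(i - 1, 0)]
        hi = prefix_masses[min(i, len(prefix_masses) - 1)]
        hits += min(abs(m - lo), abs(m - hi)) <= tol
    return hits

def filter_candidates(frag_masses, proteome_prefix_masses, top_n=5):
    """Rank proteins by shared-mass count and keep the top_n candidates
    for the (expensive) spectral/mass-graph alignment stage."""
    scores = [count_matches(frag_masses, p) for p in proteome_prefix_masses]
    return np.argsort(scores)[::-1][:top_n]

# toy usage: 3 "proteins" with random prefix masses; spectrum drawn from protein 1
rng = np.random.default_rng(10)
proteome = [np.cumsum(rng.uniform(57, 186, 40)) for _ in range(3)]
spectrum = proteome[1][rng.choice(40, 15, replace=False)] + rng.normal(0, 0.005, 15)
print("top candidates:", filter_candidates(spectrum, proteome, top_n=2))
```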
