Similar Literature
1.
Face recognition is challenging, especially when images from different persons appear similar to one another owing to variations in illumination, expression, and occlusion. If sufficient training images of each person are available, spanning the facial variations of that person under the testing conditions, sparse representation based classification (SRC) achieves very promising results. In many applications, however, face recognition encounters the small-sample-size problem: only a small number of training images is available for each person. In this paper, we present a novel face recognition framework that combines low-rank and sparse error matrix decomposition with sparse coding techniques (LRSE+SC). First, low-rank matrix recovery is applied to decompose the face images of each class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual serves as a class-specific dictionary and captures the discriminative features of that individual, while the sparse error matrix represents intra-class variations such as illumination and expression changes. Second, we combine the low-rank part (representative basis) of each person into a supervised dictionary and merge the sparse error matrices of all individuals into a within-individual variant dictionary, which can represent the possible variations between the testing and training images. These two dictionaries are then used to code the query image. The within-individual variant dictionary is shared by all subjects and serves only to explain the lighting conditions, expressions, and occlusions of the query image, not to discriminate among subjects. Finally, a reconstruction-based scheme is adopted for face recognition. Because the within-individual dictionary is introduced, LRSE+SC can handle corrupted training data as well as the situation in which not all subjects have enough samples for training.
Experimental results show that our method achieves state-of-the-art results on the AR, FERET, FRGC, and LFW databases.
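The per-class low-rank plus sparse-error decomposition described above can be illustrated with a generic robust-PCA solver (inexact augmented Lagrange multipliers). This is a sketch with default parameter choices, not the authors' implementation:

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular-value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def low_rank_sparse_split(D, lam=None, tol=1e-7, max_iter=500):
    """Split D into a low-rank part L and a sparse error part E via
    inexact augmented Lagrange multipliers (robust PCA sketch)."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D)
    Y = np.zeros_like(D)                      # Lagrange multiplier
    E = np.zeros_like(D)
    mu = 1.25 / np.linalg.norm(D, 2)          # spectral-norm-based step
    for _ in range(max_iter):
        L = svd_threshold(D - E + Y / mu, 1.0 / mu)
        E = shrink(D - L + Y / mu, lam / mu)
        R = D - L - E
        Y += mu * R
        mu = min(mu * 1.5, 1e7)
        if np.linalg.norm(R) / norm_D < tol:
            break
    return L, E

# Toy demo: a rank-2 "class matrix" corrupted by 5% sparse errors.
rng = np.random.default_rng(0)
L_true = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
E_true = np.zeros((40, 40))
idx = rng.choice(1600, size=80, replace=False)
E_true.flat[idx] = rng.standard_normal(80) * 10
L_hat, E_hat = low_rank_sparse_split(L_true + E_true)
```

On training matrices whose columns are vectorized face images of one person, `L_hat` would play the role of the class-specific dictionary and `E_hat` the intra-class variation part.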

2.
Influenza viruses have caused large losses of life around the world and continue to present a great public health challenge. Antigenic characterization based on the hemagglutination inhibition (HI) assay is one of the routine procedures in influenza vaccine strain selection. However, the HI assay is only a crude experiment reflecting the antigenic correlations among test antigens (viruses) and reference antisera (antibodies). Moreover, antigenic characterization is usually based on more than one HI dataset, and combining multiple datasets yields an incomplete HI matrix with many unobserved entries. This paper proposes a new computational framework for constructing an influenza antigenic cartography from such an incomplete matrix, which we refer to as Matrix Completion-Multidimensional Scaling (MC-MDS). In this approach, we first reconstruct the HI matrix of viruses and antibodies using low-rank matrix completion, and then generate the two-dimensional antigenic cartography using multidimensional scaling. Moreover, for influenza HI tables affected by herd immunity (such as those from human influenza viruses), we propose a temporal model to reduce the inherent temporal bias that herd immunity introduces. Applying our method to HI datasets containing H3N2 influenza A viruses isolated from 1968 to 2003, we identified eleven clusters of antigenic variants, representing all major antigenic drift events in these 36 years. Our results show that both the completed HI matrix and the antigenic cartography obtained via MC-MDS are useful in identifying influenza antigenic variants and can thus facilitate influenza vaccine strain selection. The webserver is available at http://sysbio.cvm.msstate.edu/AntigenMap.
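The multidimensional-scaling half of the MC-MDS pipeline can be sketched as classical (Torgerson) MDS; for simplicity this sketch assumes the distance matrix has already been completed and is exactly Euclidean:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: embed points so their pairwise
    Euclidean distances reproduce the distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:dim]          # largest eigenvalues first
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))

# Demo: recover a 2-D "antigenic map" from exact pairwise distances.
rng = np.random.default_rng(1)
X = rng.standard_normal((12, 2))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D, dim=2)
D_hat = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
```

The embedding is only unique up to rotation and reflection, so the check is on pairwise distances rather than coordinates.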

3.
Chinarov V, Menzinger M. Bio Systems 2003, 68(2-3):147-153
An attractor bistable gradient neural-like network (BGN), described in Chinarov and Menzinger (BioSystems 55 (2000) 137), is applied to the restoration of unknown patterns that have been heavily corrupted by multiplicative and additive Gaussian white noise. This is made possible by the competitive advantages of the BGN: good generalization capabilities, the existence of a unique lowest-energy attractor when several patterns are stored by the network, and fast, guaranteed convergence to this attractor.

4.
Recently, there has been growing interest in the sparse representation of signals over learned, overcomplete dictionaries. Instead of using fixed transforms such as wavelets and their variants, an alternative is to train a redundant dictionary from the image itself. This paper presents a novel de-speckling scheme for medical ultrasound and speckle-corrupted photographic images using sparse representations over a learned overcomplete dictionary. It is shown that the proposed algorithm can effectively remove speckle by combining an existing pre-processing stage with an adaptive dictionary subsequently learned for sparse representation. Extensive simulations demonstrate the effectiveness of the proposed filter for speckle removal, both visually and quantitatively.
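Sparse coding of a patch over an overcomplete dictionary is typically solved with a greedy pursuit. Below is a minimal orthogonal-matching-pursuit sketch on synthetic data; the dictionary and signal are made up, and this is a generic OMP, not the authors' specific solver or learned dictionary:

```python
import numpy as np

def omp(Dict, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of Dict
    (columns, assumed unit-norm) to approximate y."""
    residual = y.copy()
    support = []
    x = np.zeros(Dict.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(Dict.T @ residual)))  # best-correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(Dict[:, support], y, rcond=None)
        residual = y - Dict[:, support] @ coef
    x[support] = coef
    return x

# Demo: recover a 3-sparse code over a random unit-norm dictionary.
rng = np.random.default_rng(2)
Dict = rng.standard_normal((50, 80))
Dict /= np.linalg.norm(Dict, axis=0)
x_true = np.zeros(80)
x_true[[5, 17, 42]] = [1.5, -2.0, 1.0]
y = Dict @ x_true
x_hat = omp(Dict, y, k=3)
```

In a de-speckling pipeline, `y` would be a (pre-processed) image patch and `Dict` the trained dictionary; the patch is then reconstructed from its sparse code.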

5.
In this paper, an image restoration algorithm is proposed to identify a noncausal blur function. Image degradation processes include both linear and nonlinear phenomena. A neural network model combining an adaptive auto-associative network with a random Gaussian process is proposed to restore the blurred image and the blur function simultaneously. The noisy, blurred images are modeled as continuous associative networks: the auto-associative part determines the image model coefficients, and the hetero-associative part determines the blur function of the system. The self-organizing structure provides a potential solution to the blind image restoration problem. Estimation and restoration are implemented with an iterative gradient-based algorithm that minimizes the error function.

6.
We have investigated the restoration of electron micrographs exhibiting blurring due to drift and rotation. Drift blurring arises in micrographs of a specimen that is moving relative to the image plane. A related problem is rotational blurring, which arises in micrographs of thin sections of helical particles viewed in cross section: the twist of the particle within the finite thickness of the section causes the image to appear rotationally blurred about the helical axis. Restoration algorithms were evaluated by applying them to blurred model images degraded by additive Gaussian noise. Model images were also used to investigate how an incorrect estimate of the point spread function describing the blur would affect the restoration. Where necessary, images were geometrically transformed to a space in which the point spread function of the blur can be considered linear and space-invariant, since the restoration algorithms are greatly simplified under these conditions. For the rotationally blurred images, this was accomplished by transforming the image to polar coordinates. The restoration techniques were successfully applied to blurred micrographs of bacteriophage T4 and crystals of catalase; the quality of the restoration was judged by comparing the restored images to undegraded images. Application to micrographs of rotationally blurred cross sections of helical macrofibers of sickle hemoglobin reduced the amount of rotational blurring.

7.
Denoising is a fundamental early stage in 2-DE image analysis that strongly influences spot detection and pixel-based methods. A novel nonlinear adaptive spatial filter (median-modified Wiener filter, MMWF) is compared here with five well-established denoising techniques (median, Wiener, Gaussian, and polynomial Savitzky-Golay filters; wavelet denoising), using fuzzy-set evaluation to suggest the best denoising approach to use in practice. Although the median filter and wavelet denoising achieved the best performance for spike and Gaussian noise, respectively, they are unsuitable for the simultaneous removal of different types of noise, because their best settings are noise-dependent. Conversely, the MMWF, which came second in each single denoising category, was evaluated as the best filter for global denoising, its best setting being invariant to the type of noise. In addition, the median filter eroded the edges of isolated spots and filled the space between close-set spots, whereas the MMWF, thanks to a novel filter effect (the drop-off effect), does not suffer from this erosion problem, preserves the morphology of close-set spots, and avoids spot and spike fuzzification, an aberration encountered with the Wiener filter. In our tests, the MMWF was assessed as the best choice when the goal is to minimize spot edge aberrations while removing spike and Gaussian noise.
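On our reading, the MMWF is the classical local adaptive Wiener filter with the local mean replaced by a local median as the filter's center estimate. A pure-NumPy sketch under that assumption (the window size and global noise estimate are illustrative choices, not the paper's settings):

```python
import numpy as np

def local_stat(img, size, stat):
    """Apply `stat` over size x size neighborhoods (reflect-padded)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    return stat(windows, axis=(-2, -1))

def adaptive_wiener(img, size=5, noise_var=None, center="mean"):
    """Local adaptive Wiener filter; center='median' gives the
    median-modified variant (our reading of the MMWF)."""
    mu = local_stat(img, size, np.median if center == "median" else np.mean)
    m = local_stat(img, size, np.mean)
    var = local_stat(img ** 2, size, np.mean) - m ** 2
    if noise_var is None:
        noise_var = float(np.mean(var))        # crude global noise estimate
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mu + gain * (img - mu)              # shrink toward local center

# Demo: a smooth synthetic "spot" image corrupted by Gaussian noise.
rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:64, 0:64]
clean = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 100.0)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = adaptive_wiener(noisy, size=5, center="median")
```

In flat regions the gain is near zero and the output approaches the local median, which is what suppresses both spike and Gaussian noise in one pass.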

8.
MOTIVATION: The problem of phylogenetic inference from datasets including incomplete or uncertain entries is among the most relevant issues in systematic biology. In this paper, we propose a new method for reconstructing phylogenetic trees from partial distance matrices. The method combines the four-point condition and the ultrametric inequality with a weighted least-squares approximation to handle missing entries. It can be applied to infer phylogenies from evolutionary data containing missing or uncertain information, for instance when observed nucleotide or protein sequences contain gaps or missing entries. RESULTS: In a number of simulations involving incomplete datasets, the proposed method outperformed the well-known Ultrametric and Additive procedures. The new method also generally outperformed all the other competing approaches, including Triangle and Fitch, the latter being the most popular least-squares method for reconstructing phylogenies. We illustrate the usefulness of the method by analyzing two well-known phylogenies derived from complete mammalian mtDNA sequences. Some theoretical results concerning the NP-hardness of ordinary and weighted least-squares fitting of a phylogenetic tree to a partial distance matrix are also established. AVAILABILITY: The T-Rex package including this method is freely available for download at http://www.info.uqam.ca/~makarenv/trex.html
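The four-point condition used above can be checked directly on a complete distance matrix: for every four taxa, the two largest of the three pairwise sums d(i,j)+d(k,l), d(i,k)+d(j,l), d(i,l)+d(j,k) must be equal. A brute-force sketch on a made-up example:

```python
import numpy as np
from itertools import combinations

def is_additive(D, tol=1e-9):
    """Brute-force test of the four-point condition (tree additivity)."""
    n = D.shape[0]
    for i, j, k, l in combinations(range(n), 4):
        s = sorted([D[i, j] + D[k, l], D[i, k] + D[j, l], D[i, l] + D[j, k]])
        if s[2] - s[1] > tol:          # the two largest sums must tie
            return False
    return True

# Distances on the 4-leaf tree ((a,b),(c,d)) with unit branch lengths
# and an internal edge of length 1: additive by construction.
D_tree = np.array([[0, 2, 3, 3],
                   [2, 0, 3, 3],
                   [3, 3, 0, 2],
                   [3, 3, 2, 0]], float)
D_bad = D_tree.copy()
D_bad[0, 2] = D_bad[2, 0] = 3.5        # breaks the four-point condition
```

This O(n^4) check is only for intuition; the paper's algorithm exploits the condition without enumerating all quadruples, and it additionally copes with missing entries.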

9.
We present a method for early forest fire detection from a satellite image using a "belonging probability" matrix image. Each line of the satellite image matrix is treated as a realization of a nonstationary random process in the thermal infra-red (TIR) spectral band, and each line is divided into very small intervals that can be regarded as stationary and ergodic, yielding an adequate mathematical model. Furthermore, the pixels of the satellite image are considered statistically independent, so any small interval of each line behaves, naturally, as stationary Gaussian noise. We therefore select the latter as the mathematical model for these intervals in a fire-free satellite image, and estimate the parameters of this Gaussian realization. When a fire occurs in the forest zone, these parameters are used to compute the probability that the observed interval belongs to the original fire-free image; this probability should be very small, because a fire in any forest can be considered a rare event. Finally, we present a matrix image of the inverse probability of each interval, which makes fires easier to observe.
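A minimal sketch of the "belonging probability" idea: fit a Gaussian to each short (assumed stationary) interval of an image line and flag pixels whose probability under that model is very small. The interval length, temperatures, and injected anomaly below are illustrative, not the paper's values:

```python
import numpy as np

def belonging_logprob(line, interval=16):
    """Per-pixel Gaussian log-probability, with mean/std estimated
    on short intervals assumed stationary and ergodic."""
    out = np.empty_like(line, dtype=float)
    for s in range(0, line.size, interval):
        seg = line[s:s + interval]
        mu, sigma = seg.mean(), seg.std() + 1e-9
        out[s:s + interval] = (-0.5 * ((seg - mu) / sigma) ** 2
                               - np.log(sigma * np.sqrt(2 * np.pi)))
    return out

# Demo: a background TIR line with one "fire" pixel injected.
rng = np.random.default_rng(4)
line = 300.0 + 2.0 * rng.standard_normal(256)   # ~300 K background
line[100] = 400.0                                # rare hot event
logp = belonging_logprob(line)
fire_idx = int(np.argmin(logp))                  # least probable pixel
```

Displaying the negative (or inverse) of this probability as an image is what makes the rare, low-probability fire pixels stand out.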

10.
Techniques for characterizing very small single-channel currents buried in background noise are described and tested on simulated data, to give confidence when they are applied to real data. Single-channel currents are represented as a discrete-time, finite-state, homogeneous Markov process, and the noise obscuring the signal is assumed to be white and Gaussian. The various signal model parameters, such as the Markov state levels and transition probabilities, are unknown. In addition to white Gaussian noise, the signal can be corrupted by deterministic interferences of known form but unknown parameters, such as a sinusoidal disturbance stemming from AC interference or a baseline drift owing to slowly developing liquid-junction potentials. To characterize a signal buried in such stochastic and deterministic interferences, the problem is first formulated in the framework of a hidden Markov model, and the expectation-maximization algorithm is then applied to obtain maximum likelihood estimates of the model parameters (state levels, transition probabilities), the signal, and the parameters of the deterministic disturbances. Using fictitious channel currents embedded in idealized noise, we first show that the signal processing technique can characterize the signal quite accurately even when the currents are as small as 5-10 fA. The statistics estimated by the technique include the amplitude, mean open and closed durations, open-time and closed-time histograms, dwell-time probabilities, and the transition probability matrix.
With a periodic interference composed, for example, of 50 Hz and 100 Hz components, or a linear baseline drift added to the segment containing channel currents and white noise, the parameters of the deterministic interference, such as the amplitude and phase of the sinusoidal wave or the rate of linear drift, as well as all the relevant statistics of the signal, are accurately estimated with the proposed algorithm. If the frequencies of the periodic interference are unknown, they too can be accurately estimated. Finally, we provide a technique by which channel currents originating from the sum of two or more independent single channels are decomposed so that each process can be characterized separately. This problem is also formulated as a hidden Markov model and solved with the expectation-maximization algorithm; the scheme relies on the fact that the transition matrix of the summed Markov process can be construed as a tensor product of the transition matrices of the individual processes.
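The hidden-Markov formulation above can be illustrated with the decoding half of the problem: given known state levels and transition probabilities, the Viterbi algorithm recovers the most likely open/closed sequence from the noisy record. This sketch uses made-up levels, rates, and noise, not the paper's EM estimator:

```python
import numpy as np

def viterbi_gaussian(obs, levels, trans, sigma, prior=None):
    """Most likely state path for a Markov chain observed in white
    Gaussian noise (log-domain Viterbi)."""
    n_states = len(levels)
    log_trans = np.log(trans)
    log_lik = -0.5 * ((obs[:, None] - np.asarray(levels)[None, :]) / sigma) ** 2
    delta = log_lik[0] + (np.log(prior) if prior is not None
                          else -np.log(n_states))
    back = np.zeros((len(obs), n_states), dtype=int)
    for t in range(1, len(obs)):
        scores = delta[:, None] + log_trans            # [prev_state, next_state]
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(n_states)] + log_lik[t]
    path = np.empty(len(obs), dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(len(obs) - 1, 0, -1):               # backtrack
        path[t - 1] = back[t, path[t]]
    return path

# Demo: a two-state channel (closed = 0, open = 1 pA) in Gaussian noise.
rng = np.random.default_rng(5)
trans = np.array([[0.95, 0.05], [0.10, 0.90]])
states = np.zeros(2000, dtype=int)
for t in range(1, 2000):
    states[t] = rng.choice(2, p=trans[states[t - 1]])
obs = np.array([0.0, 1.0])[states] + 0.4 * rng.standard_normal(2000)
decoded = viterbi_gaussian(obs, [0.0, 1.0], trans, sigma=0.4)
accuracy = float(np.mean(decoded == states))
```

In the paper's full method the levels and transition probabilities are themselves unknown and are estimated by EM (Baum-Welch); decoding with known parameters is only the inner step.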

11.
Objective: Accurately determining the three-dimensional structure of a protein from nuclear magnetic resonance (NMR) spectroscopy experiments is currently a hot topic in biophysics: proteins are essential components of living organisms, and knowing their spatial structure is crucial for studying their function, yet the severe scarcity of experimental data makes this a major challenge. Methods: In this paper, the protein structure determination problem is solved by recovering the distance matrix with matrix completion (MC) algorithms. First, an initial distance-matrix model is built; because of the lack of experimental data, this initial matrix is incomplete. An MC algorithm is then used to recover its missing entries, from which the full three-dimensional protein structure is obtained. To further test performance, four proteins with different topologies and six existing MC algorithms were selected, and the recovery quality of the algorithms was examined at different sampling rates and under different levels of noise. Results: Performance was evaluated through the mean and standard deviation of two key metrics, root-mean-square deviation (RMSD) and computation time; when the sampling rate and the noise factor were kept within a certain range, both the RMSD and its standard deviation reached very small values. The paper also compares the characteristics and advantages of the different algorithms in more detail; under exact sampling…

12.
Optical coherence tomography angiography (OCTA) is a widely applied tool for imaging microvascular networks with high spatial resolution and sensitivity. Because of limited imaging speed, artifacts caused by tissue motion can severely compromise visualization of the microvascular networks and quantification of OCTA images. In this article, we propose a deep-learning-based framework to correct motion artifacts and retrieve microvascular architectures. The method comprises two deep neural networks: the first subnet distinguishes motion-corrupted B-scan images within a volumetric dataset, and, based on its classification results, the artifacts are removed from the en face maximum-intensity-projection (MIP) OCTA image. To restore the vasculature disturbed by artifact removal, the second subnet, an inpainting neural network, reconnects the broken vascular networks. We applied the method to postprocess OCTA images of microvascular networks in mouse cortex in vivo. Both image comparison and quantitative analysis show that the proposed method significantly improves OCTA images by efficiently recovering microvasculature from overwhelming motion artifacts.
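The en face maximum-intensity projection (MIP) referenced above is simply a maximum taken along the depth axis of the OCTA volume; the axis ordering below (depth first) is an assumption for illustration:

```python
import numpy as np

def en_face_mip(volume, depth_axis=0):
    """En face maximum-intensity projection of an OCTA volume
    (depth x fast-scan x slow-scan ordering assumed)."""
    return np.max(volume, axis=depth_axis)

# Demo: a tiny synthetic volume with two bright "vessel" voxels.
vol = np.zeros((8, 4, 4))
vol[3, 1, 2] = 5.0
vol[6, 0, 0] = 2.0
mip = en_face_mip(vol)
```

Dropping the B-scans flagged as motion-corrupted before this projection is what leaves the dark stripes that the inpainting subnet then has to fill.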

13.
Exemplar-based algorithms are a popular technique for image inpainting. They have two important phases: deciding the filling-in order and selecting good exemplars. Traditional exemplar-based algorithms search source regions for suitable patches to fill in the missing parts, but they face a problem: improper selection of exemplars. To address this problem, we introduce an independent strategy based on an investigation of how patches propagate. We first define a new separated priority scheme that propagates geometry before synthesizing image textures, aiming to recover both image geometry and textures well. In addition, an automatic algorithm is designed to estimate the steps for the new separated priority definition. Compared with several competitive approaches, the new priority definition recovers image geometry and textures well.

14.
Introduction: The aim of this study was to determine the optimal image matrix and half-width of the Gaussian filter after iterative reconstruction of the PET image with point-spread-function (PSF) and time-of-flight (TOF) correction, based on measured recovery coefficient (RC) curves. The measured RC curves were compared with those from an older system that does not use PSF and TOF corrections. Materials and methods: The measurements were carried out on a NEMA IEC Body Phantom. We measured RC curves based on SUVmax and SUVA50 in source spheres of different diameters. The change in noise level for different reconstruction parameter settings and the relation between RC curves and administered activity were also evaluated. Results: With an increasing image matrix size and a smaller half-width of the post-reconstruction Gaussian filter, there was a significant increase in image noise and overestimation of the SUV. The local increase in SUV observed for certain filtrations and objects with a diameter below 13 mm was caused by the PSF correction. Decreasing the administered activity, while keeping the acquisition and reconstruction conditions the same, also led to overestimated SUV readings and to poorer reproducibility. Conclusion: This study proposes a suitable image matrix size and filtering for displaying PET images and measuring SUV. The benefits were demonstrated as improved image parameters for the newer instrument, found even with relatively strong filtration of the reconstructed images.

15.
Given a distance matrix M that specifies the pairwise evolutionary distances between n species, the phylogenetic tree reconstruction problem asks for an edge-weighted phylogenetic tree that satisfies M, if one exists. We study some extensions of this problem to rooted phylogenetic networks. Our main result is an O(n^2 log n)-time algorithm for determining whether there is an ultrametric galled network that satisfies M, and if so, constructing one. In fact, if such an ultrametric galled network exists, our algorithm is guaranteed to construct one containing the minimum possible number of nodes with more than one parent (hybrid nodes). We also prove that finding a largest possible submatrix M' of M such that there exists an ultrametric galled network that satisfies M' is NP-hard. Furthermore, we show that given an incomplete distance matrix (i.e. where some matrix entries are missing), it is also NP-hard to determine whether there exists an ultrametric galled network which satisfies it.
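The ultrametric condition the algorithm builds on is the three-point condition: for every triple of species, the two largest of the three pairwise distances must be equal. A brute-force check on a made-up example:

```python
import numpy as np
from itertools import combinations

def is_ultrametric(D, tol=1e-9):
    """Check the three-point condition: for every i, j, k the two
    largest of D[i,j], D[i,k], D[j,k] must be equal."""
    for i, j, k in combinations(range(D.shape[0]), 3):
        a, b, c = sorted([D[i, j], D[i, k], D[j, k]])
        if c - b > tol:
            return False
    return True

# Distances induced by a clock-like tree ((a,b),(c,d)): ultrametric.
U = np.array([[0, 2, 4, 4],
              [2, 0, 4, 4],
              [4, 4, 0, 2],
              [4, 4, 2, 0]], float)
V = U.copy()
V[0, 2] = V[2, 0] = 3.0    # breaks the triple (a, b, c)
```

The paper's contribution is, in effect, the generalization of this check to galled networks and to matrices with missing entries, where the problem becomes NP-hard.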

16.
MOTIVATION: Inferring networks of proteins from biological data is a central issue of computational biology. Most network inference methods, including Bayesian networks, take unsupervised approaches in which the network is entirely unknown at the outset and all edges must be predicted. A more realistic supervised framework, proposed recently, assumes that a substantial part of the network is known. We propose a new kernel-based method for supervised graph inference based on multiple types of biological datasets, such as gene expression, phylogenetic profiles, and amino acid sequences. Notably, our method assigns a weight to each type of dataset and thereby selects informative ones. Data selection is useful for reducing data collection costs; for example, when a similar network inference problem must be solved for another organism, the datasets excluded by our algorithm need not be collected. RESULTS: First, we formulate supervised network inference as a kernel matrix completion problem, in which the inference of edges boils down to estimating the missing entries of a kernel matrix. An expectation-maximization algorithm is then proposed to simultaneously infer the missing entries of the kernel matrix and the weights of the multiple datasets. By introducing the weights, we can integrate multiple datasets selectively and thereby exclude irrelevant and noisy ones. Our approach is tested favorably on two biological networks: a metabolic network and a protein interaction network. AVAILABILITY: Software is available on request.

17.
We measured recognition thresholds for incomplete figure perception (the Gollin test), which we regard as a visual masking problem. Digital image processing allows us to measure the spatial properties and spatial-frequency spectrum of the absent part of the image, treated as the mask. Using a noise paradigm, we measured the signal-to-noise ratio for incomplete figures. Recognition was worse the greater the spectral "similarity" between the figure and the "invisible" mask. At threshold, the spectrum of the fragmented image was equally similar to that of the "invisible" mask and of the complete image. We conclude that the recognition thresholds for Gollin stimuli reflect the signal-to-noise ratio.

18.
OBJECTIVE--To determine the extent to which symptom diaries of asthmatic patients are inaccurate or based on retrospective recall. DESIGN--Comparison of electronic and pencil-and-paper diaries. Both forms were completed twice daily at home for 14 days. SETTING--Outpatient clinic. SUBJECTS--24 asthmatic outpatients, also tested for severity of asthma and for anxiety. RESULTS--More sessions were missed in the evening than in the morning for both types of diaries. Significantly more retrospective entries were made in the evening (26 entries, 14 patients) than in the morning (6 entries, 3 patients). Discrepant entries of peak expiratory flow accounted for 15% of those made on the appropriate day, and three quarters of patients made at least one discrepant entry. Variation in peak expiratory flow was significantly related to the number of discrepancies and the number of missing days, and anxiety score was significantly related to the number of missing days. About a fifth of written entries may contain errors. CONCLUSION--Poor diary completion may result from having unreasonable expectations of patients and giving incomplete instructions. Electronic, time-coded diaries could ensure better quality of records.

19.
In this work, we compare the merits of three temporal data deconvolution methods for use in the filtered backprojection algorithm for photoacoustic tomography (PAT). We evaluate the standard Fourier division technique, the Wiener deconvolution filter, and a Tikhonov L-2 norm regularized matrix inversion method. Our experiments were carried out on subjects of various appearances, namely a pencil lead, two man-made phantoms, an in vivo subcutaneous mouse tumor model, and a perfused and excised mouse brain. All subjects were scanned using an imaging system with a rotatable hemispherical bowl, into which 128 ultrasound transducer elements were embedded in a spiral pattern. We characterized the frequency response of each deconvolution method, compared the final image quality achieved by each deconvolution technique, and evaluated each method’s robustness to noise. The frequency response was quantified by measuring the accuracy with which each filter recovered the ideal flat frequency spectrum of an experimentally measured impulse response. Image quality under the various scenarios was quantified by computing noise versus resolution curves for a point source phantom, as well as the full width at half maximum (FWHM) and contrast-to-noise ratio (CNR) of selected image features such as dots and linear structures in additional imaging subjects. It was found that the Tikhonov filter yielded the most accurate balance of lower and higher frequency content (as measured by comparing the spectra of deconvolved impulse response signals to the ideal flat frequency spectrum), achieved a competitive image resolution and contrast-to-noise ratio, and yielded the greatest robustness to noise. While the Wiener filter achieved a similar image resolution, it tended to underrepresent the lower frequency content of the deconvolved signals, and hence of the reconstructed images after backprojection. In addition, its robustness to noise was poorer than that of the Tikhonov filter. 
The performance of the Fourier filter was found to be the poorest of the three methods, based on the lowest resolution (blurriest appearance) of its reconstructed images, its generally lowest contrast-to-noise ratio, and its lowest robustness to noise. Overall, the Tikhonov filter was deemed to produce the most desirable image reconstructions.
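The filters compared above differ only in how they invert the impulse-response spectrum. A minimal one-dimensional sketch contrasting plain Fourier division with Tikhonov (L2) regularization, using a made-up blur kernel, signal, and regularization weight:

```python
import numpy as np

def fourier_deconv(y, h):
    """Naive Fourier division: amplifies noise wherever |H| is small."""
    return np.real(np.fft.ifft(np.fft.fft(y) / np.fft.fft(h)))

def tikhonov_deconv(y, h, lam=1e-2):
    """Tikhonov-regularized deconvolution: X = conj(H) Y / (|H|^2 + lam)."""
    H, Y = np.fft.fft(h), np.fft.fft(y)
    return np.real(np.fft.ifft(np.conj(H) * Y / (np.abs(H) ** 2 + lam)))

# Demo: a two-spike signal blurred by a Gaussian impulse response.
rng = np.random.default_rng(6)
n = 256
t = np.arange(n)
x = np.zeros(n); x[60] = 1.0; x[150] = 0.7
h = np.exp(-0.5 * ((t - n // 2) / 2.0) ** 2)
h = np.roll(h / h.sum(), -n // 2)               # centered, unit-area kernel
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))
y += 1e-3 * rng.standard_normal(n)              # measurement noise
err_naive = np.linalg.norm(fourier_deconv(y, h) - x)
err_tik = np.linalg.norm(tikhonov_deconv(y, h) - x)
```

At frequencies where |H| is tiny, naive division multiplies the noise by enormous factors, while the lam term caps the gain at 1/(2*sqrt(lam)); the Wiener filter differs only in replacing the constant lam with a frequency-dependent noise-to-signal ratio.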

20.
In susceptibility-weighted imaging (SWI), the high resolution required for proper contrast generation leads to a reduced signal-to-noise ratio (SNR). Applying a denoising filter to produce images with higher SNR while preserving small structures from excessive blurring is therefore extremely desirable. However, because the distributions of magnitude and phase noise may introduce biases during image restoration, applying a denoising filter is non-trivial. Taking advantage of the potentially multispectral nature of MR images, a multicomponent approach using a non-local means (MNLM) denoising filter may perform better than component-by-component image restoration. Here we present a new MNLM-based method (Multicomponent-Imaginary-Real-SWI, hereafter MIR-SWI) to produce SWI images with high SNR and improved conspicuity. Both qualitative and quantitative comparisons of MIR-SWI with the original SWI scheme and previously proposed SWI restoration pipelines showed that MIR-SWI fared consistently better than the other approaches. Noise removal with MIR-SWI also improved the contrast-to-noise ratio (CNR) and vessel conspicuity at higher factors of phase-mask multiplication than the one suggested in the literature for SWI vessel imaging. We conclude that proper handling of noise in the complex MR dataset may lead to improved image quality for SWI data.

