Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Recently, Sparse Representation-based Classification (SRC) has attracted considerable attention for its applications to various tasks, especially biometric techniques such as face recognition. However, variations in lighting, expression, pose and disguise in face images degrade the performance of SRC and most other face recognition techniques. To overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC). In our method, a face image is first partitioned into several smaller sub-images. These sub-images are then sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. In contrast to algorithms that process each sub-image of a face image independently, the proposed algorithm treats local matching-based face recognition as a multi-task learning problem, so the latent relationships among sub-images from the same face image are taken into account. The locality information of the data is also considered in our algorithm. We evaluate our algorithm against other state-of-the-art approaches; extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.
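As background, the baseline SRC decision rule that LCJDSRC builds on can be sketched as follows: code the query over a dictionary of training images, then assign it to the class whose atoms best reconstruct it. This is only a minimal sketch; the locality constraint and joint dynamic sparsity across sub-images are not shown, and the ISTA solver and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(x, t):
    """Element-wise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(D, y, lam=0.01, n_iter=200):
    """Approximately solve min_x 0.5*||y - D x||^2 + lam*||x||_1 with ISTA."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * D.T @ (y - D @ x), lam * step)
    return x

def src_classify(D, labels, y):
    """Assign y to the class whose training atoms best reconstruct it."""
    x = sparse_code(D, y)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        xc = np.where(mask, x, 0.0)              # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)
```

With a toy dictionary whose first two columns belong to class 0 and third to class 1, a query near the class-0 atoms is assigned to class 0.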

2.
In face recognition, most appearance-based methods require several images of each person to construct the feature space for recognition. In the real world, however, it is difficult to collect multiple images per person, and in many cases there is only a single sample per person (SSPP). In this paper, we propose a method to generate new images with various illuminations from a single image taken under frontal illumination. Motivated by the integral image, which was developed for face detection, we extract the bidirectional integral feature (BIF) to capture the characteristics of the illumination condition at the time the picture was taken. Experimental results for various face databases show that the proposed method improves recognition performance under illumination variation.
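The integral image (summed-area table) that motivates BIF is a standard construct; a minimal sketch is below. The bidirectional integral feature itself is the paper's own construct and is not reproduced here.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[:r+1, :c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def block_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] in O(1) using four table lookups."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

Any rectangular block sum then costs four lookups regardless of block size, which is what makes integral-image features cheap to extract.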

3.
In this paper, morphological transformations are used to detect the unevenly illuminated background of text images characterized by poor lighting and to produce an illumination-normalized result. An uneven-illumination normalization algorithm based on the morphological Top-Hat transform is developed and verified through three procedures. The first procedure employs the classical opening-based Top-Hat operator. To optimize and refine the classical Top-Hat transform, the second procedure introduces the notion of multi-direction illumination and utilizes opening by reconstruction and closing by reconstruction based on multi-direction structuring elements. Finally, the multi-direction images are merged into the final evenly illuminated image. The performance of the proposed algorithm is illustrated and verified by processing various ideal synthetic and camera-collected images with backgrounds characterized by poor lighting conditions.
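The classical opening-based Top-Hat step (the first procedure above) can be sketched with a flat square structuring element: subtract the morphological opening, which estimates the slowly varying background, from the image. The multi-direction reconstruction operators of the second procedure are not shown, and the helper names and kernel size here are illustrative assumptions.

```python
import numpy as np

def erode(img, k):
    """Grey-scale erosion with a flat k x k structuring element (edge-padded)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = padded[r:r + k, c:c + k].min()
    return out

def dilate(img, k):
    """Grey-scale dilation with a flat k x k structuring element (edge-padded)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = padded[r:r + k, c:c + k].max()
    return out

def white_tophat(img, k=3):
    """Top-Hat: image minus its opening; keeps small bright detail, drops background."""
    opening = dilate(erode(img, k), k)
    return img - opening
```

On an image that is a smooth ramp plus one small bright spot, the Top-Hat response is near zero on the ramp and large at the spot.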

4.
Schlessinger A, Rost B. Proteins. 2005;61(1):115-126.
Structural flexibility has been associated with various biological processes such as molecular recognition and catalytic activity. In silico studies of protein flexibility have attempted to characterize and predict flexible regions based on simple principles. B-values derived from experimental data are widely used to measure residue flexibility. Here, we present the most comprehensive large-scale analysis of B-values. We used this analysis to develop a neural network-based method that predicts flexible-rigid residues from amino acid sequence. The system uses both global and local information (i.e., features from the entire protein such as secondary structure composition, protein length, and fraction of surface residues, and features from a local window of sequence-consecutive residues). The most important local feature was the evolutionary exchange profile reflecting sequence conservation in a family of related proteins. To illustrate its potential, we applied our method to 4 different case studies, each of which related our predictions to aspects of function. The first 2 were the prediction of regions that undergo conformational switches upon environmental changes (switch II region in Ras) and the prediction of surface regions, the rigidity of which is crucial for their function (tunnel in propeller folds). Both were correctly captured by our method. The third study established that residues in active sites of enzymes are predicted by our method to have unexpectedly low B-values. The final study demonstrated how well our predictions correlated with NMR order parameters to reflect motion. Our method had not been set up to address any of the tasks in those 4 case studies. Therefore, we expect that this method will assist in many attempts at inferring aspects of function.

5.
To examine the effect of illumination direction on the ability of observers to discriminate between faces, we manipulated the direction of illumination on scanned 3D face models. In order to dissociate the surface reflectance and illumination components of front-view face images, we introduce a symmetry algorithm that can separate the symmetric and asymmetric components of the face in both low and high spatial frequency bands. Based on this approach, hybrid face stimuli were constructed with different combinations of symmetric and asymmetric spatial content. Discrimination results with these images showed that asymmetric illumination information biased face perception toward the structure of the shading component, while the symmetric illumination information had little, if any, effect. Measures of perceived depth showed that this property increased systematically with the asymmetric but not the symmetric low spatial frequency component. Together, these results suggest that (1) the asymmetric 3D shading information dramatically affects both the perceived facial information and the perceived depth of the facial structure; and (2) both effects increase as the illumination direction is shifted to the side. Thus, our results support the hypothesis that face processing has a strong 3D component.
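The symmetric/asymmetric split of a front-view image can be sketched as a flip-and-average. This is a minimal version assuming a plain horizontal mirror about the image midline; the paper additionally applies the split per spatial-frequency band, which is not shown.

```python
import numpy as np

def symmetry_split(img):
    """Split a front-view image into left-right symmetric and asymmetric parts.

    sym is invariant under the horizontal flip; asym changes sign under it;
    their sum reconstructs the original image exactly.
    """
    mirrored = img[:, ::-1]
    sym = 0.5 * (img + mirrored)
    asym = 0.5 * (img - mirrored)
    return sym, asym
```

The decomposition is exact and lossless, which is what lets the two components be recombined into hybrid stimuli.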

6.
The one-sample-per-person problem has become an active research topic in face recognition in recent years because of its challenges and its significance for real-world applications. However, achieving relatively high recognition accuracy remains difficult because, typically, too few training samples are available and illumination and expression vary. To alleviate the negative effects of these unfavorable factors, in this paper we propose a more accurate spectral feature image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images, which greatly enlarges the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted from some orientations and scales using 2DLDA are not sensitive to variations of illumination and expression. To maintain the positive characteristics of these filters and to make correct category assignments, a classifier committee learning (CCL) strategy is designed to combine the results obtained from the different spectral feature images. Using these strategies, the negative effects of the unfavorable factors can be alleviated efficiently. Experimental results on standard databases demonstrate the feasibility and efficiency of the proposed method.

7.
The cerebellum is the region most commonly used as a reference when normalizing the intensity of perfusion images acquired using magnetic resonance imaging (MRI) in Alzheimer’s disease (AD) studies. In addition, the cerebellum provides unbiased estimations with nuclear medicine techniques. However, no reports confirm the cerebellum as an optimal reference region in MRI studies or evaluate the consequences of using different normalization regions. In this study, we address the effect of using the cerebellum, whole-brain white matter, and whole-brain cortical gray matter in the normalization of cerebral blood flow (CBF) parametric maps by comparing patients with stable mild cognitive impairment (MCI), patients with AD and healthy controls. According to our results, normalization by whole-brain cortical gray matter enables more sensitive detection of perfusion abnormalities in AD patients and reveals a larger number of affected regions than data normalized by the cerebellum or whole-brain white matter. Therefore, the cerebellum is not the most valid reference region in MRI studies for early stages of AD. After normalization by whole-brain cortical gray matter, we found a significant decrease in CBF in both parietal lobes and an increase in CBF in the right medial temporal lobe. We found no differences in perfusion between patients with stable MCI and healthy controls either before or after normalization.
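Reference-region normalization of a CBF parametric map amounts to dividing the map by the mean intensity inside the chosen region. A minimal sketch, with the mask choice (cerebellum, whole-brain white matter, or whole-brain cortical gray matter) left to the caller; the function name is illustrative.

```python
import numpy as np

def normalize_cbf(cbf_map, reference_mask):
    """Scale a CBF map so the reference region has mean 1.

    reference_mask is a boolean array selecting the reference voxels,
    e.g. a cerebellum, white-matter, or cortical grey-matter segmentation.
    """
    ref_mean = cbf_map[reference_mask].mean()
    return cbf_map / ref_mean
```

After this scaling, CBF values are expressed relative to the reference region, which is what makes maps comparable across subjects and groups.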

8.
In this paper, a novel watershed approach based on seed region growing and image entropy is presented that improves medical image segmentation. The proposed algorithm incorporates prior information from seed region growing and image entropy into its calculation. The algorithm starts by partitioning the image into several intensity levels using a watershed multi-degree immersion process. These intensity levels are the input to a computationally efficient seed region segmentation process, which produces the initial partitioning of the image regions. The resulting regions are fed to an entropy-based procedure that carries out a suitable merging and produces the final segmentation; this latter step uses a region-based similarity representation of the image regions to decide whether regions can be merged. Each region is isolated from its level and the residual pixels are passed to the next level, and so on; we refer to this as a multi-level process and to the resulting watershed as a multi-level watershed. The proposed algorithm is applied to a challenging application: grey matter–white matter segmentation in magnetic resonance images (MRIs). The established methods and the proposed approach are compared on this application across several variants: simulated immersion, multi-degree, multi-level seed region growing, and multi-level seed region growing with entropy. The proposed method is shown to achieve more accurate segmentation results on medical images.
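The entropy driving the merging step is, presumably, the Shannon entropy of a region's intensity histogram; a minimal sketch under that assumption (the paper's exact merging criterion is not reproduced here).

```python
import numpy as np

def region_entropy(pixels, bins=256):
    """Shannon entropy (in bits) of a region's intensity histogram.

    A homogeneous region scores 0; a region whose intensities are spread
    evenly over many bins scores high, which a merge criterion can penalize.
    """
    counts, _ = np.histogram(pixels, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

A merge rule could, for example, accept a merge only when the entropy of the combined region stays close to the entropies of the two parts.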

9.
Color-to-Grayscale: Does the Method Matter in Image Recognition? (Total citations: 2; self-citations: 0; citations by others: 2)
Kanan C, Cottrell GW. PLoS ONE. 2012;7(1):e29740.

10.
Face recognition is challenging, especially when images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person to span the facial variations of that person under the testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications face recognition encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework that combines low-rank and sparse error matrix decomposition with sparse coding techniques (LRSE+SC). First, the low-rank matrix recovery technique is applied to decompose the face images of each class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary that captures the discriminative features of that individual, while the sparse error matrix represents the intra-class variations, such as illumination and expression changes. Second, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate the sparse error matrices of all individuals into a within-individual variant dictionary, which can represent the possible variations between the testing and training images. These two dictionaries are then used to code the query image. The within-individual variant dictionary can be shared by all subjects and only serves to explain the lighting conditions, expressions, and occlusions of the query image, rather than discrimination. Finally, a reconstruction-based scheme is adopted for face recognition. Because the within-individual dictionary is introduced, LRSE+SC can handle corrupted training data and situations in which not all subjects have enough samples for training. Experimental results show that our method achieves state-of-the-art results on the AR, FERET, FRGC and LFW databases.
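The two building blocks of the decomposition can be sketched as proximal operators: singular value thresholding for the low-rank part and element-wise soft-thresholding for the sparse error. The alternating loop below is a naive illustration only, not the principal component pursuit solver such methods normally use, and the parameter values are arbitrary assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Element-wise soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def low_rank_sparse_split(D, tau=1.0, lam=0.1, n_iter=30):
    """Naive alternating-proximal sketch of D ~ L (low-rank) + S (sparse)."""
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S, tau)       # low-rank update absorbs the shared structure
        S = shrink(D - L, lam)    # sparse update absorbs large residual entries
    return L, S
```

By construction, the final residual D - L - S has no entry larger than `lam` in magnitude.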

11.
Masters DB, Griggs CT, Berde CB. BioTechniques. 1992;12(6):902-6, 908-11.
To increase sensitivity and to improve normalization of RNA levels in Northern blot analysis, a comparatively inexpensive optical scanner was utilized for digitizing photonegatives of ethidium bromide stained gels and autoradiograms. The optical scanner captures the image with a maximum resolution of 300 dots per inch by assigning one of 256 gray levels (8-bit) to each dot in the image. With the use of the public domain NIH Image program (requires a Macintosh II and an 8-bit video card), gel or autoradiogram bands in the digitized image are selected and their average gray scale density measured. We found that the digitized image of a photonegative of a TAE (Tris-acetate/EDTA) agarose gel, loaded incrementally with 50-1500 ng total RNA, produced a linear response over a 4-fold range down to 100 ng (R2 > 0.950). By utilizing "quantification" gels like this, RNA samples that are too dilute or too small for traditional spectrophotometric techniques can be normalized and loaded uniformly onto subsequent Northern gels. Results from autoradiogram scans demonstrate highly linear gray scale responses over a 4-fold range of total RNA (R2 > 0.950) that are reproducible with different blots and probe types (e.g., riboprobe, cDNA and oligonucleotide). In addition, we describe a normalization technique using a 30-mer oligonucleotide probe for rat 28S ribosomal RNA as a measure of total RNA loaded per gel lane. Altogether, this scanning and ribosomal RNA normalization system allows the measurement of relative changes between 20% and 400% using standard autoradiographic methods.

12.
13.
Intensity normalization is an important pre-processing step in the study and analysis of DaTSCAN SPECT imaging. Because most automatic supervised image segmentation and classification methods base their assumptions about intensity distributions on a standardized intensity range, intensity normalization plays a very significant role. In this work, a comparison between different novel intensity normalization methods is presented. The proposed methodologies are based on Gaussian Mixture Model (GMM) image filtering and mean-squared error (MSE) optimization. The GMM-based image filtering method applies a probability threshold that removes the clusters whose likelihoods are negligible in the non-specific regions. The MSE optimization method consists of a linear transformation obtained by minimizing the MSE between the intensity-normalized image and the template in the non-specific region. The proposed intensity normalization methods are compared with: (i) a widely used standard approach based on the specific-to-non-specific binding ratio, and (ii) a linear approach based on the α-stable distribution. The comparison is performed on a DaTSCAN image database comprising analysis and classification stages for the development of a computer-aided diagnosis (CAD) system for Parkinsonian syndrome (PS) detection. In addition, the proposed methods correct spatially varying artifacts that modulate the intensity of the images. Finally, using leave-one-out cross-validation, the system achieves up to 92.91% accuracy, 94.64% sensitivity and 92.65% specificity, outperforming the standard and linear reference approaches. The use of advanced intensity normalization techniques such as GMM-based image filtering and MSE optimization improves the diagnosis of PS.
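The MSE optimization step (a linear transform a*x + b fitted over the non-specific region so the image matches the template there) can be sketched as an ordinary least-squares fit. Function and argument names are illustrative, not the authors' code.

```python
import numpy as np

def mse_normalize(image, template, mask):
    """Fit a*image + b by least squares over the voxels selected by mask
    (the non-specific region) and apply the transform to the whole image."""
    x = image[mask].ravel()
    t = template[mask].ravel()
    A = np.column_stack([x, np.ones_like(x)])   # design matrix for a*x + b
    (a, b), *_ = np.linalg.lstsq(A, t, rcond=None)
    return a * image + b
```

If the image really is a linear rescaling of the template on the mask, the fit recovers the template exactly there.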

14.
Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age, and analyzing these properties can help in understanding the phenomenon of facial aging and in designing effective algorithms. Such a study has two components: facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. Age-separated face recognition, on the other hand, consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues humans use to estimate the age of people belonging to various age groups, along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how facial regions such as the binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is the easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) the face as a global feature is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young-image-as-probe scenario.

15.
The variance in intensities of MRI scans is a fundamental impediment to quantitative MRI analysis. Intensity values are highly dependent not only on acquisition parameters, but also on the subject and body region being scanned. This warrants image normalization techniques that ensure intensity values are consistent within tissues across different subjects and visits. Many intensity normalization methods have been developed and proven successful for the analysis of brain pathologies, but evaluation of these methods for images of the prostate region is lagging.

In this paper, we compare four normalization methods on 49 T2-w scans of prostate cancer patients: (1) the well-established histogram normalization, (2) generalized scale normalization, (3) an extension of generalized scale normalization called generalized ball-scale normalization, and (4) a custom normalization based on healthy prostate tissue intensities. The methods are compared qualitatively and quantitatively in terms of the behavior of intensity distributions as well as their impact on radiomic features.

Our findings suggest that normalization based on prior knowledge of healthy prostate tissue intensities may be the most effective way of acquiring the desired properties of normalized images. In addition, the histogram normalization method outperforms the generalized scale and generalized ball-scale methods, which have proven superior for other body regions.
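Histogram (landmark-based) normalization can be sketched as a piecewise-linear mapping of the image's intensity percentiles onto reference landmark intensities, with values in between interpolated. The landmark percentiles below are an arbitrary choice for illustration, not the ones used in the paper.

```python
import numpy as np

def histogram_normalize(image, ref_landmarks, percentiles=(1, 25, 50, 75, 99)):
    """Map the image's intensity percentiles onto reference landmark values.

    src and ref_landmarks must both be increasing; np.interp performs the
    piecewise-linear mapping and clamps values outside the landmark range.
    """
    src = np.percentile(image, percentiles)
    return np.interp(image, src, ref_landmarks)
```

Because the mapping is monotone, the ordering of intensities within the image is preserved; only the scale is standardized.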

16.
17.
18.
Dark and light adaptation of retinal neurons allow our vision to operate over an enormous light intensity range. Here we report a mechanism that controls the light sensitivity and operational range of rod-driven bipolar cells that mediate dim-light vision. Our data indicate that the light responses of these cells are enhanced by sustained chloride currents via GABA(C) receptor channels. This sensitizing GABAergic input is controlled by dopamine D1 receptors, with horizontal cells serving as a plausible source of GABA release. Our findings expand the role of dopamine in vision from its well-established function of suppressing rod-driven signals in bright light to enhancing the same signals under dim illumination. They further reveal a role for GABA in sensitizing the circuitry for dim-light vision, thereby complementing GABA's traditional role in providing dynamic feedforward and feedback inhibition in the retina.

19.
Objective: Dynamic positron emission tomography (dyn-PET) acquisitions using the radiotracer 18F-deoxyglucose (18F-FDG) are mainly used in research studies of brain PET kinetic modelling to determine the local glucose consumption rate. This procedure is difficult to establish because it requires blood sampling. Here, we propose a simple approach to constructing time–activity curves (TACs) for four different brain structures (the arterial and venous regions, and grey and white matter) based on direct image measurements of regions of interest on chronologically reconstructed image volumes.
Materials and methods: We applied our processing to 14 control subjects to extract their physiological state. We defined the reference 18F-FDG kinetic curves as a “population averaged TAC” for the four structures. To increase the accuracy of the curves, our method included the evaluation of two normalizations based on the integral of the activity curve in the arteries and in the veins.
Results: The method discriminated between the arterial, venous, grey matter and white matter curves. Both normalization methods significantly reduced the dispersion of the grey and white matter curves, and venous normalization showed the best overall efficiency.
Conclusion: We have designed and evaluated an approach for directly defining PopAv_TACs that are representative of given anatomical structures.
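Normalizing a TAC by the integral (area under the curve) of the arterial or venous activity curve can be sketched with trapezoidal integration; function and argument names are illustrative assumptions.

```python
import numpy as np

def normalize_tac(tac, times, input_tac):
    """Divide a time-activity curve by the area under the input curve.

    input_tac is the arterial or venous activity sampled at the same
    time points; the AUC is computed by the trapezoidal rule.
    """
    auc = np.sum(0.5 * (input_tac[1:] + input_tac[:-1]) * np.diff(times))
    return tac / auc
```

Dividing every subject's curves by the same subject-specific AUC puts curves from different subjects on a common scale, which is what reduces inter-subject dispersion.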

20.
Xu Y. PLoS ONE. 2012;7(8):e43493.
Pattern recognition techniques have been used to automatically recognize objects and personal identities, predict protein function and cancer category, identify lesions, perform product inspection, and so on. In this paper we propose a novel quaternion-based discriminant method that represents and classifies color images in a simple and mathematically tractable way. The proposed method is suitable for a wide variety of real-world applications, such as color face recognition and classification of ground targets shown in multispectral remote-sensing images. The method first uses quaternion numbers to denote the pixels of a color image and exploits a quaternion vector to represent the image. It then uses the linear discriminant analysis algorithm to transform the quaternion vector into a lower-dimensional quaternion vector and classifies it in that space. Experimental results show that the proposed method obtains very high accuracy for color face recognition.
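The first step the abstract describes, representing a color image as a vector of pure quaternions (one (0, R, G, B) quaternion per pixel), can be sketched as below together with the Hamilton product that quaternion algebra rests on. The quaternion LDA itself is not shown, and the helper names are illustrative.

```python
import numpy as np

def rgb_to_quaternion_vector(img):
    """Flatten an H x W x 3 color image into (H*W, 4) pure quaternions.

    Each pixel becomes (0, R, G, B): zero real part, with the three
    imaginary units i, j, k carrying the three color channels.
    """
    h, w, _ = img.shape
    q = np.zeros((h * w, 4))
    q[:, 1:] = img.reshape(h * w, 3)
    return q

def quaternion_multiply(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])
```

The non-commutative product (i*j = k but j*i = -k) is what distinguishes quaternion linear algebra from treating the channels as independent real vectors.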


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)