Similar Articles
20 similar articles found.
1.
Latent print examiners use their expertise to determine whether the information present in a comparison of two fingerprints (or palmprints) is sufficient to conclude that the prints were from the same source (individualization). When fingerprint evidence is presented in court, it is the examiner's determination—not an objective metric—that is presented. This study was designed to ascertain the factors that explain examiners' determinations of sufficiency for individualization. Volunteer latent print examiners (n = 170) were each assigned 22 pairs of latent and exemplar prints for examination, and annotated features, correspondence of features, and clarity. The 320 image pairs were selected specifically to control clarity and quantity of features. The predominant factor differentiating annotations associated with individualization and inconclusive determinations is the count of corresponding minutiae; other factors such as clarity provided minimal additional discriminative value. Examiners' counts of corresponding minutiae were strongly associated with their own determinations; however, due to substantial variation of both annotations and determinations among examiners, one examiner's annotation and determination on a given comparison is a relatively weak predictor of whether another examiner would individualize. The extensive variability in annotations also means that we must treat any individual examiner's minutia counts as interpretations of the (unknowable) information content of the prints: saying “the prints had N corresponding minutiae marked” is not the same as “the prints had N corresponding minutiae.” More consistency in annotations, which could be achieved through standardization and training, should lead to process improvements and provide greater transparency in casework.
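The abstract's central finding, that the count of corresponding minutiae is the predominant factor separating individualization from inconclusive determinations, can be illustrated with a simple logistic regression. This is a minimal sketch on simulated data, not the study's actual model or annotations; the clarity score, coefficients, and decision rule are assumptions.

```python
# Sketch: relate corresponding-minutiae counts to individualization decisions.
# Data are simulated for illustration; they are not the study's annotations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
minutiae_count = rng.integers(2, 25, size=n)           # corresponding minutiae marked
clarity = rng.uniform(0, 1, size=n)                    # hypothetical clarity score
# Assume decisions are driven mostly by the count and only weakly by clarity.
p_individualize = 1 / (1 + np.exp(-(0.6 * (minutiae_count - 10) + 0.5 * clarity)))
decision = rng.random(n) < p_individualize             # True = individualization

X = np.column_stack([minutiae_count, clarity])
model = LogisticRegression().fit(X, decision)
print("coefficients (count, clarity):", model.coef_[0])
print("P(individualize | 8 minutiae, mid clarity):",
      round(model.predict_proba([[8, 0.5]])[0, 1], 2))
```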

2.
Expert decision making often seems impressive, even miraculous. People with genuine expertise in a particular domain can perform quickly and accurately, and with little information. In the series of experiments presented here, we manipulate the amount of “information” available to a group of experts whose job it is to identify the source of crime scene fingerprints. In Experiment 1, we reduced the amount of information available to experts by inverting fingerprint pairs and adding visual noise. There was no evidence for an inversion effect—experts were just as accurate for inverted prints as they were for upright prints—but expert performance with artificially noisy prints was impressive. In Experiment 2, we separated matching and nonmatching print pairs in time. Experts were conservative, but they were still able to discriminate pairs of fingerprints that were separated by five seconds, even though the task was quite different from their everyday experience. In Experiment 3, we separated the print pairs further in time to test the long-term memory of experts compared to novices. Long-term recognition memory for experts and novices was the same, with both performing around chance. In Experiment 4, we presented pairs of fingerprints quickly to experts and novices in a matching task. Experts were more accurate than novices, particularly for similar nonmatching pairs, and experts were generally more accurate when they had more time. It is clear that experts can match prints accurately when there is reduced visual information, reduced opportunity for direct comparison, and reduced time to engage in deliberate reasoning. These findings suggest that non-analytic processing accounts for a substantial portion of the variance in expert fingerprint matching accuracy. Our conclusion is at odds with general wisdom in fingerprint identification practice and formal training, and at odds with the claims and explanations that are offered in court during expert testimony.

3.
A combination method of multi-wavelength fingerprinting and multi-component quantification by high performance liquid chromatography (HPLC) coupled with a diode array detector (DAD) was developed and validated to monitor and evaluate the quality consistency of herbal medicines (HM) in the classical preparation Compound Bismuth Aluminate tablets (CBAT). The validation results demonstrated that the method met the requirements of fingerprint and quantification analysis with suitable linearity, precision, accuracy, limits of detection (LOD) and limits of quantification (LOQ). In the fingerprint assessments, rather than using conventional qualitative “Similarity” as a criterion, the simple quantified ratio fingerprint method (SQRFM) was recommended, which offers an important quantified-fingerprint advantage over the “Similarity” approach. SQRFM provides qualitative and quantitative criteria, expressed through three parameters, for a traditional Chinese medicine (TCM)/HM quality pyramid and warning gate. To combine the comprehensive characterization of multi-wavelength fingerprints, an integrated fingerprint assessment strategy based on information entropy was set up, involving a super-information characteristic digitized parameter that reveals the total entropy value and absolute information amount of the fingerprints and thus offers an effective means of fingerprint integration. The correlation between quantified fingerprints and quantitative determination of 5 marker compounds, including glycyrrhizic acid (GLY), liquiritin (LQ), isoliquiritigenin (ILG), isoliquiritin (ILQ) and isoliquiritin apioside (ILA), indicated that multi-component quantification could be replaced by quantified fingerprints. The Fenton reaction was employed to determine the antioxidant activities of CBAT samples in vitro, and these were correlated with the HPLC fingerprint components using partial least squares regression (PLSR). In summary, the method of multi-wavelength fingerprints combined with antioxidant activities proved to be a feasible and scientific procedure for monitoring and evaluating the quality consistency of CBAT.
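One building block mentioned above, an information-entropy measure computed from a chromatographic fingerprint, can be sketched as the Shannon entropy of normalized peak areas, which can then be used to weight fingerprints recorded at different detection wavelengths. This is a simplified illustration of the general idea only; it is not the SQRFM or the paper's exact integration formula, and the peak areas and wavelengths below are made up.

```python
import numpy as np

def fingerprint_entropy(peak_areas):
    """Shannon entropy (bits) of a chromatographic fingerprint,
    treating normalized peak areas as a probability distribution."""
    p = np.asarray(peak_areas, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical peak areas recorded at two detection wavelengths.
areas_254nm = [120.0, 45.0, 80.0, 10.0, 30.0]
areas_310nm = [60.0, 90.0, 15.0, 5.0]

h254 = fingerprint_entropy(areas_254nm)
h310 = fingerprint_entropy(areas_310nm)
weights = np.array([h254, h310]) / (h254 + h310)   # entropy-based integration weights
print(f"entropy 254 nm: {h254:.3f} bits, 310 nm: {h310:.3f} bits")
print("integration weights:", weights.round(3))
```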

4.
IRBM, 2022, 43(4): 300–308
Objectives: This study investigates the performance of the Support Vector Machine (SVM) in classifying non-real-time and real-time EMG signals. The study also compares training performance using personalized data versus generalized data pooled from all subjects, providing guidance on which data sets should be used to train real-time classification models. In addition, real-time classification results were obtained over ten days to observe how continued training affects the classification results. Materials and methods: EMG data were acquired for 7 hand gestures from 8 healthy subjects to create the data set: fist, fingers spread, wave-in, wave-out, pronation, supination, and rest. Subjects repeated each gesture 30 times. The Myo armband with 8 dry surface electrodes was used for data acquisition. Results: Fourteen features were extracted from the EMG signals and non-real-time classification was performed with each feature; the highest accuracy of 96.38% was obtained using the root mean square (RMS) and integrated EMG features. Three kernel functions of the SVM were tested in non-real-time classification and the highest accuracy was obtained with the Cubic SVM, which uses a 3rd-order polynomial kernel. For this reason, the Cubic SVM was used for real-time classification with the features that gave the best results in non-real-time classification. A subject repeated the gestures and real-time classification was performed; the highest accuracy of 99.05% was obtained with the mean absolute value (MAV) feature. Real-time classification was then undertaken on eight subjects using the best-performing MAV feature, with an average accuracy of 95.83% using the personalized data set and 91.79% using the generalized data set. Conclusion: The greatest accuracy is obtained by training the classifier with the subject's own data; in this sense, EMG signals are personal, much like fingerprints and retinas. In addition, tests repeated over 10 days demonstrated the repeatability of activation of the relevant muscle set, showed that training does take place, and indicated how this could be applied to prosthetic hand users to produce specific gestures.
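A compact sketch of the pipeline described above: compute RMS and MAV features from windowed 8-channel EMG and train an SVM with a 3rd-order polynomial ("cubic") kernel. Only the feature definitions and kernel choice follow the abstract; the windowing parameters, simulated signals, and class structure below are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def rms(window):            # root mean square per channel
    return np.sqrt(np.mean(window ** 2, axis=0))

def mav(window):            # mean absolute value per channel
    return np.mean(np.abs(window), axis=0)

# Simulated stand-in for Myo-armband data: 8 channels, 7 gesture classes.
rng = np.random.default_rng(1)
n_windows, win_len, n_channels, n_classes = 700, 100, 8, 7
labels = rng.integers(0, n_classes, size=n_windows)
signals = rng.normal(0, 1, size=(n_windows, win_len, n_channels))
signals *= (1 + 0.5 * labels)[:, None, None]        # class-dependent amplitude

features = np.array([np.concatenate([rms(w), mav(w)]) for w in signals])
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="poly", degree=3)                   # "cubic SVM"
clf.fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
```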

5.
Pathologists and radiologists spend years acquiring and refining their medically essential visual skills, so it is of considerable interest to understand how this process actually unfolds and what image features and properties are critical for accurate diagnostic performance. Key insights into human behavioral tasks can often be obtained by using appropriate animal models. We report here that pigeons (Columba livia)—which share many visual system properties with humans—can serve as promising surrogate observers of medical images, a capability not previously documented. The birds proved to have a remarkable ability to distinguish benign from malignant human breast histopathology after training with differential food reinforcement; even more importantly, the pigeons were able to generalize what they had learned when confronted with novel image sets. The birds’ histological accuracy, like that of humans, was modestly affected by the presence or absence of color as well as by degrees of image compression, but these impacts could be ameliorated with further training. Turning to radiology, the birds proved to be similarly capable of detecting cancer-relevant microcalcifications on mammogram images. However, when given a different (and for humans quite difficult) task—namely, classification of suspicious mammographic densities (masses)—the pigeons proved to be capable only of image memorization and were unable to successfully generalize when shown novel examples. The birds’ successes and difficulties suggest that pigeons are well-suited to help us better understand human medical image perception, and may also prove useful in performance assessment and development of medical imaging hardware, image processing, and image analysis tools.

6.
The dynamics of a spreading disease and individual behavioral changes are entangled processes that have to be addressed together in order to effectively manage an outbreak. Here, we relate individual risk perception to the adoption of a specific set of control measures, as obtained from an extensive large-scale survey performed via Facebook—involving more than 500,000 respondents from 64 countries—showing that there is a “one-to-one” relationship between perceived epidemic risk and compliance with a set of mitigation rules. We then develop a mathematical model for the spreading of a disease—sharing epidemiological features with COVID-19—that explicitly takes into account non-compliant individual behaviors and evaluates the impact of a population fraction of infectious risk-deniers on the epidemic dynamics. Our modeling study is grounded in a wide set of structures, including both synthetic networks and more than 180 real-world contact patterns, to evaluate, in realistic scenarios, how network features typical of human interaction patterns impact the spread of a disease. In both synthetic and real contact patterns we find that epidemic spreading is increasingly hindered as the population fraction of risk-denier individuals decreases. From empirical contact patterns we demonstrate that connectivity heterogeneity and group structure significantly affect the peak of the hospitalized population: higher modularity and heterogeneity of social contacts are linked to lower peaks at a fixed fraction of risk-denier individuals while, at the same time, such features increase the relative impact on hospitalizations with respect to the case where everyone correctly perceives the risks.
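A toy agent-based version of the idea above: an SIR-type process on a contact network in which a fraction of "risk-denier" nodes does not comply with mitigation, so contacts involving them transmit at a higher rate. The network choice, rates, and compliance effect are illustrative assumptions, not the paper's model (which, for example, also tracks hospitalizations).

```python
import random
import networkx as nx

def simulate_sir(G, denier_frac=0.2, beta=0.05, compliance_factor=0.3,
                 gamma=0.1, steps=200, seed=0):
    """Discrete-time SIR on a graph; compliant-compliant contacts transmit at a reduced rate."""
    rng = random.Random(seed)
    deniers = set(rng.sample(list(G.nodes), int(denier_frac * G.number_of_nodes())))
    state = {n: "S" for n in G.nodes}
    state[rng.choice(list(G.nodes))] = "I"     # single seed infection
    peak = 0
    for _ in range(steps):
        new_state = dict(state)
        for n in G.nodes:
            if state[n] == "I":
                for nb in G.neighbors(n):
                    if state[nb] == "S":
                        factor = 1.0 if (n in deniers or nb in deniers) else compliance_factor
                        if rng.random() < beta * factor:
                            new_state[nb] = "I"
                if rng.random() < gamma:
                    new_state[n] = "R"
        state = new_state
        peak = max(peak, sum(s == "I" for s in state.values()))
    return peak

G = nx.barabasi_albert_graph(2000, 3, seed=1)   # heterogeneous synthetic contacts
for frac in (0.0, 0.2, 0.5):
    print(f"denier fraction {frac}: peak infectious ~ {simulate_sir(G, frac)}")
```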

7.
Gel filtration (GF) is an excellent tool for acquiring information about the size (identity), purity, and multimeric state of a protein of interest. Superdex™ 200 and Superdex 75 are outstanding GF media for such analysis. To speed up analysis and keep sample and buffer consumption to a minimum, two prepacked short GF columns have been developed, Superdex™ 200 5/150 GL and Superdex 75 5/150 GL. With lengths of only 15 cm and volumes of 3 ml, these columns allow rapid analysis (6–12 min/run) with minimal sample (4–50 μl) and buffer consumption. Purification of antibodies often generates dimers and higher aggregated forms, and during optimization of purification protocols, many samples may need to be analyzed for aggregate content. GF provides reliable information about the size and purity of a protein of interest, especially when it is in a multimeric state. GF, however, is often time-consuming, and the analyses become a bottleneck. A new column format—3 ml bed volume and 15 cm length—has been developed, enabling rapid and reliable GF with low sample and buffer consumption, using Superdex™ 75 and Superdex™ 200 media for determination of size and purity status.

8.
Genetic heterogeneity in a mixed sample of tumor and normal DNA can confound characterization of the tumor genome. Numerous computational methods have been proposed to detect aberrations in DNA samples from tumor and normal tissue mixtures. Most of these require tumor purities to be at least 10–15%. Here, we present a statistical model to capture information, contained in the individual's germline haplotypes, about expected patterns in the B allele frequencies from SNP microarrays while fully modeling their magnitude, the first such model for SNP microarray data. Our model consists of a pair of hidden Markov models—one for the germline and one for the tumor genome—which, conditional on the observed array data and patterns of population haplotype variation, have a dependence structure induced by the relative imbalance of an individual's inherited haplotypes. Together, these hidden Markov models offer a powerful approach for dealing with mixtures of DNA where the main component represents the germline, thus suggesting natural applications for the characterization of primary clones when stromal contamination is extremely high, and for identifying lesions in rare subclones of a tumor when tumor purity is sufficient to characterize the primary lesions. Our joint model for germline haplotypes and acquired DNA aberration is flexible, allowing a large number of chromosomal alterations, including balanced and imbalanced losses and gains, copy-neutral loss-of-heterozygosity (LOH) and tetraploidy. We found our model (which we term J-LOH) to be superior for localizing rare aberrations in a simulated 3% mixture sample. More generally, our model provides a framework for full integration of the germline and tumor genomes to deal more effectively with missing or uncertain features, and thus extract maximal information from difficult scenarios where existing methods fail.

9.
Ruling out disease often requires expensive or potentially harmful confirmation testing. For such testing, a less invasive triage test is often used. Intuitively, few negative confirmatory tests suggest success of this approach. However, if negative confirmation tests become too rare, too many disease cases could have been missed. It is therefore important to know how many negative tests are needed to safely exclude a diagnosis. We quantified this relationship using Bayes’ theorem, and applied this to the example of pulmonary embolism (PE), for which triage is done with a Clinical Decision Rule (CDR) and D-dimer testing, and CT-angiography (CTA) is the confirmation test. For a maximum proportion of missed PEs of 1% in triage-negative patients, we calculate a 67% ‘mandatory minimum’ proportion of negative CTA scans. To achieve this, the proportion of patients with PE undergoing triage testing should be appropriately low, in this case no higher than 24%. Pre-test probability, triage test characteristics, the proportion of negative confirmation tests, and the number of missed diagnoses are mathematically entangled. The proportion of negative confirmation tests—not too high, but definitely not too low either—could be a quality benchmark for diagnostic processes.
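The entanglement described above follows directly from Bayes' theorem. The sketch below computes, for assumed triage sensitivity, specificity, and pre-test probability, the fraction of missed PEs among triage-negative patients and the proportion of negative confirmatory CTA scans, treating CTA as a perfect reference. The numerical inputs are illustrative only; the paper's 1%, 67%, and 24% figures come from its own assumptions about the CDR/D-dimer pathway.

```python
def triage_tradeoff(prevalence, sensitivity, specificity):
    """Bayes' theorem for a triage test followed by a perfect confirmation test.

    Returns (fraction of disease cases missed among triage-negatives,
             proportion of negative confirmation tests among triage-positives).
    """
    p_triage_neg = prevalence * (1 - sensitivity) + (1 - prevalence) * specificity
    missed = prevalence * (1 - sensitivity) / p_triage_neg          # P(PE | triage-)

    p_triage_pos = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    neg_confirm = (1 - prevalence) * (1 - specificity) / p_triage_pos  # P(no PE | triage+)
    return missed, neg_confirm

# Illustrative values only.
missed, neg_ct = triage_tradeoff(prevalence=0.20, sensitivity=0.97, specificity=0.40)
print(f"missed PEs among triage-negatives: {missed:.1%}")
print(f"negative CTA scans among referred patients: {neg_ct:.1%}")
```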

10.
During steady fixation, observers make small fixational saccades at a rate of around 1–2 per second. Presentation of a visual stimulus triggers a biphasic modulation in fixational saccade rate—an initial inhibition followed by a period of elevated rate and a subsequent return to baseline. Here we show that, during passive viewing, this rate signature is highly sensitive to small changes in stimulus contrast. By training a linear support vector machine to classify trials in which a stimulus is either present or absent, we directly compared the contrast sensitivity of fixational eye movements with individuals' psychophysical judgements. Classification accuracy closely matched psychophysical performance, and predicted individuals' threshold estimates with less bias and overall error than those obtained using specific features of the signature. Performance of the classifier was robust to changes in the training set (novel subjects and/or contrasts) and good prediction accuracy was obtained with a practicable number of trials. Our results indicate a tight coupling between the sensitivity of visual perceptual judgements and fixational eye control mechanisms. This raises the possibility that fixational saccades could provide a novel and objective means of estimating visual contrast sensitivity without the need for observers to make any explicit judgement.

11.
Genomic prediction models are often calibrated using multi-generation data. Over time, as data accumulates, training data sets become increasingly heterogeneous. Differences in allele frequency and linkage disequilibrium patterns between the training and prediction genotypes may limit prediction accuracy. This leads to the question of whether all available data or a subset of it should be used to calibrate genomic prediction models. Previous research on training set optimization has focused on identifying a subset of the available data that is optimal for a given prediction set. However, this approach does not contemplate the possibility that different training sets may be optimal for different prediction genotypes. To address this problem, we recently introduced a sparse selection index (SSI) that identifies an optimal training set for each individual in a prediction set. Using additive genomic relationships, the SSI can provide increased accuracy relative to genomic-BLUP (GBLUP). Non-parametric genomic models using Gaussian kernels (KBLUP) have, in some cases, yielded higher prediction accuracies than standard additive models. Therefore, here we studied whether combining SSIs and kernel methods could further improve prediction accuracy when training genomic models using multi-generation data. Using four years of doubled haploid maize data from the International Maize and Wheat Improvement Center (CIMMYT), we found that when predicting grain yield the KBLUP outperformed the GBLUP, and that using SSI with additive relationships (GSSI) led to 5–17% increases in accuracy, relative to the GBLUP. However, differences in prediction accuracy between the KBLUP and the kernel-based SSI were smaller and not always significant.
Subject terms: Quantitative trait, Genetic models
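The two relationship structures compared above can be sketched from a marker matrix: an additive genomic relationship matrix G of the kind used in GBLUP and a Gaussian kernel K of the kind used in KBLUP. The centering/scaling and the bandwidth rule below are common conventions assumed for illustration; the abstract does not specify the exact constructions used in the study.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
n_lines, n_markers = 300, 1000
M = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # 0/1/2 genotypes

# Additive genomic relationship matrix (VanRaden-style centering and scaling).
p = M.mean(axis=0) / 2.0
Z = M - 2.0 * p
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

# Gaussian kernel on squared Euclidean marker distances, bandwidth from the median.
D2 = squareform(pdist(M, metric="sqeuclidean"))
K = np.exp(-D2 / np.median(D2[D2 > 0]))

print("mean diagonal of G:", round(float(G.diagonal().mean()), 2))
print("K has unit diagonal:", np.allclose(np.diag(K), 1.0))
```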

12.
An algorithm for clustering cDNA fingerprints
Clustering large data sets is a central challenge in gene expression analysis. The hybridization of synthetic oligonucleotides to arrayed cDNAs yields a fingerprint for each cDNA clone. Cluster analysis of these fingerprints can identify clones corresponding to the same gene. We have developed a novel algorithm for cluster analysis that is based on graph theoretic techniques. Unlike other methods, it does not assume that the clusters are hierarchically structured and does not require prior knowledge of the number of clusters. In tests with simulated libraries the algorithm outperformed the Greedy method and demonstrated high speed and robustness to high error rates. Good solution quality was also obtained in a blind test on real cDNA fingerprints.
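A much-simplified sketch of the graph-theoretic idea: treat each clone's hybridization fingerprint as a set of positive probes, connect clones whose fingerprints are sufficiently similar, and report connected components as clusters. The paper's actual algorithm is considerably more sophisticated; the Jaccard similarity measure and threshold here are assumptions made for illustration.

```python
from collections import deque

def jaccard(a, b):
    """Similarity between two binary fingerprints (sets of positive probes)."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def cluster_fingerprints(fingerprints, threshold=0.6):
    """Connect clones whose similarity exceeds the threshold, then
    return connected components (clusters) found by breadth-first search."""
    n = len(fingerprints)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if jaccard(fingerprints[i], fingerprints[j]) >= threshold:
                adj[i].append(j)
                adj[j].append(i)
    seen, clusters = set(), []
    for start in range(n):
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            comp.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        clusters.append(comp)
    return clusters

# Toy fingerprints: sets of oligonucleotide probes that hybridized to each clone.
clones = [{1, 2, 3, 7}, {1, 2, 3}, {4, 5, 6}, {4, 5, 6, 8}, {9}]
print(cluster_fingerprints(clones))   # -> [[0, 1], [2, 3], [4]]
```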

13.
Does knowing when mental arithmetic judgments are right—and when they are wrong—lead to more accurate judgments over time? We hypothesize that the successful detection of errors (and avoidance of false alarms) may contribute to the development of mental arithmetic performance. Insight into error detection abilities can be gained by examining the “calibration” of mental arithmetic judgments—that is, the alignment between confidence in judgments and the accuracy of those judgments. Calibration may be viewed as a measure of metacognitive monitoring ability. We conducted a developmental longitudinal investigation of the relationship between the calibration of children's mental arithmetic judgments and their performance on a mental arithmetic task. Annually between Grades 5 and 8, children completed a problem verification task in which they rapidly judged the accuracy of arithmetic expressions (e.g., 25+50 = 75) and rated their confidence in each judgment. Results showed that calibration was strongly related to concurrent mental arithmetic performance, that calibration continued to develop even as mental arithmetic accuracy approached ceiling, that poor calibration distinguished children with mathematics learning disability from both low and typically achieving children, and that better calibration in Grade 5 predicted larger gains in mental arithmetic accuracy between Grades 5 and 8. We propose that good calibration supports the implementation of cognitive control, leading to long-term improvement in mental arithmetic accuracy. Because mental arithmetic “fluency” is critical for higher-level mathematics competence, calibration of confidence in mental arithmetic judgments may represent a novel and important developmental predictor of future mathematics performance.
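The calibration construct above can be computed in several standard ways. The sketch below shows two common measures, calibration bias (mean confidence minus proportion correct) and the Goodman-Kruskal gamma between confidence and accuracy, on made-up trial data; the study's precise calibration measure is not specified in this abstract.

```python
import numpy as np

def calibration_bias(confidence, correct):
    """Positive values indicate overconfidence (confidence on a 0-1 scale)."""
    return float(np.mean(confidence) - np.mean(correct))

def gamma_correlation(confidence, correct):
    """Goodman-Kruskal gamma: monotonic association between confidence and accuracy."""
    concordant = discordant = 0
    n = len(confidence)
    for i in range(n):
        for j in range(i + 1, n):
            product = (confidence[i] - confidence[j]) * (correct[i] - correct[j])
            if product > 0:
                concordant += 1
            elif product < 0:
                discordant += 1
    total = concordant + discordant
    return (concordant - discordant) / total if total else 0.0

# Hypothetical trials: confidence ratings (0-1) and verification accuracy (1/0).
conf = np.array([0.9, 0.8, 0.95, 0.4, 0.6, 0.3, 0.85, 0.5])
acc = np.array([1, 1, 1, 0, 1, 0, 1, 0])
print("bias :", round(calibration_bias(conf, acc), 3))
print("gamma:", round(gamma_correlation(conf, acc), 3))
```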

14.
Knee osteoarthritis is a progressive disease mediated by high joint loads. Foot progression angle modifications that reduce the knee adduction moment (KAM), a surrogate of knee loading, have demonstrated efficacy in alleviating pain and improving function. Although changes to the foot progression angle are overall beneficial, KAM reductions are not consistent across patients. Moreover, customized interventions are time-consuming and require instrumentation not commonly available in the clinic. We present a regression model that uses minimal clinical data—a set of six features easily obtained in the clinic—to predict the extent of first peak KAM reduction after toe-in gait retraining. For such a model to generalize, the training data must be large and variable. Given the lack of large public datasets that contain different gaits for the same patient, we generated this dataset synthetically. Insights learned from a ground-truth dataset with both baseline and toe-in gait trials (N = 12) enabled the creation of a large (N = 138) synthetic dataset for training the predictive model. On a test set of data collected by a separate research group (N = 15), the first peak KAM reduction was predicted with a mean absolute error of 0.134% body weight * height (%BW*HT). This error is smaller than the standard deviation of the first peak KAM during baseline walking averaged across test subjects (0.306 %BW*HT). This work demonstrates the feasibility of training predictive models with synthetic data and provides clinicians with a new tool to predict the outcome of patient-specific gait retraining without requiring gait lab instrumentation.
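A minimal version of the prediction step: fit a regression from six clinical features to first-peak KAM reduction and score it with mean absolute error. The features, synthetic targets, and choice of ridge regression below are hypothetical placeholders; the paper trains on its own synthetic gait dataset with its own feature set and model.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 138   # matches the size of the synthetic training set mentioned in the abstract
# Six hypothetical clinical features (e.g., baseline KAM, walking speed, alignment).
X = rng.normal(size=(n, 6))
true_w = np.array([0.08, 0.03, -0.02, 0.05, 0.0, 0.01])
y = X @ true_w + rng.normal(scale=0.05, size=n)   # first-peak KAM reduction (%BW*HT)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("MAE (%BW*HT):", round(mean_absolute_error(y_te, model.predict(X_te)), 3))
```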

15.
We analyzed DNA fingerprints in the chestnut blight fungus, Cryphonectria parasitica, for stability, inheritance, linkage and variability in a natural population. DNA fingerprints resulting from hybridization with a dispersed, moderately repetitive DNA sequence of C. parasitica in plasmid pMS5.1 comprised 6–17 restriction fragments per individual isolate. In a laboratory cross and in progeny from a single perithecium collected from a field population, the presence/absence of 11 fragments (laboratory cross) and 12 fragments (field progeny set) segregated in 1:1 ratios. Two fragments in each progeny set cosegregated; no other linkage was detected among the segregating fragments. Mutations, identified by missing bands, were detected for only one fragment, for which 4 of 43 progeny lacked a band present in both parents; no novel fragments were detected in any progeny. All other fragments appeared to be stably inherited. Hybridization patterns did not change during vegetative growth or sporulation. However, fingerprint patterns of single conidial isolates of strains EP155 and EP67 were found to be heterogeneous due to mutations that occurred during culturing in the laboratory since these strains were first isolated in 1976–1977. In a population sample of 39 C. parasitica isolates, we found 33 different fingerprint patterns with pMS5.1. Most isolates differed from all other isolates by the presence or absence of several fragments. Six fingerprint patterns each occurred twice. Isolates with identical fingerprints occurred in cankers on the same chestnut stems three times; isolates within the other three pairs came from cankers more than 5 m apart. The null hypothesis of random mating in this population could not be rejected if the six putative clones were removed from the analysis. Thus, a rough estimate of the clonal fraction of this population is 6 in 39 isolates (15.4%).
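The 1:1 segregation claim above is the kind of result normally checked with a chi-square goodness-of-fit test. The sketch below tests one fragment's presence/absence counts against a 1:1 expectation; the counts are illustrative, not the paper's raw data.

```python
from scipy.stats import chisquare

# Hypothetical counts for one fingerprint fragment among 43 progeny.
present, absent = 24, 19
stat, p_value = chisquare([present, absent], f_exp=[21.5, 21.5])
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
# A non-significant p-value is consistent with 1:1 (Mendelian) segregation.
```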

16.
Predicting food web structure in future climates is a pressing goal of ecology. These predictions may be impossible without a solid understanding of the factors that structure current food webs. The most fundamental aspect of food web structure—the relationship between the number of links and species—is still poorly understood. Some species interactions may be physically or physiologically ‘forbidden’—like consumption by non-consumer species—with possible consequences for food web structure. We show that accounting for these ‘forbidden interactions’ constrains the feasible link-species space, in tight agreement with empirical data. Rather than following one particular scaling relationship, food webs are distributed throughout this space according to shared biotic and abiotic features. Our study provides new insights into the long-standing question of which factors determine this fundamental aspect of food web structure.

17.
Genomic prediction uses DNA sequences and phenotypes to predict genetic values. In homogeneous populations, theory indicates that the accuracy of genomic prediction increases with sample size. However, differences in allele frequencies and linkage disequilibrium patterns can lead to heterogeneity in SNP effects. In this context, calibrating genomic predictions using a large, potentially heterogeneous, training data set may not lead to optimal prediction accuracy. Some studies have tried to address this sample size/homogeneity trade-off using training set optimization algorithms; however, this approach assumes that a single training data set is optimal for all individuals in the prediction set. Here, we propose an approach that identifies, for each individual in the prediction set, a subset of the training data (i.e., a set of support points) from which predictions are derived. The methodology that we propose is a sparse selection index (SSI) that integrates selection index methodology with sparsity-inducing techniques commonly used for high-dimensional regression. The sparsity of the resulting index is controlled by a regularization parameter (λ); Genomic Best Linear Unbiased Prediction (G-BLUP), the prediction method most commonly used in plant and animal breeding, appears as a special case obtained when λ = 0. In this study, we present the methodology and demonstrate (using two wheat data sets with phenotypes collected in 10 different environments) that the SSI can achieve significant gains in prediction accuracy (anywhere between 5 and 10%) relative to G-BLUP.
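The G-BLUP special case mentioned above can be written as a selection index: predictions for new genotypes are weighted sums of training phenotypes, with weights derived from the genomic relationship matrix and a shrinkage parameter λ = σe²/σg². The sketch below shows that dense index with λ assumed known and a simulated relationship matrix; the paper's SSI additionally forces many index weights to zero, which is not implemented here.

```python
import numpy as np

def gblup_predict(G, y_train, train_idx, test_idx, lam=1.0):
    """Selection-index form of G-BLUP.

    u_hat(test) = G[test, train] @ inv(G[train, train] + lam * I) @ (y - mean),
    with lam = sigma_e^2 / sigma_g^2 treated as known for this sketch.
    """
    y_c = y_train - y_train.mean()
    G_tt = G[np.ix_(train_idx, train_idx)]
    G_nt = G[np.ix_(test_idx, train_idx)]
    weights = np.linalg.solve(G_tt + lam * np.eye(len(train_idx)), y_c)
    return y_train.mean() + G_nt @ weights

# Toy example with a simulated relationship matrix and phenotypes.
rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 500))                    # centered marker matrix stand-in
G = Z @ Z.T / Z.shape[1]
g = rng.multivariate_normal(np.zeros(100), G)      # true genetic values
y = g + rng.normal(scale=0.7, size=100)            # phenotypes
train, test = np.arange(80), np.arange(80, 100)
u_hat = gblup_predict(G, y[train], train, test, lam=0.5)
print("prediction accuracy (corr):", round(float(np.corrcoef(u_hat, g[test])[0, 1]), 2))
```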

18.
19.
A large number of recent studies suggest that the sensorimotor system uses probabilistic models to predict its environment and makes inferences about unobserved variables in line with Bayesian statistics. One of the important features of Bayesian statistics is Occam's Razor—an inbuilt preference for simpler models when comparing competing models that explain some observed data equally well. Here, we test directly for Occam's Razor in sensorimotor control. We designed a sensorimotor task in which participants had to draw lines through clouds of noisy samples of an unobserved curve generated by one of two possible probabilistic models—a simple model with a large length scale, leading to smooth curves, and a complex model with a short length scale, leading to more wiggly curves. In training trials, participants were informed about the model that generated the stimulus so that they could learn the statistics of each model. In probe trials, participants were then exposed to ambiguous stimuli. In probe trials where the ambiguous stimulus could be fitted equally well by both models, we found that participants showed a clear preference for the simpler model. Moreover, we found that participants’ choice behaviour was quantitatively consistent with Bayesian Occam's Razor. We also show that participants’ drawn trajectories were similar to samples from the Bayesian predictive distribution over trajectories and significantly different from two non-probabilistic heuristics. In two control experiments, we show that the preference of the simpler model cannot be simply explained by a difference in physical effort or by a preference for curve smoothness. Our results suggest that Occam's Razor is a general behavioural principle already present during sensorimotor processing.
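Bayesian Occam's Razor of the kind invoked above can be made concrete by comparing the marginal likelihood (evidence) of two Gaussian-process models with different length scales on the same noisy points: the log-determinant term automatically penalizes the more flexible short-length-scale model. The stimuli, length scales, and noise level below are illustrative assumptions, not the study's actual generative models.

```python
import numpy as np

def rbf_kernel(x, length_scale):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_log_evidence(x, y, length_scale, noise_var=0.05):
    """log p(y | model) for a zero-mean GP: the quantity used for model comparison."""
    K = rbf_kernel(x, length_scale) + noise_var * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return float(-0.5 * y @ alpha - np.log(np.diag(L)).sum()
                 - 0.5 * len(x) * np.log(2 * np.pi))

# Noisy samples from a smooth underlying curve.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

evidence_simple = gp_log_evidence(x, y, length_scale=0.3)    # smooth model
evidence_complex = gp_log_evidence(x, y, length_scale=0.05)  # wiggly model
print(f"log evidence  simple: {evidence_simple:.1f}   complex: {evidence_complex:.1f}")
# Both models can fit the points, but the complexity penalty in the evidence
# favors the smoother, simpler model for data like these.
```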

20.
Sequencing by hybridization (SBH) approaches to DNA sequencing face two conflicting constraints. First, in order to ensure that the target DNA binds reliably, the oligonucleotide probes that are attached to the chip array must be >15 bp in length. Secondly, the total number of possible 15 bp oligonucleotides is too large (>4¹⁵) to fit on a chip with current technology. To circumvent the conflict between these two opposing constraints, we present a novel gene-specific DNA chip design. Our design is based on the idea that not all conceivable oligonucleotides need to be placed on a chip—only those that capture sequence combinations occurring in nature. Our approach uses a training set of aligned sequences that code for the gene in question. We compute the minimum number of oligonucleotides (generally 15–30 bp in length) that need to be placed on a DNA chip to capture the variation implied by the training set using a graph search algorithm. We tested the approach in silico using cytochrome-b sequences. Results indicate that on average, 98% of the sequence of an unknown target can be determined using the approach.
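The minimization problem described above, choosing the smallest probe set that captures the variation in a training set, resembles set cover. The sketch below uses a greedy set-cover heuristic with a deliberately simplified coverage criterion (each training sequence must contain at least one chosen probe); the paper's actual graph search algorithm and its coverage definition are different, and the sequences and probe length here are toy assumptions.

```python
def candidate_probes(sequences, k):
    """All k-mers occurring in the aligned training sequences."""
    probes = set()
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            probes.add(seq[i:i + k])
    return probes

def greedy_probe_set(sequences, k):
    """Greedy set cover: pick probes until every training sequence
    contains at least one chosen probe (a simplified coverage criterion)."""
    probes = candidate_probes(sequences, k)
    uncovered = set(range(len(sequences)))
    chosen = []
    while uncovered:
        best = max(probes, key=lambda p: sum(1 for i in uncovered if p in sequences[i]))
        hit = {i for i in uncovered if best in sequences[i]}
        if not hit:
            break   # remaining sequences share no k-mer with the candidate pool
        chosen.append(best)
        uncovered -= hit
    return chosen

# Toy aligned training sequences sharing a conserved region.
training = ["ACGTACGTAGGCTTAA", "ACGTACGTAGGCTTGA", "TTGCACGTAGGCTAAA"]
print(greedy_probe_set(training, k=8))   # one shared probe can cover all three here
```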
