Similar Documents
20 similar documents found (search time: 62 ms)
1.

Aim

To assess the performance of automated disease detection in diabetic retinopathy screening using two-field mydriatic photography.

Methods

Images from 8,271 sequential patient screening episodes from a South London diabetic retinopathy screening service were processed by the Medalytix iGrading™ automated grading system. For each screening episode, macula-centred and disc-centred images of both eyes were acquired and independently graded according to the English national grading scheme. Where discrepancies were found between the automated result and the original manual grade, internal and external arbitration was used to determine the final study grades. Two versions of the software were used: one that detected microaneurysms alone, and one that detected blot haemorrhages and exudates in addition to microaneurysms. Results for each version were calculated once using both fields and once using the macula-centred field alone.

Results

Of the 8,271 episodes, 346 (4.2%) were considered unassessable. Referable disease was detected in 587 episodes (7.1%). The sensitivity of the automated system for detecting unassessable images ranged from 97.4% to 99.1% depending on configuration. The sensitivity of the automated system for referable episodes ranged from 98.3% to 99.3%. All the episodes that included proliferative or pre-proliferative retinopathy were detected by the automated system regardless of configuration (192/192, 95% confidence interval 98.0% to 100%). If implemented as the first step in grading, the automated system would have reduced the manual grading effort by between 2,183 and 3,147 patient episodes (26.4% to 38.1%).
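The confidence interval quoted for the 192/192 result can be reproduced in a few lines of Python; the use of the exact Clopper-Pearson method here is our assumption, since the abstract does not name the interval type.

from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    # Exact (Clopper-Pearson) two-sided CI for a binomial proportion.
    lower = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lower, upper

# 192 of 192 proliferative/pre-proliferative episodes detected:
low, high = clopper_pearson(192, 192)
print(f"{low:.3f} to {high:.3f}")  # ~0.981 to 1.000, in line with the reported 98.0% to 100%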

Conclusion

Automated grading can safely reduce the workload of manual grading using two-field mydriatic photography in a routine screening service.

2.

Background  

Profile Hidden Markov Models (pHMMs) are a widely used tool in protein family research. To date, however, no method exists to visualize all of their central aspects graphically in an intuitively understandable way.

3.

Objective

Digital retinal imaging is an established method of screening for diabetic retinopathy (DR). About 1% of the world's blindness and visual impairment is currently attributable to DR. However, the increasing prevalence of diabetes mellitus and DR is increasing the workload of those with expertise in grading retinal images. Safe and reliable automated analysis of retinal images could support screening services worldwide. This study aimed to compare the ability of the Iowa Detection Program (IDP) to detect diabetic eye disease (DED) with human grading carried out at Moorfields Reading Centre on the population of the Nakuru Study from Kenya.

Participants

Retinal images were taken from participants of the Nakuru Eye Disease Study in Kenya in 2007/08 (n = 4,381 participants; NW6 Topcon digital retinal camera).

Methods

First, human grading was performed for the presence or absence of DR; those with DR were then subdivided into referable or non-referable DR. The automated IDP software was deployed to identify those with DR and to categorize the severity of DR.

Main Outcome Measures

The primary outcomes were sensitivity, specificity, and positive and negative predictive value of IDP versus the human grader as reference standard.
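For reference, these outcome measures follow directly from the 2x2 table of automated results against the human reference standard; a minimal sketch with purely hypothetical counts (not study data):

def screening_metrics(tp, fp, fn, tn):
    # Sensitivity, specificity, PPV and NPV from a 2x2 table
    # (automated result vs. reference-standard human grading).
    return {
        "sensitivity": tp / (tp + fn),  # positives detected among the truly diseased
        "specificity": tn / (tn + fp),  # negatives among the truly disease-free
        "ppv": tp / (tp + fp),          # probability of disease given a positive result
        "npv": tn / (tn + fn),          # probability of no disease given a negative result
    }

# Hypothetical counts for illustration only:
print(screening_metrics(tp=90, fp=50, fn=10, tn=850))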

Results

Altogether 3,460 participants were included. Of these, 113 had DED, giving a prevalence of 3.3% (95% CI, 2.7–3.9%). The sensitivity of the IDP in detecting DED, relative to human grading, was 91.0% (95% CI, 88.0–93.4%), and its detection of DED gave an AUC of 0.878 (95% CI, 0.850–0.905). The negative predictive value was 98%. The IDP missed no vision-threatening retinopathy in any patient, and none of the false-negative cases met criteria for treatment.

Conclusions

In this epidemiological sample, the IDP's grading was comparable to that of human graders. It might therefore be feasible to include it in routine epidemiological grading.

4.
5.
Some journals are using ineffective software to screen images for manipulation. In doing so, they are creating a false sense of security in the research community about the integrity of the image data they publish.

"There must be an easier way!" It's the mantra of anyone performing a labor-intensive task, and the motivation behind the human desire for automation. Apparently, it also holds true for image screening.

[Figure 1: cartoon. ©cartoonbank.com. All Rights Reserved.]

At the Rockefeller University Press, we screen all images in all accepted papers for evidence of manipulation (1). We do this by visually inspecting every image using basic adjustments in Photoshop. When editors from other publishers see a demonstration of our process, they often assert, "There must be an easier way!"

The possibility of automating the image screening process was described in a Nature news article more than two years ago (2). About a year ago, one of the largest publisher services providers, Cadmus Communications, started offering an automated image screening service using a program called Rigour, which they publicize as "the world's first automated Image Manipulation Analysis Software" (www.suprocktech.com).

Cadmus demonstrated an early version of this software at the Press, but we found that it could not detect blatant examples of band deletions, band intensity adjustments, large regions of duplication, or composite images. In an e-mail to Cadmus dated September 11, 2007, I expressed my concern: "I am worried about causing a setback in the publishing community if editors think the current Rigour software is effective at detecting problems in biomedical images (specifically gel images). I have already heard of editors saying they will not initiate visual screening because they will just use the Cadmus software. This is creating a false sense of security in the community, because the software is not yet an effective screening tool." I received no response to this e-mail.

I was surprised to learn that, within a couple of months, Cadmus had started to sell an image screening service to publishers using this software. But given the availability of such a service, I was not surprised to learn that editors at two very prominent journals were using it. Publishers were clearly looking for a less labor-intensive solution to an image problem, in two senses of the word: image data, and public image. They wanted to be seen by the public to be actively addressing the problem of image manipulation.

I asked these publishers if they had tested the service before they started to use it. Both had done so, but one of them declined to send the results of their tests; the other indicated that the Cadmus service had a 20% success rate. It seems that these publishers were not really concerned whether the screening process they used actually worked.

Problems with the service were still evident recently when I was consulted by a third party about a case of image manipulation in a paper published in one of these journals. The paper made a surprising claim with important clinical implications. Given that journal's policy of only screening a fraction of papers for image manipulation, one might expect that it would at least select those with important clinical implications. In fact, the papers are selected at random, and this one had not been screened. After questions were raised, the figures were screened by Cadmus using their software, but it did not detect problems with the images that were easily revealed by visual screening.

In personal communications, publishers have argued that using the Cadmus service must be better than doing nothing. In fact, it is worse than doing nothing: these publishers are creating a false sense of security in the community about the integrity of the image data they publish.

A recent test of the Cadmus image screening service showed some improvement, with the software detecting manipulation in 10 of 22 images (45%) in which manipulation had previously been detected by visual inspection. However, when multiplied by the small fraction of images being screened by these journals, the percentage of images that are effectively screened is dramatically lower. At the very least, these journals should fully disclose their screening practices (and their efficacy) to their readers.

Although complete protection against manipulated images cannot be guaranteed, it is incumbent on journal editors to screen the images they publish using the best available method, not just to some known (and low) percentage of efficacy. The issue of data integrity should not be left to chance and probability. This is scholarly publishing, not blackjack.

There are others developing software to detect image manipulation, and it is possible that these applications may eventually prove to be useful and effective tools for editors. But journal editors should not rely on an automated method for image screening unless they know it is as effective as the visual method. Otherwise, readers are left to hedge their bets.

6.
V-Xtractor (http://www.cmde.science.ubc.ca/mohn/software.html) uses Hidden Markov Models to locate, verify, and extract defined hypervariable sequence segments (V1–V9) from bacterial, archaeal, and fungal small-subunit rRNA sequences. With a detection efficiency of 99.6% and low susceptibility to false positives, this tool improves data reliability and facilitates subsequent analysis in community assays.

7.
Diabetic retinopathy (DR) is a complication of diabetes mellitus that affects more than one-quarter of the population with diabetes and can lead to blindness if not discovered in time. Automated screening enables the identification of patients who need further medical attention. This study aimed to classify retinal images of Aboriginal and Torres Strait Islander peoples using an automated computer-based multi-lesion eye screening program for diabetic retinopathy. The multi-lesion classifier was trained on 1,014 images from the São Paulo Eye Hospital and tested on retinal images containing no DR-related lesions, single lesions, or multiple types of lesions from the Inala Aboriginal and Torres Strait Islander health care centre. The automated multi-lesion classifier has the potential to improve the efficiency of clinical diabetic retinopathy screening. Our program does not require training images from the specific ethnic group or population being assessed, and it identifies retinal lesions without image pre- or post-processing. In this Aboriginal and Torres Strait Islander population, the program achieved 100% sensitivity and 88.9% specificity in identifying bright lesions, while detection of red lesions achieved a sensitivity of 67% and a specificity of 95%. When both bright and red lesions were present, 100% sensitivity with 88.9% specificity was obtained. All results obtained with this automated screening program meet WHO standards for diabetic retinopathy screening.

8.
We introduce a new approach to learning statistical models from multiple sequence alignments (MSAs) of proteins. Our method, called GREMLIN (Generative REgularized ModeLs of proteINs), learns an undirected probabilistic graphical model of the amino acid composition within the MSA. The resulting model encodes both the position-specific conservation statistics and the correlated mutation statistics between sequential and long-range pairs of residues. Existing techniques for learning graphical models from MSAs either make strong, and often inappropriate, assumptions about the conditional independencies within the MSA (e.g., Hidden Markov Models), or else use suboptimal algorithms to learn the parameters of the model. In contrast, GREMLIN makes no a priori assumptions about the conditional independencies within the MSA. We formulate and solve a convex optimization problem, thus guaranteeing that we find a globally optimal model at convergence. The resulting model is also generative, allowing for the design of new protein sequences that have the same statistical properties as those in the MSA. We perform a detailed analysis of covariation statistics on the extensively studied WW and PDZ domains and show that our method outperforms an existing algorithm for learning undirected probabilistic graphical models from MSAs. We then apply our approach to 71 additional families from the PFAM database and demonstrate that the resulting models significantly outperform Hidden Markov Models in terms of predictive accuracy.
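The flavour of such a model can be illustrated by how a sequence is scored under position-specific fields and pairwise couplings; the sketch below is our illustration with randomly initialized parameters, not the GREMLIN implementation (in practice the arrays h and J would be learned from the MSA by the convex optimization described above):

import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"
AA = {aa: i for i, aa in enumerate(ALPHABET)}

def sequence_score(seq, h, J):
    # Unnormalized log-score of a sequence under a pairwise undirected
    # model: single-site fields h[i, a] plus couplings J[i, j, a, b]
    # over all position pairs i < j.
    idx = [AA[aa] for aa in seq]
    n = len(idx)
    score = sum(h[i, idx[i]] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            score += J[i, j, idx[i], idx[j]]
    return score

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 20))                    # placeholder field parameters
J = rng.normal(scale=0.1, size=(5, 5, 20, 20))  # placeholder coupling parameters
print(sequence_score("ACDEF", h, J))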

9.
10.
This paper proposes an ensemble of classifiers for biomedical name recognition in which three classifiers, one Support Vector Machine and two discriminative Hidden Markov Models, are combined effectively using a simple majority voting strategy. In addition, we incorporate three post-processing modules, namely an abbreviation resolution module, a protein/gene name refinement module, and a simple dictionary matching module, into the system to further improve performance. Evaluation shows that our system achieves the best performance among the 10 participating systems, with a balanced F-measure of 82.58 on the closed evaluation of the BioCreative protein/gene name recognition task (Task 1A).
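A simple per-token majority vote of the kind described above might look like the following sketch (the BIO tag names are illustrative, not taken from the paper):

from collections import Counter

def majority_vote(*label_sequences):
    # Combine per-token labels from several classifiers; ties fall back
    # to the first classifier's prediction.
    combined = []
    for labels in zip(*label_sequences):
        top, count = Counter(labels).most_common(1)[0]
        combined.append(top if count > 1 else labels[0])
    return combined

svm   = ["B-protein", "I-protein", "O", "O"]
hmm_1 = ["B-protein", "O",         "O", "O"]
hmm_2 = ["B-protein", "I-protein", "O", "B-protein"]
print(majority_vote(svm, hmm_1, hmm_2))  # ['B-protein', 'I-protein', 'O', 'O']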

11.
In current clinical practice, the Gleason grading system is one of the most powerful prognostic predictors for prostate cancer (PCa). The grading system is based on the architectural pattern of cancerous epithelium in histological images. However, the standard procedure of histological examination often involves complicated tissue fixation and staining, which are time-consuming and may delay diagnosis and surgery. In this study, label-free multiphoton microscopy (MPM) was used to acquire subcellular-resolution images of unstained prostate tissue. A deep learning architecture (U-net) was then introduced for epithelium segmentation of prostate tissue in MPM images. The segmentation results were merged with the original MPM images to train a classification network (AlexNet) for automated Gleason grading. The developed method achieved an overall pixel accuracy of 92.3% with a mean F1 score of 0.839 for epithelium segmentation. By merging the segmentation results with the MPM images, the accuracy of Gleason grading was improved from 72.42% to 81.13% on the hold-out test set. Our results suggest that MPM combined with deep learning holds the potential to be a fast and powerful clinical tool for PCa diagnosis.
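One common way to merge a segmentation result with the source image for a downstream classifier is to stack the predicted mask as an additional input channel; the abstract does not specify the exact merging scheme, so the sketch below is an assumption:

import numpy as np

def merge_image_and_mask(mpm_image, epithelium_mask):
    # Stack a binary segmentation mask onto a grayscale image as a
    # second channel, giving an (H, W, 2) array for the classifier.
    assert mpm_image.shape == epithelium_mask.shape
    return np.stack([mpm_image, epithelium_mask.astype(mpm_image.dtype)], axis=-1)

image = np.random.rand(224, 224)       # placeholder MPM image
mask = np.random.rand(224, 224) > 0.5  # placeholder U-net output
print(merge_image_and_mask(image, mask).shape)  # (224, 224, 2)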

12.

Background  

High content screening (HCS)-based image analysis is becoming an important and widely used research tool. Capitalizing on this technology, ample cellular information can be extracted from high content cellular images. In this study, an automated, reliable, and quantitative cellular image analysis system developed in-house was employed to quantify the toxic responses of human H4 neuroglioma cells exposed to metal oxide nanoparticles. This system has proved to be an essential tool in our study.

13.
Recent applications of Hidden Markov Models in computational biology
This paper examines recent developments and applications of Hidden Markov Models (HMMs) to various problems in computational biology, including multiple sequence alignment, homology detection, protein sequence classification, and genomic annotation.

14.
Hidden Markov Models (HMMs) are practical tools that provide a probabilistic basis for protein secondary structure prediction. In these models, usually only the information to the left-hand side of an amino acid is considered. Accordingly, these models seem to be inefficient with respect to long-range correlations. In this work we discuss a Segmental Semi-Markov Model (SSMM) in which the information on both sides of an amino acid is considered. It seems reasonable to assume that the information on both sides of an amino acid provides a suitable basis for measuring dependencies. We model these dependencies by dividing them into shorter dependencies, each of which can be applied to estimating the probability of segments in structural classes. Several conditional probabilities concerning the dependency of an amino acid on the residues appearing on both of its sides are considered. Based on these conditional probabilities, a weighted model is obtained to calculate the probability of each segment in a structure. This results in a 2.27% increase in prediction accuracy compared with ordinary Segmental Semi-Markov Models (SSMMs). We also compare the performance of our model with that of the Segmental Semi-Markov Model introduced by Schmidler et al. [C.S. Schmidler, J.S. Liu, D.L. Brutlag, Bayesian segmentation of protein secondary structure, J. Comp. Biol. 7(1/2) (2000) 233-248]. The calculations show that the overall prediction accuracy of our model is higher than that of the SSMM introduced by Schmidler et al.
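The abstract does not give the exact weighting scheme, but one plausible illustration is a convex combination of left- and right-context conditional probabilities accumulated over a segment; everything in this sketch (the weight w and the stand-in conditional models) is hypothetical:

def segment_probability(residues, p_left, p_right, w=0.5):
    # Illustrative weighted combination of left- and right-context
    # conditional probabilities for a candidate segment. p_left and
    # p_right stand in for the paper's dependency models.
    prob = 1.0
    for i in range(len(residues)):
        prob *= w * p_left(i, residues) + (1 - w) * p_right(i, residues)
    return prob

uniform = lambda i, seq: 1.0 / 20  # dummy conditional model over 20 amino acids
print(segment_probability("ACD", uniform, uniform))  # 0.000125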

15.
We present three programs for ab initio gene prediction in eukaryotes: Exonomy, Unveil, and GlimmerM. Exonomy is a 23-state Generalized Hidden Markov Model (GHMM), Unveil is a 283-state standard Hidden Markov Model (HMM), and GlimmerM is a previously described genefinder that utilizes decision trees and Interpolated Markov Models (IMMs). All three are readily re-trainable for new organisms and have been found to perform well compared with other genefinders. Results are presented for Arabidopsis thaliana. Cases have been found where each of the genefinders outperforms each of the others, demonstrating the collective value of this ensemble of genefinders. These programs are all accessible through web servers at http://www.tigr.org/software.

16.
Junior physicians learn mainly by observation in the operating room, and senior physicians evaluate them based on the same kind of observation. The evaluation of knowledge transfer is thus done without quantitative methods and relies mainly on subjective assessment. In this paper, we present some recent techniques used to objectively evaluate medical gestures. The classical techniques are Hidden Markov Models (HMMs) and Dynamic Time Warping (DTW), both of which rely on temporal analysis of the gestures. We propose here a technique based on arc-length parametrization in order to analyze the gestures in space, which is more appropriate because it gives information about the shape of the gestures independently of the chosen coordinate system.
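A minimal sketch of the idea (our illustration, not the authors' implementation): compute the cumulative arc length along a recorded 3D trajectory and resample the gesture at uniform arc-length steps, so that the representation reflects the shape of the path rather than the speed of execution:

import numpy as np

def resample_by_arc_length(points, n_samples=100):
    # Reparametrize a 3D trajectory of shape (N, 3) by arc length and
    # resample it at n_samples equally spaced arc-length values.
    segment_lengths = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(segment_lengths)])  # cumulative arc length
    s_new = np.linspace(0.0, s[-1], n_samples)
    return np.column_stack([np.interp(s_new, s, points[:, k]) for k in range(3)])

t = np.linspace(0, 1, 50) ** 2  # unevenly timed samples, as in a real gesture
helix = np.column_stack([np.cos(6 * t), np.sin(6 * t), t])
print(resample_by_arc_length(helix, 10).shape)  # (10, 3)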

17.

Background  

Discriminative models are designed to naturally address classification tasks. However, some applications require the inclusion of grammar rules, and in these cases generative models, such as Hidden Markov Models (HMMs) and Stochastic Grammars, are routinely applied.

18.

Background  

Hidden Markov Models (HMMs) have proven very useful in computational biology for such applications as sequence pattern matching, gene-finding, and structure prediction. Thus far, however, they have been confined to representing 1D sequences (or the aspects of structure that can be represented by character strings).

19.
In vivo quantification of β-amyloid deposition using positron emission tomography is emerging as an important procedure for the early diagnosis of Alzheimer's disease and is likely to play an important role in upcoming clinical trials of disease-modifying agents. However, many groups use manually defined regions, which are not standardized across imaging centers, and analyses are often limited to a handful of regions because of the labor-intensive nature of manual region drawing. In this study, we developed an automatic image quantification protocol based on FreeSurfer, an automated whole-brain segmentation tool, for quantitative analysis of amyloid images. Standard manual tracing and FreeSurfer-based analyses were performed in 77 participants, including 67 cognitively normal individuals and 10 individuals with early Alzheimer's disease. The manual and FreeSurfer approaches yielded nearly identical estimates of amyloid burden (intraclass correlation = 0.98) as assessed by the mean cortical binding potential. An MRI test-retest study demonstrated excellent reliability of FreeSurfer-based regional amyloid burden measurements. The FreeSurfer-based analysis also revealed that the majority of cerebral cortical regions accumulate amyloid in parallel, with the slope of accumulation being the primary difference between regions.
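An agreement statistic of this kind can be computed as a two-way, absolute-agreement, single-measure intraclass correlation; the choice of this ICC variant, and the example values below, are our assumptions for illustration:

import numpy as np

def icc_a1(data):
    # ICC(A,1): two-way, absolute-agreement, single-measure intraclass
    # correlation. data is an (n_subjects, k_raters) matrix.
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)  # per-subject means
    col_means = data.mean(axis=0)  # per-method means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)
    sse = ((data - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical manual vs. FreeSurfer mean cortical binding potentials:
pairs = [[0.10, 0.11], [0.25, 0.24], [0.40, 0.42], [0.05, 0.06]]
print(round(icc_a1(np.array(pairs)), 3))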

20.
A central challenge of medical imaging studies is to extract biomarkers that characterize disease pathology or outcomes. Modern automated approaches have found tremendous success in high-resolution, high-quality magnetic resonance images. These methods, however, may not translate to low-resolution images acquired on magnetic resonance imaging (MRI) scanners with lower magnetic field strength. In low-resource settings where low-field scanners are more common and there is a shortage of radiologists to manually interpret MRI scans, it is critical to develop automated methods that can augment or replace manual interpretation, while accommodating reduced image quality. We present a fully automated framework for translating radiological diagnostic criteria into image-based biomarkers, inspired by a project in which children with cerebral malaria (CM) were imaged using low-field 0.35 Tesla MRI. We integrate multiatlas label fusion, which leverages high-resolution images from another sample as prior spatial information, with parametric Gaussian hidden Markov models based on image intensities, to create a robust method for determining ventricular cerebrospinal fluid volume. We also propose normalized image intensity and texture measurements to determine the loss of gray-to-white matter tissue differentiation and sulcal effacement. These integrated biomarkers have excellent classification performance for determining severe brain swelling due to CM.
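As a much-simplified sketch of the intensity-modelling step, one can fit a mixture of Gaussians to voxel intensities and take the brightest class as ventricular CSF; the full method additionally uses the hidden Markov spatial structure and multiatlas priors described above, which this illustration omits, and the brightest-class rule assumes T2-weighted contrast:

import numpy as np
from sklearn.mixture import GaussianMixture

def segment_csf_by_intensity(intensities, n_classes=3):
    # Fit a Gaussian mixture to 1D voxel intensities and return a mask
    # for the class with the highest mean intensity.
    x = np.asarray(intensities, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(x)
    csf_class = int(np.argmax(gmm.means_.ravel()))
    return gmm.predict(x) == csf_class

# Synthetic intensities for three tissue classes:
voxels = np.concatenate([np.random.normal(m, 5.0, 1000) for m in (40, 90, 160)])
mask = segment_csf_by_intensity(voxels)
print(mask.sum(), "of", voxels.size, "voxels labelled as CSF")
# Multiplying the voxel count by the voxel volume would give the CSF volume.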
