Similar Literature
A total of 20 similar documents were retrieved.
1.
Imaging sebaceous glands and evaluating their morphometric parameters are important for the diagnosis and treatment of sebum-related skin problems. In this article, we investigate the feasibility of high-resolution optical coherence tomography (OCT) combined with deep-learning-assisted automatic identification for these purposes. Specifically, with a spatial resolution of 2.3 μm × 6.2 μm (axial × lateral, in air), OCT is capable of clearly differentiating sebaceous glands from other skin structures and resolving the sebocyte layer. To achieve efficient and timely image analysis, a deep learning approach built upon ResNet18 is developed to automatically classify OCT images (with/without sebaceous gland), reaching a classification accuracy of 97.9%. Based on the results of automatic identification, we further demonstrate the possibility of measuring gland size, sebocyte layer thickness, and gland density.
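The abstract gives no implementation details, so the following is only a minimal sketch of how a ResNet18-based binary classifier for single-channel OCT B-scans (with/without sebaceous gland) might be assembled in PyTorch; the input-channel change, optimizer, and learning rate are assumptions rather than the paper's actual settings.

```python
# Minimal sketch: ResNet18 adapted to binary OCT classification
# (gland / no gland). Hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3,
                        bias=False)                  # OCT B-scans are 1-channel
model.fc = nn.Linear(model.fc.in_features, 2)        # with / without gland

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a (B, 1, H, W) batch of OCT images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```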

2.
Deep learning based retinopathy classification with optical coherence tomography (OCT) images has recently attracted great attention. However, existing deep learning methods fail to work well when the training and testing datasets differ, owing to the general problem of domain shift between datasets caused by different collection devices, subjects, imaging parameters, etc. To address this practical and challenging issue, we propose a novel deep domain adaptation (DDA) method that trains a model on a labeled dataset and adapts it to an unlabeled dataset collected under different conditions. It consists of two modules for domain alignment: adversarial learning and entropy minimization. We conduct extensive experiments on three public datasets to evaluate the performance of the proposed method. The results indicate that there are large domain shifts between datasets, resulting in poor performance for conventional deep learning methods. The proposed DDA method significantly outperforms existing methods for retinopathy classification with OCT images, achieving classification accuracies of 0.915, 0.959 and 0.990 under three cross-domain (cross-dataset) scenarios. Moreover, it obtains performance comparable to that of human experts on a dataset from which no labeled data were used to train the DDA method. We have also visualized the learned features using the t-distributed stochastic neighbor embedding (t-SNE) technique; the results demonstrate that the proposed method learns discriminative features for retinopathy classification.
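As a rough illustration of the entropy-minimization module named above (the adversarial-learning module is omitted), the loss on unlabeled target-domain predictions could be written as below; the exact formulation and weighting used by the DDA method are not given in the abstract, so treat this as an assumption-laden sketch.

```python
# Sketch of the entropy-minimization term for domain alignment: predictions
# on unlabeled target-domain images are pushed toward confident (low-entropy)
# outputs. The lambda weighting is an illustrative assumption.
import torch
import torch.nn.functional as F

def entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy H(p) = -sum_c p_c log p_c over the batch."""
    p = F.softmax(logits, dim=1)
    log_p = F.log_softmax(logits, dim=1)
    return -(p * log_p).sum(dim=1).mean()

# Schematic combined objective:
#   total = F.cross_entropy(source_logits, source_labels) \
#           + lambda_ent * entropy_loss(target_logits)
```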

3.
Optical coherence tomography (OCT) is widely used for biomedical imaging and clinical diagnosis. However, speckle noise is a key factor limiting OCT image quality. Here, we developed a custom generative adversarial network (GAN) to denoise OCT images. A speckle-modulating OCT (SM-OCT) system was built to generate low-speckle images to serve as the ground truth. In total, 210,000 SM-OCT images were used for training and validating the neural network model, which we call SM-GAN. The performance of the SM-GAN method was further demonstrated using online benchmark retinal images, 3D OCT images acquired from human fingers, and OCT videos of a beating fruit fly heart. The denoising performance of the SM-GAN model was compared with traditional OCT denoising methods and other state-of-the-art deep-learning-based denoising networks. We conclude that the SM-GAN model presented here can effectively reduce speckle noise in OCT images and videos while maintaining spatial and temporal resolution.
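The SM-GAN architecture itself is not described in the abstract; a generic sketch of a paired denoising-GAN objective (an adversarial term plus L1 fidelity to the low-speckle SM-OCT ground truth) might look like the following. The L1 weight of 100 follows common pix2pix practice and is an assumption, not the paper's setting.

```python
# Generic paired-denoising GAN losses, sketch only: generator G maps a
# speckled OCT frame to a denoised one; discriminator D judges realism.
import torch
import torch.nn as nn

adv = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(D, fake, clean):
    """Fool D + stay close (L1) to the low-speckle ground truth."""
    fake_logits = D(fake)
    return adv(fake_logits, torch.ones_like(fake_logits)) + 100.0 * l1(fake, clean)

def discriminator_loss(D, fake, clean):
    """Classify ground-truth frames as real and generated frames as fake."""
    real_logits = D(clean)
    fake_logits = D(fake.detach())
    return adv(real_logits, torch.ones_like(real_logits)) + \
           adv(fake_logits, torch.zeros_like(fake_logits))
```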

4.
As a powerful diagnostic tool, optical coherence tomography (OCT) has been widely used in various clinical settings. However, OCT images are susceptible to inherent speckle noise that may obscure subtle structural information, owing to the low-coherence interferometric imaging procedure. Many supervised learning-based models have achieved impressive performance in reducing speckle noise when trained on large numbers of noisy-clean paired OCT images, which are rarely available in clinical practice. In this article, we conducted a comparative study of the denoising performance of different deep neural networks under an unsupervised Noise2Noise (N2N) strategy, which trains only on noisy OCT samples. Four representative network architectures, including a U-shaped model, a multi-information-stream model, a straight-information-stream model and a GAN-based model, were investigated on an OCT image dataset acquired from healthy human eyes. The results demonstrated that all four unsupervised N2N models produced denoised OCT images with performance comparable to that of supervised learning models, illustrating the effectiveness of unsupervised N2N models in denoising OCT images. Furthermore, U-shaped models and GAN-based models using a UNet as the generator are the two preferred architectures for reducing speckle noise in OCT images while preserving fine structural information of retinal layers under unsupervised N2N conditions.
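The defining feature of the N2N strategy is that both the network input and the training target are independent noisy acquisitions of the same scene; a minimal training-step sketch (with the network architecture left abstract) is shown below under that assumption.

```python
# Noise2Noise sketch: train a denoiser on pairs of independently speckled
# OCT frames of the same location -- no clean target is required, because
# the expected minimizer of the MSE is the underlying clean image.
import torch
import torch.nn.functional as F

def n2n_step(model, optimizer, noisy_a: torch.Tensor,
             noisy_b: torch.Tensor) -> float:
    """One N2N step on paired (B, 1, H, W) noisy frames of the same scene."""
    optimizer.zero_grad()
    loss = F.mse_loss(model(noisy_a), noisy_b)
    loss.backward()
    optimizer.step()
    return loss.item()
```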

5.
  1. A time-consuming challenge faced by camera trap practitioners is the extraction of meaningful data from images to inform ecological management. An increasingly popular solution is automated image classification software. However, most solutions are not sufficiently robust to be deployed on a large scale, owing to a lack of location invariance when models are transferred between sites. This prevents optimal use of ecological data and results in significant expenditure of time and resources to annotate and retrain deep learning models.
  2. We present a method ecologists can use to develop optimized, location-invariant camera trap object detectors by (a) evaluating publicly available image datasets characterized by high intradataset variability when training deep learning models for camera trap object detection and (b) using small subsets of camera trap images to optimize models for high-accuracy, domain-specific applications.
  3. We collected and annotated three datasets of images of striped hyena, rhinoceros, and pigs, from the image‐sharing websites FlickR and iNaturalist (FiN), to train three object detection models. We compared the performance of these models to that of three models trained on the Wildlife Conservation Society and Camera CATalogue datasets, when tested on out‐of‐sample Snapshot Serengeti datasets. We then increased FiN model robustness by infusing small subsets of camera trap images into training.
  4. In all experiments, the mean Average Precision (mAP) of the FiN trained models was significantly higher (82.33%–88.59%) than that achieved by the models trained only on camera trap datasets (38.5%–66.74%). Infusion further improved mAP by 1.78%–32.08%.
  5. Ecologists can use FiN images to train deep learning object detection solutions for camera trap image processing, yielding location-invariant, robust, out-of-the-box software. Models can be further optimized by infusing 5%–10% camera trap images into the training data, as sketched below. This would allow AI technologies to be deployed on a large scale in ecological applications. Datasets and code related to this study are open source and available at this repository: https://doi.org/10.5061/dryad.1c59zw3tx.
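As a concrete illustration of the infusion step, the following sketch mixes a small random camera-trap subset into the FiN training list; the file-list handling, function name, and default fraction are illustrative assumptions.

```python
# Sketch of "infusion": augment a FlickR/iNaturalist (FiN) training set with
# a small (5-10%) random subset of camera trap images before training.
import random

def infuse(fin_images: list, camera_trap_images: list,
           fraction: float = 0.10, seed: int = 0) -> list:
    """Return the FiN list plus a random `fraction` of camera-trap images."""
    rng = random.Random(seed)
    k = int(len(camera_trap_images) * fraction)
    return fin_images + rng.sample(camera_trap_images, k)

# train_list = infuse(fin_list, trap_list, fraction=0.05)  # 5% infusion
```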

6.
Glycosylation is among the most abundant and important post-translational modifications of proteins. Glycosylated proteins (glycoproteins) are involved in various cellular biological functions such as protein folding, cell-cell interactions, cell recognition and host-pathogen interactions. A large number of eukaryotic glycoproteins also have therapeutic and potential biotechnological applications. Therefore, characterization and analysis of glycosites (glycosylated residues) in these proteins is of great interest to biologists. To cater to these needs, a number of in silico tools have been developed over the years; however, the need for better prediction tools remains. Therefore, in this study we have developed a new web server, GlycoEP, for more accurate prediction of N-linked, O-linked and C-linked glycosites in eukaryotic glycoproteins using two larger datasets, namely the standard and advanced datasets. In the standard datasets, no two glycosylated proteins share more than 40% similarity; the advanced datasets are highly non-redundant, with no two glycosite patterns (as defined in the methods) sharing more than 60% similarity. Further, based on our results with several algorithms developed using different machine-learning techniques, we found the Support Vector Machine (SVM) to be the optimal technique for developing glycosite prediction models. Accordingly, using our more stringent and non-redundant advanced datasets, the SVM-based models developed in this study achieved prediction accuracies of 84.26%, 86.87% and 91.43%, with corresponding MCCs of 0.54, 0.20 and 0.78, for N-, O- and C-linked glycosites, respectively. The best performing models trained on the advanced datasets were then implemented as a user-friendly web server, GlycoEP (http://www.imtech.res.in/raghava/glycoep/). Additionally, this server provides prediction models developed on the standard datasets and allows users to scan sequons in input protein sequences.
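The abstract does not specify the feature encoding; a minimal sketch of the general window-based approach (a fixed-length sequence window around each candidate residue, one-hot encoded and fed to an SVM) is given below. The window length, encoding and SVM parameters are assumptions, not GlycoEP's actual settings.

```python
# Sketch of window-based glycosite prediction with an SVM: each candidate
# residue is represented by a one-hot encoding of its sequence neighbourhood.
import numpy as np
from sklearn.svm import SVC

AMINO = "ACDEFGHIKLMNPQRSTVWYX"  # "X" pads windows that run off the sequence

def encode_window(seq: str, pos: int, half: int = 10) -> np.ndarray:
    """One-hot encode the 2*half+1 residues centred on position `pos`."""
    window = "".join(seq[i] if 0 <= i < len(seq) else "X"
                     for i in range(pos - half, pos + half + 1))
    vec = np.zeros((len(window), len(AMINO)))
    for i, aa in enumerate(window):
        vec[i, AMINO.index(aa if aa in AMINO else "X")] = 1.0
    return vec.ravel()

# X: stacked encoded windows; y: 1 = glycosite, 0 = non-glycosite
clf = SVC(kernel="rbf", C=1.0, gamma="scale", probability=True)
# clf.fit(X, y); clf.predict(encode_window(seq, pos).reshape(1, -1))
```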

7.
Optical coherence tomography (OCT) is a high-speed, high-resolution and non-invasive imaging modality that enables capture of the 3D structure of the retina. Fast, automatic analysis of 3D OCT volume data is crucial given the growing amount of patient-specific 3D imaging data. In this work, we have developed an automatic algorithm, OCTRIMA 3D (OCT Retinal IMage Analysis 3D), that segments OCT volume data in the macular region quickly and accurately. The proposed method uses shortest-path-based graph search, which detects the retinal boundaries by finding the shortest path between two end nodes with Dijkstra's algorithm. Additional techniques, such as inter-frame flattening, inter-frame search region refinement, masking and biasing, were introduced to exploit the spatial dependency between adjacent frames and reduce the processing time. Our segmentation algorithm was evaluated against manual labelings and three state-of-the-art graph-based segmentation methods. The processing time for a whole OCT volume of 496×644×51 voxels (captured by Spectralis SD-OCT) was 26.15 seconds, at least a 2- to 8-fold speedup over the similar reference algorithms used in the comparisons. The average unsigned error was about 1 pixel (∼4 microns), also lower than that of the reference algorithms. We believe that OCTRIMA 3D is a leap forward towards reliable, real-time analysis of 3D OCT retinal data.
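To make the graph-search idea concrete, here is a self-contained sketch of single-boundary detection on one B-scan: every pixel is a node, edges connect a pixel to its three right-hand neighbours, and edge weights are small where the vertical intensity gradient is large, so the Dijkstra shortest path from the left to the right image border traces the boundary. The weighting follows the common Chiu-style scheme and is an assumption; it is not OCTRIMA 3D's exact implementation, and the inter-frame techniques are omitted.

```python
# Sketch: shortest-path retinal boundary detection on a single B-scan.
import heapq
import numpy as np

def boundary_shortest_path(img: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Return the detected boundary row index for every column of `img`."""
    grad = np.gradient(img.astype(float), axis=0)        # vertical gradient
    g = (grad - grad.min()) / (np.ptp(grad) + eps)       # normalise to [0, 1]
    rows, cols = img.shape
    dist = np.full((rows, cols), np.inf)
    prev = np.full((rows, cols), -1, dtype=int)
    dist[:, 0] = 0.0                                     # free entry, column 0
    pq = [(0.0, r, 0) for r in range(rows)]
    heapq.heapify(pq)
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r, c] or c == cols - 1:              # stale entry / done
            continue
        for dr in (-1, 0, 1):                            # step to next column
            nr = r + dr
            if 0 <= nr < rows:
                w = 2.0 - (g[r, c] + g[nr, c + 1]) + eps # cheap on high grad
                if d + w < dist[nr, c + 1]:
                    dist[nr, c + 1] = d + w
                    prev[nr, c + 1] = r
                    heapq.heappush(pq, (d + w, nr, c + 1))
    boundary = np.empty(cols, dtype=int)
    boundary[-1] = int(np.argmin(dist[:, -1]))           # cheapest exit node
    for c in range(cols - 1, 0, -1):                     # backtrack the path
        boundary[c - 1] = prev[boundary[c], c]
    return boundary
```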

8.
The standard medical practice for cancer diagnosis requires histopathology, which is an invasive and time-consuming procedure. Optical coherence tomography (OCT) is an alternative that is relatively fast, noninvasive, and able to capture three-dimensional structures of epithelial tissue. Unlike most previous OCT systems, which cannot capture the cellular-level information crucial for squamous cell carcinoma (SCC) diagnosis, the full-field OCT (FF-OCT) technology used in this paper produces images at sub-micron resolution and thereby facilitates the development of a deep learning algorithm for SCC detection. Experimental results show that the SCC detection algorithm can achieve a classification accuracy of 80% on mouse skin. Combined with the sub-micron FF-OCT imaging system, the proposed SCC detection algorithm has potential for in vivo applications.

9.
Intravascular optical coherence tomography (IV-OCT) is a light-based imaging modality with high resolution that employs near-infrared light to provide tomographic intracoronary images. Coronary heart disease is a substantial cause of acute coronary syndrome and sudden cardiac death. The most common intracoronary complications caused by coronary artery disease are intimal hyperplasia, calcification, fibrosis, neovascularization and macrophage accumulation, which require efficient prevention strategies. OCT can provide discriminative information about intracoronary tissues, which can be used to train a robust, fully automatic tissue characterization model based on deep learning. In this study, we aimed to design a diagnostic model of coronary artery lesions. In particular, we trained a random forest on convolutional neural network features to distinguish between normal and diseased arterial wall structure. Then, based on the arterial wall structure, a fully convolutional network was designed to extract the tissue layers in normal cases and, regardless of lesion type, the pathological tissues in pathological cases. The lesion type can then be characterized with high precision using our previous model. The results demonstrate the robustness of the model, with an overall accuracy of approximately 90%.
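The abstract does not detail how the CNN features feed the random forest; a minimal sketch of that hybrid (a pretrained CNN as a frozen feature extractor, a random forest as the classifier) is shown below. The ResNet18 backbone and forest size are assumptions.

```python
# Sketch: random forest trained on frozen CNN features to separate normal
# from diseased arterial wall in IV-OCT frames.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

backbone = models.resnet18(weights="IMAGENET1K_V1")
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
feature_extractor.eval()

@torch.no_grad()
def cnn_features(batch: torch.Tensor):
    """(B, 3, H, W) image batch -> (B, 512) pooled feature array."""
    return feature_extractor(batch).flatten(1).cpu().numpy()

forest = RandomForestClassifier(n_estimators=200, random_state=0)
# forest.fit(cnn_features(train_frames), labels)   # 0 = normal, 1 = diseased
```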

10.
Determining a subset of wavelengths that best discriminates reef benthic habitats and their associated communities is essential for the development of remote sensing techniques to monitor them. This study measured spectral reflectance from 17 species of western Caribbean reef biota, including coral, algae, seagrasses, and sediments, as well as healthy and diseased coral. It sought to extend the spectral library of reef-associated species found in the literature and to test the spectral discrimination of a hierarchy of habitats, community groups, and species. We compared results from hyperspectral reflectance and derivative datasets with those simulated for the three visible multispectral wavebands of the IKONOS sensor. The best discriminating subset of wavelengths was identified by a multivariate stepwise selection procedure (discriminant function analysis). The best discrimination at all levels was obtained using the derivative dataset, based on 6–15 non-contiguous wavebands depending on the level of classification, followed by the hyperspectral reflectance dataset, based on as few as 2–4 non-contiguous wavebands. The IKONOS wavebands performed worst. The best discriminating subsets of wavelengths at the three classification resolutions, particularly at the medium resolution, agreed with those identified by Hochberg and Atkinson (2003) and Hochberg et al. (2003) for reef communities worldwide. At all levels of classification, the reflectance wavebands selected by the analysis were similar to those reported in recent studies carried out elsewhere, confirming their applicability in different biogeographical regions. However, the greater accuracies achieved using the derivative datasets suggest that hyperspectral data are required for the most accurate classification of reef biotic systems.
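As a rough sketch of the derivative-plus-stepwise-selection workflow: differentiate each reflectance spectrum with respect to wavelength, then select a small waveband subset for a linear discriminant classifier. sklearn has no built-in stepwise discriminant function analysis, so sequential forward selection stands in for it here; all names and parameters are illustrative assumptions.

```python
# Sketch: first-derivative spectra + forward band selection + LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector

def first_derivative(reflectance: np.ndarray, wavelengths: np.ndarray):
    """reflectance: (n_samples, n_bands) -> d(reflectance)/d(wavelength)."""
    return np.gradient(reflectance, wavelengths, axis=1)

lda = LinearDiscriminantAnalysis()
selector = SequentialFeatureSelector(lda, n_features_to_select=10,
                                     direction="forward", cv=5)
# X = first_derivative(reflectance, wavelengths)
# selector.fit(X, habitat_labels)
# lda.fit(selector.transform(X), habitat_labels)
```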

11.
Owing to major technological advances, bioacoustics has become a burgeoning field in ecological research worldwide. Autonomous passive acoustic recorders are becoming widely used to monitor aerial insectivorous bats, and automatic classifiers have emerged to aid researchers in the daunting task of analysing the resulting massive acoustic datasets. However, the scarcity of comprehensive reference call libraries still hampers their wider application in highly diverse tropical assemblages. Capitalizing on a unique acoustic dataset of >650,000 bat call sequences collected over a 3-year period in the Brazilian Amazon, the aims of this study were (a) to assess how pre-identified recordings of free-flying and hand-released bats could be used to train an automatic classification algorithm (random forest), and (b) to optimize acoustic analysis protocols by combining automatic classification with visual post-validation, whereby we evaluated the proportion of sound files requiring post-validation at different classification accuracy thresholds. Classifiers were trained at the species or sonotype (group of species with similar calls) level. Random forest models confirmed the reliability of using calls of both free-flying and hand-released bats to train custom-built automatic classifiers. To achieve a general classification accuracy of ~85%, the random forest had to be trained with at least 500 pulses per species/sonotype. For seven of the 20 sonotypes, the most abundant in our dataset, we obtained high classification accuracy (>90%). Adopting a desired accuracy probability threshold of 95% for the random forest classifier, we found that the percentage of sound files requiring manual post-validation could be reduced by up to 75%, a significant saving in workload. Combining automatic classification with manual ID through fully customizable classifiers implemented in open-source software, as demonstrated here, shows great potential to help overcome the acknowledged risks and biases associated with sole reliance on automatic classification.
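The thresholding step described above is straightforward to sketch: sound files whose highest random forest class probability falls below the chosen threshold are routed to a human validator. The feature representation and forest size below are assumptions.

```python
# Sketch: random forest classification with a confidence threshold deciding
# which files are accepted automatically vs. flagged for visual validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=500, random_state=0)
# clf.fit(call_features, sonotype_labels)

def split_by_confidence(X: np.ndarray, threshold: float = 0.95):
    """Return (auto_idx, manual_idx) index arrays: predictions at or above
    `threshold` are accepted; the rest go to manual post-validation."""
    confidence = clf.predict_proba(X).max(axis=1)
    keep = confidence >= threshold
    return np.where(keep)[0], np.where(~keep)[0]
```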

12.
Wearable sensors have potential for quantitative, gait-based, point-of-care fall risk assessment that can be easily and quickly implemented in clinical-care and older-adult living environments. This investigation generated models for wearable-sensor-based fall-risk classification in older adults and identified the optimal sensor type, location, combination, and modelling method for walking with and without a cognitive load task. A convenience sample of 100 older individuals (75.5 ± 6.7 years; 76 non-fallers, 24 fallers based on 6-month retrospective fall occurrence) walked 7.62 m under single-task and dual-task conditions while wearing pressure-sensing insoles and tri-axial accelerometers at the head, pelvis, and left and right shanks. Participants also completed the Activities-specific Balance Confidence scale, the Community Health Activities Model Program for Seniors questionnaire, and the six-minute walk test, and ranked their fear of falling. Fall risk classification models were assessed for all sensor combinations and three model types: multi-layer perceptron neural network, naïve Bayesian, and support vector machine. The best performing model was a multi-layer perceptron neural network with input parameters from the pressure-sensing insoles and the head, pelvis, and left shank accelerometers (accuracy = 84%, F1 score = 0.600, MCC = 0.521). Head-sensor-based models performed best among the single-sensor models for single-task gait assessment. Single-task gait assessment models outperformed models based on dual-task walking or clinical assessment data. Support vector machines and neural networks were the best modelling techniques for fall risk classification. Fall risk classification models developed for point-of-care environments should therefore be built with support vector machines and neural networks, using a multi-sensor single-task gait assessment.
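A minimal sketch of the three model families compared in the study is given below; the hyperparameters are illustrative assumptions, and the inputs would be the gait parameters derived from the insole and accelerometer signals.

```python
# Sketch: comparing the three classifier families used for fall-risk
# classification with cross-validated F1 scores.
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

models = {
    "multi-layer perceptron": MLPClassifier(hidden_layer_sizes=(32,),
                                            max_iter=2000, random_state=0),
    "naive Bayesian": GaussianNB(),
    "support vector machine": SVC(kernel="rbf", class_weight="balanced"),
}
# for name, m in models.items():
#     f1 = cross_val_score(m, gait_features, faller_labels, scoring="f1", cv=5)
#     print(f"{name}: F1 = {f1.mean():.3f}")
```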

13.
Jinbo Xu, Sheng Wang. Proteins, 2019, 87(12): 1069-1081
This paper reports the CASP13 results of the distance-based contact prediction, threading, and folding methods implemented in three RaptorX servers, which are built upon the deep convolutional residual neural network (ResNet) method we initiated for contact prediction in CASP12. On the 32 CASP13 FM (free-modeling) targets with a median multiple sequence alignment (MSA) depth of 36, RaptorX yielded the best contact prediction among 46 groups and nearly the best 3D structure modeling among all server groups, without time-consuming conformation sampling. In particular, RaptorX achieved top L/5, L/2, and L long-range contact precisions of 70%, 58%, and 45%, respectively, and predicted correct folds (TMscore > 0.5) for 18 of 32 targets. Further, RaptorX predicted correct folds for all FM targets with >300 residues (T0950-D1, T0969-D1, and T1000-D2) and generated the best 3D models for T0950-D1 and T0969-D1 among all groups. This CASP13 test confirms our previous findings: (a) predicted distance is more useful than contacts for both template-based and free modeling; and (b) structure modeling may be improved by integrating template and coevolutionary information via deep learning. This paper discusses the progress we have made since CASP12, the strengths and weaknesses of our methods, and why deep learning performed much better in CASP13.
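For readers unfamiliar with the metric quoted above, top-L/k long-range contact precision ranks predicted contact probabilities for residue pairs separated by at least 24 positions, keeps the L/k highest-scoring pairs (L = sequence length), and reports the fraction that are true contacts; a sketch follows, with the separation cutoff and contact definition taken from common CASP convention rather than this paper.

```python
# Sketch: top-L/k long-range contact precision from a predicted contact map.
import numpy as np

def top_lk_precision(pred: np.ndarray, true: np.ndarray,
                     k: int = 5, min_sep: int = 24) -> float:
    """pred: (L, L) contact probabilities; true: (L, L) boolean contact map
    (conventionally C-beta pairs closer than 8 angstroms)."""
    L = pred.shape[0]
    i, j = np.triu_indices(L, k=min_sep)          # long-range pairs only
    top = np.argsort(pred[i, j])[::-1][: max(L // k, 1)]
    return float(true[i[top], j[top]].mean())
```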

14.
Many ecosystems, particularly wetlands, are significantly degraded or lost as a result of climate change and anthropogenic activities. Simultaneously, developments in machine learning, particularly deep learning methods, have greatly improved wetland mapping, which is a critical step in ecosystem monitoring. Yet, present deep and very deep models require large amounts of training data, which are costly, logistically challenging, and time-consuming to acquire. We therefore explore and address the limitations imposed by the scarcity of ground-truth data for large-scale wetland mapping. To overcome this persistent problem in remote sensing data classification with deep learning models, we propose the 3D UNet Generative Adversarial Network Swin Transformer (3DUNetGSFormer), which adaptively synthesizes wetland training data according to each class's data availability. Both real and synthesized training data are then fed to a novel deep learning architecture consisting of cutting-edge convolutional neural networks and vision transformers for wetland mapping. Results demonstrated that the developed wetland classifier obtained a high kappa coefficient, average accuracy, and overall accuracy of 96.99%, 97.13%, and 97.39%, respectively, for data from three pilot sites in and around Grand Falls-Windsor, Avalon, and Gros Morne National Park in Canada. The results show that the proposed methodology opens a new window for future high-quality wetland data generation and classification. The developed code is available at https://github.com/aj1365/3DUNetGSFormer.

15.
Polarimetric data are now used to build recognition models for the characterization of organic tissues or the early detection of some diseases. Different Mueller matrix-derived polarimetric observables, each allowing a physical interpretation of a specific sample characteristic, have been proposed in the literature to feed the required recognition algorithms. However, they are obtained through mathematical transformations of the Mueller matrix, and this process may lose relevant sample information in the pursuit of physical interpretability. In this work, we present a thorough comparison of 12 classification models based on different polarimetric datasets to find the ideal polarimetric framework for constructing tissue classification models. The study is conducted on experimental Mueller matrix images measured on different tissues: muscle, tendon, myotendinous junction and bone, from a collection of 165 ex-vivo chicken thighs. Three polarimetric datasets are analyzed: (A) a selection of the most representative metrics presented in the literature; (B) the Mueller matrix elements; and (C) the combination of (A) and (B). The results highlight the importance of using raw Mueller matrix elements in the design of classification models.
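The dataset comparison lends itself to a compact sketch: per-pixel feature vectors built from (A) the derived metrics, (B) the 16 raw Mueller matrix elements, or (C) both, all fed to the same classifier. The feature assembly and classifier choice below are illustrative assumptions.

```python
# Sketch: comparing polarimetric feature sets A, B, and C with one classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# mueller: (n_pixels, 4, 4) per-pixel Mueller matrices
# derived: (n_pixels, n_metrics) literature-derived polarimetric observables
def build_dataset(mueller: np.ndarray, derived: np.ndarray, which: str):
    raw = mueller.reshape(len(mueller), 16)
    return {"A": derived, "B": raw, "C": np.hstack([derived, raw])}[which]

# for which in "ABC":
#     X = build_dataset(mueller, derived, which)
#     acc = cross_val_score(RandomForestClassifier(), X, tissue_labels, cv=5)
#     print(which, acc.mean())
```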

16.
The purpose of this study was to evaluate early vascular and tomographic changes in the retina of diabetic patients using artificial intelligence (AI). The study included 74 age-matched normal eyes, 171 diabetic eyes without retinopathy (DWR) and 69 eyes with mild non-proliferative diabetic retinopathy (NPDR). All patients underwent optical coherence tomography angiography (OCTA) imaging. Tomographic features (thickness and volume) were derived from the OCTA B-scans, and these features were used in AI models. Both OCT and OCTA features showed significant differences between the groups (P < .05); however, the OCTA features indicated early retinal changes in DWR eyes better than OCT (P < .05). The AI model using both OCT and OCTA features simultaneously obtained the best area under the curve, 0.91 ± 0.02 (P < .05). Thus, the combined use of AI, OCT and OCTA significantly improved the early diagnosis of diabetic changes in the retina.

17.
Machine learning algorithms, including recent advances in deep learning, are promising tools for the detection and classification of broadband high-frequency signals in passive acoustic recordings. However, these methods are generally data-hungry, and progress has been limited by the lack of labeled datasets adequate for training and testing. Large quantities of known and as-yet-unidentified broadband signal types mingle in marine recordings, with variability introduced by acoustic propagation, source depths and orientations, and interacting signals. Manual classification of these datasets is unmanageable without in-depth knowledge of the acoustic context of each recording location. A signal classification pipeline is presented that combines unsupervised and supervised learning phases with opportunities for expert oversight to label signals of interest. The method is illustrated with a case study using unsupervised clustering to identify five toothed whale echolocation click types and two anthropogenic signal categories. These categories are used to train a deep network to classify detected signals, either in averaged time bins or as individual detections, in two independent datasets. Bin-level classification achieved higher overall precision (>99%) than click-level classification. However, click-level classification had the advantage of providing a label for every signal, and achieved higher overall recall, with overall precision of 92–94%. The results suggest that unsupervised learning is a viable solution for efficiently generating the large, representative training sets needed for applications of deep learning in passive acoustics.
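A compact sketch of the two-phase pipeline: unsupervised clustering proposes candidate signal categories, an expert reviews and labels the clusters, and the approved labels then train a supervised network. KMeans stands in for the clustering step here, and the feature representation is an assumption; the study's actual clustering method is not specified in the abstract.

```python
# Sketch: phase 1 of the pipeline -- cluster detected signals into candidate
# categories for expert review before supervised training.
import numpy as np
from sklearn.cluster import KMeans

def propose_categories(detection_features: np.ndarray,
                       n_clusters: int = 7) -> np.ndarray:
    """Cluster per-detection features (e.g. spectral shape, inter-click
    interval) into candidate categories. Seven matches the case study's
    five click types plus two anthropogenic categories."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(detection_features)

# cluster_ids = propose_categories(features)   # expert relabels/merges these,
# then the approved labels train the supervised deep classifier (phase 2).
```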

18.
19.
20.
The identification of virulent proteins in any de novo sequenced genome is useful for estimating its pathogenic potential and understanding the mechanism of pathogenesis. Similarly, the identification of such proteins can be valuable for comparing the metagenomes of healthy and diseased individuals and estimating the proportion of pathogenic species. The common challenge in both tasks is identifying virulent proteins, since a significant proportion of genomic and metagenomic proteins are novel and as yet unannotated. The currently available tools for identifying virulent proteins provide limited accuracy and cannot be used on large datasets. We have therefore developed MP3, a standalone tool and web server for predicting pathogenic proteins in both genomic and metagenomic datasets. MP3 uses an integrated Support Vector Machine (SVM) and Hidden Markov Model (HMM) approach to carry out fast, sensitive and accurate prediction of pathogenic proteins. It displayed sensitivity, specificity, MCC and accuracy values of 92%, 100%, 0.92 and 96%, respectively, on a blind dataset constructed from complete proteins. On the two metagenomic blind datasets (Blind A: 51–100 amino acids; Blind B: 30–50 amino acids), it displayed sensitivity, specificity, MCC and accuracy values of 82.39%, 97.86%, 0.80 and 89.32% for Blind A, and 71.60%, 94.48%, 0.67 and 81.86% for Blind B, respectively. In addition, the performance of MP3 was validated on selected bacterial genomic and real metagenomic datasets. To our knowledge, MP3 is the only program that specializes in the fast and accurate identification of partial pathogenic proteins predicted from short (100–150 bp) metagenomic reads while also performing exceptionally well on complete protein sequences. MP3 is publicly available at http://metagenomics.iiserb.ac.in/mp3/index.php.
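For reference, the metrics quoted above follow directly from the binary confusion counts (with pathogenic as the positive class); a small sketch:

```python
# Sketch: sensitivity, specificity, accuracy and MCC from confusion counts.
import math

def binary_metrics(tp: int, tn: int, fp: int, fn: int):
    sensitivity = tp / (tp + fn)                  # recall on pathogenic class
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sensitivity, specificity, accuracy, mcc
```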
