Similar articles: 20 results found.
1.
  1. Insect populations are changing rapidly, and monitoring these changes is essential for understanding the causes and consequences of such shifts. However, large‐scale insect identification projects are time‐consuming and expensive when done solely by human identifiers. Machine learning offers a possible solution to help collect insect data quickly and efficiently.
  2. Here, we outline a methodology for training classification models to identify pitfall trap‐collected insects from image data and then apply the method to identify ground beetles (Carabidae). All beetles were collected by the National Ecological Observatory Network (NEON), a continental‐scale ecological monitoring project with sites across the United States. We describe the procedures for image collection, image data extraction, data preparation, and model training, and compare the performance of five machine learning algorithms and two classification methods (hierarchical vs. single‐level) in identifying ground beetles at taxonomic levels from species to subfamily. All models were trained using pre‐extracted feature vectors, not raw image data. Our methodology allows for data to be extracted from multiple individuals within the same image, thus enhancing time efficiency, utilizes relatively simple models that allow for direct assessment of model performance, and can be performed on relatively small datasets.
  3. The best‐performing algorithm, linear discriminant analysis (LDA), reached an accuracy of 84.6% at the species level when naively identifying species, which increased to >95% when classifications were limited to known local species pools (see the sketch following this abstract). Model performance was negatively correlated with taxonomic specificity, with the LDA model reaching an accuracy of ~99% at the subfamily level. When classifying carabid species not included in the training dataset at higher taxonomic levels, the models performed significantly better than random classification. We also observed greater performance with the hierarchical classification method than with the single‐level classification method at higher taxonomic levels.
  4. The general methodology outlined here serves as a proof‐of‐concept for classifying pitfall trap‐collected organisms using machine learning algorithms, and the image data extraction methodology may also be used for non‐machine‐learning applications. We propose that integrating machine learning into large‐scale identification pipelines will increase efficiency and lead to a greater flow of insect macroecological data, with the potential to be expanded for use with other noninsect taxa.
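A minimal sketch of the species‐level classification step described above, assuming pre‐extracted feature vectors rather than raw images. The feature matrix, species names and the predict_with_species_pool helper are illustrative stand‐ins, not taken from the NEON pipeline.

```python
# Minimal sketch: species-level classification from pre-extracted feature vectors
# with an optional restriction to a known local species pool. Data are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 32))                      # placeholder feature vectors
y = rng.choice(["carabus_a", "carabus_b", "pterostichus_c", "amara_d"], size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)

def predict_with_species_pool(model, X, allowed_species):
    """Zero out posterior probabilities of species absent from the local pool."""
    proba = model.predict_proba(X)
    mask = np.isin(model.classes_, list(allowed_species)).astype(float)
    restricted = proba * mask
    return model.classes_[np.argmax(restricted, axis=1)]

naive_acc = lda.score(X_te, y_te)                   # naive species-level accuracy
pooled_pred = predict_with_species_pool(lda, X_te, {"carabus_a", "pterostichus_c"})
```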

2.
Deep learning based retinopathy classification with optical coherence tomography (OCT) images has recently attracted great attention. However, existing deep learning methods fail to work well when training and testing datasets differ, owing to the general problem of domain shift between datasets caused by different collection devices, subjects, imaging parameters, etc. To address this practical and challenging issue, we propose a novel deep domain adaptation (DDA) method that trains a model on a labeled dataset and adapts it to an unlabelled dataset collected under different conditions. It consists of two modules for domain alignment: adversarial learning and entropy minimization. We conduct extensive experiments on three public datasets to evaluate the performance of the proposed method. The results indicate that there are large domain shifts between datasets, resulting in poor performance for conventional deep learning methods. The proposed DDA method significantly outperforms existing methods for retinopathy classification with OCT images, achieving classification accuracies of 0.915, 0.959 and 0.990 under three cross-domain (cross-dataset) scenarios. Moreover, it obtains performance comparable to that of human experts on a dataset none of whose labels were used to train the DDA method. We have also visualized the learnt features using the t-distributed stochastic neighbor embedding (t-SNE) technique; the results demonstrate that the proposed method learns discriminative features for retinopathy classification.
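The abstract names two domain-alignment modules: adversarial learning and entropy minimization. The sketch below shows one common way to implement both in PyTorch (gradient-reversal adversarial training plus an entropy penalty on target-domain predictions); the network sizes, loss weights and the train_step function are assumptions, not the paper's DDA architecture.

```python
# Sketch of the two domain-alignment ideas (adversarial learning + entropy
# minimization) for adapting an OCT classifier from a labeled source dataset to an
# unlabeled target dataset. Architectures and loss weights are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

feature_net = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten())
classifier = nn.Linear(16, 4)            # e.g., 4 retinopathy classes
discriminator = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(list(feature_net.parameters()) +
                       list(classifier.parameters()) +
                       list(discriminator.parameters()), lr=1e-4)

def train_step(x_src, y_src, x_tgt, lambd=1.0, ent_weight=0.1):
    f_src, f_tgt = feature_net(x_src), feature_net(x_tgt)
    # 1) supervised loss on the labeled source domain
    cls_loss = F.cross_entropy(classifier(f_src), y_src)
    # 2) adversarial domain alignment via a gradient-reversal layer
    dom_logits = discriminator(GradReverse.apply(torch.cat([f_src, f_tgt]), lambd))
    dom_labels = torch.cat([torch.zeros(len(f_src)), torch.ones(len(f_tgt))]).long()
    adv_loss = F.cross_entropy(dom_logits, dom_labels)
    # 3) entropy minimization: push target predictions toward confident classes
    p_tgt = F.softmax(classifier(f_tgt), dim=1)
    ent_loss = -(p_tgt * torch.log(p_tgt + 1e-8)).sum(dim=1).mean()
    loss = cls_loss + adv_loss + ent_weight * ent_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```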

3.
White blood cell (WBC) detection plays a vital role in peripheral blood smear analysis. However, cell detection remains a challenging task due to multi-cell adhesion and differing staining and imaging conditions. Owing to the powerful feature extraction capability of deep learning, object detection methods based on convolutional neural networks (CNNs) have been widely applied in medical image analysis. Nevertheless, CNN training is time-consuming and can be inaccurate, especially for large-scale blood smear images in which most of the image is background. To address this problem, we propose a two-stage approach that treats WBC detection as a small salient object detection task. In the first, saliency detection stage, we use Itti's visual attention model to locate the regions of interest (ROIs), based on the proposed adaptive center-surround difference (ACSD) operator. In the second, WBC detection stage, a modified CenterNet model is applied to the ROI sub-images to obtain a more accurate localization and classification result for each WBC. Experimental results show that our method exceeds the performance of several existing methods on two different data sets and achieves a state-of-the-art mAP of over 98.8%.
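The ACSD operator itself is not specified in the abstract; the sketch below only illustrates the general center-surround saliency idea with Gaussian blurs at two scales and a simple adaptive threshold to propose WBC candidate regions. Parameter values and the center_surround_rois helper are illustrative, and the CenterNet refinement stage is not reproduced.

```python
# Rough sketch of the first (saliency) stage: a center-surround difference map
# built from Gaussian blurs at two scales, thresholded to propose WBC candidate
# ROIs. The paper's ACSD operator and the CenterNet stage are not reproduced here.
import numpy as np
from scipy import ndimage

def center_surround_rois(gray, sigma_center=2.0, sigma_surround=16.0, min_area=400):
    """Return bounding boxes (y0, x0, y1, x1) of salient blobs in a grayscale image."""
    gray = gray.astype(np.float32)
    center = ndimage.gaussian_filter(gray, sigma_center)
    surround = ndimage.gaussian_filter(gray, sigma_surround)
    saliency = np.abs(center - surround)
    # adaptive threshold: mean + k * std of the saliency map (illustrative choice)
    mask = saliency > saliency.mean() + 2.0 * saliency.std()
    labels, _ = ndimage.label(mask)
    boxes = []
    for sl in ndimage.find_objects(labels):
        if sl is None:
            continue
        h, w = sl[0].stop - sl[0].start, sl[1].stop - sl[1].start
        if h * w >= min_area:
            boxes.append((sl[0].start, sl[1].start, sl[0].stop, sl[1].stop))
    return boxes
```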

4.
The importance of T cells in immunotherapy has motivated developing technologies to improve therapeutic efficacy. One objective is assessing antigen‐induced T cell activation because only functionally active T cells are capable of killing the desired targets. Autofluorescence imaging can distinguish T cell activity states in a non‐destructive manner by detecting endogenous changes in metabolic co‐enzymes such as NAD(P)H. However, recognizing robust activity patterns is computationally challenging in the absence of exogenous labels. We demonstrate machine learning methods that can accurately classify T cell activity across human donors from NAD(P)H intensity images. Using 8260 cropped single‐cell images from six donors, we evaluate classifiers ranging from traditional models that use previously extracted image features to convolutional neural networks (CNNs) pre‐trained on general non‐biological images. Adapting pre‐trained CNNs for the T cell activity classification task provides substantially better performance than traditional models or a simple CNN trained with the autofluorescence images alone. Visualizing the images with dimension reduction provides intuition into why the CNNs achieve higher accuracy than other approaches. Our image processing and classifier training software is available at https://github.com/gitter-lab/t-cell-classification.
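A minimal sketch of adapting an ImageNet-pre-trained CNN to single-channel NAD(P)H crops for a two-class (activated vs. quiescent) task. The ResNet-18 backbone, the layer-freezing choice and the hyperparameters are assumptions, not necessarily the configuration used in the paper or its repository.

```python
# Minimal sketch of transfer learning for the binary activated/quiescent T-cell
# task from single-channel NAD(P)H intensity crops. Backbone choice is illustrative.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
# accept 1-channel autofluorescence images instead of RGB
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
# replace the 1000-class head with a 2-class head (activated vs. quiescent)
model.fc = nn.Linear(model.fc.in_features, 2)

# optionally freeze early layers and fine-tune only the later blocks and the head
for name, p in model.named_parameters():
    p.requires_grad = name.startswith(("conv1", "layer3", "layer4", "fc"))

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()),
                             lr=1e-4)
criterion = nn.CrossEntropyLoss()
```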

5.
  1. A time‐consuming challenge faced by camera trap practitioners is the extraction of meaningful data from images to inform ecological management. An increasingly popular solution is automated image classification software. However, most solutions are not sufficiently robust to be deployed on a large scale due to a lack of location invariance when transferring models between sites. This prevents optimal use of ecological data, resulting in significant expenditure of time and resources to annotate and retrain deep learning models.
  2. We present a method ecologists can use to develop optimized location invariant camera trap object detectors by (a) evaluating publicly available image datasets characterized by high intradataset variability in training deep learning models for camera trap object detection and (b) using small subsets of camera trap images to optimize models for high accuracy domain‐specific applications.
  3. We collected and annotated three datasets of images of striped hyena, rhinoceros, and pigs, from the image‐sharing websites FlickR and iNaturalist (FiN), to train three object detection models. We compared the performance of these models to that of three models trained on the Wildlife Conservation Society and Camera CATalogue datasets, when tested on out‐of‐sample Snapshot Serengeti datasets. We then increased FiN model robustness by infusing small subsets of camera trap images into training.
  4. In all experiments, the mean Average Precision (mAP) of the FiN trained models was significantly higher (82.33%–88.59%) than that achieved by the models trained only on camera trap datasets (38.5%–66.74%). Infusion further improved mAP by 1.78%–32.08%.
  5. Ecologists can use FiN images to train deep learning object detection solutions for camera trap image processing and develop location‐invariant, robust, out‐of‐the‐box software. Models can be further optimized by infusing 5%–10% camera trap images into the training data (a minimal data‐mixing sketch follows this abstract). This would allow AI technologies to be deployed on a large scale in ecological applications. Datasets and code related to this study are open source and available at this repository: https://doi.org/10.5061/dryad.1c59zw3tx.
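A small illustrative sketch of the infusion step, assuming training items are (image, annotation) path pairs; the infuse helper and the hypothetical load_items call are stand-ins, not part of the released code or datasets.

```python
# Illustrative sketch of the "infusion" step: augment a FlickR/iNaturalist (FiN)
# training list with a small fraction (here ~5-10%) of annotated camera-trap
# images before training an object detector. File layout and names are assumed.
import random

def infuse(fin_items, camera_trap_items, fraction=0.07, seed=0):
    """Return a shuffled training list of FiN items plus a small camera-trap subset.

    Each item is assumed to be an (image_path, annotation_path) tuple.
    """
    random.seed(seed)
    n_infused = max(1, int(round(fraction * len(fin_items))))
    subset = random.sample(camera_trap_items, min(n_infused, len(camera_trap_items)))
    mixed = list(fin_items) + subset
    random.shuffle(mixed)
    return mixed

# usage (hypothetical loader and file names):
# train_items = infuse(load_items("fin_hyena.json"), load_items("ct_hyena.json"))
```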

6.
This work proposes a new online monitoring method to assist during laser osteotomy. The method allows differentiation of the type of ablated tissue and the applied dose of laser energy. The setup analyzes the laser-induced acoustic emission detected by an airborne microphone sensor. The analysis of the acoustic signals is carried out using a machine learning algorithm that is pre-trained in a supervised manner. The efficiency of the method is experimentally evaluated with several types of tissue: skin, fat, muscle, and bone. Several cutting-edge machine learning frameworks are tested for comparison, with resulting classification accuracies in the range of 84–99%. It is shown that the datasets for training the machine learning algorithms are easy to collect in real-life conditions. In the future, this method could assist doctors during laser osteotomy, minimizing damage to nearby healthy tissue and providing cleaner removal of pathologic tissue.
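A minimal sketch of the supervised pipeline the abstract implies: spectral features computed from each laser-induced acoustic pulse and a standard classifier evaluated by cross-validation. The band-energy features, the Random Forest choice and the synthetic placeholder data are assumptions; the paper compares several frameworks rather than a single model.

```python
# Sketch: classify tissue type from the spectrum of laser-induced acoustic pulses.
# Feature choice, classifier and placeholder data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def band_energy_features(signal, n_bands=32):
    """Log energy in equally spaced frequency bands of one acoustic pulse."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal)))) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log10(np.array([b.sum() for b in bands]) + 1e-12)

# hypothetical dataset: one windowed waveform per laser pulse, labeled by tissue
rng = np.random.default_rng(0)
waveforms = [rng.normal(size=4096) for _ in range(200)]          # placeholders
labels = rng.choice(["skin", "fat", "muscle", "bone"], size=200)

X = np.stack([band_energy_features(w) for w in waveforms])
scores = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                         X, labels, cv=5)
```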

7.
Confocal Raman microscopy is a useful tool for observing the composition and constitution of label-free samples at high spatial resolution. However, accurate characterization of tissue microstructure and its application in diagnostic imaging are challenging due to the weak Raman scattering signal and the complex chemical composition of tissue. We have developed a method to improve the imaging speed, diffraction efficiency, and spectral resolution of confocal Raman microscopy. In addition to the novel imaging technique, a machine learning method enables confocal Raman microscopy to visualize accurate histology of tissue sections. Here, we demonstrate the performance of the proposed method by performing histological classification of atherosclerotic arteries and comparing the histological confocal Raman images with those obtained by conventional staining. Our new confocal Raman microscopy allows us to comprehend the structure and biochemical composition of tissue and to diagnose the buildup of atherosclerotic plaques in the arterial wall without labeling.

8.
Development of label‐free methods for accurate, high‐throughput classification of cells can yield powerful tools for biological research and clinical applications. We have developed a deep neural network, DINet, for extracting features from cross‐polarized diffraction image (p‐DI) pairs on multiple pixel scales to accurately classify cells into five types. A total of 6185 cells were measured by a polarization diffraction imaging flow cytometry (p‐DIFC) method, followed by cell classification with DINet on the p‐DI data. The mean ± SD of classification accuracy was 98.9% ± 1.00% on the test data sets over 5‐fold training and testing. The invariance of DINet to image translation, rotation, and blurring has been verified with an expanded p‐DI data set. To study feature‐based classification by DINet, sets of correctly and incorrectly classified cells were selected and compared for each of two prostate cell types. The signature features showing large dissimilarities between the p‐DI data of correctly and incorrectly classified cell sets increase markedly from convolutional layers 1 and 2 to layers 3 and 4. These results clearly demonstrate the importance of high‐order correlations extracted at the deep layers for accurate cell classification.

9.
This review covers original articles applying deep learning in the biophotonics field published in recent years. In that time deep learning, a subset of machine learning based mostly on artificial neural network architectures, has been applied to a number of biophotonic tasks and has achieved state‐of‐the‐art performance. Deep learning in the biophotonics field is therefore growing rapidly, and in the coming years it will be used to build real‐time biophotonic decision‐making systems and to analyze biophotonic data in general. In this contribution, we discuss the possibilities of deep learning in the biophotonics field, including image classification, segmentation, registration, pseudostaining and resolution enhancement. Additionally, we discuss its potential use for spectroscopic data, including spectral data preprocessing and spectral classification. We conclude this review by addressing the potential applications and challenges of using deep learning for biophotonic data.

10.
Polarimetric data are now used to build recognition models for the characterization of organic tissues or the early detection of some diseases. Different Mueller matrix-derived polarimetric observables, which allow a physical interpretation of specific sample characteristics, have been proposed in the literature to feed the required recognition algorithms. However, they are obtained through mathematical transformations of the Mueller matrix, and this process may lose relevant sample information in the search for physical interpretability. In this work, we present a thorough comparison of 12 classification models based on different polarimetric datasets, to identify the most suitable polarimetric framework for constructing tissue classification models. The study is conducted on experimental Mueller matrix images measured on different tissues (muscle, tendon, myotendinous junction and bone) from a collection of 165 ex-vivo chicken thighs. Three polarimetric datasets are analyzed: (A) a selection of the most representative metrics presented in the literature; (B) the Mueller matrix elements; and (C) the combination of (A) and (B). The results highlight the importance of using raw Mueller matrix elements in the design of classification models.
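A sketch of the comparison the abstract describes: the same classifier trained on (A) literature-derived polarimetric metrics, (B) raw Mueller matrix elements, and (C) their combination, compared by cross-validated accuracy. The SVM pipeline, feature dimensions and placeholder data are illustrative, not the 12 models evaluated in the paper.

```python
# Compare one classifier across three polarimetric feature sets (A, B, C = A+B).
# Data arrays are placeholders standing in for per-ROI polarimetric measurements.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 500
X_metrics = rng.normal(size=(n, 10))     # (A) literature-derived observables
X_mueller = rng.normal(size=(n, 16))     # (B) 16 Mueller matrix elements per ROI
y = rng.choice(["muscle", "tendon", "junction", "bone"], size=n)

feature_sets = {
    "A: derived metrics": X_metrics,
    "B: Mueller elements": X_mueller,
    "C: combined": np.hstack([X_metrics, X_mueller]),
}
for name, X in feature_sets.items():
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```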

11.
Optical coherence tomography (OCT) imaging shows significant potential in clinical routine due to its noninvasive nature. However, the quality of OCT images is generally limited by the inherent speckle noise of OCT imaging and by low sampling rates. To obtain high signal-to-noise ratio (SNR) and high-resolution (HR) OCT images within a short scanning time, we present a learning-based method to recover high-quality OCT images from noisy, low-resolution inputs. We propose a semisupervised learning approach named N2NSR-OCT to generate denoised and super-resolved OCT images simultaneously using up- and down-sampling networks (U-Net (Semi) and DBPN (Semi)). Additionally, two super-resolution and denoising models with different upscale factors (2× and 4×) were trained to recover high-quality OCT images at the corresponding down-sampling rates. The new semisupervised learning approach achieves results comparable with those of supervised learning using up- and down-sampling networks, and performs better than other related state-of-the-art methods in preserving subtle retinal structures.

12.
Malignant tumors have high metabolic and perfusion rates, which result in a temperature distribution distinct from that of healthy tissue. Here, we sought to characterize the thermal response of the cervix following brachytherapy in women with advanced cervical carcinoma. Six patients underwent imaging with a thermal camera before a brachytherapy treatment session and after a 7-day follow-up period. A designated algorithm was used to calculate and store the texture parameters of the examined tissues across all time points. We used supervised machine learning classification methods (K-nearest neighbors and support vector machine) and unsupervised classification (K-means). Our algorithms demonstrated a 100% detection rate for physiological changes in cervical tumors before and after brachytherapy. Thus, we showed that thermal imaging combined with advanced feature extraction could potentially be used to detect tissue-specific changes in the cervix in response to local brachytherapy for cervical cancer.
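A brief sketch contrasting the supervised (K-NN, SVM) and unsupervised (K-means) analyses on pre-computed thermal texture features. The feature matrix and the use of the adjusted Rand index to compare the K-means partition with the labels are illustrative assumptions.

```python
# Supervised vs. unsupervised classification of thermal texture features.
# Feature values and labels are placeholders for the computed texture parameters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 20))                     # texture features per ROI
y = rng.choice(["pre_treatment", "post_treatment"], size=120)

knn_acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5).mean()
svm_acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
agreement = adjusted_rand_score(y, clusters)       # label-free partition vs. labels
```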

13.
Phenotypic profiling of large three-dimensional microscopy data sets has not been widely adopted due to the challenges posed by cell segmentation and feature selection. The computational demands of automated processing further limit analysis of hard-to-segment images such as those of neurons and organoids. Here we describe a comprehensive shallow-learning framework for automated quantitative phenotyping of three-dimensional (3D) image data using unsupervised, data-driven, voxel-based feature learning, which enables computationally facile classification, clustering and advanced data visualization. We demonstrate its analysis potential on complex 3D images by investigating the phenotypic alterations of neurons in response to apoptosis-inducing treatments and the morphogenesis of oncogene-expressing human mammary gland acinar organoids. Our implementation of these image analysis algorithms, called Phindr3D, allowed rapid integration of data-driven voxel-based feature learning into 3D high content analysis (HCA) operations and constitutes a major practical advance, as the computed assignments represent the biology while preserving the heterogeneity of the underlying data. Phindr3D is provided as Matlab code and as a stand-alone program (https://github.com/DWALab/Phindr3D).
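A condensed sketch of the data-driven voxel-feature idea: describe each 3D image by a histogram of cluster assignments of its local voxel blocks, then use those profiles for downstream clustering or classification. The block size, cluster count and the voxel_blocks/image_profile helpers are illustrative; the actual Phindr3D implementation is in the linked repository.

```python
# Image-level phenotype profiles from clustered voxel blocks (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

def voxel_blocks(volume, block=(4, 4, 4)):
    """Split a 3D volume into flattened, non-overlapping blocks."""
    z, y, x = (s - s % b for s, b in zip(volume.shape, block))
    v = volume[:z, :y, :x]
    v = v.reshape(z // block[0], block[0], y // block[1], block[1],
                  x // block[2], block[2])
    return v.transpose(0, 2, 4, 1, 3, 5).reshape(-1, np.prod(block))

rng = np.random.default_rng(3)
volumes = [rng.random((32, 64, 64)) for _ in range(20)]      # placeholder 3D images

all_blocks = np.vstack([voxel_blocks(v) for v in volumes])
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(all_blocks)

def image_profile(volume):
    """Normalized histogram of voxel-block cluster labels = image phenotype profile."""
    labels = km.predict(voxel_blocks(volume))
    return np.bincount(labels, minlength=km.n_clusters) / labels.size

profiles = np.stack([image_profile(v) for v in volumes])
```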

14.
We present a study of the buzzing sounds of several common bumblebee species, with a focus on automatic classification of bumblebee species and types. Such classification is useful for bumblebee monitoring, which is important for evaluating the quality of their living environment and for protecting the biodiversity of these important pollinators. We analysed the natural buzzing frequencies of queens and workers of 12 species. In addition, we analysed changes in the buzzing of a Bombus hypnorum worker for different types of behaviour. We developed a bumblebee classification application using machine learning algorithms: we extracted audio features from sound recordings using a large feature library and used the best features to train a classification model, with Random Forest proving to be the best training algorithm on the test set of samples. The web and mobile application also allows expert users to upload new recordings that can later be used to improve the classification model and expand it to include more species.
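A minimal sketch of the buzz-classification pipeline: summary audio features (here MFCC statistics via librosa, standing in for the paper's larger feature library) and a Random Forest evaluated by cross-validation. The synthetic tones are placeholders for real buzzing recordings.

```python
# Audio-feature extraction + Random Forest classification of buzzing sounds.
# Synthetic tones stand in for real recordings; feature set is illustrative.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
# synthetic stand-ins: 1-second tones near typical buzzing fundamentals (~150-250 Hz)
recordings = [np.sin(2 * np.pi * f0 * t) for f0 in (150, 175, 200, 225)
              for _ in range(10)]
labels = [sp for sp in ("sp_a", "sp_b", "sp_c", "sp_d") for _ in range(10)]

def buzz_features(y, sr=22050, n_mfcc=20):
    """Mean and std of MFCCs over one buzzing recording."""
    mfcc = librosa.feature.mfcc(y=y.astype(np.float32), sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

X = np.stack([buzz_features(y, sr) for y in recordings])
scores = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0),
                         X, labels, cv=5)
```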

15.

Background

Predicting type-1 Human Immunodeficiency Virus (HIV-1) protease cleavage sites in protein molecules and determining their specificity is an important task that has attracted considerable attention in the research community. Achievements in this area are expected to result in effective drug design (especially of HIV-1 protease inhibitors) against this life-threatening virus. However, some drawbacks (such as the shortage of available training data and the high dimensionality of the feature space) turn this into a difficult classification problem. Thus, various machine learning techniques, and specifically several classification methods, have been proposed to increase the accuracy of the classification model. In addition, for classification problems characterized by few samples and many features, selecting the most relevant features is a major factor in increasing classification accuracy.

Results

For the HIV-1 data, we propose a consistency-based feature selection approach in conjunction with recursive feature elimination using support vector machines (SVM-RFE). We used various classifiers to evaluate the results obtained from the feature selection process. We further demonstrated the effectiveness of our proposed method by comparing it with a state-of-the-art feature selection method applied to HIV-1 data, and we evaluated the reported results based on attributes selected from different feature combinations.
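A sketch of the SVM-based recursive feature elimination step using scikit-learn's RFE with a linear SVM; the consistency-based pre-filter is not reproduced here, and the one-hot octamer encoding and placeholder data are assumptions.

```python
# SVM-RFE feature selection followed by evaluation with a downstream classifier.
# The encoding (one-hot 8-mer, 8 x 20 = 160 features) and data are placeholders.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC, SVC

rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(300, 160)).astype(float)   # encoded octamer substrates
y = rng.integers(0, 2, size=300)                         # cleaved vs. non-cleaved

rfe = RFE(estimator=LinearSVC(C=1.0, dual=False, max_iter=5000),
          n_features_to_select=40, step=5).fit(X, y)
X_selected = X[:, rfe.support_]

# evaluate the selected features with a (possibly different) classifier
acc = cross_val_score(SVC(kernel="rbf"), X_selected, y, cv=5).mean()
```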

Conclusion

Applying feature selection to the training data before performing the classification task appears to be a reasonable data-mining strategy when working with data similar to the HIV-1 datasets. On HIV-1 data, several feature selection or extraction operations have been tested in conjunction with different classifiers, and noteworthy outcomes have been reported. These facts motivated the work presented in this paper.

Software availability

The software is available at http://ozyer.etu.edu.tr/c-fs-svm.rar and can also be downloaded from esnag.etu.edu.tr/software/hiv_cleavage_site_prediction.rar; a readme file explains how to set up the software.

16.
Lysine succinylation is a newly identified post-translational modification that plays an important role in protein regulation and the control of cellular function, so accurate identification of succinylation sites in proteins is necessary. Traditional experimental approaches are costly in materials and funding; computational prediction has recently been proposed as an efficient alternative. In this study, we developed a new prediction method, iSucc-PseAAC, which combines multiple classification algorithms with different feature extraction methods. We found that a support vector machine built on pseudo amino acid composition (PseAAC) features performed best, and we combined it with ensemble learning to address the data imbalance problem. Compared with existing prediction methods, iSucc-PseAAC is more meaningful and practical for distinguishing lysine succinylation sites.
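A sketch of the two ingredients named above: an SVM on PseAAC-style features and a simple undersampling ensemble to handle the imbalance between succinylated and non-succinylated sites. The PseAAC encoding is not reproduced, and the undersampling_ensemble helper is an illustrative stand-in for whatever ensemble scheme iSucc-PseAAC actually uses.

```python
# SVM on PseAAC-style features plus an undersampling ensemble for class imbalance.
# Feature matrix and labels are placeholders (1 = succinylation site, minority class).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 40))                       # placeholder PseAAC-style features
y = np.r_[np.ones(100), np.zeros(900)].astype(int)

def undersampling_ensemble(X, y, n_models=9, seed=0):
    """Train several SVMs on balanced subsamples; predict by majority vote."""
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    models = []
    for _ in range(n_models):
        idx = np.r_[pos, rng.choice(neg, size=len(pos), replace=False)]
        models.append(SVC(kernel="rbf").fit(X[idx], y[idx]))
    return models

def predict_vote(models, X):
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

models = undersampling_ensemble(X, y)
y_pred = predict_vote(models, X)
```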

17.
Rapid and early identification of pathogens is critical to guide antibiotic therapy. Raman spectroscopy, as a noninvasive diagnostic technique, provides rapid and accurate detection of pathogens; the Raman spectrum of a single cell serves as a "fingerprint" of the cell, revealing its metabolic characteristics. Rapid identification of pathogens can thus be achieved by combining Raman spectroscopy with deep learning. Traditional classification techniques, however, typically require large amounts of training data, and collecting Raman spectra is time-consuming; for trace samples and strains that are difficult to culture, it is hard to build an accurate classification model. To reduce the number of samples that must be collected and to improve the accuracy of the classification model, this paper proposes a new pathogen detection method integrating Raman spectroscopy, a variational auto-encoder (VAE) and a long short-term memory network (LSTM). We collect the Raman signals of pathogens and use them to train the VAE, which generates a large number of synthetic Raman spectra that are indistinguishable from the real spectra and have a higher signal-to-noise ratio. These generated spectra are then fed into the LSTM together with the real spectra for training, yielding a good classification model. The experiments show that this method improves the average accuracy of pathogen classification to 96.9% while reducing the number of Raman spectra that must be collected from 1000 to 200. With this technology, the required number of measured Raman spectra can be greatly reduced, so that trace samples and strains that are difficult to culture can be rapidly identified.
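A compact sketch of the two components named in the abstract: a VAE that generates additional synthetic spectra from a small set of measured Raman spectra, and an LSTM classifier trained on the combined real and generated data. Layer sizes, loss weights and the SpectrumVAE/SpectrumLSTM classes are illustrative, not the paper's exact architecture.

```python
# VAE-based Raman spectrum augmentation plus an LSTM classifier (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

SPEC_LEN, LATENT, N_CLASSES = 900, 16, 5

class SpectrumVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(SPEC_LEN, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, LATENT), nn.Linear(256, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                                 nn.Linear(256, SPEC_LEN))
    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # reconstruction term + (down-weighted) KL divergence
    rec = F.mse_loss(recon, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + 1e-3 * kld

class SpectrumLSTM(nn.Module):
    """Treats a spectrum as a sequence of wavenumber bins (SPEC_LEN steps, 1 feature)."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, N_CLASSES)
    def forward(self, x):                       # x: (batch, SPEC_LEN)
        out, _ = self.lstm(x.unsqueeze(-1))
        return self.head(out[:, -1])

# after training the VAE on the small real set, sample synthetic spectra from the
# decoder and train the LSTM on the combined real + synthetic data, e.g.:
vae, clf = SpectrumVAE(), SpectrumLSTM()
with torch.no_grad():
    synthetic = vae.dec(torch.randn(100, LATENT))   # 100 generated spectra
```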

18.
19.
The shortwave infrared window (SWIR: 1000–1700 nm) represents a major improvement over the NIR-I region (700–900 nm) in terms of temporal and spatial resolution at depths down to 4 mm. SWIR is a fast and inexpensive alternative to more precise methods such as X-ray and optoacoustic imaging. The main obstacles in SWIR imaging are noise and scattering from tissue and skin, which reduce the precision of the method. We demonstrate that combining SWIR in vivo imaging in the NIR-IIb region (1500–1700 nm) with advanced deep learning image analysis overcomes these obstacles and takes a large step toward high-resolution imaging: it allows vessels to be precisely segmented from tissue and noise, provides the morphological structure of the vessel network with learned pseudo-3D shape and relative positions, yields dynamic information on blood vascularization at depth in small animals, and distinguishes vessel types: arteries and veins. For demonstration we use the IterNet neural network, which exploits the structural redundancy of blood vessels and provides a useful analysis tool for raw SWIR images.

20.
Lasers with wavelengths in the visible and near-infrared region pose a potential hazard to vision because the radiation can be focused onto the retina. The laser safety standard IEC 60825-1:2014 provides limits and evaluation methods for classifying such systems. An important parameter is the retinal spot size, which is described by the angular subtense of the apparent source. In laser safety evaluations, the radiation is often described as a Gaussian beam and the image on the retina is calculated using wave-optical propagation through a thin lens. For coherent radiation this method can be insufficient, as diffraction effects of the pupil aperture influence the retinal image. In this publication, we analyze these effects and propose a general analytical calculation method for the angular subtense. The proposed formula is validated for collimated and divergent Gaussian beams.
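The paper's proposed angular-subtense formula is not given in the abstract; the sketch below only reproduces the standard complex-q (ABCD) Gaussian-beam propagation through a thin lens that the abstract refers to, ignoring the pupil-aperture diffraction the paper analyzes. The simplified eye parameters are illustrative.

```python
# Textbook Gaussian-beam (complex q-parameter) propagation through a thin lens,
# as used in conventional laser-safety estimates of the retinal spot size.
import numpy as np

def q_at_waist(w0, wavelength):
    """Complex beam parameter at the waist: q = i * z_R."""
    return 1j * np.pi * w0**2 / wavelength

def propagate(q, abcd):
    (A, B), (C, D) = abcd
    return (A * q + B) / (C * q + D)

def spot_radius(q, wavelength):
    """1/e^2 beam radius from Im(1/q) = -wavelength / (pi * w^2)."""
    return np.sqrt(-wavelength / (np.pi * (1 / q).imag))

wavelength = 633e-9          # example: visible laser
w0 = 0.5e-3                  # source waist radius (m)
d_source_to_eye = 0.1        # evaluation distance (m)
f_eye = 17e-3                # simplified eye focal length (m)
d_retina = 17e-3             # lens-to-retina distance (m)

free = lambda d: ((1.0, d), (0.0, 1.0))          # free-space ABCD matrix
lens = lambda f: ((1.0, 0.0), (-1.0 / f, 1.0))   # thin-lens ABCD matrix

q = q_at_waist(w0, wavelength)
for m in (free(d_source_to_eye), lens(f_eye), free(d_retina)):
    q = propagate(q, m)
print(f"retinal 1/e^2 spot radius: {spot_radius(q, wavelength):.2e} m")
```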
