Similar Literature
20 similar documents found.
1.
Deep learning based retinopathy classification with optical coherence tomography (OCT) images has recently attracted great attention. However, existing deep learning methods fail to work well when the training and testing datasets differ, owing to the general issue of domain shift between datasets caused by different collection devices, subjects, imaging parameters, etc. To address this practical and challenging issue, we propose a novel deep domain adaptation (DDA) method to train a model on a labeled dataset and adapt it to an unlabeled dataset (collected under different conditions). It consists of two modules for domain alignment: adversarial learning and entropy minimization. We conduct extensive experiments on three public datasets to evaluate the performance of the proposed method. The results indicate that there are large domain shifts between datasets, resulting in poor performance for conventional deep learning methods. The proposed DDA method significantly outperforms existing methods for retinopathy classification with OCT images, achieving classification accuracies of 0.915, 0.959 and 0.990 under three cross-domain (cross-dataset) scenarios. Moreover, it performs comparably to human experts on a dataset none of whose labeled data were used to train the proposed DDA method. We have also visualized the learned features using the t-distributed stochastic neighbor embedding (t-SNE) technique. The results demonstrate that the proposed method can learn discriminative features for retinopathy classification.
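As a rough illustration of the entropy-minimization module, the sketch below shows the loss term applied to unlabeled target-domain predictions in PyTorch. The training-step wrapper, the weight lambda_ent, and all names are illustrative assumptions, not the paper's implementation; the adversarial-learning module (a domain discriminator trained against the feature extractor) is omitted for brevity.

```python
# Minimal sketch (assumed, not the paper's code): entropy minimization
# on unlabeled target-domain images for domain alignment.
import torch
import torch.nn.functional as F

def entropy_loss(logits):
    """Mean Shannon entropy of the softmax predictions; minimizing it
    pushes the classifier toward confident predictions on the target."""
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1).mean()

def train_step(model, x_src, y_src, x_tgt, optimizer, lambda_ent=0.1):
    # Supervised loss on the labeled source domain plus the
    # unsupervised entropy term on the unlabeled target domain.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_src), y_src)
    loss = loss + lambda_ent * entropy_loss(model(x_tgt))
    loss.backward()
    optimizer.step()
    return loss.item()
```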

2.
Posture segmentation plays an essential role in human motion analysis. The state-of-the-art method extracts sufficiently high-dimensional features from 3D depth images for each 3D point and learns an efficient body part classifier. However, high-dimensional features are memory-consuming and difficult to handle on large-scale training datasets. In this paper, we propose an efficient two-stage dimension reduction scheme, termed biview learning, to encode two independent views: depth-difference features (DDF) and relative position features (RPF). Biview learning exploits the complementary properties of DDF and RPF, and uses two stages to learn a compact yet comprehensive low-dimensional feature space for posture segmentation. In the first stage, discriminative locality alignment (DLA) is applied to the high-dimensional DDF to learn a discriminative low-dimensional representation. In the second stage, canonical correlation analysis (CCA) is used to explore the complementary properties of RPF and the dimensionality-reduced DDF. Finally, we train a support vector machine (SVM) on the output of CCA. We carefully validate the effectiveness of DLA and CCA in the two-stage scheme on our 3D human point cloud dataset. Experimental results show that the proposed biview learning scheme significantly outperforms the state-of-the-art method for human posture segmentation.
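To make the second stage concrete, here is a minimal scikit-learn sketch of CCA fusion of two feature views followed by an SVM. The random arrays stand in for the DLA-reduced DDF and the RPF; all dimensions and labels are arbitrary assumptions.

```python
# Sketch of stage two: fuse two views with CCA, then classify with an SVM.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
ddf = rng.normal(size=(500, 40))       # stand-in for DLA-reduced depth-difference features
rpf = rng.normal(size=(500, 20))       # stand-in for relative position features
labels = rng.integers(0, 5, size=500)  # hypothetical body-part labels

cca = CCA(n_components=10)
ddf_c, rpf_c = cca.fit_transform(ddf, rpf)

# Concatenate the correlated projections into one compact representation.
features = np.hstack([ddf_c, rpf_c])
clf = SVC(kernel="rbf").fit(features, labels)
print(clf.score(features, labels))
```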

3.
Camera traps often produce massive numbers of images, and empty images that contain no animals are usually overwhelming. Deep learning is widely used to identify empty camera trap images automatically. Existing methods with high accuracy are based on millions of training samples (images) and require substantial time and personnel costs to label the training samples manually. Reducing the number of training samples can save the cost of manual labeling. However, deep learning models based on a small dataset produce a large omission error for animal images: many animal images tend to be identified as empty images, which may lead to lost opportunities to discover and observe species. Therefore, building a deep convolutional neural network (DCNN) model with small errors on a small dataset remains a challenge. Using deep convolutional neural networks and a small dataset, we propose an ensemble learning approach based on conservative strategies to identify and remove empty images automatically. Furthermore, we propose three automatic identification schemes for empty images, aimed at users who accept different omission errors for animal images. Our experimental results show that these three schemes automatically identified and removed 50.78%, 58.48%, and 77.51% of the empty images in the dataset when the omission errors were 0.70%, 1.13%, and 2.54%, respectively. The analysis showed that using our scheme to automatically identify empty images did not omit species information; it only slightly changed the frequency of species occurrence. When only a small dataset is available, our approach provides users with an alternative way to automatically identify and remove empty images, which can significantly reduce the time and personnel costs required to remove empty images manually. The cost savings were comparable to the percentage of empty images removed by the models.
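The abstract does not spell out the conservative strategy, but one plausible reading is a confidence cutoff chosen to respect a user-specified omission error. The sketch below illustrates that idea on synthetic scores; it is an assumption, not the paper's algorithm.

```python
# Hypothetical sketch: pick the most aggressive cutoff whose omission
# error for animal images stays within a user-chosen tolerance.
import numpy as np

def pick_threshold(p_empty, is_animal, max_omission=0.01):
    """Return the smallest cutoff t such that removing all images with
    p_empty >= t misses at most max_omission of the animal images.
    Omission error decreases as t rises, so scan t in ascending order."""
    for t in np.linspace(0.5, 1.0, 501):
        removed = p_empty >= t
        omission = (removed & is_animal).sum() / max(is_animal.sum(), 1)
        if omission <= max_omission:
            return t
    return 1.01  # no feasible cutoff: remove nothing

rng = np.random.default_rng(1)
p_empty = rng.uniform(size=1000)            # ensemble "empty" probabilities
is_animal = rng.uniform(size=1000) < 0.3    # hypothetical ground truth
t = pick_threshold(p_empty, is_animal, max_omission=0.01)
print(f"threshold={t:.3f}, removed={(p_empty >= t).mean():.2%}")
```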

4.

Background

Cervical cancer is the fifth most common cancer among women and the third leading cause of cancer death in women worldwide. Brachytherapy is the most effective treatment for cervical cancer. For brachytherapy, computed tomography (CT) imaging is necessary because it conveys the tissue density information needed for dose planning. However, the metal artifacts caused by brachytherapy applicators remain a challenge for the automatic processing of image data in image-guided procedures and for accurate dose calculations. Therefore, an effective metal artifact reduction (MAR) algorithm for cervical CT images is in high demand.

Methods

A novel residual learning method based on a convolutional neural network (RL-ARCNN) is proposed to reduce metal artifacts in cervical CT images. For MAR, a dataset is first generated by simulating various metal artifacts; this dataset, which includes artifact-insert, artifact-free, and artifact-residual images, is used to train the CNN. Numerous image patches are extracted from the dataset for training the deep residual-learning artifact-reduction CNN (RL-ARCNN). Afterwards, the trained model can be used for MAR on cervical CT images.
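The residual-learning idea itself is simple to express: the network predicts the artifact residual, and the corrected image is the input minus that residual. The PyTorch sketch below uses a generic three-layer body as a stand-in; it does not reproduce the actual RL-ARCNN architecture.

```python
# Sketch of residual learning for artifact reduction: predict the
# residual, subtract it from the input. Architecture is a placeholder.
import torch
import torch.nn as nn

class ResidualARCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x):
        residual = self.body(x)  # predicted metal-artifact residual
        return x - residual      # artifact-reduced image

model = ResidualARCNN()
artifact_ct = torch.randn(1, 1, 64, 64)  # placeholder artifact-insert patch
clean_ct = torch.randn(1, 1, 64, 64)     # placeholder artifact-free patch
loss = nn.functional.mse_loss(model(artifact_ct), clean_ct)
```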

Results

The proposed method provides a good MAR result, with a PSNR of 38.09 on the test set of simulated artifact images. The PSNR of residual learning (38.09) is higher than that of ordinary learning (37.79), which shows that CNN-predicted residual images achieve favorable artifact reduction. Moreover, for a 512 × 512 image, the average artifact-removal time is less than 1 s.
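For reference, PSNR is computed from the mean squared error against the artifact-free image; a short sketch follows, assuming images scaled to a peak value of 1.0 (the paper's exact scaling is not stated).

```python
# Sketch of the PSNR metric; `peak` depends on the image scaling,
# which is an assumption here.
import numpy as np

def psnr(reference, test, peak=1.0):
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```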

Conclusions

The RL-ARCNN results indicate that CNN-based residual learning markedly reduces metal artifacts, improving the visualization of critical structures and the confidence of radiation oncologists in target delineation. Metal artifacts are reduced efficiently without sinogram data or complicated post-processing procedures.

5.
Electron crystallography of membrane proteins determines the structure of membrane-reconstituted and two-dimensionally (2D) crystallized membrane proteins by low-dose imaging with the transmission electron microscope and computer image processing. We have previously presented the software system 2dx for user-friendly image processing of 2D crystal images. Its central component, 2dx_image, is based on the MRC program suite and allows optionally fully automatic processing of a single 2D crystal image. We present here the program 2dx_merge, which assists the user in managing a 2D crystal image processing project and facilitates merging the data from multiple images. The merged dataset can be used as a reference to re-process all images, which usually improves the resolution of the final reconstruction. Image processing and merging can be applied iteratively until convergence is reached. 2dx is available under the GNU General Public License at http://2dx.org.

6.
Existing computational pipelines for quantitative analysis of high-content microscopy data rely on traditional machine learning approaches that fail to accurately classify more than a single dataset without substantial tuning and training, requiring extensive analysis. Here, we demonstrate that applying deep learning to biological image data can overcome the pitfalls associated with conventional machine learning classifiers. Using a deep convolutional neural network (DeepLoc) to analyze yeast cell images, we show improved performance over traditional approaches in the automated classification of protein subcellular localization. We also demonstrate the ability of DeepLoc to classify highly divergent image sets, including images of pheromone-arrested cells with abnormal cellular morphology, as well as images generated in different genetic backgrounds and in different laboratories. We offer an open-source implementation that enables updating DeepLoc on new microscopy datasets. This study highlights deep learning as an important tool for the expedited analysis of high-content microscopy data.

7.
We propose a new method for mapping neural connectivity optically, by utilizing the Cre/Lox-based Brainbow system to tag the synapses of different neurons with random mixtures of different fluorophores, such as GFP and YFP, and then detecting the patterns of fluorophores at different synapses using light microscopy (LM). Such patterns immediately report the pre- and post-synaptic cells at each synaptic connection, without tracing neural projections from individual synapses to the corresponding cell bodies. We simulate fluorescence from a population of densely labeled synapses in a block of hippocampal neuropil, completely reconstructed from electron microscopy data, and show that high-end LM is able to detect such patterns with over 95% accuracy. We conclude, therefore, that with the described approach neural connectivity in macroscopically large neural circuits can be mapped with great accuracy, in a scalable manner, using fast optical tools and straightforward image processing. Relying on an electron microscopy dataset, we also derive and explicitly enumerate the conditions that should be met to allow synaptic connectivity studies with high-resolution optical tools.

8.
Optical projection tomography (OPT) is a 3D mesoscopic imaging modality that can utilize absorption or fluorescence contrast. 3D images can be rapidly reconstructed via the Radon transform from tomographic data sets sampled at a sufficient number of projection angles, as is typically done with optically cleared samples of the mm-to-cm scale. For in vivo imaging, considerations of phototoxicity and the need to maintain animals under anesthesia typically preclude acquiring OPT data at enough angles to avoid artifacts in the reconstructed images. For sparse samples, this can be addressed with iterative algorithms that reconstruct 3D images from undersampled OPT data, but the data processing times present a significant challenge for studies imaging multiple animals. We show here that convolutional neural networks (CNNs) can be used in place of iterative algorithms to remove artifacts, reducing the processing time for an undersampled in vivo zebrafish dataset from 77 to 15 minutes. We also show that using CNNs produces reconstructions of quality equivalent to compressed sensing with 40% fewer projections. We further show that diverse training data classes, for example ex vivo mouse tissue data, can be used for CNN-based reconstructions of OPT data from other species, including live zebrafish.
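As a self-contained illustration of the undersampling problem (not the paper's pipeline), the scikit-image sketch below simulates a sparse-angle acquisition with the Radon transform and reconstructs it by filtered back projection; the streaky result is what a trained CNN would then clean up.

```python
# Sketch: sparse-angle tomographic acquisition and filtered back
# projection on a standard phantom, using scikit-image.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (128, 128))
full_angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sparse_angles = full_angles[::4]  # 4x angular undersampling

sinogram = radon(image, theta=sparse_angles)
fbp = iradon(sinogram, theta=sparse_angles, filter_name="ramp")
# Streak artifacts in `fbp` would then be removed by a trained CNN
# (model not shown here).
```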

9.
Electroporation is the phenomenon that occurs when a cell is exposed to a high electric field, causing transient cell membrane permeabilization. A paramount electroporation-based application is electrochemotherapy, which is performed by delivering high-voltage electric pulses that enable a chemotherapeutic drug to destroy tumor cells more effectively. Electrochemotherapy can be used for treating deep-seated metastases (e.g., in the liver, bone, brain, or soft tissue) using variable-geometry long-needle electrodes. To treat deep-seated tumors, patient-specific treatment planning of the electroporation-based treatment is required. Treatment planning is based on generating a 3D model of the organ and the target tissue subject to electroporation (i.e., tumor nodules). The 3D model is generated by segmentation algorithms. We implemented and evaluated three automatic liver segmentation algorithms: region growing, adaptive thresholding, and active contours (snakes). The algorithms were optimized using a seven-case dataset manually segmented by a radiologist as a training set, and then validated on an additional four-case dataset that was not included in the optimization dataset. The presented results demonstrate that patients' medical images that were not included in the training set can be successfully segmented using our three algorithms. Besides electroporation-based treatments, these algorithms can be used in any application where automatic liver segmentation is required.
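For illustration, the scikit-image sketch below runs two of the three evaluated families, adaptive (local) thresholding and an active-contour variant, on a synthetic image; the clinical CT data, preprocessing, and tuned parameters of the study are not reproduced.

```python
# Illustrative only: a synthetic image in place of liver CT data.
import numpy as np
from skimage.data import binary_blobs
from skimage.filters import gaussian, threshold_local
from skimage.segmentation import morphological_chan_vese

image = gaussian(binary_blobs(length=256).astype(float), sigma=3)

# Adaptive thresholding: a per-neighborhood intensity cutoff.
mask_adaptive = image > threshold_local(image, block_size=51)

# Active contours (a morphological Chan-Vese level-set variant,
# standing in for classic snakes); 100 iterations.
mask_snake = morphological_chan_vese(image, 100)
print(mask_adaptive.sum(), mask_snake.sum())
```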

10.
One of the major methodological challenges in single-particle electron microscopy is obtaining initial reconstructions that represent the structural heterogeneity of the dataset. Random Conical Tilt and Orthogonal Tilt Reconstruction techniques, in combination with 3D alignment and classification, can be used to obtain initial low-resolution reconstructions that represent the full range of structural heterogeneity of the dataset. To achieve statistical significance, however, a large number of 3D reconstructions, and, in turn, a large number of tilted image pairs, are required. The extraction of single-particle tilted image pairs from micrographs can be tedious and time-consuming, as it requires intensive user input even for semi-automated approaches. To overcome the bottleneck of manually selecting a large number of tilt pairs, we developed an algorithm for correlating single-particle images from tilted image pairs in a fully automated, user-independent manner. The algorithm reliably correlates correct pairs even from noisy micrographs. We further demonstrate the applicability of the algorithm by using it to obtain initial references from both negative-stain and unstained cryo datasets.

11.
Purpose

In this article, we propose a novel semi-automatic segmentation method for processing 3D MR images of the prostate using the Bhattacharyya coefficient and active band theory, with the goal of providing technical support for computer-aided diagnosis and surgery of the prostate.

Methods

Our method consecutively segments a stack of rotationally resectioned 2D slices of a prostate MR image by assessing the similarity of the shape and intensity distribution in neighboring slices. 2D segmentation is first performed on an initial slice by manually selecting several points on the prostate boundary, after which the segmentation results are propagated consecutively to neighboring slices. A framework of iterative graph cuts is used to optimize the energy function, which contains a global term for the Bhattacharyya coefficient handled with the help of an auxiliary function. Our method does not require previously segmented data for training or for building statistical models, and manual intervention can be applied flexibly and intuitively, indicating the potential utility of this method in the clinic.

Results

We tested our method on 3D T2-weighted MR images of 129 patients from the ISBI and PROMISE12 datasets, obtaining Dice similarity coefficients of 90.34 ± 2.21% and 89.32 ± 3.08%, respectively. Comparison with several state-of-the-art methods demonstrates that the proposed method is robust and accurate, achieving similar or higher accuracy than other methods without requiring training.

Conclusion

The proposed algorithm for segmenting 3D MR images of the prostate is accurate, robust, and readily applicable in a clinical environment for computer-aided surgery or diagnosis.
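The Bhattacharyya coefficient that drives the global energy term measures the overlap of two intensity distributions; a minimal NumPy sketch follows, with synthetic histograms in place of the region statistics used in the paper.

```python
# Sketch: Bhattacharyya coefficient BC(p, q) = sum_i sqrt(p_i * q_i),
# equal to 1 for identical distributions and 0 for disjoint ones.
import numpy as np

def bhattacharyya(p, q, eps=1e-12):
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return np.sum(np.sqrt(p * q))

rng = np.random.default_rng(0)
a, _ = np.histogram(rng.normal(0.0, 1.0, 5000), bins=64, range=(-4, 4))
b, _ = np.histogram(rng.normal(0.5, 1.0, 5000), bins=64, range=(-4, 4))
print(bhattacharyya(a.astype(float), b.astype(float)))
```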

12.
Selection of particle images from electron micrographs presents a bottleneck in determining the structures of macromolecular assemblies by single-particle electron cryomicroscopy (cryo-EM). The problem is particularly important when an experimentalist wants to improve the resolution of a 3D map by increasing the size of the dataset used to calculate the map by tens or hundreds of thousands of images. Although several existing methods for automatic particle image selection work well for large protein complexes that produce high-contrast images, it is well known in the cryo-EM community that small complexes that give low-contrast images are often refractory to existing automated particle selection schemes. Here we develop a method for partially automated particle image selection when an initial 3D map of the protein under investigation is already available. Candidate particle images are selected from micrographs by template matching with template images derived from projections of the existing 3D map. The candidate particle images are then used to train a support vector machine, which classifies the candidates as particle images or non-particle images. In a final step, the selected particle images are subjected to projection matching against the initial 3D map, with the correlation coefficient between the particle image and the best-matching map projection used to assess the reliability of the particle image. We show that this approach is able to rapidly select particle images from micrographs of a rotary ATPase, a type of membrane protein complex involved in many aspects of biology.
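The template-matching step can be sketched with scikit-image's normalized cross-correlation. The synthetic micrograph and template below are placeholders for real micrographs and map projections, and the correlation cutoff is an assumption.

```python
# Sketch: normalized cross-correlation template matching to collect
# candidate particle positions for the later SVM stage.
import numpy as np
from skimage.feature import match_template, peak_local_max

rng = np.random.default_rng(0)
micrograph = rng.normal(size=(512, 512))
template = rng.normal(size=(48, 48))
micrograph[100:148, 200:248] += template  # plant one "particle"

ncc = match_template(micrograph, template, pad_input=True)
# Candidate particle centers: local correlation peaks above a cutoff.
peaks = peak_local_max(ncc, min_distance=24, threshold_abs=0.5)
print(peaks)  # row/col coordinates of candidates
```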

13.

Background

There are two ways in which statistical methods can learn from biomedical data. One is to learn classifiers that identify diseases and predict outcomes, using a training dataset with an established diagnosis for each sample. When such a training dataset is not available, the task can be to mine for the presence of meaningful groups (clusters) of samples and to explore the underlying data structure (unsupervised learning).

Results

We investigated the proteomic profiles of the cytosolic fraction of human liver samples using two-dimensional electrophoresis (2DE). Samples were resected during surgical treatment of hepatic metastases of colorectal cancer. Unsupervised hierarchical clustering of the 2DE gel images (n = 18) revealed a pair of clusters containing 11 and 7 samples. We previously used the same specimens to measure biochemical profiles based on cytochrome P450-dependent enzymatic activities and also found that the samples were clearly divided into two well-separated groups by cluster analysis. It turned out that the groups defined by enzyme activity almost perfectly matched the groups identified from the proteomic data. Of the 271 reproducible spots on our 2DE gels, we selected 15 to distinguish the human liver cytosolic clusters. Using MALDI-TOF peptide mass fingerprinting, we identified 12 proteins for the selected spots, including known cancer-associated species.
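A minimal SciPy sketch of such unsupervised hierarchical clustering follows; the simulated 18 × 271 spot-intensity matrix merely mimics the shape of the data, with the two groups built in by construction.

```python
# Sketch: Ward hierarchical clustering of gel-image feature vectors.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# 18 samples x 271 spot intensities, with two shifted groups built in.
profiles = np.vstack([
    rng.normal(0.0, 1.0, size=(11, 271)),
    rng.normal(1.5, 1.0, size=(7, 271)),
])

tree = linkage(profiles, method="ward")
clusters = fcluster(tree, t=2, criterion="maxclust")
print(clusters)  # two groups, matching the 11/7 split by construction
```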

Conclusions/Significance

Our results highlight the value of hierarchical cluster analysis of proteomic data and show concordance between the results of the biochemical and proteomic approaches. Grouping the human liver samples and/or patients into distinct clusters may provide insights into possible molecular mechanisms of drug metabolism and creates a rationale for personalized treatment.

14.
A new learning-based approach is presented for particle detection in cryo-electron micrographs using the AdaBoost learning algorithm. The approach builds directly on the successful detectors developed for the domain of face detection. It is a discriminative algorithm that learns important features of the particle's appearance using a set of training examples of the particles and a set of images that do not contain particles. The algorithm is fast (10 s on a 1.3 GHz Pentium M processor), is generic, and is not limited to any particular shape or size of the particle to be detected. The method has been evaluated on a publicly available dataset of 82 cryo-EM images of keyhole limpet hemocyanin (KLH). From 998 automatically extracted particle images, the 3D structure of KLH has been reconstructed at a resolution of 23.2 Å, the same resolution as obtained using particles manually selected by a trained user.
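The discriminative setup can be sketched with scikit-learn's AdaBoost on flattened patches; the synthetic particle and background patches below replace the Haar-like appearance features and real cryo-EM data of the paper.

```python
# Sketch: AdaBoost discriminating particle vs. non-particle patches.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
particles = rng.normal(1.0, 1.0, size=(300, 16 * 16))   # particle patches
background = rng.normal(0.0, 1.0, size=(300, 16 * 16))  # non-particle patches
X = np.vstack([particles, background])
y = np.array([1] * 300 + [0] * 300)

clf = AdaBoostClassifier(n_estimators=100).fit(X, y)
print(clf.score(X, y))
```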

15.
We propose a method for the automatic detection and localization of pulmonary nodules in lung computed tomography (CT) images based on a three-dimensional convolutional neural network. The study was conducted on the open-source LUNA16 dataset. The data were preprocessed by pixel normalization, coordinate transformation, and other steps; positive samples were augmented by random translation, rotation, and flipping, and negative samples were randomly sampled. A three-dimensional convolutional neural network was built, and the network parameters were tuned during training until the best-performing network was obtained. In addition, ...
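The positive-sample augmentation described above (random flips and rotations of 3D patches; the random translation step is omitted here) can be sketched in a few lines of NumPy:

```python
# Sketch: random flip/rotation augmentation for a 3D CT patch.
import numpy as np

def augment_patch(patch, rng):
    """Randomly flip and rotate a 3D patch with axes (z, y, x)."""
    for axis in range(3):
        if rng.random() < 0.5:
            patch = np.flip(patch, axis=axis)
    k = rng.integers(0, 4)                     # 0-3 quarter turns
    patch = np.rot90(patch, k=k, axes=(1, 2))  # rotate in the axial plane
    return patch.copy()

rng = np.random.default_rng(0)
patch = rng.normal(size=(32, 48, 48))
print(augment_patch(patch, rng).shape)
```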

16.
Lung ultrasound (LUS) imaging as a point-of-care diagnostic tool for lung pathologies has been proven superior to X-ray and comparable to CT, enabling earlier and more accurate diagnosis in real time at the patient's bedside. The main limitation to widespread use is its dependence on operator training and experience. COVID-19 lung ultrasound findings predominantly reflect a pneumonitis pattern, with pleural effusion being infrequent. However, pleural effusion is easy to detect and quantify, and it was therefore selected as the subject of this study, which aims to develop an automated system for interpreting LUS of pleural effusion. A LUS dataset was collected at the Royal Melbourne Hospital, consisting of 623 videos containing 99,209 2D ultrasound images of 70 patients acquired with a phased-array transducer. A standardized protocol was followed that involved scanning six anatomical regions, providing complete coverage of the lungs for diagnosis of respiratory pathology. This protocol, combined with a deep learning algorithm using a Spatial Transformer Network, provides a basis for automatic pathology classification at the image level. In this work, the deep learning model was trained using supervised and weakly supervised approaches, which used frame-based and video-based ground truth labels, respectively. The reference was expert clinician image interpretation. The two approaches achieved comparable accuracy scores on the test set of 92.4% and 91.1%, respectively, a difference that was not statistically significant. However, the video-based labelling approach requires significantly less effort from clinical experts for ground truth labelling.

17.
Crop pests are responsible for serious economic losses around the world. Accurate recognition of pests is key to pest control and is a considerable challenge in farming. Deep learning models have shown great promise in image recognition, drawing the attention of many agricultural experts. However, the lack of pest image datasets and the inexplicability of deep learning models have hindered the development of deep learning in the field of pest recognition. Our work provides the following four contributions: (1) We constructed a new and more effective dataset for crop pest recognition, named IP41, comprising 46,567 original images of crop pests in 41 classes. (2) We trained three different deep learning models on IP41 using transfer learning combined with fine-tuning; all three models exceeded 80.00% recognition accuracy. (3) We proposed a negative-sample judgment method to exclude pest-free images uploaded by users. (4) We provided reasonable visual explanations of the most critical areas in the recognition layers using the gradient-weighted class activation mapping (Grad-CAM) method. This research suggests that the recognition process focuses more on image details than on the image as a whole, and that overall differences are ignored to a certain extent. These results will be helpful to future research in the field of agricultural pest recognition.
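Transfer learning combined with fine-tuning typically means initializing from ImageNet weights, training a new classification head, and then unfreezing upper layers at a small learning rate. The torchvision sketch below illustrates this pattern; the backbone choice and the 41-class head are assumptions based on the abstract, not the paper's exact models.

```python
# Sketch: transfer learning with a frozen ImageNet backbone and a new
# 41-class head; backbone choice is illustrative.
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Freeze the pretrained backbone, then replace and train the head.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 41)  # 41 pest classes

# Fine-tuning would later unfreeze the top block at a small learning
# rate, e.g.: for p in model.layer4.parameters(): p.requires_grad = True
```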

18.
Ecological camera traps are increasingly used by wildlife biologists to unobtrusively monitor an ecosystem's animal population. However, manual inspection of the images produced is expensive, laborious, and time-consuming. The success of deep learning systems using camera trap images has previously been explored in preliminary studies. These studies, however, are lacking in practicality: they focus primarily on extremely large datasets, often millions of images, and give little to no attention to performance on species identification in new locations not seen during training. Our goal was to test the capabilities of deep learning systems trained on camera trap images using modestly sized training data, to compare performance when considering unseen background locations, and to quantify the gradient of lower-bound performance as a guideline relating data requirements to performance expectations. We use a dataset provided by Parks Canada containing 47,279 images collected from 36 unique geographic locations across multiple environments. The images represent 55 animal species and human activity, with high class imbalance. We trained, tested, and compared the capabilities of six deep learning computer vision networks using transfer learning and image augmentation: DenseNet201, Inception-ResNet-V3, InceptionV3, NASNetMobile, MobileNetV2, and Xception. Comparing overall performance on "trained" locations, DenseNet201 performed best with 95.6% top-1 accuracy, showing promise for deep learning methods in smaller-scale research efforts. Using trained locations, classes with <500 images had low and highly variable recall of 0.750 ± 0.329, while classes with over 1,000 images had high and stable recall of 0.971 ± 0.0137. Models tasked with classifying species from untrained locations were less accurate, with DenseNet201 performing best at 68.7% top-1 accuracy. Finally, we provide an open repository where ecologists can insert their image data to train and test custom species detection models for their desired ecological domain.

19.
Evaluating and tracking wound size is fundamental to the wound assessment process. Good location and size estimates enable proper diagnosis and effective treatment. Traditionally, laboratory wound-healing studies include a collection of images taken at uniform time intervals showing the wounded area and the healing process in the test animal, often a mouse. These images are then manually inspected to determine key metrics relevant to the study, such as wound size progress. However, this is a time-consuming and laborious process. In addition, defining the wound edge can be subjective and can vary from one individual to another, even among experts. Furthermore, as our understanding of the healing process grows, so does our need to efficiently and accurately track these key factors at high throughput (e.g., over large-scale and long-term experiments). Thus, in this study, we develop a deep learning-based image analysis pipeline that takes non-uniform wound images as input and extracts relevant information such as the location of interest, wound-only image crops, and wound periphery size over time. In particular, our work focuses on images of wounded laboratory mice, which are widely used for translationally relevant wound studies, and leverages a commonly used ring-shaped splint present in most images to predict wound size. We apply the method to a dataset that was never meant to be quantified and thus presents many visual challenges. Additionally, the dataset was not meant for training deep learning models and so is relatively small, with only 256 images. We compare results to expert measurements and demonstrate preservation of information relevant to predicting wound closure despite machine-to-expert and even expert-to-expert variability. The proposed system produced high-fidelity results on unseen data with minimal human intervention. Furthermore, the pipeline estimates acceptable wound sizes when fewer than 50% of the images are missing reference objects.
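One way to use a ring-shaped splint as an in-image ruler is to convert its known physical diameter into a pixel scale; the NumPy sketch below illustrates this idea, with all numbers hypothetical rather than taken from the paper.

```python
# Sketch: convert a binary wound mask to physical area using the
# splint's known diameter as a scale reference. Numbers are made up.
import numpy as np

def wound_area_mm2(wound_mask, splint_diameter_px, splint_diameter_mm=10.0):
    """Pixel area -> mm^2 via the splint's apparent size in pixels."""
    mm_per_px = splint_diameter_mm / splint_diameter_px
    return wound_mask.sum() * mm_per_px ** 2

mask = np.zeros((256, 256), dtype=bool)
mask[100:140, 100:150] = True  # hypothetical wound pixels
print(wound_area_mm2(mask, splint_diameter_px=180.0))
```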

20.
Visualization of scientific data is crucial not only for scientific discovery but also for communicating science and medicine to both experts and a general audience. Until recently, we have been limited to visualizing the three-dimensional (3D) world of biology in two dimensions. Renderings of 3D cells are still traditionally displayed using two-dimensional (2D) media, such as a computer screen or paper. However, the advent of consumer-grade virtual reality (VR) headsets such as the Oculus Rift and HTC Vive means it is now possible to visualize and interact with scientific data in a 3D virtual world. In addition, new microscopic methods provide an unprecedented opportunity to obtain new 3D data sets. In this perspective article, we highlight how we have used cutting-edge imaging techniques to build a 3D virtual model of a cell from serial block-face scanning electron microscope (SBEM) imaging data. This model allows scientists, students, and members of the public to explore and interact with a "real" cell. Early testing of this immersive environment indicates a significant improvement in students' understanding of cellular processes and points to a new future for learning and public engagement. In addition, we speculate that VR can become a new tool for researchers studying cellular architecture and processes by populating VR models with molecular data.
