Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
In recent years, the growing application of convolutional neural networks in image processing has extended into medical diagnosis. As a prerequisite for image detection and classification, object segmentation in medical images has attracted a great deal of attention. This study builds on the fact that most pathological diagnoses require nuclei detection as the starting phase for gaining insight into the underlying biological process and for further diagnosis. In this paper, inspired by the 2018 Data Science Bowl, we introduce an attention model embedded in a multi-bridge Wnet (AMB-Wnet) that suppresses irrelevant background areas and obtains good features for learning image semantics, enabling automatic segmentation of nuclei. The proposed architecture, consisting of a redesigned down-sample group, an up-sample group, and a middle block (a new multi-scale convolutional block), is designed to extract features at different levels. A connection group is proposed in place of skip connections to transfer semantic information among levels, and the attention model is embedded in this connection group, improving performance without increasing computational cost. To validate the model's performance, we evaluated it on the BBBC038V1 dataset for nuclei segmentation. Our proposed model achieves an 85.83% F1-score, 97.81% accuracy, 86.12% recall, and 83.52% intersection over union. The proposed AMB-Wnet outperforms the original U-Net, MultiResUNet, and the recent Attention U-Net architecture.
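The abstract does not specify the internal structure of the AMB-Wnet connection group; as a hedged illustration of the general idea (attention reweighting encoder features on a skip connection, in the spirit of Attention U-Net), a minimal PyTorch sketch might look like the following. The class name AttentionGate and all channel sizes are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of an additive attention gate on a skip connection,
# in the spirit of Attention U-Net; not the paper's exact AMB-Wnet block.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, enc_channels, dec_channels, inter_channels):
        super().__init__()
        self.w_enc = nn.Conv2d(enc_channels, inter_channels, kernel_size=1)
        self.w_dec = nn.Conv2d(dec_channels, inter_channels, kernel_size=1)
        self.psi = nn.Sequential(
            nn.Conv2d(inter_channels, 1, kernel_size=1),
            nn.Sigmoid(),  # attention coefficients in [0, 1]
        )

    def forward(self, enc_feat, dec_feat):
        # enc_feat: encoder (skip) features; dec_feat: upsampled decoder features
        attn = self.psi(torch.relu(self.w_enc(enc_feat) + self.w_dec(dec_feat)))
        return enc_feat * attn  # suppress irrelevant background regions

x_enc = torch.randn(1, 64, 128, 128)
x_dec = torch.randn(1, 64, 128, 128)
gated = AttentionGate(64, 64, 32)(x_enc, x_dec)
print(gated.shape)  # torch.Size([1, 64, 128, 128])
```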

2.
Deep learning algorithms have improved the speed and quality of segmentation for certain tasks in medical imaging. The aim of this work is to design and evaluate an algorithm capable of segmenting bones in dual-energy CT data sets. A convolutional neural network based on the 3D U-Net architecture was implemented and evaluated using high tube voltage images, mixed images and dual-energy images from 30 patients. The network performed well on all the data sets; the mean Dice coefficient for the test data was larger than 0.963. Of special interest is that it performed better on dual-energy CT volumes compared to mixed images that mimicked images taken at 120 kV. The corresponding increase in the Dice coefficient from 0.965 to 0.966 was small since the enhancements were mainly at the edges of the bones. The method can easily be extended to the segmentation of multi-energy CT data.
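For reference, the reported Dice coefficient can be computed for binary masks of any dimensionality as below; this is a generic NumPy sketch, not the study's evaluation code.

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice = 2|A∩B| / (|A|+|B|) for binary masks of any dimensionality."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy 3D volumes standing in for bone masks
pred = np.zeros((4, 64, 64), dtype=np.uint8); pred[:, 10:40, 10:40] = 1
truth = np.zeros((4, 64, 64), dtype=np.uint8); truth[:, 12:40, 10:42] = 1
print(round(dice_coefficient(pred, truth), 3))
```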

3.
IRBM, 2021, 42(6): 415-423
Objectives: Convolutional neural networks (CNNs) have established state-of-the-art performance in computer vision tasks such as object detection and segmentation. One of the major remaining challenges concerns their ability to capture consistent spatial and anatomically plausible attributes in medical image segmentation. To address this issue, many works advocate integrating prior information at the level of the loss function. However, prior-based losses often suffer from local solutions and training instability. CoordConv layers are an extension of convolutional layers in which convolution is conditioned on spatial coordinates. The objective of this paper is to investigate CoordConv as a proficient substitute for convolutional layers in medical image segmentation tasks when trained under prior-based losses. Methods: This work introduces CoordConv-Unet, a novel structure designed to accommodate training under anatomical prior losses. The proposed architecture plays a dual role relative to prior-constrained CNN learning: it either acts as a regularizer that stabilizes learning while maintaining system performance, or improves performance by allowing learning to be more stable and to evade local minima. Results: To validate the performance of the proposed model, experiments are conducted on two well-known public datasets from the Decathlon challenge: a mono-modal MRI dataset dedicated to segmentation of the left atrium, and a CT image dataset whose objective is to segment the spleen, an organ characterized by varying size and mild convexity issues. Conclusion: Results show that, despite the inadequacy of CoordConv when trained with the regular Dice baseline loss, the proposed CoordConv-Unet structure can significantly improve model performance when trained under anatomically constrained prior losses.
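A minimal PyTorch sketch of a CoordConv layer in its common formulation (normalized x/y coordinate channels concatenated to the input before a standard convolution); the exact CoordConv-Unet wiring used in the paper is not reproduced here.

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Standard convolution applied to the input concatenated with
    normalized x/y coordinate channels (CoordConv)."""
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, ys, xs], dim=1))

layer = CoordConv2d(1, 16, kernel_size=3, padding=1)
print(layer(torch.randn(2, 1, 96, 96)).shape)  # torch.Size([2, 16, 96, 96])
```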

4.
Mitochondrial morphological defects are a common feature of diseased cardiac myocytes. However, quantitative assessment of mitochondrial morphology is limited by the time-consuming manual segmentation of electron micrograph (EM) images. To advance understanding of the relation between morphological defects and dysfunction, an efficient morphological reconstruction method is desired to enable isolation and reconstruction of mitochondria from EM images. We propose a new method for isolating and reconstructing single mitochondria from serial block-face scanning EM (SBEM) images. CDeep3M, a cloud-based deep learning network for EM images, was used to segment mitochondrial interior volumes and boundaries. Post-processing was performed using both the predicted interior volume and the exterior boundary to isolate and reconstruct individual mitochondria. Series of SBEM images from two separate cardiac myocytes were processed. The highest F1-score was 95% using 50 training datasets, greater than that of previously reported automated methods and comparable to manual segmentation. Accuracy of separating individual mitochondria was 80% on a pixel basis. A total of 2315 mitochondria in the two SBEM image series were evaluated, with a mean volume of 0.78 µm³. The volume distribution was very broad and skewed: the most frequent mitochondria were 0.04–0.06 µm³, but mitochondria larger than 2.0 µm³ accounted for more than 10% of the total number. The average short-axis length was 0.47 µm. Longitudinally oriented mitochondria (0–30 degrees) were dominant (54%). This new automated segmentation and separation method can help quantify mitochondrial morphology and improve understanding of myocyte structure–function relationships.
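The post-processing is only summarized in the abstract; a plausible sketch, under the assumption that the network outputs separate interior and boundary probability maps, removes predicted boundaries from the interior and labels connected components with SciPy. The function name and thresholds are hypothetical.

```python
import numpy as np
from scipy import ndimage

def isolate_mitochondria(interior_prob, boundary_prob, thr=0.5, min_voxels=50):
    """Label candidate mitochondria by removing predicted boundaries from the
    predicted interior and keeping sufficiently large connected components."""
    interior = interior_prob > thr
    boundary = boundary_prob > thr
    seeds = interior & ~boundary                      # split touching objects
    labels, n = ndimage.label(seeds)
    sizes = ndimage.sum(seeds, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_voxels))
    labels[~keep] = 0
    return labels

interior = np.random.rand(8, 128, 128)
boundary = np.random.rand(8, 128, 128)
print(isolate_mitochondria(interior, boundary).max())
```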

5.
The importance of T cells in immunotherapy has motivated developing technologies to improve therapeutic efficacy. One objective is assessing antigen-induced T cell activation because only functionally active T cells are capable of killing the desired targets. Autofluorescence imaging can distinguish T cell activity states in a non-destructive manner by detecting endogenous changes in metabolic co-enzymes such as NAD(P)H. However, recognizing robust activity patterns is computationally challenging in the absence of exogenous labels. We demonstrate machine learning methods that can accurately classify T cell activity across human donors from NAD(P)H intensity images. Using 8260 cropped single-cell images from six donors, we evaluate classifiers ranging from traditional models that use previously-extracted image features to convolutional neural networks (CNNs) pre-trained on general non-biological images. Adapting pre-trained CNNs for the T cell activity classification task provides substantially better performance than traditional models or a simple CNN trained with the autofluorescence images alone. Visualizing the images with dimension reduction provides intuition into why the CNNs achieve higher accuracy than other approaches. Our image processing and classifier training software is available at https://github.com/gitter-lab/t-cell-classification.
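A minimal sketch of the transfer-learning setup described (replacing the final layer of an ImageNet-pretrained CNN with a two-class head), assuming PyTorch and a recent torchvision; it is not the authors' pipeline from the linked repository.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and adapt it for 2-class activity labels.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                        # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)      # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy training step with grayscale NAD(P)H crops replicated to 3 channels
images = torch.randn(8, 1, 224, 224).repeat(1, 3, 1, 1)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```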

6.
Image segmentation is a critical step in digital image analysis, especially for tissue sections. As the morphology of cell nuclei provides important biological information, their segmentation is of particular interest. Known segmentation methods are not adequate for segmenting cell nuclei of tissue sections; the reason lies in the optical properties of their images. We have developed new gradient-based methods for segmenting previously presegmented images by taking these properties into account and by using the approximately circular shape of the cell nuclei as a priori information. In our first technique, the segment method, the images of the nuclei are divided into eight segments, with special gradient filters defined for each segment. This enabled us to improve the gradient image; after searching for local maxima, the contours of the nuclei can be found. In the second method, transformation into the polar coordinate system (PCS), the a priori information serves to define a circular direction field for gradient computation and contour finding. In contrast with the first method, which offers a rapid, general idea of nuclear shape, the PCS method permits precise segmentation and morphological analysis of the cell nuclei.
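The original gradient/PCS method predates modern toolkits; a loose, hypothetical NumPy re-creation of the polar-coordinate idea (sample rays around a seed point and take the radius of maximum radial gradient per angle) is sketched below.

```python
import numpy as np

def polar_contour(image, center, max_radius=40, n_angles=180):
    """For each angle around `center`, return the radius where the radial
    intensity gradient is strongest - a rough nucleus contour estimate."""
    cy, cx = center
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.arange(1, max_radius)
    contour = []
    for a in angles:
        ys = np.clip((cy + radii * np.sin(a)).astype(int), 0, image.shape[0] - 1)
        xs = np.clip((cx + radii * np.cos(a)).astype(int), 0, image.shape[1] - 1)
        profile = image[ys, xs].astype(float)
        r_edge = radii[np.argmax(np.abs(np.gradient(profile)))]
        contour.append((cy + r_edge * np.sin(a), cx + r_edge * np.cos(a)))
    return np.array(contour)

img = np.zeros((100, 100)); yy, xx = np.ogrid[:100, :100]
img[(yy - 50) ** 2 + (xx - 50) ** 2 < 20 ** 2] = 1.0   # synthetic "nucleus"
print(polar_contour(img, (50, 50))[:3])
```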

7.
OBJECTIVE: To design an automated system for the classification of cells based on analysis of serous cytology, with the aim of segmenting both cytoplasm and nucleus using color information from the images as the main characteristic of the cells. STUDY DESIGN: The segmentation strategy uses color information coupled with mathematical morphology tools, such as watersheds. Cytoplasm and nuclei of all diagnostic cells are retained; erythrocytes and debris are eliminated. Special techniques are used for the separation of clustered cells. RESULTS: A large set of cells was assessed by experts to score the segmentation success rate. All cells were segmented whatever their spatial configurations. The average success rate was 92.5% for nuclei and 91.1% for cytoplasm. CONCLUSION: This color information-based segmentation of images of serous cells is accurate and provides a useful tool. This segmentation strategy will improve the automated classification of cells.
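A rough scikit-image sketch of the general color-plus-watershed strategy (threshold a color channel, then split touching objects with a distance-transform watershed); the paper's specific morphological pipeline and cluster-separation techniques are not reproduced.

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from scipy import ndimage

def segment_nuclei(rgb):
    """Threshold a color channel, then split touching nuclei with a
    distance-transform watershed."""
    saturation = rgb2hsv(rgb)[..., 1]            # stained objects are saturated
    mask = saturation > threshold_otsu(saturation)
    distance = ndimage.distance_transform_edt(mask)
    peaks = peak_local_max(distance, labels=mask, min_distance=5)
    markers = np.zeros_like(mask, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)

rgb = np.random.rand(128, 128, 3)
print(segment_nuclei(rgb).max())
```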

8.
Cardiovascular diseases are closely associated with deteriorating atherosclerotic plaques. Optical coherence tomography (OCT) is a recently developed intravascular imaging technique with a high resolution of approximately 10 microns that can provide accurate quantification of coronary plaque morphology. However, tissue segmentation of OCT images in clinical practice is still mainly performed manually by physicians, which is time-consuming and subjective. To overcome these limitations, two automatic segmentation methods for intracoronary OCT images, based on a support vector machine (SVM) and a convolutional neural network (CNN), were applied to identify the plaque region and characterize plaque components. In vivo IVUS and OCT coronary plaque data from 5 patients were acquired at Emory University with patient consent. Seventy-seven matched IVUS and OCT slices with good image quality and lipid cores were selected for this study. Manual OCT segmentation was performed by experts using virtual histology IVUS as guidance and used as the gold standard for the automatic segmentations. The overall classification accuracy of the CNN method reached 95.8%, while the accuracy of the SVM was 71.9%. The CNN-based segmentation method can better characterize plaque composition on OCT images and greatly reduce the time physicians spend segmenting and identifying plaques.

9.
Our ability to perceive a stable visual world in the presence of continuous movements of the body, head, and eyes has puzzled researchers in the neuroscience field for a long time. We reformulated this problem in the context of hierarchical convolutional neural networks (CNNs), whose architectures have been inspired by the hierarchical signal processing of the mammalian visual system, and examined perceptual stability as an optimization process that identifies image-defining features for accurate image classification in the presence of movements. Movement signals, multiplexed with visual inputs along overlapping convolutional layers, aided classification invariance of shifted images by making the classification faster to learn and more robust relative to input noise. Classification invariance was reflected in activity manifolds associated with image categories emerging in late CNN layers and with network units acquiring movement-associated activity modulations as observed experimentally during saccadic eye movements. Our findings provide a computational framework that unifies a multitude of biological observations on perceptual stability under optimality principles for image classification in artificial neural networks.

10.
Ground cover and surface vegetation information are key inputs to wildfire propagation models and are important indicators of ecosystem health. Often these variables are approximated through visual estimation by trained professionals, but the results are prone to bias and error. This study analyzed the viability of using nadir (downward-facing) photos from smartphones (iPhone 7) to provide quantitative ground cover and biomass loading estimates. Good correlations were found between field-measured values and pixel counts from manually segmented photos delineating a pre-defined set of 10 discrete cover types. Although promising, segmenting photos manually was labor-intensive and therefore costly. We explored the viability of using a trained deep convolutional neural network (DCNN) to perform image segmentation automatically. The DCNN was able to segment nadir images with 95% accuracy when compared with manually delineated photos. To validate the flexibility and robustness of the automated image segmentation algorithm, we applied it to an independent dataset of nadir photographs captured at a different study site with surface vegetation characteristics similar to the training site, with promising results.
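A minimal sketch of the pixel-counting step (per-class cover percentage from an integer-labeled segmentation mask); the class names are hypothetical stand-ins for the study's 10 cover types.

```python
import numpy as np

COVER_CLASSES = ["bare_soil", "litter", "grass", "shrub", "rock"]  # hypothetical subset

def cover_fractions(label_map, class_names):
    """Percent cover per class from an integer-labeled segmentation mask."""
    total = label_map.size
    return {name: 100.0 * np.count_nonzero(label_map == i) / total
            for i, name in enumerate(class_names)}

label_map = np.random.randint(0, len(COVER_CLASSES), size=(512, 512))
print(cover_fractions(label_map, COVER_CLASSES))
```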

11.
Background

Multiplex immunohistochemistry (mIHC) permits the labeling of six or more distinct cell types within a single histologic tissue section. The classification of each cell type requires detection of uniquely colored chromogens localized to cells expressing biomarkers of interest. The most comprehensive and reproducible method to evaluate such slides is to employ digital pathology and image analysis pipelines to whole-slide images (WSIs). Our suite of deep learning tools quantitatively evaluates the expression of six biomarkers in mIHC WSIs. These methods address the current lack of readily available methods to evaluate more than four biomarkers and circumvent the need for specialized instrumentation to spectrally separate different colors. The use case application for our methods is a study that investigates tumor immune interactions in pancreatic ductal adenocarcinoma (PDAC) with a customized mIHC panel.

Methods

Six different colored chromogens were utilized to label T-cells (CD3, CD4, CD8), B-cells (CD20), macrophages (CD16), and tumor cells (K17) in formalin-fixed paraffin-embedded (FFPE) PDAC tissue sections. We leveraged pathologist annotations to develop complementary deep learning-based methods: (1) ColorAE is a deep autoencoder which segments stained objects based on color; (2) U-Net is a convolutional neural network (CNN) trained to segment cells based on color, texture and shape; and (3) ensemble methods that employ both ColorAE and U-Net, collectively referred to as ColorAE:U-Net. We assessed the performance of our methods using: structural similarity and DICE score to evaluate segmentation results of ColorAE against traditional color deconvolution; F1 score, sensitivity, positive predictive value, and DICE score to evaluate the predictions from ColorAE, U-Net, and ColorAE:U-Net ensemble methods against pathologist-generated ground truth. We then used prediction results for spatial analysis (nearest neighbor).

Results

We observed that (1) the performance of ColorAE is comparable to traditional color deconvolution for single-stain IHC images (note: traditional color deconvolution cannot be used for mIHC); (2) ColorAE and U-Net are complementary methods that detect six different classes of cells with comparable performance; (3) combinations of ColorAE and U-Net in ensemble methods outperform ColorAE and U-Net alone; and (4) ColorAE:U-Net ensemble methods can be employed for detailed analysis of the tumor microenvironment (TME).

Summary

We developed a suite of scalable deep learning methods to analyze 6 distinctly labeled cell populations in mIHC WSIs. We evaluated our methods and found that they reliably detected and classified cells in the PDAC tumor microenvironment. We also utilized the ColorAE:U-Net ensemble method to analyze 3 mIHC WSIs with nearest neighbor spatial analysis. We demonstrate a proof of concept that these methods can be employed to quantitatively describe the spatial distribution of immune cells within the tumor microenvironment. These complementary deep learning methods are readily deployable for use in clinical research studies.
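A small SciPy sketch of the kind of nearest-neighbor spatial analysis mentioned, assuming cell centroids in pixel coordinates; the marker names are illustrative and this is not the authors' code.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_distances(source_xy, target_xy):
    """For each source cell centroid, distance to the closest target cell."""
    tree = cKDTree(target_xy)
    distances, _ = tree.query(source_xy, k=1)
    return distances

# Toy centroids: e.g. CD8+ T cells vs K17+ tumor cells (coordinates in pixels)
cd8 = np.random.rand(200, 2) * 5000
k17 = np.random.rand(500, 2) * 5000
d = nearest_neighbor_distances(cd8, k17)
print(d.mean(), np.median(d))
```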


12.
Classification and recognition of wood species are of critical importance in the wood trade, industry, and science; accurate identification of wood species is therefore a necessity. Conventional classification and recognition of wood species require knowledge of and experience with wood anatomy, which is time-consuming, cost-ineffective, and destructive. Hence, convolutional neural networks (CNNs), a deep learning tool, have replaced the conventional methods. In this study, classification of wood species on the WOOD-AUTH dataset was investigated in detail, evaluating the performance of various deep learning architectures, including ResNet-50, Inception V3, Xception, and VGG19, trained with transfer learning. The dataset contains macroscopic images of 12 wood species with three different types of wood sections: cross, radial, and tangential. The experimental findings demonstrate that Xception produced remarkable performance compared to the other models in this study and to the results reported by the WOOD-AUTH dataset owners, yielding a classification accuracy of 95.88%.

13.
Understanding the environmental factors that influence forest health, as well as the occurrence and abundance of wildlife, is a central topic in forestry and ecology. However, the manual processing of field habitat data is time-consuming, and months are often needed to progress from data collection to data interpretation. To shorten the time needed to process the data, we propose Habitat-Net: a novel deep learning application based on convolutional neural networks (CNNs) to segment habitat images of tropical rainforests. Habitat-Net takes color images as input and, after multiple layers of convolution and deconvolution, produces a binary segmentation of the input image. We worked on two different types of habitat datasets that are widely used in ecological studies to characterize forest conditions: canopy closure and understory vegetation. We trained the model with 800 canopy images and 700 understory images separately and then used 149 canopy and 172 understory images to test the performance of Habitat-Net. We compared the performance of Habitat-Net to that of a simple threshold-based method, manual processing by a second researcher, and a CNN approach called U-Net, upon which Habitat-Net is based. Habitat-Net, U-Net, and simple thresholding reduced total processing time to milliseconds per image, compared to 45 s per image for manual processing. However, the higher mean Dice coefficient of Habitat-Net (0.94 for canopy and 0.95 for understory) indicates that the accuracy of Habitat-Net is higher than that of both simple thresholding (0.64, 0.83) and U-Net (0.89, 0.94). Habitat-Net will be of great relevance for ecologists and foresters who need to monitor changes in forest structure. The automated workflow not only reduces processing time but also standardizes the analytical pipeline, thereby reducing the uncertainty that would be introduced by manual processing of images by different people (either over time or between study sites).

14.
For most cells, water permeability and plasma membrane properties play a vital role in designing an optimal protocol for successful cryopreservation, and measuring the water permeability of cells at subzero temperatures is essential. So far, no segmentation technique handles the image processing task at subzero temperatures accurately: ice formation and the variable background during freezing pose a significant challenge for most conventional segmentation algorithms. Thus, a robust and accurate segmentation approach is needed that can extract cells from the extracellular ice surrounding the cell boundary. We therefore propose a convolutional neural network (CNN) architecture similar to U-Net, but differing from those conventionally used in computer vision, to extract the cell boundaries as the cells shrink in the engulfing ice. The images were obtained from a cryo-stage microscope, and the segmentation results were validated using the Hausdorff distance (mean ± standard deviation) for the different methods and the CNN model. The experimental results show that the CNN model extracts cell border contours from the background in the subzero state more coherently and effectively than other traditional segmentation approaches.
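For reference, a symmetric Hausdorff distance between two contours can be computed with SciPy as sketched below; this is a generic illustration, not the authors' validation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(contour_a, contour_b):
    """Symmetric Hausdorff distance between two point sets (N x 2 arrays)."""
    return max(directed_hausdorff(contour_a, contour_b)[0],
               directed_hausdorff(contour_b, contour_a)[0])

t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.c_[50 + 20 * np.cos(t), 50 + 20 * np.sin(t)]   # reference contour
shrunk = np.c_[50 + 17 * np.cos(t), 50 + 17 * np.sin(t)]   # e.g. cell shrinking in ice
print(hausdorff_distance(circle, shrunk))  # approximately 3.0
```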

15.
Optical coherence tomography angiography (OCTA) can map the microvascular networks of the cerebral cortices with micrometer resolution and millimeter penetration. However, the high scattering of the skull and the strong noise in the deep imaging region distort the vasculature projections and degrade OCTA image quality. Here, we proposed a deep learning segmentation method based on a U-Net convolutional neural network to extract the cortical region from the OCT image. The vascular networks were then visualized by three OCTA algorithms. The image quality of the vasculature projections was assessed by two metrics: the peak signal-to-noise ratio (PSNR) and the contrast-to-noise ratio (CNR). The results show that the accuracy of the cortical segmentation was 96.07%, and the PSNR and CNR values increased significantly in the projections of the selected cortical regions. OCTA incorporating the deep learning-based cortical segmentation can efficiently improve image quality and enhance vasculature clarity.
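For reference, common definitions of the two reported image-quality metrics, sketched with assumed conventions (an 8-bit dynamic range for PSNR, and CNR as the mean difference between a vessel ROI and background divided by the background noise); the paper may use different ROI definitions.

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean_signal - mean_background| / std_background."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

ref = np.random.randint(0, 256, (256, 256)).astype(float)
noisy = ref + np.random.normal(0, 5, ref.shape)
print(round(psnr(ref, noisy), 1), round(cnr(ref[:50, :50] + 40, ref[200:, 200:]), 2))
```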

16.
IRBM, 2022, 43(6): 521-537
Objectives: Accurate and reliable segmentation of brain tumors from MRI images helps in planning enhanced treatment and increases the life expectancy of patients. However, manual segmentation of brain tumors is subjective and prone to errors. In contrast, recent advances in convolutional neural network (CNN)-based methods have exhibited outstanding potential for robust segmentation of brain tumors. This article comprehensively investigates recent advances in CNN-based methods for automatic segmentation of brain tumors from MRI images. It examines popular deep learning (DL) libraries/tools for expeditious and effortless implementation of CNN models. Furthermore, a critical assessment of current DL architectures is delineated along with the scope for improvement. Methods: In this work, more than 50 scientific papers from 2014-2020 were selected using Google Scholar and PubMed. The leading journals related to this work, along with proceedings from major conferences such as MICCAI, MIUA, and ECCV, were also retrieved. This research additionally examined related annual challenges, including the Multimodal Brain Tumor Segmentation Challenge (MICCAI BraTS) and the Ischemic Stroke Lesion Segmentation Challenge (ISLES). Results: After a systematic literature search pertinent to the theme, we found that there exist principally three variations of CNN architecture for brain tumor segmentation: single-path and multi-path, fully convolutional, and cascaded CNNs. The performance of most automated CNN-based methods is appraised on the BraTS dataset, provided as part of the MICCAI Multimodal Brain Tumor Segmentation challenge held annually since 2012. Conclusion: Notwithstanding the remarkable potential of CNN-based methods, reliable and robust segmentation of brain tumors continues to be an intractable challenge, due to the intricate anatomy of the brain, variability in its appearance, and imperfections in image acquisition. Moreover, owing to the small size of MRI datasets, CNN-based methods cannot operate at their full capacity, as demonstrated with large-scale datasets such as ImageNet.

17.
Digital pathology and microscope image analysis are widely used in comprehensive studies of cell morphology. Identification and analysis of leukocytes in blood smear images acquired with a bright-field microscope are vital for diagnosing many diseases, such as hepatitis, leukaemia, and acquired immune deficiency syndrome (AIDS). The major challenge for robust and accurate identification and segmentation of leukocytes in blood smear images lies in the large variations in cell appearance, such as the size, colour, and shape of cells, the adhesion between leukocytes (white blood cells, WBCs) and erythrocytes (red blood cells, RBCs), and the presence of substantial staining impurities in blood smear images. In this paper, an end-to-end leukocyte localization and segmentation method named LeukocyteMask is proposed, in which pixel-level prior information is utilized for supervised training of a deep convolutional neural network, which is then employed to locate the regions of interest (ROIs) of leukocytes; the segmentation mask of each leukocyte is finally obtained from the extracted ROI by forward propagation of the network. Experimental results validate the effectiveness of the proposed method, and both quantitative and qualitative comparisons with existing methods indicate that LeukocyteMask achieves state-of-the-art performance for leukocyte segmentation in terms of robustness and accuracy.

18.
Mitochondria exist as dynamic networks that often change shape and subcellular distribution. The morphology of mitochondria within a cell is controlled by precisely regulated rates of organelle fusion and fission. Several reports have described dramatic alterations in mitochondrial morphology during the early stages of apoptosis: fragmentation of the network and remodeling of the cristae. However, whether this mitochondrial fragmentation is a required step for apoptosis is highly debated. In this review, the recent progress in understanding the mechanisms governing mitochondrial morphology during apoptosis and the latest advances connecting the regulation of mitochondrial morphology with apoptosis are discussed.

19.
Mitochondria regulate critical components of cellular function via ATP production, reactive oxygen species production, Ca²⁺ handling, and apoptotic signaling. Two classical methods exist to study the mitochondrial function of skeletal muscle: isolated mitochondria and permeabilized myofibers. Whereas mitochondrial isolation removes a portion of the mitochondria from their cellular environment, myofiber permeabilization preserves mitochondrial morphology and functional interactions with other intracellular components. Despite this, isolated mitochondria remain the most commonly used preparation to infer in vivo mitochondrial function. In this study, we directly compared measures of several key aspects of mitochondrial function in both isolated mitochondria and permeabilized myofibers of rat gastrocnemius muscle. Here we show that mitochondrial isolation i) induced fragmented organelle morphology; ii) dramatically sensitized the permeability transition pore to a Ca²⁺ challenge; iii) differentially altered mitochondrial respiration depending upon the respiratory conditions; and iv) dramatically increased H₂O₂ production. These alterations are qualitatively similar to the changes in mitochondrial structure and function observed in vivo after cellular stress-induced mitochondrial fragmentation, but are generally of much greater magnitude. Furthermore, mitochondrial isolation markedly altered electron transport chain protein stoichiometry. Collectively, our results demonstrate that isolated mitochondria possess functional characteristics that differ fundamentally from those of intact mitochondria in permeabilized myofibers. Our work and that of others underscores the importance of studying mitochondrial function in tissue preparations where mitochondrial structure is preserved and all mitochondria are represented.

20.
Remote sensing images obtained by unoccupied aircraft systems (UAS) across different seasons enable the capture of species-specific phenological patterns of tropical trees. However, the application of UAS multi-season images to classify tropical tree species is still poorly understood. In this study, we used RGB images from different seasons obtained by a low-cost UAS and convolutional neural networks (CNNs) to map tree species in an Amazonian forest. Individual tree crowns (ITC) were outlined in the UAS images and identified to the species level using forest inventory data. The CNN model was trained with images obtained in February, May, August, and November. The classification accuracy in the rainy season (November and February) was higher than in the dry season (May and August). Fusing images from multiple seasons improved the average accuracy of tree species classification by up to 21.1 percentage points, reaching 90.5%. The CNN model can learn species-specific phenological characteristics that affect classification accuracy, such as leaf fall in the dry season, which highlights its potential to discriminate species under various conditions. We produced high-quality individual tree crown maps of the species using a post-processing procedure. The combination of multi-season UAS images and CNNs has the potential to map tree species in the Amazon, providing valuable insights for forest management and conservation initiatives.
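One straightforward way to fuse multi-season RGB imagery for a CNN is to stack the co-registered seasonal bands of each tree crown into a single multi-channel input; a hypothetical NumPy sketch is shown below (the authors' exact fusion strategy may differ).

```python
import numpy as np

SEASONS = ["february", "may", "august", "november"]

def stack_seasons(crops):
    """Stack co-registered RGB crops of one tree crown from each season
    into a single (H, W, 3 * n_seasons) array for CNN input."""
    assert all(c.shape == crops[0].shape for c in crops), "crops must be co-registered"
    return np.concatenate(crops, axis=-1)

# Toy 64x64 RGB crops of the same individual tree crown in four seasons
crops = [np.random.rand(64, 64, 3).astype(np.float32) for _ in SEASONS]
fused = stack_seasons(crops)
print(fused.shape)  # (64, 64, 12)
```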
