Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Purpose: To develop an automatic multimodal method for segmentation of the parotid glands (PGs) from pre-registered computed tomography (CT) and magnetic resonance (MR) images, and to compare its results with those of an existing state-of-the-art algorithm that segments PGs from CT images only.
Methods: Magnetic resonance images of the head and neck were registered to the accompanying CT images using two different state-of-the-art registration procedures. The reference domains of the registered image pairs were divided into complementary PG regions and background according to the manual delineation of PGs on the CT images provided by a physician. Patches of intensity values from both image modalities, centered on randomly sampled voxels from the reference domain, served as positive or negative samples for training a convolutional neural network (CNN) classifier. The trained CNN accepted a previously unseen (registered) image pair and classified its voxels according to the resemblance of their patches to the patches used for training. The final segmentation was refined using a graph-cut algorithm, followed by dilate-erode operations.
Results: Using the same image dataset, PG segmentation was performed with the proposed multimodal algorithm and with an existing monomodal algorithm that segments PGs from CT images only. The mean Dice overlap coefficient was 78.8% for the proposed algorithm and 76.5% for the monomodal algorithm.
Conclusions: Automatic PG segmentation on the planning CT image can be augmented with the MR image modality, leading to improved RT planning for head and neck cancer.
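Many of the abstracts in this list report the Dice overlap coefficient as their headline metric; a minimal NumPy sketch of that computation (function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Two toy 4x4 masks of 4 voxels each, overlapping in 2 voxels
a = np.zeros((4, 4), dtype=bool); a[0:2, 0:2] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 0:2] = True
print(dice_coefficient(a, b))  # 2*2/(4+4) = 0.5
```

A mean Dice of 78.8% versus 76.5%, as reported above, therefore means the multimodal masks overlap the manual delineations slightly more than the CT-only masks do.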

2.
N. Bhaskar  M. Suchetha 《IRBM》2021,42(4):268-276
Objectives: In this paper, we propose a computationally efficient Correlational Neural Network (CorrNN) learning model and an automated diagnosis system for detecting Chronic Kidney Disease (CKD). A Support Vector Machine (SVM) classifier is integrated with the CorrNN model to improve prediction accuracy.
Material and methods: The proposed hybrid model is trained and tested with a novel sensing module. We monitored the concentration of urea in saliva samples to detect the disease. Experiments were carried out to test the model with real-time samples and to compare its performance with a conventional Convolutional Neural Network (CNN) and other traditional data-classification methods.
Results: The proposed method outperforms the conventional methods in terms of computational speed and prediction accuracy. The combined CorrNN-SVM network achieved a prediction accuracy of 98.67%. The experimental evaluations show a reduction in overall computation time of about 9.85% compared to the conventional CNN algorithm.
Conclusion: The use of the SVM classifier improved the network's ability to make accurate predictions. The proposed framework substantially advances the current methodology and provides more precise results than other data-classification methods.

3.
《IRBM》2021,42(6):415-423
Objectives: Convolutional neural networks (CNNs) have established state-of-the-art performance in computer vision tasks such as object detection and segmentation. One of the major remaining challenges concerns their ability to capture consistent, anatomically plausible spatial attributes in medical image segmentation. To address this issue, many works advocate integrating prior information at the level of the loss function. However, prior-based losses often suffer from local solutions and training instability. CoordConv layers are extensions of the convolutional layer in which convolution is conditioned on spatial coordinates. The objective of this paper is to investigate CoordConv as a proficient substitute for convolutional layers in medical image segmentation tasks when training under prior-based losses.
Methods: This work introduces CoordConv-Unet, a novel structure that can accommodate training under anatomical prior losses. The proposed architecture plays a dual role relative to prior-constrained CNN learning: it either acts as a regularizer that stabilizes learning while maintaining system performance, or improves performance by making learning more stable and helping it evade local minima.
Results: To validate the performance of the proposed model, experiments were conducted on two well-known public datasets from the Decathlon challenge: a mono-modal MRI dataset dedicated to segmentation of the left atrium, and a CT dataset whose objective is to segment the spleen, an organ characterized by varying size and mild convexity issues.
Conclusion: Results show that, despite the inadequacy of CoordConv when trained with the regular Dice baseline loss, the proposed CoordConv-Unet structure can significantly improve model performance when trained under anatomically constrained prior losses.

4.
Aim: This study evaluated a convolutional neural network (CNN) for automatically delineating the liver on contrast-enhanced and non-contrast-enhanced CT, making comparisons with a commercial automated technique (MIM Maestro®).
Background: Intensity-modulated radiation therapy requires careful, labor-intensive planning involving delineation of the target and organs on CT or MR images to ensure delivery of an effective dose to the target while avoiding organs at risk.
Materials and Methods: Contrast-enhanced planning CT images from 101 pancreatic cancer cases, with accompanying mask images showing manually delineated liver contours, were used to train the CNN to segment the liver. The trained CNN then performed liver segmentation on a further 20 contrast-enhanced and 15 non-contrast-enhanced CT image sets, producing three-dimensional mask images of the liver.
Results: For both contrast-enhanced and non-contrast-enhanced images, the mean Dice similarity coefficients between CNN segmentations and ground-truth manual segmentations were significantly higher than those between ground truth and the MIM Maestro software (p < 0.001). Although mean CT values of the liver were higher on contrast-enhanced than on non-contrast-enhanced CT, there were no significant differences in the Hausdorff distances of the CNN segmentations, indicating that the CNN could successfully segment the liver on both image types despite being trained only on contrast-enhanced images.
Conclusions: Our results suggest that a CNN can perform highly accurate automated delineation of the liver on CT images, irrespective of whether they are contrast-enhanced.
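Alongside Dice, several entries here report the Hausdorff distance, which measures the worst-case disagreement between two contours. A small illustrative sketch on 2-D point sets (not the papers' implementation, which typically works on 3-D surface voxels):

```python
import numpy as np

def hausdorff(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two point sets of shape (N, 2)
    and (M, 2): the largest nearest-neighbour gap in either direction."""
    # pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [3.0, 0.0]])
print(hausdorff(a, b))  # 2.0: point (3,0) is 2 away from its nearest a-point
```

Because it captures the single worst point, Hausdorff distance is insensitive to contrast-driven intensity shifts as long as the contour itself stays put, which is why it is a natural complement to the overlap-based Dice score used above.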

5.
《IRBM》2022,43(3):161-168
Background: Accurate delineation of organs at risk (OARs) is critical in radiotherapy. Manual delineation is tedious and suffers from both inter-observer and intra-observer variability. Automatic segmentation of brain MR images has a wide range of applications in brain tumor radiotherapy. In this paper, we propose a multi-atlas based adaptive active contour model for automatic OAR segmentation in brain MR images.
Methods: The proposed method consists of two parts: multi-atlas based OAR contour initialization, and an adaptive edge- and local-region-based active contour evolution. In the adaptive active contour model, we define an energy functional with an adaptive edge intensity fitting force, responsible for driving the contour inwards or outwards, and a local region intensity fitting force, which guides the evolution of the contour.
Results: Experimental results show that the proposed method achieved accurate automatic segmentation of the brainstem, eyes and lens, with Dice Similarity Coefficient (DSC) values of 87.19%, 91.96% and 77.11%, respectively. In addition, the dosimetric parameters demonstrate high consistency between the manual OAR delineations and the automatic segmentation results of the proposed method in brain tumor radiotherapy.
Conclusions: The geometric and dosimetric evaluations show the desirable performance of the proposed method for OAR segmentation in brain tumor radiotherapy.

6.
Purpose: The classification of urinary stones is important prior to treatment because treatment depends on the stone type: calcium, uric acid, or mixture stones. We have developed an automatic approach for classifying urinary stones into these three types from micro-computed tomography (micro-CT) images using a convolutional neural network (CNN).
Materials and methods: Thirty urinary stones from different patients were scanned in vitro using micro-CT (pixel size: 14.96 μm; slice thickness: 15 μm), producing a total of 2,430 images (micro-CT slices). The slices (227 × 227 pixels) were classified into the three categories based on their energy-dispersive X-ray (EDX) spectra obtained via scanning electron microscopy (SEM). The images of urinary stones from each category were divided into three parts: 66%, 17% and 17% of the dataset were assigned to the training, validation and test datasets, respectively. The 15-layer CNN model was assessed on validation accuracy to optimize hyperparameters such as batch size, learning rate and number of epochs with different optimizers. The model with the optimized hyperparameters was then evaluated on the test dataset to obtain classification accuracy and error.
Results: The validation accuracy of the developed CNN approach with optimized hyperparameters was 0.9852. The trained CNN model achieved a test accuracy of 0.9959 with a classification error of 1.2%.
Conclusions: The proposed automated CNN-based approach successfully classified urinary stones into three types, namely calcium, uric acid and mixture stones, using micro-CT images.
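The 66/17/17 per-category split described above can be sketched as follows (the shuffle seed and function name are assumptions; the paper does not state how the split was randomized):

```python
import random

def split_dataset(items, train_frac=0.66, val_frac=0.17, seed=0):
    """Shuffle one category's slices and split into train/val/test;
    the test set receives the remainder after the first two splits."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # 66 17 17
```

Splitting each stone category separately, as the abstract describes, keeps the class proportions roughly equal across the three subsets.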

7.
Purpose: We introduce and evaluate an end-to-end organs-at-risk (OARs) segmentation model that can provide accurate and consistent OAR segmentation results in much less time.
Methods: We collected Computed Tomography (CT) scans of 105 patients diagnosed with locally advanced cervical cancer and treated with radiotherapy at one hospital. Seven organs were defined as OARs: the bladder, bone marrow, left femoral head, right femoral head, rectum, small intestine and spinal cord. The OAR contours previously delineated manually by each patient's radiation oncologist before radiotherapy, and confirmed by a professional committee of eight experienced oncologists, were used as the ground-truth masks. A multi-class segmentation model based on U-Net was designed to fulfil the OAR segmentation task. The Dice Similarity Coefficient (DSC) and the 95th-percentile Hausdorff Distance (HD) were used as quantitative evaluation metrics.
Results: The mean DSC values of the proposed method are 0.924, 0.854, 0.906, 0.900, 0.791, 0.833 and 0.827 for the bladder, bone marrow, left femoral head, right femoral head, rectum, small intestine and spinal cord, respectively. The corresponding mean HD values are 5.098, 1.993, 1.390, 1.435, 5.949, 5.281 and 3.269.
Conclusions: Our proposed method can help reduce the inter-observer and intra-observer variability of manual OAR delineation and lessen oncologists' workload. The experimental results demonstrate that our model outperforms the benchmark U-Net model, and the oncologists' evaluations show that the segmentation results are acceptable for use in radiation therapy planning.

8.
Objective: The flexible motion of proteins is important for various biological reactions, and predicting flexible motion from a protein's spatial structure is an important problem in the field of protein structure-function relationships. Convolutional neural networks (CNNs) have already been applied successfully in protein structure-function research. Methods: Drawing on the idea of the PointNet method from computer vision research, this study proposes a protein…

9.
Inspection of insect sticky paper traps is an essential task for an effective integrated pest management (IPM) programme. However, identifying and counting the insect pests stuck on the traps is very cumbersome, so an efficient approach is needed to alleviate the problem and to provide timely information on insect pests. In this research, an automatic method is proposed for multi-class recognition of small greenhouse insect pests in sticky paper trap images acquired by wireless imaging devices. The developed algorithm features a cascaded approach that uses a convolutional neural network (CNN) object detector and CNN image classifiers separately. The object detector was trained for detecting objects in an image, and in the first stage a CNN classifier was applied to filter out non-insect objects from the detected objects. In the second stage, the obtained insect objects were further classified into flies (Diptera: Drosophilidae), gnats (Diptera: Sciaridae), thrips (Thysanoptera: Thripidae) and whiteflies (Hemiptera: Aleyrodidae) using a multi-class CNN classifier. Advantages of this approach include the flexibility to add more classes to the multi-class insect classifier, and sample-control strategies to improve classification performance. The algorithm was developed and tested on images taken by multiple wireless imaging devices installed in several greenhouses under natural, variable lighting. Based on long-term experiments in greenhouses, the algorithm achieved average F1-scores of 0.92 and 0.90 and mean counting accuracies of 0.91 and 0.90, as tested on a separate 6-month image dataset and on an image dataset from a different greenhouse, respectively.
The proposed method resolves important problems in the automated recognition of insect pests and provides instantaneous information on insect pest occurrences in greenhouses, offering vast potential for developing more efficient IPM strategies in agriculture.

10.
Purpose: In this study we trained a deep neural network model for female pelvis organ segmentation using data from several sites without any personal data sharing. The goal was to assess its predictive power compared with a model trained in a centralized manner.
Methods: Varian Learning Portal (VLP) is a distributed machine learning (ML) infrastructure enabling privacy-preserving research across hospitals from different regions or countries, within the framework of a trusted consortium. Such a framework is relevant when there is a high level of trust among the participating sites but legal restrictions do not allow actual data sharing between them. We trained an organ segmentation model for the female pelvic region using the synchronous distributed-data framework provided by the VLP.
Results: The prediction performance of the model trained using the federated framework offered by VLP was on the same level as that of the model trained in a centralized manner, where all training data were pulled together in one centre.
Conclusions: The VLP infrastructure can be used for GPU-based training of a deep neural network for organ segmentation of the female pelvic region. This organ segmentation task is particularly difficult due to the high variation in the organs' shape and size. Training the model on data from several clinics can help, for instance, by exposing the model to a larger range of data variations. The VLP framework enables such a distributed training approach without sharing protected health information.
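The abstract does not detail VLP's aggregation rule; the generic synchronous federated-averaging idea that such frameworks build on can be sketched as follows (all names and the size-weighted rule are illustrative assumptions, not VLP's documented protocol):

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """Aggregate per-site model weights into a global model, weighting each
    site by its dataset size. Only weights cross site boundaries, never
    patient data."""
    total = sum(site_sizes)
    n_layers = len(site_weights[0])
    return [sum(w[i] * n / total for w, n in zip(site_weights, site_sizes))
            for i in range(n_layers)]

# Two sites with a toy two-array "model"; site B has 3x more data
site_a = [np.array([1.0, 2.0]), np.array([0.0])]
site_b = [np.array([3.0, 4.0]), np.array([1.0])]
global_model = federated_average([site_a, site_b], [100, 300])
print(global_model[0])  # [2.5 3.5] -> pulled toward the larger site
```

In a synchronous round, every site trains locally on the current global model, the server averages the returned weights as above, and the result seeds the next round.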

11.
《IRBM》2022,43(5):422-433
Background: Electrocardiography (ECG) records the electrical activity of the heart and provides a diagnostic means for heart-related diseases. An arrhythmia is any irregularity of the heartbeat that causes an abnormality in the heart rhythm, and its early detection is important for preventing many diseases. Manual analysis of ECG recordings is not practical for quickly identifying arrhythmias that may cause sudden death; hence, many studies have developed computer-aided diagnosis (CAD) systems to identify arrhythmias automatically.
Methods: This paper proposes a novel deep learning approach to identify arrhythmias in ECG signals. The proposed approach identifies arrhythmia classes using a Convolutional Neural Network (CNN) trained on two-dimensional (2D) ECG beat images. First, ECG signals covering five different arrhythmias are segmented into heartbeats, which are transformed into 2D grayscale images. The images are then used as input for training a new CNN architecture to classify heartbeats.
Results: The experimental results show that the classification performance of the proposed approach reaches an overall accuracy of 99.7%, sensitivity of 99.7% and specificity of 99.22% in the classification of five different ECG arrhythmias. Further, the proposed CNN architecture is compared to other popular CNN architectures, such as LeNet and ResNet-50, to evaluate the performance of the study.
Conclusions: Test results demonstrate that the deep network trained on ECG images provides outstanding classification performance for arrhythmic ECG signals and outperforms similar network architectures. Moreover, the proposed method has lower computational cost than existing methods and is more suitable for mobile-device-based diagnosis systems, as it does not involve any complex preprocessing. Hence, the proposed approach provides a simple and robust automatic cardiac arrhythmia detection scheme for classifying ECG arrhythmias.
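One simple way to turn a 1-D heartbeat into a 2-D grayscale image is to render the normalized amplitude as a trace; the abstract does not specify its exact conversion, so this sketch is only an illustrative stand-in (function name, image size and rendering scheme are assumptions):

```python
import numpy as np

def beat_to_image(beat, size: int = 128) -> np.ndarray:
    """Render a 1-D heartbeat as a size x size grayscale image:
    one white trace pixel per column, row chosen by amplitude."""
    beat = np.asarray(beat, dtype=float)
    # resample the beat to exactly `size` samples by linear interpolation
    x_old = np.linspace(0.0, 1.0, len(beat))
    x_new = np.linspace(0.0, 1.0, size)
    beat = np.interp(x_new, x_old, beat)
    # map amplitude into row indices [0, size - 1]
    lo, hi = beat.min(), beat.max()
    rows = ((beat - lo) / (hi - lo + 1e-12) * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=np.uint8)
    img[size - 1 - rows, np.arange(size)] = 255  # white trace, origin at bottom
    return img

img = beat_to_image(np.sin(np.linspace(0, 2 * np.pi, 300)))
print(img.shape)  # (128, 128)
```

However the conversion is done, the point is the same: once beats are images, a stock 2-D CNN can classify them with no hand-crafted ECG feature extraction.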

12.
Purpose: To train and evaluate a very deep dilated residual network (DD-ResNet) for fast and consistent auto-segmentation of the clinical target volume (CTV) for breast cancer (BC) radiotherapy using big data.
Methods: DD-ResNet was an end-to-end model enabling fast training and testing. We used a large dataset comprising 800 patients who underwent breast-conserving therapy for evaluation. The CTVs were validated by experienced radiation oncologists. We performed fivefold cross-validation to test the performance of the model. Segmentation accuracy was quantified by the Dice similarity coefficient (DSC) and the Hausdorff distance (HD). The performance of the proposed model was evaluated against two other deep learning models: the deep dilated convolutional neural network (DDCNN) and the deep deconvolutional neural network (DDNN).
Results: Mean DSC values of DD-ResNet (0.91 and 0.91) were higher than those of the other two networks (DDCNN: 0.85 and 0.85; DDNN: 0.88 and 0.87) for both right-sided and left-sided BC. It also had smaller mean HD values (10.5 mm and 10.7 mm) compared with DDCNN (15.1 mm and 15.6 mm) and DDNN (13.5 mm and 14.1 mm). Mean segmentation time was 4 s, 21 s and 15 s per patient with DDCNN, DDNN and DD-ResNet, respectively. DD-ResNet also compared favourably with results reported in the literature.
Conclusions: The proposed method could segment the CTV accurately with acceptable time consumption. It was invariant to patients' body size and shape, and could improve the consistency of target delineation and streamline radiotherapy workflows.

13.
14.
《IRBM》2022,43(4):290-299
Objective: To classify brain MRI images into benign and malignant tumors on a public dataset, exploiting the strengths of CNNs.
Materials and Methods: Deep learning (DL) methods have become popular for image classification due to their strong performance in recent years. Convolutional Neural Networks (CNNs), through several techniques, can extract features without hand-crafted models and ultimately achieve better classification accuracy. The proposed hybrid model combines a CNN with a support vector machine (SVM) for classification, and with threshold-based segmentation for detection.
Result: Previous studies report accuracies for different models: Rough Extreme Learning Machine (RELM), 94.233%; Deep CNN (DCNN), 95%; Deep Neural Network (DNN) with Discrete Wavelet Autoencoder (DWA), 96%; k-nearest neighbors (kNN), 96.6%; CNN, 97.5%. The overall accuracy of the proposed hybrid CNN-SVM is 98.4959%.
Conclusion: Brain cancer is among the most dangerous diseases, with a high death rate. Detecting and classifying brain tumors, which arise from abnormal cell growth and vary in shape, orientation and location, is a challenging task in medical imaging. Magnetic resonance imaging (MRI) is a typical medical imaging method for brain tumor analysis. Conventional machine learning (ML) techniques categorize brain cancer based on hand-crafted features chosen with radiologist expertise, which can lead to failures in execution and decrease the effectiveness of an algorithm. The proposed hybrid model provides a more effective and improved technique for classification.

15.
Purpose: A radiomics-feature classifier was implemented to evaluate the segmentation quality of heart structures. A robust feature set sensitive to incorrect contouring would provide an ideal quantitative index to drive auto-contouring optimization.
Methods: Twenty-five cardiac substructures were contoured as regions of interest in 36 CTs. Radiomic features were extracted from manually contoured (MC) and hierarchical-clustering automatic-contoured (AC) structures. A robust feature set was identified from correctly contoured CT datasets, and feature variation was analyzed over the MC/AC dataset. A supervised-learning approach was used to train an artificial-intelligence (AI) classifier; incorrect-contouring cases were generated from the gold-standard MC datasets using translations, expansions and contractions. ROC curves and confusion matrices were used to evaluate the AI classifier's performance.
Results: Twenty radiomics features were found to be robust across structures, showing a good-to-excellent intra-class correlation coefficient (ICC) comparing MC and AC. A significant correlation was obtained with quantitative indexes (Dice index, Hausdorff distance). The trained AI classifier detected correct contours (CC) and not-correct contours (NCC) with an accuracy of 82.6% and an AUC of 0.91. The true-positive rate (TPR) was 85.1% for CC and 81.3% for NCC. Detection of NCC at this stage of development still depended strongly on the degree of contouring imperfection.
Conclusions: A set of radiomics features, robust on "gold-standard" contours and sensitive to incorrect contouring, was identified and implemented in an AI workflow to quantify segmentation accuracy. This workflow permits automatic assessment of segmentation quality and may accelerate expansion of an existing auto-contouring atlas database, as well as improve dosimetric analyses of large treatment-plan databases.

16.
Purpose: Deep learning has shown great efficacy for semantic segmentation. However, the collection, labeling and management of medical imaging data are difficult because of ethical complications and the limited number of imaging studies available at a single facility. This study aimed to find a simple, low-cost method to increase the accuracy of deep learning semantic segmentation for radiation therapy of prostate cancer.
Methods: In total, 556 cases with non-contrast CT images for prostate cancer radiation therapy were examined using a two-dimensional U-Net. Initially, all slices were used as input data. Then, we removed slices of the cranial portions, which were beyond the margins of the bladder and rectum. Finally, the ground-truth labels for the bladder and rectum were added as channels to the input of the prostate training dataset.
Results: The highest mean Dice similarity coefficients (DSCs) for each organ in the test dataset of 56 cases were 0.85 ± 0.05, 0.94 ± 0.04 and 0.85 ± 0.07 for the prostate, bladder and rectum, respectively. Removing the cranial slices from the original images significantly increased the DSC of the rectum from 0.83 ± 0.09 to 0.85 ± 0.07 (p < 0.05). Adding bladder and rectum information to prostate training without removing the slices significantly increased the DSC of the prostate from 0.79 ± 0.05 to 0.85 ± 0.05 (p < 0.05).
Conclusions: These cost-free approaches may be useful for new applications, which may include updated models and datasets. They may be applicable to other organs at risk (OARs) and to clinical targets such as elective nodal irradiation.
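Feeding the bladder and rectum labels as extra input channels for prostate training, as described above, amounts to a simple channel stack; a minimal sketch (function name and channel order are assumptions):

```python
import numpy as np

def build_prostate_input(ct_slice, bladder_mask, rectum_mask):
    """Stack a CT slice with the bladder and rectum label masks as extra
    channels, producing a (3, H, W) array for a 2-D U-Net-style model."""
    return np.stack([ct_slice,
                     bladder_mask.astype(ct_slice.dtype),
                     rectum_mask.astype(ct_slice.dtype)], axis=0)

ct = np.random.rand(256, 256).astype(np.float32)
bladder = np.zeros((256, 256), np.float32); bladder[100:140, 80:120] = 1
rectum = np.zeros((256, 256), np.float32); rectum[150:180, 90:130] = 1
x = build_prostate_input(ct, bladder, rectum)
print(x.shape)  # (3, 256, 256)
```

The network's first convolution simply takes three input channels instead of one, so the anatomical context of neighbouring organs is available at every layer without any architectural change.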

17.
18.
19.
Purpose: To assess the impact of lung segmentation accuracy in an automatic pipeline for quantitative analysis of CT images.
Methods: Four platforms for automatic lung segmentation, based on convolutional neural networks (CNN), region-growing techniques and atlas-based algorithms, were considered. The platforms were tested using CT images of 55 COVID-19 patients with severe lung impairment. Four radiologists assessed the segmentations using a 5-point qualitative score (QS). For each CT series, a manually revised reference segmentation (RS) was obtained. Histogram-based quantitative metrics (QMs) were calculated from the CT histogram using the lung segmentations from all platforms and the RS. The Dice index (DI) and differences in QMs (ΔQMs) were calculated between the RS and the other segmentations.
Results: The highest QS and lowest ΔQM values were associated with the CNN algorithm. However, only 45% of CNN segmentations were judged to need no or only minimal corrections, and in only 17 cases (31%) did automatic segmentations reproduce the RS without manual corrections. Median DI values for the four algorithms ranged from 0.993 to 0.904. Significant differences for all QMs calculated between automatic segmentations and RS were found both when data were pooled together and when stratified according to QS, indicating a relationship between qualitative and quantitative measurements. The most unstable QM was the histogram 90th percentile, with median ΔQM values ranging from 10 HU to 158 HU between different algorithms.
Conclusions: None of the tested algorithms provided fully reliable segmentation. Segmentation accuracy impacts different quantitative metrics differently, and each metric should be evaluated individually according to the purpose of subsequent analyses.
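A histogram-based QM such as the 90th-percentile HU inside a lung mask, and its ΔQM against the reference segmentation, can be sketched as follows (synthetic data and names are illustrative; the study's actual metric definitions are not given in the abstract):

```python
import numpy as np

def hu_percentile(ct_hu: np.ndarray, mask: np.ndarray, q: float = 90) -> float:
    """Histogram-based quantitative metric: q-th percentile of the HU
    values inside a segmentation mask."""
    return float(np.percentile(ct_hu[mask.astype(bool)], q))

rng = np.random.default_rng(0)
ct = rng.normal(-700.0, 100.0, size=(64, 64))   # synthetic lung-like HU values
rs = np.ones((64, 64), dtype=bool)              # reference segmentation
auto = rs.copy(); auto[:, :8] = False           # automatic mask missing a strip
delta_qm = hu_percentile(ct, auto) - hu_percentile(ct, rs)
print(isinstance(delta_qm, float))  # True
```

Because a percentile depends on which voxels fall inside the mask, even a segmentation with a high Dice index can shift a tail statistic like the 90th percentile noticeably, which is consistent with the instability reported above.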

20.
Aims: Taxon identification is an important step in many plant ecological studies, and its efficiency and reproducibility might greatly benefit from partial automation. Image-based identification systems exist, but mostly rely on hand-crafted algorithms to extract sets of features chosen a priori to identify species of selected taxa. Consequently, such systems are restricted to those taxa and additionally require experts to provide the taxonomic knowledge needed to develop such customized systems. The aim of this study was to develop a deep learning system that learns discriminative features from leaf images, along with a classifier for plant species identification. By comparing our results with customized systems like LeafSnap, we show that features learned by a convolutional neural network (CNN) can provide a better representation of leaf images than hand-crafted features.
Methods: We developed LeafNet, a CNN-based plant identification system. For evaluation, we used the publicly available LeafSnap, Flavia and Foliage datasets.
Results: Evaluating the recognition accuracy of LeafNet on the LeafSnap, Flavia and Foliage datasets reveals better performance than hand-crafted customized systems.
Conclusions: Given the overall species diversity of plants, the goal of fully automating visual plant species identification is unlikely to be met solely by continually assembling customized, specialized, hand-crafted (and therefore expensive) identification systems. Deep-learning CNN approaches offer a self-learning, state-of-the-art alternative that can adapt to different taxa simply by being presented with new training data instead of requiring new software systems.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号