Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Plant diseases play a significant role in agricultural production, and their early detection is an essential task. Recent computational intelligence and computer vision methods have shown promise for improving disease diagnosis, and Convolutional Neural Network (CNN) models can detect plant diseases in agricultural field and plantation leaf images. MobileNetV2 is a CNN model well suited to mobile devices because of its small parameter count and model file size; however, its effectiveness needs improvement so that it captures more critical features. Xception extends InceptionV3 and extracts features effectively with fewer parameters. This research proposes an ensemble of MobileNetV2 and Xception that concatenates their extracted features to improve plant disease detection performance. On the entire PlantVillage dataset, MobileNetV2, Xception, and the ensemble model achieved 97.32%, 98.30%, and 99.10% accuracy, respectively; in particular, the accuracy of MobileNetV2 and Xception improved by 1.8% and 0.8%. In addition, the ensemble reaches 99.52% on all metric scores on a user-defined dataset. The model performs better than seven state-of-the-art CNN models, both individually and in ensemble designs, and it can be integrated with mobile devices because it has fewer parameters and a smaller model file than an ensemble of MobileNetV2 with InceptionResnetV2, VGG19, and VGG16.
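Below is a minimal sketch (not the authors' implementation) of the described feature-level ensemble in Keras: global-pooled features from ImageNet-pretrained MobileNetV2 and Xception are concatenated and fed to a softmax head. The input size, class count, and optimizer are placeholder assumptions.

import tensorflow as tf

NUM_CLASSES = 38   # assumption: full PlantVillage has 38 classes
inputs = tf.keras.Input(shape=(224, 224, 3))

mobilenet = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
xception = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")

# Each backbone expects its own preprocessing of the shared input.
m_feat = mobilenet(tf.keras.applications.mobilenet_v2.preprocess_input(inputs))
x_feat = xception(tf.keras.applications.xception.preprocess_input(inputs))

# Feature-level fusion: 1280-d MobileNetV2 vector + 2048-d Xception vector.
merged = tf.keras.layers.Concatenate()([m_feat, x_feat])
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(merged)

ensemble = tf.keras.Model(inputs, outputs)
ensemble.compile(optimizer="adam", loss="categorical_crossentropy",
                 metrics=["accuracy"])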

2.
《IRBM》2022,43(1):22-31
Epilepsy is a neurological disease from which a large number of younger and older people suffer all over the world. Patient status is primarily examined using Electroencephalogram (EEG) signals. The most important prerequisite for successful surgery is locating the epileptic seizure focus in the brain, so it is very useful to detect the seizure area automatically before surgery. In this research, a novel method based on the continuous wavelet transform (CWT) and two-dimensional (2D) convolutional neural networks (CNNs) is proposed to classify focal and non-focal epileptic EEG signals. The AlexNet, InceptionV3, Inception-ResNetV2, ResNet50, and VGG16 pre-trained models were used to automatically classify 2D scalogram images into focal and non-focal classes. The performances of the five pre-trained models were compared and the detection results on the 2D scalograms were examined. Among the five models, InceptionV3 yielded the best classification accuracy of 92.27%. These results suggest that pre-trained models combined with 2D scalogram images of focal and non-focal EEG signals can help neurologists make rapid and robust predictions of the epileptic seizure area before surgery.
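As a rough illustration of the scalogram step, the sketch below uses PyWavelets to compute a continuous wavelet transform of one EEG segment and saves it as a 2D image that a pretrained CNN could consume. The Morlet wavelet, scale range, and sampling rate are assumptions, not the paper's settings.

import numpy as np
import pywt
import matplotlib.pyplot as plt

fs = 512                                  # assumed sampling frequency in Hz
segment = np.random.randn(10 * fs)        # placeholder for a real EEG segment

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(segment, scales, "morl", sampling_period=1.0 / fs)

# |CWT coefficients| rendered as a scalogram image for the 2D CNN input.
plt.imshow(np.abs(coeffs), aspect="auto", cmap="jet",
           extent=[0, len(segment) / fs, freqs[-1], freqs[0]])
plt.axis("off")
plt.savefig("scalogram.png", bbox_inches="tight", pad_inches=0)
plt.close()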

3.
Chilli leaf disease has a destructive effect on chilli crop yield and can significantly reduce both the quantity and quality of the crop. Early detection, correct identification, and accurate diagnosis of the disease help increase the cultivator's profit. However, a comprehensive survey revealed that no previous studies have compared the classification performance of machine learning and deep learning on the chilli leaf disease classification problem. In this study, five main leaf diseases, i.e. leaf down curl, Geminivirus, Cercospora leaf spot, yellow leaf disease, and leaf up curl, were identified, and images were captured with a digital camera and labelled. These diseases were classified with 12 different pretrained deep learning networks (AlexNet, DarkNet53, DenseNet201, EfficientNetb0, InceptionV3, MobileNetV2, NasNetLarge, ResNet101, ShuffleNet, SqueezeNet, VGG19, and XceptionNet) using transfer learning on the chilli leaf data with and without augmentation. Performance metrics such as accuracy, recall, precision, F1-score, specificity, and misclassification were calculated for each network. On our self-built chilli leaf dataset, VGG19 had the best accuracy (83.54%) without augmentation, and DarkNet53 had the best result (98.82%) with augmentation among all pretrained networks. The result was further improved by designing a squeeze-and-excitation-based convolutional neural network (SECNN) model, which was tested on the chilli leaf dataset with different input sizes and mini-batch sizes and produced the best accuracy of 98.63% and 99.12% without and with augmentation, respectively. The SECNN model was also tested on datasets from the PlantVillage data, including apple, cherry, corn, grape, peach, pepper, potato, strawberry, and tomato leaves, separately and together with the chilli dataset, and achieved an accuracy of 99.28% in classifying 43 different classes of plant leaf data.
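For reference, a minimal squeeze-and-excitation block of the kind an SECNN builds on is sketched below in Keras; the reduction ratio, layer sizes, and placement are assumptions rather than the authors' exact design.

import tensorflow as tf
from tensorflow.keras import layers

def se_block(feature_map, ratio=16):
    # Squeeze: global average pooling summarizes each channel.
    channels = feature_map.shape[-1]
    squeeze = layers.GlobalAveragePooling2D()(feature_map)
    # Excite: two dense layers produce per-channel weights in [0, 1].
    excite = layers.Dense(channels // ratio, activation="relu")(squeeze)
    excite = layers.Dense(channels, activation="sigmoid")(excite)
    excite = layers.Reshape((1, 1, channels))(excite)
    # Recalibrate: rescale the original feature map channel-wise.
    return layers.Multiply()([feature_map, excite])

inputs = tf.keras.Input(shape=(224, 224, 3))
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
x = se_block(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(5, activation="softmax")(x)   # 5 chilli leaf diseases
model = tf.keras.Model(inputs, outputs)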

4.
The accurate identification of plant species is crucial for the conservation of biodiversity. However, traditional identification methods are often complicated, time-consuming, and prone to errors, so automated identification methods are needed to improve the efficiency and accuracy of plant species identification. In this study, a step-by-step method was used to identify and classify plant species. The dataset was first loaded, then preprocessed to remove noisy data, and data augmentation was carried out to improve model accuracy. A deep convolutional neural network (CNN) and Visual Geometry Group-16 (VGG-16) were then employed to extract only the relevant features, owing to their efficient learning capabilities. Feature-level fusion was accomplished through dimensionality reduction, and enhanced Spearman's principal component analysis (ESPCA) was employed to address overfitting, eliminate redundant data, and reduce storage space and training time. For classification, the hyperparameter-tuned batch-updated stochastic gradient descent (HP-BSGD) method was used. The Flavia and Swedish datasets were used in the experiments. The proposed hybrid classifier yielded excellent results owing to its fast convergence, computational efficiency, and flexibility. To validate the experimental results, performance and comparative analyses were carried out using standard metrics; they demonstrate that the proposed method classifies plant species more efficiently and accurately than existing methods. With combined features, the hybrid method achieved approximately 97% and 98.85% accuracy on the Flavia and Swedish datasets, respectively. Performance was further improved by considering leaves at different stages, such as seedling, tiny, mature, and dried leaves.

5.
Aflatoxins are among the most dangerous mycotoxins; they are produced by Aspergillus species such as Aspergillus flavus and Aspergillus parasiticus and can cause various health problems in humans, including liver cancer. Figs and many other agricultural products are affected by aflatoxins, and it is crucial to detect contamination before consumption since aflatoxin cannot be removed from contaminated foods. Chromatographic methods are considered the gold standard for aflatoxin detection, but they are expensive, time-consuming, and destructive. Therefore, various studies have investigated non-invasive aflatoxin detection using optical spectroscopic methods. In particular, aflatoxin-contaminated figs are sorted manually by employees in production facilities using the Bright Greenish Yellow Fluorescence (BGYF) method under ultraviolet (UV) light. However, accurate and safe manual sorting depends on the expertise of the employees, and prolonged exposure to UV radiation may cause skin cancer and eye disorders. This study presents a deep transfer learning-based approach for the non-invasive detection and classification of aflatoxin-contaminated dried figs using images captured under UV light. Pre-trained transfer learning models, such as DenseNet, ResNet, VGG, and InceptionNet, are applied to the dataset, but without adaptation their accuracy does not surpass that of other methods that detect aflatoxin with the BGYF method; therefore, the models are fine-tuned. As a result, a training accuracy of 98.57% and a validation accuracy of 97.50% are obtained with the DenseNet169 model. The experimental results show that the proposed method achieves the highest accuracy among the compared methods and that deep CNNs can be used to detect aflatoxin-contaminated figs automatically, rapidly, and effectively.
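A minimal sketch of the two-stage fine-tuning idea is given below, assuming a Keras setup with a DenseNet169 backbone and a binary contaminated/clean label; the image size, learning rates, and dropout are placeholder assumptions, not the reported configuration.

import tensorflow as tf

base = tf.keras.applications.DenseNet169(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                      # stage 1: train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Stage 2: unfreeze the backbone and fine-tune with a much smaller learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)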

6.
Traditional methods of analyzing stomatal traits rely mostly on manual observation and measurement, which is time-consuming, labor-intensive, and inefficient. Some methods have been proposed for the automatic recognition and counting of stomata, but most of them cannot simultaneously measure stomatal parameters automatically; some non-deep-learning methods can automatically measure stomatal parameters but cannot automatically recognize and detect stomata. In this paper, a deep learning-based method is proposed that simultaneously identifies, counts, and measures the stomata of maize (Zea mays L.) leaves. An improved YOLO (You Only Look Once) deep learning model is used to identify stomata automatically, and an entropy rate superpixel algorithm is used for accurate measurement of stomatal parameters. Based on the characteristics of the stomatal image dataset, the network structure of YOLOv5 was modified, which greatly reduced training time without affecting recognition performance. The predictor in the YOLO model was optimized to reduce the false detection rate, and the 16-fold and 32-fold down-sampling layers were simplified to match the characteristics of stomatal objects, which improved recognition efficiency. Experimental results show that the recognition precision of the improved YOLO model reaches 95.3% on the maize leaf stomatal dataset, and the average accuracy of parameter measurement reaches 90%. The proposed method fully automates the recognition, counting, and measurement of plant stomata, which can help agricultural scientists and botanists conduct large-scale research on stomatal morphology, structure, and physiology, as well as studies combined with genetic or molecular-level analysis.

7.
《IRBM》2022,43(4):290-299
Objective: In this paper, brain MRI images are classified into benign and malignant tumors on a public dataset, exploiting the strengths of CNNs.
Materials and Methods: Deep learning (DL) methods have become more popular for image classification in recent years because of their good performance. A Convolutional Neural Network (CNN) can extract features through several methods without handcrafted models and ultimately shows better classification accuracy. The proposed hybrid model combines a CNN and a support vector machine (SVM) for classification, together with threshold-based segmentation for detection.
Result: Previous studies report the following accuracies for different models: Rough Extreme Learning Machine (RELM) 94.233%, Deep CNN (DCNN) 95%, Deep Neural Network (DNN) with Discrete Wavelet Autoencoder (DWA) 96%, k-nearest neighbors (kNN) 96.6%, and CNN 97.5%. The overall accuracy of the proposed hybrid CNN-SVM is 98.4959%.
Conclusion: Brain cancer is one of the most dangerous diseases with a high death rate; detecting and classifying brain tumors, which arise from abnormal cell growth and vary in shape, orientation, and location, is a challenging task in medical imaging. Magnetic resonance imaging (MRI) is a typical medical imaging method for brain tumor analysis. Conventional machine learning (ML) techniques categorize brain cancer based on handcrafted features chosen by the radiologist, which can lead to execution failures and reduce the effectiveness of the algorithm. Overall, the proposed hybrid model provides a more effective and improved classification technique.
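The CNN-plus-SVM pattern can be sketched as follows: deep features extracted by a pretrained CNN are classified by an SVM. Here VGG16 stands in for the paper's CNN, and the arrays are random placeholders rather than real MRI data.

import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

extractor = tf.keras.applications.VGG16(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")

images = np.random.rand(200, 224, 224, 3).astype("float32") * 255.0  # placeholder slices
labels = np.random.randint(0, 2, size=200)                            # 0 = benign, 1 = malignant

features = extractor.predict(
    tf.keras.applications.vgg16.preprocess_input(images), verbose=0)   # shape (200, 512)

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)
svm = SVC(kernel="rbf", C=1.0)
svm.fit(X_train, y_train)
print("held-out accuracy:", svm.score(X_test, y_test))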

8.
Using deep learning to estimate strawberry leaf scorch severity often yields unsatisfactory results when a leaf image contains complex background information or multiple classes of diseased leaves and the number of annotated strawberry leaf images is limited. To solve these issues, we propose a two-stage method combining object detection and few-shot learning to estimate strawberry leaf scorch severity. In the first stage, Faster R-CNN is used to locate strawberry leaf patches; each single leaf patch is cropped from the original images to compose a new strawberry leaf patch dataset. In the second stage, a Siamese network trained on the new leaf patch dataset identifies the leaf patches and then estimates the severity of the original leaf scorch images following the multi-instance learning concept. Experimental results from the first stage show that Faster R-CNN achieves a better mAP (94.56%) in strawberry leaf patch detection than other object detection networks. Results from the second stage reveal that the Siamese network achieves an accuracy of 96.67% in identifying diseased leaf patches, higher than the Prototype network. Comprehensive experiments indicate that, compared with other state-of-the-art models, the proposed two-stage method comprising Faster R-CNN (VGG16) and Siamese networks achieves the highest estimation accuracy of 96.67%. Moreover, the trained two-stage model achieves an estimation accuracy of 88.83% on a new dataset of 60 strawberry leaf images taken in the field, indicating excellent generalization ability.

9.
Plant diseases cause significant food loss and hence economic loss around the globe, so automatic plant disease identification is a primary task for applying proper treatments to control the spread of disease. The large variety of plant species and their dissimilar phytopathological symptoms call for supervised machine learning techniques for efficient and reliable disease identification and classification. With the development of deep learning strategies, the convolutional neural network (CNN) has paved the way for classifying multiple plant diseases by extracting rich features. However, several characteristics of input images captured in real-world environments, viz. complex or indistinguishable backgrounds, the presence of multiple leaves alongside the diseased leaf, and small lesion areas, severely affect the robustness and accuracy of CNN modules. Available strategies usually apply standard CNN architectures to images captured in a laboratory environment, and very few studies have considered practical in-field leaf images; those that have are limited to a very small number of plant species. Therefore, a robust CNN module is needed that can successfully recognize and classify dissimilar leaf health conditions of non-identical plants from in-field RGB images. To achieve this goal, an attention dense learning (ADL) mechanism is proposed in this article by merging mixed sigmoid attention learning with the basic dense learning process of a deep CNN. The dense learning process derives new features at a higher layer by considering all lower-layer features, which provides fast and efficient training. The attention learning process then amplifies the learning ability of the dense block by discriminating the meaningful lesion portions of the images from the background areas. Rather than adding an extra layer for attention learning, the proposed ADL block uses the output features of higher-layer dense learning as an attention mask over the lower layers. For effective and fast classification, five ADL blocks are stacked to build a new CNN architecture named DADCNN-5 that offers classification robustness and higher testing accuracy. The proposed DADCNN-5 module is first applied to the publicly available extended PlantVillage dataset to classify 38 different health conditions of 14 plant species from 54,305 images; a classification accuracy of 99.93% proves that the module can be used for successful leaf disease identification. The efficacy of DADCNN-5 is further checked through stringent experiments on a new real-world plant leaf database created by the authors, containing 10,851 real-world RGB leaf images of 17 plant species covering 44 distinct health conditions. Experimental outcomes reveal that DADCNN-5 outperforms existing machine learning and standard CNN architectures, achieving 97.33% accuracy, with sensitivity, specificity, and false positive rate of 96.57%, 99.94%, and 0.063%, respectively. The module takes approximately 3235 minutes to train and achieves 99.86% training accuracy. Visualization with class activation mapping (CAM) shows that DADCNN-5 learns distinguishable features from semantically important regions (i.e. lesion regions) of the leaves. Further, the robustness of DADCNN-5 is established through experiments with augmented and noise-contaminated images from the practical database.
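The masking idea can be illustrated with a small Keras sketch: the output of a dense block is projected and squashed into a sigmoid mask that reweights the lower-layer features, without a separate attention layer. The growth rate, block depth, and masking rule here are assumptions, not the published DADCNN-5 definition.

import tensorflow as tf
from tensorflow.keras import layers

def dense_block(x, num_layers=3, growth=16):
    # Dense connectivity: every new layer sees all previously computed features.
    for _ in range(num_layers):
        y = layers.Conv2D(growth, 3, padding="same", activation="relu")(x)
        x = layers.Concatenate()([x, y])
    return x

def attention_dense_block(lower):
    higher = dense_block(lower)
    # Higher-layer output becomes a [0, 1] mask over the lower-layer features.
    mask = layers.Conv2D(lower.shape[-1], 1, activation="sigmoid")(higher)
    attended = layers.Multiply()([lower, mask])     # emphasize lesion-like regions
    return layers.Concatenate()([higher, attended])

inputs = tf.keras.Input(shape=(128, 128, 3))
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = attention_dense_block(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(38, activation="softmax")(x)  # e.g. 38 PlantVillage classes
model = tf.keras.Model(inputs, outputs)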

10.
Purpose: Accurate detection and treatment of Coronary Artery Disease is mainly based on invasive Coronary Angiography, which could be avoided if a robust, non-invasive detection methodology emerged. Despite the progress of computational systems, this remains a challenging issue. The present research investigates Machine Learning and Deep Learning methods for competing with the medical experts' diagnostic yield. Although highly accurate detection of Coronary Artery Disease, even by experts, is presently implausible, developing Artificial Intelligence models to compete with the human eye and expertise is the first step towards a state-of-the-art Computer-Aided Diagnostic system.
Methods: A set of 566 patient samples is analysed. The dataset contains Polar Maps derived from scintigraphic Myocardial Perfusion Imaging studies, clinical data, and Coronary Angiography results, with the latter serving as the reference standard. For classification of the medical images, the InceptionV3 Convolutional Neural Network is employed, while Neural Networks and a Random Forest classifier are proposed for the categorical and continuous features.
Results: The research suggests that an optimal strategy for competing with the medical expert's accuracy involves a hybrid multi-input network composed of InceptionV3 and a Random Forest. This method matches the expert's accuracy, which is 79.15% on the particular dataset.
Conclusion: Image classification using deep learning methods can cooperate with clinical data classification methods to enhance the robustness of the predictive model, aiming to compete with the medical expert's ability to identify Coronary Artery Disease subjects in a large-scale patient dataset.
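A rough sketch of the hybrid multi-input idea follows: InceptionV3 features from the polar maps are concatenated with clinical variables and passed to a Random Forest. The array shapes, feature counts, and fusion rule are placeholder assumptions, not the study's implementation.

import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

cnn = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg")

polar_maps = np.random.rand(100, 299, 299, 3).astype("float32") * 255.0  # placeholder images
clinical = np.random.rand(100, 12)                                        # placeholder clinical features
labels = np.random.randint(0, 2, size=100)                                # CAD vs non-CAD

image_features = cnn.predict(
    tf.keras.applications.inception_v3.preprocess_input(polar_maps), verbose=0)

fused = np.concatenate([image_features, clinical], axis=1)
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(fused, labels)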

11.
A leukemoid reaction, like leukemia, shows a noticeably increased count of white blood cells (WBCs), but it is caused by severe inflammation or infection in other body regions. For automatic diagnosis in classifying leukemia and leukemoid reactions, the ALL IDB2 (Acute Lymphoblastic Leukemia Image Data Base) dataset, which comprises 110 training images of blast cells and healthy cells, has been used. This paper aims at an automatic process to distinguish leukemia from leukemoid reactions in blood smear images using machine learning. First, WBCs are automatically detected and counted to identify leukocytosis, and then WBC blasts are automatically detected to support the classification of leukemia and leukemoid reactions. Leukocytosis is commonly observed in both leukemia and leukemoid reactions; hence, physicians may misdiagnose patients with leukemoid reactions as having malignant leukemia. The BCCD (blood cell count detection) dataset, which has 364 blood smear images of which 349 contain a single WBC type, has been used. A watershed-based image segmentation algorithm in the Hue-Saturation-Value color space has been applied. A deep learning technique based on the VGG16 (Visual Geometry Group) CNN (Convolutional Neural Network) architecture is incorporated for classifying and counting WBC types from the segmented images, and the segmented images obtained in the first stage were tested with the VGG16-based CNN to identify WBC blasts.

12.
The reproductive performance of sows is an important indicator for evaluating the economic efficiency and production level of pig farming. In this paper, we design and propose a lightweight sow oestrus detection method based on acoustic data and deep convolutional neural network (CNN) algorithms by collecting and analysing short-frequency and long-frequency sow oestrus sounds. We use visual log-mel spectrograms, which reflect three-dimensional information, as inputs to the network model to improve the overall recognition accuracy. The improved lightweight MobileNetV3_esnet model is used to identify oestrus and non-oestrus sounds and is compared with existing algorithms. The model outperforms the other algorithms, with 97.12% precision, 97.34% recall, 97.59% F1-score, and 97.52% accuracy, with a model size of 5.94 MB. Compared with traditional oestrus monitoring methods, the proposed method can more accurately capture the vocal characteristics exhibited by sows in latent oestrus, providing an efficient and accurate approach for practical oestrus monitoring and early warning systems on pig farms.
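For illustration, a log-mel spectrogram input of the kind described can be produced with librosa as sketched below; the sampling rate, FFT window, and mel settings are assumptions, and "sow_call.wav" is a placeholder file name.

import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

y, sr = librosa.load("sow_call.wav", sr=16000)           # placeholder recording
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                     hop_length=256, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)            # log-mel in dB

librosa.display.specshow(log_mel, sr=sr, hop_length=256,
                         x_axis="time", y_axis="mel")
plt.axis("off")
plt.savefig("log_mel.png", bbox_inches="tight", pad_inches=0)
plt.close()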

13.
N. Bhaskar  M. Suchetha 《IRBM》2021,42(4):268-276
Objectives: In this paper, we propose a computationally efficient Correlational Neural Network (CorrNN) learning model and an automated diagnosis system for detecting Chronic Kidney Disease (CKD). A Support Vector Machine (SVM) classifier is integrated with the CorrNN model to improve prediction accuracy.
Material and methods: The proposed hybrid model is trained and tested with a novel sensing module. We monitor the concentration of urea in saliva samples to detect the disease. Experiments are carried out to test the model with real-time samples and to compare its performance with a conventional Convolutional Neural Network (CNN) and other traditional data classification methods.
Results: The proposed method outperforms the conventional methods in terms of computational speed and prediction accuracy. The combined CorrNN-SVM network achieved a prediction accuracy of 98.67%. The experimental evaluations show a reduction in overall computation time of about 9.85% compared to the conventional CNN algorithm.
Conclusion: The use of the SVM classifier has improved the network's ability to make more accurate predictions. The proposed framework substantially advances the current methodology and provides more precise results than other data classification methods.

14.
As computer vision (CV) is a rapidly developing research direction, related algorithms such as image classification and object detection have achieved substantial progress. Improving the accuracy and efficiency of algorithms for the fine-grained identification of plant diseases and birds in agriculture is essential for dynamic monitoring of agricultural environments. In this study, based on computer vision detection and classification algorithms and combined with the architecture and ideas of CNN models, the mainstream Transformer model was optimized and the CA-Transformer (Transformer Combined with Channel Attention) model was proposed to improve the ability to identify and classify critical areas. The main contributions are as follows: (1) the C-Attention mechanism is proposed to strengthen feature extraction within each patch and the communication between features, so that the entire network can attend fully while reducing computational overhead; (2) a weight-sharing method is proposed to transfer parameters between different layers and improve the reusability of model data, and a knowledge distillation step is added to reduce problems such as excessive parameters and overfitting; (3) Token Labeling is proposed to generate score labels according to the position of each token, and the total loss function of this study is defined according to the CA-Transformer model structure. The performance of the proposed CA-Transformer is compared with current mainstream models on datasets of different scales, and ablation experiments are performed. The results show that the accuracy and mIoU of the CA-Transformer reach 82.89% and 53.17, respectively, and that the model has good transfer learning ability, indicating good performance in fine-grained visual categorization tasks. In the context of increasingly diverse ecological information, this study can serve as a reference and inspiration for practical applications of ecological informatics.

15.
Wheat rusts, caused by pathogenic fungi, are responsible for significant losses in wheat production. Leaf rust can cause around 45–50% crop loss, whereas stem and stripe rust can cause up to 100% crop loss under suitable weather conditions. Early treatment is crucial for reducing yield loss and improving the effectiveness of phytosanitary measures. In this study, an EfficientNet-based model is proposed for automatically detecting major wheat rusts. We prepared a dataset, referred to as WheatRust21, consisting of 6556 images of healthy and diseased leaves from natural field conditions. We first evaluated several classical CNN-based models such as VGG19, ResNet152, DenseNet169, InceptionNetV3, and MobileNetV2 for wheat rust disease identification and obtained accuracies ranging from 91.2% to 97.8%. To further improve accuracy, we experimented with eight variants of the EfficientNet architecture and found that our fine-tuned EfficientNet B4 model achieved a testing accuracy of 99.35%, a result that, to the best of our knowledge, has not been reported in the literature so far. This model can easily be integrated into mobile applications for image-based wheat disease identification by stakeholders in field conditions.

16.
R.R. Janghel  Y.K. Rathore 《IRBM》2021,42(4):258-267
Objectives: Alzheimer's Disease (AD) is the most common type of dementia and, in all leading countries, one of the primary causes of death among senior citizens. Currently, it is diagnosed by calculating the MMSE score and by manual study of MRI scans. Different machine learning methods have also been utilized for automatic diagnosis, but existing methods have limitations in terms of accuracy. The main objective of this paper is therefore to add a preprocessing step before the CNN model to increase classification accuracy.
Materials and method: We present a deep learning-based approach for detecting Alzheimer's Disease from the ADNI database of Alzheimer's disease patients; the dataset contains fMRI and PET images of Alzheimer's patients along with images of normal subjects. We apply 3D-to-2D conversion and image resizing before applying the VGG-16 convolutional neural network architecture for feature extraction. Finally, SVM, Linear Discriminant, K-means clustering, and Decision tree classifiers are used for classification.
Results: The experimental results show that an average accuracy of 99.95% is achieved for the classification of the fMRI dataset, while an average accuracy of 73.46% is achieved with the PET dataset. Comparing results on the basis of accuracy, specificity, sensitivity, and other parameters, we found that these results are better than those of existing methods.
Conclusions: This paper suggests a unique way to increase the performance of CNN models by applying preprocessing to the image dataset before sending it to the CNN architecture for feature extraction. We applied this method to the ADNI database, and comparison of the accuracies with similar approaches shows better results.

17.
Plant leaf disease detection is one of the key problems of smart agriculture and has a significant impact on the global economy. To mitigate this, intelligent agricultural solutions are evolving that aid farmers in taking preventive measures to improve crop production. With the advancement of deep learning, many convolutional neural network models have been applied to the identification of plant leaf diseases; however, these models are limited to the detection of specific crops only. This paper therefore presents a new deeper lightweight convolutional neural network architecture (DLMC-Net) to perform plant leaf disease detection across multiple crops for real-time agricultural applications. In the proposed model, a sequence of collective blocks is introduced along with a passage layer to extract deep features. This benefits feature propagation and feature reuse, which helps handle the vanishing gradient problem. Moreover, point-wise and separable convolution blocks are employed to reduce the number of trainable parameters. The efficacy of the proposed DLMC-Net model is validated on four publicly available datasets, namely citrus, cucumber, grapes, and tomato. Experimental results of the proposed model are compared against seven state-of-the-art models on eight parameters, namely accuracy, error, precision, recall, sensitivity, specificity, F1-score, and Matthews correlation coefficient. The experiments demonstrate that the proposed model surpasses all the considered models, even under complex background conditions, with accuracies of 93.56%, 92.34%, 99.50%, and 96.56% on citrus, cucumber, grapes, and tomato, respectively. Moreover, DLMC-Net requires only 6.4 million trainable parameters, the second lowest among the compared models. It can therefore be asserted that the proposed model is a viable alternative for plant leaf disease detection across multiple crops.
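A minimal sketch of a point-wise plus depthwise-separable convolution block of the kind used to cut trainable parameters is shown below in Keras; the filter counts, block order, and input size are assumptions, not the published DLMC-Net layers.

import tensorflow as tf
from tensorflow.keras import layers

def lightweight_block(x, filters):
    # 1x1 point-wise convolution mixes channels with very few parameters.
    x = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    # Depthwise-separable convolution: per-channel spatial filtering followed
    # by a 1x1 projection, much cheaper than a standard 3x3 Conv2D.
    x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.MaxPooling2D()(x)

inputs = tf.keras.Input(shape=(256, 256, 3))
x = lightweight_block(inputs, 32)
x = lightweight_block(x, 64)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation="softmax")(x)   # placeholder class count
model = tf.keras.Model(inputs, outputs)
model.summary()                                       # shows the reduced parameter count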

18.
《IRBM》2022,43(5):422-433
Background: Electrocardiography (ECG) records the electrical activity of the heart and provides a diagnostic means for heart-related diseases. An arrhythmia is any irregularity of the heartbeat that causes an abnormality in the heart rhythm, and its early detection is important for preventing many diseases. Manual analysis of ECG recordings is not practical for quickly identifying arrhythmias that may cause sudden death; hence, many studies have presented computer-aided diagnosis (CAD) systems to identify arrhythmias automatically.
Methods: This paper proposes a novel deep learning approach to identify arrhythmias in ECG signals. The proposed approach identifies arrhythmia classes using a Convolutional Neural Network (CNN) trained on two-dimensional (2D) ECG beat images. First, ECG signals covering five different arrhythmias are segmented into heartbeats, which are transformed into 2D grayscale images. The images are then used as input for training a new CNN architecture to classify heartbeats.
Results: The experimental results show that the classification performance of the proposed approach reaches an overall accuracy of 99.7%, a sensitivity of 99.7%, and a specificity of 99.22% in the classification of five different ECG arrhythmias. Further, the proposed CNN architecture is compared with other popular CNN architectures such as LeNet and ResNet-50 to evaluate its performance.
Conclusions: The test results demonstrate that a deep network trained on ECG images provides outstanding classification performance on arrhythmic ECG signals and outperforms similar network architectures. Moreover, the proposed method has lower computational costs than existing methods and is more suitable for mobile device-based diagnosis systems, as it does not involve any complex preprocessing. Hence, the proposed approach provides a simple and robust automatic cardiac arrhythmia detection scheme for the classification of ECG arrhythmias.
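The beat-to-image step can be sketched as follows: each segmented beat is plotted on a small square canvas, saved, and converted to a single-channel grayscale image for the CNN. The window length, image size, and plotting style are assumptions, not the paper's exact procedure.

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

def beat_to_image(beat, out_path, size=128):
    # Render one heartbeat on a blank square canvas.
    fig = plt.figure(figsize=(1, 1), dpi=size)
    plt.plot(beat, color="black", linewidth=0.6)
    plt.axis("off")
    fig.savefig(out_path, dpi=size)
    plt.close(fig)
    # Convert the saved plot to a single-channel grayscale image.
    Image.open(out_path).convert("L").resize((size, size)).save(out_path)

# Placeholder: a 360-sample window centred on an R-peak.
beat = np.sin(np.linspace(0, 2 * np.pi, 360)) + 0.05 * np.random.randn(360)
beat_to_image(beat, "beat_0000.png")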

19.
The two most commonly diagnosed neurodegenerative diseases are Alzheimer's and Parkinson's disease. This paper presents a fully automated early screening system, based on the capsule network, for classifying these two neurodegenerative diseases. We hypothesized that the proposed Neurodegenerative Diseases-Caps system, built on the capsule network architecture, accurately performs the three-class classification into the Alzheimer class, the Parkinson class, or Healthy control, and delivers better results than other deep transfer learning models. The motivation for choosing the capsule network architecture is its resilience to the affine transformations and the rotational and translational variations that commonly occur in medical image datasets. Capsule networks also overcome the pooling-layer deficiencies that affect conventional CNNs and prevent them from delivering accurate results, especially in image classification tasks. Various machine learning-based computer-aided systems are already available for classifying brain tumors and other types of cancer, whereas research on classifying neurodegenerative diseases is very limited even though the number of people suffering from these diseases is increasing, especially in developing countries such as India and China. There is therefore a need to develop an early screening system for accurate multiclass classification into Alzheimer's, Parkinson's, and Normal (Healthy control) cases; no such early screening system yet exists that can accurately classify these two neurodegenerative diseases. The Alzheimer Disease and Parkinson Progression (ADPP) dataset, developed with the aid of the Parkinson's Progression Markers Initiative (PPMI) and Alzheimer's Disease Neuroimaging Initiative (ADNI) databases, is used to train the proposed Neurodegenerative Diseases-Caps system. For a fair comparison, other popular deep transfer learning models, namely VGG19, VGG16, ResNet50, and InceptionV3, were implemented and trained on the same ADPP dataset. The proposed Neurodegenerative Diseases-Caps system delivers accuracies of 97.81%, 98%, and 96.81% for the Alzheimer, Parkinson, and Healthy control (Normal) cases with a 70/30 training/validation split and performs considerably better than the other popular deep transfer learning models.
Supplementary Information: The online version contains supplementary material available at 10.1007/s11571-022-09787-1.

20.
Purpose: Convolutional neural networks (CNNs) offer a promising approach to automated segmentation, but labeling contours on a large scale is laborious. Here we propose a method to improve segmentation continually with less labeling effort.
Methods: The cohort included 600 patients with nasopharyngeal carcinoma. The proposed method comprised four steps. First, an initial CNN model was trained from scratch to segment the clinical target volume. Second, a binary classifier was trained using a secondary CNN to identify samples for which the initial model gave a Dice similarity coefficient (DSC) < 0.85. Third, the classifier was used to select such samples from newly arriving data. Fourth, the final model was fine-tuned from the initial model using only the selected samples.
Results: The classifier detects poor segmentations by the model with an accuracy of 92%. The proposed segmentation method improved the DSC from 0.82 to 0.86 while reducing the labeling effort by 45%.
Conclusions: The proposed method reduces the amount of labeled training data and improves segmentation by continually acquiring, fine-tuning, and transferring knowledge over long time spans.
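The selection step can be sketched as a simple loop: a secondary quality classifier flags cases where the current model is expected to give DSC < 0.85, and only those cases are sent for expert contouring and used for fine-tuning. All objects below are placeholders standing in for the paper's CNNs and data pipeline, not its implementation.

def continual_update(seg_model, quality_classifier, new_cases, finetune_fn):
    """One round of continual learning: select hard cases, then fine-tune on them."""
    selected = []
    for case in new_cases:
        prediction = seg_model.predict(case.image)
        # quality_classifier returns True when it expects DSC < 0.85 for this case.
        if quality_classifier.predict(case.image, prediction):
            selected.append(case)            # hard case: worth the labeling effort
    if selected:
        # Only the selected cases are contoured by experts and used to fine-tune.
        seg_model = finetune_fn(seg_model, selected)
    return seg_model, selected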
