Similar Documents
20 similar documents found (search time: 31 ms)
1.
《IRBM》2022,43(4):290-299
Objective: In this paper, brain MRI images are classified as benign or malignant tumors on a public dataset, exploiting the strengths of CNNs.
Materials and Methods: Deep learning (DL) methods have become popular for image classification owing to their strong performance in recent years. A Convolutional Neural Network (CNN) can extract features without handcrafted models and ultimately achieve better classification accuracy. The proposed hybrid model combines a CNN with a support vector machine (SVM) for classification, and uses threshold-based segmentation for detection.
Result: Previous studies report the following accuracies for different models: Rough Extreme Learning Machine (RELM), 94.233%; Deep CNN (DCNN), 95%; Deep Neural Network (DNN) with Discrete Wavelet Autoencoder (DWA), 96%; k-nearest neighbors (kNN), 96.6%; CNN, 97.5%. The overall accuracy of the proposed hybrid CNN-SVM is 98.4959%.
Conclusion: Brain cancer is among the most dangerous diseases, with a high death rate; detecting and classifying brain tumors, which arise from abnormal cell growth and vary in shape, orientation, and location, is a challenging task in medical imaging. Magnetic resonance imaging (MRI) is a standard imaging modality for brain tumor analysis. Conventional machine learning (ML) techniques categorize brain cancer based on handcrafted features chosen with radiologist expertise, which can lead to execution failures and reduce the effectiveness of an algorithm. The results indicate that the proposed hybrid model offers a more effective and improved classification technique.
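The hybrid idea above, CNN as feature extractor plus SVM as classifier, can be sketched minimally. The CNN part is out of scope here, so `feats` below stands in for CNN-extracted descriptors (hypothetical 2-D toy features, not the paper's data); the SVM is trained by sub-gradient descent on the hinge loss.

```python
def train_linear_svm(features, labels, lr=0.01, reg=0.01, epochs=300):
    """Train w, b for f(x) = sign(w.x + b); labels must be +1/-1."""
    dim = len(features[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # hinge loss active: push the sample past the margin
                w = [wi - lr * (reg * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # only the L2 regulariser acts
                w = [wi - lr * reg * wi for wi in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy "benign" (-1) vs "malignant" (+1) feature vectors (illustrative only).
feats = [[-2.0, -1.0], [-1.5, -2.0], [-2.2, -1.8],
         [2.0, 1.0], [1.5, 2.0], [2.2, 1.8]]
labs = [-1, -1, -1, 1, 1, 1]
w, b = train_linear_svm(feats, labs)
preds = [predict(w, b, x) for x in feats]
```

In the actual study the SVM would consume the CNN's learned feature vectors rather than these toy coordinates; the training loop itself is unchanged.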

2.
《IRBM》2022,43(5):422-433
Background: Electrocardiography (ECG) records the electrical activity of the heart and provides a diagnostic means for heart-related diseases. An arrhythmia is any irregularity of the heartbeat that causes an abnormal heart rhythm. Early detection of arrhythmia is of great importance in preventing many diseases. Manual analysis of ECG recordings is not practical for quickly identifying arrhythmias that may cause sudden death. Hence, many studies have developed computer-aided diagnosis (CAD) systems to identify arrhythmias automatically.
Methods: This paper proposes a novel deep learning approach to identify arrhythmias in ECG signals. The approach identifies arrhythmia classes using a Convolutional Neural Network (CNN) trained on two-dimensional (2D) ECG beat images. First, ECG signals containing 5 different arrhythmias are segmented into heartbeats, which are transformed into 2D grayscale images. The images are then used as input for training a new CNN architecture to classify heartbeats.
Results: The experimental results show that the proposed approach reaches an overall accuracy of 99.7%, sensitivity of 99.7%, and specificity of 99.22% in the classification of five different ECG arrhythmias. Further, the proposed CNN architecture is compared to popular architectures such as LeNet and ResNet-50 to evaluate the performance of the study.
Conclusions: Test results demonstrate that the deep network trained on ECG images provides outstanding classification performance on arrhythmic ECG signals and outperforms similar network architectures. Moreover, the proposed method has lower computational cost than existing methods and, because it involves no complex preprocessing, is better suited to mobile device-based diagnosis systems. Hence, it provides a simple and robust automatic detection scheme for the classification of ECG arrhythmias.
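The beat-to-image step can be sketched as follows: a window of ECG samples around each detected R-peak is rescaled to 0-255 and folded into a square grayscale "image" a 2-D CNN could consume. The peak position and the window/image sizes here are illustrative assumptions, not the paper's values.

```python
def beat_to_image(signal, peak, half_window=32, side=8):
    """Cut 2*half_window samples around `peak`, quantise to 8-bit grayscale,
    and fold the 1-D window into a side x side 2-D array (row-major)."""
    window = signal[peak - half_window:peak + half_window]
    lo, hi = min(window), max(window)
    span = (hi - lo) or 1.0                      # avoid divide-by-zero on flat beats
    gray = [int(255 * (v - lo) / span) for v in window]
    step = len(gray) // side                     # samples consumed per image row
    return [gray[r * step:r * step + side] for r in range(side)]

# Synthetic beat: a spike at sample 100 in an otherwise flat signal.
sig = [0.0] * 200
sig[100] = 1.0
img = beat_to_image(sig, peak=100)
```

A real pipeline would first run an R-peak detector over the recording and emit one such image per detected heartbeat.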

3.
Evaluation of blood smears is a common clinical test these days. Most of the time, hematologists are interested only in the white blood cells (WBCs). Digital image processing techniques can help them in their analysis and diagnosis: diseases like acute leukemia, for example, are detected based on the count and condition of the WBCs. The main objective of this paper is to segment the WBC into its two dominant elements, nucleus and cytoplasm. The segmentation uses a proposed framework that integrates several digital image processing algorithms. Twenty microscopic blood images were tested, and the proposed framework achieved 92% accuracy for nucleus segmentation and 78% for cytoplasm segmentation. The results indicate that the proposed framework is able to extract the nucleus and cytoplasm regions in a WBC image sample.
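The framework integrates several image-processing steps; as one illustrative building block (not necessarily the authors' exact choice), Otsu's global threshold is often used to separate the dark, stain-dense nucleus pixels from the brighter cytoplasm and background in a grayscale WBC image:

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level that maximises between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]                 # pixels at or below candidate threshold
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "image": dark nucleus pixels (~40) and bright cytoplasm (~200).
pix = [40] * 50 + [45] * 30 + [200] * 60 + [210] * 40
t = otsu_threshold(pix)
```

Pixels at or below the returned level would be labeled nucleus, the rest cytoplasm/background; the full framework then refines this with further morphological steps.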

4.
Purpose: To develop an automatic multimodal method for segmentation of the parotid glands (PGs) from pre-registered computed tomography (CT) and magnetic resonance (MR) images, and to compare its results with those of an existing state-of-the-art algorithm that segments PGs from CT images only.
Methods: MR images of the head and neck were registered to the accompanying CT images using two different state-of-the-art registration procedures. The reference domains of the registered image pairs were divided into complementary PG regions and background according to manual delineations of the PGs on the CT images provided by a physician. Patches of intensity values from both image modalities, centered on voxels sampled randomly from the reference domain, served as positive or negative samples for training a convolutional neural network (CNN) classifier. The trained CNN accepted a previously unseen (registered) image pair and classified its voxels according to the resemblance of their patches to the training patches. The final segmentation was refined with a graph-cut algorithm followed by dilate-erode operations.
Results: On the same image dataset, PG segmentation was performed with the proposed multimodal algorithm and with an existing monomodal algorithm that segments PGs from CT images only. The mean Dice overlap coefficient was 78.8% for the proposed algorithm and 76.5% for the monomodal algorithm.
Conclusions: Automatic PG segmentation on the planning CT image can be augmented with the MR image modality, leading to improved radiotherapy (RT) planning for head and neck cancer.
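The evaluation metric quoted above is the Dice overlap coefficient; a minimal implementation over binary masks (flat lists of 0/1 labels per voxel) is:

```python
def dice(mask_a, mask_b):
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks of equal length."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0   # two empty masks agree fully

auto = [1, 1, 1, 0, 0, 0, 1, 0]   # hypothetical automatic segmentation
ref  = [1, 1, 0, 0, 0, 0, 1, 1]   # hypothetical manual reference
score = dice(auto, ref)
```

A Dice of 1.0 means perfect overlap with the physician's delineation; the 78.8% vs 76.5% comparison in the abstract is this quantity averaged over cases.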

5.
Classification of brain tumors in Magnetic Resonance Imaging (MRI) images is widely used in treatment planning, early diagnosis, and outcome evaluation, but classifying and diagnosing tumors from many images is very difficult. An automatic prediction strategy is therefore essential for classifying brain tumors as malignant, core, edema, or benign. In this research, a novel approach using a Salp Water Optimization-based Deep Belief Network (SWO-based DBN) is introduced to classify brain tumors. Initially, the input image is pre-processed to remove artifacts. Following pre-processing, segmentation is performed by SegNet, which is trained using the proposed SWO. Convolutional Neural Network (CNN) features are then extracted for further processing. Finally, the SWO-based DBN efficiently categorizes the brain tumor from the extracted features, so that the output of the SegNet + SWO-based DBN pipeline serves both brain tumor segmentation and classification. The developed technique achieved an accuracy of 0.933, specificity of 0.880, and sensitivity of 0.938 on the BraTS 2018 dataset, and an accuracy of 0.921, specificity of 0.853, and sensitivity of 0.928 on the BraTS 2020 dataset.

6.
Schizosaccharomyces pombe shares many genes and proteins with humans and is a good model for chromosome behavior and DNA dynamics, which can be analyzed by visualizing the behavior of fluorescently tagged proteins in vivo. Performing a genome-wide screen for changes in such proteins requires developing methods that automate analysis of a large amount of images, the first step of which requires robust segmentation of the cell. We developed a segmentation system, PombeX, that can segment cells from transmitted illumination images with focus gradient and varying contrast. Corrections for focus gradient are applied to the image to aid in accurate detection of cell membrane and cytoplasm pixels, which is used to generate initial contours for cells. Gradient vector flow snake evolution is used to obtain the final cell contours. Finally, a machine learning-based validation of cell contours removes most incorrect or spurious contours. Quantitative evaluations show overall good segmentation performance on a large set of images, regardless of differences in image quality, lighting condition, focus condition and phenotypic profile. Comparisons with recent related methods for yeast cells show that PombeX outperforms current methods, both in terms of segmentation accuracy and computational speed.

7.
Purpose: This work describes PETSTEP (PET Simulator of Tracers via Emission Projection), a faster and more accessible alternative to Monte Carlo (MC) simulation for generating realistic PET images in studies assessing image features and segmentation techniques.
Methods: PETSTEP was implemented in Matlab as open-source software. It generates three-dimensional PET images from PET/CT data or synthetic CT and PET maps, with user-drawn lesions and user-set acquisition and reconstruction parameters. PETSTEP was used to reproduce images of the NEMA body phantom acquired on a GE Discovery 690 PET/CT scanner and simulated with MC for the GE Discovery LS scanner, and to generate realistic head and neck scans. Finally, the sensitivity (S) and positive predictive value (PPV) of three automatic segmentation methods were compared when applied to the scanner-acquired and PETSTEP-simulated NEMA images.
Results: PETSTEP produced 3D phantom and clinical images within 4 and 6 min respectively on a single-core 2.7 GHz computer. PETSTEP images of the NEMA phantom had mean intensities within 2% of the scanner-acquired image for both the background and the largest insert, and a 16% larger background full width at half maximum. Similar results were obtained when comparing PETSTEP images to MC-simulated data. The S and PPV obtained with simulated phantom images were statistically significantly lower than for the original images, but led to the same conclusions with respect to the evaluated segmentation methods.
Conclusions: PETSTEP allows fast simulation of synthetic images that reproduce scanner-acquired PET data and shows great promise for the evaluation of PET segmentation methods.
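The comparison above scores segmentations by sensitivity (S) and positive predictive value (PPV); both follow directly from voxel-level true/false positives and false negatives:

```python
def sensitivity_ppv(tp, fp, fn):
    """S = TP/(TP+FN): fraction of true lesion voxels found.
    PPV = TP/(TP+FP): fraction of flagged voxels that are truly lesion."""
    s = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return s, ppv

# Hypothetical voxel counts for one segmented phantom insert (not study data).
s, ppv = sensitivity_ppv(tp=90, fp=10, fn=30)
```

Comparing these two numbers between scanner-acquired and simulated images is how the study checks that PETSTEP ranks segmentation methods the same way real data does.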

8.
Objective: To conveniently analyze cell number and morphology during cell culture.
Methods: Deep learning was applied to cell recognition, yielding a method that captures images through an ordinary optical microscope and recognizes and counts cells directly in the culture dish.
Results: A U-Net architecture was built and trained on annotated images of adherent and suspension cells, enabling segmentation-based counting of both cell types. The algorithm was also used to plot cell growth curves and to calculate ...
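Counting cells from a segmentation mask like the one U-Net produces reduces, in the simplest case, to counting connected components in the binary mask; a small flood-fill sketch (4-connectivity) over a 0/1 grid illustrates the idea:

```python
def count_cells(mask):
    """Count 4-connected blobs of 1s in a 2-D list of 0/1 values."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if mask[r0][c0] and not seen[r0][c0]:
                count += 1
                stack = [(r0, c0)]           # iterative DFS flood fill
                while stack:
                    r, c = stack.pop()
                    if 0 <= r < rows and 0 <= c < cols and mask[r][c] and not seen[r][c]:
                        seen[r][c] = True
                        stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return count

grid = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [1, 0, 0, 1]]
n = count_cells(grid)
```

Real pipelines usually add a watershed step to split touching cells before counting, since plain connected components would merge them into one blob.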

9.
Epithelial and stromal tissues are components of the tumor microenvironment and play a major role in tumor initiation and progression. Distinguishing stroma from epithelial tissues is critically important for spatial characterization of the tumor microenvironment. Here, we propose BrcaSeg, an image analysis pipeline based on a convolutional neural network (CNN) model to classify epithelial and stromal regions in whole-slide hematoxylin and eosin (H&E) stained histopathological images. The CNN model is trained using well-annotated breast cancer tissue microarrays and validated with images from The Cancer Genome Atlas (TCGA) Program. BrcaSeg achieves a classification accuracy of 91.02%, which outperforms other state-of-the-art methods. Using this model, we generate pixel-level epithelial/stromal tissue maps for 1000 TCGA breast cancer slide images that are paired with gene expression data. We subsequently estimate the epithelial and stromal ratios and perform correlation analysis to model the relationship between gene expression and tissue ratios. Gene Ontology (GO) enrichment analyses of genes that are highly correlated with tissue ratios suggest that the same tissue is associated with similar biological processes in different breast cancer subtypes, whereas each subtype also has its own idiosyncratic biological processes governing the development of these tissues. Taken together, our approach can lead to new insights in exploring relationships between image-based phenotypes and their underlying genomic events and biological processes for all types of solid tumors. BrcaSeg can be accessed at https://github.com/Serian1992/ImgBio.
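The correlation analysis described above pairs a per-slide tissue ratio with a per-slide expression value; a plain Pearson correlation over such paired lists (toy numbers, not TCGA data) captures the core computation:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

stromal_ratio = [0.2, 0.4, 0.6, 0.8]       # hypothetical per-slide tissue ratios
expression    = [1.0, 2.1, 2.9, 4.0]       # hypothetical gene expression values
r = pearson(stromal_ratio, expression)
```

In the pipeline this coefficient is computed gene by gene, and the highly correlated genes are the ones fed into the GO enrichment analysis.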

10.
The wealth of spatial and spectral information available from last-generation Earth observation instruments has introduced extremely high computational requirements in many applications. Most currently available parallel techniques treat remotely sensed data not as images, but as unordered listings of spectral measurements with no spatial arrangement. In thematic classification applications, however, the integration of spatial and spectral information can be greatly beneficial. Although such integrated approaches can be efficiently mapped onto homogeneous commodity clusters, low-cost heterogeneous networks of computers (HNOCs) have become a standard tool of choice for dealing with the massive amount of image data produced by Earth observation missions. In this paper, we develop a new morphological/neural algorithm for parallel classification of high-dimensional (hyperspectral) remotely sensed image datasets. The algorithm's accuracy and parallel performance are tested on a variety of homogeneous and heterogeneous computing platforms, using two networks of workstations distributed among different locations, and also a massively parallel Beowulf cluster at NASA's Goddard Space Flight Center in Maryland.

11.
A leukemoid reaction, like leukemia, presents with a noticeably increased white blood cell (WBC) count, but it is caused by severe inflammation or infection elsewhere in the body. Leukocytosis is commonly observed in both leukemia and leukemoid reactions, so physicians risk wrongly diagnosing malignant leukemia in patients with leukemoid reactions. This paper aimed at an automatic process for distinguishing leukemia from leukemoid reactions in blood smear images using machine learning. First, WBCs are automatically detected and counted to identify leukocytosis; then WBC blasts are automatically detected to support the classification of leukemia versus leukemoid reactions. For the classification task, the ALL-IDB2 (Acute Lymphoblastic Leukemia Image Database) dataset was used, comprising 110 training images of blast cells and healthy cells; for detection and counting, the BCCD (blood cell count detection) dataset was used, with 364 blood smear images of which 349 contain a single WBC type. An image segmentation algorithm based on the watershed transform in Hue-Saturation-Value color space was applied. A deep learning technique based on the VGG16 (Visual Geometry Group) CNN (Convolutional Neural Network) architecture was incorporated to classify and count WBC types from the segmented images, and the same architecture was tested on the segmented images to identify WBC blasts.
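The segmentation step works in Hue-Saturation-Value space; a toy sketch of the color-thresholding part is below. The hue band is an illustrative guess for purple-stained WBC nuclei, not a calibrated range from the paper.

```python
import colorsys  # stdlib RGB<->HSV conversion

def wbc_mask(rgb_pixels, hue_lo=0.6, hue_hi=0.85, sat_min=0.3):
    """Mark pixels whose hue falls in the purple band with enough saturation."""
    mask = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        mask.append(1 if hue_lo <= h <= hue_hi and s >= sat_min else 0)
    return mask

pixels = [(120, 40, 160),   # purple-ish: stained WBC nucleus
          (230, 230, 230),  # near-white: background
          (200, 60, 60)]    # red-ish: erythrocyte
m = wbc_mask(pixels)
```

The resulting binary mask would then seed the watershed transform, which separates touching cells before the VGG16 classifier sees the crops.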

12.
13.
Purpose: The classification of urinary stones prior to treatment is important because treatment depends on the stone type: calcium, uric acid, or mixture. We have developed an automatic approach for classifying urinary stones into these three types from microcomputed tomography (micro-CT) images using a convolutional neural network (CNN).
Materials and methods: Thirty urinary stones from different patients were scanned in vitro using micro-CT (pixel size: 14.96 μm; slice thickness: 15 μm), producing a total of 2,430 images (micro-CT slices). The slices (227 × 227 pixels) were classified into the three categories based on their energy-dispersive X-ray (EDX) spectra obtained via scanning electron microscopy (SEM). The images of urinary stones from each category were divided into three parts: 66%, 17%, and 17% of the dataset were assigned to the training, validation, and test datasets, respectively. A 15-layer CNN model was assessed on validation accuracy to optimize hyperparameters such as batch size, learning rate, and number of epochs with different optimizers. The model with the optimized hyperparameters was then evaluated on the test dataset to obtain the classification accuracy and error.
Results: The validation accuracy of the developed CNN approach with optimized hyperparameters was 0.9852. The trained CNN model achieved a test accuracy of 0.9959 with a classification error of 1.2%.
Conclusions: The proposed automated CNN-based approach successfully classified urinary stones into the three types, namely calcium, uric acid, and mixture stones, using micro-CT images.
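The 66/17/17 split described above can be sketched as a shuffled index split; the proportions come from the abstract, while the seed is an arbitrary illustrative choice:

```python
import random

def split_dataset(n_items, fractions=(0.66, 0.17, 0.17), seed=42):
    """Shuffle item indices and cut them into train/validation/test lists."""
    idx = list(range(n_items))
    random.Random(seed).shuffle(idx)        # deterministic shuffle for reproducibility
    n_train = int(fractions[0] * n_items)
    n_val = int(fractions[1] * n_items)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]            # remainder absorbs rounding
    return train, val, test

train, val, test = split_dataset(2430)      # 2,430 micro-CT slices in the study
```

Note the study splits by stone category (stratified), and slice-level splitting can leak slices of the same stone across sets; a per-stone split would be the stricter variant.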

14.
Purpose: Arm artifact, a type of streak artifact frequently observed in computed tomography (CT) images acquired with arms-down positioning in polytrauma patients, is known to degrade image quality. This study aimed to develop a novel arm-artifact reduction algorithm (AAR) applied to projection data.
Methods: A phantom resembling an adult abdomen with two arms was scanned using a 16-row CT scanner. The projection data were processed by AAR, and CT images were reconstructed. Artifact reduction was compared with that achieved by two recent iterative reconstruction (IR) techniques (IR1 and IR2) using a normalized artifact index (nAI) at two locations (ventral and dorsal side). Image blurring, as a side effect of the processing, was compared with the model-based IR2 using a plastic needle phantom. Additionally, projection data from two clinical cases were processed with AAR and the image noise was evaluated.
Results: AAR and IR2 significantly reduced nAI by 87.5% and 74.0%, respectively, at the ventral side, and by 84.2% and 69.6%, respectively, at the dorsal side, compared with filtered back projection (P < 0.01), whereas IR1 did not. The proposed algorithm largely maintained the original spatial resolution, whereas IR2 produced apparent image blurring. The image noise in the clinical cases was also reduced significantly (P < 0.01).
Conclusions: AAR was more effective than the latest IR techniques and is expected to improve the image quality of polytrauma CT imaging with arms-down positioning.

15.
16.
Purpose: EPID dosimetry in the Unity MR-Linac system allows reconstruction of absolute dose distributions within the patient geometry. Dose reconstruction is accurate for the parts of the beam arriving at the EPID through the central unattenuated region of the MRI, free of gradient coils, resulting in a maximum field size of ~10 × 22 cm2 at isocentre. The purpose of this study is to develop a deep learning-based method to improve the accuracy of 2D EPID-reconstructed dose distributions outside this central region, accounting for the effects of the extra attenuation and scatter.
Methods: A U-Net was trained to correct EPID dose images calculated at the isocentre inside a cylindrical phantom, using the corresponding TPS dose images as ground truth. The model was evaluated using a 5-fold cross-validation procedure. The clinical validity of the U-Net-corrected dose images (the so-called DEEPID dose images) was assessed with in vivo verification data from 45 large rectum IMRT fields. The sensitivity of DEEPID to leaf bank position errors (±1.5 mm) and ±5% MU delivery errors was also tested.
Results: Compared to the TPS, in vivo 2D DEEPID dose images showed an average γ-pass rate of 90.2% (72.6%–99.4%) outside the central unattenuated region; without DEEPID correction, this number was 44.5% (4.0%–78.4%). DEEPID correctly detected the introduced delivery errors.
Conclusions: DEEPID allows accurate dose reconstruction using the entire EPID image, enabling dosimetric verification for field sizes up to ~19 × 22 cm2 at isocentre. The method can be used to detect clinically relevant errors.
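The in vivo check above reports a γ-pass rate. Full γ analysis combines dose difference with distance-to-agreement, but the dose-difference half alone already illustrates the "fraction of points within tolerance" idea; the 3% global tolerance below is a common convention assumed for illustration, not taken from the paper:

```python
def dose_diff_pass_rate(measured, planned, tol=0.03):
    """Fraction of points where |measured - planned| <= tol * max(planned).
    A simplified, dose-difference-only stand-in for a full gamma analysis."""
    ref = max(planned)                            # global normalisation dose
    ok = sum(1 for m, p in zip(measured, planned) if abs(m - p) <= tol * ref)
    return ok / len(planned)

planned  = [1.0, 2.0, 2.0, 1.0]                   # toy dose profile (arbitrary units)
measured = [1.02, 2.01, 1.80, 1.01]               # third point is off by 10% of max
rate = dose_diff_pass_rate(measured, planned)
```

A true γ computation would additionally search nearby spatial positions for agreement, so it is always at least as forgiving as this dose-only check.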

17.
Objective: To design an artificial-intelligence-assisted diagnostic system for liver ultrasound images that helps physicians perform standardized, efficient diagnosis on large samples of liver ultrasound data, achieving accurate diagnosis of non-alcoholic fatty liver disease (NAFLD) from liver ultrasound images.
Methods: Three modules were developed, covering recognition and classification of liver ultrasound images, fatty liver grading, and quantitative analysis of liver fat content, forming an AI-assisted ultrasound diagnostic system for NAFLD. The system automatically distinguishes the types of ultrasound images acquired with different sampling fields of view, digitally analyzes the liver ultrasound images, and reports whether a test image shows fatty liver together with the percentage of liver fat content.
Results: The image recognition and classification module distinguished hepatorenal-ratio images from attenuation-rate images in high throughput with 100% accuracy. The fatty liver grading module reached 84% accuracy on the test set, demonstrating its capability to assist physician diagnosis. The liver fat quantification module, developed from a manual liver fat quantification method, reached 67.74% accuracy.
Conclusion: An intelligent assisted-diagnosis system based on liver ultrasound images has been developed that can help physicians screen potential fatty liver patients quickly, simply, and non-invasively; although fully quantitative liver fat analysis remains difficult at this stage, the system shows considerable potential for clinical application.

18.
《IRBM》2022,43(6):640-657
Objectives: Image segmentation plays an important role in the analysis and understanding of cellular processes. However, the task becomes difficult when there is intensity inhomogeneity between regions, and it is more challenging still in the presence of noise and clustered cells. The goal of this paper is to propose an image segmentation framework that tackles these problems.
Material and methods: A new two-step method is proposed: first, segment the image using a B-spline level set with the Region-Scalable Fitting (RSF) active contour model; second, apply a watershed algorithm based on new object markers to refine the segmentation and separate clustered cells. The major contributions of the paper are: 1) a continuous formulation of the level set in the B-spline basis; 2) the development of the energy function and its derivative, introducing the RSF model to deal with intensity inhomogeneity; 3) for the watershed, a relevant choice of markers that considers cell properties.
Results: Experiments were performed on widely used synthetic images, as well as on simulated and real biological images, with and without additive noise. They attest to the high segmentation quality of the proposed method in both quantitative and qualitative evaluation.
Conclusion: The proposed method tackles many difficulties at once: overlapping intensities, noise, differing cell sizes, and clustered cells. It provides an efficient tool for image segmentation, especially of biological images.

19.
Purpose: Whole-body bone scintigraphy is the most widely used method for detecting bone metastases in advanced cancer, but its interpretation depends on the experience of the radiologist. Automatic interpretation systems have been developed to improve diagnostic accuracy; these systems are pixel-based and do not use the spatial or textural information of groups of pixels, which could be very important for classifying images more accurately. This paper presents a fast object-oriented classification method that facilitates easier interpretation of bone scintigraphy images.
Methods: Nine whole-body images from patients with suspected bone metastases were analyzed in this preliminary study. First, an edge-based segmentation algorithm, together with the full lambda-schedule algorithm, was used to identify objects in the bone scintigraphy, and the textural and spatial attributes of these objects were calculated. A set of 224 objects (~46% of the total) was then selected as training data based on visual examination of the images and assigned to various levels of radionuclide accumulation, before classifying the data with both k-nearest-neighbor and support vector machine classifiers. The performance of the proposed method was evaluated using statistical parameters calculated from the error matrix.
Results: The proposed object-oriented classification approach performed well in detecting bone metastases with either classifier, in terms of both overall accuracy (86.62 ± 2.163% for k-nearest-neighbor and 86.81 ± 2.137% for support vector machine) and kappa coefficient (0.6395 ± 0.0143 and 0.6481 ± 0.0218, respectively).
Conclusions: The described method provided encouraging results for mapping bone metastases in whole-body bone scintigraphy.
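The evaluation above reports overall accuracy and the kappa coefficient, both derived from the error (confusion) matrix; a minimal Cohen's kappa, with rows as reference classes and columns as predictions (toy counts, not the study's), looks like this:

```python
def cohens_kappa(cm):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(len(cm))) / n
    # Chance agreement: product of matching row and column marginals per class.
    expected = sum(sum(cm[i]) * sum(row[i] for row in cm)
                   for i in range(len(cm))) / n ** 2
    return (observed - expected) / (1 - expected)

cm = [[45, 5],
      [10, 40]]
kappa = cohens_kappa(cm)
```

Kappa discounts the agreement expected by chance alone, which is why the abstract's ~0.64 values are informative alongside the ~87% raw accuracies.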

20.
《IRBM》2009,30(4):184-187
MRI is an important technique for the follow-up of myocardial infarction, allowing both contraction and viability (through delayed enhancement, DE) to be studied in a single examination. This study proposes to automate the segmentation of the myocardium prior to estimating the extent of infarcted tissue. The myocardium was segmented on cine contraction images, which present high contrast between cavity and myocardium, and the resulting segmentation was then mapped onto the DE images. After segmentation, segmental transmurality was estimated on a conventional five-point scale using the fuzzy k-means classification algorithm. A head-to-head comparison was performed between visual and quantitative analysis of the infarct transmural extent on DE-MR imaging. Results on nine patients showed an absolute agreement of 80% and a relative agreement (within one point) of 97%.
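The transmurality scoring uses fuzzy k-means (fuzzy c-means); a compact 1-D version with the usual fuzzifier m = 2 is sketched below. The two-cluster setup and the enhancement values are toy assumptions, not the paper's five-point clinical scale:

```python
def fuzzy_cmeans_1d(xs, centers, iters=50, m=2.0):
    """Return (centers, memberships); each membership row sums to 1."""
    u = []
    for _ in range(iters):
        u = []
        for x in xs:
            dists = [abs(x - c) or 1e-9 for c in centers]   # guard exact hits
            # Standard FCM membership: u_k = 1 / sum_j (d_k/d_j)^(2/(m-1))
            u.append([1.0 / sum((d / e) ** (2 / (m - 1)) for e in dists)
                      for d in dists])
        # Centers are membership^m - weighted means of the data.
        centers = [sum(u[i][k] ** m * xs[i] for i in range(len(xs)))
                   / sum(u[i][k] ** m for i in range(len(xs)))
                   for k in range(len(centers))]
    return centers, u

# Segmental enhancement values clustered into "low" vs "high" transmurality.
data = [0.05, 0.1, 0.12, 0.8, 0.85, 0.9]
centers, u = fuzzy_cmeans_1d(data, centers=[0.0, 1.0])
```

Unlike crisp k-means, each segment keeps a graded membership in every class, which suits the borderline transmurality scores the five-point scale has to adjudicate.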
