Similar Literature
 16 similar documents found (search time: 0 ms)
1.
Purpose: Artificial intelligence (AI) models are playing an increasing role in biomedical research and healthcare services. This review focuses on the challenging points that need to be clarified when developing AI applications as clinical decision support systems in a real-world context.
Methods: A narrative review was performed, including a critical assessment of articles published between 1989 and 2021 that informed the sections on each challenge.
Results: We first illustrate the architectural characteristics of machine learning (ML)/radiomics and deep learning (DL) approaches. For ML/radiomics, the phases of feature selection and of training, validation, and testing are described. DL models are presented as multi-layered artificial/convolutional neural networks able to process images directly. The data curation section covers technical steps such as image labelling, image annotation (with segmentation as a crucial step in radiomics), data harmonization (compensating for the differences in imaging protocols that typically generate noise in non-AI imaging studies), and federated learning. We then dedicate specific sections to: sample size calculation, considering multiple testing in AI approaches; data augmentation procedures for working with limited and unbalanced datasets; and the interpretability of AI models (the so-called black-box issue). Finally, the pros and cons of choosing ML versus DL for implementing AI applications in medical imaging are presented synoptically.
Conclusions: Biomedicine and healthcare systems are among the most important fields for AI applications, and medical imaging is probably the most suitable and promising domain. Clarifying these specific challenging points facilitates the development of such systems and their translation to clinical practice.
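The ML/radiomics workflow outlined above can be illustrated with a short sketch. The Python example below (synthetic data; the feature counts, split ratios, and logistic-regression classifier are illustrative assumptions, not taken from the reviewed studies) shows feature selection fitted on the training set only, followed by evaluation on held-out validation and test data:

```python
# Minimal sketch of a radiomics pipeline: feature selection + train/val/test.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))   # 200 lesions x 100 radiomic features (synthetic)
y = rng.integers(0, 2, size=200)  # binary labels (synthetic)

# One split into train / validation / test to avoid information leakage.
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Feature selection and scaling must be fitted on the training set only.
selector = SelectKBest(f_classif, k=10).fit(X_tr, y_tr)
scaler = StandardScaler().fit(selector.transform(X_tr))

def prep(X_):
    return scaler.transform(selector.transform(X_))

clf = LogisticRegression(max_iter=1000).fit(prep(X_tr), y_tr)
print("validation AUC:", roc_auc_score(y_val, clf.predict_proba(prep(X_val))[:, 1]))
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(prep(X_te))[:, 1]))
```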

2.
The diagnosis of Coronary Artery Disease (CAD), Myocardial Infarction (MI) and carotid atherosclerosis is of paramount importance, as these cardiovascular diseases may cause medical complications and a large number of deaths. Ultrasound (US) is a widely used imaging modality, as it captures moving images and its image features correlate well with results obtained from other imaging methods. Furthermore, US does not use ionizing radiation and is economical compared to other imaging modalities. However, reading US images takes time, and the relationship between image and tissue composition is complex. The diagnostic accuracy therefore depends on both the time taken to read the images and the experience of the screening practitioner. Computer support tools can reduce inter-operator variability and the need for subject-specific expertise when appropriate processing methods are used. In the current review, we analysed automatic detection methods for the diagnosis of CAD, MI and carotid atherosclerosis based on thoracic and Intravascular Ultrasound (IVUS). We found that IVUS is used more often than thoracic US for CAD, but for MI and carotid atherosclerosis IVUS is still at the experimental stage. Furthermore, thoracic US is used more often than IVUS in computer-aided diagnosis systems.

3.
Purpose: To develop a computerized detection system for the automatic classification of the presence/absence of mass lesions in annotated digital breast tomosynthesis (DBT) exams, based on a deep convolutional neural network (DCNN).
Materials and Methods: Three DCNN architectures working at image level (DBT slice) were compared: two state-of-the-art pre-trained architectures (AlexNet and VGG19) customized through transfer learning, and one developed from scratch (DBT-DCNN). To evaluate these architectures we analysed their classification performance on two different datasets provided by two hospital radiology departments. DBT slice images were processed with normalization, background correction and data augmentation procedures. Accuracy, sensitivity, and area-under-the-curve (AUC) values were evaluated on both datasets using receiver operating characteristic curves. A Grad-CAM technique was also implemented, providing an indication of the lesion position in the DBT slice.
Results: Accuracy, sensitivity and AUC for the investigated DCNNs are in line with the best performance reported in the field. The DBT-DCNN network developed in this work showed an accuracy of (90% ± 4%) and a sensitivity of (96% ± 3%), with an AUC of 0.89 ± 0.04. A k-fold cross-validation test (k = 4) showed an accuracy of 94.0% ± 0.2%, and the F1-score was 0.93 ± 0.03. Grad-CAM maps show high activation at pixels within the tumour regions.
Conclusions: We developed a deep learning-based framework (DBT-DCNN) to classify DBT images from clinical exams. We also investigated a possible application of the Grad-CAM technique to identify the lesion position.
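As a concrete illustration of the transfer-learning and Grad-CAM steps described above, the PyTorch sketch below customizes a pre-trained VGG19 for a two-class (mass / no mass) task and computes a Grad-CAM map from its last convolutional layer. The layer index, input size, and random input are illustrative assumptions; this is not the authors' implementation.

```python
# Minimal sketch: VGG19 transfer learning + a bare-bones Grad-CAM pass.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
model.classifier[6] = torch.nn.Linear(4096, 2)  # new head: mass / no mass
model.eval()

activations, gradients = {}, {}
last_conv = model.features[34]  # final conv layer of VGG19

def fwd_hook(m, i, o): activations["a"] = o
def bwd_hook(m, gi, go): gradients["g"] = go[0]
last_conv.register_forward_hook(fwd_hook)
last_conv.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)  # stand-in for a normalized DBT slice (3 channels)
score = model(x)[0, 1]           # logit of the "mass present" class
score.backward()

# Grad-CAM: channel weights = global-average-pooled gradients.
w = gradients["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * activations["a"]).sum(dim=1))           # (1, H', W') map
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:], mode="bilinear")
```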

4.
Artificial intelligence (AI) has recently become a very popular buzzword, as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, such as radiology, pathology or oncology, have seized the opportunity, and considerable research and development efforts have been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks, such as diagnosis, segmentation, or classification, the key to safe and efficient use of clinical AI applications lies, in part, with informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss new trends and future research directions. This will help the reader to understand how AI methods are becoming a ubiquitous tool in any medical image analysis workflow, and pave the way for the clinical implementation of AI-based solutions.

5.
6.
7.
Introduction: Our markerless tumor tracking algorithm requires 4DCT data to train models. 4DCT cannot be used for markerless tracking in respiratory-gated treatment because of its inaccuracies and high radiation dose. We therefore developed a deep neural network (DNN) to generate 4DCT from 3DCT data.
Methods: We used 2420 thoracic 4DCT datasets from 436 patients to train a DNN designed to export 9 deformation vector fields (each representing one-ninth of the respiratory cycle) from each CT dataset, based on a 3D convolutional autoencoder with shortcut connections using deformable image registration. The exhale 3DCT data were then transformed with the predicted deformation vector fields to obtain simulated 4DCT data. We compared markerless tracking accuracy between original and simulated 4DCT datasets for 20 patients. Our tracking algorithm used a machine learning approach with patient-specific model parameters. In the training stage, a pair of digitally reconstructed radiography images was generated from the 4DCT of each patient. In the prediction stage, the tracking algorithm calculated the tumor position from incoming fluoroscopic image data.
Results: Diaphragmatic displacements averaged over 40 cases were slightly higher (<1.3 mm) for the original 4DCT than for the simulated 4DCT. Tracking positional errors (95th percentile of the absolute value of displacement, "simulated 4DCT" minus "original 4DCT") averaged over the 20 cases were 0.56 mm, 0.65 mm, and 0.96 mm in the X, Y and Z directions, respectively.
Conclusions: We developed a DNN to generate simulated 4DCT data that are useful for markerless tumor tracking when original 4DCT is not available. Using this DNN would accelerate markerless tumor tracking and increase treatment accuracy in thoracoabdominal treatment.
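The final transformation step described above (warping the exhale 3DCT with a predicted deformation vector field to produce one simulated respiratory phase) can be sketched as follows. The DVF here is a synthetic uniform shift; in the paper it is predicted by the 3D convolutional autoencoder:

```python
# Minimal sketch: warp a 3DCT volume with a deformation vector field (DVF).
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(vol, dvf):
    """vol: (Z, Y, X) CT volume; dvf: (3, Z, Y, X) displacement in voxels."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in vol.shape], indexing="ij")
    coords = np.stack([zz + dvf[0], yy + dvf[1], xx + dvf[2]])
    return map_coordinates(vol, coords, order=1, mode="nearest")

vol = np.random.rand(32, 64, 64).astype(np.float32)    # toy exhale 3DCT
dvf = np.zeros((3, 32, 64, 64), dtype=np.float32)
dvf[0] = 2.0  # synthetic uniform 2-voxel superior-inferior shift
phase = warp_volume(vol, dvf)  # one of the 9 simulated respiratory phases
```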

8.
Identifying the subcellular localization of proteins is particularly helpful in the functional annotation of gene products. In this study, we use machine learning and Exploratory Data Analysis (EDA) techniques to examine and characterize amino acid sequences of human proteins localized in nine cellular compartments. A dataset of 3,749 protein sequences representing human proteins was extracted from the SWISS-PROT database. Feature vectors were created to capture specific amino acid sequence characteristics. Compared with a Support Vector Machine, a Multi-layer Perceptron, and a Naive Bayes classifier, the C4.5 Decision Tree algorithm was the most consistent performer across all nine compartments in reliably predicting the subcellular localization of proteins from their amino acid sequences (average precision = 0.88; average sensitivity = 0.86). Furthermore, EDA graphics characterized essential features of proteins in each compartment. For example, proteins localized to the plasma membrane had higher proportions of hydrophobic amino acids; cytoplasmic proteins had higher proportions of neutral amino acids; and mitochondrial proteins had higher proportions of neutral amino acids and lower proportions of polar amino acids. These data showed that the C4.5 classifier and EDA tools can be effective for characterizing and predicting the subcellular localization of human proteins based on their amino acid sequences.
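The feature-extraction idea described above (turning each sequence into an amino-acid composition vector and feeding it to a decision tree) can be sketched briefly. scikit-learn's DecisionTreeClassifier (a CART implementation) stands in for C4.5, and the toy sequences and labels are illustrative assumptions:

```python
# Minimal sketch: amino-acid composition features + decision tree classifier.
from sklearn.tree import DecisionTreeClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Fraction of each of the 20 standard amino acids in the sequence."""
    return [seq.count(a) / len(seq) for a in AMINO_ACIDS]

# Toy stand-ins for SWISS-PROT sequences and compartment labels.
seqs = ["MKTLLLTLVVV", "MDDDIAALVVD", "MKKRRRSTLVL", "MAAGGLLPWWF"]
labels = ["membrane", "cytoplasm", "nucleus", "membrane"]

X = [composition(s) for s in seqs]
clf = DecisionTreeClassifier(random_state=0).fit(X, labels)
print(clf.predict([composition("MKVLLLTLLVF")]))  # predicted compartment
```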

9.
Purpose: In proton therapy, imaging prompt gamma (PG) rays has the potential to verify the proton dose (PD) distribution. Although gamma-ray emission correlates strongly with PD, the two still differ in distribution and in Bragg peak (BP) position. In this work, we investigated the feasibility of using a deep learning approach to convert PG images into PD distributions.
Methods: We designed Monte Carlo simulations using 20 digital brain phantoms irradiated with a 100-MeV proton pencil beam. Each phantom was used to simulate 200 pairs of PG images and PD distributions. A convolutional neural network based on the U-net architecture was trained to predict PD distributions from PG images.
Results: Our simulation results show that the pseudo PD distributions derived from the corresponding PG images agree well with the simulated ground truths. The mean BP position error for each phantom was less than 0.4 mm. We also found that 2000 pairs of PG images and dose distributions were sufficient to train the U-net. Moreover, the trained network could be deployed on unseen data (i.e., different beam sizes, proton energies and real patient CT data).
Conclusions: Our simulation study has shown the feasibility of predicting PD distributions from PG images using a deep learning approach, but reliable prediction of PD distributions requires high-quality PG images. Image-degrading factors such as low counts and limited spatial resolution need to be addressed in order to obtain high-quality PG images.
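A minimal sketch of the image-to-image mapping described above (prompt-gamma image in, dose distribution out) is shown below as a tiny 2D U-net in PyTorch. Depth, channel counts, and input size are illustrative assumptions, far smaller than a realistic model:

```python
# Minimal 2D U-net sketch for PG-image -> dose-distribution regression.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)        # 32 = 16 skip channels + 16 upsampled
        self.out = nn.Conv2d(16, 1, 1)  # regression output: dose map

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.out(d)

net = TinyUNet()
pg = torch.randn(1, 1, 64, 64)   # toy prompt-gamma image
dose = net(pg)                   # predicted dose distribution
loss = nn.functional.mse_loss(dose, torch.randn_like(dose))  # training loss
```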

10.
Purpose: To train and evaluate a very deep dilated residual network (DD-ResNet) for fast and consistent auto-segmentation of the clinical target volume (CTV) in breast cancer (BC) radiotherapy using a large dataset.
Methods: DD-ResNet was an end-to-end model enabling fast training and testing. We evaluated it on a large dataset comprising 800 patients who underwent breast-conserving therapy. The CTVs were validated by experienced radiation oncologists. We performed fivefold cross-validation to test the performance of the model. Segmentation accuracy was quantified by the Dice similarity coefficient (DSC) and the Hausdorff distance (HD). The proposed model was evaluated against two other deep learning models: a deep dilated convolutional neural network (DDCNN) and a deep deconvolutional neural network (DDNN).
Results: Mean DSC values of DD-ResNet (0.91 and 0.91) were higher than those of the other two networks (DDCNN: 0.85 and 0.85; DDNN: 0.88 and 0.87) for both right-sided and left-sided BC. It also had smaller mean HD values (10.5 mm and 10.7 mm) than DDCNN (15.1 mm and 15.6 mm) and DDNN (13.5 mm and 14.1 mm). Mean segmentation time per patient was 4 s with DDCNN, 21 s with DDNN and 15 s with DD-ResNet. DD-ResNet also compared favourably with results reported in the literature.
Conclusions: The proposed method could segment the CTV accurately with acceptable time consumption. It was invariant to the body size and shape of patients, and could improve the consistency of target delineation and streamline radiotherapy workflows.
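The two evaluation metrics quoted above can be computed compactly. The sketch below implements the Dice similarity coefficient and a symmetric Hausdorff distance between two binary segmentation masks via SciPy's directed_hausdorff; the toy masks stand in for automatic and expert CTV contours:

```python
# Minimal sketch of DSC and symmetric Hausdorff distance for binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    pa, pb = np.argwhere(a), np.argwhere(b)  # foreground voxel coordinates
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

auto = np.zeros((64, 64), dtype=bool); auto[20:40, 20:40] = True      # model CTV
manual = np.zeros((64, 64), dtype=bool); manual[22:42, 21:41] = True  # expert CTV
print(f"DSC = {dice(auto, manual):.2f}, HD = {hausdorff(auto, manual):.1f} px")
```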

11.
Objective: To observe the influence of CYP2C19 genetic polymorphism of the cytochrome P450 drug-metabolizing enzyme system, together with related clinical factors, on clopidogrel resistance. Methods: A total of 145 patients with coronary heart disease scheduled for PCI in our department between November 2010 and May 2011 were enrolled; all received a 300 mg loading dose and a 75 mg maintenance dose of clopidogrel. (1) The vasodilator-stimulated phosphoprotein (VASP) platelet reactivity index (PRI) was measured by flow cytometry, and patients were divided into a clopidogrel-resistant group (defined as VASP PRI ≥ 50%) and a clopidogrel-responsive group. (2) The CYP2C19 genotype of each patient was determined; according to loss-of-function alleles, patients were classified as extensive metabolizers (*1/*1), intermediate metabolizers (*1/*2, *1/*3) or poor metabolizers (*2/*2, *2/*3, *3/*3). (3) The influence of CYP2C19 genotype and related clinical risk factors on clopidogrel responsiveness was assessed. (4) The association between clopidogrel resistance and adverse clinical endpoints was examined, with major endpoints (cardiac death, recurrent myocardial infarction, target lesion revascularization (TLR)) and secondary endpoints (in-stent thrombosis, cerebrovascular accident, major bleeding). Results: Clopidogrel resistance was detected in 31 patients (20.67%), and 19 patients (12.67%) carried a CYP2C19 poor-metabolizer genotype. VASP PRI was (49.20 ± 8.45)% in poor metabolizers versus (44.17 ± 5.41)% in extensive plus intermediate metabolizers (P < 0.05), and the incidence of clopidogrel resistance was 35.49% (n = 11) versus 16.81% (n = 20) (P < 0.05). Multivariate regression analysis indicated that the CYP2C19 poor-metabolizer genotype (OR: 4.43; 95% CI: 3.28-8.37; P < 0.05) and type 2 diabetes mellitus (OR: 2.76; 95% CI: 2.13-6.14; P < 0.05) were risk factors for clopidogrel resistance. Clinical follow-up showed an incidence of major adverse clinical endpoints of 6.45% (n = 2) in the clopidogrel-resistant group versus 2.63% (n = 3) in the responsive group (P < 0.05). Conclusion: Carrying a CYP2C19 poor-metabolizer genotype and having type 2 diabetes mellitus are two important risk factors for clopidogrel resistance, and the occurrence of clopidogrel resistance increases the risk of adverse clinical endpoints.
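The multivariate step reported above can be sketched as a logistic regression whose exponentiated coefficients give odds ratios. The example below uses synthetic data; the prevalences and effect sizes are rough, invented stand-ins, not the study's data:

```python
# Minimal sketch: logistic regression of clopidogrel resistance on
# CYP2C19 poor-metabolizer status and type 2 diabetes; OR = exp(coef).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 145
poor_metab = rng.random(n) < 0.13   # ~13% poor metabolizers (synthetic)
diabetes = rng.random(n) < 0.30     # synthetic diabetes prevalence
logit = -2.0 + 1.5 * poor_metab + 1.0 * diabetes
resistant = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([poor_metab, diabetes]).astype(float))
fit = sm.Logit(resistant.astype(float), X).fit(disp=0)
print("odds ratios:", np.exp(fit.params[1:]))          # poor metabolizer, diabetes
print("95% CI:", np.exp(fit.conf_int()[1:]))           # CIs on the OR scale
```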

12.
Objective: To evaluate, using speckle tracking imaging (STI), changes in left ventricular myocardial mechanics after percutaneous coronary intervention (PCI) in patients with severe coronary artery stenosis. Methods: Thirty patients with ≥75% stenosis of the left anterior descending coronary artery (lesion group) underwent echocardiography one day before PCI and again 3 days and 3 months after the procedure. Conventional measurements included left ventricular ejection fraction (LVEF), left ventricular end-diastolic diameter (LVDd) and left ventricular end-diastolic volume (LVEDV); STI was used to measure peak systolic strain of the ischemic myocardial segments in the longitudinal, radial and circumferential directions (LS, RS, CS). Thirty healthy subjects served as the control group. Results: (1) Compared with controls, the ischemic-myocardium strain values (LS, RS, CS) of the lesion group before PCI were reduced to varying degrees (all P < 0.05); values 3 days after PCI did not differ significantly from pre-operative values (P > 0.05); 3 months after PCI, LS, RS and CS had improved to varying degrees compared with pre-operative values (all P < 0.05) and no longer differed significantly from controls (P > 0.05). (2) LVEF, LVDd and LVEDV of the lesion group did not differ significantly from controls at any time point before or after PCI (P > 0.05). Conclusion: STI can quantitatively and sensitively evaluate changes in ischemic myocardial mechanics in patients with severe coronary stenosis, providing an objective basis for assessing the efficacy of PCI in patients with coronary heart disease.

13.
Gross Primary Productivity (GPP) is the amount of CO2 sequestered during plant photosynthesis. GPP is an important indicator of ecosystem health across various ecologies and for assessing climate change. The objective of the present work is to propose a machine-learning-based GPP estimation model that uses remote sensing (RS) data in combination with meteorological (MET) and topographical (TOPO) data, and that can be upscaled in temporal and spatial resolution. Random Forest Regression (RFR) is proposed for this purpose, using the Fluxnet2015 GPP dataset for the Australian region. The model attained a very high accuracy, with an R2 value of 0.82 as estimated by 10-fold cross-validation. It was compared with state-of-the-art machine learning models and found to perform better. Different feature sets, such as MET features and TOPO features, were evaluated in combination with RS features; the results showed that the RFR model performed better when MET and TOPO features were combined with RS features. GPP prediction for the year 2014, at 8-day temporal and 500 m spatial resolution for the Australian region and for different plant functional types, is demonstrated using the proposed model and produced a very high R2 value (0.84) when compared with ground truth. The proposed RFR approach to GPP estimation thus represents a significant improvement for regional carbon cycle studies and can also be employed to simulate GPP for the future under different climate scenarios.
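The modelling setup described above can be sketched in a few lines: Random Forest Regression on concatenated RS + MET + TOPO features, scored with 10-fold cross-validated R2. The feature counts and synthetic target below are illustrative assumptions:

```python
# Minimal sketch: Random Forest Regression for GPP with 10-fold CV R^2.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
rs = rng.normal(size=(n, 8))    # remote-sensing features (synthetic)
met = rng.normal(size=(n, 5))   # meteorological features (synthetic)
topo = rng.normal(size=(n, 3))  # topographical features (synthetic)
X = np.hstack([rs, met, topo])
gpp = X @ rng.normal(size=16) + rng.normal(scale=0.5, size=n)  # synthetic GPP

model = RandomForestRegressor(n_estimators=200, random_state=0)
r2 = cross_val_score(model, X, gpp, cv=10, scoring="r2")
print(f"10-fold CV R^2: {r2.mean():.2f} +/- {r2.std():.2f}")
```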

14.
MR fingerprinting (MRF) is an innovative approach to quantitative MRI. A typical disadvantage of dictionary-based MRF is the explosive growth of the dictionary as a function of the number of reconstructed parameters, an instance of the curse of dimensionality that drives a corresponding explosion in resource requirements. In this work, we describe a deep learning approach to MRF parameter map reconstruction using a fully connected architecture. Using simulations, we investigated how the performance of the neural network (NN) approach scales with the number of parameters to be retrieved, compared to the standard dictionary approach. We also studied optimal training procedures by comparing different strategies for noise addition and parameter space sampling, to achieve better accuracy and robustness to noise. Four MRF sequences were considered: IR-FISP, bSSFP, IR-FISP-B1, and IR-bSSFP-B1. A comparison between the NN and dictionary approaches in reconstructing parameter maps as a function of the number of parameters to be retrieved was performed using a numerical brain phantom. Results demonstrated that training with random sampling and different levels of noise variance yielded the best performance. NN performance was at least as good as that of the dictionary-based approach in reconstructing parameter maps with Gaussian noise as the source of artifacts; the difference in performance increased with the number of estimated parameters, because the dictionary method suffers from the coarse resolution of the parameter space sampling. The NN proved more efficient in memory usage and computational burden, and has great potential for solving large-scale MRF problems.
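For reference, the baseline dictionary approach that the NN replaces amounts to matching each measured fingerprint to the dictionary atom with the highest normalized inner product and returning that atom's parameters. A minimal sketch follows; the dictionary size, signal length, and (T1, T2) grid are illustrative assumptions:

```python
# Minimal sketch of MRF dictionary matching by normalized inner product.
import numpy as np

rng = np.random.default_rng(0)
n_atoms, sig_len = 10_000, 500
dictionary = rng.normal(size=(n_atoms, sig_len))             # simulated fingerprints
params = rng.uniform([100, 10], [3000, 300], (n_atoms, 2))   # (T1, T2) per atom, ms

dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

def match(signal):
    """Return (T1, T2) of the best-matching dictionary atom."""
    s = signal / np.linalg.norm(signal)
    return params[np.argmax(np.abs(dictionary @ s))]

measured = dictionary[1234] + 0.05 * rng.normal(size=sig_len)  # noisy fingerprint
print(match(measured))  # close to params[1234]
```

The dictionary grows multiplicatively with each added parameter dimension (e.g., adding a B1 axis multiplies n_atoms by the number of B1 samples), which is the resource explosion the abstract refers to.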

15.
This paper describes an X-ray phase contrast imaging technique using analyzer-based optics, called X-ray Dark-Field Imaging (XDFI), that has been under development for the past 10 years. We describe the theory behind XDFI, the X-ray optics required for implementing it in practice, and the algorithms used for 2D, 2.5D, and 3D image reconstruction. The XDFI optical chain consists of an asymmetrically cut, Bragg-type monochromator-collimator that provides a planar monochromatic X-ray beam, a positioning stage for the specimens, a Laue-case angle analyzer, and one or two cameras to capture the dark- and bright-field images. We demonstrate the soft-tissue discrimination capabilities of XDFI by reconstructing images with absorption and phase contrast. Using a variety of specimens, such as breast tissue with cancer, joints with articular cartilage, and an ex-vivo human eye, we show that refraction-based contrast derived from XDFI is more effective in characterizing anatomical features, articular pathology, and neoplastic disease than conventional absorption-based images. For example, XDFI of breast tissue can discriminate between the normal and diseased terminal duct lobular unit, and between invasive and in-situ cancer. The final section of this paper is devoted to potential future developments to enable clinical and histopathological applications of this technique.

16.
Predictive models based on radiomics and machine learning (ML) need large, annotated datasets for training, which are often difficult to collect. We designed an operative pipeline for model training that exploits data already available to the scientific community. The aim of this work was to explore the capability of radiomic features to predict tumor histology and stage in patients with non-small cell lung cancer (NSCLC). We analyzed the radiotherapy planning thoracic CT scans of a proprietary sample of 47 subjects (L-RT) and integrated this dataset with a publicly available set of 130 patients from the MAASTRO NSCLC collection (Lung1). We implemented intra- and inter-sample cross-validation (CV) strategies to evaluate the performance of ML predictive models on datasets of limited size. We carried out two classification tasks: histology classification (three classes) and overall stage classification (two classes: stage I and stage II). In the first task, the best performance was obtained by a Random Forest classifier once the analysis was restricted to stage I and II tumors of the merged Lung1 and L-RT dataset (AUC = 0.72 ± 0.11). For overall stage classification, the best results were obtained when training on Lung1 and testing on the L-RT dataset (AUC = 0.72 ± 0.04 for Random Forest and AUC = 0.84 ± 0.03 for a linear-kernel Support Vector Machine). Depending on the classification task and the heterogeneity of the available dataset(s), different CV strategies have to be explored and compared to make a robust assessment of the potential of a predictive model based on radiomics and ML.
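The inter-sample validation strategy described above (train on one cohort, test on the other) can be sketched as follows. The feature matrices below are synthetic stand-ins for the real radiomic features, and the classifier settings are illustrative assumptions:

```python
# Minimal sketch: inter-sample CV, training on Lung1 and testing on L-RT.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_lung1 = rng.normal(size=(130, 50)); y_lung1 = rng.integers(0, 2, 130)  # Lung1
X_lrt = rng.normal(size=(47, 50)); y_lrt = rng.integers(0, 2, 47)        # L-RT

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_lung1, y_lung1)
auc = roc_auc_score(y_lrt, clf.predict_proba(X_lrt)[:, 1])
print(f"inter-sample AUC (train Lung1, test L-RT): {auc:.2f}")
```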
