Found 20 similar records (search time: 15 ms)
1.
Purpose: Artificial intelligence (AI) models are playing an increasing role in biomedical research and healthcare services. This review focuses on challenging points that must be clarified when developing AI applications as clinical decision support systems in a real-world context.
Methods: A narrative review was performed, including a critical assessment of articles published between 1989 and 2021 that informed the challenging sections.
Results: We first illustrate the architectural characteristics of machine learning (ML)/radiomics and deep learning (DL) approaches. For ML/radiomics, the phases of feature selection and of training, validation, and testing are described. DL models are presented as multi-layered artificial/convolutional neural networks that can process images directly. The data curation section covers technical steps such as image labelling, image annotation (with segmentation as a crucial step in radiomics), data harmonization (compensating for differences in imaging protocols that typically generate noise in non-AI imaging studies), and federated learning. We then dedicate specific sections to: sample size calculation, considering multiple testing in AI approaches; data augmentation procedures for working with limited and unbalanced datasets; and the interpretability of AI models (the so-called black-box issue). Finally, the pros and cons of choosing ML versus DL for AI applications in medical imaging are presented synoptically.
Conclusions: Biomedicine and healthcare systems are among the most important fields for AI applications, and medical imaging is probably the most suitable and promising domain. Clarifying these specific challenging points facilitates the development of such systems and their translation to clinical practice.
2.
Introduction: Deep learning (DL) is used to classify, detect, and quantify gold nanoparticles (AuNPs) in a human-sized phantom with a clinical MDCT scanner.
Methods: AuNPs were imaged at concentrations between 0.0274 and 200 mgAu/mL in a 33 cm phantom. 1 mm-thick CT image slices were acquired at 120 kVp with a CTDIvol of 23.6 mGy. A convolutional neural network (CNN) was trained on 544 images to classify 17 different tissue types and AuNP concentrations. A second set of 544 images was then used for testing.
Results: AuNPs were classified with 95% accuracy at 0.1095 mgAu/mL and 97% accuracy at 0.2189 mgAu/mL; both concentrations are below what humans can visually perceive (0.3–1.4 mgAu/mL). AuNP concentrations were also classified with 95% accuracy at 150 and 200 mgAu/mL. These high concentrations produce CT numbers at or above the 12-bit limit of CT's dynamic range, where extended Hounsfield scales would otherwise be required to measure differences in contrast.
Conclusions: We have shown that DL can detect AuNPs at concentrations lower than humans can visually perceive and can also quantify very high AuNP concentrations that exceed the typical 12-bit dynamic range of clinical MDCT scanners. The latter finding is possible because of inhomogeneous AuNP distributions and characteristic streak artifacts. It may even be possible to extend this approach beyond AuNP imaging in CT to quantify high-density objects without extended Hounsfield scales.
3.
MP (Metabolic P) systems are a class of P systems introduced for modelling metabolic processes. We refer to the dynamical inverse problem as the problem of identifying (discrete) mathematical models exhibiting an observed dynamics. In this paper, we complete the definition of the algorithm LGSS (Log-gain Stoichiometric Stepwise regression), introduced in Manca and Marchetti (2011), for solving a general class of dynamical inverse problems. To this end, we develop a reformulation of classical stepwise regression in the context of MP systems. We conclude with a short review of two applications of LGSS for discovering the internal regulation logic of two phenomena relevant to systems biology.
6.
A cell's phenotype is the culmination of several cellular processes through a complex network of molecular interactions that ultimately result in a unique morphological signature. Visual cell phenotyping is the characterization and quantification of these observable cellular traits in images. Recently, cellular phenotyping has undergone a massive overhaul in terms of scale, resolution, and throughput, which is attributable to advances across electronic, optical, and chemical technologies for imaging cells. Coupled with the rapid acceleration of deep learning–based computational tools, these advances have opened up new avenues for innovation across a wide variety of high-throughput cell biology applications. Here, we review applications wherein deep learning is powering the recognition, profiling, and prediction of visual phenotypes to answer important biological questions. As the complexity and scale of imaging assays increase, deep learning offers computational solutions to elucidate the details of previously unexplored cellular phenotypes.
7.
Artificial intelligence (AI) has recently become a very popular buzzword, as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, like radiology, pathology, or oncology, have seized the opportunity, and considerable research and development efforts have been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks, such as diagnosis, segmentation, or classification, the key to safe and efficient use of clinical AI applications depends, in part, on informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss new trends and future research directions. This will help the reader understand how AI methods are becoming a ubiquitous tool in any medical image analysis workflow and pave the way for the clinical implementation of AI-based solutions.
8.
Zhongke Gao Weidong Dang Xinmin Wang Xiaolin Hong Linhua Hou Kai Ma Matjaž Perc 《Cognitive neurodynamics》2021,15(3):369
Electroencephalogram (EEG) signals acquired from the brain can provide an effective representation of a person's physiological and pathological states. Much work has been conducted to study and analyze EEG signals, aiming to infer the current state or the evolution characteristics of the complex brain system. Given the complex interactions between different structural and functional brain regions, brain networks have received a great deal of attention and driven considerable progress in research on brain mechanisms. In addition, deep learning, characterized by autonomous, multi-layer, and diversified feature extraction, has provided an effective and feasible solution for complex classification problems in many fields, including brain-state research. Both approaches show strong capability in EEG signal analysis, but combining the two theories to solve difficult EEG-based classification problems is still in its infancy. Here we review the application of these two theories in EEG signal research, mainly involving brain–computer interfaces, neurological disorders, and cognitive analysis. Furthermore, we develop a framework combining recurrence plots and a convolutional neural network to achieve fatigue-driving recognition. The results demonstrate that complex networks and deep learning can effectively complement each other for better feature extraction and classification, especially in EEG signal analysis.
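The recurrence-plot representation used in the framework above can be illustrated with a minimal sketch (not from the paper; the threshold and series values are hypothetical): a 1D signal is turned into a binary, image-like matrix that a CNN can then classify.

```python
# Hypothetical sketch: build a binary recurrence plot from a 1D signal.
# R[i][j] = 1 when samples i and j are closer than a threshold eps,
# yielding an image-like matrix suitable as CNN input.
def recurrence_matrix(series, eps):
    return [[1 if abs(a - b) <= eps else 0 for b in series] for a in series]

signal = [0.0, 1.0, 0.1, 0.9]  # made-up samples
rp = recurrence_matrix(signal, eps=0.2)
for row in rp:
    print(row)
```

The matrix is symmetric with a unit main diagonal by construction; in practice the plot is usually built from delay-embedded state vectors rather than raw samples, which this sketch omits for brevity.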
9.
Kinematic analysis is often performed with a camera system combined with reflective markers placed over bony landmarks. This method is restrictive (and often expensive), and limits the ability to perform analyses outside of the lab. In the present study, we used a markerless deep learning-based method to perform 2D kinematic analysis of deep water running, a task that poses several challenges to image processing methods. A single GoPro camera recorded sagittal plane lower limb motion. A deep neural network was trained using data from 17 individuals, and then used to predict the locations of markers that approximated joint centres. We found that 300–400 labelled images were sufficient to train the network to be able to position joint markers with an accuracy similar to that of a human labeler (mean difference < 3 pixels, around 1 cm). This level of accuracy is sufficient for many 2D applications, such as sports biomechanics, coaching/training, and rehabilitation. The method was sensitive enough to differentiate between closely-spaced running cadences (45–85 strides per minute in increments of 5). We also found high test–retest reliability of mean stride data, with between-session correlation coefficients of 0.90–0.97. Our approach represents a low-cost, adaptable solution for kinematic analysis, and could easily be modified for use in other movements and settings. Using additional cameras, this approach could also be used to perform 3D analyses. The method presented here may have broad applications in different fields, for example by enabling markerless motion analysis to be performed during rehabilitation, training or even competition environments.
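The between-session reliability quoted above (correlation coefficients of 0.90–0.97) corresponds to a Pearson correlation over paired session measurements; a minimal sketch, with illustrative function and data names that are not from the study:

```python
import math

# Illustrative sketch: Pearson correlation between two sessions'
# mean stride measurements, as used for test-retest reliability.
def pearson_r(session1, session2):
    n = len(session1)
    m1 = sum(session1) / n
    m2 = sum(session2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(session1, session2))
    s1 = math.sqrt(sum((a - m1) ** 2 for a in session1))
    s2 = math.sqrt(sum((b - m2) ** 2 for b in session2))
    return cov / (s1 * s2)

# A perfectly linear relationship between sessions gives r = 1.0.
print(pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # 1.0
```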
10.
Real-time dynamic monitoring of honey bee colonies supports the digitization and intelligent development of beekeeping and is of great significance for substantially improving apiary management. Deep learning, a new research direction within artificial intelligence, has already been widely applied in insect taxonomy, behavioral studies, biological pest control, and other fields. With the rapid development of deep learning detection algorithms, deep learning-based bee colony monitoring techniques have continued to emerge, making intelligent beekeeping possible. To promote further application of deep learning in the bee domain, this paper reviews research progress in deep learning for bee species identification, behavior tracking and monitoring, colony health monitoring, and hive monitoring; analyzes problems and future directions in the research and application of deep learning-based colony monitoring; and offers suggestions for applying deep learning in the bee field.
11.
State-dependent computation is key to cognition in both biological and artificial systems. Alan Turing recognized the power of stateful computation when he created the Turing machine with theoretically infinite computational capacity in 1936. Independently, by 1950, ethologists such as Tinbergen and Lorenz also began to implicitly embed rudimentary forms of state-dependent computation to create qualitative models of internal drives and naturally occurring animal behaviors. Here, we reformulate core ethological concepts in explicitly dynamical systems terms for stateful computation. We examine, based on a wealth of recent neural data collected during complex innate behaviors across species, the neural dynamics that determine the temporal structure of internal states. We will also discuss the degree to which the brain can be hierarchically partitioned into nested dynamical systems and the need for a multi-dimensional state-space model of the neuromodulatory system that underlies motivational and affective states. 相似文献
12.
Huangxuan Zhao Ziwen Ke Ningbo Chen Songjian Wang Ke Li Lidai Wang Xiaojing Gong Wei Zheng Liang Song Zhicheng Liu Dong Liang Chengbo Liu 《Journal of biophotonics》2020,13(3)
Deconvolution is the most commonly used image processing method in optical imaging systems to remove the blur caused by the point-spread function (PSF). While successful in deblurring, it suffers from several disadvantages, such as slow processing due to the multiple iterations required, and suboptimal results when the experimental operator chosen to represent the PSF is inaccurate. In this paper, we present a deep-learning-based deblurring method that is fast and applicable to optical microscopic imaging systems. We tested the robustness of the proposed method on publicly available data, simulated data, and experimental data (including 2D optical microscopic data and 3D photoacoustic microscopic data), all of which showed much improved deblurring compared to deconvolution. We compared our results against several existing deconvolution methods; our results are better than those of conventional techniques and require neither multiple iterations nor a pre-determined experimental operator. Our method has several advantages, including simple operation, short computation time, good deblurring results, and wide applicability to all types of optical microscopic imaging systems. The deep learning approach opens a new path for deblurring and can be applied in various biomedical imaging fields.
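For contrast with the learned approach, the iterative deconvolution being replaced can be sketched in one dimension. This is a generic Richardson–Lucy scheme, not necessarily the paper's exact baseline, and all names and values are illustrative:

```python
# Toy 1D Richardson-Lucy deconvolution: the iterative, PSF-dependent
# scheme whose cost motivates learned deblurring.
def convolve(signal, kernel):
    # 'same'-size convolution with a centred (symmetric) kernel,
    # clipping at the boundaries
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                s += signal[idx] * w
        out.append(s)
    return out

def richardson_lucy(observed, psf, iterations=50):
    estimate = [1.0] * len(observed)  # flat initial guess
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / b if b > 1e-12 else 0.0
                 for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

# Blur a spike, then recover it: the estimate re-concentrates at index 2.
blurred = convolve([0.0, 0.0, 1.0, 0.0, 0.0], [0.25, 0.5, 0.25])
sharp = richardson_lucy(blurred, [0.25, 0.5, 0.25])
```

Each iteration requires two convolutions with the assumed PSF, which is exactly the per-image cost (and PSF sensitivity) the learned method avoids at inference time.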
14.
Purpose: Accurate detection and treatment of coronary artery disease currently relies mainly on invasive coronary angiography, which could be avoided if a robust, non-invasive detection methodology emerged. Despite the progress of computational systems, this remains a challenging issue. The present research investigates machine learning and deep learning methods for competing with the medical experts' diagnostic yield. Although highly accurate detection of coronary artery disease, even by experts, is presently implausible, developing artificial intelligence models to compete with the human eye and expertise is the first step towards a state-of-the-art computer-aided diagnostic system.
Methods: A set of 566 patient samples is analysed. The dataset contains polar maps derived from scintigraphic myocardial perfusion imaging studies, clinical data, and coronary angiography results; the latter is considered the reference standard. For classification of the medical images, the InceptionV3 convolutional neural network is employed, while for the categorical and continuous features, neural networks and a Random Forest classifier are proposed.
Results: The research suggests that an optimal strategy for competing with the medical expert's accuracy involves a hybrid multi-input network composed of InceptionV3 and a Random Forest. This method matches the expert's accuracy, which is 79.15% on this dataset.
Conclusion: Image classification using deep learning methods can cooperate with clinical-data classification methods to enhance the robustness of the predictive model, aiming to compete with the medical expert's ability to identify coronary artery disease subjects in a large-scale patient dataset.
15.
Purpose: Among the available methods for synthetic CT (sCT) generation from MR images for MR-guided radiation planning, deep learning algorithms outperform their conventional counterparts. In this study, we investigated the performance of several of the most popular deep learning architectures, including eCNN, U-Net, GAN, V-Net, and ResNet, for the task of sCT generation. As a baseline, an atlas-based method was implemented, against which the results of the deep learning-based models are compared.
Methods: A dataset consisting of 20 co-registered MR-CT pairs of the male pelvis was used to assess the performance of the different sCT production methods. The mean error (ME), mean absolute error (MAE), Pearson correlation coefficient (PCC), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) metrics were computed between the estimated sCT and the ground-truth (reference) CT images.
Results: Visual inspection revealed that the sCTs produced by eCNN, V-Net, and ResNet, unlike the other methods, were less noisy and closely resembled the ground-truth CT image. In the whole pelvis region, eCNN yielded the lowest MAE (26.03 ± 8.85 HU) and ME (0.82 ± 7.06 HU), and the highest PCC metrics were yielded by eCNN (0.93 ± 0.05) and ResNet (0.91 ± 0.02). The ResNet model had the highest PSNR, 29.38 ± 1.75, among all models. In terms of the Dice similarity coefficient, eCNN showed superior performance in identifying the major tissues (air, bone, and soft tissue).
Conclusions: All in all, the eCNN and ResNet deep learning methods showed acceptable performance with clinically tolerable quantification errors.
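The voxel-wise comparison described above can be made concrete with a minimal sketch (hypothetical Hounsfield-unit values; only ME, MAE, and PSNR are shown, and the function names are illustrative, not from the study):

```python
import math

# Illustrative sketch: error metrics between a synthetic CT (sCT)
# and a reference CT, over flat lists of Hounsfield-unit values.
def mean_error(sct, ct):
    """ME: signed bias of the sCT in HU."""
    return sum(s - c for s, c in zip(sct, ct)) / len(ct)

def mean_absolute_error(sct, ct):
    """MAE: average magnitude of the voxel-wise error in HU."""
    return sum(abs(s - c) for s, c in zip(sct, ct)) / len(ct)

def psnr(sct, ct, data_range):
    """PSNR in dB for a given dynamic range (e.g. the full HU span)."""
    mse = sum((s - c) ** 2 for s, c in zip(sct, ct)) / len(ct)
    return 10 * math.log10(data_range ** 2 / mse)

sct = [10.0, -5.0, 30.0, 0.0]  # made-up sCT voxels
ct  = [12.0, -7.0, 28.0, 1.0]  # made-up reference voxels
print(mean_error(sct, ct))           # 0.25 (signed bias)
print(mean_absolute_error(sct, ct))  # 1.75
```

A near-zero ME with a larger MAE, as in the paper's eCNN figures, indicates errors that largely cancel in sign but are still present voxel by voxel.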
16.
Introduction: Our markerless tumor tracking algorithm requires 4DCT data to train models, but 4DCT cannot be used for markerless tracking in respiratory-gated treatment because of inaccuracies and a high radiation dose. We developed a deep neural network (DNN) to generate 4DCT from 3DCT data.
Methods: We used 2420 thoracic 4DCT datasets from 436 patients to train a DNN designed to export nine deformation vector fields (each representing one-ninth of the respiratory cycle) from each CT dataset, based on a 3D convolutional autoencoder with shortcut connections using deformable image registration. The 3DCT data at exhale were then transformed using the predicted deformation vector fields to obtain simulated 4DCT data. We compared markerless tracking accuracy between original and simulated 4DCT datasets for 20 patients. Our tracking algorithm used a machine learning approach with patient-specific model parameters. In the training stage, a pair of digitally reconstructed radiography images was generated using 4DCT for each patient. In the prediction stage, the tracking algorithm calculated tumor position from incoming fluoroscopic image data.
Results: Diaphragmatic displacements averaged over 40 cases were slightly higher (<1.3 mm) for the original 4DCT than for the simulated 4DCT. Tracking positional errors (95th percentile of the absolute value of displacement, "simulated 4DCT" minus "original 4DCT") averaged over the 20 cases were 0.56 mm, 0.65 mm, and 0.96 mm in the X, Y, and Z directions, respectively.
Conclusions: We developed a DNN to generate simulated 4DCT data that are useful for markerless tumor tracking when original 4DCT is not available. Using this DNN would accelerate markerless tumor tracking and increase treatment accuracy in thoracoabdominal treatment.
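The phase-simulation step, warping the exhale 3DCT with a predicted deformation vector field, can be illustrated in one dimension. This is a toy sketch with made-up values and nearest-neighbour sampling, not the paper's 3D registration-based implementation:

```python
# Toy sketch: apply a (1D) deformation vector field to an exhale
# intensity profile to simulate another respiratory phase.
def warp(profile, dvf):
    # Pull-back warp: output[i] samples profile[i - dvf[i]];
    # out-of-range samples are filled with 0.0 (e.g. air).
    out = []
    for i, d in enumerate(dvf):
        src = i - d
        out.append(profile[src] if 0 <= src < len(profile) else 0.0)
    return out

exhale = [0.0, 0.0, 5.0, 0.0, 0.0]  # a feature (e.g. diaphragm edge)
dvf    = [0, 0, 1, 1, 0]            # displacement in voxels per position
print(warp(exhale, dvf))            # feature shifted by one voxel
```

Real deformation fields are 3D, continuous-valued, and applied with interpolation; the sketch only shows why nine such fields suffice to synthesize nine respiratory phases from a single exhale volume.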
17.
Purpose: To develop a computerized detection system for automatic classification of the presence or absence of mass lesions in digital breast tomosynthesis (DBT) annotated exams, based on a deep convolutional neural network (DCNN).
Materials and Methods: Three DCNN architectures working at image level (DBT slice) were compared: two state-of-the-art pre-trained DCNN architectures (AlexNet and VGG19) customized through transfer learning, and one developed from scratch (DBT-DCNN). To evaluate these DCNN-based architectures, we analysed their classification performance on two different datasets provided by two hospital radiology departments. DBT slice images were processed with normalization, background correction, and data augmentation procedures. Accuracy, sensitivity, and area-under-the-curve (AUC) values were evaluated on both datasets using receiver operating characteristic curves. A Grad-CAM technique was also implemented to indicate the lesion position in the DBT slice.
Results: Accuracy, sensitivity, and AUC for the investigated DCNNs are in line with the best performance reported in the field. The DBT-DCNN network developed in this work showed an accuracy of (90% ± 4%) and a sensitivity of (96% ± 3%), with an AUC as good as 0.89 ± 0.04. A k-fold cross-validation test (with k = 4) showed an accuracy of 94.0% ± 0.2%, and an F1-score test gave a value as good as 0.93 ± 0.03. Grad-CAM maps show high activation at pixels within the tumour regions.
Conclusions: We developed a deep learning-based framework (DBT-DCNN) to classify DBT images from clinical exams. We also investigated a possible application of the Grad-CAM technique to identify the lesion position.
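The performance figures above rest on standard definitions; a small sketch with synthetic labels and scores (not data from the study) of accuracy, sensitivity, and AUC computed from per-slice predictions:

```python
# Illustrative sketch of the reported classification metrics.
def accuracy(y_true, y_pred):
    """Fraction of slices classified correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def sensitivity(y_true, y_pred):
    """True-positive rate over slices containing a mass (label 1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    pos = sum(t == 1 for t in y_true)
    return tp / pos

def auc(y_true, scores):
    """Mann-Whitney formulation of ROC AUC: the probability that a
    positive slice scores higher than a negative one (ties count half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 0, 0]            # made-up slice labels
y_pred = [1, 0, 0, 0]            # thresholded network outputs
scores = [0.9, 0.4, 0.6, 0.1]    # raw network scores
print(accuracy(y_true, y_pred))     # 0.75
print(sensitivity(y_true, y_pred))  # 0.5
print(auc(y_true, scores))          # 0.75
```

The Mann-Whitney form avoids building an explicit ROC curve and is threshold-free, which is why AUC can exceed or trail accuracy depending on where the operating point is set.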
18.
Since the first revelation of proteins functioning as macromolecular machines through their three-dimensional structures, researchers have been intrigued by the marvelous ways proteins carry out biochemical processes. The aspiration to understand protein structures has fueled extensive efforts across scientific disciplines. In recent years, it has been demonstrated that proteins with new functionality or shapes can be designed via structure-based modeling methods, and the design strategies have combined all available information, though largely piece by piece, from sequence-derived statistics to detailed atomic-level modeling of chemical interactions. Despite this significant progress, incorporating data-derived approaches through deep learning methods can be a game changer. In this review, we summarize current progress, compare the arc of development of deep learning approaches with that of conventional methods, and describe the motivation and concepts behind current strategies that may lead to future opportunities.
20.
Event-related brain potentials (ERP) are important neural correlates of cognitive processes. In the domain of language processing, the N400 and P600 reflect lexical-semantic integration and syntactic processing problems, respectively. We suggest an interpretation of these markers in terms of dynamical systems theory and present two nonlinear dynamical models for syntactic computations where different processing strategies correspond to functionally different regions in the system's phase space.
Peter beim Graben