Similar Literature
15 similar documents found
1.
Photoacoustic computed tomography (PACT) is a non-invasive imaging technique offering high contrast, high resolution, and deep penetration in biological tissues. We report a PACT system equipped with a high-frequency linear transducer array for mapping the microvascular network of a whole mouse brain with the skull intact and for studying its hemodynamic activities. The linear array was scanned in the coronal plane to collect data from different angles, and full-view images were synthesized from the limited-view images in which vessels were only partially revealed. We investigated spontaneous neural activities in the deep brain by monitoring the concentration of hemoglobin in the blood vessels and observed strong interhemispheric correlations between several chosen functional regions, both in the cortical layer and in the deep regions. We also studied neural activities during an epileptic seizure and observed an epileptic wave spreading around the injection site and a corresponding wave propagating in the opposite hemisphere.
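At its simplest, the interhemispheric correlation analysis described above reduces to correlating hemoglobin time courses extracted from mirrored regions. A minimal sketch with synthetic signals (the array names and signal model are illustrative, not from the paper):

import numpy as np

# Synthetic hemoglobin-concentration time courses for two mirrored
# functional regions (one per hemisphere); the shared component stands
# in for correlated spontaneous activity.
rng = np.random.default_rng(0)
shared = rng.standard_normal(600)                # common hemodynamic signal
left = shared + 0.3 * rng.standard_normal(600)   # left-hemisphere region
right = shared + 0.3 * rng.standard_normal(600)  # right-hemisphere region

# Pearson correlation between the two regions quantifies their
# interhemispheric functional connectivity.
r = np.corrcoef(left, right)[0, 1]
print(f"interhemispheric correlation r = {r:.3f}")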



2.
Non-invasive photoacoustic tomography (PAT) of mouse brains with intact skulls has been a challenge due to the skull's strong acoustic attenuation, aberration, and reverberation, especially in the high-frequency range (>15 MHz). In this paper, we systematically investigated the impacts of the murine skull on photoacoustic wave propagation and on PAT image reconstruction. We studied the photoacoustic wave aberration caused by the acoustic impedance mismatch at the skull boundaries and by the mode conversion between the longitudinal and shear waves. The wave's reverberation within the skull was investigated for both longitudinal and shear modes. In the inverse process, we reconstructed transcranial photoacoustic computed tomography (PACT) and photoacoustic microscopy (PAM) images of a point target enclosed by the mouse skull, showing the skull's different impacts on the two modalities. Finally, we experimentally validated the simulations by imaging an in vitro mouse skull phantom using representative transcranial PAM and PACT systems. The experimental results agreed well with the simulations and confirmed the accuracy of our forward and inverse models. We expect that our results will provide a better understanding of the impacts of the murine skull on transcranial photoacoustic brain imaging and pave the way for future technical improvements.
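The impedance-mismatch aberration mentioned above follows from standard acoustics. A sketch of the normal-incidence pressure reflection and transmission coefficients at a soft tissue-skull boundary (the impedance values are rough literature figures, not taken from this paper):

# Normal-incidence pressure reflection/transmission at a tissue-skull boundary.
# Acoustic impedance Z = rho * c; the values below are approximate literature
# figures for soft tissue and cortical bone, not measurements from this study.
Z_tissue = 1.5e6  # soft tissue, kg/(m^2*s)
Z_skull = 7.4e6   # cortical bone, kg/(m^2*s)

R = (Z_skull - Z_tissue) / (Z_skull + Z_tissue)  # reflected pressure fraction
T = 2 * Z_skull / (Z_skull + Z_tissue)           # transmitted pressure fraction

print(f"pressure reflection coefficient:   {R:.2f}")  # ~0.66
print(f"pressure transmission coefficient: {T:.2f}")  # ~1.66 (pressure, not energy)

The large reflection coefficient is one way to see why the skull so strongly distorts transcranial photoacoustic signals.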

3.
Camera traps often produce massive numbers of images, and empty images that contain no animals are usually the overwhelming majority. Deep learning is widely used to identify empty camera trap images automatically. Existing high-accuracy methods are based on millions of training samples (images), and labeling those samples manually carries substantial time and personnel costs. Reducing the number of training samples can save the cost of manually labeling images. However, deep learning models trained on a small dataset produce a large omission error on animal images: many animal images tend to be identified as empty, which may cost opportunities to discover and observe species. Therefore, building a deep convolutional neural network (DCNN) model with small errors on a small dataset remains a challenge. Using DCNNs and a small dataset, we proposed an ensemble learning approach based on conservative strategies to identify and remove empty images automatically. Furthermore, we proposed three automatic identification schemes for users who accept different omission errors on animal images. Our experimental results showed that these three schemes automatically identified and removed 50.78%, 58.48%, and 77.51% of the empty images in the dataset when the omission errors were 0.70%, 1.13%, and 2.54%, respectively. The analysis showed that using our schemes to automatically identify empty images did not omit species information and only slightly changed the frequency of species occurrence. When only a small dataset is available, our approach offers users an alternative for automatically identifying and removing empty images, which can significantly reduce the time and personnel costs required to remove empty images manually. The cost savings are comparable to the percentage of empty images removed by the models.
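A minimal sketch of a conservative ensemble rule of the kind described above: an image is discarded as empty only when every model in the ensemble calls it empty with high confidence, so animal images are rarely removed by mistake (the threshold and prediction format are hypothetical, not the authors' published code):

import numpy as np

def is_empty_conservative(empty_probs, threshold=0.95):
    # Discard an image only if *every* ensemble member assigns a high
    # 'empty' probability; any disagreement keeps the image, trading
    # fewer removals for a low omission error on animal images.
    return all(p >= threshold for p in empty_probs)

# Hypothetical 'empty' probabilities from three CNNs for four images.
predictions = np.array([
    [0.99, 0.97, 0.98],  # all confident    -> removed as empty
    [0.99, 0.60, 0.98],  # one model unsure -> kept
    [0.10, 0.20, 0.05],  # likely an animal -> kept
    [0.96, 0.95, 0.99],  # all confident    -> removed as empty
])

keep = [not is_empty_conservative(p) for p in predictions]
print(keep)  # [False, True, True, False]

Raising the threshold lowers the omission error on animal images at the price of removing fewer empty images, which mirrors the trade-off across the three schemes reported above.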

4.
Optical coherence tomography angiography (OCTA) offers a noninvasive, label-free solution for imaging retinal vasculature at capillary-level resolution. In principle, improved resolution implies a better chance of revealing subtle microvascular distortions associated with eye diseases that are asymptomatic in their early stages. However, massive screening requires experienced clinicians to examine retinal images manually, which may result in human error and hinder objective screening. Recently, quantitative OCTA features have been developed to standardize and document retinal vascular changes. The feasibility of using quantitative OCTA features for machine learning classification of different retinopathies has been demonstrated. Deep learning-based applications have also been explored for automatic OCTA image analysis and disease classification. In this article, we summarize recent developments in quantitative OCTA features, machine learning image analysis, and classification.

6.
Auscultation plays an important role in the clinic, and the research community has been exploring machine learning (ML) to enable remote and automatic auscultation for respiratory condition screening via sounds. To give a big picture of this field, in this narrative review we describe publicly available audio databases that can be used for experiments, survey the ML methods proposed to date, and flag some under-considered issues that still need attention. Compared to existing surveys on the topic, we cover the latest literature, especially the audio-based COVID-19 detection studies that have gained extensive attention in the last two years. This work can help facilitate the application of artificial intelligence in the respiratory auscultation field.

7.
  1. A time-consuming challenge faced by camera trap practitioners is the extraction of meaningful data from images to inform ecological management. An increasingly popular solution is automated image classification software. However, most solutions are not sufficiently robust to be deployed on a large scale because they lack location invariance when models are transferred between sites. This prevents optimal use of ecological data and leads to significant expenditure of time and resources to annotate and retrain deep learning models.
  2. We present a method ecologists can use to develop optimized, location-invariant camera trap object detectors by (a) evaluating publicly available image datasets characterized by high intradataset variability for training deep learning models for camera trap object detection and (b) using small subsets of camera trap images to optimize models for high-accuracy, domain-specific applications.
  3. We collected and annotated three datasets of images of striped hyena, rhinoceros, and pigs, from the image‐sharing websites FlickR and iNaturalist (FiN), to train three object detection models. We compared the performance of these models to that of three models trained on the Wildlife Conservation Society and Camera CATalogue datasets, when tested on out‐of‐sample Snapshot Serengeti datasets. We then increased FiN model robustness by infusing small subsets of camera trap images into training.
  4. In all experiments, the mean average precision (mAP) of the FiN-trained models was significantly higher (82.33%–88.59%) than that achieved by the models trained only on camera trap datasets (38.5%–66.74%). Infusion further improved mAP by 1.78%–32.08%.
  5. Ecologists can use FiN images to train deep learning object detection solutions for camera trap image processing and develop location-invariant, robust, out-of-the-box software. Models can be further optimized by infusing 5%–10% camera trap images into the training data, as sketched after this list. This would allow AI technologies to be deployed on a large scale in ecological applications. Datasets and code related to this study are open source and available at this repository: https://doi.org/10.5061/dryad.1c59zw3tx.
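A minimal sketch of the infusion step under the assumptions above: a small random subset of annotated camera trap images is mixed into the FlickR/iNaturalist (FiN) training pool before training (the file names, label format, and sampling fraction are illustrative, not taken from the paper's repository):

import random

def infuse(fin_images, camera_trap_images, fraction=0.10, seed=42):
    # Add a small random subset (here 10%) of annotated camera trap
    # images to the FiN pool to make the detector location invariant.
    rng = random.Random(seed)
    n_infused = max(1, int(fraction * len(camera_trap_images)))
    return fin_images + rng.sample(camera_trap_images, n_infused)

# Hypothetical annotation lists of (image path, label) pairs.
fin = [(f"fin/{i:05d}.jpg", "pig") for i in range(5000)]
traps = [(f"trap/{i:05d}.jpg", "pig") for i in range(800)]

train_set = infuse(fin, traps)
print(len(train_set))  # 5080: the FiN pool plus 10% of the camera trap images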

8.
Physics has delivered extraordinary developments in almost every facet of modern life. From the humble thermometer and stethoscope to X-ray, CT, MRI, ultrasound, PET, and radiotherapy, these advances have transformed our health, yielding both morphological and functional metrics. Recently, high-resolution, label-free imaging of the microcirculation at clinically relevant depths has become available in the research domain. In this paper, we present a comprehensive review of current imaging techniques, state-of-the-art advancements and applications, and general perspectives on the prospects for these modalities in the clinical realm. (© 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

9.
The identification and characterization of the structural sites that contribute to protein function are crucial for understanding biological mechanisms, evaluating disease risk, and developing targeted therapies. However, the quantity of known protein structures is rapidly outpacing our ability to functionally annotate them. Existing methods for function prediction either do not operate on local sites, suffer from high false positive or false negative rates, or require large site-specific training datasets, necessitating the development of new computational methods for annotating functional sites at scale. We present COLLAPSE (Compressed Latents Learned from Aligned Protein Structural Environments), a framework for learning deep representations of protein sites. COLLAPSE operates directly on the 3D positions of atoms surrounding a site and uses evolutionary relationships between homologous proteins as a self-supervision signal, enabling the learned embeddings to implicitly capture structure–function relationships within each site. Our representations generalize across disparate tasks in a transfer learning context, achieving state-of-the-art performance on standardized benchmarks (protein–protein interactions and mutation stability) and on the prediction of functional sites from the PROSITE database. We use COLLAPSE to search for similar sites across large protein datasets and to annotate proteins based on a database of known functional sites. These methods demonstrate that COLLAPSE is computationally efficient, tunable, and interpretable, providing a general-purpose platform for computational protein analysis.
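The site-search application described above amounts to nearest-neighbor retrieval in the learned embedding space. A minimal sketch assuming site embeddings are already available as fixed-length vectors (the dimensionality and data are placeholders; COLLAPSE's actual API is not shown):

import numpy as np

# Placeholder embeddings: one query site and a database of 10,000 sites,
# each represented by a 512-dimensional vector.
rng = np.random.default_rng(1)
query = rng.standard_normal(512)
database = rng.standard_normal((10_000, 512))

# Cosine similarity between the query and every database site; the top
# hits are candidate functional matches for the query site.
scores = database @ query / (np.linalg.norm(database, axis=1) * np.linalg.norm(query))
top5 = np.argsort(scores)[::-1][:5]
print(top5, scores[top5].round(3))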

10.
Understanding large-scale crop growth and its responses to climate change is critical for yield estimation and prediction, especially under the increased frequency of extreme climate and weather events. County-level corn phenology varies spatially and interannually across the Corn Belt in the United States, where precipitation and heat stress present temporal patterns across growth phases (GPs) and vary interannually. In this study, we developed a long short-term memory (LSTM) model that integrates heterogeneous crop phenology, meteorology, and remote sensing data to estimate county-level corn yields. By conflating heterogeneous phenology-based remote sensing and meteorological indices, the LSTM model accounted for 76% of yield variations across the Corn Belt, improved from the 39% of yield variations explained by phenology-based meteorological indices alone. The LSTM model outperformed least absolute shrinkage and selection operator (LASSO) regression and random forest (RF) approaches for end-of-season yield estimation, as a result of its recurrent neural network structure, which can incorporate cumulative and nonlinear relationships between corn yield and environmental factors. The results showed that the period from silking to dough was the most critical for crop yield estimation. The LSTM model provided robust yield estimation under the extreme weather events of 2012, reducing the root-mean-square error to 1.47 Mg/ha, from 1.93 Mg/ha for LASSO and 2.43 Mg/ha for RF. The LSTM model has the capability to learn general patterns from high-dimensional (spectral, spatial, and temporal) input features to achieve robust county-level crop yield estimation. This deep learning approach holds great promise for better understanding the global condition of crop growth based on publicly available remote sensing and meteorological data.
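A minimal sketch of the kind of LSTM regressor described above, in PyTorch: a per-county sequence of phenology-aligned remote sensing and meteorological features maps to a single end-of-season yield value (the feature count, sequence length, and layer sizes are illustrative, not the paper's architecture):

import torch
import torch.nn as nn

class YieldLSTM(nn.Module):
    # Sequence of per-period feature vectors -> one yield estimate (Mg/ha).
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, periods, features)
        _, (h_n, _) = self.lstm(x)     # h_n: (1, batch, hidden), final state
        return self.head(h_n[-1]).squeeze(-1)  # (batch,) yield estimates

# Hypothetical batch: 32 counties, 20 in-season periods, 8 features each.
model = YieldLSTM()
x = torch.randn(32, 20, 8)
print(model(x).shape)  # torch.Size([32])

The recurrent state is what lets such a model accumulate effects across growth phases, the property the abstract credits for the LSTM's advantage over LASSO and RF.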

11.
Purpose: A novel fast kilovoltage-switching dual-energy CT with deep learning [deep learning-based spectral CT (DL-Spectral CT)], which generates a complete sinogram for each kilovoltage setting using deep learning views that complement the measured views at each energy, was commercialized in 2020. The purpose of this study was to evaluate the accuracy of CT numbers in virtual monochromatic images (VMIs) and of iodine quantification at various radiation doses using DL-Spectral CT.
Materials and methods: Two multi-energy phantoms (large and small) containing several rods representing different materials (iodine, calcium, blood, and adipose) were scanned by DL-Spectral CT at varying radiation doses. Images were reconstructed using three reconstruction parameters (body, lung, and bone). The absolute percentage errors (APEs) of the CT numbers on VMIs at 50, 70, and 100 keV and of the iodine quantification were compared among radiation dose protocols.
Results: The APEs of the CT numbers on VMIs were <15% in both the large and small phantoms, except at the minimum dose in the large phantom. There were no significant differences among radiation dose protocols at computed tomography dose index volumes of 12.3 mGy or larger. The accuracy of iodine quantification provided by the body parameter was significantly better than that obtained with the lung and bone parameters. Increasing the radiation dose did not always improve the accuracy of iodine quantification, regardless of the reconstruction parameter and phantom size.
Conclusion: The accuracy of iodine quantification and of CT numbers on VMIs in DL-Spectral CT was not affected by the radiation dose, except at an extremely low radiation dose for the large body size.
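The APE metric used above has a standard form; a one-function sketch with illustrative numbers (the measured and nominal CT values below are made up, not the study's data):

def absolute_percentage_error(measured, reference):
    # APE between a measured CT number (HU) and its nominal value.
    return abs(measured - reference) / abs(reference) * 100.0

# Illustrative example: a 70 keV VMI measurement of an iodine rod whose
# nominal CT number is 120 HU.
print(f"{absolute_percentage_error(112.0, 120.0):.1f}%")  # 6.7%

Note that APE is unstable for materials whose nominal CT number is near 0 HU, since the denominator approaches zero; phantom studies therefore report it for rods with well-separated nominal values.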

12.
Purpose: To perform a systematic review of the research on the application of artificial intelligence (AI) to imaging published in Italy, and to identify its fields of application, methods, and results.
Materials and methods: A Pubmed search was conducted using the terms Artificial Intelligence, Machine Learning, Deep Learning, and imaging, with Italy as the affiliation, excluding reviews and papers outside the 2015–2020 time interval. In a second phase, participants of the AI4MP working group on Artificial Intelligence of the Italian Association of Physics in Medicine (AIFM) searched for papers on AI in imaging.
Results: The Pubmed search produced 794 results. 168 studies were selected, of which 122 came from the Pubmed search and 46 from the working group. The most used imaging modality was MRI (44%), followed by CT (12%) and radiography/mammography (11%). The most common clinical indications were neurological diseases (29%) and the diagnosis of cancer (25%). Classification was the most common task for AI (57%), followed by segmentation (16%). 65% of the studies used machine learning and 35% used deep learning. We observed a rapid increase in Italian research on artificial intelligence over the last 5 years, peaking at a 155% increase from 2018 to 2019.
Conclusions: We are witnessing an unprecedented interest in AI applied to imaging in Italy, across a diversity of fields and imaging techniques. Further initiatives are needed to build common frameworks and databases, collaborations among different types of institutions, and guidelines for research on AI.

15.
Methane flux (FCH4) measurements using the eddy covariance technique have increased over the past decade. FCH4 measurements commonly include data gaps, as is the case with CO2 and energy fluxes. However, gap-filling FCH4 data is more challenging than for other fluxes due to its unique characteristics, including multidriver dependency, variability across multiple timescales, nonstationarity, spatial heterogeneity of flux footprints, and the lagged influence of biophysical drivers. Some researchers have applied the marginal distribution sampling (MDS) algorithm, a standard gap-filling method for other fluxes, to FCH4 datasets, and others have applied artificial neural networks (ANN) to address the challenging characteristics of FCH4. However, there is still no consensus regarding FCH4 gap-filling methods due to limited comparative research, and we are not aware of applications of machine learning (ML) algorithms beyond ANN to FCH4 datasets. Here, we compare the performance of MDS and three ML algorithms (ANN, random forest [RF], and support vector machine [SVM]) using multiple combinations of ancillary variables. In addition, we applied principal component analysis (PCA) to the inputs of the algorithms to address the multidriver dependency of FCH4 and reduce the internal complexity of the algorithmic structures. We applied this approach to five benchmark FCH4 datasets from both natural and managed systems located in temperate and tropical wetlands and rice paddies. Results indicate that PCA improved the performance of MDS compared to traditional inputs, whereas the ML algorithms performed better when using all available biophysical variables than when using PCA-derived inputs. Overall, RF was found to outperform the other techniques at all sites. We also found that gap-filling uncertainty is much larger than measurement uncertainty in the accumulated CH4 budget. Therefore, the approach used for FCH4 gap filling can have important implications for characterizing annual ecosystem-scale methane budgets, whose accuracy is important for evaluating natural and managed systems and their interactions with global change processes.
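A minimal sketch of the random forest comparison described above, with synthetic stand-ins for the driver matrix and the FCH4 series (the real benchmark datasets and driver sets are not reproduced here; scikit-learn is assumed):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic half-hourly drivers (stand-ins for air/soil temperature, water
# table depth, etc.) and an FCH4 series with a nonlinear driver dependence.
X = rng.standard_normal((5000, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RF trained on all available drivers (the configuration found best above).
rf_full = RandomForestRegressor(n_estimators=200, random_state=0)
rf_full.fit(X_train, y_train)

# RF trained on PCA-compressed drivers, for comparison.
pca = PCA(n_components=3).fit(X_train)
rf_pca = RandomForestRegressor(n_estimators=200, random_state=0)
rf_pca.fit(pca.transform(X_train), y_train)

print("R2, all drivers:", round(rf_full.score(X_test, y_test), 3))
print("R2, PCA inputs :", round(rf_pca.score(pca.transform(X_test), y_test), 3))

In actual gap filling, the fitted model would then predict FCH4 at the gap timestamps from the drivers measured there.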
