Similar Documents
Found 20 similar documents (search time: 39 ms)
1.
About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects from unlabeled training images. Unlike most other deep learning models that explicitly address the global structure of images in every layer, sparse HMAX addresses local-to-global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.
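The pooling step the abstract credits with making SSC and ICA usable at higher levels can be sketched in a few lines. This is a minimal illustration, not the published implementation; the window size and non-overlapping layout are assumptions.

```python
import numpy as np

def max_pool(feature_map, pool_size=2):
    """Non-overlapping max pooling over a 2-D feature map.

    In sparse HMAX, max pooling between layers is what introduces the
    linear higher-order statistical regularities that let standard
    sparse coding / ICA be applied to the previous layer's output.
    """
    h, w = feature_map.shape
    h2, w2 = h // pool_size, w // pool_size
    trimmed = feature_map[:h2 * pool_size, :w2 * pool_size]
    # Group pixels into pool_size x pool_size blocks and take each block's max.
    return trimmed.reshape(h2, pool_size, w2, pool_size).max(axis=(1, 3))

fm = np.arange(16, dtype=float).reshape(4, 4)
pooled = max_pool(fm)   # 4x4 map -> 2x2 map of local maxima
```

Patch-based learning (SSC or ICA) would then be run on patches drawn from `pooled` rather than on the raw image.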

2.
Convallaria keiskei, a plant species indigenous to Japan, is on the verge of extinction. In the past, it has been manually protected and managed. Unmanned aerial vehicles (UAVs), which are already applied in various fields such as agriculture, surveying, and logistics, can be used to automate this task. Image processing and machine learning techniques applied to images obtained from UAVs can automate Convallaria keiskei classification, help to estimate the increase in colony numbers, and reduce the detection cost. In a previous study, a flower number estimation method that combines image processing and a convolutional neural network (CNN) was proposed. However, leaf regions similar to flower regions were misidentified as flower regions, reducing accuracy. Therefore, in this study, a method was investigated to reduce the number of false positives by excluding areas similar to the flower regions. Specifically, a novel detection method combining image processing, CNN, and fuzzy c-means is proposed. To validate the proposed method, it was compared with the previous method as well as a variant in which k-means was used instead of fuzzy c-means. All results were evaluated using flower distribution maps marked by field workers. The proposed method improved the F-measure by up to 22.0% compared with the previous method. Application of the proposed method to orthorectified images facilitates the understanding of flower populations over a wide range of areas, which can contribute to the conservation and management of the species.
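Fuzzy c-means, the clustering step that distinguishes this method from its k-means variant, assigns each sample a soft membership in every cluster rather than a hard label. The sketch below is a generic textbook FCM, not the paper's pipeline; the fuzzifier `m`, iteration count, and initialization are illustrative defaults.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means sketch.

    Returns (centers, U) where U[i, k] is the degree to which sample i
    belongs to cluster k. Soft memberships let ambiguous regions (e.g.
    leaf areas that resemble flowers) be down-weighted instead of being
    hard-assigned to the flower cluster.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))       # closer centers -> larger membership
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers, U = fuzzy_c_means(X)
hard = U.argmax(axis=1)   # hard labels recovered from soft memberships
```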

3.
《IRBM》2021,42(5):378-389
White blood cells play an important role in assessing the health condition of an individual. Diagnosis of blood diseases involves the identification and characterization of a patient's blood sample. Recent approaches employ Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) models, as well as merged CNN-RNN models, to enrich the understanding of image content. End-to-end training on large medical image datasets has encouraged the discovery of prominent features from sample images. Techniques that extract a single-cell patch from a blood sample have achieved good performance for blood cell classification. However, these approaches are unable to address the issue of multiple overlapping cells. To address this problem, the Canonical Correlation Analysis (CCA) method is used in this paper. The CCA method accounts for the effects of overlapping nuclei: multiple nucleus patches are extracted, learned, and trained jointly. By handling overlapping blood cells jointly, classification time is reduced, the dimensionality of the input images is compressed, and the network converges faster with more accurate weight parameters. Experimental results on a publicly available database show that the proposed CNN-RNN merging model with canonical correlation analysis achieves higher accuracy than other state-of-the-art blood cell classification techniques.
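The core of CCA is finding maximally correlated linear projections of two views (here, features from two nucleus patches). A plain-numpy sketch of the first canonical correlation, assuming small well-conditioned feature matrices (the ridge term is an illustrative numerical safeguard):

```python
import numpy as np

def cca_first_correlation(X, Y):
    """First canonical correlation between two views X (n, dx) and Y (n, dy).

    Whiten each view with the inverse Cholesky factor of its covariance;
    the singular values of the whitened cross-covariance are then the
    canonical correlations.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0] - 1
    Cxx = Xc.T @ Xc / n + 1e-8 * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + 1e-8 * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    s = np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)
    return float(s[0])

rng = np.random.default_rng(1)
X = rng.random((30, 3))
Y = X @ np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 3.0]])  # Y is a linear map of X
r = cca_first_correlation(X, Y)   # should be ~1 for perfectly related views
```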

4.
With digitisation and the development of computer-aided diagnosis, histopathological image analysis has attracted considerable interest in recent years. In this article, we address the problem of the automated annotation of skin biopsy images, a special type of histopathological image analysis. In contrast to previous well-studied methods in histopathology, we propose a novel annotation method based on a multi-instance learning framework. The proposed framework first represents each skin biopsy image as a multi-instance sample using a graph cutting method, decomposing the image into a set of visually disjoint regions. Then, we construct two classification models using multi-instance learning algorithms, one of which provides determinate results while the other calculates a posterior probability. We evaluate the proposed annotation framework using a real dataset containing 6691 skin biopsy images, with 15 properties as target annotation terms. The results indicate that the proposed method is effective and medically acceptable.
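The multi-instance framing can be made concrete with the standard bag-level decision rule: an image (a bag of region feature vectors) is labeled positive if at least one region scores positive. This is a generic MIL sketch, not the article's trained models; the scorer below is a toy stand-in.

```python
import numpy as np

def bag_labels(bags, instance_score):
    """Bag-level decision under the standard multi-instance assumption:
    a bag is positive iff its highest-scoring instance is positive."""
    return [max(instance_score(inst) for inst in bag) > 0 for bag in bags]

# Toy instance scorer (illustrative): positive if mean feature value > 0.5.
score = lambda region: float(np.mean(region) - 0.5)

bags = [
    [np.array([0.2, 0.1]), np.array([0.9, 0.8])],  # contains one positive region
    [np.array([0.1, 0.2]), np.array([0.3, 0.2])],  # no positive region
]
labels = bag_labels(bags, score)
```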

5.
The scarcity of training annotation is one of the major challenges for the application of deep learning technology in medical image analysis. Recently, self-supervised learning has provided a powerful solution to alleviate this challenge by extracting useful features from a large amount of unlabeled training data. In this article, we propose a simple and effective self-supervised learning method for leukocyte classification by identifying the different transformations of leukocyte images, without requiring large batches of negative samples or specialized architectures. Specifically, a convolutional neural network backbone takes different transformations of a leukocyte image as input for feature extraction. Then, a pretext task of self-supervised transformation recognition on the extracted features is conducted by a classifier, which helps the backbone learn useful representations that generalize well across different leukocyte types and datasets. In the experiment, we systematically study the effect of different transformation compositions on useful leukocyte feature extraction. Compared with five typical baselines of self-supervised image classification, experimental results demonstrate that our method performs better under different evaluation protocols, including linear evaluation, domain transfer, and fine-tuning, which proves the effectiveness of the proposed method.
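The key move in transformation-recognition self-supervision is that labels come for free: each unlabeled image is transformed, and the transformation index becomes the label the classifier must recover. A minimal sketch of building such (input, label) pairs, assuming rotations as the transformation family (the paper studies several compositions; the CNN backbone is omitted here):

```python
import numpy as np

def make_rotation_pretext(images, seed=0):
    """Turn unlabeled images into a supervised pretext task: rotate each
    image by 0/90/180/270 degrees and use the rotation index as its label."""
    rng = np.random.default_rng(seed)
    inputs, labels = [], []
    for img in images:
        k = int(rng.integers(0, 4))      # which of the 4 transformations
        inputs.append(np.rot90(img, k))
        labels.append(k)
    return inputs, labels

images = [np.arange(9).reshape(3, 3), np.eye(3)]
inputs, labels = make_rotation_pretext(images)
```

A backbone trained to predict `labels` from `inputs` must learn orientation-sensitive structure, which is what transfers to the downstream leukocyte classifier.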

6.
Plants, a major natural source of oxygen, are among the most important resources for every species in the world. Proper identification of plants is important for many fields. Observation of leaf characteristics is a popular method, as leaves are easily available for examination. Researchers are increasingly applying image processing techniques to identify plants from leaf images. In this paper, we propose a leaf image classification model, called BLeafNet, for plant identification, where the concept of deep learning is combined with Bonferroni fusion learning. Initially, we design five classification models using the ResNet-50 architecture, where five different inputs are used separately in the models. The inputs are five variants of the leaf images: grayscale, RGB, and the three individual RGB channels (red, green, and blue). To fuse the five ResNet-50 outputs, we use the Bonferroni mean operator, as it expresses better connectivity among the confidence scores and also obtains better results than the individual models. We also propose a two-tier training method for properly training the end-to-end model. To evaluate the proposed model, we use the MalayaKew dataset, collected at the Royal Botanic Gardens, Kew, England, which is a very challenging dataset as many leaves from different species have a very similar appearance. The proposed method is additionally evaluated on the Leafsnap and Flavia datasets. The obtained results on both of these datasets confirm the superiority of the model, as it outperforms the results achieved by many state-of-the-art models.
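The Bonferroni mean used for fusion differs from an arithmetic mean in that it couples every ordered pair of scores, which is the "connectivity" the abstract refers to. A direct sketch of the standard operator (the exponents p, q are illustrative defaults, not the paper's tuned values):

```python
import numpy as np

def bonferroni_mean(scores, p=1.0, q=1.0):
    """Bonferroni mean BM^{p,q}(a) = ( (1/(n(n-1))) * sum_{i != j} a_i^p a_j^q )^{1/(p+q)}.

    For identical inputs it reduces to the input value itself, like any
    aggregation mean, but unlike the arithmetic mean it interacts every
    pair of confidence scores being fused.
    """
    a = np.asarray(scores, dtype=float)
    n = len(a)
    total = sum(a[i] ** p * a[j] ** q
                for i in range(n) for j in range(n) if i != j)
    return (total / (n * (n - 1))) ** (1.0 / (p + q))

fused = bonferroni_mean([0.9, 0.8, 0.7])   # fuse 3 models' confidences
```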

7.
《IRBM》2023,44(3):100747
Objectives: The accurate preoperative segmentation of the uterus and uterine fibroids from magnetic resonance images (MRI) is an essential step for diagnosis and real-time ultrasound guidance during high-intensity focused ultrasound (HIFU) surgery. Conventional supervised methods are effective techniques for image segmentation. Recently, semi-supervised segmentation approaches have been reported in the literature. One popular technique in semi-supervised methods is to use pseudo-labels to artificially annotate unlabeled data. However, many existing pseudo-label generation methods rely on a fixed threshold to generate a confidence map, regardless of the proportion of unlabeled and labeled data.
Materials and Methods: To address this issue, we propose a novel semi-supervised framework called Confidence-based Threshold Adaptation Network (CTANet) to improve the quality of pseudo-labels. Specifically, we propose an online pseudo-labeling method that automatically adjusts the threshold, producing high-confidence annotations for unlabeled data and boosting segmentation accuracy. To further improve the network's generalization to the diversity of different patients, we design a novel mixup strategy by regularizing the network on each layer of the decoder and introducing a consistency regularization loss between the outputs of the two sub-networks in CTANet.
Results: We compare our method with several state-of-the-art semi-supervised segmentation methods on the same uterine fibroids dataset containing 297 patients. The performance is evaluated by the Dice similarity coefficient, precision, and recall. The results show that our method outperforms other semi-supervised learning methods. Moreover, for the same training set, our method approaches the segmentation performance of a fully supervised U-Net (100% annotated data) while using 4 times less annotated data (25% annotated, 75% unannotated).
Conclusion: Experimental results illustrate the effectiveness of the proposed semi-supervised approach. The proposed method can contribute to multi-class segmentation of uterine regions from MRI for HIFU treatment.
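The fixed-threshold problem described in the Objectives can be illustrated with a tiny sketch of confidence-based threshold adaptation: instead of a hard-coded cutoff, the threshold tracks a quantile of the current batch's confidences. This is a hedged simplification, not CTANet's actual online rule; the quantile value is an assumption.

```python
import numpy as np

def adaptive_pseudo_labels(probs, quantile=0.8):
    """Pseudo-label unlabeled pixels whose confidence clears an adaptive
    threshold derived from the batch itself.

    probs: (n, n_classes) softmax outputs for unlabeled samples.
    Returns (labels, mask): predicted classes and a boolean mask marking
    which samples are confident enough to use as pseudo-labels.
    """
    conf = probs.max(axis=1)
    threshold = np.quantile(conf, quantile)   # adapts to the batch, not fixed
    mask = conf >= threshold
    return probs.argmax(axis=1), mask

c = np.linspace(0.4, 0.95, 12)
probs = np.column_stack([c, (1 - c) / 2, (1 - c) / 2])
labels, mask = adaptive_pseudo_labels(probs)
```

With a fixed threshold of, say, 0.9, this batch would yield a single pseudo-label; the adaptive rule always keeps the most confident fraction regardless of how the confidence distribution shifts during training.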

8.
Extinction learning in humans: role of the amygdala and vmPFC (cited by 20)
Understanding how fears are acquired is an important step in translating basic research to the treatment of fear-related disorders. However, understanding how learned fears are diminished may be even more valuable. We explored the neural mechanisms of fear extinction in humans. Studies of extinction in nonhuman animals have focused on two interconnected brain regions: the amygdala and the ventral medial prefrontal cortex (vmPFC). Consistent with animal models suggesting that the amygdala is important for both the acquisition and extinction of conditioned fear, amygdala activation was correlated across subjects with the conditioned response in both acquisition and early extinction. Activation in the vmPFC (subgenual anterior cingulate) was primarily linked to the expression of fear learning during a delayed test of extinction, as might have been expected from studies demonstrating this region is critical for the retention of extinction. These results provide evidence that the mechanisms of extinction learning may be preserved across species.

9.
We contrast two computational models of sequence learning. The associative learner posits that learning proceeds by strengthening existing association weights. Alternatively, recoding posits that learning creates new and more efficient representations of the learned sequences. Importantly, both models propose that humans act as optimal learners but capture different statistics of the stimuli in their internal model. Furthermore, these models make dissociable predictions as to how learning changes the neural representation of sequences. We tested these predictions by using fMRI to extract neural activity patterns from the dorsal visual processing stream during a sequence recall task. We observed that only the recoding account can explain the similarity of neural activity patterns, suggesting that participants recode the learned sequences using chunks. We show that associative learning can theoretically store only a very limited number of overlapping sequences, such as those common in ecological working memory tasks, and hence an efficient learner should recode initial sequence representations.
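The storage limitation of a purely associative learner can be seen in a toy demonstration (our illustration, not the paper's model): pairwise association weights blend overlapping sequences, while a recoding learner that stores each sequence as its own chunk keeps them distinct.

```python
import numpy as np

def train_associative(sequences, alphabet="ABC"):
    """Store sequences as pairwise transition weights W[a, b] (toy model)."""
    idx = {ch: i for i, ch in enumerate(alphabet)}
    W = np.zeros((len(alphabet), len(alphabet)))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            W[idx[a], idx[b]] += 1.0
    return W, idx

# Two overlapping sequences blend in the associative weights:
W, idx = train_associative(["ABC", "ACB"])
# After 'A', the weights to 'B' and to 'C' are equal, so recall is ambiguous.

# A recoding learner stores each sequence as its own chunk, so no ambiguity:
chunks = {"ABC", "ACB"}
```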

10.
《IRBM》2021,42(5):334-344
Active learning is an effective solution to interactively select a limited number of informative examples and use them to train a learning algorithm that can achieve its optimal performance for specific tasks. It is suitable for medical image applications in which unlabeled data are abundant but manual annotation could be very time-consuming and expensive. However, designing an effective active learning strategy for informative example selection is a challenging task, due to the intrinsic presence of noise in medical images, the large number of images, and the variety of imaging modalities. In this study, a novel low-rank modeling-based multi-label active learning (LRMMAL) method is developed to address these challenges and select informative examples for training a classifier to achieve the optimal performance. The proposed method independently quantifies image noise and integrates it with other measures to guide a pool-based sampling process to determine the most informative examples for training a classifier. In addition, an automatic adaptive cross entropy-based parameter determination scheme is proposed for further optimizing the example sampling strategy. Experimental results on varied medical image datasets and comparisons with other state-of-the-art multi-label active learning methods illustrate the superior performance of the proposed method.
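The pool-based sampling loop at the heart of such methods can be sketched generically (this is a skeleton of pool-based selection, not LRMMAL itself; the noise estimate is assumed given, and `alpha` is an illustrative mixing weight):

```python
import numpy as np

def select_informative(probs, noise, budget=2, alpha=0.5):
    """Pick the `budget` most informative pool examples to annotate.

    probs: (n, n_labels) predicted per-label probabilities for each candidate.
    noise: (n,) estimated image noise in [0, 1]; noisier images are
           down-weighted so annotation effort is not wasted on them.
    Uncertainty is measured as summed per-label binary entropy.
    """
    eps = 1e-12
    entropy = -(probs * np.log(probs + eps)
                + (1 - probs) * np.log(1 - probs + eps)).sum(axis=1)
    informativeness = entropy - alpha * noise
    return np.argsort(informativeness)[::-1][:budget]

probs = np.array([[0.99, 0.01],   # confident -> uninformative
                  [0.50, 0.50],   # maximally uncertain -> most informative
                  [0.60, 0.40]])
noise = np.zeros(3)
picked = select_informative(probs, noise, budget=1)
```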

11.
Purpose: Four-dimensional computed tomography (4D-CT) plays a useful role in many clinical situations. However, due to hardware limitations, dense sampling along the superior-inferior direction is often not practical. In this paper, we develop a novel multiple Gaussian process regression model to enhance the superior-inferior resolution of lung 4D-CT based on transversal structures.
Methods: The proposed strategy is based on the observation that high-resolution transversal images can recover missing pixels in the superior-inferior direction. Based on this observation, and motivated by the random forest algorithm, we employ a multiple Gaussian process regression model learned from transversal images to improve superior-inferior resolution. Specifically, we first randomly sample 3 × 3 patches from the original transversal images. The central pixel of each patch and the eight-neighbour pixels of its degraded version form the label and input of the training data, respectively. A multiple Gaussian process regression model is then built on the basis of multiple training subsets obtained by random sampling. Finally, the central pixel of each patch is estimated by the proposed model, with the eight-neighbour pixels of each 3 × 3 patch from the interpolated superior-inferior direction images as inputs.
Results: The performance of our method is extensively evaluated using simulated and publicly available datasets. Our experiments show the remarkable performance of the proposed method.
Conclusions: In this paper, we propose a new approach to improve 4D-CT resolution which requires no external data or hardware support and can produce clear coronal/sagittal images for easy viewing.
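The regression building block can be sketched directly: in the paper, the input features are the eight-neighbour pixels of a degraded 3 × 3 patch and the target is the central pixel, with an ensemble of such regressors trained on random subsets. Below is a plain single Gaussian process regressor (kernel choice and hyperparameters are illustrative, and the random-forest-style ensembling is omitted):

```python
import numpy as np

def rbf(A, B, length=1.0):
    """Squared-exponential kernel between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length ** 2))

def gp_predict(X_train, y_train, X_test, noise=1e-6, length=1.0):
    """GP posterior mean: k(X_test, X_train) @ (K + noise*I)^-1 @ y_train."""
    K = rbf(X_train, X_train, length) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf(X_test, X_train, length) @ alpha

# Toy stand-in for (8-neighbour pixels -> central pixel) training pairs.
rng = np.random.default_rng(0)
X = rng.random((20, 8))        # 8-neighbour intensities
y = X.mean(axis=1)             # toy target: central pixel as neighbourhood mean
pred = gp_predict(X, y, X[:3]) # near-interpolation at training inputs
```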

12.
Image processing using traditional photogrammetric methods is a labor-intensive process. The collection of photogrammetry images during aerial surveys is expanding rapidly, creating new challenges to analyze images promptly and efficiently while reducing human error during processing. Computer vision-assisted photogrammetry, a field of artificial intelligence (AI), can automate image processing, greatly enhancing the efficiency of photogrammetry. Here, we present a practical and efficient program capable of automatically extracting fine-scale photogrammetry of East Asian finless porpoises (Neophocaena asiaeorientalis sunameri). Our results indicated that computer vision-assisted photogrammetry could achieve the same accuracy as traditional photogrammetry, and these comparisons were validated against direct measurements. Three-dimensional (3D) models using computer vision-assisted photogrammetric morphometrics generated trustworthy body volume estimates. We also explored a one-image-based 3D modeling technique, which is less accurate but still useful when only one image of the animal is available. Although several limitations exist in the current program, improvements could be made to narrow the virtual-reality gap when more images are available for machine learning and training. We recommend this program for analyzing images of marine mammals possessing a similar morphological contour.

13.
An increasing number of genes have been experimentally confirmed in recent years as causative genes of various human diseases. The newly available knowledge can be exploited by machine learning methods to discover additional unknown genes that are likely to be associated with diseases. In particular, positive unlabeled learning (PU learning) methods, which require only a positive training set P (confirmed disease genes) and an unlabeled set U (the unknown candidate genes) instead of a negative training set N, have been shown to be effective in uncovering new disease genes. Predictions based on a single data source are susceptible to bias from incompleteness and noise in the genomic data, and a single machine learning predictor is prone to bias caused by the inherent limitations of individual methods. In this paper, we propose an effective PU learning framework that integrates multiple biological data sources and an ensemble of powerful machine learning classifiers for disease gene identification. Our proposed method integrates data from multiple biological sources for training PU learning classifiers. A novel ensemble-based PU learning method, EPU, is then used to integrate multiple PU learning classifiers to achieve accurate and robust disease gene predictions. Our evaluation experiments across six disease groups showed that EPU achieved significantly better results than various state-of-the-art prediction methods as well as ensemble learning classifiers. By integrating multiple biological data sources for training and the outputs of an ensemble of PU learning classifiers for prediction, we are able to minimize the potential bias and errors in individual data sources and machine learning algorithms to achieve more accurate and robust disease gene predictions. Our EPU method thus provides an effective framework for integrating additional biological and computational resources for better disease gene prediction.
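One common way to build an ensemble of PU learning classifiers is PU bagging: each round treats a random subsample of the unlabeled set as provisional negatives, trains a base learner, and averages the scores. The sketch below is that generic scheme, not the paper's EPU method; the nearest-centroid base learner is a toy stand-in for the real classifiers.

```python
import numpy as np

def pu_bagging_scores(P, U, n_models=10, seed=0):
    """Average positive-vs-provisional-negative scores over bagging rounds.

    P: (n_p, d) confirmed positives (known disease genes' features).
    U: (n_u, d) unlabeled candidates. Returns one score per row of U;
    higher means more likely to be a hidden positive.
    """
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(U))
    for _ in range(n_models):
        # Treat a random subsample of U as this round's "negatives".
        neg = U[rng.choice(len(U), size=len(P), replace=True)]
        mu_p, mu_n = P.mean(axis=0), neg.mean(axis=0)
        d_p = np.linalg.norm(U - mu_p, axis=1)
        d_n = np.linalg.norm(U - mu_n, axis=1)
        scores += d_n - d_p   # closer to positives than to provisional negatives
    return scores / n_models

P = np.array([[5.0, 5.0], [5.1, 5.0], [4.9, 5.0]])
U = np.array([[5.0, 5.1],   # hidden positive
              [0.0, 0.0],   # likely negative
              [0.2, 0.0],   # likely negative
              [4.8, 5.0]])  # hidden positive
scores = pu_bagging_scores(P, U)
```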

14.
Spectral imaging approaches provide new possibilities for measuring and discriminating fluorescent molecules in living cells and tissues. These approaches often employ tunable filters and robust image processing algorithms to identify many fluorescent labels in a single image set. Here, we present results from a novel spectral imaging technology that scans the fluorescence excitation spectrum, demonstrating that excitation-scanning hyperspectral image data can discriminate among tissue types and estimate the molecular composition of tissues. This approach allows fast, accurate quantification of many fluorescent species from multivariate image data without the need of exogenous labels or dyes. We evaluated the ability of the excitation-scanning approach to identify endogenous fluorescence signatures in multiple unlabeled tissue types. Signatures were screened using multi-pass principal component analysis. Endmember extraction techniques revealed conserved autofluorescent signatures across multiple tissue types. We further examined the ability to detect known molecular signatures by constructing spectral libraries of common endogenous fluorophores and applying multiple spectral analysis techniques on test images from lung, liver and kidney. Spectral deconvolution revealed structure-specific morphologic contrast generated from pure molecule signatures. These results demonstrate that excitation-scanning spectral imaging, coupled with spectral image processing techniques, provides an approach for discriminating among tissue types and assessing the molecular composition of tissues. Additionally, excitation scanning offers the ability to rapidly screen molecular markers across a range of tissues without using fluorescent labels. This approach lays the groundwork for translation of excitation-scanning technologies to clinical imaging platforms.
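Linear spectral deconvolution against a library of pure fluorophore spectra is, at its core, a least-squares problem: express each measured spectrum as a weighted sum of library columns. A minimal sketch (the article's full pipeline also includes PCA screening and endmember extraction, which are omitted here):

```python
import numpy as np

def unmix(pixel_spectrum, library):
    """Estimate fluorophore abundances for one pixel.

    library: (n_wavelengths, n_fluorophores) matrix whose columns are
    pure-fluorophore excitation spectra.
    pixel_spectrum: (n_wavelengths,) measured spectrum for the pixel.
    Returns the least-squares abundance coefficients.
    """
    coeffs, *_ = np.linalg.lstsq(library, pixel_spectrum, rcond=None)
    return coeffs

# Toy 4-wavelength library with two fluorophores, mixed 30% / 70%.
library = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 1.0],
                    [2.0, 1.0]])
pixel = library @ np.array([0.3, 0.7])
coeffs = unmix(pixel, library)
```

Real autofluorescence data would additionally call for non-negativity constraints on the abundances, which plain least squares does not enforce.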

15.
The early symptom of a lung tumor often appears as a nodule on CT scans, and according to statistical studies 30% to 40% of such nodules are malignant. Early detection and classification of lung nodules are therefore crucial to the treatment of lung cancer. With the increasing prevalence of lung cancer, the large number of CT images awaiting diagnosis places a heavy burden on doctors, who may miss or falsely detect abnormalities due to fatigue. Methods: In this study, we propose a novel lung nodule detection method based on the YOLOv3 deep learning algorithm that requires only one preprocessing step. To overcome the problem of scarce training data when starting a new Computer Aided Diagnosis (CAD) study, we first select a small number of diseased regions to simulate a limited-dataset training procedure: 5 nodule patterns are selected and deformed into 110 nodules by random geometric transformation before being fused into 10 normal lung CT images using Poisson image editing. According to the experimental results, the Poisson fusion method achieves a detection rate of about 65.24% when testing on 100 new images. Second, 419 slices from the common database RIDER are used to train and test our YOLOv3 network. The time for lung nodule detection by YOLOv3 is 2-3 times shorter than that of the mainstream algorithm, with a detection accuracy rate of 95.17%. Finally, the configuration of YOLOv3 is optimized using the learning datasets. The results show that YOLOv3 has the advantages of high speed and high accuracy in lung nodule detection, and it can process a large amount of CT image data within a short time to meet the huge demand of clinical practice. In addition, the use of Poisson image editing to generate datasets can reduce the need for raw training data and improve training efficiency.
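Poisson image editing fuses a source patch into a target image by keeping the source's gradients in the interior while matching the target's values on the boundary, then solving a Poisson equation. A 1-D sketch makes the mechanics visible (the study uses the 2-D version to blend deformed nodules into normal lung CT; this reduction is ours):

```python
import numpy as np

def poisson_blend_1d(source, target):
    """Solve the 1-D discrete Poisson equation for seamless blending.

    Interior equations: -f[i-1] + 2f[i] - f[i+1] = source Laplacian,
    boundary conditions:  f[0] = target[0],  f[-1] = target[-1].
    The result keeps the source's shape (gradients) while seamlessly
    matching the target at the patch boundary.
    """
    n = len(source)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = target[0], target[-1]                      # boundary from target
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -1.0, 2.0, -1.0
        b[i] = 2 * source[i] - source[i - 1] - source[i + 1]  # Laplacian from source
    return np.linalg.solve(A, b)

src = np.array([0.0, 1.0, 4.0, 9.0, 16.0])
tgt = src + 10.0               # same shape, different intensity level
blended = poisson_blend_1d(src, tgt)
```

Because the source and target differ by a constant here, the blend reproduces the source's shape shifted to the target's intensity level, which is exactly the "seamless" behavior that makes the fused nodules look native to the host CT slice.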

16.
《IRBM》2022,43(5):405-413
Purpose: Leukaemia is diagnosed conventionally by observing peripheral blood and bone marrow smears under a microscope and with the help of advanced laboratory tests. Image processing-based methods, which are simple, fast, and cheap, can be used to detect and classify leukemic cells by processing and analysing images of microscopic smears. The proposed study aims to classify Acute Lymphoblastic Leukaemia (ALL) using Deep Learning (DL) based techniques.
Procedures: The study used Deep Convolutional Neural Networks (DCNNs) to classify ALL according to the WHO classification scheme, without using any image segmentation or feature extraction, which involve intense computation. Images from an online image bank of the American Society of Hematology (ASH) were used for the classification.
Findings: A classification accuracy of 94.12% is achieved in separating B-cell and T-cell ALL images using the pretrained CNN AlexNet as well as LeukNet, a custom deep learning network designed in the proposed work. The study also compared classification performance using three different training algorithms.
Conclusions: The paper detailed the use of DCNNs to classify ALL without any image segmentation or feature extraction techniques. To the best of the authors' knowledge, classification of ALL into subtypes according to the WHO classification scheme using image processing techniques is not available in the literature. The present study considered the classification of ALL only; detection of other types of leukemic images can be attempted in future research.

17.
18.
19.
The morphology of plant root anatomical features is a key factor in effective water and nutrient uptake. Existing techniques for phenotyping root anatomical traits are often based on manual or semi-automatic segmentation and annotation of microscopic images of root cross sections. In this article, we propose a fully automated tool, hereinafter referred to as RootAnalyzer, for efficiently extracting and analyzing anatomical traits from root cross-section images. Using a range of image processing techniques such as local thresholding and nearest-neighbor identification, RootAnalyzer segments the plant root from the image's background, classifies and characterizes the cortex, stele, endodermis and epidermis, and subsequently produces statistics about the morphological properties of the root cells and tissues. We use RootAnalyzer to analyze 15 images of wheat plants and one image of a maize plant, and evaluate its performance against manually obtained ground-truth data. The comparison shows that RootAnalyzer can fully characterize most root tissue regions with over 90% accuracy.
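Local thresholding, one of the building blocks named above, compares each pixel with the mean of its own neighbourhood rather than with a single global cutoff, which copes with uneven illumination in micrographs. A brute-force sketch (window size and offset are illustrative; production code would use an integral image or a library routine instead of the double loop):

```python
import numpy as np

def local_threshold(image, window=3, offset=0.0):
    """Binarize an image by comparing each pixel with its local mean."""
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            local_mean = padded[i:i + window, j:j + window].mean()
            out[i, j] = image[i, j] > local_mean + offset
    return out

img = np.zeros((5, 5))
img[2, 2] = 10.0               # one bright "cell wall" pixel
mask = local_threshold(img)    # only the bright pixel exceeds its local mean
```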

20.
Pathologists and radiologists spend years acquiring and refining their medically essential visual skills, so it is of considerable interest to understand how this process actually unfolds and what image features and properties are critical for accurate diagnostic performance. Key insights into human behavioral tasks can often be obtained by using appropriate animal models. We report here that pigeons (Columba livia)—which share many visual system properties with humans—can serve as promising surrogate observers of medical images, a capability not previously documented. The birds proved to have a remarkable ability to distinguish benign from malignant human breast histopathology after training with differential food reinforcement; even more importantly, the pigeons were able to generalize what they had learned when confronted with novel image sets. The birds’ histological accuracy, like that of humans, was modestly affected by the presence or absence of color as well as by degrees of image compression, but these impacts could be ameliorated with further training. Turning to radiology, the birds proved to be similarly capable of detecting cancer-relevant microcalcifications on mammogram images. However, when given a different (and for humans quite difficult) task—namely, classification of suspicious mammographic densities (masses)—the pigeons proved to be capable only of image memorization and were unable to successfully generalize when shown novel examples. The birds’ successes and difficulties suggest that pigeons are well-suited to help us better understand human medical image perception, and may also prove useful in performance assessment and development of medical imaging hardware, image processing, and image analysis tools.
