Similar literature
20 similar documents found (search time: 15 ms)
1.
Over the last few years, several research works have monitored fish in underwater environments for marine research, for understanding ocean geography, and primarily for sustainable fisheries. Automating fish identification is very helpful, considering the time and cost of the manual process. However, it can be challenging to differentiate fish from the seabed, and fish types from each other, due to environmental challenges such as low illumination, complex backgrounds, high variation in luminosity, free movement of fish, and the high diversity of fish species. In this paper, we propose YOLO-Fish, a deep learning-based fish detection model, in two variants: YOLO-Fish-1 and YOLO-Fish-2. YOLO-Fish-1 enhances YOLOv3 by fixing its upsampling step sizes to reduce the misdetection of tiny fish. YOLO-Fish-2 further improves the first model by adding Spatial Pyramid Pooling, giving it the capability to detect fish in dynamic environments. To test the models, we introduce two datasets: DeepFish and OzFish. The DeepFish dataset contains around 15k bounding box annotations across 4505 images drawn from 20 different fish habitats. OzFish is another dataset comprising about 43k bounding box annotations of a wide variety of fish across around 1800 images. YOLO-Fish-1 and YOLO-Fish-2 achieved average precision of 76.56% and 75.70%, respectively, for fish detection in unconstrained real-world marine environments, which is significantly better than YOLOv3. Both models are lightweight compared to recent versions of YOLO such as YOLOv4, yet their performance is very similar.
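The average-precision figures above count a detection as correct only when its predicted box overlaps the ground truth sufficiently, usually measured by intersection-over-union (IoU). A minimal sketch of that standard metric (not taken from the YOLO-Fish code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

At a typical threshold of 0.5, a prediction counts as a true positive only if its IoU with a ground-truth box is at least 0.5 and the class matches.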

2.
The availability of relatively cheap, high-resolution digital cameras has led to an exponential increase in the capture of natural environments and their inhabitants. Video-based surveys are particularly useful in the underwater domain, where observation by humans can be expensive, dangerous, inaccessible, or destructive to the natural environment. However, a large majority of marine data has never been analysed by human experts, a process that is slow, expensive, and not scalable. We test a Mask R-CNN object detection framework for the automated localisation, classification, counting, and tracking of fish in unconstrained underwater environments. We present a novel labelled image dataset of roman seabream (Chrysoblephus laticeps), a fish species endemic to Southern Africa, to train and validate the accuracy of our model. The Mask R-CNN model accurately detected and classified roman seabream on the training dataset (mAP50 = 80.29%), the validation dataset (mAP50 = 80.35%), and previously unseen footage (test dataset, mAP50 = 81.45%). The fact that the model performs well on previously unseen data suggests that it is capable of generalising to new streams of data not included in this research.

3.
Coral reefs are rich in fisheries and aquatic resources, and the study and monitoring of coral reef ecosystems are of great economic value and practical significance. Due to complex backgrounds and low-quality videos, it is challenging to identify coral reef fish. This study proposed an image enhancement approach for fish detection in complex underwater environments. The method first uses a Siamese network to obtain a saliency map and then multiplies this saliency map by the input image to construct an image enhancement module. Applying this module to the existing mainstream one-stage and two-stage target detection frameworks can significantly improve their detection accuracy. Good detection performance was achieved in a variety of scenarios, such as those with luminosity variations, aquatic plant movements, blurred images, large targets and multiple targets, demonstrating the robustness of the algorithm. The best performance was achieved on the LCF-15 dataset when combining the proposed method with the cascade region-based convolutional neural network (Cascade-RCNN). The average precision at an intersection-over-union (IoU) threshold of 0.5 (AP50) was 0.843, and the F1 score was 0.817, exceeding the best reported results on this dataset. This study provides an automated video analysis tool for marine-related researchers and technical support for downstream applications.

4.
Automatic brain tumour segmentation has become a key component of future brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer class models. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval is a tedious and time-consuming task. Unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. To this end, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms we evaluated K-means, Fuzzy K-means, and Gaussian Mixture Models (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated post-process based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method on the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our GMM-based approach improves on the results obtained by most of the supervised methods evaluated on the Leaderboard set and reaches second position in that ranking. Our GHMRF-based variant achieves first position in the Test ranking of unsupervised approaches and seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.
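Of the four clustering algorithms named above, K-means is the simplest to illustrate. A minimal 1-D sketch of Lloyd's algorithm on voxel intensities (an illustration only, not the paper's pipeline; the function name and toy data are ours):

```python
import numpy as np

def kmeans_1d(x, k, iters=50, seed=0):
    """Lloyd's K-means on 1-D values: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(x, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute the means.
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        centroids = np.array([x[labels == j].mean() if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return centroids, labels
```

GMM replaces the hard nearest-centroid assignment with per-class Gaussian responsibilities, and GHMRF adds a spatial smoothness prior on the labels.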

5.
We propose a fish detection system based on deep network architectures to robustly detect and count fish under a variety of benthic background and illumination conditions. The algorithm consists of an ensemble of Region-based Convolutional Neural Networks linked in a cascade structure by Long Short-Term Memory networks. The network is trained efficiently, as all components are jointly trained by backpropagation. We train and test our system on a dataset of 18 videos taken in the wild. In our dataset, there are around 20 to 100 fish objects per frame, with many fish occupying small pixel areas (less than 900 square pixels). Through a series of experiments and ablation tests, we show that the proposed system preserves detection accuracy despite multi-scale distortions, cropping, and varying background environments. We present an analysis showing how object localization accuracy is increased by an automatic correction mechanism in the deep network's cascaded ensemble structure: the correction mechanism rectifies errors in the predictions as information progresses through the network cascade. Our findings regarding ensemble system architectures can be generalized to other object detection applications.

6.
7.
Fish are a critical component of marine biology; therefore, the accurate identification and counting of fish are essential for the objective monitoring and assessment of marine biological resources. High-frequency adaptive resolution imaging sonar (ARIS) is widely used for underwater object detection and imaging, and it quickly obtains close-up video of free-swimming fish in high-turbidity water environments. Nonetheless, processing the massive data output by imaging sonars remains a major challenge. Here, the authors developed an automatic image-processing programme that fuses K-nearest neighbour background subtraction with DeepSort target tracking to automatically track and count fish. The programme was evaluated using four test datasets with different target sizes, observation ranges, and sonar deployments. The approach successfully counted free-swimming fish targets with an accuracy index of 73% and a completeness index of 70%. Under appropriate conditions, this approach could replace time-consuming semi-automatic approaches and improve the efficiency of imaging sonar data processing, while providing technical support for future real-time data processing.

8.
Planning for resilience is the focus of many marine conservation programs and initiatives. These efforts aim to inform conservation strategies for marine regions to ensure they have the inbuilt capacity to retain biological diversity and ecological function in the face of global environmental change, particularly changes in climate and resource exploitation. In the absence of direct biological and ecological information for many marine species, scientists are increasingly using spatially explicit predictive-modeling approaches. Through improved access to multibeam sonar and underwater video technology, these models provide spatial predictions of the most suitable regions for an organism at resolutions previously not possible. However, sensible-looking, well-performing models can give very different predictions of distribution depending on which occurrence dataset is used. To examine this, we construct species distribution models for nine temperate marine sedentary fishes for a 25.7 km² study region off the coast of southeastern Australia. We use a generalized linear model (GLM), a generalized additive model (GAM), and maximum entropy (MAXENT) to build models based on co-located occurrence datasets derived from two underwater video methods (i.e. baited and towed video) and fine-scale multibeam-sonar-based seafloor habitat variables. Overall, this study found that the choice of modeling approach did not considerably influence the predicted distributions based on the same occurrence dataset. However, greater dissimilarity between model predictions was observed across the nine fish taxa when the two occurrence datasets were compared (relative to models based on the same dataset). Based on these results it is difficult to draw general conclusions as to which video method provides more reliable occurrence datasets. Nonetheless, we suggest that predictions reflect the species' apparent distribution (i.e. a combination of the species' distribution and the probability of detecting it). Consequently, we encourage researchers and marine managers to interpret model predictions carefully.

9.
Seagrasses provide a wide range of ecosystem services in coastal marine environments. Despite their ecological and economic importance, these species are declining because of human impact. This decline has driven the need for monitoring and mapping to estimate the overall health and dynamics of seagrasses in coastal environments, often based on underwater images. However, seagrass detection from underwater digital images is not a trivial task; it requires taxonomic expertise and is time-consuming and expensive. Recently, automatic approaches based on deep learning have revolutionised object detection performance in many computer vision applications, and there has been interest in applying them to automated seagrass detection from imagery. Deep learning-based techniques reduce the need for the handcrafted feature extraction by domain experts that classical machine learning-based techniques require. This study presents a YOLOv5-based one-stage detector and an EfficientDet-D7-based two-stage detector for detecting seagrass, in this case Halophila ovalis, one of the most widely distributed seagrass species. The EfficientDet-D7-based seagrass detector achieves the highest mAP of 0.484 on the ECUHO-2 dataset and an mAP of 0.354 on the ECUHO-1 dataset, about 7% and 5% better, respectively, than the state-of-the-art Halophila ovalis detection performance on those datasets. The proposed YOLOv5-based detector achieves average inference times of 0.077 s and 0.043 s on the two datasets, respectively, much lower than the state-of-the-art approach.

10.
Image recognition is the process of recognizing and classifying objects with machine learning algorithms. Image binarization, in which foreground objects are separated from their background, is the first and most challenging step in image recognition. When foreground objects have complex morphological structure and background noise is strong, foreground objects are often fractured into subcomponents. To address this over-segmentation issue for organisms with complex structures, we propose a two-stage adaptive binarization approach based on Sauvola's binarization algorithm. We tested the effectiveness of the new approach on a set of underwater images containing jellyfish, collected in nearshore waters using a shadowgraph underwater plankton imaging system, PlanktonScope, because jellyfish have relatively complex structure and are often over-segmented. The results showed that the two-stage approach improved the integrity of extracted jellyfish compared to traditional binarization methods, including Sauvola's algorithm. An analysis of local entropy values showed that the first stage effectively suppresses redundant information in the image and reduces the number of regions of interest (ROIs), while the second stage preserves relatively weak, low-intensity signals to ensure the integrity of the extracted targets. The two-stage approach improves hardware resource utilization and computational efficiency. It is robust for images acquired in sub-optimal conditions and enhances the accuracy of analytical results in the study of marine organisms using imaging systems.
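Sauvola's algorithm, the starting point of the approach above, thresholds each pixel against T = m·(1 + k·(s/R − 1)), where m and s are the mean and standard deviation of a local window, k is a sensitivity parameter, and R is the dynamic range of the standard deviation. A naive sketch of the classic single-stage version (window size, k, and R are illustrative defaults, not the paper's settings):

```python
import numpy as np

def sauvola_binarize(img, w=3, k=0.2, r=128.0):
    """Naive Sauvola binarization: threshold each pixel by the local mean m
    and std s in a w-by-w window, T = m * (1 + k * (s/r - 1)). Foreground
    (dark objects on a bright background) is where the pixel falls below T."""
    h, wd = img.shape
    pad = w // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=bool)
    for i in range(h):
        for j in range(wd):
            win = padded[i:i + w, j:j + w]
            t = win.mean() * (1.0 + k * (win.std() / r - 1.0))
            out[i, j] = img[i, j] < t
    return out
```

In uniform regions the local std is near zero, so the threshold drops well below the local mean and background noise is not marked as foreground.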

11.
Fish species recognition is an important task for ecosystem preservation, food supply, and tourism. In particular, the Pantanal is a wetland region that harbors hundreds of species and is considered one of the most important ecosystems in the world. In this paper, we present a new method based on convolutional neural networks (CNNs) for Pantanal fish species recognition. A new CNN composed of three branches that classify fish species, family, and order is proposed, with the aim of improving the recognition of species with similar characteristics. The branch that classifies the fish species uses information learned from the family and order branches, which has been shown to improve the overall accuracy. Results on an unconstrained image dataset show that the proposed method outperforms traditional approaches: our method obtained an accuracy of 0.873 versus 0.864 for a traditional CNN in the recognition of 68 fish species. In addition, our method provides fish family and order recognition, with accuracies of 0.938 and 0.960, respectively. We hope that, with these promising results, an automatic tool can be developed to monitor species in an important region such as the Pantanal.

12.
We present CANOES, an algorithm for the detection of rare copy number variants from exome sequencing data. CANOES models read counts using a negative binomial distribution and estimates the variance of the read counts with a regression-based approach built on selected reference samples in a given dataset. We test CANOES on a family-based exome sequencing dataset and show that its sensitivity and specificity are comparable to those of XHMM. Moreover, the method is complementary to Gaussian approximation-based methods (e.g. XHMM or CoNIFER). When CANOES is used in combination with these methods, it is possible to produce high-accuracy calls, as demonstrated by a much reduced and more realistic de novo rate in results from trio data.
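The negative binomial is a natural choice here because sequencing read counts are overdispersed: their variance exceeds their mean, which a Poisson model cannot express. A small sketch of the standard moment-matching conversion from an estimated (mean, variance) pair to the usual (n, p) parameterisation (an illustrative helper, not part of CANOES):

```python
def nbinom_params(mean, var):
    """Convert (mean, variance) to negative binomial parameters (n, p),
    where mean = n*(1-p)/p and variance = n*(1-p)/p**2.
    Requires var > mean (overdispersion), the regime CANOES targets."""
    if var <= mean:
        raise ValueError("negative binomial requires variance > mean")
    p = mean / var
    n = mean * mean / (var - mean)
    return n, p
```

As the estimated variance approaches the mean, n grows without bound and the distribution approaches a Poisson with that mean.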

13.
Environmental DNA (eDNA) sampling—the detection of intra- or extra-cellular DNA in environmental samples—is a rapid and sensitive survey method for detecting aquatic species. Single-species detection methods (typically based on PCR or LAMP) have been shown to be more sensitive for detecting target species than multi-species detection methods, such as metabarcoding. However, previous studies have generally only compared these two eDNA detection approaches for a single target species and have used different methodological and statistical approaches. Here we present a comparison of single- and multi-species eDNA detection methods, drawing on two published case studies (one fish, one amphibian) and two new extensive datasets on a freshwater mammal (the platypus). To ensure consistent conclusions regarding the sensitivity of each eDNA method, we use the same hierarchical site occupancy-detection model for each dataset, incorporating uncertainty at the site, water sample, and technical replicate level. Overall, qPCR achieved higher detection probabilities than metabarcoding across species and datasets. However, differences in sensitivity between detection methods varied depending on methodological decisions concerning what constitutes a true positive detection (i.e., qPCR and metabarcoding thresholds). The decision as to which eDNA detection method to use should always be influenced by the study aims, but our results suggest that single-species detection methods based on qPCR may be preferable when the aim is to achieve a high detection probability for target species.

14.
Background modeling and foreground detection are key parts of any computer vision system. These problems have been addressed in the literature with several probabilistic approaches based on mixture models. Here we propose a new kind of probabilistic background model based on probabilistic self-organising maps, which allows background pixels to be modeled with more flexibility. In addition, a statistical correlation measure is used to test the similarity among nearby pixels, enhancing detection performance by feeding information back into the process. Several well-known benchmark videos have been used to assess the performance of our proposal relative to traditional neural and non-neural methods, with favourable results, both qualitatively and quantitatively. A statistical analysis of the differences among methods demonstrates that our method is significantly better than its competitors, presenting a strong alternative to classical methods.
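For contrast with the probabilistic models discussed above, the simplest classical baseline is an exponential running average: the background estimate drifts slowly toward each new frame, and pixels far from the estimate are flagged as foreground. A minimal sketch (the function names, alpha, and threshold are illustrative, not from the paper):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model: bg drifts toward frame."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """A pixel is foreground when it deviates strongly from the model."""
    return np.abs(frame - bg) > thresh
```

Mixture and self-organising-map models improve on this baseline by representing multi-modal backgrounds (e.g. waving foliage), which a single running mean cannot capture.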

15.
Whole-genome genotyping methods are important for breeding. However, it has been challenging to develop a robust method for simultaneous foreground and background genotyping that can easily be adapted to different genes and species. In our study, we accidentally discovered that in adapter ligation-mediated PCR, amplification by primer-template mismatched annealing (PTMA) along the genome can generate thousands of stable PCR products. Based on this observation, we developed a novel method for simultaneous foreground and background integrated genotyping by sequencing (FBI-seq) using one specific primer, in which foreground genotyping is performed by primer-template perfect annealing (PTPA), while background genotyping employs PTMA. Unlike DNA arrays, multiplex PCR, or genome target enrichment, FBI-seq requires little preliminary work for primer design and synthesis, and it is easily adaptable to different foreground genes and species. FBI-seq therefore provides a prolific, robust, and accurate method for simultaneous foreground and background genotyping to facilitate breeding in the post-genomics era.

16.
Non-intrusive monitoring of animals in the wild is possible using camera-trapping networks. The cameras are triggered by sensors so as to disturb the animals as little as possible. This approach produces a high volume of data (on the order of thousands or millions of images) that demands laborious work to sift useless captures (incorrect detections, which are the majority) from useful ones (images with animals present). In this work, we show that, once some obstacles are overcome, deep neural networks can cope with the problem of automated species classification appropriately. As a case study, the 26 most common of the 48 species in the Snapshot Serengeti (SSe) dataset were selected, and the potential of the Very Deep Convolutional neural network framework for the species identification task was analyzed. In the worst-case scenario (an unbalanced training dataset containing empty images), the method reached 35.4% Top-1 and 60.4% Top-5 accuracy. For the best scenario (a balanced dataset of manually segmented images containing foreground animals only), accuracy reached 88.9% Top-1 and 98.1% Top-5. To the best of our knowledge, this is the first published attempt at solving automatic species recognition on the SSe dataset. In addition, a comparison with other approaches on a different dataset was carried out, showing that the architectures used in this work outperformed previous approaches. The limitations of the method and its drawbacks, as well as new challenges in automatic camera-trap species classification, are discussed at length.
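Top-1 and Top-5 accuracy, the metrics quoted above, simply ask whether the true class is among the model's k highest-scoring predictions. A minimal sketch (the function name and toy scores are ours):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=5):
    """Fraction of samples whose true label is among the k highest-scoring
    classes. scores: (n_samples, n_classes); labels: (n_samples,)."""
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of the k best classes
    hits = [labels[i] in topk[i] for i in range(len(labels))]
    return float(np.mean(hits))
```

Top-5 is always at least as high as Top-1, which is why the two figures bracket a model's performance in multi-class settings with many similar species.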

17.
In this work, we describe the development of Polar Gini Curve, a method for characterizing cluster markers by analyzing single-cell RNA sequencing (scRNA-seq) data. Polar Gini Curve combines gene expression and 2D coordinate ("spatial") information to detect patterns of uniformity in any clustered cells from scRNA-seq data. We demonstrate that Polar Gini Curve can help users characterize the shape and density distribution of cells in a particular cluster, which can be generated during routine scRNA-seq data analysis. To quantify the extent to which a gene is uniformly distributed in a cell cluster space, we combine two polar Gini curves (PGCs): one drawn upon the cell points expressing the gene (the "foreground curve") and the other drawn upon all cell points in the cluster (the "background curve"). Genes with highly dissimilar foreground and background curves tend not to be uniformly distributed in the cell cluster, and thus have spatially divergent expression patterns within it, while genes with similar foreground and background curves tend to be uniformly distributed, with uniform expression patterns within the cluster. Such quantitative attributes of PGCs can be applied to sensitively discover biomarkers across clusters from scRNA-seq data. We demonstrate the performance of the Polar Gini Curve framework in several simulation case studies. Using this framework to analyze a real-world neonatal mouse heart cell dataset, the detected biomarkers may characterize novel subtypes of cardiac muscle cells. The source code and data for Polar Gini Curve can be found at http://discovery.informatics.uab.edu/PGC/ or https://figshare.com/projects/Polar_Gini_Curve/76749.
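The building block of the method above is the Gini coefficient, a scalar measure of how unevenly a set of non-negative values is distributed (0 for a perfectly even spread, approaching 1 for total concentration in one element). A minimal sketch of the standard Lorenz-curve formula (an illustration of the underlying statistic, not the Polar Gini Curve code):

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative values via the Lorenz curve:
    G = (n + 1 - 2 * sum_i(cum_i / cum_n)) / n over sorted values."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if v.sum() == 0:
        return 0.0
    cum = np.cumsum(v)
    return float((n + 1 - 2 * (cum / cum[-1]).sum()) / n)
```

Polar Gini Curve evaluates such coefficients over cell-point projections across a sweep of angles, tracing out the foreground and background curves it compares.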

18.
19.
Intensity normalization is an important pre-processing step in the study and analysis of DaTSCAN SPECT imaging. As most automatic supervised image segmentation and classification methods base their assumptions regarding the intensity distributions on a standardized intensity range, intensity normalization takes on a very significant role. In this work, a comparison between different novel intensity normalization methods is presented. The proposed methodologies are based on Gaussian Mixture Model (GMM) image filtering and on mean-squared error (MSE) optimization. The GMM-based image filtering method applies a probability threshold that removes the clusters whose likelihood is negligible in the non-specific regions. The MSE optimization method consists of a linear transformation obtained by minimizing the MSE between the intensity-normalized image and the template over the non-specific region. The proposed intensity normalization methods are compared to: i) a widely used standard approach based on the specific-to-non-specific binding ratio, and ii) a linear approach based on the α-stable distribution. This comparison is performed on a DaTSCAN image database, comprising analysis and classification stages, for the development of a computer-aided diagnosis (CAD) system for Parkinsonian syndrome (PS) detection. In addition, the proposed methods correct spatially varying artifacts that modulate the intensity of the images. Using leave-one-out cross-validation, the system achieves up to 92.91% accuracy, 94.64% sensitivity, and 92.65% specificity, outperforming the standard and linear approaches used as references. The use of advanced intensity normalization techniques such as GMM-based image filtering and MSE optimization improves the diagnosis of PS.
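The MSE-optimal linear transformation described above has a closed-form least-squares solution. A sketch of fitting the scale and offset (a, b) that best map an image's non-specific-region intensities onto the template (an illustrative helper, not the paper's code):

```python
import numpy as np

def mse_linear_fit(image_roi, template_roi):
    """Least-squares (a, b) so that a * image + b best matches the template
    over the non-specific region, in the MSE sense."""
    x = np.asarray(image_roi, dtype=float).ravel()
    y = np.asarray(template_roi, dtype=float).ravel()
    a_mat = np.vstack([x, np.ones_like(x)]).T  # design matrix [x, 1]
    (a, b), *_ = np.linalg.lstsq(a_mat, y, rcond=None)
    return a, b
```

Applying `a * image + b` to the whole volume then standardizes its intensity range without touching the specific-binding regions used for diagnosis.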

20.
DNA copy number alterations have been discovered to be key genetic events in the development and progression of cancer, yet no clear data comparing familial and sporadic breast cancer are available. We therefore looked for an independent platform with which to identify the chromosomal profiles of familial versus sporadic breast cancer patients. A total of 124 breast cancer patients were studied using aCGH. The dataset was analyzed using Gaussian Mixture Models to determine the thresholds for assessing gene copy number changes and to minimize the impact of noise on further data analyses. The identification of regions of consistent aberration across samples was carried out with statistical approaches and machine learning tools to draw profiles for the familial and sporadic groups. Familial and sporadic cases showed chromosome imbalance of 15% [false discovery rate (FDR): q=7.18E-5] and 18% (FDR: q=6.32E-13), respectively. The differential map revealed two cytogenetic bands (8p23 and 11q13-11q14) significantly altered in familial versus sporadic cases (FDR: q=7E-4). The application of IFRAIS, a new bioinformatics tool that discovers fuzzy classification rules, made it possible to pinpoint associations of gene alterations that identify familial or sporadic cases. These results are comparable to those of the other systems used and are consistent from the biological point of view.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号