Similar articles
20 similar articles found (search time: 15 ms)
1.
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications. It is necessary to summarize the current state of development for deep learning in the field of medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories, 'pixel-wise classification' and 'end-to-end segmentation'. Each category was divided into subgroups according to network design. For each type, we listed the surveyed works, highlighted important contributions and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.

2.
Segmenting three-dimensional (3D) microscopy images is essential for understanding phenomena like morphogenesis, cell division, cellular growth, and genetic expression patterns. Recently, deep learning (DL) pipelines have been developed which claim to provide highly accurate segmentation of cellular images, and they are increasingly considered the state of the art for image segmentation problems. However, it remains difficult to rank their relative performance, because their diversity and the lack of uniform evaluation strategies make it hard to compare their results. In this paper, we first made an inventory of the available DL methods for 3D cell segmentation. We next implemented and quantitatively compared a number of representative DL pipelines, alongside a highly efficient non-DL method named MARS. The DL methods were trained on a common dataset of 3D cellular confocal microscopy images. Their segmentation accuracies were also tested in the presence of different image artifacts. A specific method for segmentation quality evaluation was adopted, which isolates segmentation errors due to under- or over-segmentation. This is complemented with a 3D visualization strategy for interactive exploration of segmentation quality. Our analysis shows that the DL pipelines have different levels of accuracy. Two of them, which are end-to-end 3D and were originally designed for cell boundary detection, show high performance and offer clear advantages in terms of adaptability to new data.
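The authors' exact evaluation metric is not reproduced in the abstract; as a minimal illustrative sketch (all names and the `frac` threshold are assumptions, not the paper's method), one common way to isolate over- from under-segmentation errors is to count ground-truth cells claimed by several predicted regions, and predicted regions spanning several ground-truth cells:

```python
from collections import defaultdict

def segmentation_errors(gt, pred, frac=0.25):
    """Count over- and under-segmentation errors.
    gt, pred: flat lists of integer region labels (0 = background), same length.
    frac: minimum overlap fraction for a region to "claim" another (assumption)."""
    overlap = defaultdict(int)
    gt_size = defaultdict(int)
    pred_size = defaultdict(int)
    for g, p in zip(gt, pred):
        if g:
            gt_size[g] += 1
        if p:
            pred_size[p] += 1
        if g and p:
            overlap[(g, p)] += 1
    split = defaultdict(list)   # gt cell -> predictions covering >= frac of it
    merge = defaultdict(list)   # prediction -> gt cells forming >= frac of it
    for (g, p), n in overlap.items():
        if n / gt_size[g] >= frac:
            split[g].append(p)
        if n / pred_size[p] >= frac:
            merge[p].append(g)
    over = sum(1 for ps in split.values() if len(ps) > 1)   # one cell split
    under = sum(1 for gs in merge.values() if len(gs) > 1)  # cells merged
    return over, under
```

For example, a ground-truth cell covered half-and-half by two predictions registers as one over-segmentation error.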

3.
Whole-cell protein quantification using MS has proven to be a challenging task. Detection efficiency varies significantly from peptide to peptide, molecular identities are not evident a priori, and peptides are dispersed unevenly throughout the multidimensional data space. To overcome these challenges we developed an open-source software package, MapQuant, to comprehensively quantify organic species detected in large MS datasets. MapQuant treats an LC/MS experiment as an image and utilizes standard image processing techniques to perform noise filtering, watershed segmentation, peak finding, peak fitting, peak clustering, charge-state determination and carbon-content estimation. MapQuant reports abundance values that respond linearly with the amount of sample analyzed on both low- and high-resolution instruments (over a 1000-fold dynamic range). Background noise added to a sample, either as a medium-complexity peptide mixture or as a high-complexity trypsinized proteome, exerts negligible effects on the abundance values reported by MapQuant, with coefficients of variation comparable to other methods. Finally, MapQuant's ability to define accurate mass and retention time features of isotopic clusters on a high-resolution mass spectrometer can increase protein sequence coverage by assigning sequence identities to observed isotopic clusters without corresponding MS/MS data.
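MapQuant's actual implementation is not shown in the abstract; as a toy sketch of the general idea behind two of the listed steps, noise filtering followed by peak finding, consider a moving-average filter and a local-maximum detector on a 1D intensity trace (function names are illustrative, not MapQuant's API):

```python
def smooth(x, w=3):
    """Centered moving average as a crude noise filter (window w is an assumption)."""
    half = w // 2
    out = []
    for i in range(len(x)):
        window = x[max(i - half, 0):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def find_peaks(x, min_height=0.0):
    """Indices of strict local maxima at or above a height threshold."""
    return [i for i in range(1, len(x) - 1)
            if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] >= min_height]
```

Real LC/MS peak picking would additionally fit peak shapes and operate in two dimensions (m/z and retention time), as the abstract describes.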

4.
The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays.
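The pattern-recognition approach described above, training on labeled examples rather than hand-tuning task-specific algorithms, can be illustrated with a generic textbook method, a nearest-centroid classifier; this is a sketch of the idea only, not any specific tool from the overview:

```python
from collections import defaultdict

def train_centroids(samples, labels):
    """Learn one centroid (mean feature vector) per class from labeled examples."""
    groups = defaultdict(list)
    for x, y in zip(samples, labels):
        groups[y].append(x)
    return {y: [sum(col) / len(col) for col in zip(*xs)]
            for y, xs in groups.items()}

def classify(centroids, x):
    """Assign x to the class with the nearest centroid (squared Euclidean)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda y: dist2(centroids[y], x))
```

In a real imaging assay the feature vectors would be texture, shape, and intensity descriptors extracted from each image.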

5.
To explore intelligent recognition of conodont images based on deep neural network models, this study selected eight Ordovician conodont species as research objects. A total of 1188 conodont images were acquired with a stereomicroscope and 778 images were collected from the published literature, and the image dataset was divided into a training set and a test set. The shortage of training samples was addressed by augmenting the training images with rotation, flipping, and filtering. Five residual neural network models (ResNet-18, ResNet-34, ResNet-50, ResNet-101, and ResNet-152) were trained by transfer learning to obtain model parameters. The five models achieved Top-1 test accuracies of 85.37%, 85.85%, 83.90%, 81.95%, and 80.00%, and Top-2 accuracies of 94.63%, 94.63%, 94.15%, 93.17%, and 93.66%, respectively, showing that the models recognize conodont images well. The comparison shows that ResNet-34 achieved the highest recognition accuracy, indicating that for conodont taxa with simple features, increasing network depth does not necessarily improve accuracy, whereas a model of suitable depth can both improve recognition accuracy and save computational resources. Comparing transfer learning against retraining ResNet-34 from scratch shows that transfer learning not only achieves higher accuracy but can also obtain model para…
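The rotation/flip augmentation used above to enlarge the training set can be sketched in pure Python, with an image represented as a nested list of pixel values (a stand-in for real image arrays; this is not the authors' code):

```python
def rot90(img):
    """Rotate an image (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def hflip(img):
    """Mirror an image horizontally."""
    return [row[::-1] for row in img]

def augment(img):
    """Return the 8 rotation/reflection variants of an image
    (4 rotations, each with and without a horizontal flip)."""
    out, cur = [], img
    for _ in range(4):
        out.append(cur)
        out.append(hflip(cur))
        cur = rot90(cur)
    return out
```

Each original image thus yields up to eight training samples, which is one standard way to compensate for a small dataset.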

6.

Background  

Systems biologists work with many kinds of data, from many different sources, using a variety of software tools. Each of these tools typically excels at one type of analysis, such as the analysis of microarrays, of metabolic networks, or of predicted protein structures. A crucial challenge is to combine the capabilities of these (and other forthcoming) data resources and tools to create a data exploration and analysis environment that does justice to the variety and complexity of systems biology data sets. A solution to this problem should recognize that data types, formats and software in this high-throughput age of biology are constantly changing.

7.
Careful visual examination of biological samples is quite powerful, but many visual analysis tasks done in the laboratory are repetitive, tedious, and subjective. Here we describe the use of the open-source software, CellProfiler, to automatically identify and measure a variety of biological objects in images. The applications demonstrated here include yeast colony counting and classifying, cell microarray annotation, yeast patch assays, mouse tumor quantification, wound healing assays, and tissue topology measurement. The software automatically identifies objects in digital images, counts them, and records a full spectrum of measurements for each object, including location within the image, size, shape, color intensity, degree of correlation between colors, texture (smoothness), and number of neighbors. Small numbers of images can be processed automatically on a personal computer and hundreds of thousands can be analyzed using a computing cluster. This free, easy-to-use software enables biologists to comprehensively and quantitatively address many questions that previously would have required custom programming, thereby facilitating discovery in a variety of biological fields of study.
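CellProfiler's object identification is far more sophisticated, but the core step it relies on, labeling connected foreground regions in a binarized image so they can be counted and measured, can be sketched with a flood fill (4-connectivity; illustrative only, not CellProfiler code):

```python
from collections import deque

def label_objects(grid):
    """Label 4-connected foreground components of a binary image.
    grid: nested list of 0/1 values. Returns (object_count, label_grid)."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if grid[i][j] and not labels[i][j]:
                count += 1
                q = deque([(i, j)])
                labels[i][j] = count
                while q:  # breadth-first flood fill of this object
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and grid[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return count, labels
```

From the label grid, per-object measurements such as size (pixel count) or centroid follow directly.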

8.
9.
In this work, we describe the CRIMSON (CardiovasculaR Integrated Modelling and SimulatiON) software environment. CRIMSON provides a powerful, customizable and user-friendly system for performing three-dimensional and reduced-order computational haemodynamics studies via a pipeline which involves: 1) segmenting vascular structures from medical images; 2) constructing analytic arterial and venous geometric models; 3) performing finite element mesh generation; 4) designing, and 5) applying boundary conditions; 6) running incompressible Navier-Stokes simulations of blood flow with fluid-structure interaction capabilities; and 7) post-processing and visualizing the results, including velocity, pressure and wall shear stress fields. A key aim of CRIMSON is to create a software environment that makes powerful computational haemodynamics tools accessible to a wide audience, including clinicians and students, both within our research laboratories and throughout the community. The overall philosophy is to leverage best-in-class open source standards for medical image processing, parallel flow computation, geometric solid modelling, data assimilation, and mesh generation. It is actively used by researchers in Europe, North and South America, Asia, and Australia. It has been applied to numerous clinical problems; we illustrate applications of CRIMSON to real-world problems using examples ranging from pre-operative surgical planning to medical device design optimization.
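CRIMSON obtains wall shear stress from full 3D Navier-Stokes simulations; as a simple sanity check that is often used alongside such solvers (a standard textbook formula, not part of CRIMSON), the analytic wall shear stress of steady Poiseuille flow in a rigid circular tube is:

```python
import math

def poiseuille_wss(mu, Q, R):
    """Wall shear stress of steady Poiseuille flow in a rigid tube:
    tau_w = 4 * mu * Q / (pi * R^3),
    with mu the dynamic viscosity, Q the volumetric flow rate,
    and R the tube radius (consistent SI units assumed)."""
    return 4.0 * mu * Q / (math.pi * R ** 3)
```

Comparing a simulated straight-tube segment against this closed form is a common way to verify a haemodynamics pipeline before applying it to patient-specific geometries.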

10.
With the achievements of deep learning, applications of deep convolutional neural networks to the image denoising problem have been widely studied. However, these methods are typically limited by GPU memory in terms of network depth and other aspects. This paper proposes a multi-level network that can efficiently utilize GPU memory, named Double Enhanced Residual Network (DERNet), for biological-image denoising. The network consists of two sub-networks whose basic structure is inspired by U-Net. In each sub-network, an encoder-decoder hierarchy down-scales and up-scales the feature maps so that large receptive fields can be obtained within the GPU budget. In the encoder, convolution layers down-sample the input to capture image information, and stacked residual blocks perform preliminary feature extraction. In the decoder, transposed convolution layers up-sample the feature maps and, combined with the Residual Dense Instance Normalization (RDIN) block that we propose, extract deep features and restore image details. Finally, both qualitative experiments and visual comparisons demonstrate the effectiveness of the proposed algorithm.
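The RDIN block itself is not specified in the abstract, but the instance-normalization component its name refers to can be sketched for a single flattened feature map (an illustrative stand-in, not the authors' implementation; `eps` is the usual numerical-stability constant):

```python
import math

def instance_norm(x, eps=1e-5):
    """Normalize one feature map (flattened to a list) to zero mean and
    unit variance, as instance normalization does per sample and channel."""
    m = sum(x) / len(x)
    var = sum((v - m) ** 2 for v in x) / len(x)
    return [(v - m) / math.sqrt(var + eps) for v in x]
```

In a full network this would be applied per channel and typically followed by a learned affine scale and shift.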

11.
Kuzmina M, Manykin E, Surina I. Bio Systems, 2004, 76(1-3): 43-53
An oscillatory network of columnar architecture located in a 3D spatial lattice was recently designed by the authors as an oscillatory model of the brain's visual cortex. Each network oscillator is a relaxational neural oscillator whose internal dynamics are tuned by local visual image characteristics - local brightness and elementary bar orientation. It can demonstrate either an active state (stable undamped oscillations) or "silence" (quickly damped oscillations). Self-organized nonlocal dynamical connections between oscillators depend on oscillator activity levels and the orientations of cortical receptive fields. Network performance consists in a transfer into a state of clusterized synchronization. At the current stage, grey-level image segmentation tasks are carried out by a 2D oscillatory network obtained as a limit version of the source model. With the added control of network coupling strength, the reduced 2D network provides synchronization-based image segmentation. New results on the segmentation of brightness and texture images presented in the paper demonstrate accurate network performance and informative visualization of segmentation results, inherent in the model.

12.
Analyzing the dynamical properties of mobile objects requires extracting trajectories from recordings, which is often done by tracking movies. We compiled a database of two-dimensional movies for very different biological and physical systems spanning a wide range of length scales and developed a general-purpose, optimized, open-source, cross-platform, easy to install and use, self-updating software called FastTrack. It can handle a changing number of deformable objects in a region of interest, and is particularly suitable for animal and cell tracking in two dimensions. Furthermore, we introduce the probability of incursions as a new measure of a movie's trackability that does not require knowledge of ground-truth trajectories, since it is resilient to small amounts of errors and can be computed on the basis of an ad hoc tracking. We also leveraged the versatility and speed of FastTrack to implement an iterative algorithm determining a set of nearly-optimized tracking parameters - yet further reducing the amount of human intervention - and demonstrate that FastTrack can be used to explore the space of tracking parameters to optimize the number of swaps for a batch of similar movies. A benchmark shows that FastTrack is orders of magnitude faster than state-of-the-art tracking algorithms, with a comparable tracking accuracy. The source code is available under the GNU GPLv3 at https://github.com/FastTrackOrg/FastTrack and pre-compiled binaries for Windows, Mac and Linux are available at http://www.fasttrack.sh.
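FastTrack's matching algorithm is more elaborate (it also uses object shape and motion), but the basic frame-to-frame assignment step of any tracker can be sketched as a greedy nearest-neighbour linker; the function name and the `max_dist` gate are assumptions for illustration, not FastTrack's API:

```python
def match(prev_pts, cur_pts, max_dist=5.0):
    """Greedily link objects between frames: repeatedly connect the
    globally closest unmatched pair, refusing links beyond max_dist.
    Points are (x, y) tuples; returns {prev_index: cur_index}."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    pairs = sorted((d2(p, c), i, j)
                   for i, p in enumerate(prev_pts)
                   for j, c in enumerate(cur_pts))
    used_p, used_c, links = set(), set(), {}
    for dd, i, j in pairs:
        if dd > max_dist ** 2:
            break  # remaining pairs are even farther apart
        if i not in used_p and j not in used_c:
            links[i] = j
            used_p.add(i)
            used_c.add(j)
    return links
```

Identity swaps, the error FastTrack's parameter search minimizes, occur exactly when such a linker connects the wrong pair because two objects pass close to each other.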

13.
Biological measurements frequently involve measuring parameters as a function of time, space, or frequency. Later, during the analysis phase of the study, the researcher splits the recorded data trace into smaller sections, analyzes each section separately by finding a mean or fitting against a specified function, and uses the analysis results in the study. Here, we present software that allows these data traces to be analyzed in a manner that ensures repeatability of the analysis and simplifies the application of FAIR (findability, accessibility, interoperability, and reusability) principles in such studies. At the same time, it simplifies the routine data analysis pipeline and gives fast access to an overview of the analysis results. For that, the software supports reading the raw data, processing the data as specified in the protocol, and storing all intermediate results in the laboratory database. The software can be extended by study- or hardware-specific modules to provide the required data import and analysis facilities. To simplify the development of the data-entry web interfaces that can be used to enter data describing the experiments, we released a web framework with an example implementation of such a site. The software is released under an open-source license and is available through several online channels.
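The section-splitting analysis that the software automates can be shown in its simplest form, reducing each fixed-length section of a recorded trace to its mean (an illustrative sketch, not the package's API; fitting against a specified function would replace the mean here):

```python
def section_means(trace, section_len):
    """Split a recorded trace into consecutive fixed-length sections and
    reduce each section to its mean; a short final section is kept."""
    return [sum(trace[i:i + section_len]) / len(trace[i:i + section_len])
            for i in range(0, len(trace), section_len)]
```

Storing these per-section results alongside the protocol that produced them is what makes the analysis repeatable in the FAIR sense described above.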

14.
In this paper, a network of coupled chaotic maps for multi-scale image segmentation is proposed. The time evolutions of chaotic maps that correspond to one pixel cluster are synchronized with one another, while this synchronized evolution is desynchronized from the time evolution of chaotic maps corresponding to other pixel clusters in the same image. The number of pixel clusters is not known in advance, and the adaptive pixel-moving technique introduced in the model makes it robust enough to classify ambiguous pixels.
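A minimal example of the synchronization mechanism underlying such models is a set of globally coupled logistic maps: with strong enough coupling, all units converge to a common chaotic trajectory (the paper's adaptive pixel-moving and cluster-dependent coupling are omitted; `r` and `eps` values are assumptions chosen so synchronization is stable):

```python
def logistic(x, r=3.9):
    """One step of the chaotic logistic map on [0, 1]."""
    return r * x * (1 - x)

def step(states, eps=0.8, r=3.9):
    """Globally coupled logistic maps: each unit mixes its own update with
    the mean-field update; eps = 0.8 makes the synchronized state stable
    because differences contract by (1 - eps) * r = 0.78 < 1 per step."""
    mean_drive = sum(logistic(x, r) for x in states) / len(states)
    return [(1 - eps) * logistic(x, r) + eps * mean_drive for x in states]
```

In a segmentation network, units belonging to the same pixel cluster would be coupled like this, while couplings across clusters are kept weak so their trajectories stay desynchronized.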

15.
How to build your own complete working biocomputing platform with nothing more than a desktop computer and an Internet connection.

16.
17.
Diatoms are a crucial component in the study of aquatic ecosystems and ancient environmental records. However, traditional methods for identifying diatoms, such as morphological taxonomy and molecular detection, are costly, are time consuming, and have limitations. To address these issues, we developed an extensive collection of diatom images, consisting of 7983 images from 160 genera and 1042 species, which we expanded to 49,843 through preprocessing, segmentation, and data augmentation. Our study compared the effect of different design choices, including backbones, batch sizes, and dynamic versus static data augmentation, on the experimental results. We determined that the ResNet152 network outperformed the other networks, producing the most accurate results with top-1 and top-5 accuracies of 85.97% and 95.26%, respectively, in identifying 1042 diatom species. Additionally, we propose a method that combines model prediction and cosine similarity to enhance the model's performance on low-probability predictions, achieving an 86.07% accuracy rate in diatom identification. Our research contributes significantly to the recognition and classification of diatom images and has potential applications in water quality assessment, ecological monitoring, and detecting changes in aquatic biodiversity.
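The proposed combination of model prediction and cosine similarity is not detailed in the abstract; one plausible reading can be sketched as follows, where the classifier's answer is trusted when its top-1 probability is confident and otherwise re-ranked by cosine similarity against per-class prototype features (the inputs, the `tau` threshold, and all names are assumptions for illustration):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    return num / (math.sqrt(sum(a * a for a in u))
                  * math.sqrt(sum(b * b for b in v)))

def predict(probs, feature, prototypes, tau=0.5):
    """probs: {class: probability}; feature: the sample's feature vector;
    prototypes: {class: prototype feature vector}. Fall back to cosine
    similarity only when the classifier is uncertain (top-1 prob < tau)."""
    top = max(probs, key=probs.get)
    if probs[top] >= tau:
        return top
    return max(prototypes, key=lambda c: cosine(feature, prototypes[c]))
```

The idea is that prototype matching in feature space can rescue samples on which the softmax output is too flat to be reliable.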

18.

Background

Efficient computational recognition and segmentation of target organs from medical images are foundational in diagnosis and treatment, especially for pancreatic cancer. In practice, the diversity in appearance of the pancreas and other abdominal organs makes detailed texture information important for a segmentation algorithm. According to our observations, however, the structures of previous networks, such as the Richer Feature Convolutional Network (RCF), are too coarse to segment the object (pancreas) accurately, especially at the edge.

Method

In this paper, we extend the RCF, originally proposed for edge detection, to the challenging task of pancreas segmentation and put forward a novel pancreas segmentation network. By replacing the simple up-sampling operation in all stages with a multi-layer up-sampling structure, the proposed network fully exploits the multi-scale detailed context of the object (pancreas) to perform per-pixel segmentation. We train the network on CT scans, obtaining an effective pipeline.

Result

With our multi-layer up-sampling pipeline, we achieve better performance than RCF on the task of single-object (pancreas) segmentation. Moreover, combined with multi-scale input, we achieve a DSC (Dice Similarity Coefficient) of 76.36% on the testing data.
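The DSC reported above is the standard Dice similarity coefficient; for two segmentations represented as sets of voxel indices it can be computed as:

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel-index sets:
    DSC = 2 |A ∩ B| / (|A| + |B|); two empty masks count as identical."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))
```

A DSC of 1.0 means perfect overlap with the ground-truth mask and 0.0 means none, so 76.36% corresponds to substantial but imperfect agreement.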

Conclusion

The results of our experiments show that our model works better than previous networks on our dataset; in other words, it is better at capturing detailed context. Therefore, our new single-object segmentation model has practical value for computer-aided automatic diagnosis.

19.
Primary crop losses in agriculture are due to leaf diseases, which farmers cannot identify early. If diseases are not detected early and correctly, the farmer suffers heavy losses. Therefore, in agriculture, the detection of leaf diseases in tomato crops plays a vital role. Recent advances in computer vision and deep learning techniques have made disease prediction easy in agriculture. Front-side tomato leaf images are considered for this research due to their high exposure to diseases. The image segmentation process plays a significant role in identifying disease-affected areas on tomato leaf images. Therefore, this paper develops an efficient tomato crop leaf disease segmentation model using an enhanced radial basis function neural network (ERBFNN). The proposed ERBFNN is enhanced using the modified sunflower optimization (MSFO) algorithm. Initially, noise in the images is removed by a Gaussian filter, followed by CLAHE (contrast-limited adaptive histogram equalization) contrast enhancement and unsharp masking. Then, color features are extracted from each leaf image and passed to the segmentation stage to segment the diseased portion of the input image. The performance of the proposed ERBFNN approach is estimated using metrics such as accuracy, Jaccard coefficient (JC), Dice coefficient (DC), precision, recall, F-measure, sensitivity, specificity, and mean intersection over union (MIoU), and is compared with existing state-of-the-art methods: radial basis function (RBF), fuzzy c-means (FCM), and region growing (RG). The experimental results show that the proposed ERBFNN segmentation model outperformed these existing methods, as well as previous research work, with an accuracy of 98.92%.
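The Gaussian filter and CLAHE are not reimplemented here, but the unsharp-masking step of the preprocessing pipeline can be illustrated on a 1D signal, with a simple box blur standing in for the Gaussian (an illustrative sketch, not the paper's code):

```python
def box_blur(x):
    """3-tap box blur with edge clamping; a stand-in for a Gaussian filter."""
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def unsharp(x, amount=1.0):
    """Unsharp masking: add back the difference between the signal and its
    blurred version, which steepens edges and boosts fine detail."""
    b = box_blur(x)
    return [v + amount * (v - w) for v, w in zip(x, b)]
```

On a step edge, the output overshoots on both sides of the transition, which is exactly the edge-enhancement effect the preprocessing stage relies on.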

20.
The X-windows based microscopy image processing package (Xmipp) is a specialized suite of image processing programs, primarily aimed at obtaining the 3D reconstruction of biological specimens from large sets of projection images acquired by transmission electron microscopy. This public-domain software package was introduced to the electron microscopy field eight years ago, and since then it has changed drastically. New methodologies for the analysis of single-particle projection images have been added, covering classification, contrast transfer function correction, angular assignment, 3D reconstruction, reconstruction of crystals, etc. In addition, the package has been extended with functionality for 2D crystal and electron tomography data. Furthermore, its current implementation in C++, with a highly modular design of well-documented data structures and functions, offers a convenient environment for the development of novel algorithms. In this paper, we present a general overview of a new generation of Xmipp that has been re-engineered to maximize flexibility and modularity, potentially facilitating its integration in future standardization efforts in the field. Moreover, by focusing on the developments that distinguish Xmipp from other available packages, we illustrate its added value to the electron microscopy community.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号