Similar Articles
1.
Background: Atomic force microscopy (AFM) is an experimental technique for studying the structure-function relationship of biomolecules. AFM provides images of biomolecules at nanometer resolution, and high-speed AFM experiments produce a series of images following the dynamics of biomolecules. To further understand biomolecular functions, information on three-dimensional (3D) structures is beneficial. Method: We aim to recover 3D information from an AFM image by computational modeling. An AFM image includes only a low-resolution representation of a molecule; we therefore represent the structure by a coarse-grained model (a Gaussian mixture model). Using Monte Carlo sampling, candidate models are generated to increase the similarity between AFM images simulated from the models and the target AFM image. Results: The algorithm was tested on two proteins to model their conformational transitions. Using a simulated AFM image as reference, the algorithm can produce a low-resolution 3D model of the target molecule. The effect of the molecular orientation captured in an AFM image on 3D modeling performance was also examined, and similar accuracy was obtained for many orientations. Conclusions: The proposed algorithm can generate low-resolution 3D protein models, from which conformational transitions observed in AFM images can be interpreted in more detail. General significance: High-speed AFM experiments allow us to directly observe biomolecules in action, which provides insights into biomolecular function through dynamics. However, as only partial structural information can be obtained from AFM data, this new AFM-based hybrid modeling method is useful for retrieving 3D information on the entire biomolecule.
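The core of such a scheme fits in a short sketch. The following is a minimal 2D illustration under stated assumptions, not the authors' code: the molecule is a handful of Gaussians, the "AFM image" is rendered tip-free as the tallest Gaussian at each pixel, and a greedy rule (keep only improving moves) stands in for full Monte Carlo sampling; all names (simulate_afm, fit_centers) and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_afm(centers, sigma, shape):
    """Pseudo-AFM height map: keep the tallest Gaussian at each pixel."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    h = np.zeros(shape)
    for cx, cy, cz in centers:
        h = np.maximum(h, cz * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2)
                                      / (2 * sigma ** 2)))
    return h

def fit_centers(target, n_gauss=8, sigma=3.0, n_steps=2000, step=1.0):
    """Greedy Monte Carlo: perturb one Gaussian at a time and keep moves
    that raise the correlation with the target AFM image."""
    shape = target.shape
    centers = np.column_stack([
        rng.uniform(0, shape[1], n_gauss),   # x positions
        rng.uniform(0, shape[0], n_gauss),   # y positions
        rng.uniform(0.5, 1.0, n_gauss),      # heights (proxy for z)
    ])
    score = lambda c: np.corrcoef(simulate_afm(c, sigma, shape).ravel(),
                                  target.ravel())[0, 1]
    best = score(centers)
    for _ in range(n_steps):
        trial = centers.copy()
        i = rng.integers(n_gauss)
        trial[i, :2] += rng.normal(0, step, 2)             # move one center
        trial[i, 2] = max(0.1, trial[i, 2] + rng.normal(0, 0.05))
        s = score(trial)
        if s > best:                                       # accept improvements
            centers, best = trial, s
    return centers, best
```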

2.
MOTIVATION: Caenorhabditis elegans, a roundworm found in soil, is a widely studied model organism with about 1000 cells in the adult. Producing high-resolution fluorescence images of C. elegans to reveal biological insights is becoming routine, motivating the development of advanced computational tools for analyzing the resulting image stacks. For example, worm bodies usually curve significantly in images, so one must 'straighten' the worms if they are to be compared under a canonical coordinate system. RESULTS: We develop a worm straightening algorithm (WSA) that restacks cutting planes orthogonal to a 'backbone' modeling the anterior-posterior axis of the worm. We formulate the backbone as a parametric cubic spline defined by a series of control points and develop two methods for automatically determining the locations of those control points. Our experiments show that the approach effectively straightens both 2D and 3D worm images.
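The restacking step can be sketched compactly. Below is a hedged 2D illustration assuming SciPy (splprep/splev for the parametric cubic backbone, map_coordinates for bilinear resampling); the paper's automatic control-point detection is not reproduced, so the control points are taken as given.

```python
import numpy as np
from scipy.interpolate import splprep, splev
from scipy.ndimage import map_coordinates

def straighten(image, control_pts, half_width=20, n_samples=400):
    """Resample a 2D worm image along lines orthogonal to a spline backbone.

    control_pts: (N, 2) array of (row, col) backbone control points, N > 3.
    Returns an array of shape (2*half_width+1, n_samples): the straightened worm.
    """
    # Parametric cubic spline through the control points (the 'backbone').
    tck, _ = splprep(control_pts.T, s=0, k=3)
    u = np.linspace(0, 1, n_samples)
    r, c = splev(u, tck)                  # backbone positions
    dr, dc = splev(u, tck, der=1)         # tangent vectors
    norm = np.hypot(dr, dc)
    nr, nc = -dc / norm, dr / norm        # unit normals (tangent rotated 90 degrees)
    offsets = np.arange(-half_width, half_width + 1)
    rows = r[None, :] + offsets[:, None] * nr[None, :]
    cols = c[None, :] + offsets[:, None] * nc[None, :]
    # Bilinear interpolation of the image at the resampled coordinates.
    return map_coordinates(image, [rows, cols], order=1, mode='nearest')
```

Each column of the returned array is one cutting plane orthogonal to the backbone, so the worm's anterior-posterior axis becomes the horizontal axis of the straightened image.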

3.
Tissue microarray (TMA) is a high-throughput analysis tool for identifying new diagnostic and prognostic markers in human cancers. However, standard automated methods for tumour detection on both routine histochemical and immunohistochemistry (IHC) images remain underdeveloped. This paper presents a robust automated tumour cell segmentation model that can be applied to both routine histochemical tissue slides and IHC slides, and that performs finer pixel-based segmentation in comparison with the blob- or area-based segmentation of existing approaches. The technique plays an important role in automated IHC quantification for biomarker analysis, where excluding stroma areas is critical. Under the finest, pixel-based evaluation (instead of area-based or object-based), the experimental results show that the proposed method achieves 80% and 78% accuracy on the two types of pathological virtual slides, i.e., routine histochemical H&E and IHC images, respectively. The technique greatly reduces labor-intensive workloads for pathologists, speeds up the process of TMA construction, and opens the possibility of fully automated IHC quantification.

4.
An algorithm for associating the features of two images.
In this paper we describe an algorithm that operates on the distances between features in two related images and delivers a set of correspondences between them. The algorithm maximizes the inner product of two matrices, one of which is the desired 'pairing matrix' and the other a 'proximity matrix' with elements exp(-r_ij^2 / 2σ^2), where r_ij is the distance between two features, one in each image, and σ is an adjustable scale parameter. The output of the algorithm may be compared with the movements that people perceive when viewing two images in quick succession, and it is found that an increase in σ affects the computed correspondences in much the same way as an increase in interstimulus interval alters the perceived displacements. Provided that σ is not too small, the algorithm will recover the feature mappings that result from image translation, expansion, or shear deformation (transformations of common occurrence in image sequences), even when the displacements of individual features depart slightly from the general trend.
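As an illustration of the objective, the sketch below builds the Gaussian proximity matrix and finds a one-to-one pairing that maximizes the inner product with it. The paper's own iterative maximization is not reproduced; linear assignment (SciPy's linear_sum_assignment) is used here as a simple stand-in that optimizes the same objective over one-to-one pairings.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_features(pts_a, pts_b, sigma=10.0):
    """Pair features of two images via the Gaussian proximity matrix.

    P[i, j] = exp(-r_ij^2 / (2 sigma^2)), where r_ij is the distance between
    feature i in image A and feature j in image B.  A one-to-one pairing
    maximizing the inner product <pairing, P> is found by linear assignment
    (a stand-in for the paper's own maximization procedure).
    """
    d2 = ((pts_a[:, None, :] - pts_b[None, :, :]) ** 2).sum(-1)  # squared r_ij
    P = np.exp(-d2 / (2 * sigma ** 2))
    rows, cols = linear_sum_assignment(-P)    # maximize total proximity
    return list(zip(rows, cols)), P
```

Increasing sigma flattens the proximity matrix, letting larger displacements win the assignment, which mirrors the reported effect of a longer interstimulus interval.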

5.
Higuchi dimension of digital images
Ahammer H. PLoS ONE 2011;6(9):e24796.
Several methods exist for calculating the fractal dimension of objects represented as 2D digital images; for example, box counting, Minkowski dilation, or Fourier analysis can be employed. However, these have limitations: it is not possible to calculate the fractal dimension of only an irregular region of interest in an image, or to perform the calculation in a particular direction along a line at an arbitrary angle through the image; the calculation must be made for the whole image. In this paper, a new method to overcome these limitations is proposed. 2D images are appropriately prepared in order to apply 1D signal analyses originally developed to investigate nonlinear time series. The Higuchi dimension of these 1D signals is calculated using Higuchi's algorithm, and it is shown that both regions of interest and directional dependencies can be evaluated independently of the whole picture. A thorough validation of the proposed technique and a comparison with the Fourier dimension, a common two-dimensional method for digital images, are given. The main result is that Higuchi's algorithm allows both direction-dependent and direction-independent analysis; the values obtained for the fractal dimension are reliable, and an effective treatment of regions of interest is possible. Moreover, the proposed method is not restricted to Higuchi's algorithm, as any 1D method of analysis can be applied.
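Higuchi's algorithm itself is short. The sketch below is a standard NumPy implementation of the 1D estimator (Higuchi, 1988), which could be applied to gray-value signals extracted along an image row, column, or line at an arbitrary angle; it is an illustration, not the paper's code, and assumes the signal is comfortably longer than kmax.

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension of a 1D signal (e.g. gray values along a
    row, column, or oblique line of an image)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    L = []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):                 # offsets 0 .. k-1
            n = (N - 1 - m) // k           # number of increments at this offset
            if n < 1:
                continue
            idx = m + np.arange(n + 1) * k
            length = np.abs(np.diff(x[idx])).sum()
            Lk.append(length * (N - 1) / (n * k * k))  # Higuchi normalization
        L.append(np.mean(Lk))
    ks = np.arange(1, kmax + 1)
    # L(k) ~ k^(-D), so the slope of log L vs. log(1/k) estimates D.
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(L), 1)
    return slope    # between 1 and 2 for a 1D signal
```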

6.
Fast rotational matching of single-particle images
The presence of noise and absence of contrast in electron micrographs lead to a reduced resolution of the final 3D reconstruction, owing to the inherent limitations of single-particle image alignment. The fast rotational matching (FRM) algorithm was recently introduced for accurate alignment of 2D images under such challenging conditions. Here, we implemented this algorithm for the first time in a standard 3D reconstruction package used in electron microscopy. This allowed us to carry out exhaustive tests of its robustness and reliability in iterative orientation determination, classification, and 3D reconstruction on simulated and experimental image data. A classification test on GroEL chaperonin images demonstrates that FRM assigns up to 13% more images to their correct reference orientation compared with the classical self-correlation function method. Moreover, at sub-nanometer resolution, GroEL and rice dwarf virus reconstructions exhibit a remarkable resolution gain of 10-20% that is attributed to the novel image alignment kernel.
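The planar, rotation-only core of the idea can be sketched as follows: resample both images on a polar grid, then obtain the cross-correlation over all in-plane rotations at once with a single 1D FFT along the angular axis. This is a hedged illustration of the principle only; the published FRM kernel also handles translations and is considerably more elaborate.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_resample(img, n_theta=256, n_rad=64):
    """Sample an image on a polar grid centred on the image centre."""
    cy, cx = (np.asarray(img.shape) - 1) / 2
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rad = np.linspace(0, min(cy, cx), n_rad)
    rows = cy + rad[:, None] * np.sin(theta)[None, :]
    cols = cx + rad[:, None] * np.cos(theta)[None, :]
    return map_coordinates(img, [rows, cols], order=1)

def best_rotation(img_a, img_b, n_theta=256):
    """In-plane rotation of img_b that best matches img_a.  The rotational
    cross-correlation over all angles is computed at once with a 1D FFT
    along the angular axis -- the core idea of fast rotational matching."""
    A = polar_resample(img_a, n_theta)
    B = polar_resample(img_b, n_theta)
    fa = np.fft.fft(A, axis=1)
    fb = np.fft.fft(B, axis=1)
    corr = np.fft.ifft(fa * np.conj(fb), axis=1).real.sum(axis=0)
    shift = int(np.argmax(corr))          # angular lag with maximal correlation
    return 360.0 * shift / n_theta, corr
```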

7.
[Objective] Foreground-background segmentation of butterfly images with complex backgrounds is difficult. This study explores an automatic butterfly image segmentation method based on deep-learning salient object detection. [Methods] The F3Net salient object detection algorithm was trained on the DUTS-TR dataset to build a foreground-background prediction model, which was then applied to a dataset of butterfly images with complex backgrounds to perform automatic foreground-background segmentation. On this basis, transfer learning was adopted: keeping the ResNet backbone unchanged, ...

8.
Automatic analysis of DNA microarray images using mathematical morphology
MOTIVATION: DNA microarrays are an experimental technology consisting of arrays of thousands of discrete DNA sequences printed on glass microscope slides. Image analysis is an important aspect of microarray experiments; its aim is to reduce an image of spots to a table with a measure of intensity for each spot. Efficient, accurate, and automatic analysis of DNA spot images is essential for using this technology in laboratory routines. RESULTS: We present an automatic, non-supervised set of algorithms for fast and accurate spot data extraction from DNA microarrays using morphological operators that are robust to both intensity variation and artefacts. The approach can be summarised as follows. Initially, a gridding algorithm automatically segments the microarray image into spot quadrants, which are then analysed individually. The analysis of each spot quadrant image proceeds in five steps. First, as a pre-quantification, the spot size distribution is calculated. Second, background noise is extracted using morphological filtering by area. Third, an orthogonal grid provides a first approximation to the spot loci. Fourth, spot segmentation, i.e., definition of spot boundaries, is carried out using the watershed transformation. Fifth, the outlines of detected spots allow signal quantification, i.e., extraction of spot intensities; in this respect, a noise model has been investigated. The performance of the algorithm has been compared with two packages, ScanAlyze and Genepix, showing its robustness and precision.
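A minimal scikit-image sketch of the middle steps (area-based noise suppression and watershed-based boundary definition) is given below, under the assumption of a single pre-cut quadrant image; gridding, the spot-size distribution, and intensity quantification are omitted, and the function is illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.morphology import area_opening
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def segment_spots(img, min_area=20):
    """Morphology-based spot segmentation on a single quadrant image:
    area filtering removes small bright artefacts, then a watershed on the
    distance transform delineates the spot boundaries."""
    # Morphological filtering by area: suppress small bright structures.
    cleaned = area_opening(img, area_threshold=min_area)
    mask = cleaned > threshold_otsu(cleaned)
    # Seeds: local maxima of the distance transform, roughly one per spot.
    dist = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(dist, min_distance=5, labels=mask)
    markers = np.zeros(img.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Watershed of the inverted distance defines the spot boundaries.
    return watershed(-dist, markers, mask=mask)
```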

9.
A line profile of fluorescence intensities in confocal images is frequently examined. We have developed a software tool to analyse the profiles of intensities of fluorescent probes in confocal images. The software averages neighbouring pixels adjacent to the central line without reducing the spatial resolution of the image. As an experimental model, we used skeletal muscle fibres isolated from the rat extensor digitorum brevis muscle. As a marker of myofibril structure, we used phalloidin-rhodamine staining, and the anti-TIM antibody to label mitochondria. We also tested the distribution of protein kinase B/Akt. Since signalling is ordered in modules, and large protein complexes appear to direct signalling to organelles and regulate specific physiological functions, a software tool to analyse such complexes in fluorescent confocal images is required. The software displays the image, and the user defines the line for analysis. The image is rotated by the angle of the line, and the line profile is calculated by averaging one dimension of the cropped, rotated image matrix. The spatial resolution of the averaged line profile is not decreased compared with a single-pixel line profile, which was confirmed by the discrete Fourier transform computed with a fast Fourier transform algorithm. We conclude that the custom software tool presented here is useful for analysing line profiles of fluorescence intensities in confocal images.
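The rotate-crop-average scheme is equivalent to sampling a band of parallel lines and averaging across the band, which is easy to sketch directly. The following hedged SciPy illustration (not the published tool) assumes the user supplies the two endpoints of the central line in (row, column) coordinates.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def averaged_line_profile(img, p0, p1, half_width=5, n_samples=500):
    """Line profile averaged over pixels adjacent to the central line.

    p0, p1: (row, col) endpoints of the line selected by the user.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    t = np.linspace(0, 1, n_samples)
    line = p0[None, :] + t[:, None] * (p1 - p0)[None, :]   # central line
    d = (p1 - p0) / np.linalg.norm(p1 - p0)
    normal = np.array([-d[1], d[0]])                       # unit vector across the line
    offsets = np.arange(-half_width, half_width + 1)
    coords = line[None, :, :] + offsets[:, None, None] * normal[None, None, :]
    # Bilinear sampling of the whole band, then averaging across it.
    samples = map_coordinates(img, [coords[..., 0], coords[..., 1]], order=1)
    return samples.mean(axis=0)
```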

10.
Purpose: To develop an automatic multimodal method for segmentation of parotid glands (PGs) from pre-registered computed tomography (CT) and magnetic resonance (MR) images, and to compare its results with those of an existing state-of-the-art algorithm that segments PGs from CT images only. Methods: Magnetic resonance images of the head and neck were registered to the accompanying CT images using two different state-of-the-art registration procedures. The reference domains of registered image pairs were divided into complementary PG regions and background according to the manual delineation of PGs on CT images provided by a physician. Patches of intensity values from both image modalities, centered around randomly sampled voxels from the reference domain, served as positive or negative samples in the training of a convolutional neural network (CNN) classifier. The trained CNN accepted a previously unseen (registered) image pair and classified its voxels according to the resemblance of their patches to the patches used for training. The final segmentation was refined using a graph-cut algorithm, followed by dilate-erode operations. Results: Using the same image dataset, segmentation of PGs was performed with the proposed multimodal algorithm and with an existing monomodal algorithm that segments PGs from CT images only. The mean Dice overlap coefficient achieved by the proposed algorithm was 78.8%, versus 76.5% for the monomodal algorithm. Conclusions: Automatic PG segmentation on the planning CT image can be augmented with the MR image modality, leading to improved radiotherapy planning for head and neck cancer.

11.
We compared the recognition of fragmented contour images in the presence and absence of noise. Both the contour images and the visual noise were synthesized from Gabor elements. The spacing between fragments in the contour images and between noise elements, as well as the sizes of the images, were varied independently of one another. The percentage of recognition did not depend on the size of the stimuli, but it differed across objects in the presence and absence of noise: recognition was higher for images with many turns in the absence of noise and, on the contrary, for images with lengthy contours of gently varying curvature in the presence of noise. The thresholds of recognition in noise depended, in general, on the ratio of the spacing between noise elements to the spacing between contour fragments.

12.
Automatic registration of microarray images. II. Hexagonal grid
MOTIVATION: In the first part of this paper, the author presented an efficient, robust, and completely automated algorithm for spot and block indexing in microarray images with rectangular grids. Although the rectangular grid is currently the most common way of grouping probes on microarray slides, another microarray technology, based on bundles of optical fibers, packs the probes in hexagonal grids. The hexagonal grid has both advantages and drawbacks relative to standard rectangular packing, and it requires adaptation and/or modification of the spot indexing algorithm presented in the first part of the paper. RESULTS: In this second part, the author presents a version of the spot indexing algorithm adapted to microarray images with spots packed in hexagonal structures. The algorithm is completely automated and works with hexagonal grids of different types, with different grid spacings, rotations, and spot sizes. It can successfully trace local and global distortions of the grid, including non-orthogonal transformations. As with the algorithm of part I, it scales linearly with grid size: the time complexity is O(M), where M is the total number of grid points in the hexagonal grid. The algorithm has been tested on both CCD and scanned images with spot expression rates as low as 2%. The processing time for an image with about 50 000 hexagonal grid points was under a second; for images with high expression rates (approximately 90%), registration is even faster, around a quarter of a second. Supplementary information: http://fleece.ucsd.edu/~vit/Registration_Supplement.pdf

13.
For cDNA array methods that depend on imaging of a radiolabel, we show that bleedover of one spot onto another, due to the gap between the array and the imaging media, can be a major problem. The images can be sharpened, however, using a blind deconvolution method based on the EM algorithm. The sharpened images look like a set of donuts, which concurs with our knowledge of the spotting process. Over-sharpened images are actually useful as well, for locating the center of each spot.
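As a rough illustration of EM-style blind sharpening, the sketch below alternates Richardson-Lucy updates between the image and an unknown blur kernel. This is a classical scheme standing in for the paper's method, with assumed parameters (flat initial kernel, odd psf_size, fixed iteration count, image larger than the kernel); over-running the iterations yields the over-sharpened, donut-like spots mentioned above.

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_rl(image, psf_size=15, n_iter=30):
    """Alternating Richardson-Lucy updates for the image and the blur
    kernel (a classical EM-style blind deconvolution sketch)."""
    img = np.clip(np.asarray(image, dtype=float), 1e-6, None)
    est = np.full_like(img, img.mean())                  # image estimate
    psf = np.ones((psf_size, psf_size)) / psf_size ** 2  # flat initial kernel
    h = psf_size // 2
    cy, cx = img.shape[0] // 2, img.shape[1] // 2        # zero-lag position
    for _ in range(n_iter):
        # RL update of the image with the kernel held fixed.
        ratio = img / np.clip(fftconvolve(est, psf, mode='same'), 1e-6, None)
        est *= fftconvolve(ratio, psf[::-1, ::-1], mode='same')
        # RL update of the kernel with the image held fixed.
        ratio = img / np.clip(fftconvolve(est, psf, mode='same'), 1e-6, None)
        corr = fftconvolve(ratio, est[::-1, ::-1], mode='same')
        psf *= corr[cy - h:cy + h + 1, cx - h:cx + h + 1]  # central crop
        psf = np.clip(psf, 0, None)
        psf /= psf.sum()                                   # keep kernel normalized
    return est, psf
```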

14.
In this paper, an image restoration algorithm is proposed to identify a noncausal blur function. Image degradation processes include both linear and nonlinear phenomena. A neural network model combining an adaptive auto-associative network with a random Gaussian process is proposed to restore the blurred image and the blur function simultaneously. The noisy, blurred images are modeled as continuous associative networks: the auto-associative part determines the image model coefficients, whereas the hetero-associative part determines the blur function of the system. The self-organization-like structure provides a potential solution to the blind image restoration problem. Estimation and restoration are implemented with an iterative gradient-based algorithm that minimizes an error function.

15.
Zebrafish is widely used to understand neural development and to model various neurodegenerative diseases. Zebrafish embryos are optically transparent, have a short development period, and can be kept alive in microplates for days, making them amenable to high-throughput microscopic imaging. High-throughput experiments can generate a large number of images in a single run, posing a challenge to researchers to analyze them efficiently and quantitatively. In this work, we develop an image processing pipeline for detecting and quantifying pigments in zebrafish embryos. The algorithm automatically detects a region of interest (ROI) enclosing an area around the pigments and then segments the pigments for quantification. In this process, the algorithm first identifies the head and torso, and then finds the boundaries corresponding to the back and abdomen by taking advantage of a priori information about the anatomy of zebrafish embryos. The method is robust in the sense that it can detect and quantify pigments even when the embryos have different orientations and curvatures. We used real data to demonstrate the method's ability to extract phenotypic information from zebrafish embryo images and compared its results with manual analysis for verification.

16.
In this work, we present a technique to semi-automatically quantify the epicardial fat in non-contrasted computed tomography (CT) images. The epicardial fat lies very close to the pericardial fat, separated only by the pericardium, which appears in the image as a very thin line that is hard to detect. We therefore developed an algorithm that uses the anatomy of the heart to detect the pericardium line via control points along the line. The detected points are then interpolated with a cubic scheme, improved to avoid the incorrect interpolation that occurs when neither variable is monotonic. The method was validated on a set of 40 CT images of the heart from 40 human subjects. In 62.5% of the cases only minimal user intervention was required, and the results compared favourably with those obtained by the manual process.
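The interpolation fix can be illustrated as follows: instead of fitting y as a cubic function of x, which breaks down when neither coordinate is monotonic along the pericardium, treat the detected control points as a parametric curve in cumulative chord length. The sketch assumes SciPy and distinct consecutive points; it illustrates the idea rather than reproducing the authors' improved interpolant.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def pericardium_curve(points, n_out=200):
    """Interpolate detected pericardium control points as a parametric curve.

    points: (N, 2) array of (row, col) control points, consecutive points
    distinct.  Chord-length parameterization sidesteps the failure of plain
    cubic interpolation y = f(x) on a non-monotonic curve.
    """
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0], np.cumsum(seg)])   # cumulative chord length
    t /= t[-1]
    cs = CubicSpline(t, pts, axis=0)            # x(t) and y(t) fitted jointly
    return cs(np.linspace(0, 1, n_out))         # (n_out, 2) smooth curve
```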

17.
Purpose: Positron emission tomography (PET) images tend to be significantly degraded by the partial volume effect (PVE) resulting from the limited spatial resolution of the reconstructed images. Our purpose is to propose a partial volume correction (PVC) method to tackle this issue. Methods: In the present work, we explore a voxel-based PVC method under the least squares (LS) framework employing anatomical non-local means (NLMA) regularization. The well-known non-local means (NLM) filter exploits the high degree of information redundancy that typically exists in images, and is usually applied to reduce image noise directly by replacing each voxel intensity with a weighted average of its non-local neighbors. Here we use NLM as a regularization term within an iterative deconvolution model to perform PVC. Further, an anatomically guided version of NLM is proposed that incorporates MRI information to improve resolution and suppress image noise. The proposed approach makes subtle use of the accompanying MRI information to define a more appropriate search space within the prior model. To optimize the regularized LS objective function, we used the Gauss-Seidel (GS) algorithm with the one-step-late (OSL) technique. Results: With NLMA, both visual and quantitative results improve. On visual inspection, NLMA reduces noise compared with other PVC methods; this is confirmed by the bias-noise curves, where NLMA gives a better bias-noise trade-off than the non-MRI-guided PVC framework and the other PVC methods. Conclusions: Our method was evaluated on amyloid brain PET imaging using the BrainWeb phantom and in vivo human data, and compared with other PVC methods. Overall, we demonstrated the value of introducing subtle MRI guidance into the regularization process, with the proposed NLMA method yielding promising visual as well as quantitative performance improvements.

18.

Background

Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie at different depths of field, necessitating capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique can be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depths of field (OEDoF) to address this issue.

Methods

The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast, followed by identification of foreground pixels. A composite image is then constructed using only these foreground pixels, which dramatically reduces the computational time; a sketch of this merging idea follows.
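A hedged sketch of the merging step, assuming a grayscale focal stack and a precomputed foreground mask (the color-conversion and mask-finding modules are not reproduced, and the Laplacian-magnitude focus measure is an assumption, not the published one):

```python
import numpy as np
from scipy import ndimage as ndi

def oedof_merge(stack, foreground):
    """Object-based focus merging in the spirit of OEDoF: only foreground
    pixels are compared across focal planes; background pixels are copied
    from a single reference plane.

    stack: (n_planes, H, W) grayscale focal stack.
    foreground: (H, W) boolean mask of object pixels.
    """
    # Per-pixel focus measure: magnitude of the Laplacian response.
    focus = np.abs(np.stack([ndi.laplace(p.astype(float)) for p in stack]))
    # Restrict the comparison to foreground pixels only (the OEDoF speed-up).
    focus[:, ~foreground] = -np.inf
    best = focus.argmax(axis=0)                  # sharpest plane per pixel
    composite = stack[len(stack) // 2].astype(float).copy()  # background plane
    rows, cols = np.nonzero(foreground)
    composite[rows, cols] = stack[best[rows, cols], rows, cols]
    return composite
```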

Results

We used 250 images obtained from 45 specimens of confirmed malaria infections to test the proposed algorithm. Composite images with all objects in focus were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required only a quarter of the processing time.

Conclusions

This work presents a modification of the extended depth of field approach for efficiently enhancing microscopic images. The selective object processing scheme used in OEDoF significantly reduces the overall processing time while maintaining the clarity of important image features. Empirical results on parasite-infected red blood cell images showed that the proposed method efficiently and effectively produces in-focus composite images. Given this speed improvement, the proposed algorithm is suitable for processing large numbers of microscope images, e.g., as required for medical diagnosis.

19.
Niles WD, Li Q, Cohen FS. Biophysical Journal 1992;63(3):710-722.
We have developed an algorithm for the automated detection of the dynamic pattern characterizing flashes of fluorescence in video images of membrane fusion. The algorithm detects the spatially localized, transient increases and decreases in brightness that result from the dequenching of fluorescent dye in phospholipid vesicles or lipid-enveloped virions fusing with a planar membrane. A flash is identified in video images by its nonzero time derivative and the symmetry of its spatial profile. Differentiation is implemented by forward and backward subtraction of video frames. The algorithm groups spatially connected pixels brighter than a user-specified threshold into distinct objects in the forward- and backward-differentiated images. Objects are classified as either flashes or noise particles by comparing the symmetries of matched forward and backward difference profiles and then by tracking each profile through successive difference images. The number of flashes identified depends on the brightness threshold, the size of the convolution kernel used to filter the image, and the time difference between the subtracted video frames. When these parameters are changed so that the algorithm identifies an increasing percentage of the flashes recognized by eye, an increasing number of noise objects are mistakenly identified as flashes; these mistaken flashes can be eliminated by a human observer. The algorithm considerably shortens the time needed to analyze video data. Tested extensively with phospholipid vesicle and virion fusion with planar membranes, our implementation accurately determined the rate of fusion of influenza virions labeled with the lipophilic dye octadecylrhodamine (R18).
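The differencing-and-grouping core can be sketched as below. This is a schematic NumPy/SciPy illustration, not the published implementation: the spatial-profile symmetry test and the human review stage are reduced to a simple requirement that each bright blob in the forward difference is also decaying in the backward difference, and all thresholds are placeholders.

```python
import numpy as np
from scipy import ndimage as ndi

def detect_flashes(frames, dt=2, threshold=10.0, min_size=4):
    """Candidate flash detection by forward/backward frame differences:
    a flash is a compact blob whose brightness first rises (forward
    difference) and then falls again (backward difference)."""
    frames = np.asarray(frames, dtype=float)
    events = []
    for t in range(dt, len(frames) - dt):
        fwd = frames[t] - frames[t - dt]        # brightness increase
        bwd = frames[t] - frames[t + dt]        # brightness decrease
        labels, n = ndi.label(fwd > threshold)  # connected bright objects
        for i, box in enumerate(ndi.find_objects(labels), start=1):
            blob = labels[box] == i
            if blob.sum() < min_size:           # too small: noise particle
                continue
            # Temporal symmetry: the same spot must also be decaying.
            if bwd[box][blob].mean() > threshold / 2:
                events.append((t, box))
    return events
```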
