Similar Documents
20 similar documents found (search time: 15 ms)
1.
Fluoroscopic images exhibit severe signal-dependent quantum noise, due to the reduced X-ray dose involved in image formation, that is generally modelled as Poisson-distributed. However, gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improve the quality of fluoroscopic images and their clinical information content. Simple averaging filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for signal-dependent noise (AAS, BM3Dc, HHM, TLS) and for independent additive noise (AV, BM3D, K-SVD) is presented. Simulated test images degraded by various levels of Poisson quantum noise, as well as real clinical fluoroscopic images, were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. Performance was evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restorations than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective on both simulated and real data, also in the presence of gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to help the denoising algorithms perform more effectively.
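A minimal sketch of the evaluation setup described above, assuming NumPy: a flat test image is degraded with Poisson quantum noise (so the noise variance equals the expected intensity), and restoration quality is scored with MSE and PSNR. The image size and gray level are illustrative, not taken from the study.

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for a given peak gray level."""
    return 10.0 * np.log10(peak ** 2 / mse(reference, test))

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)           # flat test image at gray level 100
noisy = rng.poisson(clean).astype(float)   # quantum noise: variance ~ intensity

# for Poisson noise on a flat field, the MSE tracks the mean intensity
print(round(mse(clean, noisy)), round(psnr(clean, noisy), 1))
```

A denoising filter would be scored by computing the same metrics between `clean` and the filter's output instead of the raw `noisy` image.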

2.
Observers moving through a three-dimensional environment can use optic flow to determine their direction of heading. Existing heading algorithms use Cartesian flow fields, in which image flow is the displacement of image features over time. I explore a heading algorithm that uses affine flow instead. The affine flow at an image feature is its displacement modulo an affine transformation defined by its neighborhood. Modeling the observer's instantaneous motion as a translation plus a rotation about an axis through the eye, affine flow is tangent to the translational field lines on the observer's viewing sphere. These field lines form a radial flow field whose center is the direction of heading. The affine-flow heading algorithm has characteristics that can be used to determine whether the human visual system relies on it: it is immune to observer rotation and to arbitrary affine transformations of its input images; its accuracy improves with increasing variation in environmental depth; and it cannot recover heading in an environment consisting of a single plane, because affine flow vanishes in that case. Translational field lines can also be approximated through differential Cartesian motion. I compare the performance of heading algorithms based on affine flow, differential Cartesian flow, and least-squares search.
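The radial-field geometry can be illustrated with plain Cartesian translational flow (a simplification, not the affine variant): for pure translation each flow vector is parallel to the line from the feature to the heading point, so the focus of expansion falls out of a linear least-squares problem. NumPy is assumed; the scene and heading are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
foe = np.array([0.3, -0.2])                 # true focus of expansion (heading)
pts = rng.uniform(-1, 1, size=(200, 2))     # image feature positions
depth = rng.uniform(1, 5, size=200)         # varying environmental depth
flow = (pts - foe) / depth[:, None]         # radial translational flow field

# Each flow vector v is parallel to (p - e), so cross(v, p - e) = 0,
# a linear constraint on the focus of expansion e = (ex, ey):
#   vy*ex - vx*ey = vy*px - vx*py
A = np.column_stack([flow[:, 1], -flow[:, 0]])
b = flow[:, 1] * pts[:, 0] - flow[:, 0] * pts[:, 1]
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(est)
```

With noise-free radial flow the recovered heading matches the true focus of expansion; the depth variation changes vector lengths but not directions, which is why it does not disturb this estimate.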

3.
Cell image segmentation plays a central role in numerous biology studies and clinical applications, so the development of segmentation algorithms with high robustness and accuracy is attracting increasing attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and the segmentation of clustered cells for all cells in the field of view in negative phase contrast images. A new method combining thresholding with an edge-based active contour method is proposed to optimize cell boundary detection. To segment clustered cells, the geographic peaks of cell light intensity are used to detect the number and locations of cells in each cluster. The working principles of the algorithms are described, and the influence of the parameters in cell boundary detection and of the chosen threshold value on the final segmentation results is investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments and its performance is evaluated. Results show that the proposed method achieves optimized cell boundary detection and highly accurate segmentation of clustered cells.

4.
Médecine Nucléaire, 2007, 31(4): 153–159
Respiratory motion reduces overall qualitative and quantitative accuracy in emission tomography imaging. Its impact is further highlighted in multi-modality imaging devices, where differences in respiratory conditions between the acquisition of the anatomical and functional datasets can lead to significant artefacts. The current state of the art in accounting for such effects is the use of respiratory-gated acquisitions. Although such acquisitions may reduce respiratory motion effects, the improvement is limited because only part of the available data is used to reconstruct each individual gated frame. Approaches that correct for the respiratory motion between individual gated frames, so that the frames can be combined, fall into two categories: image-based and raw-data-based. The image-based approaches use registration algorithms to realign the gated images and then sum them together, while the raw-data approaches incorporate transformations that account for the motion between frames either prior to or during the reconstruction of all of the acquired data. Previous research in this field has demonstrated that a non-rigid, locally based model accounts for respiratory motion between gated frames better than an affine model. In addition, superior image contrast can be obtained by incorporating the necessary transformation into the reconstruction process rather than using an image-based approach.

5.

Background

Macrophages represent the front lines of our immune system; they recognize and engulf pathogens or foreign particles, thus initiating the immune response. Imaging macrophages presents unique challenges, as most optical techniques require labeling or staining of the cellular compartments in order to resolve organelles, and such stains or labels have the potential to perturb the cell, particularly in cases where incomplete information exists regarding the precise cellular reaction under observation. Label-free imaging techniques such as Raman microscopy are thus valuable tools for studying the transformations that occur in immune cells upon activation, at both the molecular and organelle levels. Due to extremely low signal levels, however, Raman microscopy requires sophisticated image processing techniques for noise reduction and signal extraction. To date, efficient, automated algorithms for resolving sub-cellular features in noisy, multi-dimensional image sets have not been explored extensively.

Results

We show that hybrid z-score normalization and least-squares regression (Z-LSR) can highlight spectral differences within the cell and provide image contrast dependent on spectral content. In contrast to typical Raman image processing methods based on multivariate analysis, such as singular value decomposition (SVD), our implementation of the Z-LSR method can operate nearly in real time. In spite of its computational simplicity, Z-LSR automatically removes background and bias in the signal, improves the resolution of spatially distributed spectral differences, and enables sub-cellular features to be resolved in Raman microscopy images of mouse macrophage cells. Significantly, the Z-LSR-processed images automatically exhibited subcellular architectures, whereas SVD in general requires human assistance in selecting the components of interest.
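An illustrative sketch (not the authors' exact Z-LSR pipeline) of why per-spectrum z-scoring yields contrast driven by spectral content: it cancels the per-pixel gain and offset that otherwise dominate raw Raman intensity images. NumPy is assumed; the Gaussian "band", pixel counts and gain ranges are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_channels = 200, 64
band = np.exp(-((np.arange(n_channels) - 20.0) ** 2) / 8.0)  # a Raman band

# half the pixels carry the extra band; every pixel has a random
# overall gain (laser power / focus variation) and an additive bias
gain = rng.uniform(0.5, 2.0, (n_pixels, 1))
bias = rng.uniform(0.0, 1.0, (n_pixels, 1))
spectra = bias + gain * (1.0 + 0.02 * rng.standard_normal((n_pixels, n_channels)))
spectra[:100] += gain[:100] * band

# z-scoring each spectrum removes its gain and bias, so the value at
# the band channel now reflects spectral content only
z = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)
contrast = z[:, 20]
print(contrast[:100].mean(), contrast[100:].mean())
```

After normalization, the band-bearing pixels separate cleanly from the rest at the band channel, which is the kind of spectrally driven image contrast the abstract describes.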

Conclusions

The computational efficiency of Z-LSR enables automated resolution of sub-cellular features in large Raman microscopy data sets without compromising image quality or losing information in the associated spectra. These results motivate further use of label-free microscopy techniques in real-time imaging of live immune cells.

6.
The graphics processing unit (GPU), which was originally used exclusively for visualization purposes, has evolved into an extremely powerful co-processor. Meanwhile, through the development of elaborate interfaces, the GPU can be used to process data and handle computationally intensive applications. The speed-up factors attained relative to the central processing unit (CPU) depend on the particular application, as the GPU architecture performs best for algorithms that exhibit high data parallelism and high arithmetic intensity. Here, we evaluate the performance of the GPU on a number of common algorithms used for three-dimensional image processing. The algorithms were developed on a new software platform called CUDA, which allows a direct translation from C code to the GPU. The implemented algorithms include spatial transformations, real-space and Fourier operations, pattern recognition procedures, reconstruction algorithms and classification procedures. In our implementation, the direct porting of C code to the GPU achieves typical acceleration values on the order of 10–20 times compared to a state-of-the-art conventional processor, though this varies with the type of algorithm. The gained speed-up comes at no additional cost, since the software runs on the GPU of the graphics card of common workstations.

7.
Automated image detection and segmentation in blood smears
S.S. Poon, R.K. Ward, B. Palcic. Cytometry, 1992, 13(7): 766–774
A simple technique that automatically detects and then segments nucleated cells in Wright-Giemsa-stained blood smears is presented. Our method differs from others in 1) the simplicity of its algorithms; 2) the inclusion of touching (as well as non-touching) cells; and 3) the use of these algorithms to segment as well as to detect nucleated cells in conventionally prepared smears. The method involves: 1) acquisition of spectral images; 2) preprocessing of the acquired images; 3) detection of single and touching cells in the scene; 4) segmentation of the cells into nuclear and cytoplasmic regions; and 5) postprocessing of the segmented regions. The first two steps are employed to obtain high-quality images, remove random noise, and correct aberration and shading effects. Spectral information is used in step 3 to separate the nucleated cells from the rest of the scene. Using the initial cell masks, nucleated cells that are just touching are detected and separated. Simple features are then extracted and conditions applied so that single nucleated cells are finally selected. In step 4, the intensity variations of the cells are used to segment the nucleus from the cytoplasm. The success rate in segmenting the nucleated cells is between 81% and 93%. The major errors in segmentation of the nucleus and the cytoplasm in the recognized nucleated cells are 3.5% and 2.2%, respectively.

8.

Background  

High content screening (HCS) is a powerful method for the exploration of cellular signalling and morphology that is rapidly being adopted in cancer research. HCS uses automated microscopy to collect images of cultured cells, which are subjected to segmentation algorithms that identify cellular structures and quantify their morphology for hundreds to millions of individual cells. However, image analysis may be imperfect, especially for "HCS-unfriendly" cell lines whose morphology is not well handled by current image segmentation algorithms. We asked whether segmentation errors were common for a clinically relevant cell line, whether such errors had measurable effects on the data, and whether HCS data could be improved by automated identification of well-segmented cells.

9.
SNOMAD is a collection of algorithms for the normalization and standardization of gene expression datasets derived from diverse biological and technological sources. In addition to conventional transformations and visualization tools, SNOMAD includes two non-linear transformations that correct for bias and variance that are non-uniformly distributed across the range of microarray element signal intensities: (1) local mean normalization; and (2) local variance correction (Z-score generation using a locally calculated standard deviation).
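A sketch of the local variance correction idea, assuming NumPy: each log-ratio is z-scored against a mean and standard deviation computed in a sliding window over the intensity-ranked elements, so the correction adapts to intensity-dependent variance. The window size and noise model are illustrative, not SNOMAD's actual parameters.

```python
import numpy as np

def local_z(values, intensities, window=101):
    """Z-score each value using mean/std from a sliding window over
    the intensity-ordered data (local variance correction)."""
    order = np.argsort(intensities)
    ranked = values[order]
    half = window // 2
    z = np.empty_like(ranked)
    for i in range(len(ranked)):
        lo, hi = max(0, i - half), min(len(ranked), i + half + 1)
        block = ranked[lo:hi]
        z[i] = (ranked[i] - block.mean()) / block.std()
    out = np.empty_like(z)
    out[order] = z          # restore the original element order
    return out

rng = np.random.default_rng(4)
intensity = rng.uniform(1, 10, 2000)
ratio = rng.normal(0.0, 1.0 / intensity)   # spread grows at low intensity
z = local_z(ratio, intensity)
print(ratio.std(), z.std())
```

The raw log-ratios are strongly heteroscedastic across intensity, while the locally z-scored values have roughly unit variance everywhere, which is the point of the correction.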

10.
11.
Crystal unbending, the process of recovering a perfect crystal lattice from experimental data, is one of the more important steps in electron crystallography image processing. Unbending involves three steps: estimation of the unit-cell displacements from their ideal positions, extension of the deformation field to the whole image, and transformation of the image to recover an ideal crystal. In this work, we present a systematic analysis of the second step, addressing two issues. First, whether the unit cells remain undistorted and only the distances between them should change (rigid case), or whether they should undergo the same deformation as the whole crystal (elastic case). Second, the performance of different extension algorithms (interpolation versus approximation) is explored. Our experiments show no difference between the elastic and rigid cases, or among the extension algorithms, which implies that the deformation fields are constant over large areas. Furthermore, our results indicate that the main source of error is the transformation of the crystal image.

12.
The library POLCA implements the averaging of biological structures whose images are recorded in digital form from electron micrographs. The averaging protocol is based upon a method developed about ten years ago, which allows one to operate on a sequence of objects oriented and displaced at random within their frames; the relative rotations and displacements of the structures are detected with the use of correlation algorithms and modified to make all objects appear the same, apart from their noisy components. The average image is then obtained by simple addition, and the signal-to-noise ratio is improved by a factor equal to the square root of the number of objects used to calculate the average. With respect to the original implementation of the method, two novel features characterize the library. The first concerns the functions that are cross-correlated to determine the relative rotations of the structures; the functions used here are the inverse transforms of the amplitude spectra (IAS functions), which give rise to sharp maxima when they are cross-correlated. The second is the systematic adoption, in the transformations of coordinates and in other circumstances, of an interpolation technique based upon the Fourier series kernel. POLCA is written in C and runs on a VME machine under the UNIX V/68 operating system. A programming style has been adopted to exploit fully the machine resources. Received on December 8, 1989; accepted on January 31, 1990.
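The square-root-of-N gain quoted above can be checked numerically. A sketch assuming NumPy, with perfectly aligned synthetic objects (the correlation-based rotational and translational alignment itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(5)
signal = np.sin(np.linspace(0, 4 * np.pi, 256))          # stand-in "structure"
n_objects, sigma = 100, 1.0
noisy = signal + rng.normal(0, sigma, (n_objects, 256))  # aligned noisy copies

average = noisy.mean(axis=0)
residual_one = np.std(noisy[0] - signal)   # noise level in a single image
residual_avg = np.std(average - signal)    # noise level after averaging
ratio = residual_one / residual_avg
print(ratio)                               # ~ sqrt(100) = 10
```

The residual noise drops by roughly sqrt(100) = 10, matching the stated improvement factor; in practice the alignment step must succeed first, or the averaging blurs the structure instead.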

13.
MOTIVATION: Advances in microscopy technology have led to the creation of high-throughput microscopes capable of generating several hundred gigabytes of images in a few days. Analyzing such a wealth of data manually is nearly impossible and requires an automated approach. There are at present a number of open-source and commercial software packages that allow the user to apply algorithms of different degrees of sophistication to the images and extract desired metrics. However, the types of metrics that can be extracted are severely limited by the specific image processing algorithms that the application implements, and by the expertise of the user. In most commercial software, code unavailability prevents the end user from implementing newly developed algorithms better suited for a particular type of imaging assay. While it is possible to implement new algorithms in open-source software, rewiring an image processing application requires a high degree of expertise. To obviate these limitations, we have developed an open-source high-throughput application that allows implementation of different biological assays, such as cell tracking or ancestry recording, through the use of small, relatively simple image processing modules connected into sophisticated imaging pipelines. By connecting modules, non-expert users can apply the particular combination of well-established and novel algorithms, developed by us and others, that is best suited to each individual assay type. In addition, our data exploration and visualization modules make it easy to discover or select specific cell phenotypes from a heterogeneous population. AVAILABILITY: CellAnimation is distributed under the Creative Commons Attribution-NonCommercial 3.0 Unported license (http://creativecommons.org/licenses/by-nc/3.0/). CellAnimation source code and documentation may be downloaded from www.vanderbilt.edu/viibre/software/documents/CellAnimation.zip.
Sample data are available at www.vanderbilt.edu/viibre/software/documents/movies.zip. CONTACT: walter.georgescu@vanderbilt.edu SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

14.
We introduce a fast, error-free tracking method applicable to sequences of two- and three-dimensional images. The core idea is to use Quadtree (resp. Octree) data structures to represent the spatial discretization of an image in two (resp. three) spatial dimensions. This representation makes it possible to merge into large computational cells the regions that can be faithfully described at such a coarse resolution, significantly reducing the total number of degrees of freedom that are processed without compromising accuracy. The encoding is particularly effective for algorithms based on moving fronts, since the adaptive refinement provides a natural means of focusing processing resources on information near the front. In this paper, we take an existing contour-based tracker and reformulate it for Quad-/Octree data structures, presenting the relevant mathematical assumptions and derivations. We then demonstrate that, on standard biomedical image sequences, a speed-up of 5× is easily achieved in 2D and about 10× in 3D.
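A sketch of the underlying Quadtree idea, assuming NumPy: uniform regions merge into single computational cells, so the degrees of freedom concentrate near the "front" (here, the boundary of a bright square). The splitting rule and tolerance are illustrative, not the paper's scheme.

```python
import numpy as np

def build_quadtree(img, x0, y0, size, tol, leaves):
    """Recursively split a square block until it is uniform within tol."""
    block = img[y0:y0 + size, x0:x0 + size]
    if size == 1 or block.max() - block.min() <= tol:
        leaves.append((x0, y0, size))       # one merged computational cell
        return
    h = size // 2
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        build_quadtree(img, x0 + dx, y0 + dy, h, tol, leaves)

# synthetic image: flat background with one bright square (a "front")
img = np.zeros((64, 64))
img[20:28, 20:28] = 1.0
leaves = []
build_quadtree(img, 0, 0, 64, tol=0.0, leaves=leaves)
print(len(leaves), 64 * 64)    # far fewer cells than pixels
```

The leaf cells tile the image exactly, but their count is a tiny fraction of the pixel count; refinement only occurs where the bright square's boundary demands it, which is what makes the representation effective for front tracking.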

15.
FEBS Letters, 2014, 588(8): 1331–1338
The formation of metastases negatively impacts the survival prognosis of cancer patients. Although the various steps involved in their formation are relatively well identified, the molecular mechanisms responsible for the emergence of invasive cancer cells remain incompletely resolved. Elucidating the mechanisms that allow cancer cells to escape from the tumor is a crucial point, since this is the first step in the metastatic potential of a solid tumor. To become invasive, cancer cells have to undergo transformations such as down-regulation of cell-cell adhesions, modification of cell-matrix adhesions, and acquisition of proteolytic properties. These transformations are accompanied by the capacity to “activate” stromal cells, which may favor the motility of the invasive cells through the extracellular matrix. Since modulation of gap junctional intercellular communication is known to be involved in cancer, we consider whether these transformations, necessary for the acquisition of an invasive phenotype, are related to gap junctions and their structural proteins, the connexins. In this review, emerging roles of connexins and gap junctions in the process of tissue invasion are proposed.

16.
A method is proposed to assess the cost of character-state transformations based on their congruence. Measuring the distortion of different transformations with a convex, increasing function of the number of transformations, and choosing the reconstructions that minimize the distortion over all transformations, may provide a better optimality criterion than the linear functions implemented in currently used optimization methods. If trees are optimized under such a measure, transformation costs are determined dynamically during reconstruction; this leads to selecting trees under which the inferred state transformations are as reliable as possible. The method is not iterative (thus avoiding the concern of different final results for different starting points) and has an explicit optimality criterion. It has a high computational cost; algorithms to lessen the computations required for optimization and searches are described.

17.
One of the main applications of electrophoretic 2-D gels is the analysis of differential responses between conditions; as a result, specific spots are present in one image but not in the other. In other cases, the same experiment is repeated between 2 and 12 times in order to increase statistical significance. In both situations, a major difficulty is that 2-D gels are affected by spatial distortions due to run-time differences and dye-front deformations, resulting in images that differ significantly not only in content but also in geometry. In this technical brief, we show how to use free, state-of-the-art image registration and fusion algorithms developed by us to solve the problem of comparing differential expression profiles, or of computing an "average" image from a series of virtually identical gels.

18.
An overview of image-processing methods for Affymetrix GeneChips
We present an overview of image-processing methods for Affymetrix GeneChips. All GeneChips are affected to some extent by spatially coherent defects, and image processing has a number of potential impacts on the downstream analysis of GeneChip data. Fortunately, there are now a number of robust and accurate algorithms that identify the most disabling defects. One group of algorithms concentrates on the transformation from the original hybridisation DAT image to the representative CEL file. Another set uses dedicated pattern-recognition routines to detect different types of hybridisation defect in replicates. A third type exploits the information provided by public repositories of GeneChips (such as GEO). The use of these algorithms improves the sensitivity of GeneChips and should be a prerequisite for studies in which there are only a few probes per relevant biological signal, such as exon arrays and SNP chips.

19.
The combination of digitized microscopy, object-recognition algorithms and fluorescent labeling is a promising approach for reliable, quick, automated and cost-effective screening of clinical specimens. We describe two conceptually different algorithms for detecting objects in fluorescence microscopy images. One, which is partially automated, compares a mask representing a typical object with every position in the image; the other, which is fully automated, calculates threshold intensities to segment the image into object and background regions. Applications of the algorithms, in conjunction with a prototype image-based cytometer, are demonstrated for determining the DNA ploidy distribution of cultured human endometrial cells, and for determining the DNA ploidy distribution and the fraction of cells expressing the E6 antigen of human papillomavirus serotypes 16 and 18 in a Pap smear. The encouraging results from this study suggest that automated image-based cytometry utilizing fluorescent stains will be a valuable asset for clinical screening.

20.
Homoeotic transformations are substitutions of one body part for another that arise during embryogenesis or regeneration. They are well known among the Arthropoda but are not generally thought to occur in Man or other vertebrates. In this paper, the occurrence and characteristics of 21 types of epithelial heterotopia and metaplasia are reviewed, and it is concluded that they are fully comparable with the homoeotic transformations of the arthropods. The transformations are concentrated in the gastrointestinal, urinary and female reproductive systems and typically appear as foci of ectopic epithelium with a sharp discontinuity of cell type at the edges of the patches. Most of the transformations occur in renewal tissues and must therefore be interpreted as changes in the states of determination (epigenetic codings) of the stem cells rather than changes between already differentiated cells. Most, but not all, of the transformations are between tissues whose precursors are neighbouring regions of a common cell sheet during early embryogenesis and which are therefore likely to have neighbouring epigenetic codings. Following the Cairns hypothesis of epithelial organization, it is proposed that stem cells themselves are protected against changes in epigenetic coding, but their daughter cells, normally destined to differentiate and die, are not. Homoeotic transformations may thus occur when daughter cells are promoted to stem cells, which happens either during the growth phase of the organism or during tissue regeneration in the adult.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号