Similar Documents
20 similar documents found.
1.
The purpose of the present study was to establish a rapid and reproducible method for quantification of tissue-infiltrating leukocytes using computerized image analysis. To achieve this, the staining procedure, the image acquisition, and the image analysis method were optimized. Because of the adaptive features of the human eye, computerized image analysis is more sensitive to variations in staining than manual image analysis. To minimize variations in staining, an automated immunostainer was used. With a digital scanner camera, low-magnification images could be sampled at high resolution, making it possible to analyze larger tissue sections. Image analysis was performed by color thresholding of the digital images based on values of the hue, saturation, and intensity (HSI) color mode, which we consider superior to the red, green, and blue (RGB) color mode for analysis of most histological stains. To evaluate the method, we compared computerized analysis of images at ×100 or ×12.5 magnification to assess leukocytes infiltrating rat brain tumors after peripheral immunizations with tumor cells genetically modified to express rat interferon-gamma (IFN-gamma) or medium controls. The results generated by both methods correlated well and did not show any significant differences. The method allows efficient and reproducible processing of large tissue sections, is less time-consuming than conventional methods, and can be performed with standard equipment and software. (J Histochem Cytochem 49:1073-1079, 2001)
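The HSI-style color thresholding described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it uses the standard-library HSV conversion as a stand-in for HSI, and the hue/saturation/intensity cutoffs shown are hypothetical values that would need tuning for each stain.

```python
import colorsys

def hsi_threshold(pixels, hue_lo, hue_hi, sat_min, int_max):
    """Flag pixels whose hue falls in [hue_lo, hue_hi] (wrap-around allowed),
    with saturation above sat_min and intensity below int_max.
    pixels: iterable of (r, g, b) floats in [0, 1]."""
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)  # HSV as a stand-in for HSI
        if hue_lo <= hue_hi:
            in_hue = hue_lo <= h <= hue_hi
        else:                                   # hue range wraps past 1.0
            in_hue = h >= hue_lo or h <= hue_hi
        mask.append(in_hue and s >= sat_min and v <= int_max)
    return mask

# A brown DAB-like pixel is selected; a blue hematoxylin-like pixel is not.
stained, counter = (0.55, 0.35, 0.20), (0.30, 0.30, 0.80)
print(hsi_threshold([stained, counter], 0.0, 0.15, 0.3, 0.9))  # → [True, False]
```

Selecting on hue first, then gating by saturation and intensity, is what makes this more robust to staining variation than a single RGB cutoff.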

2.
We describe a new light microscopic imaging system and method to perform high-throughput color image analysis on histological tissue sections. The system features a computer-controlled, random-access liquid crystal tunable filter and high-resolution digital camera on a conventional brightfield microscope. For any combination of stains, the method determines the spectral transmittance of each stain on the slide and selects two or more wavelengths at which the differential absorption between stain and counterstain is greatest and the exposure time is reasonably short. Flatfield-corrected digital images at these wavelengths are acquired and divided to produce a gray-scale ratio image. The ratio image is calculated such that the stained features of interest are highlighted above a uniform background and the counterstained features are highlighted below background. Image thresholding procedures, using either visual inspection or a threshold value determined by the image mean intensity and standard deviation, are used to segment the stained features of interest for subsequent morphometry. Results are presented for peroxidase-AEC-labeled tumor tissue and trichrome-stained biomaterial implant tissues. In principle, the method should work for any combination of colored stains. (J Histochem Cytochem 47:1307-1313, 1999)
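The ratio-image idea above can be sketched with NumPy. This is a schematic, not the paper's pipeline: the synthetic transmittance values and the mean + k·std threshold constant are assumptions for illustration.

```python
import numpy as np

def ratio_image(stain_band, counter_band, flat_stain, flat_counter):
    """Flat-field correct two narrow-band images, then divide them so that
    features absorbing at the stain wavelength rise above a uniform background."""
    a = stain_band / np.clip(flat_stain, 1e-9, None)
    b = counter_band / np.clip(flat_counter, 1e-9, None)
    return b / np.clip(a, 1e-9, None)

def mean_std_threshold(img, k=2.0):
    """Segment features whose ratio exceeds mean + k * std of the image."""
    return img > img.mean() + k * img.std()

# Synthetic demo: a 3x3 stained patch (transmittance 0.2) on a 0.9 background.
stain_band = np.full((16, 16), 0.9); stain_band[5:8, 5:8] = 0.2
counter_band = np.full((16, 16), 0.9)
flat = np.ones((16, 16))
mask = mean_std_threshold(ratio_image(stain_band, counter_band, flat, flat))
print(int(mask.sum()))  # → 9
```

Dividing the two bands cancels illumination that is common to both, which is why the background of the ratio image is uniform even when the raw images are not.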

3.
A critical step in the analysis of images is identifying the area of interest, e.g. nuclei. When the nuclei are brighter than the remainder of the image, an intensity threshold can be chosen to identify them. Intensity thresholding is complicated by variations in the intensity of individual nuclei and in their intensity relative to their surroundings. To compensate, thresholds can be based on local rather than global intensities. By testing local thresholding methods we found that the local mean performed poorly, while the Phansalkar method and a new method based on identifying the local background were superior. A new colocalization coefficient, the Hcoef, highlights a number of controversial issues: (i) whether molecular interactions are measurable, (ii) whether to include voxels without fluorophores in calculations, and (iii) the meaning of negative correlations. Negative correlations can arise biologically (a) because the two fluorophores are in different places or (b) when high intensities of one fluorophore coincide with low intensities of the other. The cases are distinct, and we argue that it is only relevant to measure correlation using pixels that contain both fluorophores and, when the fluorophores are in different places, to simply report the lack of co-occurrence and omit these uninformative negative correlations. The Hcoef could report molecular interactions in a homogeneous medium. But biology is not homogeneous, and distributions also reflect physico-chemical properties, targeted delivery and retention. The Hcoef actually measures a mix of correlation and co-occurrence, which makes its interpretation problematic; in the absence of a convincing demonstration we advise caution, favouring separate measurements of correlation and of co-occurrence.
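The Phansalkar local threshold mentioned above has a published closed form: T = m·(1 + p·e^(−q·m) + k·(s/r − 1)), where m and s are the local window mean and standard deviation. Below is a slow but readable sketch; the default constants (k=0.25, r=0.5, p=2, q=10) assume intensities normalized to [0, 1], and the simple clipped-window boundary handling is an implementation choice of this sketch.

```python
import numpy as np

def phansalkar_threshold(img, radius=3, k=0.25, r=0.5, p=2.0, q=10.0):
    """Local thresholding after Phansalkar et al.: each pixel is compared
    against a threshold built from its neighborhood mean and std, which
    boosts detection of dim nuclei on locally dark backgrounds.
    img: float array scaled to [0, 1]."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            win = img[max(0, i - radius):i + radius + 1,
                      max(0, j - radius):j + radius + 1]
            m, s = win.mean(), win.std()
            t = m * (1.0 + p * np.exp(-q * m) + k * (s / r - 1.0))
            mask[i, j] = img[i, j] > t
    return mask

# A dim 21x21 field with a brighter 5x5 'nucleus' in the middle.
img = np.full((21, 21), 0.1); img[8:13, 8:13] = 0.9
mask = phansalkar_threshold(img)
print(bool(mask[10, 10]), bool(mask[0, 0]))  # → True False
```

The p·e^(−q·m) term raises the threshold in dark regions, which suppresses the false positives that a plain local-mean threshold produces on flat background.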

4.
Cell image segmentation plays a central role in numerous biological studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting increasing attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and the segmentation of clustered cells for all cells in the field of view in negative phase contrast images. A new method combining thresholding and an edge-based active contour method was proposed to optimize cell boundary detection. To segment clustered cells, the geographic peaks of cell light intensity were utilized to detect the number and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of the parameters in cell boundary detection and of the selection of the threshold value on the final segmentation results is investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments and its performance is evaluated. Results show that the proposed method can achieve optimized cell boundary detection and highly accurate segmentation for clustered cells.
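The "geographic peaks" idea, using local intensity maxima to count and seed clustered cells, can be illustrated with a naive strict-maximum detector. The neighborhood size and the two-blob test image are assumptions of this sketch; the paper's actual peak criterion may differ.

```python
import numpy as np

def peak_seeds(img, radius=2):
    """Return (row, col) positions that are strict maxima of their
    (2*radius+1)^2 neighborhood — naive seeds for splitting clustered cells."""
    h, w = img.shape
    peaks = []
    for i in range(h):
        for j in range(w):
            win = img[max(0, i - radius):i + radius + 1,
                      max(0, j - radius):j + radius + 1]
            # strict maximum: the pixel equals the window max, uniquely
            if img[i, j] == win.max() and (win == win.max()).sum() == 1:
                peaks.append((i, j))
    return peaks

# Two overlapping Gaussian 'cells' yield exactly two seeds, one per cell.
y, x = np.mgrid[0:20, 0:30]
img = np.exp(-((x - 9) ** 2 + (y - 10) ** 2) / 8.0) \
    + np.exp(-((x - 20) ** 2 + (y - 10) ** 2) / 8.0)
print(peak_seeds(img))  # → [(10, 9), (10, 20)]
```

Each seed can then initialize a contour or a watershed basin so that touching cells are split along the valley between their peaks.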

5.
Quantifying the anatomical data acquired from three‐dimensional (3D) images has become increasingly important in recent years. Visualization and image segmentation are essential for acquiring accurate and detailed anatomical data from images; however, plant tissues such as leaves are difficult to image by confocal or multi‐photon laser scanning microscopy because their airspaces generate optical aberrations. To overcome this problem, we established a staining method based on Nile Red in silicone‐oil solution. Our staining method enables color differentiation between lipid bilayer membranes and airspaces, while minimizing any damage to leaf development. By repeated applications of our staining method we performed time‐lapse imaging of a leaf over 5 days. To counteract the drastic decline in signal‐to‐noise ratio at greater tissue depths, we also developed a local thresholding method (direction‐selective local thresholding, DSLT) and an automated iterative segmentation algorithm. The segmentation algorithm uses the DSLT to extract the anatomical structures. Using the proposed methods, we accurately segmented 3D images of intact leaves to single‐cell resolution, and measured the airspace volumes in intact leaves.

6.
IJ_Rhizo: an open-source software to measure scanned images of root samples

Background and aims

This paper provides an overview of the measuring capabilities of IJ_Rhizo, an ImageJ macro that measures scanned images of washed root samples. IJ_Rhizo is open-source, platform-independent and offers a simple graphical user interface (GUI) for a main audience of non-programmer scientists. Being open source, it is also fully modifiable to accommodate the specific needs of more computer-literate users. A comparison of IJ_Rhizo's performance with that of the widely used commercial package WinRHIZO is discussed.

Methods

We compared IJ_Rhizo's performance with that of the commercial package WinRHIZO using two sets of images: one comprising test-line images, the second consisting of images of root samples collected in the field. IJ_Rhizo and WinRHIZO estimates were compared by means of correlation and regression analysis.

Results

IJ_Rhizo "Kimura" and WinRHIZO "Tennant" were the length estimates that were best linearly correlated with each other. Correlation between average root diameter estimates was weaker, owing to the sensitivity of this parameter to thresholding and filtering of image background noise.

Conclusions

Overall, IJ_Rhizo offers new opportunities for researchers who cannot afford the cost of commercial software packages to carry out automated measurement of scanned images of root samples without sacrificing accuracy.
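For context, the "Tennant" estimate referred to in the Results is, to the best of our reading, the classic modified line-intersect formula (Tennant 1975): root length ≈ 11/14 × number of grid intersections × grid unit. A one-line sketch, with the caveat that the 11/14 factor is quoted from memory of the standard formula rather than from this paper:

```python
def tennant_length(n_intersections, grid_unit):
    """Tennant (1975) modified line-intersect root-length estimate:
    L = 11/14 * N * grid unit (N = root/gridline intersection count)."""
    return 11.0 / 14.0 * n_intersections * grid_unit

# 140 intersections counted on a 1 cm grid:
print(tennant_length(140, 1.0))  # → 110.0
```

Skeleton-pixel methods such as Kimura's instead sum orthogonal and diagonal pixel steps along the thinned root, which is why the two estimates correlate but do not coincide exactly.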

7.
An automatic method for quantification of images of microvessels by computing area proportions and numbers of objects is presented. The objects are segmented from the background using dynamic thresholding of the average component-size histogram. To enable object counting, fragmented objects are connected, all objects are filled, and touching objects are separated using a watershed segmentation algorithm. The method is fully automatic and robust with respect to illumination and focus settings. A test set consisting of images grabbed with different focus and illumination for each field of view was used to test the method, and the proposed method showed less variation than the intra-operator variation using manual thresholding. Further, the method showed good correlation with manual object counting (r = 0.80) on another test set.

8.
In this paper, we demonstrate a comprehensive method for segmenting the retinal vasculature in camera images of the fundus. This is of interest for the diagnosis of eye diseases that affect the blood vessels in the eye. In a departure from other state-of-the-art methods, vessels are first pre-grouped together with graph partitioning, using a spectral clustering technique based on morphological features. Local curvature is estimated over the whole image using the eigenvalues of the Hessian matrix in order to enhance the vessels, which appear as ridges in images of the retina. The result is combined with a binarized image, obtained using a threshold that maximizes entropy, to extract the retinal vessels from the background. Speckle-type noise is reduced by applying a connectivity constraint on the extracted curvature-based enhanced image. This constraint is varied over the image according to each region's predominant blood vessel size. The resultant image exhibits the central light reflex of retinal arteries and veins, which prevents the segmentation of whole vessels. To address this, the earlier entropy-based binarization technique is repeated on the original image but, crucially, with a different threshold to incorporate the central-reflex vessels. The final segmentation is achieved by combining the segmented vessels with and without central light reflex. We carry out our approach on DRIVE and REVIEW, two publicly available collections of retinal images for research purposes. The obtained results are compared with state-of-the-art methods in the literature using metrics such as sensitivity (true positive rate), selectivity (false positive rate) and accuracy for the DRIVE images, and measured vessel widths for the REVIEW images. Our approach outperforms the methods in the literature.
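The entropy-maximizing binarization step can be sketched with Kapur's classic formulation, which picks the gray level maximizing the summed entropies of the two histogram halves. This is an assumption of this sketch: the paper may use a different entropy criterion, and the bimodal test data below is synthetic.

```python
import numpy as np

def max_entropy_threshold(img, nbins=256):
    """Kapur-style threshold: maximize the sum of the background and
    foreground histogram entropies over all candidate split points."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist.astype(float) / hist.sum()
    best_t, best_h = edges[1], -np.inf
    for t in range(1, nbins):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        q0 = p[:t][p[:t] > 0] / w0          # normalized class distributions
        q1 = p[t:][p[t:] > 0] / w1
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_h, best_t = h, edges[t]
    return best_t

rng = np.random.default_rng(0)
pixels = np.concatenate([rng.uniform(0.10, 0.30, 3000),   # background
                         rng.uniform(0.70, 0.90, 3000)])  # vessel-like
t = max_entropy_threshold(pixels)
print(0.3 <= t <= 0.7)  # → True
```

Unlike Otsu's variance criterion, the entropy criterion does not assume roughly Gaussian classes, which suits the heavy-tailed intensity distributions of vessel images.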

9.
Current research in cell biology frequently uses light microscopy to study intracellular organelles. To segment and count organelles, most investigators have used a global thresholding method, which relies on homogeneous background intensity values within a cell. Because this is not always the case, we developed WatershedCounting3D, a program that uses a modified watershed algorithm to more accurately identify intracellular structures from confocal image data, even in the presence of an inhomogeneous background. We give examples of segmenting and counting endoplasmic reticulum exit sites and the Golgi apparatus.

10.
Several segmentation methods for lesion uptake in 18F-FDG PET imaging have been proposed in the literature. Their principles are presented along with their clinical results. The main approach proposed in the literature is the thresholding method; the most commonly used is a constant threshold around 40% of the maximum uptake within the lesion. This simple approach is not valid for small lesions (< 4 or 5 mL), poorly contrasted positive tissue (SUV < 2) or moving lesions. To limit these problems, more complex thresholding algorithms have been proposed to define the optimal threshold value to be applied to segment the lesion. The principle is to adapt the threshold following a fitting model according to one or two characteristic image parameters. Algorithms based on iterative approaches to find the optimal threshold value are preferred, as they take patient data into account. The main drawback is the need for a calibration step that depends on the PET device, the acquisition conditions and the algorithm used for image reconstruction. To avoid this problem, more sophisticated segmentation methods have been proposed in the literature: derivative methods, watershed and pattern-recognition algorithms. The delineation of positive tissue on FDG-PET images is a complex problem, still under investigation.
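The constant-threshold rule described above (a cutoff at roughly 40% of the lesion's maximum uptake) is essentially a one-liner. The 0.4 fraction is the commonly cited default, not a universally validated value, and the voxel array below is synthetic:

```python
import numpy as np

def fraction_of_max_segmentation(uptake, fraction=0.40):
    """Keep voxels at or above fraction * SUVmax within the lesion VOI."""
    return uptake >= fraction * uptake.max()

# Uptake values along a line through a lesion (SUVmax = 8.0, cutoff = 3.2):
voi = np.array([0.5, 1.0, 3.2, 8.0, 7.5, 2.9, 0.8])
print(fraction_of_max_segmentation(voi).tolist())
# → [False, False, True, True, True, False, False]
```

The adaptive variants the text mentions replace the fixed 0.40 with a value fitted to image characteristics (e.g. lesion-to-background contrast), which is what introduces the device-specific calibration step.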

11.
Cell proliferation and apoptosis indices are important indicators for the prognosis and treatment of a variety of cancers. A method is described using differential absorption color image analysis to measure proliferation and apoptosis in tumor sections using BrdU (5-bromo-2′-deoxyuridine) incorporation and immunohistochemistry, and terminal deoxynucleotidyl transferase nick end-labeling (TUNEL). Nuclei were labeled with streptavidin-peroxidase-diaminobenzidine (DAB) secondary detection. The differential absorption method uses a computer-controlled microscope equipped with a tunable filter and digital camera to take advantage of the spectral differences of stained objects of interest. Images collected at defined wavelengths are divided and scaled to form ratio images in which the hematoxylin- or DAB-stained nuclei have intensity ranges far above those of surrounding structures. Using brightness thresholding followed by selection based on nuclear size and shape parameters, binary images were formed of the BrdU/apoptotic-positive tumor nuclei and of all tumor nuclei for subsequent counting and calculation of proliferation and apoptotic indices.

12.
Leaf area and its derivatives (e.g. specific leaf area) are widely used in ecological assessments, especially in the fields of plant–animal interactions, plant community assembly, ecosystem functioning and global change. Estimating leaf area is highly time-consuming, even when using specialized software to process scanned leaf images, because manual inputs are invariably required for scale detection and leaf surface digitisation. We introduce Black Spot Leaf Area Calculator (hereafter, Black Spot), a technique and stand-alone software package for rapid and automated leaf area assessment from images of leaves taken with standard flatbed scanners. Black Spot operates on comprehensive rule-sets for colour band ratios to carry out pixel-based classification which isolates leaf surfaces from the image background. Importantly, the software extracts information from associated image meta-data to detect image scale, thereby eliminating the need for time-consuming manual scale calibration. Black Spot's output provides the user with estimates of leaf area as well as classified images for error checking. We tested this method and software combination on a set of 100 leaves of 51 different plant species collected from the field. Leaf area estimates generated using Black Spot and by manual processing of the images in an image editing program were statistically identical. The mean error rate in leaf area estimates from Black Spot relative to manual processing was −0.4% (SD = 0.76). The key advantage of Black Spot is the ability to rapidly batch-process multi-species datasets with minimal user effort and at low cost, making it a valuable tool for field ecologists.
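A minimal stand-in for Black Spot's two ingredients, colour-band-ratio pixel classification and metadata-driven scale, might look like this. The 0.36 green-share cutoff and the DPI handling are illustrative assumptions, not the software's actual rule-sets:

```python
import numpy as np

def leaf_area_cm2(rgb, dpi, green_share_min=0.36):
    """Classify 'leaf' pixels by the green band's share of total brightness,
    then convert the pixel count to cm^2 using the scanner resolution.
    rgb: float array (H, W, 3) in [0, 1]; dpi: dots per inch from metadata."""
    total = rgb.sum(axis=-1)
    green_share = np.divide(rgb[..., 1], total,
                            out=np.zeros_like(total), where=total > 0)
    mask = green_share > green_share_min
    pixel_cm = 2.54 / dpi                    # one pixel's edge length in cm
    return mask.sum() * pixel_cm ** 2

# A 100x100 px scan at 254 dpi: each pixel is 0.01 cm wide.
img = np.full((100, 100, 3), 0.9)            # white scanner background
img[20:70, 30:80] = [0.2, 0.6, 0.2]          # 50x50 px green 'leaf'
print(leaf_area_cm2(img, dpi=254))  # → 0.25
```

Reading the resolution from image metadata is what removes the manual ruler-calibration step the abstract highlights; the classification itself is just a per-pixel band-ratio test.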

13.
The applicability of Feulgen-based parameters to detect variant metaphase chromosomes involved in deletions or translocations was investigated, and algorithms were developed to compute such parameters. This report focuses primarily on the magnitude of the errors involved during the prerequisite procedures of photography, measurement and computation. Measurements were performed by stage-scanning of photographic negatives of Feulgen-stained metaphases. In the scanned images the initial chromosome boundaries were obtained by thresholding, while definite chromosomal areas and local background values were obtained by expansion of the initial boundaries. The integrated density profiles and the relative DNA content were computed for the individual chromosomes (straight as well as bent). Total DNA content and DNA arm ratio, as well as length and centromere index, can be obtained from the profile. It was shown that under such conditions the experimental errors associated with the measurements are small compared with biological variations (e.g., differences between homologues) and that the procedures applied allow the detection of polymorphisms. In addition, means and standard deviations of both DNA and length parameters are given for metaphases of five subjects. The applicability of DNA and length parameters was compared by means of a classification experiment.

14.
One way of diagnosing breast cancer is to take radiographic (X-ray) images (termed mammograms) of suspect patients, which physicians then use to identify potentially abnormal areas through visual inspection. When digital mammograms are available, computer-aided diagnosis may help the physician reach a more accurate decision. This implies automatic detection of abnormal areas using segmentation, followed by tumor classification. This work describes an approach to the classification of digital mammograms. Patches around tumors are manually extracted to segment the abnormal areas from the remainder of the image, considered as background. The mammogram images are filtered using Gabor wavelets, and directional features are extracted at different orientations and frequencies. Principal Component Analysis is employed to reduce the dimension of the filtered and unfiltered high-dimensional data. Proximal Support Vector Machines are used for the final classification of the data. Superior mammogram image classification performance is attained when Gabor features are extracted instead of using the original mammogram images. The robustness of Gabor features for digital mammogram images distorted by Poisson noise at different intensity levels is also addressed.

15.
In this study we aimed at the development of a cytometric system for quantification of specific DNA sequences using fluorescence in situ hybridization (ISH) and digital imaging microscopy. The cytochemical and cytometric aspects of a quantitative ISH procedure were investigated, using human peripheral blood lymphocyte interphase nuclei and probes detecting high-copy-number target sequences as a model system. These chromosome-specific probes were labeled with biotin, digoxigenin, or fluorescein. The instrumentation requirements are evaluated. Quantification of the fluorescence ISH signals was performed using an epi-fluorescence microscope with a multi-wavelength illuminator, equipped with a cooled charge-coupled device (CCD) camera. The performance of the system was evaluated using fluorescing beads and a homogeneously fluorescing specimen. Specific image analysis programs were developed for the automated segmentation and analysis of the images provided by ISH. Non-uniform background fluorescence of the nuclei introduces problems in the image analysis segmentation procedures. Different procedures were tested. Up to 95% of the hybridization signals could be correctly segmented using digital filtering techniques (min-max filter) to estimate local background intensities. The choice of the objective lens used for the collection of images was found to be extremely important. High-magnification objectives with high numerical aperture, which are frequently used for visualization of fluorescence, are not optimal, since they do not have a sufficient depth of field. The system described was used for quantification of ISH signals and allowed accurate measurement of fluorescence spot intensities, as well as of fluorescence ratios obtained with double-labeled probes.
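The min-max filtering used above to estimate local background can be sketched as a grey-scale minimum filter followed by a maximum filter (a morphological opening): the minimum pass erases small bright spots, the maximum pass restores the surrounding background level, and subtracting the result isolates the spots. The window radius here is an assumed parameter; this loop-based version favors readability over speed.

```python
import numpy as np

def min_max_background(img, radius=2):
    """Estimate local background by a minimum filter then a maximum filter
    (grey-scale opening): bright spots smaller than the window vanish."""
    def windowed(a, op):
        out = np.empty_like(a)
        h, w = a.shape
        for i in range(h):
            for j in range(w):
                out[i, j] = op(a[max(0, i - radius):i + radius + 1,
                                 max(0, j - radius):j + radius + 1])
        return out
    return windowed(windowed(img, np.min), np.max)

# A bright 1-pixel ISH spot on a sloped nuclear background: the spot is
# absent from the background estimate, so subtraction isolates it.
yy, xx = np.mgrid[0:15, 0:15]
img = 0.1 + 0.01 * xx            # non-uniform background
img[7, 7] += 0.8                 # hybridization spot
spot = img - min_max_background(img)
print(round(float(spot[7, 7]), 2))  # → 0.8
```

Because the opening follows the slow background gradient but cannot follow features narrower than the window, the subtraction leaves spots on a near-zero baseline regardless of where in the nucleus they sit.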

16.
The morphology of plant root anatomical features is a key factor in effective water and nutrient uptake. Existing techniques for phenotyping root anatomical traits are often based on manual or semi-automatic segmentation and annotation of microscopic images of root cross sections. In this article, we propose a fully automated tool, hereinafter referred to as RootAnalyzer, for efficiently extracting and analyzing anatomical traits from root cross-section images. Using a range of image processing techniques such as local thresholding and nearest neighbor identification, RootAnalyzer segments the plant root from the image's background, classifies and characterizes the cortex, stele, endodermis and epidermis, and subsequently produces statistics about the morphological properties of the root cells and tissues. We use RootAnalyzer to analyze 15 images of wheat plants and one maize plant image and evaluate its performance against manually-obtained ground truth data. The comparison shows that RootAnalyzer can fully characterize most root tissue regions with over 90% accuracy.

17.
This paper evaluates the degree of saliency of texts in natural scenes using visual saliency models. A large-scale scene image database with pixel-level ground truth is created for this purpose. Using this scene image database and five state-of-the-art models, visual saliency maps that represent the degree of saliency of the objects are calculated. The receiver operating characteristic curve is employed in order to evaluate the saliency of scene texts, which is calculated by visual saliency models. A visualization of the distribution of scene texts and non-texts in the space constructed by three kinds of saliency maps, which are calculated using Itti's visual saliency model with intensity, color and orientation features, is given. This visualization of distribution indicates that text characters are more salient than their non-text neighbors, and can be captured from the background. Therefore, scene texts can be extracted from the scene images. With this in mind, a new visual saliency architecture, named hierarchical visual saliency model, is proposed. The hierarchical visual saliency model is based on Itti's model and consists of two stages. In the first stage, Itti's model is used to calculate the saliency map, and Otsu's global thresholding algorithm is applied to extract the salient region that we are interested in. In the second stage, Itti's model is applied to the salient region to calculate the final saliency map. An experimental evaluation demonstrates that the proposed model outperforms Itti's model in terms of captured scene texts.
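Otsu's global thresholding, used in the first stage above to extract the salient region, picks the gray level that maximizes between-class variance. A compact histogram-based sketch (the bimodal test data is synthetic):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the threshold maximizing between-class variance (Otsu, 1979)."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = edges[1], -1.0
    for t in range(1, nbins):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:t] * centers[:t]).sum() / w0   # class means
        m1 = (p[t:] * centers[t:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, edges[t]
    return best_t

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(0.2, 0.05, 4000),   # non-salient
                         rng.normal(0.8, 0.05, 2000)])  # salient region
t = otsu_threshold(pixels)
print(0.3 < t < 0.7)  # → True
```

Maximizing between-class variance is equivalent to minimizing within-class variance, so for a clearly bimodal saliency map the threshold lands in the valley between the two modes.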

18.
Background: Analyzing MR scans of low-grade glioma with highly accurate segmentation will have enormous potential in neurosurgery for diagnosis and therapy planning. Low-grade gliomas are mainly distinguished by their infiltrating character and irregular contours, which make the analysis, and therefore the segmentation task, more difficult. Moreover, MRI images present constraints such as intensity variation and the presence of noise.
Methods: To tackle these issues, a novel segmentation method built from the local properties of the image is presented in this paper. Phase-based edge detection is estimated locally by the monogenic signal using quadrature filters. This way of detecting edges is, from a theoretical point of view, intensity invariant and responds well to MR images. To strengthen the tumor detection process, a region-based term is designed locally in order to achieve a local maximum-likelihood segmentation of the region of interest. A Gaussian probability distribution is used to model local image intensities.
Results: The proposed model is evaluated using a set of real subjects and synthetic images derived from the Brain Tumor Segmentation challenge (BraTS 2015). In addition, the obtained results are compared to manual segmentations performed by two experts. Quantitative evaluations compare the proposed approach with four related existing methods.
Conclusion: The comparison shows that the proposed method is more accurate than the four existing methods.

19.
Calcium sparks and embers are localized intracellular calcium-release events in muscle cells, studied frequently by confocal microscopy using line-scan imaging. The large quantity of images and large number of events require automatic detection procedures based on signal-processing methods. In past decades these methods were based on thresholding procedures; although wavelet transforms have recently been introduced, they have not become widespread. We have implemented a set of algorithms based on one- and two-dimensional versions of the à trous wavelet transform. The algorithms were used to perform spike filtering, denoising and detection procedures. Because the algorithms depend on user-adjustable parameters, the effect of these parameters on the efficiency of the algorithms was studied in detail. We give methods to avoid false-positive detections caused by background noise in confocal images. To establish the efficiency and reliability of the algorithms, various tests were performed on artificial and experimental images. Spark parameters (amplitude, full width at half-maximum) calculated using the traditional and the wavelet methods were compared. We found that the latter method is capable of identifying more events with better accuracy on experimental images. Furthermore, we extended the wavelet-based transform from calcium sparks to long-lasting small-amplitude events such as calcium embers. The method not only solved their automatic detection but also enabled the identification of events with amplitudes so small that they would otherwise escape the eye, rendering the determination of their characteristic parameters more accurate.
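The à trous ("with holes") transform underlying this detection scheme is an undecimated wavelet decomposition: at each level the signal is smoothed with a kernel whose taps are spaced 2^j samples apart, and the detail plane is the difference between successive smoothings, so the planes sum back to the input exactly. A 1-D sketch with the usual B3-spline kernel; reflecting at the boundaries is an implementation choice of this sketch.

```python
import numpy as np

def a_trous_1d(signal, levels):
    """Return [w1, ..., wJ, cJ]: detail planes plus the final smooth.
    By construction sum(planes) reconstructs the input exactly."""
    taps = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # B3-spline kernel
    c = np.asarray(signal, dtype=float)
    n = len(c)
    planes = []
    for j in range(levels):
        step = 2 ** j                       # 'holes' between kernel taps
        smooth = np.zeros(n)
        for k, coeff in enumerate(taps):
            idx = np.arange(n) + (k - 2) * step
            idx = np.abs(idx)                                 # reflect left
            idx = np.where(idx >= n, 2 * (n - 1) - idx, idx)  # reflect right
            smooth += coeff * c[idx]
        planes.append(c - smooth)           # detail at scale 2^j
        c = smooth
    planes.append(c)
    return planes

# A narrow spark-like spike concentrates in the fine-scale detail planes.
trace = np.zeros(64); trace[30:33] = [0.5, 1.0, 0.5]
planes = a_trous_1d(trace, levels=4)
print(np.allclose(sum(planes), trace))  # → True
```

Detection then amounts to thresholding the detail planes (e.g. at a multiple of each plane's noise level), which is how narrow sparks and broad, dim embers can be picked up at their respective scales.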

20.
An image segmentation process was derived from an image model that assumed that cell images represent objects having characteristic relationships, limited shape properties and definite local color features. These assumptions allowed the design of a region-growing process in which the color features were used to iteratively aggregate image points in alternation with a test of the convexity of the aggregate obtained. The combination of both local and global criteria allowed the self-adaptation of the algorithm to segmentation difficulties and led to a self-assessment of the adequacy of the final segmentation result. The quality of the segmentation was evaluated by visual control of the match between cell images and the corresponding segmentation masks proposed by the algorithm. A comparison between this region-growing process and the conventional gray-level thresholding is illustrated. A field test involving 700 bone marrow cells, randomly selected from May-Grünwald-Giemsa-stained smears, allowed the evaluation of the efficiency, effectiveness and confidence of the algorithm: 96% of the cells were evaluated as correctly segmented by the algorithm's self-assessment of adequacy, with a 98% confidence. The principles of the other major segmentation algorithms are also reviewed.


Copyright © 北京勤云科技发展有限公司 · 京ICP备09084417号