Similar articles
20 similar articles found (search time: 31 ms)
1.
Quantitative analysis of digitized IHC-stained tissue sections is increasingly used in research studies and clinical practice. Accurate quantification of IHC staining, however, is often complicated by the color convolution of the IHC chromogen with conventional tissue counterstains. To overcome this issue, we implemented a new counterstain, Acid Blue 129, which provides homogeneous tissue background staining. Furthermore, we combined this counterstaining technique with a simple, robust, fully automated image segmentation algorithm that takes advantage of the high degree of color separation between the 3-amino-9-ethyl-carbazole (AEC) chromogen and the Acid Blue 129 counterstain. Rigorous validation of the automated technique against manual segmentation data, using Ki-67 IHC sections from rat C6 glioma and β-amyloid IHC sections from transgenic mice with amyloid precursor protein (APP) mutations, showed the automated method to produce highly accurate results compared with ground-truth estimates based on the manually segmented images. The synergistic combination of the novel tissue counterstaining and image segmentation techniques described in this study will allow accurate, reproducible, and efficient quantitative IHC studies for a wide range of antibodies and tissues. (J Histochem Cytochem 56:873–880, 2008)
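The color separation the method exploits can be illustrated with a toy thresholding sketch. The red/blue ratio test and the 1.4 cutoff below are illustrative assumptions, not the classifier used in the study:

```python
import numpy as np

def segment_aec(rgb, red_blue_ratio=1.4):
    """Flag pixels whose red channel dominates blue as AEC-positive.

    Illustrative sketch only: the ratio test and the 1.4 cutoff are
    assumptions, not the study's actual segmentation rule.
    """
    rgb = rgb.astype(float) + 1e-6            # avoid division by zero
    return rgb[..., 0] / rgb[..., 2] > red_blue_ratio

# One reddish (chromogen-like) and one bluish (counterstain-like) pixel
img = np.array([[[200, 60, 40], [50, 60, 180]]], dtype=np.uint8)
mask = segment_aec(img)                       # → [[True, False]]
```

With a well-separated chromogen/counterstain pair, even this crude per-pixel rule separates the two classes, which is why the homogeneous blue background simplifies automation.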

2.
HER2 assessment is routinely used to select patients with invasive breast cancer who might benefit from HER2-targeted therapy. The aim of this study was to validate a fully automated in situ hybridization (ISH) procedure that combines the automated Leica HER2 fluorescent ISH system for Bond with supervised automated analysis on the Visia imaging D-Sight digital imaging platform. HER2 assessment was performed on 328 formalin-fixed/paraffin-embedded invasive breast cancer tumors on tissue microarrays (TMA) and on 100 full-sized slides of resections/biopsies (50 selected IHC 2+ cases and 50 with random IHC scores) previously obtained for diagnostic purposes. For digital analysis, slides were pre-screened at 20x and 100x magnification for all fluorescent signals, and supervised automated scoring was performed independently by two observers on at least two pictures (in total at least 20 nuclei were counted) with the D-Sight HER2 FISH analysis module. Results were compared to data obtained previously with the manual Abbott FISH test. The overall agreement with Abbott FISH data among TMA samples and the 50 selected IHC 2+ cases was 98.8% (κ = 0.94) and 93.8% (κ = 0.88), respectively. The results of the 50 additionally tested unselected IHC cases were concordant with previously obtained IHC and/or FISH data. The combination of the Leica FISH system with the D-Sight digital imaging platform is a feasible method for HER2 assessment in routine clinical practice for patients with invasive breast cancer.
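The κ values quoted above are Cohen's kappa, i.e. agreement between two scoring methods corrected for chance. A minimal generic sketch of the computation (the example labels are made up, not the study's FISH scores):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance).

    Generic agreement statistic; `rater_a` and `rater_b` are equal-length
    lists of categorical calls on the same items.
    """
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    chance = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                 for c in categories)
    return (observed - chance) / (1.0 - chance)

# Two methods agree on 3 of 4 binary calls → kappa = 0.5
kappa = cohens_kappa(["amp", "amp", "neg", "neg"],
                     ["amp", "neg", "neg", "neg"])
```

Raw percentage agreement (here 75%) overstates concordance when one category dominates, which is why κ accompanies the percentages above.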

3.
4.
The increased use of immunohistochemistry (IHC) in both clinical and basic research settings has led to the development of techniques for acquiring quantitative information from immunostains. Staining correlates with absolute protein levels and has been investigated as a clinical tool for patient diagnosis and prognosis. For these reasons, automated imaging methods have been developed in an attempt to standardize IHC analysis. We propose a novel imaging technique in which brightfield images of diaminobenzidine (DAB)-labeled antigens are converted to normalized blue images, allowing automated identification of positively stained tissue. A statistical analysis compared our method with seven previously published imaging techniques by measuring each one's agreement with manual analysis by two observers. Eighteen DAB-stained images showing a range of protein levels were used. Accuracy was assessed by calculating the percentage of pixels misclassified using each technique compared with a manual standard. Bland-Altman analysis was then used to show the extent to which misclassification affected staining quantification. Many of the techniques were inconsistent in classifying DAB staining due to background interference, but our method was statistically the most accurate and consistent across all staining levels.
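A per-pixel sketch of the normalized-blue idea, assuming simple channel normalisation (the paper's exact transform may differ): DAB's brown hue is blue-poor, so its normalized blue value falls below that of unstained tissue, and thresholding that value flags positive pixels.

```python
def normalized_blue(r, g, b):
    """Blue channel divided by total intensity.

    A sketch of the normalized-blue transform; the published transform
    may differ in scaling and calibration.
    """
    total = r + g + b
    return b / total if total else 0.0

# A brown DAB-like pixel scores lower than neutral grey tissue
dab_pixel = normalized_blue(150, 90, 40)       # ≈ 0.14
tissue_pixel = normalized_blue(180, 180, 180)  # ≈ 0.33
```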

5.
Manual offline analysis of a scanning electron microscopy (SEM) image is a time-consuming process that requires continuous human intervention and effort. This paper presents an image-processing-based method for automated offline analysis of SEM images. To this end, our strategy relies on a two-stage process: texture analysis and quantification. The method involves a preprocessing step aimed at noise removal, in order to avoid false edges. For texture analysis, the proposed method employs a state-of-the-art Curvelet transform followed by segmentation through a combination of entropy filtering, thresholding and mathematical morphology (MM). The quantification is carried out by applying a box-counting algorithm for fractal dimension (FD) calculation, with the ultimate goal of measuring parameters such as surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited good accuracy in the calculation of surface area and perimeter. The proposed method outperforms the well-known Watershed segmentation algorithm.
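The box-counting step can be sketched as follows: cover the binary shape with boxes of decreasing size, count the occupied boxes, and take the slope of log(count) against log(1/box size) as the fractal dimension. This is a generic sketch, not the paper's implementation:

```python
import math

def box_count(points, box):
    """Number of box×box cells containing at least one foreground pixel."""
    return len({(x // box, y // box) for x, y in points})

def fractal_dimension(points, size):
    """Least-squares slope of log(count) vs. log(1/box) over dyadic box sizes."""
    xs, ys = [], []
    box = size
    while box >= 1:
        xs.append(math.log(1.0 / box))
        ys.append(math.log(box_count(points, box)))
        box //= 2
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A filled 64×64 square is 2-dimensional
square = [(x, y) for x in range(64) for y in range(64)]
fd = fractal_dimension(square, 64)            # → 2.0
```

Counting only boundary boxes of a filled shape, as the paper does for perimeter estimation, uses the same occupied-cell bookkeeping at a fixed box size.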

6.

Background

Tissue MicroArrays (TMAs) represent a potential high-throughput platform for the analysis and discovery of tissue biomarkers. As TMA slides are produced manually and are subject to processing and sectioning artefacts, the layout of TMA cores on the final slide and subsequent digital scan (TMA digital slide) is often disturbed, making it difficult to associate cores with their original position in the planned TMA map. Additionally, the individual cores can be greatly altered and contain numerous irregularities such as missing cores, grid rotation and stretching. These factors demand the development of a robust method for de-arraying TMAs which identifies each TMA core and assigns it to its appropriate coordinates on the constructed TMA slide.

Methodology

This study presents a robust TMA de-arraying method consisting of three functional phases: TMA core segmentation, gridding and mapping. The segmentation of TMA cores uses a set of morphological operations to identify each TMA core. Gridding then utilises a Delaunay Triangulation based method to find the row and column indices of each TMA core. Finally, mapping correlates each TMA core from a high resolution TMA whole slide image with its name within a TMAMap.
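A much-simplified stand-in for the gridding phase (the paper uses Delaunay triangulation; here rows and columns are recovered by 1-D grouping of core centroids, which only works for mild distortions):

```python
def grid_indices(centroids, tol):
    """Assign (row, column) indices to core centroids by 1-D grouping:
    sorted y (x) coordinates within `tol` of their predecessor share a
    row (column). A simplified stand-in for Delaunay-based gridding;
    it assumes only mild rotation and stretching.
    """
    def group(vals):
        order = sorted(set(vals))
        labels, idx = {order[0]: 0}, 0
        for prev, cur in zip(order, order[1:]):
            if cur - prev > tol:
                idx += 1
            labels[cur] = idx
        return labels

    rows = group([y for _, y in centroids])
    cols = group([x for x, _ in centroids])
    return [(rows[y], cols[x]) for x, y in centroids]

# Four slightly jittered cores of a 2×2 grid
cores = [(10, 12), (52, 9), (11, 48), (50, 51)]
positions = grid_indices(cores, tol=20)       # → [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The Delaunay-based approach is far more robust to the missing cores and grid distortions quantified in the Conclusion; this sketch only illustrates what "gridding" must produce.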

Conclusion

This study describes a robust TMA de-arraying algorithm for the rapid identification of TMA cores from digital slides. The result of this de-arraying algorithm allows the easy partition of each TMA core for further processing. Based on a test group of 19 TMA slides (3,129 cores), 99.84% of cores were segmented successfully, 99.81% of cores were gridded correctly and 99.96% of cores were mapped with their correct names via TMAMaps. The gridding of TMA cores was also extensively tested using a set of 113 pseudo slides (13,536 cores) with a variety of irregular grid layouts, including missing cores, rotation and stretching; 100% of these cores were gridded correctly.

7.
OBJECTIVE: To develop a method for the automated segmentation of images of routinely hematoxylin-eosin (H-E)-stained microscopic sections to guarantee correct results in computer-assisted microscopy. STUDY DESIGN: Clinical material was composed of 50 H-E-stained biopsies of astrocytomas and 50 H-E-stained biopsies of urinary bladder cancer. The basic idea was to use a support vector machine clustering (SVMC) algorithm to provide gross segmentation of regions holding nuclei and subsequently to refine nuclear boundary detection with active contours. The initialization coordinates of the active contour model were defined using an SVMC pixel-based classification algorithm that discriminated nuclear regions from the surrounding tissue. Starting from the boundaries of these regions, the snake propagated until it converged to the nuclear boundaries. RESULTS: The method was validated on 2 different types of H-E-stained images. Results were evaluated by 2 histopathologists. On average, 94% of nuclei were correctly delineated. CONCLUSION: The proposed algorithm could be of value in computer-based systems for automated interpretation of microscopic images.

8.
Automatic segmentation of nuclei in reflectance confocal microscopy images is critical for visualization and rapid quantification of the nuclear-to-cytoplasmic ratio, a useful indicator of epithelial precancer. Reflectance confocal microscopy can provide three-dimensional imaging of epithelial tissue in vivo with sub-cellular resolution. Changes in nuclear density or nuclear-to-cytoplasmic ratio as a function of depth obtained from confocal images can be used to determine the presence or stage of epithelial cancers. However, low nuclear-to-background contrast, low resolution at greater imaging depths, and significant variation in the reflectance signal of nuclei complicate the segmentation required for quantification of the nuclear-to-cytoplasmic ratio. Here, we present an automated method to segment nuclei in reflectance confocal images using a pulse coupled neural network algorithm, specifically a spiking cortical model, and an artificial neural network classifier. The segmentation algorithm was applied to an image model of nuclei with varying nuclear-to-background contrast. Greater than 90% of simulated nuclei were detected at a contrast of 2.0 or greater. Confocal images of porcine and human oral mucosa were used to evaluate application to epithelial tissue. Segmentation accuracy was assessed using manual segmentation of nuclei as the gold standard.

9.
Cardiovascular diseases are closely associated with deteriorating atherosclerotic plaques. Optical coherence tomography (OCT) is a recently developed intravascular imaging technique with high resolution (approximately 10 microns) that can provide accurate quantification of coronary plaque morphology. However, tissue segmentation of OCT images in the clinic is still mainly performed manually by physicians, which is time consuming and subjective. To overcome these limitations, two automatic segmentation methods for intracoronary OCT images, based on a support vector machine (SVM) and a convolutional neural network (CNN), were developed to identify the plaque region and characterize plaque components. In vivo IVUS and OCT coronary plaque data from 5 patients were acquired at Emory University with patients' consent obtained. Seventy-seven matched IVUS and OCT slices with good image quality and lipid cores were selected for this study. Manual OCT segmentation was performed by experts using virtual histology IVUS as guidance and served as the gold standard for the automatic segmentations. The overall classification accuracy of the CNN method reached 95.8%, while the accuracy of the SVM was 71.9%. The CNN-based segmentation method can better characterize plaque composition on OCT images and greatly reduce the time spent by doctors in segmenting and identifying plaques.

10.
Vegetation is an integral component of wetland ecosystems. Mapping the distribution, quality and quantity of wetland vegetation is important for wetland protection, management and restoration. This study evaluated the performance of object-based and pixel-based Random Forest (RF) algorithms for mapping wetland vegetation using a new Chinese high-spatial-resolution Gaofen-1 (GF-1) satellite image, L-band PALSAR and C-band Radarsat-2 data. This research utilized the wavelet-principal component analysis (PCA) image fusion technique to integrate multispectral GF-1 and synthetic aperture radar (SAR) images. Comparison of six classification scenarios indicates that the use of additional multi-source datasets achieved higher classification accuracy. The specific conclusions of this study are the following: (1) the classification of GF-1, Radarsat-2 and PALSAR images showed a statistically significant difference between pixel-based and object-based methods; (2) object-based and pixel-based RF classifications both achieved greater than 80% overall accuracy for both GF-1 and GF-1 fused with SAR images; (3) object-based classifications improved overall accuracy by 3-10% in all scenarios when compared to pixel-based classifications; (4) object-based classification based on the integration of GF-1, Radarsat-2 and PALSAR images outperformed any single dataset, achieving 89.64% overall accuracy.

11.
A novel pre-treatment process for image segmentation, based on anisotropic diffusion and robust statistics, is presented in this paper. Image smoothing with edge preservation is shown to help upper limb segmentation (shoulder segmentation in particular) in MRI datasets. The anisotropic diffusion process is mainly controlled by an automated stopping function that depends on the values of voxel gradients. Voxel gradients are divided into two classes: one for high values, corresponding to edge or noisy voxels, and one for low values. The anisotropic diffusion process is also controlled by a threshold on voxel gradients that separates the two classes. A global estimate of this threshold parameter is classically used; in this paper, we propose a new method based on local robust estimation, which allows better removal of noise while preserving edges in the images. An entropy criterion is used to quantify the ability of the algorithm to remove noise at different signal-to-noise ratios in synthetic images. Another quantitative evaluation criterion, based on the Pratt Figure of Merit (FOM), is proposed to evaluate edge preservation and edge-location accuracy with respect to a manual segmentation. The results on synthetic and MRI data of the shoulder show the benefits of the local model in terms of area homogeneity and edge location.
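The gradient-dependent stopping function can be sketched in one dimension with the classic Perona-Malik conduction coefficient g(d) = 1/(1 + (d/k)²). Here k is a single global threshold, whereas the paper estimates it locally and robustly:

```python
def diffuse_1d(signal, k, iters=10, dt=0.2):
    """Explicit 1-D anisotropic (Perona-Malik) diffusion sketch.

    The conduction g(d) = 1/(1 + (d/k)**2) tends to zero for gradients far
    above the threshold k, so edges are preserved while flatter regions
    are smoothed. A global k is used here; the paper estimates it locally.
    """
    g = lambda d: 1.0 / (1.0 + (d / k) ** 2)
    s = list(signal)
    for _ in range(iters):
        out = s[:]
        for i in range(1, len(s) - 1):
            east = s[i + 1] - s[i]            # forward difference
            west = s[i - 1] - s[i]            # backward difference
            out[i] = s[i] + dt * (g(east) * east + g(west) * west)
        s = out
    return s

# A sharp step survives diffusion almost untouched
step = [0.0] * 10 + [10.0] * 10
smoothed = diffuse_1d(step, k=1.0)
```

With the step height ten times k, the conduction across the edge is near zero, so the two plateaus stay flat while any small-amplitude noise (gradients near k or below) would be smoothed away.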

12.
Efficient use of whole slide imaging in pathology needs automated region of interest (ROI) retrieval and classification, through the use of image analysis and data sorting tools. One possible method for data sorting uses Spectral Analysis for Dimensionality Reduction. We present some interesting results in the field of histopathology and cytohematology. In histopathology, we developed a Computer-Aided Diagnosis system applied to low-resolution images representing the totality of histological breast tumour sections. The images can be digitized directly at low resolution or be obtained from sub-sampled high-resolution virtual slides. Spectral Analysis is used (1) for image segmentation (stroma, tumour epithelium), by determining a "distance" between all the images of the database, (2) for choosing representative images and characteristic patterns of each histological type in order to index them, and (3) for visualizing images or features similar to a sample provided by the pathologist. In cytohematology, we studied a blood smear virtual slide acquired through high-resolution oil scanning, and Spectral Analysis is used to sort selected nucleated blood cell classes so that the pathologist may easily focus on specific classes whose morphology can then be studied more carefully or analyzed through complementary instruments, such as Multispectral Imaging or Raman MicroSpectroscopy.

13.
Manual quantification of immunohistochemically stained nuclear markers is still laborious and subjective, and the use of computerized systems for digital image analysis has not yet resolved the problems of nuclear clustering. In this study, we designed a new automatic procedure for quantifying various immunohistochemical nuclear markers with variable clustering complexity. This procedure consisted of two combined macros. The first, developed with commercial software, enabled the analysis of the digital images using color and morphological segmentation, including a masking process. All information extracted with this first macro was automatically exported to an Excel datasheet, where a second macro composed of four different algorithms analyzed all the information and calculated the definitive number of positive nuclei for each image. One hundred and eighteen images with different levels of clustering complexity were analyzed and compared with the manual quantification obtained by a trained observer. Statistical analysis indicated high reliability (intra-class correlation coefficient > 0.950) and no significant differences between the two methods. A Bland-Altman plot and Kaplan-Meier curves indicated that the results of both methods were concordant for around 90% of the analyzed images. In conclusion, this new automated procedure is an objective, faster and reproducible method with an excellent level of accuracy, even for digital images of high complexity.

14.
The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8.
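The Dice coefficient used above for segmentation accuracy compares an automatic mask A with a manual mask B as 2|A∩B|/(|A|+|B|). A minimal sketch with masks represented as pixel-coordinate sets:

```python
def dice(auto_mask, manual_mask):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for binary masks given as
    sets of (row, col) pixel coordinates."""
    if not auto_mask and not manual_mask:
        return 1.0                            # two empty masks agree fully
    return (2.0 * len(auto_mask & manual_mask)
            / (len(auto_mask) + len(manual_mask)))

# Masks overlapping on 2 of 3 pixels each
auto = {(0, 0), (0, 1), (1, 0)}
manual = {(0, 0), (0, 1), (1, 1)}
score = dice(auto, manual)                    # → 2*2 / (3+3) ≈ 0.667
```

A value above 0.8, as reported for most segmentations in the study, indicates substantial pixel-level overlap between automatic and manual delineations.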

15.
Automated gray matter segmentation of magnetic resonance imaging data is essential for morphometric analyses of the brain, particularly when large sample sizes are investigated. However, although detection of small structural brain differences may fundamentally depend on the method used, the accuracy and reliability of different automated segmentation algorithms have rarely been compared. Here, the performance of the segmentation algorithms provided by SPM8, VBM8, FSL and FreeSurfer was quantified on simulated and real magnetic resonance imaging data. First, accuracy was assessed by comparing segmentations of twenty simulated and 18 real T1 images with corresponding ground truth images. Second, reliability was determined in ten T1 images from the same subject and in ten T1 images of different subjects scanned twice. Third, the impact of preprocessing steps on segmentation accuracy was investigated. VBM8 showed very high accuracy and very high reliability. FSL achieved the highest accuracy but demonstrated poor reliability, and FreeSurfer showed the lowest accuracy but high reliability. A universally valid recommendation on how to implement morphometric analyses is not warranted due to the vast number of scanning and analysis parameters. However, our analysis suggests that researchers can optimize their individual processing procedures with respect to final segmentation quality, and it exemplifies adequate performance criteria.

16.
Fluorescent in-situ hybridization (FISH) and immunohistochemistry (IHC) constitute a pair of complementary techniques for detecting gene amplification and overexpression, respectively. The advantages of IHC include relatively cheap materials and high sample durability, while FISH is the more accurate and reproducible method. Evaluation of FISH and IHC images is still largely performed manually, with automated or semiautomated techniques increasing in popularity. Here, we provide a comprehensive review of a number of (semi-)automated FISH and IHC image processing systems, focusing on the algorithmic aspects of each technique. Our review confirms the increasingly important role of such methods in FISH and IHC; however, manual intervention is still necessary to resolve particularly challenging or ambiguous cases. In addition, large-scale validation is required before these systems can enter standard clinical practice.

17.
Introduction

Accurate activity quantification is applied in radiation dosimetry. Planar images are important for quantification of whole-body images, enabling assessment of biodistribution from radionuclide administrations. We evaluated the effect of tumour geometry on the quantification accuracy of 123I planar phantom studies, including various tumour sizes, tumour-liver distances and two tumour-background ratios.

Methods and materials

An in-house manufactured abdominal phantom was equipped with a liver, cylindrical tumours of different sizes, and a rod for tumour-liver distance variation. The geometric mean method with scatter and attenuation corrections was used for image processing. Scatter and attenuation corrections were made using the triple energy window scatter correction technique and a printed transmission sheet source, respectively. Region definitions for tumour activity distribution compensated for the partial volume effect (PVE). Activity measured in the dose calibrator served as the reference for determining quantification accuracy.

Results

The smallest tumour had the largest percentage deviation, with an average activity underestimation of 34.6 ± 1.2%. Activity values for the largest tumour were overestimated by 3.1 ± 3.0%. PVE compensation improved quantification accuracy for all tumour sizes, yielding accuracies of <12.4%. Scatter contribution to the tumours from the liver had minimal effect on quantification accuracy at tumour-liver distances >3 cm. With PVE compensation, an increased tumour-background ratio resulted in a percentage increase of up to 26.3%.

Conclusion

When applying the relevant corrections for scatter, attenuation and PVE without background activity, quantification accuracy of <13% was obtained. We demonstrated the successful implementation of a practical technique to obtain quantitative information from 123I planar images.
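The two corrections named above can be sketched numerically. The window widths, counts and calibration factor below are made-up illustrative numbers, not the study's acquisition parameters:

```python
import math

def tew_scatter(lower, upper, w_main, w_sub):
    """Triple-energy-window scatter estimate for the photopeak window:
    counts in two narrow sub-windows flanking the peak approximate the
    scatter spectrum under it as a trapezoid."""
    return (lower / w_sub + upper / w_sub) * w_main / 2.0

def geometric_mean_activity(anterior, posterior, transmission, calib):
    """Conjugate-view estimate sqrt(C_A * C_P), attenuation-corrected via
    a measured transmission fraction and converted to activity with a
    system calibration factor (counts per unit activity)."""
    return math.sqrt(anterior * posterior) / math.sqrt(transmission) / calib

# Illustrative numbers only (keV-window widths and counts are invented)
scatter = tew_scatter(lower=100.0, upper=60.0, w_main=20.0, w_sub=4.0)  # → 400.0
activity = geometric_mean_activity(10000.0, 10000.0, 0.25, 100.0)       # → 200.0
```

In practice the scatter estimate is subtracted from the photopeak counts of each view before the geometric mean is formed, and the PVE compensation described above is applied on top of both corrections.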

18.
Automatic brain tumour segmentation has become a key component of future brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval is a tedious and time-consuming task. Unsupervised approaches, on the other hand, avoid these limitations but often do not reach results comparable to those of supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms we evaluated K-means, fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches second position in the ranking. Our variant based on the GHMRF achieves first position in the Test ranking of unsupervised approaches and seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.
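Of the non-structured algorithms listed, K-means is the simplest. A minimal 1-D, two-class sketch on voxel intensities (the study clusters richer MR features and also evaluates fuzzy K-means, GMM and GHMRF):

```python
def kmeans_1d(values, iters=20):
    """Two-class 1-D K-means sketch: alternate between assigning each
    value to the nearer of two centres and recomputing each centre as
    its cluster mean. Assumes both clusters stay non-empty.
    """
    lo, hi = min(values), max(values)         # initialise centres at extremes
    for _ in range(iters):
        split = (lo + hi) / 2.0               # midpoint = nearest-centre boundary
        low_cluster = [v for v in values if v <= split]
        high_cluster = [v for v in values if v > split]
        lo = sum(low_cluster) / len(low_cluster)
        hi = sum(high_cluster) / len(high_cluster)
    return lo, hi

# Two well-separated intensity populations
intensities = [1, 2, 1, 9, 10, 8, 2, 9]
centres = kmeans_1d(intensities)              # → (1.5, 9.0)
```

GMM replaces the hard nearest-centre assignment with soft posterior probabilities, and GHMRF additionally couples neighbouring voxels, which is why the structured variant ranks higher in the evaluation above.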

19.
Biomarker research relies on tissue microarrays (TMA). TMAs are produced by repeated transfer of small tissue cores from a ‘donor’ block into a ‘recipient’ block and then used for a variety of biomarker applications. The construction of conventional TMAs is labor intensive, imprecise, and time-consuming. Here, a protocol using next-generation Tissue Microarrays (ngTMA) is outlined. ngTMA is based on TMA planning and design, digital pathology, and automated tissue microarraying. The protocol is illustrated using an example of 134 metastatic colorectal cancer patients. Histological, statistical and logistical aspects are considered, such as the tissue type, specific histological regions, and cell types for inclusion in the TMA, the number of tissue spots, sample size, statistical analysis, and number of TMA copies. Histological slides for each patient are scanned and uploaded onto a web-based digital platform. There, they are viewed and annotated (marked) using a 0.6-2.0 mm diameter tool, multiple times using various colors to distinguish tissue areas. Donor blocks and 12 ‘recipient’ blocks are loaded into the instrument. Digital slides are retrieved and matched to donor block images. Repeated arraying of annotated regions is automatically performed resulting in an ngTMA. In this example, six ngTMAs are planned containing six different tissue types/histological zones. Two copies of the ngTMAs are desired. Three to four slides for each patient are scanned; 3 scan runs are necessary and performed overnight. All slides are annotated; different colors are used to represent the different tissues/zones, namely tumor center, invasion front, tumor/stroma, lymph node metastases, liver metastases, and normal tissue. 17 annotations/case are made; time for annotation is 2-3 min/case. 12 ngTMAs are produced containing 4,556 spots. Arraying time is 15-20 hr. 
Due to its precision, flexibility and speed, ngTMA is a powerful tool to further improve the quality of TMAs used in clinical and translational research.

20.