Similar Literature
20 similar documents retrieved
1.
Neuron morphology is frequently used to classify cell types in the mammalian cortex. Apart from the shape of the soma and the axonal projections, morphological classification is largely defined by the dendrites of a neuron and their subcellular compartments, referred to as dendritic spines. The dimensions of a neuron's dendritic compartment, including its spines, are also a major determinant of the passive and active electrical excitability of dendrites. Furthermore, the dimensions of dendritic branches and spines change during postnatal development and, possibly, in response to certain patterns of neuronal activity. Due to their small size, accurate quantitation of spine number and structure is difficult to achieve (Larkman, J Comp Neurol 306:332, 1991). Here we follow an analysis approach based on high-resolution electron microscopy. Serial block-face scanning electron microscopy (SBFSEM) enables automated imaging of large specimen volumes at high resolution, but the large data sets generated by this technique make manual reconstruction of neuronal structure laborious. We present NeuroStruct, a reconstruction environment developed for fast, automated analysis of large SBFSEM data sets containing individual stained neurons, using algorithms optimized for CPU and GPU hardware. NeuroStruct is based on 3D operators and integrates image information from image stacks of individual neurons filled with biocytin and stained with osmium tetroxide. The focus of the presented work is the reconstruction of dendritic branches with a detailed representation of spines. NeuroStruct delivers both a 3D surface model of the reconstructed structures and a 1D geometrical model corresponding to their skeleton. Both representations are prerequisites for analysing morphological characteristics and for simulating signalling within a neuron in a way that captures the influence of spines.
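A minimal sketch of the two output representations described above, assuming scikit-image is available: a triangle-mesh surface from marching cubes and a voxel skeleton from 3D thinning. This is not NeuroStruct itself, and the synthetic volume and threshold choice are illustrative only.

```python
import numpy as np
from skimage import filters, measure, morphology

def surface_and_skeleton(volume):
    """volume: 3D numpy array of a stained neuron (higher values = more stain)."""
    # Binarise the stained structure with a global Otsu threshold (illustrative choice).
    mask = volume > filters.threshold_otsu(volume)
    # 3D surface model: triangle mesh via marching cubes.
    verts, faces, normals, values = measure.marching_cubes(mask.astype(np.float32), level=0.5)
    # 1D geometrical model: voxel skeleton of the same mask
    # (recent scikit-image uses Lee's 3D method; older versions expose skeletonize_3d).
    skeleton = morphology.skeletonize(mask)
    return (verts, faces), skeleton

if __name__ == "__main__":
    # Synthetic "dendrite": a thick bar through a small volume.
    vol = np.zeros((40, 40, 40), dtype=np.float32)
    vol[18:22, 18:22, 5:35] = 1.0
    (verts, faces), skel = surface_and_skeleton(vol)
    print(verts.shape, faces.shape, int(skel.sum()))
```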

2.
Tomography has emerged as a powerful methodology for determining the complex architectures of biological specimens that are best regarded, from a structural point of view, as singular entities. However, once the structures of a sufficiently large number of such specimens have been solved, structural patterns may begin to emerge. That situation is addressed here, where we present the clustering of a set of 3D reconstructions using a novel quantitative approach. In general terms, we propose a new variant of a self-organizing neural network for the unsupervised classification of 3D reconstructions. The novelty of the algorithm lies in its rigorous mathematical formulation: starting from a large set of noisy input data, it finds a set of "representative" items, organized on an ordered output map, such that the probability density of the representative items resembles the probability density of the input data as closely as possible. In this study, we evaluate the feasibility of applying the proposed neural approach to the problem of identifying similar 3D motifs within tomograms of insect flight muscle. Our experimental results show that the technique is suitable for this type of problem, providing the electron microscopy community with a new tool for exploring large sets of tomogram data to find complex patterns.
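For orientation, a minimal classical Kohonen self-organizing map in NumPy, illustrating how a large set of noisy input vectors (e.g. flattened subtomogram densities) is summarised by a small, ordered set of representative code vectors. The paper's variant has its own rigorous probabilistic formulation, which is not reproduced here.

```python
import numpy as np

def train_som(data, n_units=10, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """data: (n_samples, n_features) array of input vectors."""
    rng = np.random.default_rng(seed)
    # 1D output map of n_units code vectors, initialised from random samples.
    codebook = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    grid = np.arange(n_units)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                      # decaying learning rate
        sigma = max(sigma0 * (1.0 - epoch / epochs), 0.5)      # shrinking neighbourhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(codebook - x, axis=1))   # best matching unit
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))     # neighbourhood weights
            codebook += lr * h[:, None] * (x - codebook)
    return codebook

# Usage: assign each input to its nearest representative item.
# labels = np.argmin(np.linalg.norm(data[:, None] - codebook[None], axis=2), axis=1)
```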

3.
4.
Accurate automated cell fate analysis of immunostained human stem cells from two- and three-dimensional (2D and 3D) images would improve efficiency in stem cell research. An accurate and precise tool that reduces variability and the time needed for human stem cell fate analysis would improve productivity and the interpretability of data across research groups. In this study, we created protocols for the high-performance image analysis software Volocity to classify and quantify cytoplasmic and nuclear cell fate markers from 2D and 3D images of human neural stem cells after in vitro differentiation. To enhance 3D image capture efficiency, we optimized the acquisition settings of an Olympus FV10i confocal laser scanning microscope to match our quantification protocols and improve cell fate classification. The methods developed in this study allow for more time-efficient and accurate software-based, operator-validated stem cell fate classification and quantification from 2D and 3D images, and yield ≥94.4% correspondence with human-recognized objects.
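A hypothetical sketch of the kind of step such protocols automate: segment nuclei on a nuclear channel and score each object as positive or negative for a nuclear fate marker. The function name, threshold, and cutoff are illustrative assumptions, not the published Volocity protocol.

```python
import numpy as np
from skimage import filters, measure, morphology

def score_nuclear_marker(nuclei_channel, marker_channel, min_area=50, marker_cutoff=0.3):
    """Segment nuclei and report mean marker intensity per nucleus."""
    mask = nuclei_channel > filters.threshold_otsu(nuclei_channel)
    mask = morphology.remove_small_objects(mask, min_size=min_area)
    labels = measure.label(mask)
    results = []
    for region in measure.regionprops(labels, intensity_image=marker_channel):
        results.append({
            "label": region.label,
            "area": region.area,
            "marker_mean": region.mean_intensity,
            "positive": region.mean_intensity > marker_cutoff,   # assumed cutoff
        })
    return results
```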

5.
6.
7.
8.
Fast rotational matching of single-particle images
The presence of noise and the absence of contrast in electron micrographs reduce the resolution of the final 3D reconstruction, owing to the inherent limitations of single-particle image alignment. The fast rotational matching (FRM) algorithm was recently introduced for accurate alignment of 2D images under such challenging conditions. Here, we implemented this algorithm for the first time in a standard 3D reconstruction package used in electron microscopy. This allowed us to carry out exhaustive tests of its robustness and reliability in iterative orientation determination, classification, and 3D reconstruction on simulated and experimental image data. A classification test on GroEL chaperonin images demonstrates that FRM assigns up to 13% more images to their correct reference orientation than the classical self-correlation function method. Moreover, at sub-nanometer resolution, GroEL and rice dwarf virus reconstructions exhibit a remarkable resolution gain of 10–20% that is attributed to the new image alignment kernel.
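A minimal 2D illustration of why rotational matching can be made fast: resample both images to polar coordinates and compute the rotational cross-correlation with a single FFT along the angle axis. The published FRM kernel is more elaborate; this sketch (function names and sampling choices are assumptions) only shows the principle.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, n_r=64, n_theta=180):
    """Resample a 2D image onto an (n_r, n_theta) polar grid about its centre."""
    cy, cx = (np.asarray(img.shape) - 1) / 2.0
    r = np.linspace(0, min(cy, cx), n_r)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    return map_coordinates(img, coords, order=1)

def best_rotation(img, ref, n_theta=180):
    """Return the rotational offset (degrees) maximising the circular correlation."""
    a, b = to_polar(img, n_theta=n_theta), to_polar(ref, n_theta=n_theta)
    # Circular cross-correlation over the angle axis, summed over radii.
    corr = np.fft.ifft(np.fft.fft(a, axis=1) * np.conj(np.fft.fft(b, axis=1)), axis=1).real.sum(axis=0)
    return 360.0 * np.argmax(corr) / n_theta   # sign convention depends on correlation definition
```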

9.

Introduction

Electrical impedance tomography (EIT) is an emerging clinical tool for monitoring ventilation distribution in mechanically ventilated patients, for which many image reconstruction algorithms have been suggested. We propose an experimental framework to assess such algorithms with respect to their ability to correctly represent well-defined physiological changes. We defined a set of clinically relevant ventilation conditions and induced them experimentally in 8 pigs by controlling three ventilator settings (tidal volume, positive end-expiratory pressure and the fraction of inspired oxygen). In this way, large and discrete shifts in global and regional lung air content were elicited.

Methods

We use the framework to compare twelve 2D EIT reconstruction algorithms, including backprojection (the original and still most frequently used algorithm), GREIT (a more recent consensus algorithm for lung imaging), truncated singular value decomposition (TSVD), several variants of the one-step Gauss-Newton approach and two iterative algorithms. We consider the effects of using a 3D finite element model, assuming non-uniform background conductivity, noise modeling, reconstructing for electrode movement, total variation (TV) reconstruction, robust error norms, smoothing priors, and using difference vs. normalized difference data.

Results and Conclusions

Our results indicate that, while the variation in appearance among images reconstructed from the same data is not negligible, clinically relevant parameters do not vary considerably among the advanced algorithms. Several of the analysed algorithms perform well, while some others are significantly worse. Given its vintage and ad hoc formulation, backprojection works surprisingly well, supporting the validity of previous studies in lung EIT.
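For reference, a generic one-step Gauss-Newton difference reconstruction of the kind compared in the study: delta_sigma = (J^T W J + lambda^2 R)^(-1) J^T W (v_meas - v_ref). The Jacobian J is assumed to be precomputed from a forward model; the identity prior and weighting are illustrative defaults, not the specific variants evaluated in the paper.

```python
import numpy as np

def one_step_gauss_newton(J, v_meas, v_ref, lam=0.05, R=None, W=None):
    """J: (n_meas, n_elems) sensitivity matrix; v_meas, v_ref: boundary voltage vectors."""
    n_meas, n_elems = J.shape
    R = np.eye(n_elems) if R is None else R        # regularisation prior (assumed identity)
    W = np.eye(n_meas) if W is None else W         # measurement noise weighting (assumed identity)
    dv = v_meas - v_ref                            # difference data
    A = J.T @ W @ J + (lam ** 2) * R
    return np.linalg.solve(A, J.T @ W @ dv)        # conductivity change per element
```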

10.
Advances in reporters for gene expression have made it possible to document and quantify expression patterns in 2D-4D. In contrast to microarrays, which provide data for many genes but averaged and/or at low spatial resolution, images reveal the high spatial dynamics of gene expression. Developing computational methods to compare, annotate, and model gene expression based on images is imperative, considering that the available data are increasing rapidly. We have developed a sparse Bayesian factor analysis model in which the observed expression diversity among a large set of high-dimensional images is modeled by a small number of hidden common factors. We apply this approach to embryonic expression patterns from a Drosophila RNA in situ image database and show that the automatically inferred factors provide a meaningful decomposition and represent common co-regulation or biological functions. The low-dimensional set of factor mixing weights is further used as features by a classifier to annotate expression patterns with functional categories. On human-curated annotations, our sparse approach reaches similar or better classification of expression patterns at different developmental stages compared with other automatic image annotation methods that use thousands of hard-to-interpret features. Our study therefore outlines a general framework for large microscopy data sets in which both the generative model itself and its application to analysis tasks such as automated annotation can provide insight into biological questions.
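A rough stand-in for the decomposition idea: many vectorised expression images are explained by a few shared factors with sparse mixing weights, which then feed a classifier. scikit-learn's MiniBatchDictionaryLearning is used here only as an approximation; it is not the sparse Bayesian factor analysis model of the paper, and the data and labels are placeholders.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.neighbors import KNeighborsClassifier

images = np.random.rand(200, 32 * 32)        # placeholder for vectorised in situ images
labels = np.random.randint(0, 5, size=200)   # placeholder functional annotations

dico = MiniBatchDictionaryLearning(n_components=10, alpha=1.0, random_state=0)
weights = dico.fit_transform(images)         # sparse, low-dimensional mixing weights
factors = dico.components_                   # the hidden common factors (10 x 1024)

# The low-dimensional weights can then serve as features for annotation.
clf = KNeighborsClassifier(n_neighbors=5).fit(weights, labels)
```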

11.
In many situations, 3D cell cultures mimic the natural organization of tissues more closely than 2D cultures. Conventional methods for phenotyping such 3D cultures use either single or multiple simple parameters based on morphology and fluorescence staining intensity. However, because of their simplicity, many details are not taken into account, which limits system-level study of phenotype characteristics. Here, we have developed a new image analysis platform to automatically profile 3D cell phenotypes with 598 parameters, including morphology, topology, and texture parameters such as wavelet features and image moments. As proof of concept, we analyzed mouse breast cancer cells (4T1 cells) in a 384-well plate format following exposure to a diverse set of compounds at different concentrations. The results showed concentration-dependent phenotypic trajectories for different biologically active compounds that could be used to classify compounds based on their biological target. To demonstrate the wider applicability of our method, we analyzed the phenotypes of a collection of 44 human breast cancer cell lines cultured in 3D and showed that our method correctly distinguished basal-A, basal-B, luminal, and ERBB2+ cell lines using a supervised nearest-neighbor classification method.
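A reduced sketch of the profile-and-classify workflow: compute a small feature vector per segmented 3D object, summarise it per well, and classify with a supervised nearest-neighbour model. The three descriptors below are placeholders for the 598-parameter profile, and all names are illustrative.

```python
import numpy as np
from skimage import measure
from sklearn.neighbors import KNeighborsClassifier

def profile(labelled_volume):
    """labelled_volume: integer-labelled 3D array of segmented spheroids/organoids."""
    feats = []
    for region in measure.regionprops(labelled_volume):
        feats.append([region.area,                  # object size (voxels)
                      region.extent,                # fill fraction of the bounding box
                      region.equivalent_diameter])  # diameter of a sphere of equal volume
    return np.mean(feats, axis=0)                   # per-well summary profile

# profiles: (n_wells, n_features) stacked from profile(); targets: compound class per well.
# knn = KNeighborsClassifier(n_neighbors=3).fit(profiles, targets)
```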

12.
Image classification is a challenging problem in organizing a large image database, and an effective method for this purpose is still under investigation. This paper presents a method based on wavelet analysis for extracting features for image classification. After an image is decomposed by the wavelet transform, the statistics of its features are obtained from the distributions of histograms of the wavelet coefficients, which are projected onto two orthogonal axes, the x and y directions. The nodes of the tree representation of an image can therefore be represented by these distributions. The high-level features are described in a low-dimensional space of 16 attributes, so the computational complexity is significantly decreased. A total of 2,800 images from seven categories were used in the experiments: half for training the neural network and the other half for testing. Features extracted by wavelet analysis and conventional features were both used in the experiments to demonstrate the efficacy of the proposed method. The classification rate on the training set with wavelet analysis reaches 91%, and the classification rate on the testing set reaches 89%. The experimental results show that the proposed approach to image classification is more effective.
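A sketch of the wavelet feature idea, assuming PyWavelets: decompose an image, summarise each subband's coefficient distribution into a short feature vector, and train a neural-network classifier on those vectors. The exact 16-attribute construction from x/y histogram projections is not reproduced; per-subband means and standard deviations stand in for it.

```python
import numpy as np
import pywt

def wavelet_features(img, wavelet="db2", level=2):
    """Summarise an image by per-subband statistics of its 2D wavelet decomposition."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    feats = [np.mean(coeffs[0]), np.std(coeffs[0])]           # approximation subband
    for cH, cV, cD in coeffs[1:]:                             # detail subbands per level
        for band in (cH, cV, cD):
            feats.extend([np.mean(np.abs(band)), np.std(band)])
    return np.array(feats)                                    # 14 values with these settings

# Hypothetical usage with a neural-network classifier, as in the paper:
# from sklearn.neural_network import MLPClassifier
# X = np.stack([wavelet_features(im) for im in images]); y = category_labels
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
```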

13.
The large amount of image data necessary for high-resolution 3D reconstruction of macromolecular assemblies leads to significant increases in computational time. One of the most time-consuming operations is 3D density map reconstruction, and software optimization can greatly reduce the time required for any given structural study. The majority of algorithms proposed for improving the computational effectiveness of 3D reconstruction are based on a ray-by-ray projection of each image into the reconstructed volume. In this paper, we propose a novel fast implementation of the filtered back-projection algorithm based on a voxel-by-voxel principle. Our implementation has been exhaustively tested using both model and real data. We compared 3D reconstructions obtained with the new approach to results obtained with the filtered back-projection algorithm and the Fourier-Bessel algorithm commonly used for reconstructing icosahedral viruses. These computational experiments demonstrate the robustness, reliability, and efficiency of the approach.
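A minimal 2D pixel-driven analogue of the voxel-by-voxel principle: each output pixel gathers the filtered projection value it maps to, rather than smearing each ray into the volume. The paper's implementation targets 3D single-particle reconstruction; this sketch only illustrates the gather-style loop.

```python
import numpy as np

def fbp_pixel_driven(sinogram, angles_deg):
    """sinogram: (n_angles, n_detectors); angles_deg: projection angles in degrees."""
    n_angles, n_det = sinogram.shape
    # Ramp filter applied to each projection in Fourier space.
    freqs = np.fft.fftfreq(n_det)
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))
    # Pixel grid centred on the rotation axis.
    xs = np.arange(n_det) - (n_det - 1) / 2.0
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros_like(X, dtype=float)
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        # For every pixel, find the detector coordinate it projects onto ...
        t = X * np.cos(ang) + Y * np.sin(ang) + (n_det - 1) / 2.0
        t0 = np.clip(np.floor(t).astype(int), 0, n_det - 2)
        w = t - t0
        # ... and gather the linearly interpolated filtered value.
        recon += (1 - w) * proj[t0] + w * proj[t0 + 1]
    return recon * np.pi / (2 * n_angles)
```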

14.
The paper describes a method in which two data-collection systems, medical imaging and electrogoniometry, are combined to allow accurate and simultaneous modeling of both the spatial kinematics and the morphological surface of a particular joint. The joint of interest (JOI) is attached to a Plexiglas jig that includes four metallic markers defining a local reference system (R(GONIO)) for the kinematics data. Volumetric data of the JOI and the R(GONIO) markers are collected by medical imaging. The spatial location and orientation of the markers in the global reference system (R(CT)) of the medical-imaging environment are obtained by applying object-recognition and classification methods to the image data set. Segmentation and 3D isosurfacing of the JOI are performed to produce a 3D model comprising two anatomical objects: the proximal and distal JOI segments. After imaging, one end of a custom-made 3D electrogoniometer is attached to the distal segment of the JOI and the other end is placed at the R(GONIO) origin; the JOI is then displaced and the spatial kinematics data are recorded by the goniometer. After recording, the data are registered from R(GONIO) to R(CT) prior to simulation. Data analysis was performed using both the joint coordinate system (JCS) and the instantaneous helical axis (IHA). Finally, the 3D joint model is simulated in real time using the experimental kinematics data. The system is integrated into a computer graphics interface, allowing free manipulation of the 3D scene. The overall accuracy of the method was validated against two other kinematics data collection methods: a 3D digitizer and interpolation of the kinematics data from discrete positions obtained from medical imaging. Validation was performed on the superior and inferior radio-ulnar joints (i.e. prono-supination motion). The maximal RMS error was 1 degree for the helical axis rotation and 1.2 mm for its translation. Prono-supination of the forearm showed a total rotation of 132 degrees for 0.8 mm of translation. The reproducibility of the method, expressed in JCS parameters, was on average 1 degree (maximal deviation 2 degrees) for rotation and 1 mm (maximal deviation 2 mm) for translation. In vitro experiments were performed on both the knee joint and the ankle joint. Averaged JCS parameters for the knee were 109, 17 and 4 degrees for flexion, internal rotation and abduction, respectively. Averaged maximal translation values for the knee were 12, 3 and 4 mm posteriorly, medially and proximally, respectively. Averaged JCS parameters for the ankle were 43, 9 and 3 degrees for plantarflexion, adduction and internal rotation, respectively. Averaged maximal translation values for the ankle were 4, 2 and 1 mm anteriorly, medially and proximally, respectively.
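A short sketch of extracting helical-axis parameters from a rigid displacement given as a rotation matrix R and translation d, e.g. between two recorded poses of the distal JOI segment (registration from R(GONIO) to R(CT) is assumed to be done). This is textbook screw-axis algebra, not the paper's full IHA pipeline.

```python
import numpy as np

def helical_axis(R, d):
    """Return rotation angle (degrees), unit axis n, and translation along the axis."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    # Rotation axis from the skew-symmetric part of R (valid away from 0 and 180 degrees).
    n = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    n = n / (2.0 * np.sin(angle))
    t_along_axis = float(np.dot(n, d))   # translation component along the helical axis
    return np.degrees(angle), n, t_along_axis
```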

15.

Background

Detailed knowledge of the subcellular location of each expressed protein is critical to a full understanding of its function. Fluorescence microscopy, in combination with methods for fluorescent tagging, is the most suitable current method for proteome-wide determination of subcellular location. Previous work has shown that neural network classifiers can distinguish all major protein subcellular location patterns in both 2D and 3D fluorescence microscope images. Building on these results, we evaluate here new classifiers and features to improve the recognition of protein subcellular location patterns in both 2D and 3D fluorescence microscope images.

Results

We report here a thorough comparison of the performance on this problem of eight different state-of-the-art classification methods, including neural networks, support vector machines with linear, polynomial, radial basis, and exponential radial basis kernel functions, and ensemble methods such as AdaBoost, Bagging, and Mixtures-of-Experts. Ten-fold cross validation was used to evaluate each classifier with various parameters on different Subcellular Location Feature sets representing both 2D and 3D fluorescence microscope images, including new feature sets incorporating features derived from Gabor and Daubechies wavelet transforms. After optimal parameters were chosen for each of the eight classifiers, optimal majority-voting ensemble classifiers were formed for each feature set. Comparison of results for each image for all eight classifiers permits estimation of the lower bound classification error rate for each subcellular pattern, which we interpret to reflect the fraction of cells whose patterns are distorted by mitosis, cell death or acquisition errors. Overall, we obtained statistically significant improvements in classification accuracy over the best previously published results, with the overall error rate being reduced by one-third to one-half and with the average accuracy for single 2D images being higher than 90% for the first time. In particular, the classification accuracy for the easily confused endomembrane compartments (endoplasmic reticulum, Golgi, endosomes, lysosomes) was improved by 5–15%. We achieved further improvements when classification was conducted on image sets rather than on individual cell images.

Conclusions

The availability of accurate, fast, automated classification systems for protein location patterns, in conjunction with high-throughput fluorescence microscope imaging techniques, enables a new subfield of proteomics: location proteomics. The accuracy and sensitivity of this approach make it an important alternative to low-resolution assignments by curation or sequence-based prediction.
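A sketch of the evaluation scheme described in the Results: individually configured classifiers (here SVMs with different kernels, a neural network, and AdaBoost) combined by majority voting and scored with 10-fold cross-validation. The feature matrix and labels are placeholders, not Subcellular Location Features.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(300, 40)                  # placeholder feature matrix
y = np.random.randint(0, 8, size=300)        # placeholder location-pattern labels

ensemble = VotingClassifier(
    estimators=[
        ("svm_lin", make_pipeline(StandardScaler(), SVC(kernel="linear"))),
        ("svm_rbf", make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))),
        ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000))),
        ("ada", AdaBoostClassifier()),
    ],
    voting="hard",                           # majority vote across classifiers
)
scores = cross_val_score(ensemble, X, y, cv=10)   # 10-fold cross-validation
print(scores.mean())
```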

16.
17.
18.

Background  

While progress has been made in developing automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial block-face scanning electron microscopic data. Previously developed texture-based methods are limited for this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block-face scanning electron microscopic imaging. The first step is random forest patch classification operating directly on 2D image patches. The second step consists of contour-pair classification. In the final step, we introduce a method to automatically seed a level set operation with the output of the previous steps.
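A sketch of the first step only, as described above: classify small 2D image patches as mitochondrion versus background with a random forest. The patch size and the use of raw pixel values as features are simplifying assumptions; the contour-pair and level-set steps are not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_patches(image, centers, half=7):
    """Cut (2*half+1)^2 patches around given (row, col) centers and flatten them."""
    patches = []
    for r, c in centers:
        patches.append(image[r - half:r + half + 1, c - half:c + half + 1].ravel())
    return np.array(patches)

# train_image, train_centers, train_labels would come from manually annotated sections.
# X_train = extract_patches(train_image, train_centers)
# rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, train_labels)
# probs = rf.predict_proba(extract_patches(test_image, test_centers))[:, 1]
```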

19.
Organotypic, three-dimensional (3D) cell culture models of epithelial tumour types such as prostate cancer recapitulate key aspects of the architecture and histology of solid cancers. Morphometric analysis of multicellular 3D organoids is particularly important when additional components such as the extracellular matrix and tumour microenvironment are included in the model. The complexity of such models has so far limited their successful implementation, so there is a great need for automatic, accurate and robust image segmentation tools to facilitate the analysis of biologically relevant 3D cell culture models. We present a segmentation method based on Markov random fields (MRFs) and illustrate it using 3D image stack data from an organotypic 3D model of prostate cancer cells co-cultured with cancer-associated fibroblasts (CAFs). The 3D segmentation output suggests that these cell types are in physical contact with each other within the model, which has important implications for tumour biology. Segmentation performance is quantified using ground-truth labels, and we show how each step of our method increases segmentation accuracy. We provide the ground-truth labels along with the image data and code. Using independent image data, we show that our segmentation method is also more generally applicable to other types of cellular microscopy and is not limited to fluorescence microscopy.
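A minimal Markov-random-field segmentation sketch using iterated conditional modes on a 2D slice: each pixel label trades off agreement with the image intensity against agreement with its four neighbours. The published method's energy terms, 3D neighbourhood, and optimiser are not reproduced here.

```python
import numpy as np

def icm_segment(img, n_labels=2, beta=1.5, n_iter=10):
    """img: 2D float array; returns an integer label image."""
    means = np.linspace(img.min(), img.max(), n_labels)           # per-class intensity means
    labels = np.argmin(np.abs(img[..., None] - means), axis=-1)   # initial labelling
    for _ in range(n_iter):
        costs = []
        for lab in range(n_labels):
            data_cost = (img - means[lab]) ** 2
            # Smoothness cost: number of 4-neighbours currently disagreeing with `lab`.
            disagree = sum((np.roll(labels, s, axis=a) != lab)
                           for s, a in [(1, 0), (-1, 0), (1, 1), (-1, 1)])
            costs.append(data_cost + beta * disagree)
        labels = np.argmin(np.stack(costs), axis=0)
    return labels
```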

20.