Similar Articles

20 similar articles found.
1.
Single particle analysis, which can be regarded as an average of signals from thousands or even millions of particle projections, is an efficient method for studying the three-dimensional structures of biological macromolecules. An intrinsic assumption in single particle analysis is that all the analyzed particles have identical composition and conformation; specimen heterogeneity in either composition or conformation therefore poses great challenges for high-resolution analysis. For particles with multiple conformations, inaccurate alignment and orientation parameters yield an averaged map with diminished resolution and smeared density. Going beyond extensive classification approaches, and based on the assumption that the macromolecular complex is made up of multiple rigid modules whose relative orientations and positions fluctuate slightly around their equilibrium positions, we propose a new method, called local optimization refinement, to address this conformational heterogeneity and improve resolution. The key idea is to optimize the orientation and shift parameters of each rigid module and then reconstruct their three-dimensional structures individually. Using simulated data of 80S/70S ribosomes with relative fluctuations between the large (60S/50S) and small (40S/30S) subunits, we tested this algorithm and found that the resolutions of both subunits are significantly improved. Our method provides a proof-of-principle solution for high-resolution single particle analysis of macromolecular complexes with dynamic conformations.
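The intuition behind per-module alignment can be illustrated with a minimal, hypothetical 1D sketch (not the authors' implementation): averaging particles whose "module" fluctuates around an equilibrium position smears the density, while optimizing each particle's shift parameter before averaging restores it. The reference signal, shift range, and least-squares criterion below are illustrative assumptions; the real method estimates 3D orientations and shifts during refinement rather than aligning to a known reference.

```python
def shift(signal, s):
    """Cyclically shift a 1D signal by s samples (s may be negative)."""
    s %= len(signal)
    return signal[-s:] + signal[:-s] if s else list(signal)

def best_shift(ref, obs, max_shift=3):
    """Shift of obs that best matches ref in the least-squares sense."""
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: sum((r - o) ** 2
                                 for r, o in zip(ref, shift(obs, s))))

def average(stack):
    """Column-wise average of a stack of equal-length signals."""
    return [sum(col) / len(stack) for col in zip(*stack)]

# A rigid "module" density and five particles in which it fluctuates
# around its equilibrium position by small shifts.
ref = [0, 0, 1, 2, 1, 0, 0, 0]
particles = [shift(ref, s) for s in (-2, -1, 0, 1, 2)]

# Averaging without per-module alignment smears the peak...
smeared = average(particles)
# ...while optimizing each particle's shift first restores it.
aligned = average([shift(p, best_shift(ref, p)) for p in particles])
```

Here the aligned average recovers the full peak height, while the naive average spreads the same mass over neighboring positions.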

2.
A 26 Å resolution map of the structure of human low-density lipoprotein (LDL) was obtained by electron cryomicroscopy and single-particle image reconstruction. The structure showed a discoidal LDL particle with high-density regions distributed mainly at the edge of the particle and low-density regions at the flat surfaces that cover the core region. To determine the chemical components that correspond to these density regions, and to delineate the distribution of protein and phospholipid at the particle surface at the resolution of the map, we used Mono-Sulfo-NHS-Undecagold labeling to preferentially increase the contrast of the apolipoprotein B component of the LDL particle. In the three-dimensional map from the image reconstruction of the undecagold-labeled LDL particles, the high-density region from the undecagold label was distributed mainly at the edge of the particle, and lower-density regions were found at the flat surfaces that cover the neutral lipid core. This suggests that apolipoprotein B mainly encircles the LDL particle at its edge, while the phospholipid monolayers are located at the flat surfaces, which are parallel to the cholesterol ester layers in the core and may interact with the core lipid layers through their acyl chains.

3.
Cryo-electron microscopy (cryo-EM), combined with image processing, is an increasingly powerful tool for structure determination of macromolecular protein complexes and assemblies. In fact, single particle electron microscopy [1] and two-dimensional (2D) electron crystallography [2] have become relatively routine methodologies, and a large number of structures have been solved using these methods. At the same time, image processing and three-dimensional (3D) reconstruction of helical objects has developed rapidly, especially the iterative helical real-space reconstruction (IHRSR) method [3], which uses single particle analysis tools in conjunction with helical symmetry. Many biological entities function in filamentous or helical forms, including actin filaments [4], microtubules [5], amyloid fibers [6], tobacco mosaic viruses [7], and bacterial flagella [8]. Because a 3D density map of a helical entity can be attained from a single projection image, in contrast to the many images required for 3D reconstruction of a non-helical object, structural analysis of such flexible and disordered helical assemblies is now attainable with the IHRSR method.

In this video article, we provide detailed protocols for obtaining a 3D density map of a helical protein assembly (the HIV-1 capsid [9] is our example), including protocols for cryo-EM specimen preparation, low-dose data collection by cryo-EM, indexing of helical diffraction patterns, and image processing and 3D reconstruction using IHRSR. Compared to other techniques, cryo-EM offers optimal specimen preservation under near-native conditions. Samples are embedded in a thin layer of vitreous ice by rapid freezing and imaged in electron microscopes at liquid nitrogen temperature, under low-dose conditions to minimize radiation damage. Sample images are obtained under near-native conditions at the expense of low signal and low contrast in the recorded micrographs.
Fortunately, the process of helical reconstruction has largely been automated, with the exception of indexing the helical diffraction pattern. Here, we describe an approach to index a helical structure and determine its helical symmetry (helical parameters) from digitized micrographs, an essential step for 3D helical reconstruction. Briefly, we obtain an initial 3D density map by applying the IHRSR method. This initial map is then iteratively refined by introducing constraints on the alignment parameters of each segment, thus controlling their degrees of freedom. Further improvement is achieved by correcting for the contrast transfer function (CTF) of the electron microscope (amplitude and phase correction) and by optimizing the helical symmetry of the assembly.
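The symmetry-optimization step of an IHRSR-style cycle can be sketched as a toy grid search: score each candidate (rise, twist) pair by how well one helical step maps each subunit onto its neighbor, and keep the best pair. The helper names, candidate grids, and least-squares mismatch below are illustrative assumptions, not the published IHRSR implementation, which refines symmetry against a 3D density map.

```python
import math

def helix_points(rise, twist_deg, n=20, radius=1.0):
    """Generate n subunit positions of an ideal helix."""
    twist = math.radians(twist_deg)
    return [(radius * math.cos(i * twist),
             radius * math.sin(i * twist),
             i * rise) for i in range(n)]

def symmetry_mismatch(points, rise, twist_deg):
    """RMS distance between each point, advanced by one candidate helical
    step (rotate by twist, translate by rise), and its actual successor.
    Zero when (rise, twist) match the true symmetry."""
    twist = math.radians(twist_deg)
    c, s = math.cos(twist), math.sin(twist)
    err = 0.0
    for (x, y, z), (x2, y2, z2) in zip(points, points[1:]):
        xr, yr, zr = c * x - s * y, s * x + c * y, z + rise
        err += (xr - x2) ** 2 + (yr - y2) ** 2 + (zr - z2) ** 2
    return math.sqrt(err / (len(points) - 1))

def refine_symmetry(points, rises, twists):
    """Grid search over candidate (rise, twist) pairs, analogous to the
    symmetry refinement step inside an IHRSR-style iteration."""
    return min(((r, t) for r in rises for t in twists),
               key=lambda rt: symmetry_mismatch(points, rt[0], rt[1]))

pts = helix_points(rise=1.4, twist_deg=66.7)
best = refine_symmetry(pts,
                       rises=[1.2, 1.3, 1.4, 1.5],
                       twists=[60.0, 66.7, 70.0])
```

With noise-free synthetic points the mismatch vanishes only at the true parameters, so the grid search recovers them exactly.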

4.
Traditional single particle reconstruction methods use either the Fourier or the delta function basis to represent the particle density map. This paper proposes a more flexible algorithm that adaptively chooses the basis based on the data. Because the basis adapts to the data, the reconstruction resolution and signal-to-noise ratio (SNR) are improved compared to a reconstruction with a fixed basis. Moreover, the algorithm automatically masks the particle, thereby separating it from the background. This eliminates the need for ad hoc filtering or masking in the refinement loop. The algorithm is formulated in a Bayesian maximum-a-posteriori framework and uses an efficient optimization algorithm for the maximization. Evaluations using simulated and actual cryogenic electron microscopy data show resolution and SNR improvements, as well as effective masking of the particle from the background.

5.
Single-particle three-dimensional reconstruction by cryo-electron microscopy is a widely used method for solving the three-dimensional structures of biological macromolecules. However, the solvent-flattening step in current single-particle reconstruction workflows still has a shortcoming: no mainstream single-particle reconstruction program can automatically find a mask density map, so the reconstruction process is inevitably disturbed by deviations in the statistical noise model. To address this problem, we borrow the solvent-flattening approach widely used for phase improvement in X-ray crystallography and process the reconstructed 3D density map with Gaussian filtering, Canny edge detection, and minimum-error thresholding, optimizing the solvent-flattening operation and developing a method that automatically finds the mask density map during single-particle reconstruction. The performance of the method was evaluated using Fourier shell correlation (FSC) curves of the density maps and scatter plots of angular-assignment errors for reconstructions from simulated particle data. The results show that the automatic masking method reliably finds a mask density map covering the region containing molecular-structure signal and appreciably improves the resolution of the reconstructed density map.

6.
Cryo-electron microscopy and three-dimensional image reconstruction are powerful tools for analyzing icosahedral virus capsids at resolutions that now extend below 1 nm. However, the validity of such density maps depends critically on correct identification of the viewing geometry of each particle in the data set. In some cases (for example, round capsids with low surface relief) it is difficult to identify orientations by conventional application of the two most widely used approaches, "common lines" and model-based iterative refinement. We describe here a strategy for determining the orientations of such refractory specimens. The key step is to determine reliable orientations for a base set of particles. For each particle, a list of candidate orientations is generated by common lines; correct orientations are then identified by computing a single-particle reconstruction for each candidate and then systematically matching their reprojections with the original images by visual criteria and cross-correlation analysis. This base set yields a first-generation reconstruction that is fed into the model-based procedure. This strategy has led to the structural determination of two viruses that, in our hands, resisted solution by other means.

7.
Strategies to achieve the highest resolutions in structures of protein complexes determined by cryo-electron microscopy generally involve averaging information from large numbers of individual molecular images. However, significant limitations are posed by heterogeneity in image quality and in protein conformation that are inherent to large data sets of images. Here, we demonstrate that the combination of iterative refinement and stringent molecular sorting is an effective method to obtain substantial improvements in map quality of the 1.8 MDa icosahedral catalytic core of the pyruvate dehydrogenase complex from Bacillus stearothermophilus. From a starting set of 42,945 images of the core complex, we show that using only the best 139 particles in the data set produces a map that is superior to those constructed with greater numbers of images, and that the location of many of the alpha-helices in the structure can be unambiguously visualized in a map constructed from as few as 9 particles.

8.
Generating reliable initial models is a critical step in the reconstruction of asymmetric single particles by 3D electron microscopy. This is particularly difficult to do if heterogeneity is present in the sample. The Random Conical Tilt (RCT) method, arguably the most robust approach presently available for this task, requires significant user intervention to solve the "missing cone" problem. We present here a novel approach, termed the orthogonal tilt reconstruction method, that eliminates the missing cone altogether, making it possible for single-class volumes to be used directly as initial references in refinement without further processing. The method involves collecting data at +45 degrees and -45 degrees tilts and only requires that particles adopt a relatively large number of orientations on the grid. One tilted data set is used for alignment and classification, and the other set, which provides views orthogonal to those in the first, is used for reconstruction, resulting in the absence of a missing cone. We have tested this method with synthetic data and compared its performance to that of the RCT method. We also propose a way of increasing the level of homogeneity in individual 2D classes (and volumes) in a heterogeneous data set and of identifying the most homogeneous volumes.

9.
A method is presented that reliably detects spherical viruses in a wide variety of noisy, low-contrast electron micrographs. Such detection is one of the first image analysis steps in the computer-aided reconstruction of three-dimensional density distribution models of viruses. Particle detection is based on comparing the intensity in a circular area with that in the surrounding ring, followed by a number of tests to validate the potential particles. The only input required from the user, in addition to the micrograph, is an approximate radius of the particle. The method has been implemented as the program ETHAN, which has been tested on several different data sets. ETHAN has also been used successfully to detect DNA-less virus particles for an actual reconstruction.
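The core disc-versus-ring test can be sketched as follows; this is a minimal toy on a synthetic micrograph, not the ETHAN source code, and the outer-ring radius of 1.5x and the validation-by-threshold step are our own illustrative assumptions.

```python
import math

def disc_ring_score(img, cy, cx, radius):
    """Mean intensity inside a disc of the given radius minus the mean in
    the surrounding ring (radius .. 1.5 * radius): the basic particle test."""
    disc, ring = [], []
    r_out = 1.5 * radius
    for y in range(len(img)):
        for x in range(len(img[0])):
            d = math.hypot(y - cy, x - cx)
            if d <= radius:
                disc.append(img[y][x])
            elif d <= r_out:
                ring.append(img[y][x])
    return sum(disc) / len(disc) - sum(ring) / len(ring)

def detect_particles(img, radius, threshold):
    """Scan candidate centres away from the border; keep scores above
    threshold as potential particles (to be validated by further tests)."""
    hits = []
    m = int(math.ceil(1.5 * radius))
    for cy in range(m, len(img) - m):
        for cx in range(m, len(img[0]) - m):
            s = disc_ring_score(img, cy, cx, radius)
            if s > threshold:
                hits.append((cy, cx, s))
    return hits

# Toy micrograph: flat background with one bright disc centred at (12, 12).
N = 25
img = [[0.0] * N for _ in range(N)]
for y in range(N):
    for x in range(N):
        if math.hypot(y - 12, x - 12) <= 4:
            img[y][x] = 1.0

hits = detect_particles(img, radius=4, threshold=0.8)
best = max(hits, key=lambda h: h[2])
```

A real implementation would work on noisy data, apply non-maximum suppression, and run the validation tests the abstract mentions; the toy only shows why the disc/ring contrast peaks at the particle centre.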

10.
In protein crystallography, much time and effort are often required to trace an initial model from an interpretable electron density map and to refine it until it best agrees with the crystallographic data. Here, we present a method to build and refine a protein model automatically and without user intervention, starting from diffraction data extending to resolution higher than 2.3 Å and reasonable estimates of crystallographic phases. The method is based on an iterative procedure that describes the electron density map as a set of unconnected atoms and then searches for protein-like patterns. Automatic pattern recognition (model building) combined with refinement allows a structural model to be obtained reliably within a few CPU hours. We demonstrate the power of the method with examples of a few recently solved structures.

11.

Background

Images of frozen-hydrated (vitrified) virus particles taken close to focus in an electron microscope contain structural signal at high spatial frequencies, but they have very low contrast because of the high levels of noise present in the image. The low contrast makes particle selection, classification, and orientation determination very difficult. The final purpose of classification is to improve the signal-to-noise ratio of the image representing each class, which is usually the class average. In this paper, the proposed method is based on wavelet filtering and multi-resolution processing for the classification and reconstruction of this very noisy data. Multivariate statistical analysis (MSA) is used for the classification.

Results

The MSA classification method is noise dependent. A set of 2600 projections from a 3D map of a herpes simplex virus, to which noise was added, was classified by MSA. The classification shows the power of wavelet filtering in enhancing the quality of class averages (used in 3D reconstruction) compared to Fourier band-pass filtering. A 3D reconstruction of a recombinant virus (VP5-VP19C) is presented as an application of multi-resolution processing for classification and reconstruction.
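The principle of wavelet filtering, in contrast to a Fourier band-pass, can be shown with a minimal one-level Haar transform in 1D: small detail coefficients are treated as noise and zeroed, while large ones (edges, structure) survive, so sharp features are preserved rather than blurred. This is a hedged toy sketch, not the multi-resolution scheme of the paper, which operates on 2D images with more elaborate wavelets.

```python
def haar_forward(signal):
    """One-level Haar transform: (approximation, detail) coefficients."""
    a = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    d = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return a, d

def haar_inverse(a, d):
    """Invert the one-level Haar transform."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def wavelet_denoise(signal, threshold):
    """Hard-threshold the detail coefficients: details below the threshold
    are treated as noise and zeroed, larger ones are kept intact."""
    a, d = haar_forward(signal)
    d = [x if abs(x) > threshold else 0.0 for x in d]
    return haar_inverse(a, d)

# A piecewise-constant "class average" profile corrupted by small noise;
# the sharp step between the two plateaus is the structure to preserve.
clean = [0.0] * 8 + [1.0] * 8
noisy = [v + n for v, n in zip(clean, [0.05, -0.05] * 8)]
denoised = wavelet_denoise(noisy, threshold=0.1)
```

On this example the alternating noise lives entirely in the detail coefficients, so thresholding removes it while the step edge, which a narrow Fourier band-pass would ring around, comes back exactly.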

Conclusion

The wavelet filtering and multi-resolution processing method proposed in this paper offers a new way of processing very noisy images obtained from electron cryo-microscopes. Multi-resolution processing and filtering improve the speed and accuracy of classification, which is vital for the 3D reconstruction of biological objects. The VP5-VP19C recombinant virus reconstruction presented here is an example that demonstrates the power of this method; without this processing, it is not possible to obtain the correct 3D map of this virus.

12.
Gene set analysis methods, which consider predefined groups of genes in the analysis of genomic data, have been successfully applied to gene expression data from cross-sectional studies. The time-course gene set analysis (TcGSA) introduced here is an extension of gene set analysis to longitudinal data. The proposed method relies on random effects modeling with maximum likelihood estimates. It allows all available repeated measurements to be used while dealing with unbalanced data due to missing at random (MAR) measurements. TcGSA is a hypothesis-driven method that identifies a priori defined gene sets with significant expression variations over time, taking into account the potential heterogeneity of expression within gene sets. When biological conditions are compared, the method indicates whether the time patterns of gene sets differ significantly between these conditions. The interest of the method is illustrated by its application to two real-life datasets: an HIV therapeutic vaccine trial (DALIA-1 trial), and data from a recent study on influenza and pneumococcal vaccines. In the DALIA-1 trial, TcGSA revealed a significant change in gene expression over time within 69 gene sets during vaccination, whereas both a standard univariate individual gene analysis corrected for multiple testing and a standard Gene Set Enrichment Analysis (GSEA) for time series failed to detect any significant pattern change over time. When applied to the second illustrative data set, TcGSA identified four gene sets ultimately found to be linked to the influenza vaccine as well, although previous analyses had associated them only with the pneumococcal vaccine. In our simulation study, TcGSA exhibits good statistical properties and increased power compared to other approaches for analyzing time-course expression patterns of gene sets. The method is made available to the community through an R package.

13.
MOTIVATION: Algorithms for phylogenetic tree reconstruction based on gene order data typically solve repeated instances of the reversal median problem (RMP), which is to find, for three given gene orders, a fourth gene order (called the median) with a minimal sum of reversal distances. All existing algorithms of this type consider only one median for each RMP instance, even when a large number of medians exist. A careful selection of one of the medians might lead to better phylogenetic trees. RESULTS: We propose a heuristic algorithm, amGRP, for solving the multiple genome rearrangement problem (MGRP) by repeatedly solving instances of the RMP while taking all medians into account. amGRP uses a branch-and-bound method that branches over medians from a selected subset of all medians for each RMP instance. Different heuristics for selecting the subsets have been investigated. To show that the medians of an RMP instance vary strongly with respect to properties likely to be relevant for phylogenetic tree reconstruction, the set of all medians has been investigated for artificial datasets and mitochondrial DNA (mtDNA) gene orders. Phylogenetic trees have been computed for a large set of randomly generated gene orders and two sets of mtDNA gene order data for different animal taxa with amGRP and with two standard approaches for solving the MGRP (GRAPPA-DCM and MGR). The results show that amGRP outperforms both other methods with respect to solution quality and computation time on the test data. AVAILABILITY: The source code of amGRP, additional results, and the test instances used in this paper are freely available from the authors.

14.
Cryo-EM observation of biological samples enables visualization of sample heterogeneity, either as discrete, separable states or as continuous heterogeneity resulting from local protein motion before flash freezing. Variability analysis of this continuous heterogeneity describes the variance between a particle stack and a volume, and results in a map series describing the various steps undertaken by the sample in the particle stack. While this observation is striking, it is very hard to pinpoint structural details in elements of the maps. To bridge the gap between observation and explanation, we designed a tool that refines an ensemble of structures into all the maps from variability analysis. Using this bundle of structures, it is easy to spot the variable parts of the structure, as well as the parts that are not moving. Comparison with molecular dynamics simulations highlights that the movements follow the same directions, albeit with different amplitudes. Ligands can also be investigated using this method. Variability refinement is available in the Phenix software suite, accessible under the program name phenix.varref.

15.
We present a reconstruction of native GroEL by electron cryomicroscopy (cryo-EM) and single particle analysis at 6 Å resolution. Alpha helices are clearly visible, and beta sheet density is also visible at this resolution. While the overall conformation of this structure is quite consistent with the published X-ray data, a measurable shift in the positions of three alpha helices in the intermediate domain is observed that is not consistent with any of the seven monomeric structures in the Protein Data Bank entry (1OEL). In addition, there is evidence for slight rearrangement or flexibility in parts of the apical domain. The 6 Å resolution cryo-EM GroEL structure clearly demonstrates the veracity and expanding scope of cryo-EM and the single particle reconstruction technique for macromolecular machines.

16.
Measuring the quality of three-dimensional (3D) reconstructions of biological macromolecules by transmission electron microscopy is still an open problem. In this article, we extend the applicability of the spectral signal-to-noise ratio (SSNR) to the evaluation of 3D volumes reconstructed with any reconstruction algorithm. The basis of the method is to measure the consistency between the data and a corresponding set of reprojections computed from the reconstructed 3D map. The idiosyncrasies of the reconstruction algorithm are explicitly taken into account by performing a noise-only reconstruction. This results in the definition of a 3D SSNR that provides an objective indicator of the quality of the 3D reconstruction. Furthermore, the information used to build the SSNR can be used to produce a volumetric SSNR (VSSNR). Our method overcomes the need to divide the data set into two halves. It also provides a direct measure of the performance of the reconstruction algorithm itself; this information is typically not available with standard resolution methods, which are primarily focused on reproducibility alone.
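The role of the noise-only reconstruction can be illustrated with a loose, hypothetical 1D reading: compare the spectral power of the data against the power of a matched noise-only signal, frequency by frequency. This is only a conceptual toy under our own assumptions; the actual estimator in the paper is defined per Fourier shell from the consistency between experimental images and reprojections of the 3D map.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (adequate for a toy example)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def ssnr(data, noise_only):
    """Toy per-frequency SSNR: power of the data divided by the power of a
    matched noise-only signal, minus 1, so pure noise gives roughly zero."""
    fd, fn = dft(data), dft(noise_only)
    return [max(abs(a) ** 2 / max(abs(b) ** 2, 1e-12) - 1.0, 0.0)
            for a, b in zip(fd, fn)]

n = 16
# Deterministic stand-in for noise, plus a pure sine "signal" at frequency 2.
noise = [0.05 * (((t * 7) % 5) - 2) for t in range(n)]
signal = [math.sin(2 * math.pi * 2 * t / n) for t in range(n)]
data = [s + e for s, e in zip(signal, noise)]
vals = ssnr(data, noise)
```

The SSNR is large only at the frequency actually carrying signal, which is exactly the behavior a resolution criterion exploits.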

17.
Morphologic heterogeneity within an individual tumor is well recognized by histopathologists in surgical practice. While this often takes the form of areas of distinct differentiation into recognized histological subtypes, or of different pathological grade, there are often more subtle differences in phenotype that defy accurate classification (Figure 1). Ultimately, since morphology is dictated by the underlying molecular phenotype, areas with visible differences are likely to be accompanied by differences in the expression of proteins that orchestrate cellular function and behavior, and therefore appearance. The significance of visible and invisible (molecular) heterogeneity for prognosis is unknown, but recent evidence suggests that, at least at the genetic level, heterogeneity exists in the primary tumor [1,2], and some of these sub-clones give rise to metastatic (and therefore lethal) disease. Moreover, some proteins are measured as biomarkers because they are the targets of therapy (for instance, ER and HER2 for tamoxifen and trastuzumab (Herceptin), respectively). If these proteins show variable expression within a tumor, then therapeutic responses may also be variable. The widely used histopathologic scoring schemes for immunohistochemistry either ignore, or numerically homogenize, the quantification of protein expression. Similarly, in destructive techniques where the tumor samples are homogenized (such as gene expression profiling), quantitative information can be elucidated, but spatial information is lost. Genetic heterogeneity mapping approaches in pancreatic cancer have relied either on generation of a single cell suspension [3] or on macrodissection [4]. A recent study used quantum dots to map morphologic and molecular heterogeneity in prostate cancer tissue [5], providing proof of principle that morphologic and molecular mapping is feasible, but falling short of quantifying the heterogeneity.
Since immunohistochemistry is, at best, only semi-quantitative and subject to intra- and inter-observer bias, more sensitive and quantitative methodologies are required to accurately map and quantify tissue heterogeneity in situ. We have developed and applied an experimental and statistical methodology to systematically quantify the heterogeneity of protein expression in whole tissue sections of tumors, based on the Automated QUantitative Analysis (AQUA) system [6]. Tissue sections are labeled with specific antibodies directed against cytokeratins and targets of interest, coupled to fluorophore-labeled secondary antibodies. Slides are imaged using a whole-slide fluorescence scanner. Images are subdivided into hundreds to thousands of tiles, and each tile is then assigned an AQUA score, which is a measure of protein concentration within the epithelial (tumor) component of the tissue. Heatmaps are generated to represent tissue expression of the proteins, and a heterogeneity score is assigned using a statistical measure of heterogeneity originally used in ecology, based on Simpson's biodiversity index [7]. To date there have been no attempts to systematically map and quantify this variability in tandem with protein expression in histological preparations. Here, we illustrate the first use of the method, applied to ER and HER2 biomarker expression in ovarian cancer. This method paves the way for analyzing heterogeneity as an independent variable in studies of biomarker expression in translational studies, in order to establish the significance of heterogeneity in prognosis and in the prediction of responses to therapy.
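Simpson's biodiversity index, the ecological measure the heterogeneity score is based on, is simple to state: with tiles binned into expression classes, D = 1 - sum(p_i^2) is the probability that two tiles drawn at random fall in different classes. The quartile binning in the example is our own illustrative assumption, not the paper's exact scoring scheme.

```python
def simpson_diversity(counts):
    """Simpson's biodiversity index D = 1 - sum(p_i^2): the probability
    that two tiles drawn at random belong to different expression classes.
    0 means perfectly homogeneous; values near 1 mean highly heterogeneous."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

# Tile counts binned into four expression-level classes
# (e.g. hypothetical AQUA-score quartiles).
homogeneous = [100, 0, 0, 0]      # every tile in one class
heterogeneous = [25, 25, 25, 25]  # tiles spread evenly across classes

d_homo = simpson_diversity(homogeneous)
d_het = simpson_diversity(heterogeneous)
```

A tumor section whose tiles all share one expression class scores 0, while one spread evenly over four classes scores 0.75, so the index orders sections by heterogeneity regardless of their absolute expression level.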

18.
Quantitative analysis of two-dimensional electrophoretograms.
A method for quantitative analysis of complex film density distributions in autoradiograms is described. The method is intended particularly for measuring the distribution of radioactivity among the proteins resolved by two-dimensional gel electrophoresis, but should, of course, be suited to analyzing other two-dimensional separations. The film density distribution is first digitized by a high-speed rotating drum scanner to generate the image data array, which is stored on a magnetic disk. Subsequent analysis involves: 1) data averaging, 2) detection of contours and of their locations, 3) splitting of overlapping spots, 4) conversion of film density to radioactive intensity by means of calibration films, and 5) differentiation and integration to measure the total radioactivity contained in the protein that generates a spot in the autoradiogram. The product of the analysis is a numbered contour map and a table listing the coordinates and radioactivity content of each resolved spot. Coordinate transformations for comparison and matching of autoradiograms are also described. A set of utility programs prints and graphs the data at intermediate stages of the analysis to facilitate checking of procedures and programs.

19.
We used electron tomography to determine the three-dimensional (3D) structure of integrin alphaIIbbeta3 in the active state. We found that we obtained better density maps when we reconstructed a 3D volume for each individual particle in the tilt series rather than extracting the particle-containing subvolumes from a 3D reconstruction of the entire specimen area. The 3D tomographic reconstructions of 100 particles revealed that activated alphaIIbbeta3 adopts many different conformations. An average of all the individual 3D reconstructions nicely accommodated the crystal structure of the alphaVbeta3 headpiece, confirming the locations assigned to the alpha- and beta-subunits in the density map. The most striking finding of our study is the structural flexibility of the lower leg of the beta-subunit, as opposed to the conformational stability of the leg of the alpha-subunit. The good fit of the atomic structures of the betaI domain and the hybrid domain in the active state showed that the hybrid domain swings out and that most particles used for tomography are in the active state. Multivariate statistical analysis and classification applied to the set of 3D reconstructions revealed that more than 90% of the reconstructions fall into classes that show the active state. Our results demonstrate that electron tomography can be used to classify complexes with flexible structures, such as integrins.
