Similar Articles
Found 20 similar articles (search time: 46 ms)
1.
Fast rotational matching of single-particle images   (Total citations: 1; self-citations: 0; citations by others: 1)
The presence of noise and absence of contrast in electron micrographs lead to a reduced resolution of the final 3D reconstruction, due to the inherent limitations of single-particle image alignment. The fast rotational matching (FRM) algorithm was introduced recently for an accurate alignment of 2D images under such challenging conditions. Here, we implemented this algorithm for the first time in a standard 3D reconstruction package used in electron microscopy. This allowed us to carry out exhaustive tests of the robustness and reliability in iterative orientation determination, classification, and 3D reconstruction on simulated and experimental image data. A classification test on GroEL chaperonin images demonstrates that FRM assigns up to 13% more images to their correct reference orientation, compared to the classical self-correlation function method. Moreover, at sub-nanometer resolution, GroEL and rice dwarf virus reconstructions exhibit a remarkable resolution gain of 10-20% that is attributed to the novel image alignment kernel.
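The core idea of rotational matching — scoring all in-plane rotations at once by correlating two images in polar coordinates with a 1-D FFT along the angular axis — can be sketched as follows. This is a minimal illustration, not the FRM implementation from the paper; the grid sizes and the `polar_resample` helper are our own assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_resample(img, n_r=64, n_theta=360):
    """Sample an image on a polar grid centred on the image centre."""
    cy, cx = (np.asarray(img.shape) - 1) / 2.0
    r = np.linspace(0.0, min(cy, cx), n_r)
    t = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    rows = cy + rr * np.sin(tt)
    cols = cx + rr * np.cos(tt)
    return map_coordinates(img, np.array([rows, cols]), order=1)

def fast_rotational_match(ref, img, n_theta=360):
    """Best in-plane rotation (degrees) relating `img` to `ref`.

    One 1-D FFT per radius scores every rotation angle simultaneously,
    instead of rotating and re-correlating the image angle by angle.
    """
    pr = polar_resample(ref, n_theta=n_theta)
    pm = polar_resample(img, n_theta=n_theta)
    fr = np.fft.fft(pr, axis=1)
    fm = np.fft.fft(pm, axis=1)
    corr = np.fft.ifft(fr * np.conj(fm), axis=1).real.sum(axis=0)
    return float(np.argmax(corr)) * 360.0 / n_theta
```

Because the image is only sampled once per radius, the angular search is exhaustive at FFT cost, which is what makes this family of methods robust at low signal-to-noise ratios.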

2.
A maximum likelihood approach to two-dimensional crystals   (Total citations: 1; self-citations: 0; citations by others: 1)
Maximum likelihood (ML) processing of transmission electron microscopy images of protein particles can produce reconstructions of superior resolution due to a reduced reference bias. We have investigated a ML processing approach to images centered on the unit cells of two-dimensional (2D) crystal images. The implemented software makes use of the predictive lattice node tracking in the MRC software, which is used to window particle stacks. These are then noise-whitened and subjected to ML processing. Resulting ML maps are translated into amplitudes and phases for further processing within the 2dx software package. Compared with ML processing for randomly oriented single particles, the required computational costs are greatly reduced as the 2D crystals restrict the parameter search space. The software was applied to images of negatively stained or frozen hydrated 2D crystals of different crystal order. We find that the ML algorithm is not free from reference bias, even though its sensitivity to noise correlation is lower than for pure cross-correlation alignment. Compared with crystallographic processing, the newly developed software yields better resolution for 2D crystal images of lower crystal quality, and it performs equally well for well-ordered crystal images.

3.
The low radiation conditions and the predominantly phase-object image formation of cryo-electron microscopy (cryo-EM) result in extremely high noise levels and low contrast in the recorded micrographs. The process of single particle or tomographic 3D reconstruction does not completely eliminate this noise and is even capable of introducing new sources of noise during alignment or when correcting for instrument parameters. The recently developed Digital Paths Supervised Variance (DPSV) denoising filter uses local variance information to control regional noise in a robust and adaptive manner. The performance of the DPSV filter was evaluated in this review qualitatively and quantitatively using simulated and experimental data from cryo-EM and tomography in two and three dimensions. We also assessed the benefit of filtering experimental reconstructions for visualization purposes and for enhancing the accuracy of feature detection. The DPSV filter eliminates high-frequency noise artifacts (density gaps), which would normally preclude the accurate segmentation of tomography reconstructions or the detection of alpha-helices in single-particle reconstructions. This collaborative software development project was carried out entirely by virtual interactions among the authors using publicly available development and file sharing tools.
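The DPSV filter itself is not reproduced here, but the underlying principle — using local variance to decide how strongly each region is smoothed — can be illustrated with a simple Lee-style adaptive filter. All parameter choices below are our own assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def variance_adaptive_filter(img, size=5, noise_var=None):
    """Variance-adaptive smoothing (Lee-style sketch).

    Pixels in flat regions (local variance close to the noise variance)
    are pulled toward the local mean; pixels near strong features, where
    local variance greatly exceeds the noise level, are left intact.
    """
    img = np.asarray(img, dtype=float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    if noise_var is None:
        noise_var = np.median(var)          # crude global noise estimate
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (img - mean)
```

The gain term is what makes the filter "regional": it is computed per pixel from the local statistics rather than fixed globally.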

4.
J Timmer  T Müller    W Melzer 《Biophysical journal》1998,74(4):1694-1707
Several methods are currently in use to estimate the rate of depolarization-induced calcium release in muscle cells from measured calcium transients. One approach first characterizes calcium removal of the cell. This is done by determining parameters of a reaction scheme from a fit to the decay of elevated calcium after the depolarizing stimulus. In a second step, the release rate during depolarization is estimated based on the fitted model. Using simulated calcium transients with known underlying release rates, we tested the fidelity of this analysis in determining the time course of calcium release under different conditions. The analysis reproduced in a satisfactory way the characteristics of the input release rate, even when the assumption that release had ended before the start of the fitting interval was severely violated. Equally good reconstructions of the release rate time course could be obtained when the model used for the analysis differed in structure from the one used for simulating the data. We tested the application of a new strategy (multiple shooting) for fitting parameters in nonlinear differential equation systems. This procedure rendered the analysis less sensitive to ill-chosen initial guesses of the parameters and to noise. A locally adaptive kernel estimator for calculating numerical derivatives allowed good reconstructions of the original release rate time course from noisy calcium transients when other methods failed.
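The last step mentioned above — estimating numerical derivatives of a noisy trace with a kernel estimator — can be sketched as a local linear fit with a Gaussian kernel. The paper's estimator adapts its bandwidth locally; this minimal version uses a fixed bandwidth, which is our own simplification.

```python
import numpy as np

def kernel_derivative(t, y, bandwidth=0.05):
    """Estimate dy/dt by local linear regression with a Gaussian kernel.

    At each point a weighted straight line is fitted to the neighbouring
    samples; its slope is the derivative estimate. Unlike finite
    differences, the kernel averaging suppresses noise amplification.
    """
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    dy = np.empty_like(y)
    for i, ti in enumerate(t):
        w = np.exp(-0.5 * ((t - ti) / bandwidth) ** 2)
        x = t - ti
        sw, sx, sxx = w.sum(), (w * x).sum(), (w * x * x).sum()
        sy, sxy = (w * y).sum(), (w * x * y).sum()
        dy[i] = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    return dy
```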

5.
A method for fitting regression models to data that exhibit spatial correlation and heteroskedasticity is proposed. It is well known that ignoring a nonconstant variance does not bias least-squares estimates of regression parameters; thus, data analysts are easily led to the false belief that moderate heteroskedasticity can generally be ignored. Unfortunately, ignoring nonconstant variance when fitting variograms can seriously bias estimated correlation functions. By modeling heteroskedasticity and standardizing by estimated standard deviations, our approach eliminates this bias in the correlations. A combination of parametric and nonparametric regression techniques is used to iteratively estimate the various components of the model. The approach is demonstrated on a large data set of predicted nitrogen runoff from agricultural lands in the Midwest and Northern Plains regions of the U.S.A. For this data set, the model comprises three main components: (1) the mean function, which includes farming practice variables, local soil and climate characteristics, and the nitrogen application treatment, is assumed to be linear in the parameters and is fitted by generalized least squares; (2) the variance function, which contains a local and a spatial component whose shapes are left unspecified, is estimated by local linear regression; and (3) the spatial correlation function is estimated by fitting a parametric variogram model to the standardized residuals, with the standardization adjusting the variogram for the presence of heteroskedasticity. The fitting of these three components is iterated until convergence. The model provides an improved fit to the data compared with a previous model that ignored the heteroskedasticity and the spatial correlation.
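The iterative structure of the approach (fit the mean, estimate a smooth variance function from squared residuals, reweight, repeat) can be sketched as follows. This simplified version models the variance as a smooth function of the fitted mean and omits the variogram/spatial-correlation component entirely; bandwidth and iteration count are arbitrary choices.

```python
import numpy as np

def iterative_wls(X, y, n_iter=4, bandwidth=0.5):
    """Alternate between a weighted mean fit and a nonparametric
    variance-function estimate (kernel smoothing of squared residuals
    against the fitted values), in the spirit of the paper's iteration."""
    y = np.asarray(y, float)
    w = np.ones(len(y))
    for _ in range(n_iter):
        WX = X * w[:, None]
        beta = np.linalg.solve(WX.T @ X, WX.T @ y)   # weighted LS step
        fit = X @ beta
        r2 = (y - fit) ** 2
        # kernel-smooth squared residuals to get a variance function
        d = (fit[:, None] - fit[None, :]) / bandwidth
        K = np.exp(-0.5 * d * d)
        var = (K @ r2) / K.sum(axis=1)
        w = 1.0 / np.maximum(var, 1e-12)
    return beta, var
```

In the full method the residuals would additionally be standardized by the estimated standard deviations before a variogram is fitted, which is the step that removes the heteroskedasticity-induced bias in the correlations.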

6.
Recent experimental advances in producing density maps from cryo-electron microscopy (cryo-EM) have challenged theorists to develop improved techniques to provide structural models that are consistent with the data and that preserve all the local stereochemistry associated with the biomolecule. We develop a new technique that maintains the local geometry and chemistry at each stage of the fitting procedure. A geometric simulation is used to drive the structure from some appropriate starting point (a nearby experimental structure or a modeled structure) toward the experimental density, via a set of small incremental motions. Structural motifs such as α-helices can be held rigid during the fitting procedure as the starting structure is brought into alignment with the experimental density. After validating this procedure on simulated data for adenylate kinase and lactoferrin, we show how cryo-EM data for two different GroEL structures can be fit using a starting x-ray crystal structure. We show that by incorporating the correct local stereochemistry in the modeling, structures can be obtained with effective resolution that is significantly higher than might be expected from the nominal cryo-EM resolution.

7.
We present a detailed statistical analysis of fluorescence correlation spectroscopy for a wide range of timescales. The derivation is completely analytical and can provide an excellent tool for planning and analysis of FCS experiments. The dependence of the signal-to-noise ratio on different measurement conditions is extensively studied. We find that in addition to the shot noise and the noise associated with correlated molecular dynamics there is another source of noise that appears at very large lag times. We call this the "particle noise," as its behavior is governed by the number of particles that have entered and left the laser beam sample volume during large dwell times. The standard deviations of all the points on the correlation function are calculated analytically and shown to be in good agreement with experiments. We have also investigated the bias associated with experimental correlation function measurements. A "phase diagram" for FCS experiments is constructed that demonstrates the significance of the bias for any given experiment. We demonstrate that the value of the bias can be calculated and added back as a first-order correction to the experimental correlation function.
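The object such an analysis studies — the experimental autocorrelation function G(τ) of a fluorescence trace — can be estimated as below. This is a plain FFT-based estimator, not the paper's analytical noise model; normalizing each lag by its number of overlapping samples is a common convention (and is itself one source of the bias the paper discusses).

```python
import numpy as np

def fcs_autocorrelation(f, max_lag):
    """Normalised fluorescence autocorrelation
    G(tau) = <dF(t) dF(t+tau)> / <F>^2, for tau = 0 .. max_lag-1."""
    f = np.asarray(f, float)
    df = f - f.mean()
    n = len(f)
    # FFT-based linear correlation: zero-pad to avoid circular wrap-around
    s = np.fft.rfft(df, 2 * n)
    ac = np.fft.irfft(s * np.conj(s))[:max_lag]
    counts = n - np.arange(max_lag)          # overlapping samples per lag
    return ac / counts / f.mean() ** 2
```

For an uncorrelated (shot-noise-like) trace, G(0) reduces to var(F)/⟨F⟩², and all other lags fluctuate around zero — the starting point for any signal-to-noise discussion.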

8.
Electron cryomicroscopy (cryo-EM) has emerged as a powerful structural biology instrument to solve near-atomic three-dimensional structures. Despite the fast growth in the number of density maps generated from cryo-EM data, tools for comparing these reconstructions are still lacking. Current proposals to compare cryo-EM-derived volumes perform map subtraction after adjusting each volume's grey levels to the same scale. We present here a more sophisticated way of adjusting the volumes before comparison: the grey-level scale and the spectral energy are adjusted while the phases inside a mask are kept intact, and the result is constrained to be strictly positive. The proposed adjustment leaves the volumes in the same numeric frame, allowing operations among the adjusted volumes to be performed more reliably. This adjustment can serve as a preliminary step for several applications, such as comparison through subtraction, map sharpening, or the combination of volumes through a consensus that selects the best-resolved parts of each input map. Our development can also be used as a sharpening method with an atomic model as a reference. We illustrate the applicability of the algorithm with reconstructions derived from several experimental examples. The algorithm is implemented in the Xmipp software package, and its applications are accessible in a user-friendly manner through the cryo-EM image processing framework Scipion.
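A minimal sketch of the spectral part of such an adjustment — rescaling one volume's rotationally averaged Fourier amplitudes to match a reference while leaving its phases untouched — is shown below. The mask and positivity constraints described in the paper are omitted, and the radial binning scheme is our own assumption.

```python
import numpy as np

def match_radial_amplitudes(vol, ref, n_bins=32):
    """Rescale `vol` so its rotationally averaged Fourier amplitude
    spectrum matches that of `ref`, keeping the phases of `vol` intact.

    Each Fourier coefficient is multiplied by a real, shell-wise scale
    factor, so only amplitudes change; phases are untouched."""
    F_v = np.fft.fftn(vol)
    F_r = np.fft.fftn(ref)
    freqs = [np.fft.fftfreq(s) for s in vol.shape]
    grid = np.meshgrid(*freqs, indexing="ij")
    radius = np.sqrt(sum(g * g for g in grid))
    bins = np.minimum((radius / radius.max() * n_bins).astype(int),
                      n_bins - 1)
    amp_v = np.bincount(bins.ravel(), np.abs(F_v).ravel(), n_bins)
    amp_r = np.bincount(bins.ravel(), np.abs(F_r).ravel(), n_bins)
    scale = np.where(amp_v > 0, amp_r / np.maximum(amp_v, 1e-12), 1.0)
    return np.real(np.fft.ifftn(F_v * scale[bins]))
```

Because the correction is a real, radially symmetric multiplier in Fourier space, the adjusted volume lives in the same "numeric frame" as the reference, which is what makes subsequent subtraction or consensus operations meaningful.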

9.
Cryo-electron microscopy (cryo-EM) has been widely used to explore conformational states of large biomolecular assemblies. The detailed interpretation of cryo-EM data requires the flexible fitting of a known high-resolution protein structure into a low-resolution cryo-EM map. To this end, we have developed what we believe is a new method based on a two-bead-per-residue protein representation, and a modified form of the elastic network model that allows large-scale conformational changes while maintaining pseudobonds and secondary structures. Our method minimizes a pseudo-energy which linearly combines various terms of the modified elastic network model energy with a cryo-EM-fitting score and a collision energy that penalizes steric collisions. Unlike previous flexible fitting efforts using the lowest few normal modes, our method effectively utilizes all normal modes so that both global and local structural changes can be fully modeled. We have validated our method for a diverse set of 10 pairs of protein structures using simulated cryo-EM maps with a range of resolutions and in the absence/presence of random noise. We have shown that our method is both accurate and efficient compared with alternative techniques, and its performance is robust to the addition of random noise. Our method is also shown to be useful for the flexible fitting of three experimental cryo-EM maps.

10.
Due to the sensitivity of biological samples to radiation damage, the low-dose imaging conditions used for electron microscopy result in extremely noisy images. The processes of digitization, image alignment, and 3D reconstruction also introduce additional sources of noise into the final 3D structure. In this paper, we investigate the effectiveness of a bilateral denoising filter in various biological electron microscopy applications. In contrast to conventional low-pass filters, which inevitably smooth out both noise and structural features simultaneously, we found that the bilateral filter holds a distinct advantage in being capable of effectively suppressing noise without blurring the high-resolution details. Accordingly, we have applied this technique to individual micrographs, entire 3D reconstructions, segmented proteins, and tomographic reconstructions.
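A bilateral filter weights each neighbour both by spatial distance and by intensity difference, which is why it can smooth noise without blurring edges. A straightforward 2-D implementation (parameter values are illustrative):

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=5):
    """Bilateral filter: each pixel becomes an average of its
    neighbours, weighted by spatial closeness (sigma_s) and by
    intensity similarity (sigma_r). Neighbours across a strong edge
    get near-zero range weight, so the edge is preserved."""
    img = np.asarray(img, float)
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            w_s = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
            shifted = pad[radius + dy: radius + dy + img.shape[0],
                          radius + dx: radius + dx + img.shape[1]]
            w = w_s * np.exp(-(shifted - img) ** 2 / (2 * sigma_r ** 2))
            out += w * shifted
            norm += w
    return out / norm
```

The choice of `sigma_r` sets the intensity scale that counts as "structure": differences much larger than `sigma_r` are treated as edges and left untouched, differences well below it are treated as noise and averaged away.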

11.
A method for flexible fitting of molecular models into three-dimensional electron microscopy (3D-EM) reconstructions at a resolution range of 8-12 Å is proposed. The approach uses the evolutionarily related structural variability existing among the protein domains of a given superfamily, according to structural databases such as CATH. A structural alignment of domains belonging to the superfamily, followed by a principal components analysis, is performed, and the first three principal components of the decomposition are explored. Using rigid body transformations for the secondary structure elements (SSEs) plus the cyclic coordinate descent algorithm to close the loops, stereochemically correct models are built for the structure to fit. All of the models are fitted into the 3D-EM map, and the best one is selected based on cross-correlation measures. This work applies the method to both simulated and experimental data and shows that the flexible fitting was able to produce better results than rigid body fitting.
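The principal components analysis step — extracting the dominant modes of structural variability from a set of aligned domain coordinates — can be sketched via an SVD. The CATH alignment and loop-closure machinery are outside the scope of this sketch; it assumes the structures are already superposed and share the same set of aligned positions.

```python
import numpy as np

def structure_pca(coord_sets, n_components=3):
    """PCA of a structural alignment.

    Each aligned structure (an (N, 3) coordinate array) is flattened to
    a 3N-vector; the leading right singular vectors of the centred data
    matrix are the principal modes of structural variability."""
    X = np.array([np.asarray(c, float).ravel() for c in coord_sets])
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    variance = S ** 2 / (len(X) - 1)     # variance explained per mode
    return mean, Vt[:n_components], variance[:n_components]
```

Deforming the mean structure along the first few components is what generates the candidate models that are then fitted into the map.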

12.
We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo-electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to a poor signal-to-noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo-electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.

13.
14.
We present a substantial improvement of S-flexfit, our recently proposed method for flexible fitting in three-dimensional electron microscopy (3D-EM) at a resolution range of 8-12 Å, together with a comparison of the method's capabilities with Normal Mode Analysis (NMA), application examples, and a user's guide. S-flexfit uses the evolutionary information contained in protein domain databases like CATH, by means of the structural alignment of the elements of a protein superfamily. The added development is based on a recent extension of the Singular Value Decomposition (SVD) algorithm specifically designed for situations with missing data: Incremental Singular Value Decomposition (ISVD). ISVD can manage gaps and allows considering more amino acids in the structural alignment of a superfamily, extending the range of application and producing better models for the fitting step of our methodology. Our previous SVD-based flexible fitting approach can only take into account positions with no gaps in the alignment, being appropriate when the superfamily members are relatively similar and there are few gaps. However, with new data coming from structural proteomics work, the latter situation is becoming less likely, making ISVD the technique of choice in future work. We present the results of using ISVD in the process of flexible fitting with both simulated and experimental 3D-EM maps (GroEL and Poliovirus 135S cell entry intermediate).

15.
In electron tomography the sample is tilted in the electron microscope and projections are recorded at different viewing angles. In the correct geometric setting, the tilt-axis of the object under scrutiny is perpendicular to the beam direction. However, we will demonstrate that this does not necessarily apply to all electron microscopes equipped with the default column alignment. The resulting effect is that a conical tilt is performed, which has to be considered in the reconstruction to avoid artifacts and to improve the resolution. A novel solution, with significantly improved convergence properties, will be introduced for calculating the three-dimensional marker model, which is necessary for the alignment of the tilt-series. In this approach, the angle between the beam direction and the tilt-axis is calculated, together with other geometrical distortions, such as magnification and rotation changes, and incorporated into the reconstruction. In this way, artifacts can be eliminated at the image-processing level, and the resolution can be significantly improved at the medium to high range frequencies. Synthetic and real data are used to demonstrate the obstructions caused by this effect and the quality improvement of the reconstructions. Finally, we also present a way to align the hardware of the microscope to correct for the non-perpendicularity between the beam direction and the tilt-axis, which is specifically tailored for tomographic applications.

16.

Background and Aims

Automatic acquisition of plant architecture is a major challenge for the construction of quantitative models of plant development. Recently, 3-D laser scanners have made it possible to acquire 3-D images representing a sampling of an object's surface. A number of specific methods have been proposed to reconstruct plausible branching structures from this new type of data, but critical questions remain regarding their suitability and accuracy before they can be fully exploited for use in biological applications.

Methods

In this paper, an evaluation framework to assess the accuracy of tree reconstructions is presented. The use of this framework is illustrated on a selection of laser scans of trees. Scanned data were manipulated by experienced researchers to produce reference tree reconstructions against which comparisons could be made. The evaluation framework is given two tree structures and compares both their elements and their topological organization. Similar elements are identified based on geometric criteria using an optimization algorithm. The organization of these elements is then compared and their similarity quantified. From these analyses, two indices of geometrical and structural similarities are defined, and the automatic reconstructions can thus be compared with the reference structures in order to assess their accuracy.

Key Results

The evaluation framework that was developed was successful at capturing the variation in similarities between two structures as different levels of noise were introduced. The framework was used to compare three different reconstruction methods taken from the literature, and allowed sensitive parameters of each one to be determined. The framework was also generalized for the evaluation of root reconstruction from 2-D images, and demonstrated its sensitivity to higher architectural complexity of the structures, which was not detected by a global evaluation criterion.

Conclusions

The evaluation framework presented quantifies geometric and structural similarities between two structures. It can be applied to the characterization and comparison of automatic reconstructions of plant structures from laser scanner data and 2-D images. As such, it can be used as a reference test for comparing and assessing reconstruction procedures.
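The matching step described under Methods — identifying similar elements of two structures with an optimization algorithm and turning the matched distances into a similarity index — can be sketched with the Hungarian algorithm. The exponential mapping of mean distance to a [0, 1] index is our own illustrative choice, not the paper's definition.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def geometric_similarity(elems_a, elems_b, scale=1.0):
    """Match elements of two reconstructions by minimising the total
    pairwise distance (Hungarian algorithm), then map the mean matched
    distance to a [0, 1] similarity index (1 = identical layouts)."""
    elems_a = np.asarray(elems_a, float)
    elems_b = np.asarray(elems_b, float)
    cost = np.linalg.norm(elems_a[:, None, :] - elems_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    matched = cost[rows, cols]
    sim = float(np.exp(-matched.mean() / scale))
    return sim, list(zip(rows.tolist(), cols.tolist()))
```

A structural (topological) index would additionally compare how the matched elements are connected, which this geometric sketch does not attempt.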

17.
Hidden Markov modeling (HMM) can be applied to extract single channel kinetics at signal-to-noise ratios that are too low for conventional analysis. There are two general HMM approaches: traditional Baum's reestimation and direct optimization. The optimization approach has the advantage that it optimizes the rate constants directly. This allows setting constraints on the rate constants, fitting multiple data sets across different experimental conditions, and handling nonstationary channels where the starting probability of the channel depends on the unknown kinetics. We present here an extension of this approach that addresses the additional issues of low-pass filtering and correlated noise. The filtering is modeled using a finite impulse response (FIR) filter applied to the underlying signal, and the noise correlation is accounted for using an autoregressive (AR) process. In addition to correlated background noise, the algorithm allows for excess open channel noise that can be white or correlated. To maximize the efficiency of the algorithm, we derive the analytical derivatives of the likelihood function with respect to all unknown model parameters. The search of the likelihood space is performed using a variable metric method. Extension of the algorithm to data containing multiple channels is described. Examples are presented that demonstrate the applicability and effectiveness of the algorithm. Practical issues such as the selection of appropriate noise AR orders are also discussed through examples.

18.
A comparative study of four function-fitting methods for time-series EVI reconstruction based on sample plots in the Qinling Mountains   (Total citations: 3; self-citations: 0; citations by others: 3)
刘亚南, 肖飞, 杜耘 《生态学报》(Acta Ecologica Sinica), 2016, 36(15): 4672-4679
Function curve fitting is an important approach to reconstructing vegetation-index time series and has been widely applied to monitoring forest-area dynamics, crop yield estimation, extraction of phenological information from remote sensing, and ecosystem carbon-cycle research. Based on multi-year MODIS EVI data and the accompanying quality-control data for sample plots in the Qinling Mountains, we examined and improved methods for evaluating noise-point optimization and the ability to preserve the original high-quality observations during time-series EVI reconstruction. On this basis, we compared the commonly used asymmetric Gaussian (AG), double-logistic (DL), and single-logistic (SL) fitting methods. Building on the SL method, we adjusted the model form, redefined the meaning of the parameter d, and proposed an extremum-optimized single-logistic fitting method (MSL), which was compared against the other three methods. The results show that, in terms of noise-point optimization and preservation of the original high-quality data, the AG and DL methods perform similarly overall, although AG gives a better fit for some pixels; the MSL and SL methods clearly outperform AG and DL; and in mountainous areas with complex terrain and climate, where the vegetation index is noisy, the MSL method shows the best applicability.
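The single-logistic (SL) fit that the compared methods build on can be sketched with a standard least-squares fit to one green-up segment of an EVI time series. The MSL method's modified model form and redefined parameter d are not reproduced here; the parameterization below is the generic four-parameter logistic.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_logistic(t, base, amp, k, t0):
    """Four-parameter logistic: background EVI `base`, seasonal
    amplitude `amp`, green-up rate `k`, inflection time `t0`."""
    return base + amp / (1.0 + np.exp(-k * (t - t0)))

def fit_single_logistic(t, evi):
    """Least-squares SL fit with data-driven starting values."""
    p0 = [evi.min(), evi.max() - evi.min(), 0.1, t[len(t) // 2]]
    popt, _ = curve_fit(single_logistic, t, evi, p0=p0, maxfev=10000)
    return popt
```

In practice the quality-control flags would be used to down-weight or drop cloud-contaminated observations before fitting, which is the "noise-point optimization" the abstract evaluates.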

19.
Synopsis Gastric evacuation rate estimates often suffer from an important bias caused by fitting experimentally-derived data distributions that are inherently constricted by the X-axis (Y = 0). Monte Carlo simulations were used to evaluate the bias. Truncating constricted distributions prior to curve fitting was suggested as a means to circumvent the problem. A food consumption model developed by D.S. Robson was presented. It employs the integral of the function fit to percentage gastric evacuation data, and does not require an a priori assumption of exponential gastric evacuation. The methods were illustrated using experimental gastric evacuation data and stomach contents data for fishery-caught yellowfin tuna, Thunnus albacares.

20.
Amplification of a cDNA product by quantitative polymerase chain reaction (qPCR) gives rise to fluorescence sigmoidal curves from which absolute or relative target gene content of the sample is inferred. Besides comparative C(t) methods that require the construction of a reference standard curve, other methods that focus on the analysis of the sole amplification curve have been proposed more recently. Among them, the so-called sigmoidal curve fitting (SCF) method rests on the fitting of an empirical sigmoidal model to the experimental amplification data points, leading to the prediction of the amplification efficiency and to the calculation of the initial copy number in the sample. The implicit assumption of this method is that the sigmoidal model may describe an amplification curve quantitatively even in the portion of the curve where the fluorescence signal is hidden in the noise band. The theoretical basis of the SCF method was revisited here for defining the class of experimental amplification curves for which the method might be relevant. Applying the SCF method to six well-characterized different PCR assays illustrated possible pitfalls leading to biased estimates of the amplification efficiency and, thus, of the target gene content of a sample.
