Similar documents (20 results)
1.
Multidimensional NMR can provide unmatched spectral resolution, which is crucial when dealing with samples of biological macromolecules. The resolution, however, comes at the high price of long experimental time. Non-uniform sampling (NUS) of the evolution time domain lifts this limitation by sampling only a small fraction of the data, but it requires sophisticated algorithms to reconstruct the omitted data points. A significant group of such algorithms, known as compressed sensing (CS), is based on the assumption that the reconstructed spectrum is sparse. Several papers on the application of CS to multidimensional NMR have been published in recent years, and the developed methods have been implemented in most spectral processing software. However, the publications rarely show cases where NUS reconstruction does not work perfectly or explain how to solve the problem. On the other hand, everyday users of NUS develop their own rules of thumb, which help to set up the processing in an optimal way, but often without deeper insight. In this paper, we discuss several sources of problems in CS reconstructions: a low sampling level, an incorrect assumption of spectral sparsity, a wrong stopping criterion, and attempts to extrapolate the signal too far. As an appendix, we provide MATLAB code for several CS algorithms used in NMR. We hope that this work will explain the mechanism of NUS reconstruction and help readers set up acquisition and processing parameters. We also believe it may be helpful for algorithm developers.
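As an illustration of the class of algorithms discussed above, here is a minimal Python/NumPy sketch of iterative soft thresholding (IST) for a single indirect dimension; the threshold schedule, iteration count and toy data are illustrative assumptions, not the MATLAB code provided by the authors.

```python
import numpy as np

def soft(x, tau):
    """Complex soft-thresholding: shrink magnitudes by tau, keep phases."""
    mag = np.abs(x)
    return x * np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12)

def ist_reconstruct(fid_meas, sched, n_full, n_iter=500, lam=0.98):
    """Reconstruct a 1D spectrum from non-uniformly sampled FID points by
    iterative soft thresholding: enforce spectral sparsity while staying
    consistent with the measured time-domain points."""
    spec = np.zeros(n_full, dtype=complex)
    for k in range(n_iter):
        fid_est = np.fft.ifft(spec)
        resid = np.zeros(n_full, dtype=complex)
        resid[sched] = fid_meas - fid_est[sched]      # data-consistency residual
        update = spec + np.fft.fft(resid)             # correction from sampled points only
        tau = (lam ** k) * np.max(np.abs(update))     # exponentially decaying threshold
        spec = soft(update, tau)
    return spec

# toy demo: two exponentially decaying signals, 25% random sampling
n = 512
t = np.arange(n)
fid_full = np.exp(2j*np.pi*0.11*t - 0.01*t) + 0.3*np.exp(2j*np.pi*0.31*t - 0.01*t)
rng = np.random.default_rng(1)
sched = np.sort(rng.choice(n, n // 4, replace=False))
spec = ist_reconstruct(fid_full[sched], sched, n)
print(np.argsort(np.abs(spec))[-2:])   # indices of the two recovered peak maxima
```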

2.
Recent advances in electron cryomicroscopy instrumentation and single particle reconstruction have created opportunities for high-throughput and high-resolution three-dimensional (3D) structure determination of macromolecular complexes. However, it has become impractical and inefficient to rely on conventional text file data management and command-line programs to organize and process the increasing amounts of image data required in high-resolution studies. Here, we present a distributed relational database for managing complex datasets and its integration into our high-resolution software package IMIRS (Image Management and Icosahedral Reconstruction System). IMIRS consists of a complete set of modular programs for icosahedral reconstruction organized under a graphical user interface and provides options for user-friendly, step-by-step data processing as well as automatic reconstruction. We show that the integration of data management with processing in IMIRS automates the tedious tasks of data management, enables data coherence, and facilitates information sharing in a distributed computer and user environment without significantly increasing the time of program execution. We demonstrate the applicability of IMIRS in icosahedral reconstruction toward high resolution by using it to obtain an 8-Å 3D structure of an intermediate-sized dsRNA virus.
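To illustrate the kind of relational bookkeeping such a database performs, the following Python/SQLite sketch defines a minimal particle-metadata schema; the table and column names are hypothetical and do not reproduce the actual IMIRS schema.

```python
import sqlite3

conn = sqlite3.connect("imirs_demo.db")          # hypothetical local database file
cur = conn.cursor()
cur.executescript("""
CREATE TABLE IF NOT EXISTS micrographs (
    id        INTEGER PRIMARY KEY,
    filename  TEXT NOT NULL,
    defocus   REAL,                  -- estimated defocus (illustrative units)
    status    TEXT DEFAULT 'new'     -- e.g. new / ctf_done / picked
);
CREATE TABLE IF NOT EXISTS particles (
    id            INTEGER PRIMARY KEY,
    micrograph_id INTEGER REFERENCES micrographs(id),
    x REAL, y REAL,                  -- picked centre coordinates (pixels)
    theta REAL, phi REAL, omega REAL,-- current orientation estimate
    included      INTEGER DEFAULT 1  -- excluded particles stay on record
);
""")
# register a micrograph and one picked particle
cur.execute("INSERT INTO micrographs (filename, defocus, status) VALUES (?,?,?)",
            ("virus_0001.mrc", 1.8, "picked"))
cur.execute("INSERT INTO particles (micrograph_id, x, y) VALUES (?,?,?)",
            (1, 512.0, 780.5))
conn.commit()

# a processing node can then pull its work list with a simple query
for row in cur.execute("SELECT id, x, y FROM particles WHERE included = 1"):
    print(row)
```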

3.
Rapid data collection, spectral referencing, processing by time-domain deconvolution, peak picking and editing, and assignment of NMR spectra are necessary components of any efficient integrated system for protein NMR structure analysis. We have developed a set of software tools, designated AutoProc, AutoPeak, and AutoAssign, which function together with the data processing and peak-picking programs NMRPipe and Sparky to provide an integrated software system for rapid analysis of protein backbone resonance assignments. In this paper we demonstrate that these tools, together with high-sensitivity triple-resonance NMR cryoprobes for data collection and a Linux-based computer cluster architecture, can be combined to provide nearly complete backbone resonance assignments and secondary structures (based on chemical shift data) for a 59-residue protein in less than 30 hours of data collection and processing time. In this optimum case of a small protein providing excellent spectra, extensive backbone resonance assignments could also be obtained using less than 6 hours of data collection and processing time. These results demonstrate the feasibility of high-throughput triple-resonance NMR for determining resonance assignments and secondary structures of small proteins, and the potential for applying NMR in large-scale structural proteomics projects. Abbreviations: BPTI – bovine pancreatic trypsin inhibitor; LP – linear prediction; FT – Fourier transform; S/N – signal-to-noise ratio; FID – free induction decay
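As a small illustration of the final step mentioned above, the sketch below assigns crude secondary structure from Cα secondary chemical shifts; the random-coil values and cutoffs are approximate assumptions, and this is not the algorithm used by the AutoAssign pipeline.

```python
import numpy as np

# illustrative random-coil 13C-alpha shifts (ppm); values are approximate assumptions
RANDOM_COIL_CA = {"A": 52.5, "G": 45.1, "L": 55.1, "K": 56.3, "V": 62.3, "S": 58.3}

def secondary_structure_from_ca(sequence, ca_shifts, window=3, cutoff=0.7):
    """Crude chemical-shift-based secondary-structure call: sustained positive
    C-alpha secondary shifts suggest helix (H), sustained negative ones strand (E)."""
    dev = np.array([ca - RANDOM_COIL_CA.get(aa, np.nan)
                    for aa, ca in zip(sequence, ca_shifts)])
    calls = []
    for i in range(len(dev)):
        w = dev[max(0, i - window // 2): i + window // 2 + 1]
        m = np.nanmean(w)                      # smooth over a short window
        calls.append("H" if m > cutoff else "E" if m < -cutoff else "C")
    return "".join(calls)

print(secondary_structure_from_ca("ALKVS", [54.9, 57.8, 58.9, 60.5, 57.9]))
```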

4.
The implementation of a new algorithm, SMILE, is described for the reconstruction of non-uniformly sampled two-, three- and four-dimensional NMR data. The algorithm takes advantage of the known phases of the NMR spectrum and the exponential decay of the underlying time-domain signals. The method is very robust with respect to the chosen sampling protocol and, in its default mode, also extends the truncated time-domain signals by a modest number of non-sampled points. SMILE can likewise be used to extend conventional uniformly sampled data, as an effective multidimensional alternative to linear prediction. The program is provided as a plug-in to the widely used NMRPipe software suite and can be used with default parameters for mainstream applications, or with user control over the iterative process to further improve reconstruction quality and to lower the demand on computational resources. For large data sets, the method is robust and has been demonstrated for sparsities down to ca. 1% and final all-real spectral sizes as large as 300 GB. Comparison between fully sampled, conventionally processed spectra and randomly selected NUS subsets of these data shows that the reconstruction quality approaches the theoretical limit in terms of peak-position fidelity and intensity. SMILE essentially removes the noise-like appearance associated with the point-spread function of signals more than about five-fold above the noise level (the default threshold), but it impacts the actual thermal noise in the NMR spectra only minimally. Therefore, the appearance and interpretation of SMILE-reconstructed spectra are very similar to those of fully sampled spectra generated by Fourier transformation.
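The kind of comparison described above can be sketched as follows in Python; the routine, window size and peak selection are illustrative assumptions, not part of SMILE.

```python
import numpy as np

def fidelity_check(ref, rec, n_peaks=50, win=2):
    """Crude fidelity check between a fully sampled reference spectrum (ref) and
    a NUS reconstruction (rec), both real arrays of the same shape: how far does
    the local maximum of the reconstruction move away from each strong reference
    point, and how well do the intensities correlate?"""
    order = np.argsort(ref.ravel())[-n_peaks:]           # strongest reference points
    offsets, ref_i, rec_i = [], [], []
    for flat_idx in order:
        c = np.unravel_index(flat_idx, ref.shape)
        sl = tuple(slice(max(0, ci - win), ci + win + 1) for ci in c)
        local = rec[sl]
        m = np.unravel_index(np.argmax(local), local.shape)
        centre = tuple(ci - max(0, ci - win) for ci in c)  # reference point in window
        offsets.append(np.linalg.norm(np.subtract(m, centre)))
        ref_i.append(ref[c])
        rec_i.append(local.max())
    return max(offsets), np.corrcoef(ref_i, rec_i)[0, 1]
```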

5.
6.
Projection-reconstruction NMR (PR-NMR) has attracted growing attention as a method for collecting multidimensional NMR data rapidly. The PR-NMR procedure involves measuring lower-dimensional projections of a higher-dimensional spectrum, which are then used for the mathematical reconstruction of the full spectrum. We describe here the program PR-CALC for the reconstruction of NMR spectra from projection data. The program implements a number of reconstruction algorithms, highly optimized for performance, and manages the reconstruction process automatically, producing either full spectra or subsets, such as regions or slices, as requested. The ability to obtain subsets allows large spectra to be analyzed by reconstructing and examining only those subsets containing peaks, offering considerable savings in processing time and storage space. PR-CALC is straightforward to use and integrates directly into the conventional pipeline for data processing and analysis. It was written in standard C++ and should run on any platform. Its organization is flexible and permits easy extension of capabilities, as well as reuse in new software. PR-CALC should facilitate the widespread utilization of PR-NMR in biomedical research. Electronic supplementary material is available for this article and accessible to authorised users.
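One widely used projection-reconstruction scheme, lowest-value back-projection, can be sketched in a few lines of Python; this conceptual version assumes projections on the same frequency grid as the target plane and is not PR-CALC's optimized implementation.

```python
import numpy as np

def lowest_value_reconstruction(projections, angles, n1, n2):
    """Reconstruct an (n1, n2) plane from 1D projections recorded at the given
    tilt angles (radians). Each spectrum point takes the minimum of the values
    found at the matching coordinate in every projection (lowest-value rule)."""
    w1 = np.arange(n1) - n1 // 2           # frequency indices, centred on zero
    w2 = np.arange(n2) - n2 // 2
    plane = np.full((n1, n2), np.inf)
    for proj, a in zip(projections, angles):
        npts = len(proj)
        # frequency coordinate of (w1, w2) along the projection direction
        coord = np.add.outer(w1 * np.cos(a), w2 * np.sin(a))
        idx = np.clip(np.round(coord).astype(int) + npts // 2, 0, npts - 1)
        plane = np.minimum(plane, proj[idx])
    return plane
```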

7.
Efficient analysis of protein 2D NMR spectra using the software package EASY
Summary: The program EASY supports the spectral analysis of biomacromolecular two-dimensional (2D) nuclear magnetic resonance (NMR) data. It provides a user-friendly, window-based environment in which to view spectra for interactive interpretation. In addition, it includes a number of automated routines for peak picking, spin-system identification, sequential resonance assignment in polypeptide chains, and cross-peak integration. In this uniform environment, all resulting parameter lists can be recorded on disk, so that the paper plots and handwritten notes that normally accompany manual assignment of spectra can be largely eliminated. For example, in a protein structure determination by 2D 1H NMR, EASY accepts the frequency-domain data sets as input, and after combined use of the automated and interactive routines it can yield a listing of conformational constraints in the format required as input for the calculation of the 3D structure. The program was extensively tested with current protein structure determinations in our laboratory. In this paper, its main features are illustrated with data on the protein basic pancreatic trypsin inhibitor.
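A minimal Python sketch of automated peak picking of the kind such routines perform (local maxima above a noise-derived threshold) is shown below; the neighbourhood size and threshold rule are illustrative assumptions, not EASY's actual algorithm.

```python
import numpy as np

def pick_peaks(spectrum, threshold):
    """Return (row, col) indices of points that exceed the threshold and are
    local maxima within their 3x3 neighbourhood."""
    peaks = []
    n1, n2 = spectrum.shape
    for i in range(1, n1 - 1):
        for j in range(1, n2 - 1):
            v = spectrum[i, j]
            if v > threshold and v == spectrum[i-1:i+2, j-1:j+2].max():
                peaks.append((i, j))
    return peaks

# e.g. threshold at 5x the noise estimated from an empty corner of the spectrum:
# noise = spectrum[:32, :32].std(); peaks = pick_peaks(spectrum, 5 * noise)
```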

8.
Sparse sampling in biomolecular multidimensional NMR offers increased acquisition speed and resolution and, if appropriate conditions are met, an increase in sensitivity. Sparse sampling of indirectly detected time domains combined with a direct, truly multidimensional Fourier transform has attracted particular attention because of its ability to generate a final spectrum amenable to traditional analysis techniques. A number of sparse sampling schemes have been described, including radial sampling, random sampling, concentric sampling and variations thereof. A fundamental feature of these sampling schemes is that the resulting time-domain data array is not amenable to traditional Fourier-transform-based processing and phase-correction techniques. In addition, radial sampling approaches offer a number of advantages and capabilities that are also not accessible using standard NMR processing techniques. These include sensitivity enhancement, sub-matrix processing and determination of minimal sets of sampling angles. Here we describe a new software package (Al NMR) that enables these capabilities in the context of a general NMR data processing environment.
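A small Python sketch of how radial sampling selects points from the full t1-t2 evolution grid along spokes at chosen angles is given below; the grid size and angle set are illustrative.

```python
import numpy as np

def radial_schedule(n1, n2, angles_deg):
    """Return the set of (t1, t2) grid indices that lie on radial spokes
    through the origin of the indirect-dimension evolution grid."""
    points = set()
    for a in np.deg2rad(angles_deg):
        r_max = min(n1 / max(np.cos(a), 1e-9), n2 / max(np.sin(a), 1e-9))
        for r in np.arange(0, r_max):
            t1, t2 = int(round(r * np.cos(a))), int(round(r * np.sin(a)))
            if t1 < n1 and t2 < n2:
                points.add((t1, t2))
    return sorted(points)

sched = radial_schedule(64, 64, [0, 30, 60, 90])
print(len(sched), "of", 64 * 64, "grid points sampled")
```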

9.
The fast Fourier transform has been the gold standard for transforming data from the time to the frequency domain in many spectroscopic methods, including NMR. While reliable, it has the drawback of requiring a grid of uniformly sampled data points. Sampling all indirect dimensions of multidimensional experiments uniformly therefore demands very long measurement times and does not even allow evolution times to reach the values that would match the resolving power of modern high-field instruments. Thus, many alternative sampling and transformation schemes have been proposed. Their common challenges are the suppression of artifacts due to the non-uniformity of the sampling schedules, the preservation of the relative signal amplitudes, and the computing time needed for spectrum reconstruction. Here we present a fast implementation of the iterative soft thresholding approach (istHMS) that can reconstruct high-resolution non-uniformly sampled NMR data in up to four dimensions within a few hours and makes routine reconstruction of high-resolution NUS 3D and 4D spectra convenient. We include a graphical user interface for generating sampling schedules with the Poisson-gap method and for estimating optimal evolution times based on molecular properties. The performance of the approach is demonstrated with the reconstruction of non-uniformly sampled medium- and high-resolution 3D and 4D protein spectra acquired with sampling densities as low as 0.8%. The method presented here facilitates the acquisition, reconstruction and use of multidimensional NMR spectra at otherwise unreachable spectral resolution in the indirect dimensions.
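A minimal Python sketch of sinusoidally weighted Poisson-gap schedule generation is shown below; the rescaling loop and parameter choices are simplified assumptions rather than the exact implementation behind the graphical user interface.

```python
import numpy as np

def poisson_gap(n_total, n_keep, seed=0, max_tries=2000):
    """Sinusoidally weighted Poisson-gap sampling: gaps between sampled points
    are Poisson-distributed and grow, on average, towards long evolution times."""
    rng = np.random.default_rng(seed)
    adj = float(n_total) / n_keep - 1.0            # initial mean gap size
    for _ in range(max_tries):
        idx, i = [], 0
        while i < n_total:
            idx.append(i)
            lam = adj * np.sin((i + 0.5) / n_total * np.pi / 2)  # small gaps early on
            i += 1 + rng.poisson(lam)
        if len(idx) == n_keep:
            return np.array(idx)
        adj *= len(idx) / n_keep                   # too many/few points: rescale gaps
    return np.array(idx)                           # fall back to the last attempt

sched = poisson_gap(128, 32)
print(len(sched), sched[:10])
```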

10.
In order to reduce the acquisition time of multidimensional NMR spectra of biological macromolecules, projected spectra (in other words, spectra sampled in polar coordinates) can be used. Their standard processing involves a regular FFT of the projections followed by a reconstruction, i.e. a non-linear process. In this communication, we show that a 2D discrete Fourier transform can be implemented in polar coordinates to obtain a frequency-domain spectrum directly. Aliasing due to local violations of the Nyquist sampling theorem gives rise to baseline ridges, but the peak lineshapes are not distorted as they are in most reconstruction methods. Because the sampling scheme is not linear, the time-domain data points should be weighted accordingly in the polar FT; artifacts can be further reduced by additional weighting of the undersampled regions. This processing does not require any parameter tuning and is straightforward to use. The algorithm written for polar sampling can be adapted to any sampling scheme and will make it possible to investigate better compromises between experimental time and freedom from artifacts.
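The idea can be sketched as a direct, weighted 2D Fourier sum over polar-sampled time-domain points; the radius-proportional weights below are a simple density-compensation assumption, not necessarily the exact weighting used by the authors.

```python
import numpy as np

def direct_ft_2d(t1, t2, data, weights, f1, f2):
    """Evaluate S(f1, f2) = sum_k w_k s_k exp(-2*pi*i*(f1*t1_k + f2*t2_k))
    directly, for time-domain samples that need not lie on a regular grid."""
    spec = np.zeros((len(f1), len(f2)), dtype=complex)
    for tk1, tk2, sk, wk in zip(t1, t2, data, weights):
        phase = np.exp(-2j * np.pi * np.add.outer(f1 * tk1, f2 * tk2))
        spec += wk * sk * phase
    return spec

# polar sampling of the (t1, t2) evolution grid; weight each point by its radius
angles = np.deg2rad(np.arange(0, 180, 10))
radii = np.arange(1, 65) * 1e-4                 # illustrative radial dwell of 100 us
t1 = np.concatenate([radii * np.cos(a) for a in angles])
t2 = np.concatenate([radii * np.sin(a) for a in angles])
weights = np.concatenate([radii for _ in angles])   # density compensation ~ r

# toy signal with one resonance at (f1, f2) = (1200, -800) Hz
data = np.exp(2j * np.pi * (1200 * t1 - 800 * t2)) * np.exp(-np.hypot(t1, t2) * 50)
f_axis = np.linspace(-2500, 2500, 128)
spectrum = direct_ft_2d(t1, t2, data, weights, f_axis, f_axis)
```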

11.
Despite advances in metabolic and postmetabolic labeling methods for quantitative proteomics, there remains a need for improved label-free approaches. This need is particularly pressing for workflows that incorporate affinity enrichment at the peptide level, where isobaric chemical labels such as isobaric tags for relative and absolute quantitation (iTRAQ) and tandem mass tags (TMT) may prove problematic, or where stable isotope labeling with amino acids in cell culture (SILAC) cannot be readily applied. Skyline is a freely available, open-source software tool for quantitative data processing and proteomic analysis. We expanded the capabilities of Skyline to process ion-intensity chromatograms of peptide analytes from full-scan mass spectral data (MS1) acquired during HPLC MS/MS proteomic experiments. Moreover, unlike existing programs, Skyline MS1 filtering can be used with mass spectrometers from four major vendors, which allows results to be compared directly across laboratories. The new quantitative and graphical tools now available in Skyline specifically support interrogation of multiple acquisitions for MS1 filtering, including visual inspection of peak picking and both automated and manual integration, key features often lacking in existing software. In addition, Skyline MS1 filtering displays retention-time indicators from the underlying MS/MS data contained within the spectral library to ensure proper peak selection. The modular structure of Skyline also provides well-defined, customizable data reports and thus allows users to connect directly to existing statistical programs for post hoc data analysis. To demonstrate the utility of the MS1 filtering approach, we carried out experiments on several MS platforms and specifically examined the performance of the method in quantifying two important post-translational modifications, acetylation and phosphorylation, in peptide-centric affinity workflows of increasing complexity using mouse and human models.
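A minimal Python sketch of MS1 chromatogram extraction for a single precursor (summing intensities within a ppm window across MS1 scans) is shown below; the scan layout, tolerance and toy data are assumptions for illustration.

```python
import numpy as np

def extract_chromatogram(scans, target_mz, ppm_tol=10.0):
    """Build an extracted ion chromatogram from MS1 scans.
    Each scan is a tuple (retention_time, mz_array, intensity_array)."""
    half_window = target_mz * ppm_tol * 1e-6
    rts, intensities = [], []
    for rt, mz, inten in scans:
        mask = np.abs(mz - target_mz) <= half_window
        rts.append(rt)
        intensities.append(inten[mask].sum())
    return np.array(rts), np.array(intensities)

# toy usage: a single precursor at m/z 721.36 eluting around 22 min
scans = [(22.0 + 0.01 * i,
          np.array([500.12, 721.36, 900.44]),
          np.array([1e4, 5e5 * np.exp(-((i - 10) / 4) ** 2), 2e3]))
         for i in range(20)]
rt, xic = extract_chromatogram(scans, 721.36)
print(rt[xic.argmax()], xic.max())
```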

12.
13.
Global lipidomics analysis across large numbers of samples produces high-content datasets that require dedicated software tools supporting lipid identification and quantification, efficient data management and lipidome visualization. Here we present a novel software platform for streamlined data processing, management and visualization of shotgun lipidomics data acquired using high-resolution Orbitrap mass spectrometry. The platform features the ALEX framework, designed for automated identification and export of lipid species intensities directly from proprietary mass spectral data files, and an auxiliary workflow using database exploration tools for integration of sample information, computation of lipid abundance and lipidome visualization. A key feature of the platform is the organization of lipidomics data in a "database table format", which provides the user with unsurpassed flexibility for rapid lipidome navigation using selected features within the dataset. To demonstrate the efficacy of the platform, we present a comparative neurolipidomics study of cerebellum, hippocampus and somatosensory barrel cortex (S1BF) from wild-type and knockout mice devoid of the putative lipid phosphate phosphatase PRG-1 (plasticity-related gene-1). The presented framework is generic, extendable to the processing and integration of other lipidomic data structures, can be interfaced with post-processing protocols supporting statistical testing and multivariate analysis, and can serve as an avenue for disseminating lipidomics data within the scientific community. The ALEX software is available at www.msLipidomics.info.
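The "database table format" idea can be sketched with a long-format table in Python/pandas; the column names and values below are illustrative and do not reproduce the actual ALEX output schema.

```python
import pandas as pd

# long ("database table") format: one row per lipid species per sample;
# column names and numbers are illustrative assumptions
df = pd.DataFrame({
    "sample":      ["WT_cereb", "WT_cereb", "KO_cereb", "KO_cereb"],
    "region":      ["cerebellum"] * 4,
    "lipid_class": ["PC", "PE", "PC", "PE"],
    "species":     ["PC 34:1", "PE 38:4", "PC 34:1", "PE 38:4"],
    "intensity":   [1.8e6, 9.2e5, 1.6e6, 1.3e6],
})

# normalise to mol% within each sample, then pivot for quick lipidome comparison
df["mol_pct"] = 100 * df["intensity"] / df.groupby("sample")["intensity"].transform("sum")
summary = df.pivot_table(index="species", columns="sample", values="mol_pct")
print(summary.round(1))
```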

14.
A method for five-dimensional spectral reconstruction of non-uniformly sampled NMR data sets is proposed. It is derived from the previously published signal separation algorithm, with major alterations that avoid the infeasible processing of an entire five-dimensional spectrum. The proposed method allows credible reconstruction of spectra from as few as a few hundred data points and enables sensitive resonance detection in experiments with a high dynamic range of peak intensities. The efficiency of the method is demonstrated on two high-resolution spectra for rapid sequential assignment of intrinsically disordered proteins, namely 5D HN(CA)CONH and 5D (HACA)CON(CO)CONH.

15.
NvAssign: protein NMR spectral assignment with NMRView
MOTIVATION: Nuclear magnetic resonance (NMR) protein studies rely on the accurate assignment of resonances. The general procedure is to (1) pick peaks, (2) cluster data from various experiments or spectra, (3) assign peaks to the sequence and (4) verify the assignments with the spectra. Many algorithms already exist for automating the assignment process (step 3). What is lacking is a flexible interface to help a spectroscopist easily move from clustering (step 2) to assignment algorithms (step 3) and back to verification of the algorithm output with spectral analysis (step 4). RESULTS: A software module, NvAssign, was written for use with NMRView. It is a significant extension of the previous CBCA module. The module provides a flexible interface to cluster data and interact with the existing assignment algorithms. Further, the software module is able to read the results of other algorithms so that the data can be easily verified by spectral analysis. The generalized interface is demonstrated by connecting the clustered data with the assignment algorithms PACES and MONTE using previously assigned data for the lyase domain of DNA polymerase lambda. The spectral analysis program NMRView is now able to read the output of these programs for simplified analysis and verification. AVAILABILITY: NvAssign is available from http://dir.niehs.nih.gov/dirnmr/nvassign
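A minimal Python sketch of the clustering step (step 2), matching peaks from a 3D experiment to HSQC root resonances within chemical-shift tolerances, is shown below; the tolerances and toy peak lists are illustrative assumptions, not NvAssign's implementation.

```python
import numpy as np

def cluster_by_root(root_peaks, other_peaks, tol_h=0.03, tol_n=0.3):
    """Group peaks from another spectrum with HSQC 'root' resonances whose
    1H/15N shifts match within the given tolerances (ppm)."""
    clusters = {i: [] for i in range(len(root_peaks))}
    for peak in other_peaks:
        d_h = np.abs(root_peaks[:, 0] - peak[0])
        d_n = np.abs(root_peaks[:, 1] - peak[1])
        match = np.where((d_h < tol_h) & (d_n < tol_n))[0]
        if len(match) == 1:                      # keep unambiguous matches only
            clusters[match[0]].append(tuple(peak))
    return clusters

roots = np.array([[8.21, 118.4], [7.95, 121.2]])              # (1H, 15N) in ppm
cbcaconh = np.array([(8.22, 118.5, 56.3), (8.20, 118.3, 30.1), (7.96, 121.1, 58.9)])
print(cluster_by_root(roots, cbcaconh))
```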

16.
Clean absorption-mode NMR data acquisition is presented, based on mirrored time-domain sampling and the widely used time-proportional phase incrementation (TPPI) for quadrature detection. The resulting NMR spectra are devoid of dispersive frequency-domain peak components. Such peak components hamper peak identification and shift peak maxima, and thus impede automated spectral analysis. The new approach is also of unique value for obtaining clean absorption-mode reduced-dimensionality projection NMR spectra, which can rapidly provide high-dimensional spectral information for high-throughput NMR structure determination.
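A short NumPy illustration of the underlying principle is given below: a mirrored (two-sided) time-domain signal is conjugate-symmetric and transforms to an essentially real, purely absorptive spectrum, whereas the usual one-sided signal carries a dispersive imaginary component. This is a conceptual demo, not the acquisition scheme itself.

```python
import numpy as np

n, dt, R, freq = 256, 1e-3, 20.0, 150.0          # points, dwell (s), decay (1/s), Hz
omega = 2 * np.pi * freq

t_one = np.arange(n) * dt                        # conventional one-sided sampling
t_mir = np.arange(-n // 2, n // 2) * dt          # mirrored (two-sided) sampling

s_one = np.exp(1j * omega * t_one - R * t_one)
s_mir = np.exp(1j * omega * t_mir - R * np.abs(t_mir))

S_one = np.fft.fftshift(np.fft.fft(s_one))
S_mir = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(s_mir)))

# relative size of the dispersive (imaginary) component in each spectrum
print("one-sided:", np.max(np.abs(S_one.imag)) / np.max(np.abs(S_one.real)))
print("mirrored :", np.max(np.abs(S_mir.imag)) / np.max(np.abs(S_mir.real)))
```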

17.
Nmrglue, an open source Python package for working with multidimensional NMR data, is described. When used in combination with other Python scientific libraries, nmrglue provides a highly flexible and robust environment for spectral processing, analysis and visualization and includes a number of common utilities such as linear prediction, peak picking and lineshape fitting. The package also enables existing NMR software programs to be readily tied together, currently facilitating the reading, writing and conversion of data stored in Bruker, Agilent/Varian, NMRPipe, Sparky, SIMPSON, and Rowland NMR Toolkit file formats. In addition to standard applications, the versatility offered by nmrglue makes the package particularly suitable for tasks that include manipulating raw spectrometer data files, automated quantitative analysis of multidimensional NMR spectra with irregular lineshapes such as those frequently encountered in the context of biomacromolecular solid-state NMR, and rapid implementation and development of unconventional data processing methods such as covariance NMR and other non-Fourier approaches. Detailed documentation, install files and source code for nmrglue are freely available at http://nmrglue.com. The source code can be redistributed and modified under the New BSD license.
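A short example of the kind of workflow nmrglue enables, reading an NMRPipe-format FID and applying NMRPipe-like processing functions, is shown below; the file names and processing parameters are illustrative.

```python
import nmrglue as ng

# read a 1D NMRPipe-format FID (file name is illustrative)
dic, data = ng.pipe.read("test.fid")

# basic processing with nmrglue's NMRPipe-like functions
dic, data = ng.pipe_proc.sp(dic, data, off=0.35, end=0.98, pow=2, c=1.0)  # apodization
dic, data = ng.pipe_proc.zf(dic, data, auto=True)                         # zero fill
dic, data = ng.pipe_proc.ft(dic, data, auto=True)                         # Fourier transform
dic, data = ng.pipe_proc.ps(dic, data, p0=0.0, p1=0.0)                    # phase correction
dic, data = ng.pipe_proc.di(dic, data)                                    # drop imaginaries

# write the processed spectrum back out in NMRPipe format
ng.pipe.write("test.ft", dic, data, overwrite=True)
```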

18.
Recent technological advances and experimental techniques have contributed to an increasing number and size of NMR datasets. In order to scale up productivity, laboratory information management systems for handling these extensive data need to be designed and implemented. The SPINS (Standardized ProteIn Nmr Storage) Laboratory Information Management System (LIMS) addresses these needs by providing an interface for archival of complete protein NMR structure determinations, together with functionality for depositing these data to the public BioMagResBank (BMRB). The software tracks intermediate files during each step of an NMR structure-determination process, including data collection, data processing, resonance assignments, resonance assignment validation, structure calculation, and structure validation. The underlying SPINS data dictionary allows for the integration of various third-party NMR data processing and analysis software, enabling users to launch the programs they are accustomed to using for each step of the structure determination process directly from the SPINS user interface.

19.
A maximum likelihood (ML)-based approach has been established for the direct extraction of NMR parameters (e.g., frequency, amplitude, phase, and decay rate) simultaneously from all dimensions of a D-dimensional NMR spectrum. The approach, referred to here as HTFD-ML (hybrid time-frequency domain maximum likelihood), constructs a time-domain model composed of a sum of exponentially decaying sinusoidal signals. The apodized Fourier transform of this time-domain signal is a model spectrum that represents the best fit to the equivalent frequency-domain data spectrum. The desired amplitude and frequency parameters can be extracted directly from the signal model constructed by the HTFD-ML algorithm. The HTFD-ML approach presented here, as embodied in the software package CHIFIT, is designed to meet the challenges posed by model fitting of D-dimensional NMR data sets, where each set consists of many data points (10^8 is not uncommon) encoding information about numerous signals (up to 10^5 for a protein of moderate size) that exhibit spectral overlap. The suitability of the approach is demonstrated by its application to the concerted analysis of a series of ten 2D 1H-15N HSQC experiments measuring 15N T1 relaxation. In addition to demonstrating the practicality of performing maximum likelihood analysis on large, multidimensional NMR spectra, the results show that this parametric model-fitting approach provides more accurate amplitude and frequency estimates than those obtained from conventional peak-based analysis of the FT spectrum. The improved performance of the model-fitting approach derives from its ability to take into account the simultaneous contributions of all signals in a crowded spectral region (deconvolution), as well as to incorporate prior knowledge in constructing models to fit the data.
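The core idea can be sketched in Python by fitting a time-domain model of an exponentially decaying sinusoid to the data by nonlinear least squares, which under Gaussian noise is the maximum likelihood estimate; the single-signal model and use of SciPy are simplifications and not the CHIFIT implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, t):
    """One exponentially decaying sinusoid: amplitude, frequency (Hz), phase, decay rate."""
    a, f, phi, r = params
    return a * np.exp(1j * (2 * np.pi * f * t + phi) - r * t)

def residuals(params, t, data):
    diff = model(params, t) - data
    return np.concatenate([diff.real, diff.imag])   # least_squares needs real residuals

# synthetic FID: under Gaussian noise the ML estimate is the least-squares fit
rng = np.random.default_rng(0)
t = np.arange(512) * 1e-3
true = (2.0, 123.0, 0.4, 12.0)
data = model(true, t) + 0.05 * (rng.standard_normal(512) + 1j * rng.standard_normal(512))

# starting values would normally come from a coarse peak pick of the FT spectrum
fit = least_squares(residuals, x0=(1.0, 122.0, 0.0, 5.0), args=(t, data))
print(np.round(fit.x, 3))   # amplitude, frequency, phase, decay recovered directly
```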

20.