Similar Documents
20 similar documents found (search time: 31 ms)
1.
Cryo-electron tomography (cryo-ET) and subtomogram averaging (STA) are increasingly used for macromolecular structure determination in situ. Here, we introduce a set of computational tools and resources designed to enable flexible approaches to STA through increased automation and simplified metadata handling. We create a bidirectional interface between the Dynamo software package and the Warp-Relion-M pipeline, providing a framework for ab initio and geometrical approaches to multiparticle refinement in M. We illustrate the power of working within this framework by applying it to EMPIAR-10164, a publicly available dataset containing immature HIV-1 virus-like particles (VLPs), and a challenging in situ dataset containing chemosensory arrays in bacterial minicells. Additionally, we provide a comprehensive, step-by-step guide to obtaining a 3.4-Å reconstruction from EMPIAR-10164. The guide is hosted on https://teamtomo.org/, a collaborative online platform we establish for sharing knowledge about cryo-ET.

Employing optimal computational methodology in cryo-electron tomography is not always easy; this article provides a set of tools and a complete guide to obtaining high-resolution structures from cryo-ET data.

2.
Cryo-electron tomography (cryo-ET) and subtomogram averaging (STA) can resolve protein complexes at near atomic resolution, and when combined with focused ion beam (FIB) milling, macromolecules can be observed within their native context. Unlike single particle acquisition (SPA), cryo-ET can be slow, which may reduce overall project throughput. We here propose a fast, multi-position tomographic acquisition scheme based on beam-tilt corrected beam-shift imaging along the tilt axis, which yields sub-nanometer in situ STA averages.

3.
4.
The potential of energy filtering and direct electron detection for cryo-electron microscopy (cryo-EM) has been well documented. Here, we assess the performance of recently introduced hardware for cryo-electron tomography (cryo-ET) and subtomogram averaging (STA), an increasingly popular structural determination method for complex 3D specimens. We acquired cryo-ET datasets of EIAV virus-like particles (VLPs) on two contemporary cryo-EM systems equipped with different energy filters and direct electron detectors (DED), specifically a Krios G4, equipped with a cold field emission gun (CFEG), Thermo Fisher Scientific Selectris X energy filter, and a Falcon 4 DED; and a Krios G3i, with a Schottky field emission gun (XFEG), a Gatan BioQuantum energy filter, and a K3 DED. We performed constrained cross-correlation-based STA on equally sized datasets acquired on the respective systems. The resulting EIAV CA hexamer reconstructions show that both systems perform comparably in the 4–6 Å resolution range based on Fourier shell correlation (FSC). In addition, by employing a recently introduced multiparticle refinement approach, we obtained a reconstruction of the EIAV CA hexamer at 2.9 Å. Our results demonstrate the potential of the new generation of energy filters and DEDs for STA, and the effects of using different processing pipelines on their STA outcomes.
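
For orientation, the FSC criterion used above compares two reconstructions shell by shell in Fourier space. A minimal numpy sketch (not from the paper; the function name and shell count are illustrative):

```python
import numpy as np

def fsc_curve(vol_a, vol_b, n_shells=40):
    """Fourier shell correlation between two maps of identical (cubic) shape."""
    fa = np.fft.fftshift(np.fft.fftn(vol_a))
    fb = np.fft.fftshift(np.fft.fftn(vol_b))
    # radial distance of every voxel from the Fourier-space origin
    grid = np.indices(vol_a.shape) - np.array(vol_a.shape).reshape(-1, 1, 1, 1) // 2
    radius = np.sqrt((grid ** 2).sum(axis=0))
    edges = np.linspace(0, min(vol_a.shape) // 2, n_shells + 1)
    curve = np.empty(n_shells)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        shell = (radius >= lo) & (radius < hi)
        num = (fa[shell] * np.conj(fb[shell])).sum()
        den = np.sqrt((np.abs(fa[shell]) ** 2).sum() * (np.abs(fb[shell]) ** 2).sum())
        curve[i] = (num / den).real if den > 0 else 0.0
    return curve  # resolution is commonly read off where the curve crosses 0.143
```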

5.
In recent studies, exome sequencing has proven to be a successful screening tool for the identification of candidate genes causing rare genetic diseases. Although underlying targeted sequencing methods are well established, necessary data handling and focused, structured analysis still remain demanding tasks. Here, we present a cloud-enabled autonomous analysis pipeline, which comprises the complete exome analysis workflow. The pipeline combines several in-house developed and published applications to perform the following steps: (a) initial quality control, (b) intelligent data filtering and pre-processing, (c) sequence alignment to a reference genome, (d) SNP and DIP detection, (e) functional annotation of variants using different approaches, and (f) detailed report generation during various stages of the workflow. The pipeline connects the selected analysis steps, exposes all available parameters for customized usage, performs required data handling, and distributes computationally expensive tasks either on a dedicated high-performance computing infrastructure or on the Amazon cloud environment (EC2). The presented application has already been used in several research projects including studies to elucidate the role of rare genetic diseases. The pipeline is continuously tested and is publicly available under the GPL as a VirtualBox or Cloud image at http://simplex.i-med.ac.at; additional supplementary data is provided at http://www.icbi.at/exome.
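
As an illustration of the orchestration pattern behind steps (a) through (f) (none of this is the pipeline's actual code; every function is a hypothetical stub standing in for real tools such as aligners and variant callers):

```python
# Toy orchestrator mirroring the published step order (a)-(f); all functions
# below are hypothetical stubs, not the pipeline's real components.

def quality_control(reads): return reads            # (a) initial QC
def preprocess(reads):      return reads            # (b) filtering/pre-processing
def align(reads, ref):      return "sample.bam"     # (c) alignment to reference
def call_variants(bam):     return "sample.vcf"     # (d) SNP and DIP detection
def annotate(vcf):          return "annotated.vcf"  # (e) functional annotation
def report(vcf):            print("report generated for", vcf)  # (f) reporting

def run_pipeline(reads, ref):
    bam = align(preprocess(quality_control(reads)), ref)
    report(annotate(call_variants(bam)))

run_pipeline("sample.fastq", "hg19.fa")
```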

6.
Advances in electron microscope instrumentation, cryo-electron tomography data collection, and subtomogram averaging have allowed for the in situ visualization of molecules and their complexes in their native environment. Current data processing pipelines commonly extract subtomograms as a cubic subvolume with the key assumption that the selected object of interest is discrete from its surroundings. However, in instances when the object is in its native environment, surrounding densities may negatively affect the subsequent alignment and refinement processes, leading to loss of information due to misalignment. For example, the strong densities from surrounding membranes may dominate the alignment process for membrane proteins. Here, we developed methods for feature-guided subtomogram alignment and 3D signal permutation for subtomogram averaging. Our 3D signal permutation method randomizes and filters voxels outside a mask of any shape and blurs the boundary of the mask that encapsulates the object of interest. The randomization preserves global statistical properties such as mean density and standard deviation of voxel density values, effectively producing a featureless background surrounding the object of interest. This signal permutation process can be repeatedly applied with intervening alignments of the 3D signal-permuted subvolumes, recentering of the mask, and optional adjustments of the shape of the mask. We have implemented these methods in a new processing pipeline which starts from tomograms, contains feature-guided subtomogram extraction and alignment, 3D signal permutation, and subtomogram visualization tools. As an example, feature-guided alignment and 3D signal permutation leads to improved subtomogram average maps for a dataset of synaptic protein complexes in their native environment.
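
The core of the 3D signal permutation idea can be sketched in a few lines of numpy. This is a schematic reading of the abstract, not the authors' implementation, and the blur parameter is an arbitrary choice:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def signal_permute(subvolume, mask, blur_sigma=2.0, rng=None):
    """Randomize voxels outside a binary mask by permutation (which preserves
    the mean and standard deviation of those voxel values exactly), then blend
    across a blurred mask boundary."""
    rng = rng or np.random.default_rng()
    permuted = subvolume.copy()
    outside = ~mask.astype(bool)
    permuted[outside] = rng.permutation(permuted[outside])
    soft = gaussian_filter(mask.astype(float), blur_sigma)  # soften the edge
    # inside the mask: original signal; outside: featureless permuted background
    return soft * subvolume + (1.0 - soft) * permuted
```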

7.
Single-cell RNA sequencing (scRNA-seq) has emerged as a powerful technique to decipher tissue composition at the single-cell level and to inform on disease mechanisms, tumor heterogeneity, and the state of the immune microenvironment. Although multiple methods for the computational analysis of scRNA-seq data exist, their application in a clinical setting demands standardized and reproducible workflows, targeted to extract, condense, and display the clinically relevant information. To this end, we designed scAmpi (Single Cell Analysis mRNA pipeline), a workflow that facilitates scRNA-seq analysis from raw read processing to informing on sample composition, clinically relevant gene and pathway alterations, and in silico identification of personalized candidate drug treatments. We demonstrate the value of this workflow for clinical decision making in a molecular tumor board as part of a clinical study.
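
scAmpi's own code is not reproduced here; as a generic illustration of the class of steps such a workflow automates (raw counts to clusters to marker genes), a scanpy sketch with illustrative paths and parameter values:

```python
import scanpy as sc

# path and all parameter values are illustrative, not scAmpi defaults
adata = sc.read_10x_mtx("filtered_feature_bc_matrix/")
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.tl.pca(adata)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)                       # unsupervised clustering
sc.tl.rank_genes_groups(adata, "leiden")  # per-cluster marker genes
```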

8.

Introduction

Untargeted metabolomics workflows include numerous points where variance and systematic errors can be introduced. Because of the diversity of the lipidome, manual peak picking and quantitation with molecule-specific internal standards are unrealistic, so high-quality peak-picking algorithms, together with downstream feature-processing and normalization algorithms, are important. Subsequent normalization, data filtering, statistical analysis, and biological interpretation are simplified when quality data acquisition and feature processing are employed.

Objectives

Metrics for QC are important throughout the workflow. The robust workflow presented here provides techniques to ensure that QC checks are implemented throughout sample preparation, data acquisition, pre-processing, and analysis.

Methods

The untargeted lipidomics workflow includes sample standardization prior to acquisition, blocks of QC standards and blanks run at systematic intervals between randomized blocks of experimental data, blank feature filtering (BFF) to remove features not originating from the sample, and QC analysis of data acquisition and processing.
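
Blank feature filtering reduces, in essence, to an intensity comparison between samples and blanks. A minimal pandas sketch (the ratio threshold is an illustrative choice, not the published BFF criterion):

```python
import pandas as pd

def blank_feature_filter(features: pd.DataFrame, sample_cols, blank_cols,
                         min_ratio=3.0):
    """Keep features whose mean intensity across sample runs exceeds
    min_ratio times the mean intensity across blank runs."""
    sample_mean = features[sample_cols].mean(axis=1)
    blank_mean = features[blank_cols].mean(axis=1)
    return features[sample_mean > min_ratio * blank_mean]
```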

Results

The workflow was successfully applied to mouse liver samples, which were investigated to discern lipidomic changes throughout the development of nonalcoholic fatty liver disease (NAFLD). The workflow, including a novel filtering method, BFF, allows improved confidence in results and conclusions for lipidomic applications.

Conclusion

Using a mouse model developed for the study of the transition of NAFLD from an early stage, simple steatosis, to the later stage, nonalcoholic steatohepatitis, in combination with our novel workflow, we have identified phosphatidylcholines, phosphatidylethanolamines, and triacylglycerols that may contribute to disease onset and/or progression.

9.

Aim

Palaeoecological data are crucial for comprehending large-scale biodiversity patterns and the natural and anthropogenic drivers that influence them over time. Over the last decade, the availability of open-access research databases of palaeoecological proxies has substantially increased. These databases open the door to research questions needing advanced numerical analyses and modelling based on big-data compilations. However, compiling and analysing palaeoecological data pose unique challenges that require a guide for producing standardized and reproducible compilations.

Innovation

We present a step-by-step guide for processing fossil pollen data into a standardized dataset compilation ready for macroecological and palaeoecological analyses. We describe successive criteria that will enhance the quality of the compilations. Though these criteria are project- and research-question-dependent, we discuss the most important assumptions that should be considered and adjusted accordingly. Our guide is accompanied by an R workflow, called FOSSILPOL, and a corresponding R package, called R-Fossilpol, which provide a detailed protocol ready for interdisciplinary users. We illustrate the workflow by sourcing and processing Scandinavian fossil pollen datasets and show the reproducibility of continental-scale data processing.

Main Conclusions

The study of biodiversity and macroecological patterns through time and space requires large-scale syntheses of palaeoecological datasets. The data preparation for such syntheses must be transparent and reproducible. With our FOSSILPOL workflow and R package, we provide a protocol for optimal handling of large compilations of fossil pollen datasets and workflow reproducibility. Our workflow is also relevant for the compilation and synthesis of other palaeoecological proxies and as such offers a guide for synthetic and cross-disciplinary analyses with macroecological, biogeographical and palaeoecological perspectives. However, we emphasize that expertise and informed decisions based on palaeoecological knowledge remain crucial for high-quality data syntheses and should be strongly embedded in studies that rely on the increasing amount of open-access palaeoecological data.

10.
As structural genomics and proteomics research has become popular, the importance of cell-free protein synthesis systems has been realized for high-throughput expression. Our group has established a high-throughput pipeline for protein sample preparation for structural genomics and proteomics by using cell-free protein synthesis. Among the many procedures for cell-free protein synthesis, the preparation of the cell extract is a crucial step to establish a highly efficient and reproducible workflow. In this article, we describe a detailed protocol for E. coli cell extract preparation for cell-free protein synthesis, which we have developed and routinely use. The cell extract prepared according to this protocol is used for many of our cell-free synthesis applications, including high-throughput protein expression using PCR-amplified templates and large-scale protein production for structure determinations.

11.
Cryo-electron tomography (CET) is a three-dimensional imaging technique for structural studies of macromolecules under close-to-native conditions. In-depth analysis of macromolecule populations depicted in tomograms requires identification of subtomograms corresponding to putative particles, averaging of subtomograms to enhance their signal, and classification to capture the structural variations among them. Here, we introduce the open-source platform PyTom that unifies standard tomogram processing steps in a Python toolbox. For subtomogram averaging, we implemented an adaptive adjustment of scoring and sampling that clearly improves the resolution of averages compared to static strategies. Furthermore, we present a novel stochastic classification method that yields significantly more accurate classification results than two deterministic approaches in simulations. We demonstrate that the PyTom workflow yields faithful results for alignment and classification of simulated and experimental subtomograms of ribosomes and GroEL₁₄/GroEL₁₄GroES₇, respectively, as well as for the analysis of ribosomal 60S subunits in yeast cell lysate. PyTom enables parallelized processing of large numbers of tomograms, but also provides a convenient, sustainable environment for algorithmic development.

12.
We have developed a simple and efficient protocol for the isolation of good-quality recombinant phage DNA useful for all downstream processing, including automated sequencing. The overnight-grown phage particles were effectively precipitated (without any contaminating Escherichia coli DNA and other culture media components) by adjusting the pH of the culture medium to 5.2 with sodium acetate, followed by addition of ethanol to 25%. The phage DNA was selectively precipitated with ethanol in the presence of guanidinium thiocyanate under alkaline pH, resulting in uniform quality and quantity of phage DNA. The quality of the phage DNA preparation was demonstrated by DNA sequencing that provided an average read length of >700 bases (PHRED20 quality). This protocol for plating, picking, growing, and subsequent DNA purification of individual phage clones can be completely automated using any standard robotic platform. This protocol does not require any commercial kits and can be completed within 2 h.

13.
14.
Understanding the function and evolution of developmental regulatory networks requires the characterisation and quantification of spatio-temporal gene expression patterns across a range of systems and species. However, most high-throughput methods to measure the dynamics of gene expression do not preserve the detailed spatial information needed in this context. For this reason, quantification methods based on image bioinformatics have become increasingly important over the past few years. Most available approaches in this field either focus on the detailed and accurate quantification of a small set of gene expression patterns, or attempt high-throughput analysis of spatial expression through binary pattern extraction and large-scale analysis of the resulting datasets. Here we present a robust, “medium-throughput” pipeline to process in situ hybridisation patterns from embryos of different species of flies. It bridges the gap between high-resolution, and high-throughput image processing methods, enabling us to quantify graded expression patterns along the antero-posterior axis of the embryo in an efficient and straightforward manner. Our method is based on a robust enzymatic (colorimetric) in situ hybridisation protocol and rapid data acquisition through wide-field microscopy. Data processing consists of image segmentation, profile extraction, and determination of expression domain boundary positions using a spline approximation. It results in sets of measured boundaries sorted by gene and developmental time point, which are analysed in terms of expression variability or spatio-temporal dynamics. Our method yields integrated time series of spatial gene expression, which can be used to reverse-engineer developmental gene regulatory networks across species. It is easily adaptable to other processes and species, enabling the in silico reconstitution of gene regulatory networks in a wide range of developmental contexts.
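
The boundary-determination step can be illustrated with a short scipy sketch (schematic only; the authors' spline approximation and threshold choice may differ):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def boundary_positions(x, profile, level=0.5):
    """Fit a smoothing spline to a 1D expression profile along the AP axis
    (x must be increasing) and return positions where the normalized profile
    crosses `level` of its maximum, i.e. candidate domain boundaries."""
    spline = UnivariateSpline(x, profile / profile.max(), s=0.01)
    xs = np.linspace(x.min(), x.max(), 2000)
    above = (spline(xs) > level).astype(int)
    return xs[np.nonzero(np.diff(above))[0]]
```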

15.
Spike-triggered averaging (STA) of muscle force transients has often been used to estimate motor unit contractile properties, using the discharge of a motor unit within the muscle as the triggering events. For motor units that exert torque about multiple degrees-of-freedom, STA has also been used to estimate motor unit pulling direction. It is well known that motor unit firing rate and weak synchronization of motor unit discharges with other motor units in the muscle can distort STA estimates of contractile properties, but the distortion of STA estimates of motor unit pulling direction has not been thoroughly evaluated. Here, we derive exact equations that predict that STA decouples firing rate and synchronization distortion when used to estimate motor unit pulling direction. We derive a framework for analyzing synchronization, consider whether the distortion due to synchronization can be removed from STA estimates of pulling direction, and show that there are distributions of motor unit pulling directions for which STA is insensitive to synchronization. We conclude that STA may give insight into how motoneuronal synchronization is organized with respect to motor unit pulling direction.
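
The basic STA computation the paper builds on is simple to state in code. A minimal numpy sketch (window lengths and the peak-based direction estimate are illustrative choices, not the paper's method):

```python
import numpy as np

def spike_triggered_average(force, spike_samples, pre=50, post=200):
    """Average force/torque transients around motor unit discharge times.
    force: array of shape (T,) or (T, d), d = number of degrees of freedom."""
    windows = [force[i - pre:i + post] for i in spike_samples
               if i >= pre and i + post <= len(force)]
    return np.mean(windows, axis=0)

def pulling_direction(sta):
    """For a (time, d) STA, normalize the torque vector at peak magnitude:
    one simple estimate of motor unit pulling direction."""
    peak = sta[np.argmax(np.linalg.norm(sta, axis=1))]
    return peak / np.linalg.norm(peak)
```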

16.
Soft tissue artefact (STA), i.e. the motion of the skin, fat and muscles gliding over the underlying bone, may lead to marker position errors reaching up to 8.7 cm in the particular case of the scapula. Multibody kinematics optimisation (MKO) is one of the most efficient approaches used to reduce STA. It consists in minimising the distance between the positions of experimental markers on a subject's skin and the simulated positions of the same markers embedded on a kinematic model. However, the efficiency of MKO directly relies on the chosen kinematic model. This paper provides an overview of the different upper limb models available in the literature and a discussion of their applicability to MKO. The advantages of each joint model with respect to its biofidelity to functional anatomy are detailed for both the shoulder and the forearm areas. The models' capabilities of personalisation and of adaptation to pathological cases are also discussed. Concerning model efficiency in terms of STA reduction in MKO algorithms, a lack of quantitative assessment in the literature is noted. As a priority, future studies should address the evaluation and quantification of STA reduction depending on upper limb joint constraints.
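
MKO as defined above is a least-squares fit of model marker positions to measured ones. A toy planar two-segment sketch (segment lengths, marker placement, and noise model are all invented for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy planar "upper limb": two segments, one marker at the elbow, one at the
# wrist. MKO finds the joint angles whose model markers best match the
# (soft-tissue-corrupted) measured marker positions.
L1, L2 = 0.30, 0.25  # segment lengths in metres (illustrative)

def model_markers(q):
    q1, q2 = q
    elbow = np.array([L1 * np.cos(q1), L1 * np.sin(q1)])
    wrist = elbow + np.array([L2 * np.cos(q1 + q2), L2 * np.sin(q1 + q2)])
    return np.concatenate([elbow, wrist])

def mko(measured, q0=np.zeros(2)):
    # minimise distances between measured and model-embedded marker positions
    return least_squares(lambda q: model_markers(q) - measured, q0).x

rng = np.random.default_rng(0)
true_q = np.array([0.6, 0.4])
measured = model_markers(true_q) + 0.01 * rng.normal(size=4)  # STA-like noise
print(mko(measured))  # recovers angles close to true_q
```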

17.
TomoAlign is a software package that integrates tools to mitigate two important resolution-limiting factors in cryoET, namely the beam-induced sample motion and the contrast transfer function (CTF) of the microscope. The package is especially focused on cryoET of thick specimens where fiducial markers are required for accurate tilt-series alignment and sample motion estimation. TomoAlign models the beam-induced sample motion undergone during the tilt-series acquisition. The motion models are used to produce motion-corrected subtilt-series centered on the particles of interest. In addition, the defocus of each particle at each tilt image is determined and can be corrected, resulting in motion-corrected and CTF-corrected subtilt-series from which the subtomograms can be computed. Alternatively, the CTF information can be passed on so that CTF correction can be carried out entirely within external packages like Relion. TomoAlign serves as a versatile tool that can streamline the cryoET workflow from initial alignment of tilt-series to final subtomogram averaging during in situ structure determination.
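
For reference, the CTF that TomoAlign corrects for has a standard closed form. A 1D numpy sketch under one common sign convention (parameter defaults are illustrative, not TomoAlign's):

```python
import numpy as np

def ctf_1d(k, defocus_um=3.0, voltage_kv=300.0, cs_mm=2.7, amp_contrast=0.07):
    """1D contrast transfer function; k in 1/Angstrom, underfocus positive."""
    v = voltage_kv * 1e3
    lam = 12.2639 / np.sqrt(v + 0.97845e-6 * v ** 2)  # e- wavelength, Angstrom
    dz = defocus_um * 1e4                             # defocus in Angstrom
    cs = cs_mm * 1e7                                  # spherical aberration, Angstrom
    gamma = np.pi * lam * dz * k ** 2 - 0.5 * np.pi * cs * lam ** 3 * k ** 4
    return (-np.sqrt(1 - amp_contrast ** 2) * np.sin(gamma)
            - amp_contrast * np.cos(gamma))

k = np.linspace(0, 0.5, 512)  # spatial frequencies up to 2 Angstrom resolution
ctf = ctf_1d(k)
```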

18.
Cyclops is a new computer program designed as a graphical front-end that allows easy control and interaction with tasks and programs for 3D reconstruction of biological complexes using cryo-electron microscopy. Cyclops' current plug-ins are designed for automated particle picking and include two new algorithms, automated carbon masking and quaternion-based rotation space sampling, which are also presented here. Additional plug-ins are in the pipeline. Cyclops allows straightforward organization and visualization of all data and tasks and allows both interactive and batch-wise processing. Furthermore, it was designed for straightforward implementation in grid architectures. As a front-end to a collection of programs it provides a common interface to these programs, thus enhancing the usability of the suite and the productivity of the user.
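
Quaternion-based sampling of rotation space avoids the pole clustering of Euler-angle grids. Cyclops's own sampler is not shown here; as a related standard technique, Shoemake's method for drawing uniformly distributed random rotations as unit quaternions:

```python
import numpy as np

def random_quaternions(n, rng=None):
    """Uniform random rotations as unit quaternions (Shoemake's method)."""
    rng = rng or np.random.default_rng()
    u1, u2, u3 = rng.random((3, n))
    # each row has unit norm: (1 - u1) + u1 = 1
    return np.stack([np.sqrt(1 - u1) * np.sin(2 * np.pi * u2),
                     np.sqrt(1 - u1) * np.cos(2 * np.pi * u2),
                     np.sqrt(u1) * np.sin(2 * np.pi * u3),
                     np.sqrt(u1) * np.cos(2 * np.pi * u3)], axis=1)

q = random_quaternions(1000)  # shape (1000, 4)
```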

19.
20.
In vivo ¹⁹F MRI allows quantitative cell tracking without the use of ionizing radiation. It is a noninvasive technique that can be applied to humans. Here, we describe a general protocol for cell labeling, imaging, and image processing. The technique is applicable to various cell types and animal models, although here we focus on a typical mouse model for tracking murine immune cells. The most important issues for cell labeling are described, as these are relevant to all models. Similarly, key imaging parameters are listed, although the details will vary depending on the MRI system and the individual setup. Finally, we include an image processing protocol for quantification. Variations on this and other parts of the protocol are assessed in the Discussion section. Based on the detailed procedure described here, the user will need to adapt the protocol for each specific cell type, cell label, animal model, and imaging setup. Note that the protocol can also be adapted for human use, as long as clinical restrictions are met.
