20 similar records found (search time: 15 ms)
1.
2.
In the last decade, directed evolution has become a routine approach for engineering proteins with novel or altered properties. Concurrently, a trend away from purely 'blind' randomization strategies and towards more 'semi-rational' approaches has also become apparent. In this review, we discuss ways in which structural information and predictive computational tools are playing an increasingly important role in guiding the design of randomized libraries: web servers such as ConSurf-HSSP and SCHEMA allow the prediction of sites to target for producing functional variants, while algorithms such as GLUE, PEDEL and DRIVeR are useful for estimating library completeness and diversity. In addition, we review recent methodological developments that facilitate the construction of unbiased libraries, which are inherently more diverse than biased libraries and therefore more likely to yield improved variants.
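The library-statistics calculations performed by tools such as GLUE and PEDEL can be illustrated with a simple sampling model: if L clones are drawn uniformly from V possible variants, the expected number of distinct variants is approximately V(1 − e^(−L/V)). A minimal sketch under that strong equiprobability assumption (real libraries are biased, which is precisely why the dedicated tools exist); the function names are illustrative, not those of any published tool:

```python
import math

def expected_distinct_variants(library_size, num_possible):
    """Expected number of distinct sequences among `library_size`
    clones drawn uniformly from `num_possible` equally likely
    variants (a strong simplifying assumption)."""
    return num_possible * (1.0 - math.exp(-library_size / num_possible))

def completeness(library_size, num_possible):
    """Fraction of all possible variants expected to appear at
    least once in the library."""
    return expected_distinct_variants(library_size, num_possible) / num_possible
```

For example, a library the same size as the sequence space is only about 63% complete under this model, which is why oversampling matters.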
3.
Chromatin immunoprecipitation (ChIP) followed by deep sequencing can now easily be performed across different conditions, time points and even species. However, analyzing such data is not trivial and standard methods are as yet unavailable. Here we present a protocol to systematically compare ChIP-sequencing (ChIP-seq) data across conditions. We first describe technical guidelines for data preprocessing, read mapping, read-density visualization and peak calling. We then describe methods and provide code with specific examples to compare different data sets across species and across conditions, including a threshold-free approach to measure global similarity, a strategy to assess the binary conservation of binding events and measurements for quantitative changes of binding. We discuss how differences in binding can be related to gene functions, gene expression and sequence changes. Once established, this protocol should take about 2 d to complete and be generally applicable to many data sets.
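The threshold-free global-similarity idea — correlating read densities in genomic bins rather than comparing called peaks — can be sketched in a few lines. This is a simplified single-chromosome illustration with invented function names, not the protocol's actual code, which also handles mapping, normalization and multiple chromosomes:

```python
from collections import defaultdict

def bin_reads(read_positions, bin_size=1000):
    """Count mapped-read start positions per genomic bin
    (one chromosome, for simplicity)."""
    counts = defaultdict(int)
    for pos in read_positions:
        counts[pos // bin_size] += 1
    return counts

def pearson(x, y):
    """Pearson correlation of two equal-length count vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / ((vx * vy) ** 0.5)

def global_similarity(reads_a, reads_b, bin_size=1000):
    """Threshold-free similarity of two ChIP-seq samples: correlate
    binned read densities over the union of occupied bins, so no
    peak-calling cutoff is ever applied."""
    ca, cb = bin_reads(reads_a, bin_size), bin_reads(reads_b, bin_size)
    bins = sorted(set(ca) | set(cb))
    return pearson([ca[b] for b in bins], [cb[b] for b in bins])
```

Because no cutoff is applied, weak but reproducible enrichment contributes to the similarity score instead of being discarded.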
4.
Jan M. Deussing 《Cell and tissue research》2013,353(1):9-25
Hes genes are required to maintain diverse progenitor cell populations during embryonic development. Loss of Hes1 results in a spectrum of malformations of pharyngeal endoderm-derived organs, including the ultimobranchial body (progenitor of C cells), parathyroid, thymus and thyroid glands, together with highly penetrant C-cell aplasia (81%) and parathyroid aplasia (28%). The hypoplastic parathyroid and thymus are mostly located around the pharyngeal cavity, even at embryonic day (E) 15.5 to E18.5, indicating a failure of migration of these organs. To clarify the relationship between these phenotypes and neural crest cells, we examine the fate of neural crest cells colonizing the pharyngeal arches in Hes1 null mutants by using the Wnt1-Cre/R26R reporter system. In null mutants, the number of neural crest cells labeled by X-gal staining is markedly decreased in the pharyngeal mesenchyme at E12.5, when the primordia of the thymus, parathyroid and ultimobranchial body migrate toward their destinations. Furthermore, phospho-Histone-H3-positive proliferating cells are reduced in number in the pharyngeal mesenchyme at this stage. Our data indicate that the development of pharyngeal organs and survival of neural-crest-derived mesenchyme in pharyngeal arches are critically dependent on Hes1. We propose that the defective survival of neural-crest-derived mesenchymal cells in pharyngeal arches directly or indirectly leads to deficiencies of pharyngeal organs.
5.
Southern J Pitt-Francis J Whiteley J Stokeley D Kobashi H Nobes R Kadooka Y Gavaghan D 《Progress in biophysics and molecular biology》2008,96(1-3):60-89
Recent advances in biotechnology and the availability of ever more powerful computers have led to the formulation of increasingly complex models at all levels of biology. One of the main aims of systems biology is to couple these together to produce integrated models across multiple spatial scales and physical processes. In this review, we formulate a definition of multi-scale in terms of levels of biological organisation and describe the types of model that are found at each level. Key issues that arise in trying to formulate and solve multi-scale and multi-physics models are considered and examples of how these issues have been addressed are given for two of the more mature fields in computational biology: the molecular dynamics of ion channels and cardiac modelling. As even more complex models are developed over the coming few years, it will be necessary to develop new methods to model them (in particular in coupling across the interface between stochastic and deterministic processes) and new techniques will be required to compute their solutions efficiently on massively parallel computers. We outline how we envisage these developments occurring.
6.
Nicola Kelly Caoimhe A. Sweeney Kathrin Kurtenbach James A. Grogan Stefan Jockenhoevel 《Computer methods in biomechanics and biomedical engineering》2013,16(16):1334-1344
Braided stents are associated with a number of complications in vivo. Accurate computational modelling of these devices is essential for the design and development of the next generation of these stents. In this study, two commonly utilised methods of computationally modelling filament interaction in braided stents are investigated: the join method and the weave method. Three different braided stent designs are experimentally tested and computationally modelled in both radial and v-block configurations. The results of the study indicate that while both methods are capable of capturing braided stent performance to some degree, the weave method is much more robust.
7.
Although comparison of RNA-protein interaction profiles across different conditions has become increasingly important to understanding the function of RNA-binding proteins (RBPs), few computational approaches have been developed for quantitative comparison of CLIP-seq datasets. Here, we present an easy-to-use command line tool, dCLIP, for quantitative CLIP-seq comparative analysis. The two-stage method implemented in dCLIP, including a modified MA normalization method and a hidden Markov model, is shown to be able to effectively identify differential binding regions of RBPs in four CLIP-seq datasets, generated by HITS-CLIP, iCLIP and PAR-CLIP protocols. dCLIP is freely available at http://qbrc.swmed.edu/software/.
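The first stage of dCLIP's two-stage method rests on the MA transform familiar from expression analysis: M = log2(x/y) measures fold change between conditions and A = 0.5·log2(xy) measures overall abundance. A toy sketch of plain median-M rescaling, using a simple pseudocount and omitting dCLIP's actual modifications (this is not the tool's code):

```python
import math

def ma_transform(x, y, pseudocount=1.0):
    """MA transform for one region's read counts in two conditions:
    M = log2 fold change, A = mean log2 abundance."""
    lx = math.log2(x + pseudocount)
    ly = math.log2(y + pseudocount)
    return lx - ly, 0.5 * (lx + ly)

def median_m_normalize(pairs):
    """Rescale the second condition's counts so the median M over
    common regions is pulled towards zero, assuming most regions
    are not differentially bound."""
    ms = sorted(ma_transform(x, y)[0] for x, y in pairs)
    n = len(ms)
    median_m = ms[n // 2] if n % 2 else 0.5 * (ms[n // 2 - 1] + ms[n // 2])
    factor = 2.0 ** median_m
    return [(x, y * factor) for x, y in pairs]
```

After rescaling, regions whose M remains large are the candidates for genuinely differential binding, which the HMM stage then segments.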
8.
An essential first step in investigations of viruses in soil is the evaluation of viral recovery methods suitable for subsequent culture-independent analyses. In this study, four elution buffers (10% beef extract, 250 mM glycine buffer, 10 mM sodium pyrophosphate, and 1% potassium citrate) and three enumeration techniques (plaque assay, epifluorescence microscopy [EFM], and transmission electron microscopy [TEM]) were compared to determine the best method of extracting autochthonous bacteriophages from two Delaware agricultural soils. Beef extract and glycine buffer were the most effective in eluting viable phages inoculated into soils (up to 29% recovery); however, extraction efficiency varied significantly with phage strain. Potassium citrate eluted the highest numbers of virus-like particles from both soils based on enumerations by EFM (mean, 5.3 × 10⁸ per gram of dry soil), but specific soil-eluant combinations posed significant problems to enumeration by EFM. Observations of virus-like particles under TEM gave confidence that the particles were, in fact, phages, but TEM enumerations yielded measurements of phage abundance (mean, 1.5 × 10⁸ per gram of dry soil) that were about five times lower. Clearly, the measurement of phage abundance in soils varies with both the extraction and enumeration methodology; thus, it is important to assess multiple extraction and enumeration approaches prior to undertaking ecological studies of phages in a particular soil.
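The comparisons above reduce to two simple calculations: percent recovery of inoculated phage and the fold difference between enumeration methods. A trivial sketch with illustrative function names, included only to make the arithmetic explicit:

```python
def percent_recovery(recovered_pfu, inoculated_pfu):
    """Extraction efficiency: viable phage recovered as a
    percentage of the amount inoculated into the soil."""
    return 100.0 * recovered_pfu / inoculated_pfu

def fold_difference(count_high, count_low):
    """Factor by which one enumeration (e.g. EFM counts)
    exceeds another (e.g. TEM counts) for the same sample."""
    return count_high / count_low
```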
9.
Comparative analyses of squalene synthase (SQS) proteins in poplar and pine by using bioinformatics tools
Squalene synthase (SQS, EC 2.5.1.21) is a key enzyme in isoprenoid biosynthesis, converting farnesyl pyrophosphate (FPP) to squalene. In the present study, we analyzed the SQS enzymes of black cottonwood (Populus trichocarpa, hereafter Pt) and Masson’s pine (Pinus massoniana, hereafter Pm) using bioinformatics tools. The PtSQS and PmSQS sequences were found to have very similar physicochemical properties, sharing the “squalene/phytoene synthase” domain (PF00494). PtSQS was 413 amino acids long with a molecular weight of 47.3 kDa and a pI of 6.86, while PmSQS was 409 amino acids long with a molecular weight of 46.6 kDa and a pI of 7.92. Alignment of SQS protein sequences from 15 plant species showed a highly conserved pattern that included the aspartate-rich motifs DTVED (residues 77–81) and DYLED (residues 213–217), which form the FPP binding sites. In the phylogenetic tree, monocots and polycot were clearly separated from dicots with a high bootstrap value (99%). A total of 10 interaction partners were predicted for the PtSQS and PmSQS proteins; nine were hypothetical proteins related to phytosterol biosynthesis, while one was a putative uncharacterized protein. Similar 3D structures and identical binding sites were predicted for pine and poplar. In docking, FPP-PtSQS formed eight hydrogen bonds with the Asp81, Asp217, Glu80 and Gln206 residues of the poplar enzyme at the highest affinity, while FPP-PmSQS formed seven hydrogen bonds with the Arg49, Arg74, Ser48 and Val47 residues of the pine enzyme. The results of this study provide theoretical groundwork for the future identification and characterization of SQS genes and proteins in various tree species, and offer insight for biotechnological manipulation of the sterol biosynthesis pathway to enhance plant stress tolerance and productivity.
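Locating short conserved motifs such as the aspartate-rich FPP-binding sites in a protein sequence is a basic scan; a minimal sketch using the two motifs named in the abstract (the helper name and the toy sequence are illustrative, not from the study):

```python
import re

def find_motifs(seq, motifs=("DTVED", "DYLED")):
    """Return 1-based start positions of each motif in a protein
    sequence, keyed by motif (matches the residue-numbering
    convention used for e.g. DTVED at residues 77-81)."""
    return {
        m: [match.start() + 1 for match in re.finditer(m, seq)]
        for m in motifs
    }
```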
10.
Moult J 《Philosophical transactions of the Royal Society of London. Series B, Biological sciences》2006,361(1467):453-458
In principle, given the amino acid sequence of a protein, it is possible to compute the corresponding three-dimensional structure. Methods for modelling structure based on this premise have been under development for more than 40 years. For the past decade, a series of community wide experiments (termed Critical Assessment of Structure Prediction (CASP)) have assessed the state of the art, providing a detailed picture of what has been achieved in the field, where we are making progress, and what major problems remain. The rigorous evaluation procedures of CASP have been accompanied by substantial progress. Lessons from this area of computational biology suggest a set of principles for increasing rigor in the field as a whole.
11.
Assessing computational tools for the discovery of transcription factor binding sites
Tompa M Li N Bailey TL Church GM De Moor B Eskin E Favorov AV Frith MC Fu Y Kent WJ Makeev VJ Mironov AA Noble WS Pavesi G Pesole G Régnier M Simonis N Sinha S Thijs G van Helden J Vandenbogaert M Weng Z Workman C Ye C Zhu Z 《Nature biotechnology》2005,23(1):137-144
The prediction of regulatory elements is a problem where computational methods offer great hope. Over the past few years, numerous tools have become available for this task. The purpose of the current assessment is twofold: to provide some guidance to users regarding the accuracy of currently available tools in various settings, and to provide a benchmark of data sets for assessing future tools.
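Benchmarks of this kind typically score predicted binding sites against annotated ones at the nucleotide level. A minimal sketch of two standard metrics, sensitivity and positive predictive value (the function name is illustrative; the assessment itself uses a broader battery of statistics):

```python
def site_coverage(predicted, known):
    """Nucleotide-level sensitivity and positive predictive value
    of predicted binding sites versus known sites, both given as
    (start, end) half-open intervals on one sequence."""
    pred, true = set(), set()
    for s, e in predicted:
        pred.update(range(s, e))
    for s, e in known:
        true.update(range(s, e))
    tp = len(pred & true)
    sensitivity = tp / len(true) if true else 0.0
    ppv = tp / len(pred) if pred else 0.0
    return sensitivity, ppv
```

Nucleotide-level scoring rewards partial overlaps proportionally, which matters because predicted motif boundaries rarely coincide exactly with annotated ones.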
12.
Over the last seven years our laboratory has focused on the determination of the structural aspects of nucleoside triphosphate diphosphohydrolases (NTPDases) using site-directed mutagenesis and computational comparative protein modeling to generate hypotheses and models for the hydrolytic site and enzymatic mechanism of the family of NTPDase nucleotidases. This review summarizes these studies utilizing NTPDase3 (also known as CD39L3 and HB6), an NTPDase family member that is intermediate in its characteristics between the more widely distributed and studied NTPDase1 (also known as CD39) and NTPDase2 (also known as CD39L1 and ecto-ATPase) enzymes. Relevant site-directed mutagenesis studies of other NTPDases are also discussed and compared to NTPDase3 results. It is anticipated that many of the results and conclusions reached via studies of NTPDase3 will be relevant to understanding the structure and enzymatic mechanism of all the cell-surface members of this family (NTPDase1–3, 8), and that understanding these NTPDase enzymes will aid in modulating the many varied processes under purinergic signaling control. This review also integrates the site-directed mutagenesis results with a recent 3-D structural model for the extracellular portion of NTPDases that helps explain the importance of the apyrase conserved regions (ACRs) of the NTPDases. Utilizing this model and published work from Dr Guidotti's laboratory concerning the importance and characteristics of the two transmembrane helices and their movements in response to substrate, we present a speculative cartoon model of the enzymatic mechanism of the membrane-bound NTPDases that integrates movements of the extracellular region required for catalysis with movements of the N- and C-terminal transmembrane helices that are important for control and modulation of enzyme activity.
13.
Mark P. Buttner Patricia Cruz Linda D. Stetzenbach Amy K. Klima-Comba Vanessa L. Stevens Peter A. Emanuel 《Applied microbiology》2004,70(12):7040-7045
Current surface sampling methods for microbial contaminants are designed to sample small areas and utilize culture analysis. The total number of microbes recovered is low because a small area is sampled, making detection of a potential pathogen more difficult. Furthermore, sampling of small areas requires a greater number of samples to be collected, which delays the reporting of results, taxes laboratory resources and staffing, and increases analysis costs. A new biological surface sampling method, the Biological Sampling Kit (BiSKit), designed to sample large areas and to be compatible with testing with a variety of technologies, including PCR and immunoassay, was evaluated and compared to other surface sampling strategies. In experimental room trials, wood laminate and metal surfaces were contaminated by aerosolization of Bacillus atrophaeus spores, a simulant for Bacillus anthracis, into the room, followed by settling of the spores onto the test surfaces. The surfaces were sampled with the BiSKit, a cotton-based swab, and a foam-based swab. Samples were analyzed by culturing, quantitative PCR, and immunological assays. The results showed that the large surface area (1 m²) sampled with the BiSKit resulted in concentrations of B. atrophaeus in samples that were up to 10-fold higher than the concentrations obtained with the other methods tested. A comparison of wet and dry sampling with the BiSKit indicated that dry sampling was more efficient (efficiency, 18.4%) than wet sampling (efficiency, 11.3%). The sensitivities of detection of B. atrophaeus on metal surfaces were 42 ± 5.8 CFU/m² for wet sampling and 100.5 ± 10.2 CFU/m² for dry sampling. These results demonstrate that the use of a sampling device capable of sampling larger areas results in higher sensitivity than that obtained with currently available methods and has the advantage of sampling larger areas, thus requiring collection of fewer samples per site.
14.
Omics and bioinformatics are essential to understanding the molecular systems that underlie various plant functions. Recent game-changing sequencing technologies have revitalized sequencing approaches in genomics and have produced opportunities for various emerging analytical applications. Driven by technological advances, several new omics layers such as the interactome, epigenome and hormonome have emerged. Furthermore, in several plant species, the development of omics resources has progressed to address particular biological properties of individual species. Integration of knowledge from omics-based research is an emerging issue as researchers seek to identify significance, gain biological insights and promote translational research. From these perspectives, we provide this review of the emerging aspects of plant systems research based on omics and bioinformatics analyses together with their associated resources and technological advances.
15.
Bridging the information gap: computational tools for intermediate resolution structure interpretation
Due to their large size and complex nature, few large macromolecular complexes have been solved to atomic resolution. This has led to an under-representation of these structures, which are composed of novel and/or homologous folds, in the library of known structures and folds. While it is often difficult to achieve a high-resolution model for these structures, X-ray crystallography and electron cryomicroscopy are capable of determining structures of large assemblies at low to intermediate resolutions. To aid in the interpretation and analysis of such structures, we have developed two programs: helixhunter and foldhunter. Helixhunter is capable of reliably identifying helix position, orientation and length using a five-dimensional cross-correlation search of a three-dimensional density map followed by feature extraction. Helixhunter's results can in turn be used to probe a library of secondary structure elements derived from the structures in the Protein Data Bank (PDB). From this analysis, it is then possible to identify potential homologous folds or suggest novel folds based on the arrangement of alpha helix elements, resulting in a structure-based recognition of folds containing alpha helices. Foldhunter uses a six-dimensional cross-correlation search allowing a probe structure to be fitted within a region or component of a target structure. The structural fitting therefore provides a quantitative means to further examine the architecture and organization of large, complex assemblies. These two methods have been successfully tested with simulated structures modeled from the PDB at resolutions between 6 and 12 Å. With the integration of helixhunter and foldhunter into sequence and structural informatics techniques, we have the potential to deduce or confirm known or novel folds in domains or components within large complexes.
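The translational part of such a cross-correlation search can be illustrated with a brute-force scan of a small probe density over a target density. This sketch deliberately omits what makes foldhunter's search six-dimensional (the three rotational degrees of freedom) and the FFT acceleration a real implementation would use; all names are illustrative:

```python
def best_fit_offset(target, probe):
    """Score a small probe density at every translational offset
    inside a target density (both nested lists indexed [z][y][x])
    and return the best (offset, score). Brute force: O(N*M)."""
    tz, ty, tx = len(target), len(target[0]), len(target[0][0])
    pz, py, px = len(probe), len(probe[0]), len(probe[0][0])
    best_score, best_off = float("-inf"), None
    for oz in range(tz - pz + 1):
        for oy in range(ty - py + 1):
            for ox in range(tx - px + 1):
                # Correlation score: sum of voxel-wise products.
                s = sum(
                    target[oz + z][oy + y][ox + x] * probe[z][y][x]
                    for z in range(pz)
                    for y in range(py)
                    for x in range(px)
                )
                if s > best_score:
                    best_score, best_off = s, (oz, oy, ox)
    return best_off, best_score
```

FFT-based correlation reduces each translational scan to O(N log N), which is what makes exhaustive searches over rotations tractable in practice.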
16.
Over the past decade, a number of biocomputational tools have been developed to predict small RNA (sRNA) genes in bacterial genomes. In this study, several of the leading biocomputational tools, which use different methodologies, were investigated. The performance of the tools, both individually and in combination, was evaluated on ten sets of benchmark data, including data from a novel RNA-seq experiment conducted in this study. The results of this study offer insight into the utility as well as the limitations of the leading biocomputational tools for sRNA identification and provide practical guidance for users of the tools.
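Evaluating tools "in combination" usually means some voting or consensus scheme over their predicted intervals. A minimal sketch of position-level voting, one of several possible combination schemes and not necessarily the one used in the study; names are illustrative:

```python
def combine_predictions(tool_calls, min_votes=2):
    """Combine candidate sRNA intervals from several tools by simple
    position-level voting: keep genome positions predicted by at
    least `min_votes` tools. `tool_calls` is a list (one entry per
    tool) of lists of (start, end) half-open intervals."""
    votes = {}
    for calls in tool_calls:
        covered = set()
        for s, e in calls:
            covered.update(range(s, e))
        for p in covered:          # each tool votes once per position
            votes[p] = votes.get(p, 0) + 1
    return {p for p, v in votes.items() if v >= min_votes}
```

Raising `min_votes` trades sensitivity for specificity, which mirrors the individual-versus-combined comparisons such benchmarks report.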
17.
Rod photoreceptors are activated by light through activation of a cascade that includes the G protein-coupled receptor rhodopsin, the G protein transducin, its effector cyclic guanosine monophosphate (cGMP) phosphodiesterase and the second messengers cGMP and Ca2+. Signalling is localised to the particular rod outer segment disc, which is activated by absorption of a single photon. Modelling of this cascade has previously been performed mostly by assumption of a well-stirred cytoplasm. We recently published the first fully spatially resolved model that captures the local nature of light activation. The model reduces the complex geometry of the cell to a simpler one using the mathematical theories of homogenisation and concentrated capacity. The model shows that, upon activation of a single rhodopsin, changes of the second messengers cGMP and Ca2+ are local about the particular activated disc. In the current work, the homogenised model is computationally compared with the full, non-homogenised one, set in the original geometry of the rod outer segment. It is found to have an accuracy of 0.03% compared with the full model in computing the integral response and a 5200-fold reduction in computation time. The model can reconstruct the radial time-profiles of cGMP and Ca2+ in the interdiscal spaces adjacent to the activated discs. Cellular electrical responses are localised near the activation sites, and multiple photons sufficiently far apart produce essentially independent responses. This leads to a computational analysis of the notion and estimate of 'spread' and the optimum distribution of activated sites that maximises the response. Biological insights arising from the spatio-temporal model include a quantification of how variability in the response to dim light is affected by the distance between the outer segment discs capturing photons. 
The model is thus a simulation tool for biologists to predict the effect of various factors influencing the timing, spread and control mechanisms of this G protein-coupled, receptor-mediated cascade. It permits ease of simulation experiments across a range of conditions, for example, clamping the concentration of calcium, with results matching analogous experimental results. In addition, the model accommodates differing geometries of rod outer segments from different vertebrate species. Thus it represents a building block towards a predictive model of visual transduction.
18.
Catherine S. Jarnevich James J. Graham Gregory J. Newman Alycia W. Crall Thomas J. Stohlgren 《Biological invasions》2007,9(5):597-599
Data sensitivity can pose a formidable barrier to data sharing. Knowledge of species' current distributions from data sharing is critical for the creation of watch lists and an early warning/rapid response system, and for generating models of the spread of invasive species. We have created an on-line system to synthesize disparate datasets of non-native species locations that includes a mechanism to account for data sensitivity. Data contributors are able to mark their data as sensitive. These data are then 'fuzzed' to quarter-quadrangle grid cells in mapping applications and downloaded files, but the actual locations remain available for analyses. We propose that this system overcomes the hurdles to data sharing posed by sensitive data.
19.
20.
Anderson AE Ellis BJ Weiss JA 《Computer methods in biomechanics and biomedical engineering》2007,10(3):171-184
Computational techniques and software for the analysis of problems in mechanics have naturally moved from their origins in the traditional engineering disciplines to the study of cell, tissue and organ biomechanics. Increasingly complex models have been developed to describe and predict the mechanical behavior of such biological systems. While the availability of advanced computational tools has led to exciting research advances in the field, the utility of these models is often the subject of criticism due to inadequate model verification and validation (V&V). The objective of this review is to present the concepts of verification, validation and sensitivity studies with regard to the construction, analysis and interpretation of models in computational biomechanics. Specific examples from the field are discussed. It is hoped that this review will serve as a guide to the use of V&V principles in the field of computational biomechanics, thereby improving the peer acceptance of studies that use computational modeling techniques.