Similar Literature
20 similar records retrieved.
1.
The number of fluorophores within a molecule complex can be revealed by single-molecule photobleaching imaging. A widely applied strategy to analyze intensity traces over time is the quantification of photobleaching step counts. However, several factors can limit and bias the detection of photobleaching steps, including noise, high numbers of fluorophores, and the possibility that several photobleaching events occur almost simultaneously. In this study, we propose a new approach (to our knowledge) for determining the fluorophore number, which correlates the intensity decay of a population of molecule complexes with the decay of the number of visible complexes. We validated our approach using single and fourfold Atto-labeled DNA strands. As an example, we estimated the subunit stoichiometry of soluble CD95L using GFP fusion proteins. To assess the precision of our method, we performed in silico experiments showing that the estimates are not biased for experimentally observed intensity fluctuations and that the relative precision remains constant with increasing number of fluorophores. In the case of fractional fluorescent labeling, our simulations predicted that the fluorophore number estimate corresponds to the product of the true fluorophore number and the labeling fraction. Our method, denoted spot number and intensity correlation (SONIC), is fully automated, robust to noise, and does not require the counting of photobleaching events.
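A minimal sketch of the correlation idea, under the simplifying assumption that each complex carries n independently bleaching fluorophores, so the mean spot intensity scales with the per-fluorophore survival probability p while the fraction of still-visible spots scales as 1 - (1 - p)^n. The model, data values, and fitting routine below are illustrative assumptions, not the published SONIC estimator:

    import numpy as np
    from scipy.optimize import curve_fit

    def visible_fraction(rel_intensity, n):
        # Assumed model: if each of n fluorophores survives with probability p,
        # the mean spot intensity scales as n*p (so p equals the normalised
        # intensity), and a spot remains visible while at least one fluorophore
        # survives: 1 - (1 - p)**n.
        p = np.clip(rel_intensity, 0.0, 1.0)
        return 1.0 - (1.0 - p) ** n

    # rel_intensity: mean spot intensity at each time point, normalised to t = 0
    # spot_fraction: detected spots at each time point / initial spot count
    rel_intensity = np.array([1.00, 0.80, 0.60, 0.40, 0.25, 0.12])
    spot_fraction = np.array([1.00, 0.998, 0.975, 0.870, 0.684, 0.400])

    (n_est,), _ = curve_fit(visible_fraction, rel_intensity, spot_fraction, p0=[2.0])
    print(f"estimated fluorophores per complex: {n_est:.2f}")   # close to 4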

2.
Here we describe a computational approach for the high-throughput sequence mapping of combinatorial libraries obtained by DNA shuffling. Original algorithms and their software implementation were developed for the automated and reliable analysis of hybridization data of differentially labeled oligonucleotide probes with PCR products spotted on DNA microarrays. This novel approach allows context-dependent sequence attribution that is tolerant of fluctuations in experimental conditions and is well adapted to hybridization signals of variable quality resulting from high-throughput PCR amplification from colonies. In addition, the analysis permits the calculation of sequence signatures that are characteristic of combinatorial library structure, defects, and diversity. The approach is of interest for the characterization and equalization (library reduction to nonredundant structures) of combinatorial libraries involved in directed evolution and could be extrapolated to high-throughput polymorphism analysis.
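The abstract does not spell out the attribution rule; a loose, hedged sketch of one ingredient is shown below: calling each probed segment of a shuffled clone for one parent from two-colour hybridization ratios and concatenating the calls into a clone signature. The log-ratio rule, threshold, and probe layout are my assumptions, not the published context-dependent algorithm:

    import numpy as np

    def parent_signature(cy3, cy5, confident=2.0):
        # Assign each probed segment to parent A or B from two-colour
        # hybridisation signals; '?' marks ambiguous calls.  The ratio rule and
        # the confidence threshold are illustrative assumptions.
        ratio = np.log2((np.asarray(cy3, float) + 1.0) / (np.asarray(cy5, float) + 1.0))
        calls = np.where(ratio >= np.log2(confident), "A",
                 np.where(ratio <= -np.log2(confident), "B", "?"))
        return "".join(calls)

    # hypothetical clone probed at five positions (parent-A vs parent-B probes)
    print(parent_signature([900, 850, 40, 30, 700], [50, 60, 800, 760, 45]))  # 'AABBA'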

3.
4.
We have developed an integrated automation system for genetic analysis and gene manipulation. The system, SX-8G Plus, is equipped with an 8-nozzle dispensing unit, a thermal cycler, a cooled reagent reservoir, four tip storage racks, four microplate platforms, buffer reservoirs, an agarose gel electrophoresis unit, a power supply, a pump for exchanging electrophoresis buffer, and a CCD camera. Automation of nucleic acid extraction and purification, the most difficult step in automating genetic analysis and gene manipulation, was realized using magnetic beads with Magtration Technology, which we have previously developed for automating the handling of paramagnetic beads. Using this system, we could perform the automated separation and purification of DNA fragments by agarose gel electrophoresis starting from sample loading. The system would enable the automation of almost all procedures in genetic analysis and gene manipulation.

5.
6.
The constituents of large, multisubunit protein complexes dictate their functions in cells, but determining their precise molecular makeup in vivo is challenging. One example of such a complex is the cellulose synthesis complex (CSC), which in plants synthesizes cellulose, the most abundant biopolymer on Earth. In growing plant cells, CSCs exist in the plasma membrane as six-lobed rosettes that contain at least three different cellulose synthase (CESA) isoforms, but the number and stoichiometry of CESAs in each CSC are unknown. To begin to address this question, we performed quantitative photobleaching of GFP-tagged AtCESA3-containing particles in living Arabidopsis thaliana cells using variable-angle epifluorescence microscopy and developed a set of information-based step detection procedures to estimate the number of GFP molecules in each particle. The step detection algorithms account for changes in signal variance due to changing numbers of fluorophores, and the subsequent analysis avoids common problems associated with fitting multiple Gaussian functions to binned histogram data. The analysis indicates that at least 10 GFP-AtCESA3 molecules can exist in each particle. These procedures can be applied to photobleaching data for any protein complex with large numbers of fluorescently tagged subunits, providing a new analytical tool with which to probe complex composition and stoichiometry.
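A hedged sketch of information-criterion step counting on a photobleaching trace, using a generic greedy piecewise-constant fit with a BIC-style penalty. Unlike the published procedure, it does not model the change in signal variance with fluorophore number; function names and the synthetic trace are illustrative:

    import numpy as np

    def _sse(seg):
        return float(np.sum((seg - seg.mean()) ** 2))

    def count_bleach_steps(trace, max_steps=20):
        # Greedy piecewise-constant fit: a new change point is kept only if it
        # lowers a BIC-style cost  n*log(SSE/n) + k*log(n).  The number of
        # downward level changes is taken as the photobleaching step count.
        y = np.asarray(trace, dtype=float)
        n = len(y)
        bkps = []                                   # interior change points
        for _ in range(max_steps):
            bounds = [0] + bkps + [n]
            best_gain, best_idx = 0.0, None
            for lo, hi in zip(bounds[:-1], bounds[1:]):
                seg = y[lo:hi]
                if len(seg) < 4:
                    continue
                base = _sse(seg)
                for i in range(2, len(seg) - 1):
                    gain = base - _sse(seg[:i]) - _sse(seg[i:])
                    if gain > best_gain:
                        best_gain, best_idx = gain, lo + i
            if best_idx is None:
                break
            old_sse = sum(_sse(y[lo:hi]) for lo, hi in zip(bounds[:-1], bounds[1:]))
            new_sse = old_sse - best_gain
            bic_old = n * np.log(old_sse / n) + len(bkps) * np.log(n)
            bic_new = n * np.log(new_sse / n) + (len(bkps) + 1) * np.log(n)
            if bic_new >= bic_old:
                break
            bkps = sorted(bkps + [best_idx])
        bounds = [0] + bkps + [n]
        means = [y[lo:hi].mean() for lo, hi in zip(bounds[:-1], bounds[1:])]
        return sum(1 for a, b in zip(means[:-1], means[1:]) if b < a)

    # synthetic trace with three bleaching steps plus Gaussian noise
    rng = np.random.default_rng(0)
    levels = np.concatenate([np.full(50, 3.0), np.full(60, 2.0),
                             np.full(40, 1.0), np.full(80, 0.0)])
    print(count_bleach_steps(levels + rng.normal(0, 0.15, levels.size)))  # typically 3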

7.
X-ray crystallography is a critical tool in the study of biological systems. It is able to provide information that has been a prerequisite to understanding the fundamentals of life. It is also a method that is central to the development of new therapeutics for human disease. Significant time and effort are required to determine and optimize many macromolecular structures because of the need for manual interpretation of complex numerical data, often using many different software packages, and the repeated use of interactive three-dimensional graphics. The Phenix software package has been developed to provide a comprehensive system for macromolecular crystallographic structure solution with an emphasis on automation. This has required the development of new algorithms that minimize or eliminate subjective input in favor of built-in expert-systems knowledge, the automation of procedures that are traditionally performed by hand, and the development of a computational framework that allows a tight integration between the algorithms. The application of automated methods is particularly appropriate in the field of structural proteomics, where high throughput is desired. Features in Phenix for the automation of experimental phasing with subsequent model building, molecular replacement, structure refinement and validation are described and examples given of running Phenix from both the command line and graphical user interface.

8.
Two-photon probe excitation data are commonly presented as absorption cross section or molecular brightness (the detected fluorescence rate per molecule). We report two-photon molecular brightness spectra for a diverse set of organic and genetically encoded probes with an automated spectroscopic system based on fluorescence correlation spectroscopy. The two-photon action cross section can be extracted from molecular brightness measurements at low excitation intensities, while peak molecular brightness (the maximum molecular brightness with increasing excitation intensity) is measured at higher intensities at which probe photophysical effects become significant. The spectral shape of these two parameters was similar across all dye families tested. Peak molecular brightness spectra, which can be obtained rapidly and with reduced experimental complexity, can thus serve as a first-order approximation to cross-section spectra in determining optimal wavelengths for two-photon excitation, while providing additional information pertaining to probe photostability. The data shown should assist in probe choice and experimental design for multiphoton microscopy studies. Further, we show that, by the addition of a passive pulse splitter, nonlinear bleaching can be reduced, resulting in a twofold enhancement of the fluorescence signal in fluorescence correlation spectroscopy. This increase in fluorescence signal, together with the observed resemblance of action cross section and peak brightness spectra, suggests higher-order photobleaching pathways for two-photon excitation.
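The molecular brightness used here is the detected fluorescence rate per molecule; in the standard FCS normalisation the mean number of molecules in the focal volume is N = 1/G(0), so a minimal sketch of the conversion (background and afterpulsing corrections omitted, example numbers hypothetical) is:

    def molecular_brightness(mean_count_rate_kHz, g0):
        # In the standard FCS normalisation the average number of molecules in
        # the focal volume is N = 1/G(0), so the molecular brightness is
        # <F>/N = <F> * G(0) (detected counts per molecule per unit time).
        n_molecules = 1.0 / g0
        return mean_count_rate_kHz / n_molecules

    # hypothetical values: 120 kHz mean count rate and G(0) = 0.02  ->  N = 50
    print(molecular_brightness(120.0, 0.02), "kHz per molecule")   # 2.4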

9.
Single molecule detection of target molecules specifically bound by paired fluorescently labeled probes has shown great potential for sensitive quantitation of biomolecules. To date, no reports have rigorously evaluated the analytical capabilities of a single molecule detection platform employing this dual-probe approach or the performance of its data analysis methodology. In this paper, we describe a rapid, automated, and sensitive multicolor single molecule detection apparatus and a novel extension of coincident event counting based on detection of fluorescent probes. The approach estimates the number of dual-labeled molecules of interest from the total number of coincident fluorescent events observed by correcting for unbound probes that randomly pass through the interrogation zone simultaneously. Event counting was evaluated on three combinations of distinct fluorescence channels and was demonstrated to outperform conventional spatial cross-correlation in generating a wider linear dynamic response to target molecules. Furthermore, this approach succeeded in detecting subpicomolar concentrations of a model RNA target to which fluorescently labeled oligonucleotide probes were hybridized in a complex background of RNA. These results illustrate that the fluorescent event counting approach described represents a general tool for rapid sensitive quantitative analysis of any sample analyte, including nucleic acids and proteins, for which pairs of specific probes can be developed.
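A minimal sketch of the chance-coincidence correction described, assuming the two probe channels are independent so that the expected number of random coincidences over n_windows detection windows is n_a * n_b / n_windows. The function and numbers are illustrative, not the paper's exact estimator:

    def corrected_coincidences(n_a, n_b, n_coinc, n_windows):
        # n_a, n_b  : total events in channels A and B
        # n_coinc   : observed coincident events
        # n_windows : total number of independent detection windows
        # Assuming independent channels and rare events, the expected chance
        # coincidences are n_a * n_b / n_windows; the remainder is attributed
        # to genuinely dual-labelled molecules.
        expected_random = n_a * n_b / n_windows
        return max(n_coinc - expected_random, 0.0)

    # hypothetical example: 5000 and 4000 single-channel events, 260 coincidences
    # over 1e6 windows -> 20 expected by chance, ~240 true dual-labelled events
    print(corrected_coincidences(5000, 4000, 260, 1_000_000))   # 240.0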

10.
Normal-mode analysis of lateral diffusion on a bounded membrane surface.
The normal-mode analysis of fluorescence redistribution after photobleaching, introduced for the characterization of lateral diffusion on spherical membrane surfaces, has been generalized and extended to other surface geometries. Theoretical expressions are derived for the characteristic values and orthogonal characteristic functions of the diffusion equations for cylindrical surfaces, ellipsoids of revolution and dimpled discoidal surfaces. On the basis of these results, a simple analytical function is proposed as an empirical solution for the analysis of photobleaching data on a variety of discoidal surfaces. Special experimental and computational methods for determining the surface-diffusion coefficient are described, and demonstrated with data for lipid diffusion in erythrocyte membranes.
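The generic structure behind such a normal-mode analysis (a textbook eigenfunction expansion, written here in LaTeX; the surface-specific characteristic values and the empirical fitting function proposed in the paper are not reproduced): the post-bleach concentration obeys the surface diffusion equation, and expanding the bleach profile in the orthogonal eigenfunctions of the surface Laplacian gives a sum of exponentially decaying modes whose rates are fixed by the geometry; on a sphere of radius R the eigenvalues are \lambda_n = n(n+1)/R^2, so the slowest mode relaxes with time constant R^2/(2D).

    \frac{\partial c}{\partial t} = D\,\Delta_s c,
    \qquad \Delta_s \psi_n = -\lambda_n \psi_n,
    \qquad
    c(\mathbf{r},t) = c_\infty + \sum_{n \ge 1} a_n\, \psi_n(\mathbf{r})\, e^{-\lambda_n D t},
    \qquad
    a_n = \frac{\int_S \left[c(\mathbf{r},0) - c_\infty\right]\, \psi_n(\mathbf{r})\, dS}{\int_S \psi_n^2\, dS}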

11.
12.
Counting cells is often a necessary but tedious step for in vitro cell culture. Consistent cell concentrations ensure experimental reproducibility and accuracy. Cell counts are important for monitoring cell health and proliferation rate, assessing immortalization or transformation, seeding cells for subsequent experiments, transfection or infection, and preparing for cell-based assays. It is important that cell counts be accurate, consistent, and fast, particularly for quantitative measurements of cellular responses. Despite this need for speed and accuracy, 71% of 400 researchers surveyed [1] count cells using a hemocytometer. While hemocytometry is inexpensive, it is laborious and subject to user bias and misuse, which results in inaccurate counts. Hemocytometers are made of special optical glass on which cell suspensions are loaded in specified volumes and counted under a microscope. Sources of error in hemocytometry include: uneven cell distribution in the sample, too many or too few cells in the sample, subjective decisions as to whether a given cell falls within the defined counting area, contamination of the hemocytometer, user-to-user variation, and variation of hemocytometer filling rate [2]. To alleviate the tedium associated with manual counting, 29% of researchers count cells using automated cell counting devices; these include vision-based counters, systems that detect cells using the Coulter principle, or flow cytometry [1]. For most researchers, the main barrier to using an automated system is the price associated with these large benchtop instruments [1]. The Scepter cell counter is an automated handheld device that offers the automation and accuracy of Coulter counting at a relatively low cost. The system employs the Coulter principle of impedance-based particle detection [3] in a miniaturized format using a combination of analog and digital hardware for sensing, signal processing, data storage, and graphical display. The disposable tip is engineered with a microfabricated, cell-sensing zone that enables discrimination by cell size and cell volume at sub-micron and sub-picoliter resolution. Enhanced with precision liquid-handling channels and electronics, the Scepter cell counter reports cell population statistics graphically displayed as a histogram.
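A minimal sketch of how a Coulter-principle counter turns impedance pulses into a cell concentration: count the pulses whose derived particle volume falls inside a size gate and divide by the metered sample volume. The gate limits, units, and synthetic pulse data below are illustrative, not Scepter specifications:

    import numpy as np

    def cell_concentration(pulse_volumes_fl, sampled_volume_ul, gate_fl=(100, 10000)):
        # pulse_volumes_fl  : detected particle volumes in femtolitres (one per pulse)
        # sampled_volume_ul : metered sample volume in microlitres
        # gate_fl           : (min, max) volume gate to exclude debris and doublets
        # Returns the concentration of gated particles in cells/mL.
        v = np.asarray(pulse_volumes_fl, dtype=float)
        gated = np.count_nonzero((v >= gate_fl[0]) & (v <= gate_fl[1]))
        return gated / (sampled_volume_ul * 1e-3)

    # synthetic pulse list: mostly ~1500 fL cells plus some sub-100 fL debris
    rng = np.random.default_rng(1)
    pulses = np.concatenate([rng.normal(1500, 200, 5000), rng.uniform(10, 80, 800)])
    print(f"{cell_concentration(pulses, 50):.2e} cells/mL")   # ~1e5 cells/mL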

13.
Bioinformatics support for high-throughput proteomics
In the "post-genome" era, mass spectrometry (MS) has become an important method for the analysis of proteome data. The rapid advancement of this technique in combination with other methods used in proteomics results in an increasing number of high-throughput projects. This leads to an increasing amount of data that needs to be archived and analyzed.To cope with the need for automated data conversion, storage, and analysis in the field of proteomics, the open source system ProDB was developed. The system handles data conversion from different mass spectrometer software, automates data analysis, and allows the annotation of MS spectra (e.g. assign gene names, store data on protein modifications). The system is based on an extensible relational database to store the mass spectra together with the experimental setup. It also provides a graphical user interface (GUI) for managing the experimental steps which led to the MS data. Furthermore, it allows the integration of genome and proteome data. Data from an ongoing experiment was used to compare manual and automated analysis. First tests showed that the automation resulted in a significant saving of time. Furthermore, the quality and interpretability of the results was improved in all cases.  相似文献   

14.
Glycosylation is considered one of the most complex and structurally diverse post-translational modifications of proteins. Glycans play important roles in many biological processes such as protein folding, regulation of protein stability, solubility, and serum half-life. One way to study glycosylation is the systematic structural characterization of protein glycans using glycomics methodology based on mass spectrometry (MS). The most prevalent bottleneck in glycomic analyses is the laborious sample preparation. In this study, we therefore aimed to improve sample preparation through automation. We recently demonstrated the successful application of an automated high-throughput (HT) glycan permethylation protocol based on 96-well microplates in the analysis of purified glycoproteins, and here we tested whether these HT methodologies could be applied to more complex biological starting materials. Our automated 96-well-plate-based permethylation method gave results closely comparable to those of established glycomic methodology: very similar glycomic profiles were obtained for complex glycoprotein/protein mixtures derived from heterogeneous mouse tissues. Automated N-glycan release, enrichment, and permethylation of samples proved convenient, robust, and reliable. We therefore conclude that these automated procedures are a step toward a fully automated, fast, and reliable glycomic profiling system for the analysis of complex biological materials.

15.
Recent advances in the field of intravital imaging have for the first time allowed us to conduct pharmacokinetic and pharmacodynamic studies at the single-cell level in live animal models. Due to these advances, there is now a critical need for automated analysis of pharmacokinetic data. To address this, we began by surveying common thresholding methods to determine which would be most appropriate for identifying fluorescently labeled drugs in intravital imaging. We then developed a segmentation algorithm that allows semi-automated analysis of pharmacokinetic data at the single-cell level. Ultimately, we were able to show that drug concentrations can indeed be extracted from serial intravital imaging in an automated fashion. We believe that the application of this algorithm will be of value to the analysis of intravital microscopy imaging, particularly when imaging drug action at the single-cell level.
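A hedged sketch of the kind of threshold-plus-segmentation readout discussed, using a generic Otsu threshold and connected-component labelling from scikit-image. The calibration factor converting fluorescence to drug concentration and the synthetic image are assumptions; this is not the paper's algorithm:

    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    def per_cell_drug_signal(image, calibration=1.0, min_area=20):
        # Mean fluorescence per segmented cell, scaled by an (assumed) calibration
        # factor converting intensity to drug concentration.
        mask = image > threshold_otsu(image)
        labels = label(mask)
        return [calibration * r.mean_intensity
                for r in regionprops(labels, intensity_image=image)
                if r.area >= min_area]

    # synthetic frame: two bright "cells" on a dim background
    img = np.full((64, 64), 10.0)
    img[10:20, 10:20] = 120.0
    img[40:55, 35:50] = 80.0
    print(per_cell_drug_signal(img))   # -> [120.0, 80.0]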

16.
High-performance liquid chromatography (HPLC) is a powerful technique that enables reliable and quantitative determination of enzyme activities. The purpose of the work reported here was to develop an automatic assay of enzymatic activity. Using an automatic sample processor and injector, a program was developed which allows the complete automation of each step of the analysis (calibration, enzymatic reaction, HPLC determination). This program can be adapted to different experimental requirements, as each step can be performed independently and each input (time, volume, number of standards) is entered by answering questions asked by the instrument. Using this approach both kinetic and single-point determinations can be carried out, and in the latter case different samples can be analysed sequentially. This paper reports the automated analysis of trypsin.
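A minimal sketch of the kinetic readout such an automated assay produces: a calibration line converts product peak areas to concentrations, and the activity is the slope of concentration versus reaction time. The linear-response assumption, units, and numbers are illustrative, not the published protocol:

    import numpy as np

    def activity_from_kinetics(times_min, peak_areas, std_conc_uM, std_areas):
        # A calibration line (peak area vs. standard concentration) converts
        # product peak areas to concentrations; the activity is the slope of
        # concentration vs. reaction time (µM product per minute).
        cal_slope, cal_icpt = np.polyfit(std_conc_uM, std_areas, 1)
        conc = (np.asarray(peak_areas) - cal_icpt) / cal_slope
        rate, _ = np.polyfit(times_min, conc, 1)
        return rate

    # hypothetical calibration standards and a 4-point kinetic run
    print(activity_from_kinetics(
        times_min=[0, 5, 10, 15],
        peak_areas=[500, 3000, 5500, 8000],
        std_conc_uM=[0, 10, 20, 40],
        std_areas=[500, 5500, 10500, 20500]))   # -> 1.0 µM/min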

17.
An investigation on the photobleaching behavior of fluorescein in microscopy was carried out through a systematic analysis of photobleaching mechanisms. The individual photochemical reactions of fluorescein were incorporated into a theoretical analysis and mathematical simulation to study the photochemical processes leading to photobleaching of fluorescein in microscopy. The photobleaching behavior of free and bound fluorescein has also been investigated by experimental means. Both the theoretical simulation and experimental data show that photobleaching of fluorescein in microscopy is, in general, not a single-exponential process. The simulation suggests that the non-single-exponential behavior is caused by the oxygen-independent, proximity-induced triplet-triplet or triplet-ground state dye reactions of bound fluorescein in microscopy. The single-exponential process is a special case of photobleaching behavior when the reactions between the triplet dye and molecular oxygen are dominant.
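The non-single-exponential behaviour can be checked on a measured decay by comparing single- and double-exponential fits; a minimal sketch on synthetic data (the rate constants and noise level are illustrative, not fluorescein photochemistry parameters):

    import numpy as np
    from scipy.optimize import curve_fit

    def single_exp(t, a, k):
        return a * np.exp(-k * t)

    def double_exp(t, a1, k1, a2, k2):
        return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

    # synthetic bleaching curve with a fast and a slow component
    t = np.linspace(0, 60, 121)
    rng = np.random.default_rng(2)
    y = 0.6 * np.exp(-0.30 * t) + 0.4 * np.exp(-0.03 * t) + rng.normal(0, 0.005, t.size)

    p1, _ = curve_fit(single_exp, t, y, p0=[1.0, 0.1])
    p2, _ = curve_fit(double_exp, t, y, p0=[0.5, 0.2, 0.5, 0.02])
    rss1 = np.sum((y - single_exp(t, *p1)) ** 2)
    rss2 = np.sum((y - double_exp(t, *p2)) ** 2)
    # a much lower residual sum of squares for the double-exponential fit
    # indicates that a single-exponential model is inadequate
    print(f"single-exp RSS {rss1:.4f}  vs  double-exp RSS {rss2:.4f}")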

18.
Fluorescence imaging is often used to monitor dynamic cellular functions under conditions of very low light intensities to avoid photodamage to the cell and rapid photobleaching. Determination of the time of a fluorescence change relative to a rapid high time-resolution event, such as an action potential or pulse stimulation, is challenged by the low photon rate and the need to use imaging frame durations that limit the time resolution. To overcome these limitations, we developed a time superresolution method named event correlation microscopy that aligns repetitive events with respect to the high time-resolution events. We describe the algorithm of the method, its step response function, and a theoretical, computational, and experimental analysis of its precision, providing guidelines for camera exposure time settings depending on imaging signal properties and camera parameters for optimal time resolution. We also demonstrate the utility of the method to recover rapid nonstepwise kinetics by deconvolution fits. The event correlation microscopy method provides time superresolution beyond the photon rate limit and imaging frame duration with well-defined precision.
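A loose sketch of event-aligned averaging, one way to read the core idea: because the precise event time is not locked to the camera clock, frames from many repeats can be re-binned by their offset from each event, giving an averaged response on a grid finer than the frame duration. The binning scheme and demo below are my assumptions, not the published event correlation microscopy algorithm or its precision analysis:

    import numpy as np

    def event_aligned_average(frame_times, frame_values, event_times,
                              window=(-0.5, 0.5), n_bins=50):
        # frame_times / frame_values : per-trial lists of frame mid-times and values
        # event_times                : precise event time for each trial
        # The fine-grained average is possible because event times are not locked
        # to the frame clock, so offsets cover the window densely across trials.
        edges = np.linspace(window[0], window[1], n_bins + 1)
        sums = np.zeros(n_bins)
        counts = np.zeros(n_bins)
        for t, v, t0 in zip(frame_times, frame_values, event_times):
            offsets = np.asarray(t) - t0
            idx = np.digitize(offsets, edges) - 1
            ok = (idx >= 0) & (idx < n_bins)
            np.add.at(sums, idx[ok], np.asarray(v)[ok])
            np.add.at(counts, idx[ok], 1)
        centers = 0.5 * (edges[:-1] + edges[1:])
        with np.errstate(invalid="ignore", divide="ignore"):
            return centers, sums / counts

    # demo: a step at each event time, sampled with 0.1-s frames at random phases
    rng = np.random.default_rng(3)
    trials_t, trials_v, events = [], [], []
    for _ in range(200):
        t0 = rng.uniform(4.9, 5.1)                      # precise event time
        frames = np.arange(0, 10, 0.1) + rng.uniform(0, 0.1)
        trials_t.append(frames)
        trials_v.append((frames > t0).astype(float))    # 0 before, 1 after the event
        events.append(t0)
    centers, avg = event_aligned_average(trials_t, trials_v, events)
    # avg rises from 0 to 1 near offset 0 on a ~20-ms grid, finer than the frames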

19.
The use of capillary electrophoresis with fluorescently labeled nucleic acids revolutionized DNA sequencing, effectively fueling the genomic revolution. We present an application of this technology for the high-throughput structural analysis of nucleic acids by chemical and enzymatic mapping ('footprinting'). We achieve the throughput and data quality necessary for genomic-scale structural analysis by combining fluorophore labeling of nucleic acids with novel quantitation algorithms. We implemented these algorithms in the CAFA (capillary automated footprinting analysis) open-source software that is downloadable gratis from https://simtk.org/home/cafa. The accuracy, throughput and reproducibility of CAFA analysis are demonstrated using hydroxyl radical footprinting of RNA. The versatility of CAFA is illustrated by dimethyl sulfate mapping of RNA secondary structure and DNase I mapping of a protein binding to a specific sequence of DNA. Our experimental and computational approach facilitates the acquisition of high-throughput chemical probing data for solution structural analysis of nucleic acids.
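A hedged sketch of the band-quantitation step that underlies such footprinting analysis: locate one peak per nucleotide position in the electropherogram, integrate each, and normalise the areas to obtain relative reactivities. The peak-finding parameters and fixed integration window are illustrative defaults, not CAFA's peak-fitting model:

    import numpy as np
    from scipy.signal import find_peaks

    def peak_reactivities(trace, min_distance=8, prominence=5.0, half_window=4):
        # Per-band reactivities from a 1-D electropherogram trace: peaks are
        # located with scipy's find_peaks, each is integrated over a fixed
        # window, and the areas are normalised to their mean.
        y = np.asarray(trace, dtype=float)
        peaks, _ = find_peaks(y, distance=min_distance, prominence=prominence)
        areas = np.array([y[max(p - half_window, 0):p + half_window + 1].sum()
                          for p in peaks])
        return peaks, areas / areas.mean()

    # synthetic trace: Gaussian bands of varying amplitude on a flat baseline
    x = np.arange(600)
    centers = np.arange(30, 580, 18)
    rng = np.random.default_rng(4)
    amps = rng.uniform(20, 100, centers.size)
    trace = sum(a * np.exp(-0.5 * ((x - c) / 3.0) ** 2) for a, c in zip(amps, centers))
    positions, reactivities = peak_reactivities(trace)
    print(len(positions), reactivities.round(2)[:5])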

20.
The problem of detecting DNA motifs with functional relevance in real biological sequences is difficult due to a number of biological, statistical, and computational issues, and also because of the lack of knowledge about the structure of the searched patterns. Many algorithms are implemented as fully automated processes, which are often based on a guess of input parameters by the user at the very first step. In this paper, we present a novel method for the detection of seeded DNA motifs, composed of regions with differing extents of variability. The method is based on a multi-step approach, which was implemented in a motif-searching web tool (MOST). Overrepresented exact patterns are extracted from input sequences and clustered to produce motif core regions, which are then extended and scored to generate seeded motifs. The combination of automated pattern-discovery algorithms and different display tools for the evaluation and selection of results at several analysis steps can potentially lead to much more meaningful results than complete automation can produce. Experimental results on different yeast and human real datasets proved the methodology to be a promising solution for finding seeded motifs. The MOST web tool is freely available at http://telethon.bio.unipd.it/bioinfo/MOST.
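A minimal sketch of the first step described, extracting overrepresented exact patterns, using k-mer counts compared against shuffled-sequence backgrounds; the clustering, extension, and scoring stages of MOST are not reproduced, and the z-score rule below is an assumption:

    import random
    from collections import Counter

    def overrepresented_kmers(seqs, k=6, n_shuffles=20, z_cut=4.0, seed=0):
        # Exact k-mers whose observed counts exceed the mean of per-sequence
        # shuffled backgrounds by z_cut standard deviations (variance floored
        # at 1 to avoid division by zero).  A generic first-pass filter.
        def count(seq_list):
            c = Counter()
            for s in seq_list:
                for i in range(len(s) - k + 1):
                    c[s[i:i + k]] += 1
            return c

        rng = random.Random(seed)
        observed = count(seqs)
        background = [count(["".join(rng.sample(list(s), len(s))) for s in seqs])
                      for _ in range(n_shuffles)]
        hits = {}
        for kmer, obs in observed.items():
            bg = [b.get(kmer, 0) for b in background]
            mean = sum(bg) / n_shuffles
            var = sum((x - mean) ** 2 for x in bg) / n_shuffles
            z = (obs - mean) / max(var, 1.0) ** 0.5
            if z >= z_cut:
                hits[kmer] = round(z, 1)
        return hits

    # toy input: random sequences, each with the planted motif 'TATAAA'
    rng = random.Random(1)
    seqs = ["".join(rng.choice("ACGT") for _ in range(80)) + "TATAAA" +
            "".join(rng.choice("ACGT") for _ in range(20)) for _ in range(25)]
    print(overrepresented_kmers(seqs))   # 'TATAAA' should appear with a high z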
