Similar Literature

20 similar documents found.
1.
Palo K, Mets U, Jäger S, Kask P, Gall K. Biophysical Journal 2000, 79(6):2858-2866
Fluorescence correlation spectroscopy (FCS) has proven to be a powerful technique with single-molecule sensitivity. Recently, it has found a complement in the form of fluorescence intensity distribution analysis (FIDA). Here we introduce a fluorescence fluctuation method that combines the features of both techniques. It is based on the global analysis of a set of photon count number histograms recorded simultaneously with multiple widths of counting time intervals. This fluorescence intensity multiple distributions analysis (FIMDA) distinguishes fluorescent species on the basis of both specific molecular brightness and translational diffusion time. The combined information, extracted from a single measurement, effectively increases the readout by one dimension and thus breaks the individual limits of FCS and FIDA. In this paper a theory is introduced that describes the dependence of photon count number distributions on diffusion coefficients. The theory is applied to a series of photon count number histograms corresponding to different widths of counting time intervals. The ability of the method to determine specific brightness values, diffusion times, and concentrations from mixtures is demonstrated on simulated data, and its experimental utility is shown by determining the binding constant of a protein-ligand interaction, exemplifying its broad applicability in the life sciences.
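As a rough illustration of FIMDA's raw input (a sketch under assumed names and values, not the authors' code), the snippet below builds photon count number histograms from a single photon arrival-time trace at several counting interval widths; a FIMDA fit would then analyse all of these histograms globally.

```python
import numpy as np

def count_histograms(arrival_times, bin_widths, t_total):
    """Photon count number histograms for several counting interval widths.

    arrival_times : photon detection times (s)
    bin_widths    : counting time interval widths to evaluate (s)
    Returns a dict mapping width -> (count values, frequencies).
    """
    histograms = {}
    for width in bin_widths:
        n_bins = int(t_total / width)
        # photons falling in each consecutive interval of this width
        counts, _ = np.histogram(arrival_times, bins=n_bins,
                                 range=(0, n_bins * width))
        values, freqs = np.unique(counts, return_counts=True)
        histograms[width] = (values, freqs)
    return histograms

# Example: a Poisson photon stream (~5 kHz) from a single species
rng = np.random.default_rng(0)
t_total = 10.0
photons = np.sort(rng.uniform(0, t_total, size=50_000))
hists = count_histograms(photons, bin_widths=[20e-6, 100e-6, 500e-6, 2e-3],
                         t_total=t_total)
```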

2.
Single-molecule detection technologies are becoming a powerful readout format to support ultra-high-throughput screening. These methods are based on the analysis of fluorescence intensity fluctuations detected from a small confocal volume element. The fluctuating signal contains information about the mass and brightness of the different species in a mixture. The authors demonstrate a number of applications of fluorescence intensity distribution analysis (FIDA), which discriminates molecules by their specific brightness. Examples of assays based on brightness changes induced by quenching/dequenching of fluorescence, fluorescence energy transfer, and multiple-binding stoichiometry are given for important drug targets such as kinases and proteases. FIDA also provides a powerful method to extract correct biological data in the presence of compound fluorescence.

3.
Fluorescence fluctuation methods such as fluorescence correlation spectroscopy and fluorescence intensity distribution analysis (FIDA) have proven to be versatile tools for studying molecular interactions with single molecule sensitivity. Another well-known fluorescence technique is the measurement of the fluorescence lifetime. Here, we introduce a method that combines the benefits of both FIDA and fluorescence lifetime analysis. It is based on fitting the two-dimensional histogram of the number of photons detected in counting time intervals of given width and the sum of excitation to detection delay times of these photons. Referred to as fluorescence intensity and lifetime distribution analysis (FILDA), the technique distinguishes fluorescent species on the basis of both their specific molecular brightness and the lifetime of the excited state, and is also able to determine absolute fluorophore concentrations. The combined information yielded by FILDA results in significantly increased accuracy compared to that of FIDA or fluorescence lifetime analysis alone. In this paper, the theory of FILDA is elaborated and applied to both simulated and experimental data. The outstanding power of this technique in resolving different species is shown by quantifying the binding of calmodulin to a peptide ligand, thus indicating the potential for application of FILDA to similar problems in the life sciences.
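A minimal sketch of the two-dimensional histogram FILDA fits, assuming TCSPC-style data in which each photon carries a macroscopic arrival time and an excitation-to-detection delay; the function and parameter names are illustrative, not the authors' implementation.

```python
import numpy as np

def filda_histogram(arrival_times, delays, width, t_total,
                    max_n=30, n_delay_bins=40):
    """2-D FILDA-style histogram: photon count per counting interval
    versus the summed excitation-to-detection delay of those photons.

    arrival_times : macroscopic photon detection times (s)
    delays        : TCSPC delay of each photon (ns)
    """
    edges = np.arange(0, t_total + width, width)
    which = np.digitize(arrival_times, edges) - 1
    n_bins = len(edges) - 1
    counts = np.bincount(which, minlength=n_bins)[:n_bins]
    delay_sums = np.bincount(which, weights=delays, minlength=n_bins)[:n_bins]
    hist, _, _ = np.histogram2d(counts, delay_sums,
                                bins=[np.arange(max_n + 1), n_delay_bins])
    return hist  # rows: photon count n; columns: summed delay time

rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0, 1.0, 20_000))
delays = rng.exponential(3.0, 20_000)        # mono-exponential demo decay (ns)
H = filda_histogram(times, delays, width=1e-4, t_total=1.0)
```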

4.
We have established a new type of homogeneous immunoassay based on nanoparticles (nanoparticle immunoassay, or NPIA), analyzed using fluorescence intensity distribution analysis (FIDA). This method allows the characterization of single fluorescently labeled molecules or particles with respect to their molecular brightness and concentration. Upon binding of conjugates to molecules coupled to the nanoparticle surface, the brightness of the complex scales with the number of bound conjugates. The complexes can then be distinguished accurately from free conjugate, and the concentrations of free and bound molecules can be determined reliably. In this study we present various examples of NPIAs in which capture antibodies were linked to the nanoparticles, which were either artificial beads or bacteria. Two assay formats have been developed: first, direct labeling of the conjugate was used to quantitate free antigen through competition experiments, and second, an antigen-directed antibody was labeled to establish an assay similar to a sandwich ELISA setup. The major advantages of an NPIA are its robustness and high signal-to-noise ratio at short measurement times, as demonstrated with a miniaturized experiment in a Nanocarrier™ holding a volume of 1 µl/well. In addition to the good data quality, NPIAs are straightforward to perform because they require no washing steps. NPIAs open new dimensions for high-throughput pharmaceutical screening and diagnostics. Assay development times can be reduced significantly because of a simple toolbox principle that is applicable to most types of assays.

5.
It is generally accepted that the number of neurons in a given brain area far exceeds the number of neurons needed to carry any specific function controlled by that area. For example, motor areas of the human brain contain tens of millions of neurons that control the activation of tens or at most hundreds of muscles. This massive redundancy implies the covariation of many neurons, which constrains the population activity to a low-dimensional manifold within the space of all possible patterns of neural activity. To gain a conceptual understanding of the complexity of the neural activity within a manifold, it is useful to estimate its dimensionality, which quantifies the number of degrees of freedom required to describe the observed population activity without significant information loss. While there are many algorithms for dimensionality estimation, we do not know which are well suited for analyzing neural activity. The objective of this study was to evaluate the efficacy of several representative algorithms for estimating the dimensionality of linearly and nonlinearly embedded data. We generated synthetic neural recordings with known intrinsic dimensionality and used them to test the algorithms’ accuracy and robustness. We emulated some of the important challenges associated with experimental data by adding noise, altering the nature of the embedding of the low-dimensional manifold within the high-dimensional recordings, varying the dimensionality of the manifold, and limiting the amount of available data. We demonstrated that linear algorithms overestimate the dimensionality of nonlinear, noise-free data. In cases of high noise, most algorithms overestimated the dimensionality. We thus developed a denoising algorithm based on deep learning, the “Joint Autoencoder”, which significantly improved subsequent dimensionality estimation. Critically, we found that all algorithms failed when the intrinsic dimensionality was high (above 20) or when the amount of data used for estimation was low. Based on the challenges we observed, we formulated a pipeline for estimating the dimensionality of experimental neural data.
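The following is not the paper's pipeline, only a minimal PCA-based (linear) estimator run on synthetic data; it illustrates the kind of overestimation the authors report when a nonlinearly embedded manifold is analysed with a linear method.

```python
import numpy as np

def linear_dim_estimate(data, var_threshold=0.95):
    """Dimensionality = number of principal components needed to
    explain `var_threshold` of the total variance."""
    centered = data - data.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)   # singular values
    var = s**2 / np.sum(s**2)
    return int(np.searchsorted(np.cumsum(var), var_threshold) + 1)

# Synthetic "recordings": a 3-D latent signal embedded in 100 channels
rng = np.random.default_rng(1)
latent = rng.standard_normal((5000, 3))             # intrinsic dimensionality 3
mixing = rng.standard_normal((3, 100))
recordings = latent @ mixing + 0.1 * rng.standard_normal((5000, 100))
print(linear_dim_estimate(recordings))              # ~3: linear embedding, low noise

# A pointwise nonlinearity makes the embedding nonlinear; the linear
# estimate then typically exceeds 3, the overestimation reported above.
print(linear_dim_estimate(np.tanh(recordings)))
```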

6.
Over the last decade the number of applications of fluorescence correlation spectroscopy (FCS) has grown rapidly. Here we describe the development and application of a software package, FCS Data Processor, to analyse the acquired correlation curves. The algorithms combine strong analytical power with flexibility in use. It is possible to generate initial guesses, and to link and constrain fit parameters, to improve the accuracy and speed of analysis. A global analysis approach, which is most effective in analysing autocorrelation curves determined from fluorescence fluctuations of complex biophysical systems, can also be implemented. The software contains a library of frequently used models that can be easily extended to include user-defined models. The use of the software is illustrated by the analysis of different experimental fluorescence fluctuation data sets obtained with Rhodamine Green in aqueous solution and enhanced green fluorescent protein in vitro and in vivo. An erratum to this article is available. Victor V. Skakun, Mark A. Hink and Anatoli V. Digris contributed equally to this work.
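As a hedged sketch of the kind of fit such a package automates (illustrative values; not the FCS Data Processor code), here is a single-species 3-D diffusion autocorrelation model fitted with SciPy, with the confocal structure parameter kappa held fixed:

```python
import numpy as np
from scipy.optimize import curve_fit

def g_diff3d(tau, N, tau_d, kappa=5.0):
    """One-component 3-D diffusion autocorrelation:
    G(tau) = (1/N) * (1 + tau/tau_d)^-1 * (1 + tau/(kappa^2 tau_d))^-1/2
    kappa: axial/lateral ratio of the confocal volume (fixed here)."""
    return (1.0 / N) / ((1 + tau / tau_d) *
                        np.sqrt(1 + tau / (kappa**2 * tau_d)))

# Synthetic correlation curve for a Rhodamine-Green-like dye (~30 µs)
tau = np.logspace(-6, 0, 120)
rng = np.random.default_rng(2)
data = g_diff3d(tau, N=2.0, tau_d=3e-5) + 2e-3 * rng.standard_normal(tau.size)

popt, pcov = curve_fit(lambda t, N, td: g_diff3d(t, N, td),
                       tau, data, p0=[1.0, 1e-4])
print(popt)  # recovered N and tau_d
```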

7.
Modeling the non-steady-state O2 uptake (VO2) on-kinetics of high-intensity exercise in the time domain with empirical models is commonly performed with gradient-descent-based methods. However, these procedures may impair the confidence of the parameter estimation when the modeling functions are not continuously differentiable and when the estimation corresponds to an ill-posed problem. To cope with these problems, an implementation of simulated annealing (SA) methods was compared with the GRG2 algorithm (a gradient-descent method known for its robustness). Forty simulated VO2 on-responses were generated to mimic the real time course for transitions from light- to high-intensity exercise, with a signal-to-noise ratio of 20 dB. They were modeled twice with a discontinuous double-exponential function using both estimation methods. GRG2 significantly biased two estimated kinetic parameters of the first exponential (the time delay td1 and the time constant tau1) and impaired the precision (i.e., standard deviation) of the baseline A0, td1, and tau1 compared with SA. SA significantly improved the precision of the three parameters of the second exponential (the asymptotic increment A2, the time delay td2, and the time constant tau2). Nevertheless, td2 was significantly biased by both procedures, and the large confidence intervals of the whole second-component parameter set limit their interpretation. To compare both algorithms on experimental data, 26 subjects each performed two transitions from 80 W to 80% maximal O2 uptake on a cycle ergometer, and O2 uptake was measured breath by breath. More than 88% of the kinetic parameter estimations done with the SA algorithm produced the lowest residual sum of squares between the experimental data points and the model. Repeatability coefficients were better with GRG2 for A1 but better with SA for A2 and tau2. Our results demonstrate that the implementation of SA significantly improves the estimation of most of these kinetic parameters, but a large inaccuracy remains in estimating the parameter values of the second exponential.
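The sketch below is not the authors' SA implementation; it uses SciPy's dual_annealing on the same kind of discontinuous double-exponential on-response, with illustrative parameter values and bounds:

```python
import numpy as np
from scipy.optimize import dual_annealing

def vo2_model(t, A0, A1, td1, tau1, A2, td2, tau2):
    """Discontinuous double-exponential VO2 on-response: each
    exponential component starts only after its own time delay."""
    y = np.full_like(t, A0, dtype=float)
    y += np.where(t >= td1, A1 * (1 - np.exp(-(t - td1) / tau1)), 0.0)
    y += np.where(t >= td2, A2 * (1 - np.exp(-(t - td2) / tau2)), 0.0)
    return y

rng = np.random.default_rng(3)
t = np.arange(0.0, 600.0, 1.0)                 # seconds
true = (0.8, 1.6, 15, 25, 0.4, 120, 180)       # illustrative values (L/min, s)
y = vo2_model(t, *true) + 0.05 * rng.standard_normal(t.size)

sse = lambda p: np.sum((y - vo2_model(t, *p)) ** 2)
bounds = [(0, 2), (0, 3), (0, 60), (5, 120), (0, 2), (60, 300), (30, 600)]
result = dual_annealing(sse, bounds, maxiter=200, seed=3)
print(result.x)
```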

8.

Background

Mathematical modeling has attracted broad interest in the field of biology. These models describe the metabolism underlying a biological phenomenon with mathematical equations, such that the observed time-course profile of the biological data fits the model. However, the estimation of the unknown parameters of the model is a challenging task. Many algorithms have been developed for parameter estimation, but none of them is entirely capable of finding the best solution. The purpose of this paper is to develop a method for precise estimation of the parameters of a biological model.

Methods

In this paper, a novel particle swarm optimization algorithm based on a decomposition technique is developed. Its root mean square error is then compared with those of simple particle swarm optimization, the iterative unscented Kalman filter, and simulated annealing for two different simulation scenarios and a real data set related to the metabolism of the CAD system.

Results

Our proposed algorithm yields 54.39% and 26.72% average reductions in root mean square error when applied to the simulation and experimental data, respectively.

Conclusion

The results show that metaheuristic approaches such as the proposed method are well suited to finding solutions of nonlinear problems with many unknown parameters.
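The proposed method adds a decomposition technique on top of particle swarm optimization; as a baseline sketch only (not the paper's algorithm), a plain global-best PSO applied to a toy parameter-estimation objective might look like this:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain (global-best) particle swarm optimization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)]                     # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# Toy parameter-estimation objective: fit amplitude and decay rate
t = np.linspace(0, 5, 50)
y = 2.0 * np.exp(-1.3 * t)
rmse = lambda p: np.sqrt(np.mean((y - p[0] * np.exp(-p[1] * t)) ** 2))
print(pso(rmse, bounds=[(0, 5), (0, 5)]))
```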

9.
Fluorescence lifetime imaging (FLIM) is a novel and powerful imaging technique for analyzing the structure and function of complex biological tissues and cells. Traditional time-domain FLIM data analysis methods do not account for the interactions among fluorophores, or between fluorophores and their surroundings, which in practice can give rise to complex, continuously distributed fluorescence lifetimes; as a result, they often fit the measured autofluorescence intensity-decay data of biological tissue poorly. This paper proposes a fitting algorithm based on artificial neural networks (ANNs) to compute the decay kinetics of biological fluorophores. The method effectively builds a nonlinear model of fluorophore decay kinetics and offers strong nonlinear modeling capability, good robustness, high fitting accuracy, and short computation times. Calculations show that, compared with single- and multi-parameter exponential decay functions, this fitting method achieves better consistency and a smaller computational load for data from multi-well plate assays of certain fluorophores. The prospects of applying this fitting algorithm to fluorescence lifetime imaging are also discussed.
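A minimal sketch of the idea, assuming scikit-learn rather than the authors' network: a small MLP fitted to a decay curve generated from a continuous lifetime distribution.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic decay with a continuous lifetime distribution: average the
# exponential decay over a sample of lifetimes drawn around 2 ns and 6 ns.
rng = np.random.default_rng(4)
t = np.linspace(0, 20, 400).reshape(-1, 1)            # time axis (ns)
lifetimes = rng.normal([2.0, 6.0], [0.3, 0.8], (200, 2)).ravel()
decay = np.exp(-t / lifetimes).mean(axis=1)           # distributed-lifetime decay
noisy = decay + 0.01 * rng.standard_normal(decay.shape)

# A small MLP as a flexible nonlinear model of the measured decay curve
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(t, noisy)
fitted = net.predict(t)                               # smooth reconstruction
```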

10.
APSY-NMR with proteins: practical aspects and backbone assignment
Automated projection spectroscopy (APSY) is an NMR technique for the recording of discrete sets of projection spectra from higher-dimensional NMR experiments, with automatic identification of the multidimensional chemical shift correlations by the dedicated algorithm GAPRO. This paper presents technical details for optimizing the set-up and the analysis of APSY-NMR experiments with proteins. Since experience so far indicates that the sensitivity for signal detection may become the principal limiting factor for applications with larger proteins or more dilute samples, we performed an APSY-NMR experiment at the limit of sensitivity, and then investigated the effects of varying selected experimental parameters. To obtain the desired reference data, a 4D APSY-HNCOCA experiment with a 12-kDa protein was recorded in 13 min. Based on the analysis of this data set and on general considerations, expressions for the sensitivity of APSY-NMR experiments have been generated to guide the selection of the projection angles, the calculation of the sweep widths, and the choice of other acquisition and processing parameters. In addition, a new peak picking routine and a new validation tool for the final result of the GAPRO spectral analysis are introduced. In continuation of previous reports on the use of APSY-NMR for sequence-specific resonance assignment of proteins, we present the results of a systematic search for suitable combinations of a minimal number of four- and five-dimensional APSY-NMR experiments that can provide the input for algorithms that generate automated protein backbone assignments.

11.
Recent evidence suggests that the EGF receptor oligomerizes or clusters in cells even in the absence of agonist ligand. To assess the status of EGF receptors in live cells, an EGF receptor fused to eGFP was stably expressed in CHO cells and studied using fluorescence correlation spectroscopy and fluorescent brightness analysis. By modifying FIDA for use in a two-dimensional system with quantal brightnesses, a method was developed to quantify the degree of clustering of the receptors on the cell surface. The analysis demonstrates that under physiological conditions, the EGF receptor exists in a complex equilibrium involving single molecules and clusters of two or more receptors. Acute depletion of cellular cholesterol enhanced EGF receptor clustering whereas cholesterol loading decreased receptor clustering, indicating that receptor aggregation is sensitive to the lipid composition of the membrane.

12.
We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modeling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor-of-two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster resonance energy transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters.
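Not the authors' Bayesian machinery, but a per-photon maximum-likelihood sketch for the mono-exponential case with a finite TCSPC window; it shows the kind of event-level estimate the abstract refers to. The window length and lifetime are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def mle_lifetime(delays, T):
    """ML lifetime estimate from photon delays truncated at window length T.
    For the truncated exponential, the MLE solves
        mean(delays) = tau - T / (exp(T/tau) - 1)."""
    mean_d = np.mean(delays)
    score = lambda tau: mean_d - tau + T / np.expm1(T / tau)
    return brentq(score, T / 500, 10 * T)

rng = np.random.default_rng(5)
T = 25.0                                 # ns TCSPC window
raw = rng.exponential(3.2, 200_000)      # true lifetime: 3.2 ns
delays = raw[raw < T]                    # only delays inside the window survive
print(mle_lifetime(delays, T))           # ~3.2, despite the truncation
```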

13.
Currently, results of gel electrophoresis are commonly documented in digital formats by image acquisition instruments. In this study, gel images tuned by a common image processing software package, Photoshop, were assessed to understand the transforming algorithms and their impacts on quantitative analysis. TotalLab 100, an electrophoresis gel image analysis software package, was applied for image quantitation and evaluation. The three most frequently used image tuning functions, adjustment of the brightness, contrast, and grayscale span (level) of images, were investigated using both data generated from a standard grayscale tablet and an actual electrophoresis gel image. The influence of these procedures on the grayscale transformation between the input and output images was analyzed. Although all three procedures differentially improved the visualization of the input image, adjusting the contrast disrupted the quantitative information because of its nonlinear transforming algorithm. Under certain conditions, adjusting the brightness or the level could preserve the quantitative information because those transforming algorithms are linear. When the minimum and maximum grayscales of a gel image are identified, maximally stretching the level with a commercial software package may therefore significantly improve the quality of a gel image without jeopardizing quantitative analysis.
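A toy numeric check (not the paper's TotalLab analysis; the transfer functions are assumptions) of why linear transforms preserve background-subtracted band ratios while a sigmoid contrast curve does not:

```python
import numpy as np

# Two band intensities and a background, measured on an 8-bit grayscale
band_a, band_b, background = 120.0, 60.0, 20.0

def ratio(transform):
    """Band-intensity ratio after background subtraction, post-transform."""
    a, b, bg = (transform(v) for v in (band_a, band_b, background))
    return (a - bg) / (b - bg)

level = lambda v: np.clip((v - 10) * (255 / 200), 0, 255)   # linear remap
contrast = lambda v: 255 / (1 + np.exp(-(v - 128) / 30))    # sigmoid, nonlinear

print(ratio(lambda v: v))   # 2.5 (ground truth)
print(ratio(level))         # 2.5 -> linear transform preserves the ratio
print(ratio(contrast))      # != 2.5 -> quantitative information disrupted
```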

14.
Many image analysis systems are available for processing the images produced by laser scanning of DNA microarrays. The image processing system takes pixel-level intensity data and converts it to a set of gene-level expression or copy number summaries that will be used in further analyses. Image analysis systems currently in use differ with regard to the specific algorithms they implement, ease of use, and cost. Thus, it would be desirable to have an objective means of comparing systems. Here we describe a systematic method of comparing image processing results produced by different image analysis systems using a series of replicate microarray experiments. We demonstrate the method with a comparison of cDNA microarray data generated by the UCSF Spot and the GenePix image processing systems.

15.
MOTIVATION: Diffusible and non-diffusible gene products play a major role in body plan formation. A quantitative understanding of the spatio-temporal patterns formed in body plan formation, obtained by using simulation models, is an important addition to experimental observation. The inverse modelling approach consists of describing body plan formation by a rule-based model and fitting the model parameters to real observed data. In body plan formation, the data are usually obtained from fluorescent immunohistochemistry or in situ hybridizations. Inferring model parameters by comparing such data to those from simulation is a major computational bottleneck. An important aspect in this process is the choice of method used for parameter estimation. When no information on parameters is available, parameter estimation is mostly done by means of heuristic algorithms. RESULTS: We show that parameter estimation for pattern formation models can be performed efficiently using an evolution strategy (ES). As a case study we use a quantitative spatio-temporal model of the regulatory network for early development in Drosophila melanogaster. In order to estimate the parameters, the simulated results are compared to a time series of gene products involved in the network, obtained with immunohistochemistry. We demonstrate that a (mu,lambda)-ES can be used to find good quality solutions in the parameter estimation. We also show that an ES with multiple populations is 5-140 times as fast as parallel simulated annealing for this case study, and that combining ES with a local search results in an efficient parameter estimation method.
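A bare-bones (mu,lambda)-ES sketch, shown only to illustrate the comma selection scheme; the paper's ES uses self-adaptive strategy parameters and multiple populations, which are omitted here. All values are illustrative.

```python
import numpy as np

def comma_es(objective, x0, sigma=0.5, mu=5, lam=35, n_gen=200, seed=0):
    """Minimal (mu, lambda) evolution strategy with one global step size.
    "Comma" selection: parents never survive into the next generation."""
    rng = np.random.default_rng(seed)
    parents = x0 + sigma * rng.standard_normal((mu, x0.size))
    for _ in range(n_gen):
        # each offspring mutates a randomly chosen parent
        idx = rng.integers(0, mu, lam)
        offspring = parents[idx] + sigma * rng.standard_normal((lam, x0.size))
        fitness = np.array([objective(o) for o in offspring])
        parents = offspring[np.argsort(fitness)[:mu]]
        sigma *= 0.99                     # simple annealing of the step size
    return parents[0]

# Toy use: recover two parameters of a decaying pattern from "data"
t = np.linspace(0, 1, 20)
data = 3.0 * np.exp(-4.0 * t)
obj = lambda p: np.sum((data - p[0] * np.exp(-p[1] * t)) ** 2)
print(comma_es(obj, x0=np.array([1.0, 1.0])))
```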

16.
An inexpensive microcomputer-based image analysis system is described in which an Apple microcomputer acquires data from a video camera or video cassette recorder and measures the brightness of the image received at specified points or areas. Suggested uses for this apparatus include measurements of chlorophyll fluorescence in algal cells, determination of the effects of ultraviolet illumination on chlorophyll fluorescence, estimation of total amounts of chlorophyll in a microscope field, and microspectrophotometric and microdensitometric measurements. A similar system using the IBM personal computer with a different interface is also described.

17.
This paper presents a new direct method for estimating the average center of rotation (CoR). An existing least-squares (LS) solution has been shown by previous works to have reduced accuracy for data with small range of motion (RoM). Alternative methods proposed to improve the CoR estimation use iterative algorithms. However, in this paper we show that with a carefully chosen normalization scheme, constrained least-squares solutions can perform as well as iterative approaches, even for challenging problems with significant noise and small RoM. In particular, enforcing the normalization constraint avoids poor fits near plane singularities that can affect the existing LS method. Our formulation has an exact solution, accounts for multiple markers simultaneously, and does not depend on manually-adjusted parameters. Simulation tests compare the method to four published CoR estimation techniques. The results show that the new approach has the accuracy of the iterative methods as well as the short computation time and repeatability of a least-squares solution. In addition, application of the new method to experimental motion capture data of the thumb carpometacarpal (CMC) joint yielded a more plausible CoR location compared to the previously reported LS solution and required less time than all four alternative techniques.
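For context, the classic unconstrained sphere-fit least-squares baseline that such methods improve upon, applied to a synthetic small-RoM data set (all values illustrative); the degraded accuracy at small range of motion is the failure mode discussed above:

```python
import numpy as np

def cor_least_squares(points):
    """Sphere-fit center of rotation for one marker about a fixed pivot.

    Solves |p_i - c|^2 = r^2 in least squares by linearization:
    2 p_i . c + (r^2 - |c|^2) = |p_i|^2, which is linear in (c, k)."""
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]  # the center; sol[3] encodes r^2 - |c|^2

# Synthetic marker arc with a small range of motion about a known pivot
rng = np.random.default_rng(6)
center, radius = np.array([10.0, -5.0, 30.0]), 80.0
theta = np.deg2rad(rng.uniform(0, 15, 200))          # only 15 degrees of RoM
phi = rng.uniform(0, 2 * np.pi, 200)
pts = center + radius * np.stack([np.sin(theta) * np.cos(phi),
                                  np.sin(theta) * np.sin(phi),
                                  np.cos(theta)], axis=1)
pts += 0.5 * rng.standard_normal(pts.shape)          # marker noise
print(cor_least_squares(pts))    # estimate degrades as RoM shrinks
```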

18.
We present a method for the reconstruction of three stimulus-evoked time-varying synaptic input conductances from voltage recordings. Our approach is based on exploiting the stochastic nature of synaptic conductances and membrane voltage. Starting with the assumption that the variances of the conductances are known, we use a stochastic differential equation to model dynamics of membrane potential and derive equations for first and second moments that can be solved to find conductances. We successfully apply the new reconstruction method to simulated data. We also explore the robustness of the method as the assumptions of the underlying model are relaxed. We vary the noise levels, the reversal potentials, the number of stimulus repetitions, and the accuracy of conductance variance estimation to quantify the robustness of reconstruction. These studies pave the way for the application of the method to experimental data.
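A sketch of the forward model only, with illustrative units and parameter values: simulating the conductance-driven membrane SDE produces the kind of voltage recordings the reconstruction method inverts.

```python
import numpy as np

# Forward model: a conductance-driven membrane SDE,
#   C dV/dt = -gL (V - EL) - gE(t) (V - EE) - gI(t) (V - EI) + noise
rng = np.random.default_rng(7)
dt, n = 1e-4, 20_000                    # 2 s at 0.1 ms resolution
C, gL, EL, EE, EI = 1.0, 0.05, -65.0, 0.0, -80.0
t = np.arange(n) * dt

# Stimulus-evoked excitatory/inhibitory conductances with stochastic jitter
gE = np.clip(0.02 * (1 + np.sin(2 * np.pi * 2 * t))
             + 0.005 * rng.standard_normal(n), 0, None)
gI = np.clip(0.04 * (1 + np.cos(2 * np.pi * 2 * t))
             + 0.005 * rng.standard_normal(n), 0, None)

V = np.empty(n)
V[0] = EL
for k in range(n - 1):                  # Euler-Maruyama integration
    I = -gL * (V[k] - EL) - gE[k] * (V[k] - EE) - gI[k] * (V[k] - EI)
    V[k + 1] = V[k] + dt * I / C + 0.05 * np.sqrt(dt) * rng.standard_normal()
```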

19.
The diversity of immunoglobulin (IG) and T cell receptor (TR) chains depends on several mechanisms: combinatorial diversity, which is a consequence of the number of V, D and J genes, and the N-REGION diversity, which creates an extensive and clonal somatic diversity at the V-J and V-D-J junctions. For the IG, the diversity is further increased by somatic hypermutations. The number of different junctions per chain and per individual is estimated to be 10^12. We have chosen the human TRAV-TRAJ junctions as an example in order to characterize the criteria required for a standardized analysis of the IG and TR V-J and V-D-J junctions, based on the IMGT-ONTOLOGY concepts, and to serve as a first IMGT junction reference set (IMGT, http://imgt.cines.fr). We performed a thorough statistical analysis of 212 human rearranged TRAV-TRAJ sequences, which were aligned and analysed by the integrated IMGT/V-QUEST software, which includes IMGT/JunctionAnalysis, and then manually expert-verified. Furthermore, we compared these 212 sequences with 37 other human TRAV-TRAJ junction sequences for which particularities (potential sequence polymorphisms, sequencing errors, etc.) prevented IMGT/JunctionAnalysis from providing the correct biological results, according to expert verification. Using statistical learning, we constructed an automatic warning system to predict whether new, automatically analysed TRAV-TRAJ sequences should be manually re-checked, and we estimated the robustness of this automatic warning system.

20.
The analysis of experimental data from the photocycle of bacteriorhodopsin (bR) as sums of exponentials has accumulated a large amount of information on its kinetics which is still controversial. One reason for ambiguous results can be found in the inherent instabilities connected with the fitting of noisy data by sums of exponentials. Nevertheless, there are strategies to optimize the experiments and the data analysis by a proper combination of well known techniques. This paper describes an applicable approach based on the correct weighting of the data, a separation of the linear and the non-linear parameters in the process of the least squares approximation, and a statistical analysis applying the correlation matrix, the determinant of Fisher's information matrix, and the variance of the parameters as a measure of the reliability of the results. In addition, the confidence regions for the linear approximation of the non-linear model are compared with confidence regions for the true non-linear model. Evaluation techniques and rules for an optimum experimental design are mainly exemplified by the analysis of numerically generated model data with increasing complexity. The estimation of the number of exponentials significant for the interpretation of a given set of data is demonstrated by using records from eight absorption and photocurrent experiments on the photocycle of bacteriorhodopsin.
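A minimal variable-projection sketch (assumed two-exponential model and illustrative weights), showing the separation of linear amplitudes from nonlinear rate parameters and the correct weighting that the paper emphasizes:

```python
import numpy as np
from scipy.optimize import least_squares

def varpro_residual(rates, t, y, w):
    """Weighted residual with the linear amplitudes eliminated (variable
    projection): for fixed decay rates, the best amplitudes follow from
    a weighted linear least-squares solve."""
    Phi = np.exp(-np.outer(t, rates))            # basis of exponentials
    sw = np.sqrt(w)
    amps, *_ = np.linalg.lstsq(sw[:, None] * Phi, sw * y, rcond=None)
    return sw * (y - Phi @ amps)

rng = np.random.default_rng(8)
t = np.linspace(0, 10, 400)
y_true = 1.0 * np.exp(-0.4 * t) + 0.5 * np.exp(-3.0 * t)
sigma = 0.01 * (1 + 0.5 * np.sqrt(np.abs(y_true)))   # signal-dependent noise
y = y_true + sigma * rng.standard_normal(t.size)
w = 1.0 / sigma**2                                    # correct weighting

fit = least_squares(varpro_residual, x0=[0.1, 1.0], args=(t, y, w))
print(fit.x)   # recovered decay rates; amplitudes come from the inner solve
```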
