Similar Articles
 20 similar articles found (search time: 15 ms)
1.
Ideally detailed neuron models should make use of morphological and electrophysiological data from the same cell. However, this rarely happens. Typically a modeler will choose a cell morphology from a public database, assign standard values for R_a, C_m, and other parameters and then do the modeling study. The assumption is that the model will produce results representative of what might be obtained experimentally. To test this assumption we developed models of CA1 hippocampal pyramidal neurons using 4 different morphologies obtained from 3 public databases. The multiple run fitter in NEURON was used to fit parameter values in each of the 4 morphological models to match experimental data recorded from 19 CA1 pyramidal cells. Fits with fixed standard parameter values produced results that were generally not representative of our experimental data. However, when parameter values were allowed to vary, excellent fits were obtained in almost all cases, but the fitted parameter values were very different among the 4 reconstructions and did not match standard values. The differences in fitted values can be explained by very different diameters, total lengths, membrane areas and volumes among the reconstructed cells, reflecting either cell heterogeneity or issues with the reconstruction data. The fitted values compensated for these differences to make the database cells and experimental cells more similar electrotonically. We conclude that models using fully reconstructed morphologies need to be calibrated with experimental data (even when morphological and electrophysiological data come from the same cell), model results should be generated with multiple reconstructions, morphological and experimental cells should come from the same strain of animal at the same age, and blind use of standard parameter values in models that use reconstruction data may not produce representative experimental results. Action Editor: Steve Redman
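The calibration idea above can be illustrated with a minimal sketch: fit the passive membrane time constant tau = R_m * C_m to a step-current charging transient. This is not the NEURON multiple-run-fitter protocol used in the paper; the synthetic trace and parameter values are hypothetical, and the fit uses a simple log-linear least squares rather than the paper's optimizer.

```python
import math

def charging_curve(t, v_max, tau):
    # Passive membrane step response: V(t) = V_max * (1 - exp(-t / tau))
    return v_max * (1.0 - math.exp(-t / tau))

def fit_tau(times, volts, v_max):
    # Linearize: ln(1 - V/V_max) = -t/tau, then least squares for the slope
    num = sum(t * t for t in times)
    den = sum(-t * math.log(1.0 - v / v_max) for t, v in zip(times, volts))
    return num / den

# Synthetic "experimental" charging transient with tau = R_m * C_m = 20 ms
tau_true, v_max = 20.0, 10.0
times = [float(i) for i in range(1, 50)]   # ms
volts = [charging_curve(t, v_max, tau_true) for t in times]
tau_fit = fit_tau(times, volts, v_max)
```

On noiseless data the fitted tau recovers the generating value exactly; with real recordings one would fit R_a, C_m, and R_m jointly against several traces, which is where the morphology-dependent compensation described above appears.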

2.
When the fluorescence intensity of a chromophore attached to or bound in an enzyme relates to a specific reactive step in the enzymatic reaction, a single molecule fluorescence study of the process reveals a time sequence in the fluorescence emission that can be analyzed to derive kinetic and mechanistic information. Reports of various experimental results and corresponding theoretical studies have provided a basis for interpreting these data and understanding the methodology. We have found it useful to parallel experiments with Monte Carlo simulations of potential models hypothesized to describe the reaction kinetics. The simulations can be adapted to include experimental limitations, such as limited data sets, and complexities such as dynamic disorder, where reaction rates appear to change over time. By using models that are known a priori, the simulations reveal some of the challenges of interpreting finite single-molecule data sets by employing various statistical signatures that have been identified.
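A minimal Monte Carlo sketch of this approach: simulate a two-state (dark/bright) fluorophore with exponential dwell times from known rates, then recover a rate from the finite data set. The rate constants and sample size here are hypothetical, and this simple model omits the dynamic disorder discussed above.

```python
import random

def simulate_dwells(k_on, k_off, n_events, rng):
    """Draw alternating exponential dwell times for a two-state fluorophore."""
    off_dwells = [rng.expovariate(k_on) for _ in range(n_events)]   # dark periods
    on_dwells = [rng.expovariate(k_off) for _ in range(n_events)]   # bright periods
    return on_dwells, off_dwells

rng = random.Random(42)
on_d, off_d = simulate_dwells(k_on=2.0, k_off=0.5, n_events=20000, rng=rng)
# The maximum-likelihood estimate of an exponential rate is 1 / (mean dwell time)
k_off_est = len(on_d) / sum(on_d)
k_on_est = len(off_d) / sum(off_d)
```

Because the generating model is known a priori, the scatter of such estimates across repeated finite runs directly shows how much confidence a single experimental data set of the same size deserves.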

3.
Several models of flocking have been promoted based on simulations with qualitatively naturalistic behavior. In this paper we provide the first direct application of computational modeling methods to infer flocking behavior from experimental field data. We show that this approach is able to infer general rules for interaction, or lack of interaction, among members of a flock or, more generally, any community. Using experimental field measurements of homing pigeons in flight we demonstrate the existence of a basic distance dependent attraction/repulsion relationship and show that this rule is sufficient to explain collective behavior observed in nature. Positional data of individuals over time are used as input data to a computational algorithm capable of building complex nonlinear functions that can represent the system behavior. Topological nearest neighbor interactions are considered to characterize the components within this model. The efficacy of this method is demonstrated with simulated noisy data generated from the classical (two dimensional) Vicsek model. When applied to experimental data from homing pigeon flights we show that the more complex three dimensional models are capable of simulating trajectories, as well as exhibiting realistic collective dynamics. The simulations of the reconstructed models are used to extract properties of the collective behavior in pigeons, and how it is affected by changing the initial conditions of the system. Our results demonstrate that this approach may be applied to construct models capable of simulating trajectories and collective dynamics using experimental field measurements of herd movement. From these models, the behavior of the individual agents (animals) may be inferred.
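The classical two-dimensional Vicsek model used above as a validation benchmark is compact enough to sketch directly. Parameter values (box size, interaction radius, noise, speed) are illustrative choices, not those of the paper, and metric rather than topological neighbourhoods are used for brevity.

```python
import numpy as np

def vicsek_step(pos, theta, box, radius, eta, speed, rng):
    """One update of the classical 2D Vicsek model with periodic boundaries."""
    new_theta = np.empty(len(pos))
    for i in range(len(pos)):
        d = pos - pos[i]
        d -= box * np.round(d / box)                 # minimum-image convention
        nb = (d ** 2).sum(axis=1) <= radius ** 2     # neighbours (includes self)
        # Align with the mean heading of neighbours, plus uniform angular noise
        mean = np.arctan2(np.sin(theta[nb]).mean(), np.cos(theta[nb]).mean())
        new_theta[i] = mean + eta * (rng.random() - 0.5)
    vel = speed * np.column_stack([np.cos(new_theta), np.sin(new_theta)])
    return (pos + vel) % box, new_theta

rng = np.random.default_rng(0)
pos = rng.random((50, 2)) * 5.0
theta = rng.random(50) * 2.0 * np.pi
for _ in range(200):
    pos, theta = vicsek_step(pos, theta, box=5.0, radius=1.0,
                             eta=0.1, speed=0.05, rng=rng)
# Polar order parameter: near 1 for aligned flocks, near 0 for disordered motion
order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
```

Trajectories generated this way, with added observation noise, are exactly the kind of synthetic positional data on which an inference algorithm can be tested before being trusted on field measurements.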

4.
5.
Metcalf DG, Law PB, DeGrado WF. Proteins, 2007, 67(2): 375-384
We present a molecular modeling protocol that selects modeled protein structures based on experimental mutagenesis results. The computed effect of a point mutation should be consistent with its experimental effect for correct models; mutations that do not affect protein stability and function should not affect the computed energy of a correct model while destabilizing mutations should have unfavorable computed energies. On the other hand, an incorrect model will likely display computed energies that are inconsistent with experimental results. We added terms to our energy function which penalize models that are inconsistent with experimental results. This creates a selective advantage for models that are consistent with experimental results in the Monte Carlo simulated annealing protocol we use to search conformational space. We calibrated our protocol to predict the structure of transmembrane helix dimers using glycophorin A as a model system. Inclusion of mutational data in this protocol compensates for the limitations of our force field and the limitations of our conformational search. We demonstrate an application of this structure prediction protocol by modeling the transmembrane region of the BNIP3 apoptosis factor.

6.
This paper compares regression and neural network modeling approaches to predict competitive biosorption equilibrium data. The regression approach is based on the fitting of modified Langmuir-type isotherm models to experimental data. Neural networks, on the other hand, are non-parametric statistical estimators capable of identifying patterns in data and correlations between input and output. Our results show that the neural network approach outperforms traditional regression-based modeling in correlating and predicting the simultaneous uptake of copper and cadmium by a microbial biosorbent. The neural network is capable of accurately predicting unseen data when provided with limited amounts of data for training. Because neural networks are purely data-driven models, they are more suitable for obtaining accurate predictions than for probing the physical nature of the biosorption process.
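The regression side of this comparison is typically a competitive (extended) Langmuir-type isotherm. A minimal sketch, with hypothetical constants rather than fitted values from the paper (the neural-network side is omitted):

```python
def extended_langmuir(c1, c2, qmax1, k1, k2):
    # Competitive (extended) Langmuir: uptake of species 1 in a binary mixture,
    # where species 2 competes for the same binding sites
    return qmax1 * k1 * c1 / (1.0 + k1 * c1 + k2 * c2)

# Hypothetical constants: uptake of copper alone vs. with cadmium present
q_alone = extended_langmuir(1.0, 0.0, qmax1=0.8, k1=2.0, k2=1.5)
q_mixed = extended_langmuir(1.0, 1.0, qmax1=0.8, k1=2.0, k2=1.5)
```

The competing species always lowers the predicted uptake, which is the physical constraint this parametric form builds in and a purely data-driven network does not.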

7.
W J Deal. Biopolymers, 1973, 12(9): 2057-2073
Accurate equilibrium binding data for the oxygenation of hemoglobin are used (a) to show that various models for cooperativity are inconsistent with the best available experimental data, (b) to determine the equilibrium constants for binding of 2,3-diphosphoglycerate to hemoglobin molecules in intermediate stages of oxygenation, and (c) to deduce a mechanism for allosteric effects in hemoglobin which is consistent with the best available experimental data. The total free energy of cooperativity is defined and discussed.
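The cooperativity such models must reproduce is commonly summarized by the Hill equation. A sketch with textbook-style illustrative values (p50 ≈ 26 mmHg, Hill coefficient ≈ 2.8 for hemoglobin); this phenomenological form is not one of the mechanistic models the paper evaluates:

```python
def hill_saturation(p_o2, p50, n):
    # Hill equation: fractional saturation Y = p^n / (p50^n + p^n)
    return p_o2 ** n / (p50 ** n + p_o2 ** n)

y_half = hill_saturation(26.0, p50=26.0, n=2.8)   # exactly 0.5 at p50
y_low = hill_saturation(10.0, 26.0, 2.8)          # steep sigmoid: low saturation
y_high = hill_saturation(60.0, 26.0, 2.8)         # ... vs near-full saturation
```

A non-cooperative binder (n = 1) gives a hyperbolic curve; n > 1 produces the sigmoidal unloading behaviour that mechanistic models of cooperativity must explain.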

8.
Two new statistical models based on Monte Carlo Simulation (MCS) have been developed to score peptide matches in shotgun proteomic data and incorporated in a database search program, MassMatrix (www.massmatrix.net). The first model evaluates peptide matches based on the total abundance of matched peaks in the experimental spectra. The second model evaluates amino acid residue tags within MS/MS spectra. The two models provide complementary scores for peptide matches that result in higher confidence in peptide identification when significant scores are returned from both models. The MCS-based models use a variance reduction technique that improves estimation precision. Due to the high computational expense of MCS-based models, peptide matches were prefiltered by other statistical models before further evaluation by the MCS-based models. Receiver operating characteristic analysis of the data sets confirmed that MCS-based models improved the overall performance of the MassMatrix search software, especially for low-mass accuracy data sets.

9.
Small-angle scattering (SAS) of X-rays and neutrons is a fundamental tool for studying nanostructural properties, in particular of biological macromolecules in solution. In structural biology, SAS recently transformed from a specialization into a general technique, leading to a dramatic increase in the number of publications reporting structural models. The growing amount of data recorded and published has led to an urgent need for a global SAS repository that includes both primary data and models. In response to this, a small-angle scattering biological data bank (SASBDB) was designed in 2014 and is available for public access at www.sasbdb.org. SASBDB is a comprehensive, free and searchable repository of SAS experimental data and models deposited together with the relevant experimental conditions, sample details and instrument characteristics. SASBDB is rapidly growing, and presently has over 1,000 entries containing more than 1,600 models. We describe here the overall organization and procedures of SASBDB, paying particular attention to user-relevant information during submission. Perspectives for further development, in particular integration with the OneDep system of the Protein Data Bank and widening of SASBDB to include new types of data and models, are discussed.

10.
11.
12.

Quantitative dynamical models facilitate the understanding of biological processes and the prediction of their dynamics. These models usually comprise unknown parameters, which have to be inferred from experimental data. For quantitative experimental data, there are several methods and software tools available. However, for qualitative data the available approaches are limited and computationally demanding. Here, we consider the optimal scaling method which has been developed in statistics for categorical data and has been applied to dynamical systems. This approach turns qualitative variables into quantitative ones, accounting for constraints on their relation. We derive a reduced formulation for the optimization problem defining the optimal scaling. The reduced formulation possesses the same optimal points as the established formulation but requires fewer degrees of freedom. Parameter estimation for dynamical models of cellular pathways revealed that the reduced formulation improves the robustness and convergence of optimizers. This resulted in substantially reduced computation times. We implemented the proposed approach in the open-source Python Parameter EStimation TOolbox (pyPESTO) to facilitate reuse and extension. The proposed approach enables efficient parameterization of quantitative dynamical models using qualitative data.
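The core computation in optimal scaling of ordered qualitative data is a monotone regression, classically solved by the pool-adjacent-violators algorithm (PAVA). A minimal unweighted sketch (not pyPESTO's implementation, and without the paper's reduced formulation):

```python
def pava(y):
    """Pool-adjacent-violators: nearest nondecreasing sequence in least squares.

    Monotone regression like this maps ordered categorical (qualitative)
    observations onto a quantitative axis while respecting their ordering.
    """
    # Each block stores [sum, count]; merge neighbours whose means violate
    # the nondecreasing constraint (compare via cross-multiplication)
    merged = []
    for v in y:
        merged.append([v, 1])
        while (len(merged) > 1 and
               merged[-2][0] * merged[-1][1] > merged[-1][0] * merged[-2][1]):
            s, c = merged.pop()
            merged[-1][0] += s
            merged[-1][1] += c
    out = []
    for s, c in merged:
        out.extend([s / c] * c)
    return out

fitted = pava([1.0, 3.0, 2.0, 4.0])   # the violating pair (3, 2) pools to 2.5
```

Each violating pair is replaced by its block mean, so the output is the closest nondecreasing sequence in the least-squares sense; the paper's contribution is a reformulation of the surrounding optimization with fewer degrees of freedom.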


13.
Systems biology aims to study the properties of biological systems in terms of the properties of their molecular constituents. This occurs frequently by a process of mathematical modelling. The first step in this modelling process is to unravel the interaction structure of biological systems from experimental data. Previously, an algorithm for gene network inference from gene expression perturbation data was proposed. Here, the algorithm is extended by using regression with subset selection. The performance of the algorithm is extensively evaluated on a set of data produced with gene network models at different levels of simulated experimental noise. Regression with subset selection outperforms the previously stated matrix inverse approach in the presence of experimental noise. Furthermore, this regression approach enables us to deal with under-determination, that is, when not all genes are perturbed. The results on incomplete data sets show that the new method performs well at higher numbers of perturbations, even when noise levels are high. At lower numbers of perturbations, although still being able to recover the majority of the connections, less confidence can be placed in the recovered edges.
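The underlying inference idea can be sketched for the fully perturbed, well-determined case: at steady state a linear network model gives A x + p = 0, so the interaction matrix A can be recovered from perturbations P and responses X by least squares. The 3-gene interaction matrix below is hypothetical, and this sketch uses plain least squares, not the paper's subset selection.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical ground-truth interaction matrix (diagonally dominant, invertible)
A_true = np.array([[-1.0,  0.5,  0.0],
                   [ 0.0, -1.0,  0.8],
                   [ 0.3,  0.0, -1.0]])
# One perturbation per gene; steady state A x + p = 0 gives x = -A^{-1} p
P = np.eye(3)
X = -np.linalg.solve(A_true, P)
noise = 0.001 * rng.standard_normal(X.shape)
# Recover A from noisy responses by least squares on A X = -P
# (transposed so each row of A is one regression problem)
A_est = np.linalg.lstsq((X + noise).T, -P.T, rcond=None)[0].T
```

With fewer perturbations than genes the system becomes under-determined, which is precisely where subset selection (restricting each row of A to a few nonzero entries) makes the regression well-posed again.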

14.
Neuron models, in particular conductance-based compartmental models, often have numerous parameters that cannot be directly determined experimentally and must be constrained by an optimization procedure. A common practice in evaluating the utility of such procedures is using a previously developed model to generate surrogate data (e.g., traces of spikes following step current pulses) and then challenging the algorithm to recover the original parameters (e.g., the value of maximal ion channel conductances) that were used to generate the data. In this fashion, the success or failure of the model fitting procedure to find the original parameters can be easily determined. Here we show that some model fitting procedures that provide an excellent fit in the case of such model-to-model comparisons provide ill-balanced results when applied to experimental data. The main reason is that surrogate and experimental data test different aspects of the algorithm’s function. When considering model-generated surrogate data, the algorithm is required to locate a perfect solution that is known to exist. In contrast, when considering experimental target data, there is no guarantee that a perfect solution is part of the search space. In this case, the optimization procedure must rank all imperfect approximations and ultimately select the best approximation. This aspect is not tested at all when considering surrogate data since at least one perfect solution is known to exist (the original parameters) making all approximations unnecessary. Furthermore, we demonstrate that distance functions based on extracting a set of features from the target data (such as time-to-first-spike, spike width, spike frequency, etc.)—rather than using the original data (e.g., the whole spike trace) as the target for fitting—are capable of finding imperfect solutions that are good approximations of the experimental data.
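A feature-based distance function of the kind described can be sketched in a few lines: reduce each trace to summary features, then compare models in feature space rather than point by point. The threshold-crossing spike detector, toy traces, and weights below are illustrative assumptions, not the paper's feature set.

```python
def spike_features(trace, dt, threshold=0.0):
    """Reduce a voltage trace to summary features for model-data comparison."""
    crossings = [i for i in range(1, len(trace))
                 if trace[i - 1] < threshold <= trace[i]]   # upward crossings
    return {
        "spike_count": len(crossings),
        "time_to_first_spike": crossings[0] * dt if crossings else float("inf"),
    }

def feature_distance(f_model, f_data, weights):
    # Weighted sum of absolute feature differences
    return sum(w * abs(f_model[k] - f_data[k]) for k, w in weights.items())

data = [-70.0, -70.0, 10.0, -70.0, -70.0, 10.0, -70.0]   # toy target: 2 spikes
model = [-70.0, -70.0, -70.0, 10.0, -70.0, 10.0, -70.0]  # candidate: same count,
d = feature_distance(spike_features(model, 0.1),          # first spike delayed
                     spike_features(data, 0.1),
                     {"spike_count": 1.0, "time_to_first_spike": 10.0})
```

A point-by-point trace distance would heavily penalize the small timing shift between these two trains; the feature distance scores them as close, which is exactly why such functions rank imperfect approximations more usefully.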

15.
16.
The study of nucleic acid hybridization is facilitated by computer mediated fitting of theoretical models to experimental data. This paper describes a non-linear curve fitting program, using the 'Patternsearch' algorithm, written in BASIC for the Apple II microcomputer. The advantages and disadvantages of using a microcomputer for local data processing are discussed.
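The original program was written in BASIC for the Apple II; the derivative-free pattern-search idea behind it can be sketched in modern Python as a Hooke-Jeeves-style coordinate poll. This is a generic reconstruction of the algorithm family, not the paper's code, applied here to a hypothetical straight-line fit.

```python
def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6):
    """Derivative-free minimisation by coordinate polling ('Patternsearch' style)."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                if ft < fx:                 # accept any improving poll point
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink                  # no direction improved: refine mesh
    return x, fx

# Fit y = a*x + b by least squares; the data satisfy a = 2, b = 1 exactly
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
sse = lambda p: sum((p[0] * x + p[1] - y) ** 2 for x, y in zip(xs, ys))
best, err = pattern_search(sse, [0.0, 0.0])
```

No derivatives of the objective are needed, which is what made the method practical on a small machine and still makes it useful for noisy or non-smooth fitting objectives.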

17.
Measurement of exchange of substances between blood and tissue has been a long-lasting challenge to physiologists, and considerable theoretical and experimental accomplishments were achieved before the development of positron emission tomography (PET). Today, when modeling data from modern PET scanners, little use is made of earlier microvascular research in the compartmental models, which have become the standard models by which the vast majority of dynamic PET data are analysed. However, modern PET scanners provide data with a sufficient temporal resolution and good counting statistics to allow estimation of parameters in models with more physiological realism. We explore the standard compartmental model and find that incorporation of blood flow leads to paradoxes, such as kinetic rate constants being time-dependent, and tracers being cleared from a capillary faster than they can be supplied by blood flow. The inability of the standard model to incorporate blood flow consequently raises a need for models that include more physiology, and we develop microvascular models which remove the inconsistencies. The microvascular models can be regarded as a revision of the input function. Whereas the standard model uses the organ inlet concentration as the concentration throughout the vascular compartment, we consider models that make use of spatial averaging of the concentrations in the capillary volume, which is what the PET scanner actually registers. The microvascular models are developed for both single- and multi-capillary systems and include effects of non-exchanging vessels. They are suitable for analysing dynamic PET data from any capillary bed using either intravascular or diffusible tracers, in terms of physiological parameters which include regional blood flow.
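The standard model being critiqued is, in its simplest form, the one-tissue compartment model dCt/dt = K1*Ca(t) - k2*Ct(t). A minimal sketch with a hypothetical mono-exponential input function and illustrative rate constants (this is the textbook standard model, not the paper's microvascular revision):

```python
import math

def one_tissue_response(ca, dt, k1, k2):
    """Explicit Euler integration of dCt/dt = K1*Ca - k2*Ct from Ct(0) = 0."""
    ct = [0.0]
    for i in range(1, len(ca)):
        ct.append(ct[-1] + dt * (k1 * ca[i - 1] - k2 * ct[-1]))
    return ct

dt = 0.01
t = [dt * i for i in range(2000)]
ca = [math.exp(-0.5 * ti) for ti in t]   # hypothetical arterial input function
ct = one_tissue_response(ca, dt, k1=0.3, k2=0.1)
```

Note that Ca here is the organ inlet concentration applied uniformly to the whole vascular space; the microvascular models described above replace exactly this assumption with a spatially averaged capillary concentration, which is what the scanner actually registers.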

18.
A method for measuring three-dimensional kinematics that incorporates the direct cross-registration of experimental kinematics with anatomic geometry from Computed Tomography (CT) data has been developed. Plexiglas registration blocks were attached to the bones of interest and the specimen was CT scanned. Computer models of the bone surface were developed from the CT image data. Determination of discrete kinematics was accomplished by digitizing three pre-selected contiguous surfaces of each registration block using a three-dimensional point digitization system. Cross-registration of bone surface models from the CT data was accomplished by identifying the registration block surfaces within the CT images. Kinematics measured during a biomechanical experiment were applied to the computer models of the bone surface. The overall accuracy of the method was shown to be at or below the accuracy of the digitization system used. For this experimental application, the accuracy was better than +/-0.1 mm for position and 0.1 degrees for orientation for linkage digitization and better than +/-0.2 mm and +/-0.2 degrees for CT digitization. Surface models of the radius and ulna were constructed from CT data, as an example application. Kinematics of the bones were measured for simulated forearm rotation. Screw-displacement axis analysis showed 0.1 mm (proximal) translation of the radius (with respect to the ulna) from supination to neutral (85.2 degrees rotation) and 1.4 mm (proximal) translation from neutral to pronation (65.3 degrees rotation). The motion of the radius with respect to the ulna was displayed using the surface models. This methodology is a useful tool for the measurement and application of rigid-body kinematics to computer models.
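Cross-registering digitised block points with CT-derived points reduces to a least-squares rigid-body fit, commonly solved with the Kabsch/SVD method. A generic sketch of that step (the point set and transform below are hypothetical, and the paper's specific registration pipeline is not reproduced):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid-body fit (Kabsch): find R, t with dst ~ R @ src + t."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)               # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

# Recover a known rotation about z plus a translation from digitised corner points
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1.0]])
moved = pts @ R_true.T + t_true
R_est, t_est = rigid_transform(pts, moved)
```

With noiseless correspondences the transform is recovered exactly; with real digitisation noise the residual of this fit is one way to quantify the sub-0.2 mm registration accuracy reported above.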

19.
The fully hydrated liquid crystalline phase of the dimyristoylphosphatidylcholine lipid bilayer at 30 degrees C was simulated using molecular dynamics with the CHARMM potential for five surface areas per lipid (A) in the range 55-65 A(2) that brackets the previously determined experimental area 60.6 A(2). The results of these simulations are used to develop a new hybrid zero-baseline structural model, denoted H2, for the electron density profile, rho(z), for the purpose of interpreting x-ray diffraction data. H2 and also the older hybrid baseline model were tested by fitting to partial information from the simulation and various constraints, both of which correspond to those available experimentally. The A, rho(z), and F(q) obtained from the models agree with those calculated directly from simulation at each of the five areas, thereby validating this use of the models. The new H2 was then applied to experimental dimyristoylphosphatidylcholine data; it yields A = 60.6 +/- 0.5 A(2), in agreement with the earlier estimate obtained using the hybrid baseline model. The electron density profiles also compare well, despite considerable differences in the functional forms of the two models. Overall, the simulated rho(z) at A = 60.7 A(2) agrees well with experiment, demonstrating the accuracy of the CHARMM lipid force field; small discrepancies indicate targets for improvements. Lastly, a simulation-based model-free approach for obtaining A is proposed. It is based on interpolating the area that minimizes the difference between the experimental F(q) and simulated F(q) evaluated for a range of surface areas. This approach is independent of structural models and could be used to determine structural properties of bilayers with different lipids, cholesterol, and peptides.

20.

Background  

To cancel experimental variations, microarray data must be normalized prior to analysis. Where an appropriate model for statistical data distribution is available, a parametric method can normalize a group of data sets that have common distributions. Although such models have been proposed for microarray data, they have not always fit the distribution of real data and thus have been inappropriate for normalization. Consequently, microarray data in most cases have been normalized with non-parametric methods that adjust data in a pair-wise manner. However, data analysis and the integration of resultant knowledge among experiments have been difficult, since such normalization concepts lack a universal standard.
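One widely used non-parametric group normalization of the kind alluded to is quantile normalization, which forces every array to share a common reference distribution. A minimal sketch with toy expression values (the data are hypothetical, and ties are handled naively by sort order):

```python
import numpy as np

def quantile_normalize(x):
    """Quantile normalization: every column (array) gets the same distribution.

    Each value is replaced by the mean, across arrays, of the values holding
    the same rank; ties are broken by sort order for simplicity.
    """
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)   # per-column ranks
    mean_quantiles = np.sort(x, axis=0).mean(axis=1)    # reference distribution
    return mean_quantiles[ranks]

# Rows: genes; columns: three hypothetical arrays
arrays = np.array([[5.0, 4.0, 3.0],
                   [2.0, 1.0, 4.0],
                   [3.0, 4.0, 6.0],
                   [4.0, 2.0, 8.0]])
normed = quantile_normalize(arrays)
```

After normalization all columns contain exactly the same set of values, differing only in gene order, which makes between-array comparisons straightforward but, as the passage notes, still ties the "standard" to the particular group of arrays being normalized.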
