Similar Literature
20 similar records found (search time: 0 ms)
1.
Theory of the kinetic analysis of patch-clamp data.
This paper describes a theory of the kinetic analysis of patch-clamp data. We assume that channel gating is a Markov process that can be described by a model consisting of n kinetic states and n(n - 1) rate constants at each voltage, and that patch-clamp data describe the occupancy of x different conductance levels over time. In general, all the kinetic information in a set of patch-clamp data is found in either two-dimensional dwell time histograms describing the frequency of observation of sequential dwell times of durations tau 1 and tau 2 (Fredkin, D. R., M. Montal, and J. A. Rice, 1985, Proceedings of the Berkeley Conference in Honor of Jerzy Neyman and Jack Kiefer, vol. 1, 269-289) or in three-point joint probability functions describing the probability that a channel is in a given conductance at time t, and at time t + tau 1, and at time t + tau 1 + tau 2. For the special case of channels with a single open state plus multiple closed states, one-dimensional analyses provide all of the kinetic information. Stationary patch-clamp data have information that can be used to determine H rate constants, where H = n(n - 1) - G and G is the number of intraconductance rate constants. Thus, to calculate H rate constants, G rate constants must be fixed. In general there are multiple sets of G rate constants that can be fixed to allow the calculation of H rate constants, although not every set of G rate constants will work. Arbitrary assignment of the G intraconductance rate constants equal to zero always provides a solution and the calculation of H rate constants. Nonstationary patch-clamp data have information for the determination of H rate constants at a reference voltage plus n(n - 1) rate constants at all test voltages. Thus, nonstationary data have extra information about the voltage dependencies of rate constants that can be used to rule out kinetic models that cannot be disqualified on the basis of stationary data.
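The counting argument above can be sketched in a few lines (an illustrative example added here, not code from the paper; the state labels and conductance assignments are hypothetical):

```python
from itertools import permutations

def identifiable_rate_constants(conductance_of_state):
    """Count the total directed rate constants n(n - 1), the intraconductance
    rates G (transitions between states sharing a conductance level), and the
    identifiable rates H = n(n - 1) - G for a Markov gating model."""
    n = len(conductance_of_state)
    total = n * (n - 1)
    # G: directed transitions between states with the same conductance level
    G = sum(1 for i, j in permutations(range(n), 2)
            if conductance_of_state[i] == conductance_of_state[j])
    return total, G, total - G

# Three-state channel: two closed states (conductance 0), one open state (1)
total, G, H = identifiable_rate_constants([0, 0, 1])
print(total, G, H)  # 6 directed rates, 2 intraconductance, 4 identifiable
```

For this three-state channel the two closed-closed rates are intraconductance, so H = 6 - 2 = 4, matching H = n(n - 1) - G.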

2.
3.
The production of benthic foraminiferal communities is filtered through taphonomic (mainly destructive) processes within the sediments to generate the fossil assemblage. Both the production and the taphonomy depend on bottom-water oxygen content and the flux of organic carbon to the seabed. The relationships of the processes generating the fossil assemblage to oxygen and organic carbon supply are examined using pore-water geochemical measurements to estimate carbon flux for locations in the Gulf of Mexico and on the central California margin. The locations are plotted in a three-dimensional field with bottom-water oxygen content, organic carbon flux, and sediment depth as the axes, and the response of foraminiferal standing stock, taphonomic processes and the developing fossil assemblage to this field is then investigated. Variation in the vertical stratification of foraminiferal standing stock and test production, species' stratification, taphonomic process intensity and stratification, and sediment bioturbation leads to marked differences in the way the fossil assemblage is generated across the oxygen content-organic carbon flux field. The result is that the oxygen-carbon flux field has a significant impact on the fossil assemblage through the interaction of biological and biogeochemical processes in the sediments. A model of this interaction is investigated to show how its elements change across the oxygen-carbon flux field and how these changes affect the generation of the fossil assemblage.

4.
5.
Decision-in decision-out fusion architectures can be used to fuse the outputs of multiple classifiers drawn from different diagnostic sources. In this paper, Dempster-Shafer Theory (DST) is used to fuse classification results for breast cancer data from two different sources: gene-expression patterns in peripheral blood cells and Fine-Needle Aspirate Cytology (FNAc) data. Classification of the individual sources is performed by Support Vector Machines (SVMs) with linear, polynomial and Radial Basis Function (RBF) kernels. The output beliefs of the classifiers for both data sources are combined to arrive at one final decision. Dynamic uncertainty assessment is based on class differentiation of the breast cancer. Experimental results show that the proposed breast cancer data fusion methodology outperforms single classification models.
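A minimal sketch of the fusion step described above, using Dempster's rule of combination, the standard DST combination rule (the belief masses below are hypothetical, not the paper's experimental values):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic belief assignments (dicts mapping
    frozenset hypotheses to masses), normalizing out the conflict mass."""
    combined = {}
    conflict = 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                conflict += a * b  # incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources fully disagree")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Hypothetical beliefs from two classifiers: M = malignant, B = benign
M, B = frozenset("M"), frozenset("B")
theta = M | B  # frame of discernment (residual uncertainty)
m_gene = {M: 0.6, B: 0.1, theta: 0.3}
m_fnac = {M: 0.7, B: 0.2, theta: 0.1}
fused = dempster_combine(m_gene, m_fnac)
```

Because both sources lean toward M, the fused mass on M exceeds either input mass, while the residual uncertainty shrinks, which is the behavior the decision-in decision-out architecture exploits.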

6.
7.
8.

Aim

Biodiversity loss is a key component of biodiversity change and can impact ecosystem services. However, estimation of the loss has focused mostly on per‐species extinction rates measured over a limited number of spatial scales, with little theory linking small‐scale extirpations to global extinctions. Here, we provide such a link by introducing the relationship between area and the number of extinctions (number of extinctions–area relationship; NxAR) and between area and the proportion of extinct species (proportion of extinctions–area relationship; PxAR). Unlike static patterns, such as the species–area relationship, NxAR and PxAR represent spatial scaling of a dynamic process. We show theoretical and empirical forms of these relationships and we discuss their role in perception and estimation of the current extinction crisis.

Location

U.S.A., Europe, Czech Republic and Barro Colorado Island (Panama).

Time period

1500–2009.

Major taxa studied

Vascular plants, birds, butterflies and trees.

Methods

We derived the expected forms of NxAR and PxAR from several theoretical frameworks, including the theory of island biogeography, neutral models and species–area relationships. We constructed NxAR and PxAR from five empirical datasets collected over a range of spatial and temporal scales.

Results

Although increasing PxAR is theoretically possible, empirical data generally support a decreasing PxAR; the proportion of extinct species decreases with area. In contrast, both theory and data revealed complex relationships between numbers of extinctions and area (NxAR), including nonlinear, unimodal and U‐shaped relationships, depending on region, taxon and temporal scale.

Main conclusions

The wealth of forms of NxAR and PxAR explains why biodiversity change appears scale dependent. Furthermore, the complex scale dependence of NxAR and PxAR means that global extinctions indicate little about local extirpations, and vice versa. Hence, effort should be made to understand and report extinction rates as a scale‐dependent problem. In this effort, estimation of scaling relationships such as NxAR and PxAR should be central.
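The two scaling relationships introduced above can be illustrated with a toy computation (the nested censuses, species sets and areas below are invented for illustration and are not from the study's datasets):

```python
def nxar_pxar(assemblages):
    """assemblages: {area: (species_t1, species_t2)} for nested sample areas.
    NxAR = number of species present at the first census but absent at the
    second; PxAR = that number divided by the initial richness."""
    out = {}
    for area, (t1, t2) in sorted(assemblages.items()):
        lost = len(t1 - t2)
        out[area] = (lost, lost / len(t1) if t1 else 0.0)
    return out

# Hypothetical nested censuses (species coded as letters)
data = {
    1:   ({"a", "b", "c"},           {"a", "c"}),
    10:  ({"a", "b", "c", "d", "e"}, {"a", "c", "d", "e"}),
    100: (set("abcdefgh"),           set("acdefgh")),
}
scaling = nxar_pxar(data)
```

In this toy example a single species ("b") is lost at every scale, so NxAR is flat while PxAR decreases with area, one of the qualitatively distinct forms the abstract describes.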

9.
Wakeley J, Lessard S. Genetics. 2003;164(3):1043-1053.
We develop predictions for the correlation of heterozygosity and for linkage disequilibrium between two loci using a simple model of population structure that includes migration among local populations, or demes. We compare the results for a sample of size two from the same deme (a single-deme sample) to those for a sample of size two from two different demes (a scattered sample). The correlation in heterozygosity for a scattered sample is surprisingly insensitive to both the migration rate and the number of demes. In contrast, the correlation in heterozygosity for a single-deme sample is sensitive to both, and the effect of an increase in the number of demes is qualitatively similar to that of a decrease in the migration rate: both increase the correlation in heterozygosity. The same conclusions hold for a commonly used measure of linkage disequilibrium (r²). We compare the predictions of the theory to genomic data from humans and show that subdivision might account for a substantial portion of the genetic associations observed within the human genome, even though migration rates among local populations of humans are relatively large. Because correlations due to subdivision rather than to physical linkage can be large even in a single-deme sample, if long-term migration has been important in shaping patterns of human polymorphism, the common practice of disease mapping using linkage disequilibrium in "isolated" local populations may be subject to error.
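For reference, the linkage-disequilibrium measure r² discussed above can be computed from two-locus haplotype and allele frequencies (the frequencies in the example are hypothetical):

```python
def r_squared(pAB, pA, pB):
    """Squared correlation of allelic states at two biallelic loci:
    r^2 = D^2 / (pA * qA * pB * qB), with D = pAB - pA * pB,
    where pAB is the frequency of the A-B haplotype."""
    D = pAB - pA * pB
    return D * D / (pA * (1 - pA) * pB * (1 - pB))

# Hypothetical frequencies: haplotype AB at 0.30, alleles A and B at 0.5 and 0.4
print(round(r_squared(pAB=0.30, pA=0.5, pB=0.4), 4))  # ≈ 0.1667
```

When pAB equals pA * pB, D = 0 and r² = 0: the loci show no association, whether the association's source is physical linkage or, as the abstract argues, population subdivision.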

10.
Xiong Zixian. Chinese Bulletin of Botany (植物学报). 1998;15(3):73-76.
This paper introduces the concept of the gonophyll in the gonophyll theory proposed by Melvill, together with the origins of the pistil and stamen and their modes of evolution, and discusses these points in relation to the carpel theory.

11.
Life cycle inventory (LCI) is becoming an established environmental management tool that quantifies all resource usage and waste generation associated with providing specific goods or services to society. LCIs are increasingly used by industry as well as policy makers to provide a holistic ‘macro’ overview of the environmental profile of a good or service. This information, effectively combined with relevant information obtained from other environmental management tools, is very useful in guiding strategic environmental decision making. LCIs are very data-intensive, and there is a risk that they imply a level of accuracy that does not exist. This is especially true today, because the availability of accurate LCI data is limited. It is also not easy for LCI users, decision makers and other interested parties to differentiate between ‘good-quality’ and ‘poor-quality’ LCI data, and several data quality requirements for ‘good’ LCI data can be defined only in relation to the specific study in which they are used. In this paper we show how and why the use of a common LCI database for some of the more commonly used LCI data, together with increased documentation and harmonisation of the data quality features of all LCI data, is key to the further development of LCI as a useful and pragmatic environmental management tool. Initiatives already underway to make this happen are also described.

12.
13.
A dynamic structure refinement method for X-ray crystallography, referred to as the normal mode refinement, is proposed. The Debye-Waller factor is expanded in terms of the low-frequency normal modes whose amplitudes and eigenvectors are experimentally optimized in the process of the crystallographic refinement. In this model, the atomic fluctuations are treated as anisotropic and concerted. The normal modes of the external motion (TLS model) are also introduced to cover the factors other than the internal fluctuations, such as the lattice disorder and diffusion. A program for the normal mode refinement (NM-REF) has been developed. The method has first been tested against simulated diffraction data for human lysozyme calculated by a Monte Carlo simulation. Applications of the method have demonstrated that the normal mode refinement has: (1) improved the fitting to the diffraction data, even with fewer adjustable parameters; (2) distinguished internal fluctuations from external ones; (3) determined anisotropic thermal factors; and (4) identified concerted fluctuations in the protein molecule.
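A heavily simplified sketch of the central idea: atomic mean-square displacements, and hence Debye-Waller B factors, are expressed as a sum over normal-mode contributions with refinable mode variances. This is our illustration of the expansion, not the NM-REF program itself; it uses isotropic B factors where the actual method refines anisotropic, concerted fluctuations, and the modes and variances below are made up:

```python
import math

def isotropic_b_factors(eigenvectors, mode_variances):
    """Isotropic Debye-Waller factors from a normal-mode expansion:
    B_i = (8*pi^2 / 3) * sum_k sigma_k^2 * |e_ik|^2,
    where e_ik is the 3-vector displacement of atom i in mode k and
    sigma_k^2 is the (refinable) mean-square amplitude of mode k."""
    n_atoms = len(eigenvectors[0])
    msd = [0.0] * n_atoms  # per-atom mean-square displacement
    for var, mode in zip(mode_variances, eigenvectors):
        for i, (x, y, z) in enumerate(mode):
            msd[i] += var * (x * x + y * y + z * z)
    return [8.0 * math.pi ** 2 / 3.0 * u for u in msd]

# Two hypothetical low-frequency modes for a three-atom fragment
modes = [
    [(0.1, 0.0, 0.0), (0.2, 0.0, 0.0), (0.1, 0.1, 0.0)],
    [(0.0, 0.1, 0.0), (0.0, 0.0, 0.1), (0.1, 0.0, 0.0)],
]
variances = [4.0, 1.0]  # soft mode dominates, as in the low-frequency picture
b = isotropic_b_factors(modes, variances)
```

Only the handful of mode variances (and eigenvector parameters) are adjusted in such a refinement, which is how the method can fit the diffraction data with fewer free parameters than one B factor per atom.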

14.
A theoretical model has been developed in order to describe the organization of acyl chains in phospholipid bilayers. Since the model is intended to reproduce highly quantitative experimental results such as the deuterium magnetic resonance (NMR) data and to supplement the experimental information, all the rotameric degrees of freedom, the excluded volume interactions and the van der Waals interactions have been considered. The model is a direct extension of a generalized van der Waals theory of nematic liquid crystals to flexible molecules. In this picture, the anisotropy of the short-range repulsive forces which are treated by a hard core potential is introduced as the dominant factor governing intrinsic order among the chains. The anisotropy of the attractive forces, which are approximated by a molecular field, plays a somewhat secondary role. The dependence of the energy of interaction on the relative chain conformations is approximated by two order parameters reflecting respectively the ‘average shape’ of the molecules and the ‘average shape’ in a ‘mean orientation’. The influence of the interactions in the polar region on the lateral chain area is accounted for by an effective lateral pressure. In certain aspects the model has features in common with the Marčelja theory.

15.
It is well known to all those acquainted with D. N. Uznadze's theory of set [ustanovka] (1) that this theory was meant to answer the question of "the character and inner structure of human activity" [11; 79]. But, as A. T. Bochorishvili correctly noted, we do not yet have "clarity in basic concepts. … Soviet psychology cannot yet go so far as to speak of the content of the basic concept of the psychology of set, of the content of set itself" [5: 15]. As a panacea for overcoming these differences of opinion, Bochorishvili proposes that we "widely and actively develop investigations of the theoretical bases of the psychology of set as D. N. Uznadze understood it" (ibid.).

16.
Theory of mind     
Frith C, Frith U. Current Biology. 2005;15(17):R644-R646.

17.
18.
19.
Protein data, from sequence and structure to interaction, are being generated through many diverse methodologies; they are stored and reported in numerous forms and in multiple places. The magnitude of the data limits researchers' ability to utilize all of the information generated. Effective integration of protein data can be accomplished through better data modeling. We demonstrate this through the MIPD project.

20.
Missing data are commonly encountered using multilocus, fragment‐based (dominant) fingerprinting methods, such as random amplified polymorphic DNA (RAPD) or amplified fragment length polymorphism (AFLP). Data sets containing missing data have been analysed by eliminating those bands or samples with missing data, assigning values to missing data or ignoring the problem. Here, we present a method that uses random assignments of band presence–absence to the missing data, implemented by the computer program famd (available from http://homepage.univie.ac.at/philipp.maria.schlueter/famd.html), for analyses based on pairwise similarity and Shannon's index. When missing values group in a data set, sample or band elimination is likely to be the most appropriate action. However, when missing values are scattered across the data set, minimum, maximum and average similarity coefficients are a simple means of visualizing the effects of missing data on tree structure. Our approach indicates the range of values that a data set containing missing data points might generate, and forces the investigator to consider the effects of missing values on data interpretation.
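The random-assignment idea can be sketched as follows (our simplified illustration of the approach, not famd itself; the function names and the choice of the Jaccard coefficient are ours):

```python
import random

def jaccard(a, b):
    """Jaccard similarity for binary band vectors (1 = band present)."""
    both = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    either = sum(1 for x, y in zip(a, b) if x == 1 or y == 1)
    return both / either if either else 1.0

def similarity_range(a, b, replicates=1000, seed=0):
    """Randomly assign 0/1 to missing entries (None) and report the minimum,
    mean and maximum pairwise similarity across the replicates."""
    rng = random.Random(seed)
    sims = []
    for _ in range(replicates):
        fa = [rng.randint(0, 1) if x is None else x for x in a]
        fb = [rng.randint(0, 1) if y is None else y for y in b]
        sims.append(jaccard(fa, fb))
    return min(sims), sum(sims) / len(sims), max(sims)

# Two AFLP-style profiles with scattered missing scores (None)
a = [1, 0, 1, None, 1, 0]
b = [1, 1, None, 0, 1, None]
lo, mean, hi = similarity_range(a, b)
```

Repeating the random fill many times bounds the similarity a pair of profiles could take; a wide gap between the minimum and maximum flags pairs whose placement in a tree is sensitive to the missing scores.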
