Similar Documents
 20 similar documents found.
1.
A novel simplified structural model of sarcomeric force production in striated muscle is presented. Using some simple assumptions regarding the distribution of myosin spring lengths at different sliding velocities, it is possible to derive a very simple expression reproducing the main experimentally observed features of muscle mechanics: the nonlinear force-velocity relationship during contraction (Hill, 1938); maximal force production during stretching equal to twice the isometric force (Katz, 1939); yielding at high stretching velocities; a slightly concave force-extension relationship during sudden length changes (Ford et al., 1977; Lombardi & Piazzesi, 1990); and accurate reproduction of the rate of ATP consumption (Shirakawa et al., 2000; He et al., 2000) and of the extra energy liberation rate (Hill, 1964a). Different assumptions regarding the force-length relationship of individual cross-bridges are explored (linear, power-function, and worm-like chain (WLC) based), and the best results are obtained when the individual myosin-spring forces are modelled with a WLC model, suggesting that entropic elasticity could be the main source of force in myosin undergoing the conformational changes associated with the power stroke.
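For reference, a standard closed-form WLC force-extension relation that such an entropic-spring model could use is the Marko-Siggia interpolation formula (the abstract does not specify which WLC variant is employed, so this is illustrative background rather than the paper's exact expression):

$$ F(x) = \frac{k_B T}{P}\left[\frac{1}{4\,(1 - x/L_c)^{2}} - \frac{1}{4} + \frac{x}{L_c}\right], $$

where $P$ is the persistence length, $L_c$ the contour length, and $x$ the end-to-end extension of the myosin spring.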

2.
Epidermal growth factor (EGF) is an important regulator of normal epithelial and carcinoma cell migration. The mechanism by which EGF induces cell migration is not fully understood. A recent report in Nature Cell Biology (Katz et al., 2007) demonstrates that EGF regulates migration through a switch in the expression of two tensin isoforms, which weakens the association of beta1 integrin with the actin cytoskeleton in focal adhesions.

3.
1. Identical values of the rate of lactate turnover determined with [U-14C]lactate were found with single-injection and continuous-infusion techniques in anaesthetized, mechanically ventilated rats. 2. The mean transit time and total minimal body mass of lactate determined graphically (Katz et al., 1974a,b) were higher with single injection than with continuous infusion of the tracer.
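For orientation, the standard tracer-dilution relations behind such turnover estimates (textbook background, not equations quoted from the abstract) are

$$ R_t = \frac{F}{SA_{\infty}} \quad \text{(continuous infusion, steady state)}, \qquad R_t = \frac{D}{\int_0^{\infty} SA(t)\,dt} \quad \text{(single injection)}, $$

where $F$ is the tracer infusion rate, $D$ the injected dose, and $SA$ the plasma lactate specific activity.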

4.
Summary: The inactivation (desensitization) of the acetylcholine receptor by carbamylcholine, a stable analogue of acetylcholine, has been investigated in eel Ringer's solution, pH 7.0, 0°C, by measurements of (i) ion flux and (ii) the kinetics of the reaction of [125I]-α-bungarotoxin with the receptor. The effect of preincubation with carbamylcholine is significantly different in the two types of measurement. In both the receptor-controlled flux of inorganic ions and the toxin-binding kinetics, a biphasic process has been observed (Hess, G.P., Lipkowitz, S., Struve, G.E., 1978, Proc. Natl. Acad. Sci. USA 75:1703; Hess, G.P. et al., 1975, Biochem. Biophys. Res. Commun. 64:1018; Bulger, J.E. et al., 1977, Biochemistry 16:684), of which only the initial fast phase is inhibited while the subsequent slow phase persists. However, preincubation with carbamylcholine per se has no effect on the toxin reaction. The results obtained are consistent with the proposal of Katz and Thesleff (Katz, B., Thesleff, S., 1957, J. Physiol. (London) 138:65) that the active form of the receptor is converted to an inactive form in the presence of acetylcholine receptor ligands, and with our previous experiments (Hess et al., 1978), which indicated that one receptor form is responsible for the initial fast phase of both the receptor-controlled ion flux and the toxin-binding reaction, and that its conversion to the other form produces the slow phases in these two measurements.

5.
Costa, L.E., Reynafarje, B. and Lehninger, A.L. [(1984) J. Biol. Chem. 259, 4802-4811] have reported 'second-generation' measurements of the H+/O ratio approaching 8.0 for vectorial H+ translocation coupled to succinate oxidation by rat liver mitochondria. In a Commentary in this Journal [Krab, K., Soos, J. and Wikström, M. (1984) FEBS Lett. 178, 187-192] it was concluded that the measurements of Costa et al. significantly overestimated the true H+/O stoichiometry. It is shown here that the mathematical simulation on which Krab et al. based this claim is faulty and that the data reported by Costa et al. had already excluded the criticism advanced by Krab et al. Also reported are new data, obtained under conditions in which the arguments of Krab et al. are irrelevant, confirming that the H+/O ratio for succinate oxidation extrapolated to level flow is close to 8.

6.
Clegg LX, Gail MH, Feuer EJ. Biometrics 2002, 58(3):684-688
We propose a new Poisson method to estimate the variance of prevalence estimates obtained by the counting method described by Gail et al. (1999, Biometrics 55, 1137-1144) and to construct a confidence interval for the prevalence. We evaluate both the Poisson procedure and the bootstrap-based procedure proposed by Gail et al. in simulated samples generated by resampling real data. These studies show that both variance estimators usually perform well and yield confidence intervals with coverage at nominal levels. When the number of disease survivors is very small, however, confidence intervals based on the Poisson method have supranominal coverage, whereas those based on the procedure of Gail et al. tend to have below-nominal coverage. For these reasons, we recommend the Poisson method, which also reduces the computational burden considerably.
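As a hedged illustration of this kind of approach (a minimal sketch assuming the survivor count behaves approximately as a Poisson variable, not the authors' actual procedure), an exact Poisson confidence interval for a counted prevalence can be computed via the chi-square relationship:

```python
# Sketch: Poisson-based confidence interval for a counted prevalence.
# Assumes the number of disease survivors `count` is approximately
# Poisson; `count` and `population` are hypothetical inputs.
from scipy.stats import chi2

def poisson_prevalence_ci(count, population, alpha=0.05):
    """Exact (Garwood) Poisson CI for the count, scaled to a proportion."""
    # Exact Poisson limits via the chi-square quantile relationship.
    lower = 0.0 if count == 0 else chi2.ppf(alpha / 2, 2 * count) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (count + 1)) / 2
    return count / population, lower / population, upper / population

est, lo, hi = poisson_prevalence_ci(count=12, population=50_000)
print(f"prevalence {est:.5f}, 95% CI ({lo:.5f}, {hi:.5f})")
```

The chi-square inversion yields exact Poisson limits, one common way to avoid the below-nominal coverage that resampling procedures can exhibit with very small counts.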

7.
The transmission disequilibrium test (TDT) has been utilized to test linkage and association between a genetic trait locus and a marker. Spielman et al. (1993) introduced the TDT to test linkage between a qualitative trait and a marker in the presence of association. In the presence of linkage, the TDT can be applied to test for association for fine mapping (Martin et al., 1997; Spielman and Ewens, 1996). In recent years, extensive research has been carried out on the TDT between a quantitative trait and a marker locus (Allison, 1997; Fan et al., 2002; George et al., 1999; Rabinowitz, 1997; Xiong et al., 1998; Zhu and Elston, 2000, 2001). The original TDT for both qualitative and quantitative traits requires unrelated offspring of heterozygous parents, and much research has been devoted to extending it to other settings. For nuclear families with multiple offspring, one approach is to treat each child independently in the analysis; this is not strictly valid, since offspring of the same family are correlated. Another approach is to select one offspring at random from each family, but much information may then be lost. Martin et al. (1997, 2000) constructed useful statistical tests for such data for qualitative traits. In this paper, we propose to use mixed models to analyse data from nuclear families with multiple offspring for quantitative traits, following the models in Amos (1994). The method uses the data of all offspring by taking into account their trait mean and variance-covariance structures, which contain the effects of the major gene locus, polygenic loci and environment. A test statistic based on mixed models is shown to be more powerful than the test statistic proposed by George et al. (1999) under moderate disequilibrium for nuclear families. Moreover, it has higher power than the TDT statistic constructed by randomly choosing a single offspring from each nuclear family.
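For context, the original McNemar-type TDT statistic for a biallelic marker (standard background, not a formula specific to this paper) is

$$ \chi^2_{\mathrm{TDT}} = \frac{(b - c)^2}{b + c}, $$

where $b$ and $c$ count, over heterozygous parents, transmissions and non-transmissions of the candidate allele to affected offspring; under the null hypothesis of no linkage it is asymptotically chi-square with one degree of freedom.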

8.
We developed a new computer wire coding method and applied it to investigate the suggestion that control selection bias might explain the association between wire codes and childhood cancer observed in the study conducted by Savitz et al. in the Denver area. The computer wire coding method used a geographic information system approach with data on the local electric distribution system and from tax assessor records. Individual residences were represented as circles scaled to the ground-floor area of the residence and centered on the lot centroid. The wire code of a residence was determined from the distance between the circle and the relevant power line, and from the current-carrying capacity of that line. Using this method, wire codes were generated for 238,290 residences in the Denver metropolitan area built before 1986, the time of the Savitz et al. study. We then attempted to reconstruct the 1985 population of hypothetically eligible control children in the Denver metropolitan area using 1980 census data. Since data were not available to locate the children in each residence within a census block, uniform, Poisson, and negative binomial distributions were used to randomly assign children to residences. To evaluate the likelihood of the wire code distribution of the controls selected by Savitz et al., 100 random trials were conducted for each distribution, matching two controls to each case. The odds ratios between childhood cancer and very high current configuration (VHCC) wire codes were reduced when the assigned controls were used, suggesting that control selection bias may have been present. However, control selection bias is unlikely to account for all of the reported association between childhood cancer and wire codes in the Savitz et al. study.

9.
In this paper we extend the model of HIV pathogenesis under treatment by anti-viral drugs given by Perelson et al. [A.S. Perelson et al., Science 271 (1996) 1582] to a stochastic model. Using this stochastic model as the stochastic system model, we develop a state space model for HIV pathogenesis under anti-viral treatment. In this state space model, the observation model is a statistical model based on the observed numbers of HIV RNA copies over time. For this model we develop procedures, via the extended Kalman filter, for estimating and predicting the numbers of infectious and non-infectious free HIV as well as the numbers of different types of T cells. As an illustration, we apply the method to the data of patients 104, 105 and 107 of Perelson et al. under treatment with ritonavir. For these individuals, it is shown that within two weeks of treatment most of the free HIV is non-infectious, indicating the usefulness of the treatment. Furthermore, the Kalman filter method reveals a much stronger effect of the treatment within the first 10 to 20 h than that predicted by the deterministic model.
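For orientation, the deterministic core of the Perelson et al. model under protease-inhibitor treatment, on which the stochastic state-space extension is built, is (notation mine):

$$ \frac{dT^{*}}{dt} = k V_I T - \delta T^{*}, \qquad \frac{dV_I}{dt} = -c V_I, \qquad \frac{dV_{NI}}{dt} = N \delta T^{*} - c V_{NI}, $$

where $T^{*}$ denotes productively infected T cells, $V_I$ and $V_{NI}$ infectious and non-infectious free virus, $\delta$ the infected-cell death rate, $c$ the virion clearance rate, and $N$ the burst size; a fully effective protease inhibitor renders all newly produced virions non-infectious.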

10.
Lolle et al. reported a high frequency of genomic changes in Arabidopsis plants carrying the hothead mutation and proposed that the observed changes were the result of a gene-correction system mediated by a hypothetical RNA cache. Here, we propose a very different hypothesis to explain the data reported by Lolle et al. Our hypothesis is based on a relatively straightforward developmental aberration in which maternal cells ("legacy cells") fuse with the developing embryo, producing a chimera, which could then give rise to the aberrant genetic segregations reported by Lolle et al.

11.
BACKGROUND: The discriminatory power and imaging efficiency of different multicolor FISH (M-FISH) analysis systems are key factors in obtaining accurate and reproducible classification results. In a recent paper, Garini et al. put forth an analytical technique to quantify the discriminatory power ('S/N ratio') and imaging efficiency ('excitation efficiency') of multicolor fluorescent karyotyping systems. METHODS: A parametric model of multicolor fluorescence microscopy, based on the Beer-Lambert law, is analyzed and reduced to a simple expression for the S/N ratio. Parameters for individual system configurations are then plugged into the model for comparison purposes. RESULTS: We found that several invalid assumptions, used to reduce the complex mathematics of the Beer-Lambert law to a simple S/N ratio, lead to some completely misleading conclusions about classification accuracy. The authors omit the most significant noise source and consider only one highly abstract and unrepresentative situation. Poorly chosen parameters in their examples lead to predictions that are inconsistent with actual results. CONCLUSIONS: The earlier paper presents an inaccurate view of the M-FISH situation. In this short communication, we point out several inaccurate assumptions in the mathematical development of Garini et al. and the poor choice of parameters in their examples. We show results obtained with different imaging systems indicating that reliable and comparable results are obtained if the metaphase samples are well hybridized. We also conclude that so-called biochemical noise, not photon noise, is the primary factor limiting pixel classification accuracy, given reasonable exposure times.

12.
A power calculation is crucial in planning genetic studies. In genetic association studies, power is often calculated using the expected number of individuals with each genotype, derived from an assumed allele frequency under Hardy-Weinberg equilibrium. Since the allele frequency is often unknown, the number of individuals with each genotype is random, so a power calculation that assumes known genotype counts may be incorrect. Ambrosius et al. recently showed that ignoring this randomness may lead to studies with insufficient power and proposed averaging the power over the randomness. We extend the method of averaging power in two directions. First, for testing association in case-control studies, we use the Cochran-Armitage trend test and find that the time needed to calculate the averaged power is much reduced compared with the two-degree-of-freedom chi-square test studied by Ambrosius et al. A real study is used to illustrate the method. Second, we extend the method to linkage analysis, where the number of alleles shared identical-by-descent by siblings is random. The distribution of identical-by-descent numbers depends on the underlying genetic model rather than the allele frequency. The robust test for linkage analysis is also examined using averaged powers. We also recommend a sensitivity analysis when the true allele frequency or the number of identical-by-descent alleles is unknown.
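To make the idea of averaged power concrete, the following sketch estimates it by Monte Carlo for a Cochran-Armitage trend test; the allele frequency, relative risk, and sample sizes are hypothetical illustrations, not values from the paper:

```python
# Monte Carlo sketch of "averaged power": instead of plugging expected
# genotype counts into a power formula, let the genotype counts be
# random (HWE with allele frequency p) and average the rejection rate
# of the Cochran-Armitage trend test over replicates.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def trend_test(cases, controls, scores=(0, 1, 2)):
    """Cochran-Armitage trend test Z statistic for 2x3 genotype counts."""
    cases, controls = np.asarray(cases), np.asarray(controls)
    n = cases + controls
    N, R = n.sum(), cases.sum()
    w = np.asarray(scores, float)
    T = np.dot(w, cases)                # weighted case count
    mu = np.dot(w, n) / N               # mean score under H0
    ET = R * mu
    VT = R * (N - R) / (N - 1) * (np.dot(w**2, n) / N - mu**2)
    return (T - ET) / np.sqrt(VT)

def averaged_power(p=0.3, rr=1.4, n_cases=300, n_controls=300,
                   alpha=0.05, n_sim=5000):
    hwe = np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])
    risk = np.array([1.0, rr, rr ** 2])          # multiplicative risks
    case_probs = hwe * risk / np.dot(hwe, risk)  # genotype dist. in cases
    z_crit = norm.ppf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_sim):
        cases = rng.multinomial(n_cases, case_probs)
        controls = rng.multinomial(n_controls, hwe)
        if abs(trend_test(cases, controls)) > z_crit:
            rejections += 1
    return rejections / n_sim

print(f"averaged power ≈ {averaged_power():.3f}")
```

Because the genotype counts are redrawn in every replicate, the rejection rate averages the power over their randomness rather than conditioning on their expected values.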

13.
In the planning stage of a clinical trial investigating a potentially targeted therapy, there is commonly a high degree of uncertainty about whether the treatment is more efficacious (or efficacious only) in a subgroup compared with the whole population. Recently developed adaptive designs enable an efficacy assessment both for the whole population and for a subgroup, and allow the target population to be selected mid-course based on interim results (see, e.g., Wang et al., Pharm Stat 6:227–244, 2007; Brannath et al., Stat Med 28:1445–1463, 2009; Wang et al., Biom J 51:358–374, 2009; Jenkins et al., Pharm Stat 10:347–356, 2011; Friede et al., Stat Med 31:4309–4320, 2012). Frequently, predictive biomarkers are used in these trials to identify patients more likely to benefit from the drug. We consider the situation in which the selection of the patient population is based on a biomarker, and in which the diagnostic that evaluates the biomarker may be perfect (i.e., with 100% sensitivity and specificity) or not. The performance of the applied subset selection rule is crucial for the overall characteristics of the design. In the setting of an adaptive enrichment design, we evaluate the properties of subgroup selection rules in terms of type I error rate and power, taking into account decision rules with a fixed ad hoc threshold and optimal decision rules developed for the situation of uncertain assumptions. In a simulation study, we demonstrate that designs with optimal decision rules are, under certain assumptions, more powerful than those with ad hoc decision rules. Throughout the results, a strong impact of the sensitivity and specificity of the biomarker on both type I error rate and power is observed.
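A toy sketch of a fixed ad hoc threshold selection rule at the interim analysis is given below; all effect sizes, the threshold, and the stage-1 sample size are hypothetical, and the optimal rules studied in the paper would replace the simple comparison inside `interim_decision`:

```python
# Toy interim selection rule for an adaptive enrichment design.
# All numbers are hypothetical; this illustrates a "fixed ad hoc
# threshold" rule, not the optimal rules evaluated in the paper.
import numpy as np

rng = np.random.default_rng(7)

def interim_decision(delta_sub, delta_comp, n_stage1, threshold=0.2):
    """Return 'subgroup' or 'full' from noisy stage-1 effect estimates."""
    se = np.sqrt(4.0 / n_stage1)           # SE of a standardized two-arm effect
    est_sub = rng.normal(delta_sub, se)    # subgroup estimate
    est_comp = rng.normal(delta_comp, se)  # complement estimate
    # Enrich when the subgroup appears to outperform the complement
    # by more than the ad hoc threshold.
    return "subgroup" if est_sub - est_comp > threshold else "full"

decisions = [interim_decision(delta_sub=0.4, delta_comp=0.1, n_stage1=100)
             for _ in range(10_000)]
print("enrichment frequency:", decisions.count("subgroup") / len(decisions))
```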

14.
Summary: Gene expression index estimation is an essential step in analyzing multiple-probe microarray data. Various modeling methods have been proposed in this area. Among these, a popular method proposed in Li and Wong (2001) is based on a multiplicative model, which is similar, on the logarithmic scale, to the additive model discussed in Irizarry et al. (2003a). Along this line, Hu et al. (2006) proposed a data transformation to improve expression index estimation based on an ad hoc entropy criterion and a naive grid-search approach. In this work, we re-examine this problem using a new profile likelihood-based transformation estimation approach that is more statistically elegant and computationally efficient. We demonstrate the applicability of the proposed method using a benchmark Affymetrix U95A spiked-in experiment. Moreover, we introduce a new multivariate expression index and use the empirical study to show its promise in improving model fit and the power to detect differential expression relative to the commonly used univariate expression index. Finally, we discuss two practical issues commonly encountered in applying gene expression indices: normalization and the summary statistic used for detecting differential expression. Our empirical study shows somewhat different findings from the MAQC project (MAQC, 2006).

15.
Favor, in the Appendix to a paper by Ehling and collaborators (Ehling et al., 1982), develops a method that tests for the presence of a partially penetrant dominant cataract mutation in a suspected parent (showing lens opacity) outcrossed to a homozygous strain-101 mouse when no individual with lens opacity is observed in the progeny. This method, based on the chi-square distribution, is examined with respect to the validity of its normal approximation. An alternative procedure is discussed.

16.
Scholz M, Kraft G. Radiation Research 2004, 161(5):612-620
The physical and biological basis of our model for calculating the biological effects of charged particles, termed the local effect model (LEM), has recently been questioned in a commentary by R. Katz. The major objections related to the definition of the target size and the use of the term cross section. Here we show that the objections raised against our approach are unjustified and largely based on serious misunderstandings of the conceptual basis of the local effect model. Furthermore, we show that the approach developed by Katz and coworkers itself suffers from exactly those deficiencies for which Katz criticizes our model. The essential conceptual differences between the two models are discussed by means of some illustrative examples based on a comparison with experimental data. For these examples, the predictions of the local effect model are fully consistent with the experimental data. In contrast, significant discrepancies are observed for the Katz approach, e.g., for very heavy ions; these can be attributed to the inadequate definition of the target size in that model. Experimental data are thus clearly in favor of the definition of the target used in the local effect model. Agreement with experimental data is achieved for protons within the Katz approach, but only at the cost of questionable approximations combined with a violation of the fundamental physical principle of energy conservation.
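As background on the model under discussion (a paraphrase of the published LEM formalism, not text from this exchange), the local effect model derives ion survival from the photon dose-response curve by integrating the local lethal-event density over the sensitive target volume $V$:

$$ \bar{N}_{\mathrm{lethal}} = \int_V \frac{-\ln S_X\big(d(x,y,z)\big)}{V}\, dV, \qquad S_{\mathrm{ion}} = e^{-\bar{N}_{\mathrm{lethal}}}, $$

where $S_X$ is the photon survival curve and $d(x,y,z)$ is the local dose given by the amorphous track structure; this is why the assumed target size enters the predictions so directly.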

17.
In seated postures, such as those in office or automotive seats, locating the hip joint center (HJC) using three markers on the pelvis has been difficult if not impossible. A two-target approach by Bell et al. (J. Biomech. 23 (1990) 617) has been used; however, this method was shown to be less accurate than the three-target method developed by Seidel et al. (J. Biomech. 28 (1995) 995). A new two-target method specific to the seated environment, based on the Seidel et al. approach and more accurate than the Bell et al. approach, was developed and tested on 13 seated subjects. This method used three targets and an initial reference file to estimate the HJC location. Once the HJC was located, the magnitudes between the HJC and the respective anterior superior iliac spine, and between the HJC and the respective lateral epicondyle, were assumed to remain constant. The primary concern in evaluating the new method was the effect of seated-posture movement, in particular leg splay and spinal flexion, on these assumptions. The results obtained with the new approach were compared with Seidel et al. and provided HJC locations with average differences of 3.8, 1.2 and 2.8 mm for spinal flexion in the anterior/posterior, medial/lateral and superior/inferior directions, respectively, and 2.3, 1.0 and 1.4 mm for knee splay. The proposed method provided better HJC estimation than the Bell et al. approach, particularly in the superior/inferior dimension.

18.
P Coffino. Gene 1988, 69(2):365-368
Messenger RNAs that have structurally unusual 5' leaders attract interest and provoke conjecture. Cloning and sequencing of two rodent ornithine decarboxylase (ODC) cDNAs, those for mouse [Kahana and Nathans, Proc. Natl. Acad. Sci. USA 82 (1985) 1673-1677] and, recently, as published in this journal, for rat [Van Kranen et al., Gene 60 (1987) 145-155], have indicated the presence of such features. In both cases, the leader is unusually long and contains multiple AUG start codons preceding the one that encodes the N terminus of the protein. In addition, the leader of the rat clone contains a 54-nt perfect inverted repeat. Because ODC expression appears to be regulated translationally, functional implications immediately suggest themselves. Certain unusual features of the mouse cDNA have proven artefactual [Brabant et al., Proc. Natl. Acad. Sci. USA 85 (1988) 2200-2204; Katz and Kahana, J. Biol. Chem. 263 (1988) 7604-7609]. It is likely that the putative leader sequence of the rat ODC cDNA also resulted from a cloning artefact.

19.
Bill Shipley. Oikos 2009, 118(1):152-159
Haegeman and Loreau published a paper that is primarily a criticism of a maximum entropy model of trait-based community assembly (by Shipley et al.) and purports to show the limitations of this method in ecology. However, they misunderstood the basic purpose, logic and justification of the maximum entropy formalism and, because of this, leveled criticisms of Shipley et al. that are unfounded. Part of the confusion can be traced to the sloppy presentation of the underlying approach in Shipley et al. The confusion arises because maximum entropy models are justified by information theory and Bayesian logic, whereas the interpretation that Haegeman and Loreau present rests on substantive empirical assumptions about microstate allocations and on a combinatorial argument that do not apply to maximum entropy models in general, and that I do not apply to my model in particular.

20.
We consider the problem of drawing superiority inferences on individual endpoints following non-inferiority testing. Röhmel et al. (2006) pointed this out as an important problem that had not been addressed by previous procedures, which only tested for global superiority. Röhmel et al. objected to incorporating the non-inferiority tests into the assessment of the global superiority test by exploiting the relationship between the two, since the results of the latter test then depend on the non-inferiority margins specified for the former. We argue that this is justified, besides the fact that it enhances the power of the global superiority test. We provide a closed testing formulation that generalizes the three-step procedure proposed by Röhmel et al. for two endpoints. For the global superiority test, Röhmel et al. suggest using the Läuter (1996) test, modified to make it monotone. The resulting test is not only complicated to use, but the modification does not readily extend to more than two endpoints, and it is in general less powerful than several of its competitors. This is verified in a simulation study. Instead, we suggest applying the one-sided likelihood ratio test used by Perlman and Wu (2004) or the union-intersection t_max test used by Tamhane and Logan (2004).
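For background (standard union-intersection reasoning, not a formula quoted from the paper), the t_max test for $K$ endpoints rejects the global null when

$$ t_{\max} = \max_{k=1,\dots,K} t_k > c_{\alpha}, $$

where $t_k$ is the one-sided t-statistic for endpoint $k$ and $c_{\alpha}$ is taken from the joint (multivariate t) null distribution so that the familywise error rate is $\alpha$.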
