Similar Articles
20 similar articles found.
1.
In addition to hypothesis optimality, the evaluation of clade (group, edge, split, node) support is an important aspect of phylogenetic analysis. Here we clarify the logical relationship between support and optimality and formulate adequacy conditions for support measures. Support, S, and optimality, O, are both empirical knowledge claims about the strength of hypotheses, h1, h2, …, hn, in relation to evidence, e, given background knowledge, b. Whereas optimality refers to the absolute strength of hypotheses, support refers to the relative strength of hypotheses. Consequently, support and optimality are logically related such that they vary in direct proportion to each other, S(h | e,b) ∝ O(h | e,b). Furthermore, in order for a support measure to be objective it must quantify support as a function of explanatory power. For example, Goodman–Bremer support and ratio of explanatory power (REP) support satisfy the adequacy requirement S(h | e,b) ∝ O(h | e,b) and calculate support as a function of explanatory power. As such, these are adequate measures of objective support. The equivalent measures for statistical optimality criteria are the likelihood ratio (or log‐likelihood difference) and likelihood difference support measures for maximum likelihood and the posterior probability ratio and posterior probability difference support measures for Bayesian inference. These statistical support measures satisfy the adequacy requirement S(h | e,b) ∝ O(h | e,b) and to that extent are internally consistent; however, they do not quantify support as a function of explanatory power and therefore are not measures of objective support. Neither the relative fit difference (RFD; relative GB support) nor any of the parsimony (bootstrap and jackknife character resampling) or statistical [bootstrap character resampling, Markov chain Monte Carlo (MCMC) clade frequencies] support measures based on clade frequencies satisfy the adequacy condition S(h | e,b) ∝ O(h | e,b) or calculate support as a function of explanatory power. As such, they are not adequate support measures. © The Willi Hennig Society 2008.

2.
X. Yathindra  V. S. R. Rao 《Biopolymers》1971,10(10):1891-1900
The characteristic ratio CN = 〈r²〉0/(Nlv²) of the β-D (1 → 4′)-linked polysaccharides xylan and mannan has been computed as a function of the angle τ at the bridge oxygen atom and the degree of polymerization N. The calculated values of the characteristic ratio CN are very high relative to their free rotational dimensions. The characteristic ratio of these polysaccharides converges to the asymptotic value at low degree of polymerization at higher τ values. The low values of the calculated characteristic ratio of xylan compared to cellulose and mannan for the same τ value indicate that the former is more flexible and assumes a compact configuration. A pronounced difference in the values of the characteristic ratio CN of cellulose and mannan has also been observed at lower τ angles (<120°). On the other hand, nearly the same values of CN have been obtained at higher τ angles (120°–125°), which suggests that cellulose and mannan may have similar configurations in certain solvents.

3.
Chris J. Lloyd 《Biometrics》2010,66(3):975-982
Summary Clinical trials data often come in the form of low‐dimensional tables of small counts. Standard approximate tests such as score and likelihood ratio tests are imperfect in several respects. First, they can give quite different answers from the same data. Second, the actual type‐1 error can differ significantly from nominal, even for quite large sample sizes. Third, exact inferences based on these can be strongly nonmonotonic functions of the null parameter and lead to confidence sets that are discontiguous. There are two modern approaches to small sample inference. One is to use so‐called higher order asymptotics (Reid, 2003, Annals of Statistics 31, 1695–1731) to provide an explicit adjustment to the likelihood ratio statistic. The theory for this is complex but the statistic is quick to compute. The second approach is to perform an exact calculation of significance assuming the nuisance parameters equal their null estimate (Lee and Young, 2005, Statistics and Probability Letters 71, 143–153), which is a kind of parametric bootstrap. The purpose of this article is to explain and evaluate these two methods, for testing whether a difference in probabilities p2 − p1 exceeds a prechosen noninferiority margin δ0. On the basis of an extensive numerical study, we recommend bootstrap P‐values as superior to all other alternatives. First, they produce practically identical answers regardless of the basic test statistic chosen. Second, they have excellent size accuracy and higher power. Third, they vary much less erratically with the null parameter value δ0.
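The parametric-bootstrap idea recommended above can be sketched as follows: estimate the nuisance parameter under the null constraint p2 − p1 = δ0, simulate tables from that constrained estimate, and take the p-value as the proportion of simulated statistics at least as extreme as the observed one. The score-type statistic, the numerical constrained estimation, and the example counts below are illustrative assumptions, not the article's implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def constrained_mle(x1, n1, x2, n2, d0):
    """MLE of (p1, p2) under the constraint p2 - p1 = d0."""
    lo, hi = max(0.0, -d0) + 1e-9, min(1.0, 1.0 - d0) - 1e-9
    def negloglik(p1):
        p2 = p1 + d0
        return -(x1 * np.log(p1) + (n1 - x1) * np.log(1 - p1)
                 + x2 * np.log(p2) + (n2 - x2) * np.log(1 - p2))
    res = minimize_scalar(negloglik, bounds=(lo, hi), method="bounded")
    return res.x, res.x + d0

def z_stat(x1, n1, x2, n2, d0):
    """Score-type statistic with variance evaluated at the constrained MLE."""
    p1t, p2t = constrained_mle(x1, n1, x2, n2, d0)
    se = np.sqrt(p1t * (1 - p1t) / n1 + p2t * (1 - p2t) / n2)
    return (x2 / n2 - x1 / n1 - d0) / se

def bootstrap_pvalue(x1, n1, x2, n2, d0, B=2000, seed=1):
    """Parametric bootstrap: simulate under the constrained estimate and return
    the proportion of resampled statistics at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    z_obs = z_stat(x1, n1, x2, n2, d0)
    p1t, p2t = constrained_mle(x1, n1, x2, n2, d0)
    x1b = rng.binomial(n1, p1t, size=B)
    x2b = rng.binomial(n2, p2t, size=B)
    zb = np.array([z_stat(a, n1, b, n2, d0) for a, b in zip(x1b, x2b)])
    return np.mean(zb >= z_obs)

# Illustrative counts: 84/100 vs 90/100, noninferiority margin delta0 = -0.10
print(bootstrap_pvalue(x1=84, n1=100, x2=90, n2=100, d0=-0.10))
```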

4.
5.
Comparing fluctuating asymmetry (FA) between different traits can be difficult because traits vary at different scales. FA is generally quantified either as the variance of the difference between left and right (σ²L−R) or the mean of the absolute value of this difference (μ|R−L|). Corrections for scale differences are obtained by dividing by trait size mean. We show that a third index, one minus the correlation coefficient between left and right (1 − rL,R), is equivalent to σ²L−R standardized by trait size variance. The indices are compared with Monte Carlo simulations. All achieve the expected correction for scale differences. High type I error rates (false indication of differences) occur only for σ²L−R and μ|R−L| if trait sizes close to or below 0 occur. 1 − rL,R with a bootstrap test always has low error rates. Recommendation of the index to be used should be based on whether standardization of FA by trait size mean or trait size variance is preferred. A survey of 36 traits in the Speckled Wood Butterfly (Pararge aegeria) indicated that σ²L−R is slightly more strongly correlated with trait size variance than with trait size mean. Thus 1 − rL,R seems to be the superior index and should be reported when FA of different traits is compared.
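A minimal numerical sketch of the three indices discussed above, on simulated left/right measurements (not the butterfly data): it computes σ²L−R, μ|R−L| and 1 − rL,R, and checks that 1 − rL,R is approximately σ²L−R divided by twice the trait size variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated trait: a "true" size per individual plus independent left/right noise
true_size = rng.normal(50.0, 5.0, size=500)          # trait size varies among individuals
left = true_size + rng.normal(0.0, 0.8, size=500)
right = true_size + rng.normal(0.0, 0.8, size=500)

d = left - right
var_LR = np.var(d, ddof=1)                           # sigma^2_(L-R)
mean_abs = np.mean(np.abs(d))                        # mu_|R-L|
one_minus_r = 1.0 - np.corrcoef(left, right)[0, 1]   # 1 - r_(L,R)

# 1 - r equals var(L-R) standardized by trait size variance when var(L) ~ var(R)
pooled_var = 0.5 * (np.var(left, ddof=1) + np.var(right, ddof=1))
print(var_LR, mean_abs, one_minus_r, var_LR / (2.0 * pooled_var))
```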

6.
A table is given for determining minimal sample sizes n1 = n2 = n for testing the hypothesis of equality of the location parameters a1 and a2 of two two-parameter exponential distributions at a first-kind risk of α = 0.01 and 0.05, in such a way that the second-kind risk β ≤ β0 as long as |a1 − a2| > d.

7.
For the model y = β0 + β1x + e (model I of linear regression), the literature provides confidence estimators for an unknown position x0 at which either the expectation of y is given (see FIELLER, 1944; FINNEY, 1952) or realizations of y are given (see GRAYBILL, 1961). These confidence regions with level 1 − α need not be intervals. The occurrence of interval shape is a random event. Its probability is equal to the power of the t test for the examination of the hypothesis H: β1 = 0. The papers mentioned above claim to provide confidence intervals with level 1 − α. But because of the restriction of (1 − α)-confidence regions to intervals, the true confidence probability is the conditional probability Wc = P(the confidence region covers x0 | the region has interval shape). Here this conditional probability is shown to be less than 1 − α. Evidence on the possible deviations from 1 − α has been obtained by simulations.
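The point above can be illustrated with a small simulation, sketched below under assumed values for the design, slope and error variance: the Fieller-type region for x0 is a finite interval exactly when the t test of β1 = 0 rejects, and coverage conditional on interval shape drops below 1 − α.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, alpha = 10, 0.05
x = np.linspace(0, 1, n)
beta0, beta1, sigma = 1.0, 0.5, 0.5       # weak slope so non-interval regions occur
x0_true = 0.4
y0 = beta0 + beta1 * x0_true              # given mean response; true x0 = 0.4
tcrit = stats.t.ppf(1 - alpha / 2, n - 2)

def fieller_region(y):
    xbar = x.mean()
    Sxx = np.sum((x - xbar) ** 2)
    b1 = np.sum((x - xbar) * (y - y.mean())) / Sxx
    b0 = y.mean() - b1 * xbar
    s2 = np.sum((y - b0 - b1 * x) ** 2) / (n - 2)
    # Region = {x0 : (b0 + b1*x0 - y0)^2 <= t^2 s^2 (1/n + (x0 - xbar)^2 / Sxx)},
    # a quadratic inequality; it is a finite interval iff the x^2 coefficient is positive,
    # i.e. iff |t| for testing beta1 = 0 exceeds tcrit.
    A = b1 ** 2 - tcrit ** 2 * s2 / Sxx
    is_interval = A > 0
    covers = (b0 + b1 * x0_true - y0) ** 2 <= tcrit ** 2 * s2 * (1 / n + (x0_true - xbar) ** 2 / Sxx)
    return is_interval, covers

res = np.array([fieller_region(beta0 + beta1 * x + rng.normal(0, sigma, n))
                for _ in range(20000)])
intervals, covers = res[:, 0].astype(bool), res[:, 1].astype(bool)
print("P(interval shape)        :", intervals.mean())         # ~ power of the t test of beta1 = 0
print("Unconditional coverage   :", covers.mean())             # ~ 1 - alpha
print("Coverage | interval shape:", covers[intervals].mean())  # below 1 - alpha
```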

8.
S-sample smooth goodness of fit tests may be constructed using components from one-sample goodness of fit testing. Each sample could be assessed for consistency with a target distribution using these components, although that is not our objective here. Contrasts in the components may be used to assess consistency of the samples with each other. If all the samples are consistent, we could then conveniently perform a one-sample goodness of fit test for the target distribution. If the samples are not consistent, an LSD-type analysis can be performed on the one-sample components to identify where the differences between samples occur. This approach gives a detailed and informative scrutiny of the data.

9.
The paper considers methods for testing H0: β1 = … = βp = 0, where β1, …, βp are the slope parameters in a linear regression model, with an emphasis on p = 2. It is known that even when the usual error term is normal, but heteroscedastic, control over the probability of a type I error can be poor when using the conventional F test in conjunction with the least squares estimator. When the error term is nonnormal, the situation gets worse. Another practical problem is that power can be poor under even slight departures from normality. Liu and Singh (1997) describe a general bootstrap method for making inferences about parameters in a multivariate setting that is based on the general notion of depth. This paper studies the small-sample properties of their method when applied to the problem at hand. It is found that there is a practical advantage to using Tukey's depth versus the Mahalanobis depth when using a particular robust estimator. When using the ordinary least squares estimator, the method improves upon the conventional F test, but practical problems remain when the sample size is less than 60. In simulations, using Tukey's depth with the robust estimator gave the best results, in terms of type I errors, among the five methods studied.

10.
Genetic correlations within a trait across environments (rg) are important in the analysis of phenotypic plasticity. Not all methods are, however, equally reliable. An overview of all the different methods for estimating rg from one-generation data sets is given. Formulae for the relationship between causal variance components and family means are derived. When these formulae are used, covariances derived from family means, previously thought to be incorrect, are exactly the same as those derived with the ANOVA method. The bias, precision and power of the different methods are compared with Monte Carlo simulations. For all methods bias is small and precision is high for the large balanced data sets analyzed here, except when the variance in one or both of the environments is close to 0. Significance testing causes more problems. Confidence intervals with or without z-transformation are not suitable for testing, nor is testing for G × E interaction in an ANOVA suitable for testing whether rg differs from 1. The F test in a mixed model ANOVA and a likelihood ratio test in a REML analysis can be used for testing a difference from 0 but not from 1 or other values. Jackknife and bootstrap, however, are suitable for testing differences from 0, 1 and other values, though negative variances can make these tests difficult to apply.
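A sketch of the bootstrap test mentioned above, using the correlation of family means as the estimate of rg; the simulated family means and the percentile interval below are simplifying assumptions, not the estimators evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: family means of one trait measured in two environments
n_fam = 60
genetic = rng.normal(0, 1, n_fam)
env1 = genetic + rng.normal(0, 0.5, n_fam)        # family means, environment 1
env2 = 0.8 * genetic + rng.normal(0, 0.5, n_fam)  # family means, environment 2

def rg(e1, e2):
    return np.corrcoef(e1, e2)[0, 1]

def bootstrap_ci(e1, e2, B=5000, level=0.95):
    """Percentile bootstrap interval for rg, resampling whole families."""
    idx = rng.integers(0, len(e1), size=(B, len(e1)))
    boots = np.array([rg(e1[i], e2[i]) for i in idx])
    return np.percentile(boots, [100 * (1 - level) / 2, 100 * (1 + level) / 2])

lo, hi = bootstrap_ci(env1, env2)
print(f"rg = {rg(env1, env2):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
print("differs from 0:", not (lo <= 0 <= hi))
print("differs from 1:", not (lo <= 1 <= hi))
```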

11.
Total lengths (LT) of 50 free-swimming fish in a tank, silver carp Hypophthalmichthys molitrix and rainbow trout Oncorhynchus mykiss, were measured using a DIDSON (Dual-frequency IDentification SONar) camera. Using Sound Metrics software, multiple measurements of each fish (LT, side aspect angle and distance from the camera) at different times were analysed by two experienced operators, while a subset of data was analysed by two inexperienced operators. The main result showed high variability in intra-fish LT measurements. The number of measurements required to minimise errors and to obtain robust fish measurements (true LT ± 3 cm) was estimated by a bootstrap method. Three to five measurements per fish were recommended for fish surveys in rivers. In this experimental study, which aimed to reproduce river conditions, no effect of fish position (side aspect angle and distance from the camera) was detected, but an operator effect (partially explained by training) was observed. General linear mixed models also showed that lengths of the smallest fish (LT < 57 cm) were overestimated and lengths of the largest fish (LT > 57 cm) were underestimated in comparison with their true lengths. In conclusion, we highlight that this technology, like any monitoring method, returns imperfect observations. We advise DIDSON users to ensure that measurements are carried out correctly in order to draw accurate conclusions from this new technology.
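A sketch of the kind of bootstrap calculation described above: given repeated length measurements of a single fish of known length, estimate how many measurements are needed for their mean to fall within ±3 cm of the true LT with high probability. The measurement error, the fish length and the 95% criterion are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

true_LT = 60.0                                     # true total length (cm), known in the tank study
measurements = true_LT + rng.normal(0, 4.0, 30)    # repeated DIDSON measurements of one fish

def prob_within(meas, k, tol=3.0, B=5000):
    """Bootstrap probability that the mean of k resampled measurements
    lies within +/- tol of the true length."""
    resampled = rng.choice(meas, size=(B, k), replace=True)
    return np.mean(np.abs(resampled.mean(axis=1) - true_LT) <= tol)

for k in range(1, 11):
    p = prob_within(measurements, k)
    print(f"{k} measurement(s): P(|mean - true| <= 3 cm) = {p:.2f}")
```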

12.
We performed long time simulations using the |D1> approximation for the solution of the Davydov Hamiltonian. In addition we computed expectation values of the relevant operators with the state (D/J)|D1> and the deviation |> from the exact solution over long times, namely 10 ns. We found that on this very long time scale the |D1> ansatz is very close to an exact solution, with expectation values of the relevant physical observables in the state (D/J)|D1> being about 5–6 orders of magnitude larger than in the deviation state |>. In the intermediate time scale of the ps range such errors, as known from our previous work, are somewhat larger, but still more or less negligible. Thus we also report results from an investigation of the very short time (in the range 0–0.4 ps) behaviour of the |D1> state compared with that of an expansion of the exact solution in powers of time t. This expansion is reliable for about 0.12 ps for special cases, as shown in the previous paper. However, the accuracy of the exactly known value of the norm and the expectation value of the Hamiltonian finally indicates up to what time a given expansion is valid, as also shown in the preceding paper. The comparison of the expectation values of the operators representing the relevant physical observables, formed with the third order wave function, with the corresponding results of |D1> simulations has shown that our expansion is valid up to a time of roughly 0.10–0.15 ps. Within this time the second and third order corrections turned out to be not very important. This is due to the fact that our first order state already contains some terms of the expansion, summed up to infinite order. Further, we found good agreement between the results obtained with our expansion and those from the corresponding |D1> simulations within a time of about 0.10 ps. At later times, the factors with explicit powers of t in second and third order become dominant, making the expansion meaningless. Possibilities for the use of such expansions for larger times are described. Altogether we have shown (together with previous work on medium times) that the |D1> state, although of approximative nature, is very close to an exact solution of the Davydov model on time scales from some femtoseconds up to nanoseconds. Especially the very short time region is of importance, because in this time a possible soliton formation from the initial excitation would start.

13.
Suppose it is desired to determine whether there is an association between any pair of p random variables. A common approach is to test H0: R = I, where R is the usual population correlation matrix. A closely related approach is to test H0: Rpb = I, where Rpb is the matrix of percentage bend correlations. In so far as type I errors are a concern, at a minimum any test of H0 should have a type I error probability close to the nominal level when all pairs of random variables are independent. Currently, the Gupta-Rathie method is relatively successful at controlling the probability of a type I error when testing H0: R = I, but as illustrated in this paper, it can fail when sampling from nonnormal distributions. The main goal in this paper is to describe a new test of H0: Rpb = I that continues to give reasonable control over the probability of a type I error in the situations where the Gupta-Rathie method fails. Even under normality, the new method has advantages when the sample size is small relative to p. Moreover, when there is dependence, but all correlations are equal to zero, the new method continues to give good control over the probability of a type I error while the Gupta-Rathie method does not. The paper also reports simulation results on a bootstrap confidence interval for the percentage bend correlation.

14.
In applied statistics it is customary to have to obtain a one‐ or two‐tail confidence interval for the difference d = p2 − p1 between two independent binomial proportions. Traditionally, one is looking for a classic and non‐symmetric interval (with respect to zero) of the type d ∈ [δL, δU], d ≤ δ0 or d ≥ δ0. However, in clinical trials, equivalence studies, vaccination efficacy studies, etc., and even after performing the classic homogeneity test, intervals of the type |d| ≤ Δ0 or |d| ≥ Δ0, where Δ0 > 0, may be necessary. In all these cases it is advisable to obtain the interval by inverting the appropriate test. The advantage of this procedure is that the conclusions obtained using the test are compatible with those obtained using the interval. The article shows how this is done using the newly published exact and asymptotic unconditional tests. The programs for performing these tests may be obtained at URL http://www.ugr.es/~bioest/software.htm.
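A minimal sketch of the test-inversion idea: scan candidate values d0 of d = p2 − p1, compute a two-sided p-value at each d0 from an asymptotic score-type statistic (with the variance taken at a numerically constrained MLE), and report the range of d0 that is not rejected. This is a generic asymptotic inversion for illustration only, not the exact and asymptotic unconditional tests implemented in the linked software, and the counts are made up.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def z_at(x1, n1, x2, n2, d0):
    """Score-type z for H0: p2 - p1 = d0, variance at the constrained MLE."""
    def nll(p1):
        p2 = p1 + d0
        return -(x1 * np.log(p1) + (n1 - x1) * np.log1p(-p1)
                 + x2 * np.log(p2) + (n2 - x2) * np.log1p(-p2))
    lo, hi = max(0.0, -d0) + 1e-9, min(1.0, 1.0 - d0) - 1e-9
    p1t = minimize_scalar(nll, bounds=(lo, hi), method="bounded").x
    p2t = p1t + d0
    se = np.sqrt(p1t * (1 - p1t) / n1 + p2t * (1 - p2t) / n2)
    return (x2 / n2 - x1 / n1 - d0) / se

def inverted_ci(x1, n1, x2, n2, alpha=0.05):
    """Confidence set for d = p2 - p1: the grid values d0 not rejected two-sided."""
    grid = np.linspace(-0.999, 0.999, 1999)
    keep = [d0 for d0 in grid
            if 2 * (1 - norm.cdf(abs(z_at(x1, n1, x2, n2, d0)))) >= alpha]
    return min(keep), max(keep)

print(inverted_ci(x1=15, n1=40, x2=24, n2=40))   # illustrative counts
```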

15.
When the synaptosomal cytosol fraction from rat brain was chromatographed on a DEAE-cellulose column and assayed for protein phosphatase activities toward τ factor and histone H1, two peaks of activity, termed peak 1 (major) and peak 2 (minor), were separated. Each peak was in a single form on Sephacryl S-300 column chromatography. Both peaks 1 and 2 dephosphorylated τ factor phosphorylated by Ca2+/calmodulin-dependent protein kinase II and the catalytic subunit of cyclic AMP-dependent protein kinase. The Km values were in the range of 0.42–0.84 μM for τ factor. There were no differences in kinetic properties of dephosphorylation between the substrates phosphorylated by the two kinases. The phosphatase activities did not depend on Ca2+, Mn2+, and Mg2+. Immunoprecipitation and immunoblotting analysis using polyclonal antibodies to the catalytic subunit of brain protein phosphatase 2A revealed that both protein phosphatases are the holoenzymic forms of protein phosphatase 2A. Aluminum chloride inhibited the activities of both peaks 1 and 2 with IC50 values of 40–60 μM. These results suggest that dephosphorylation of τ factor in presynaptic nerve terminals is controlled mainly by protein phosphatase 2A and that the neurotoxic effect of aluminum seems to be related mostly to inhibition of dephosphorylation of τ factor.

16.
A Kerr effect study is reported in which measurements have been made of the magnitudes of both the steady maxima and the decays of the birefringence of solutions of ovalbumin, bovine γ-globulin, and β-lactoglobulin. For each protein, results are presented on solutions covering the concentration range of 0.3–1.7 g/100 ml in order to obtain, by extrapolation, values of the specific Kerr constant Ksp and the birefringence relaxation time τ25,w at zero concentration. The relaxation times thus obtained for ovalbumin (18.3 nsec) and γ-globulin (157 nsec) have been shown to be compatible with molecular models and dimensions presented in the literature. All experiments showed the need for careful extrapolation to zero concentration if reliable parameters are to be obtained: for example, a 1% solution of ovalbumin or 1.5% solution of γ-globulin would give values for τ which are 50% too high when compared with the true value at infinite dilution. The gradual fall in τ for γ-globulin as the pH was lowered from 6.7 to 3.0 was also studied for three solvents. Fisher's generalized model for the arrangement of the polar residues around the outside of a globular protein has been developed to account for ellipsoidal particles and has been used to demonstrate the suitability and usefulness of this treatment in predicting the conformation and dimensions of these proteins. Rather unusual birefringence traces were obtained for β-lactoglobulin, which may indicate the dissociation of aggregates, or of the parent molecule into its subunits, under the influence of strong electric fields.

17.
Copper(II)-DNA denaturation. II. The model of DNA denaturation
D C Liebe  J E Stuehr 《Biopolymers》1972,11(1):167-184
In a continuing study of the denaturation of DNA as brought about by Cu(II) ions, results are presented for the dependence of Tm and τ (the terminal relaxation time) on ionic strength, pH, reactant concentrations, and temperature. Maximum stability of the double helix, as reflected by the longest relaxation times and highest Tm values, was observed between pH 5.3 and 6.2. Outside this range both Tm and τ decreased sharply. A second, faster relaxation time was deduced from the kinetic curves. The apparent activation energies of the rapid and slow (“terminal”) relaxations were found to be 12 and 55 kcal/mole, respectively. Several lines of evidence led to the conclusions that (1) the rate-determining step in DNA denaturation, when occurring in the transition region, is determined by chemical events and (2) the interactions which are disrupted kinetically in the rate-determining step are those which account for the major portion of the thermal (Tm) stability of helical DNA.

18.
B Lubas  T Wilczok 《Biopolymers》1971,10(8):1267-1276
The molecular mobility of calf thymus DNA molecules in solution has been discussed in terms of the correlation time τ calculated from measurements of the longitudinal T1 and transverse T2 magnetic relaxation times. The influence of DNA concentration and ionic strength of the solution upon the freedom of movement of DNA molecules was studied for native and denatured DNA and also during the thermal helix-coil transition. The dependence of τ on temperature was examined by comparing the correlation time τt at a given temperature with the correlation time τ20 at 20°C. The molecular rotation of DNA at 20°C and at the higher ionic strengths of 0.15 and 1.0 M NaCl is described by τ values of the order of 1.0–1.2 × 10−8 and was reduced slightly with increase of temperature below the helix-coil transition. The molecular rotation of DNA in 0.02 M NaCl was lower at 20°C as compared to DNA in solvents with higher NaCl concentrations and increases rapidly with increase of temperature in the range 20–60°C. The values of the correlation time are characterized by a fast increase at temperatures above the spectrophotometrically determined beginning of the melting curve. The beginning of this increase is observed at about 65, 80, and 85°C for DNA in 0.02, 0.15, and 1.0 M NaCl, respectively. Values of the correlation time for denatured DNA are in all cases about 1.1–1.4 times those for native DNA. The obtained results are discussed in terms of the conformation of DNA molecules in solution as well as in terms of water dipole binding in DNA hydration shells.

19.
The quantitative ion character–activity relationship (QICAR) approach was used to correlate nine ion characteristics with ion toxicity order numbers (TON) for 19 metals. A multi-parameter regression model was used to simulate the metal toxicity order numbers after minimization of the multicollinearity among the ion characteristics using principal component analysis (PCA). The toxicity order numbers of the metals increased with the positively correlated variables AN, Xm²r, AN/ΔIP, AW, and Xm, and decreased with the negatively correlated variables ΔE0, |log KOH|, AR/AW, and σp. The regression model provided high prediction ability, with Nash–Sutcliffe simulation efficiency coefficients (NSC) of 0.93 for the modeling phase and 0.87 for the testing phase. The model may be successfully employed to predict the stability constants and metal toxicity and used as a first step in further risk assessment modeling.
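A sketch of the modeling pipeline described above: standardize the ion characteristics, reduce multicollinearity with PCA, regress the toxicity order numbers on the leading components, and score the fit with the Nash–Sutcliffe efficiency. The stand-in data, the train/test split and the number of components retained are assumptions, not the study's data or settings.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Illustrative stand-in data: 19 metals x 9 ion characteristics, plus toxicity order numbers
X = rng.normal(size=(19, 9))
y = X @ rng.normal(size=9) + rng.normal(scale=0.5, size=19)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / total sum of squares about the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=5, random_state=1)

scaler = StandardScaler().fit(X_tr)
pca = PCA(n_components=4).fit(scaler.transform(X_tr))   # 4 components retained: an assumption
reg = LinearRegression().fit(pca.transform(scaler.transform(X_tr)), y_tr)

for name, Xs, ys in [("modeling", X_tr, y_tr), ("testing", X_te, y_te)]:
    pred = reg.predict(pca.transform(scaler.transform(Xs)))
    print(f"NSE ({name} phase): {nse(ys, pred):.2f}")
```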

20.
J Shimada  H Yamakawa 《Biopolymers》1988,27(4):675-682
The sedimentation coefficient sN of the DNA topoisomer with linking number N is evaluated as a function of N and chain length on the basis of a (circular) twisted wormlike chain, i.e., a special case of the helical wormlike chain. The evaluation is carried out by an application of the Oseen–Burgers procedure of hydrodynamics to the cylinder model with the preaveraged Oseen tensor. The necessary mean reciprocal distance between two contour points is obtained by a Monte Carlo method. It is shown that sN increases as |ΔN| is increased from 0 in the range of small |ΔN|, where ΔN = N − N0, with N0 the number of helix turns in the linear DNA chain in the undeformed state. It is found that there is semiquantitative agreement between the Monte Carlo values and the experimental data obtained by Wang for sN.
