Similar Literature
20 similar records found.
1.
Lancaster (1961) generalized Fisher's (1932) nonparametric procedure for combining independent p-values by transforming Pi from the i-th experiment to a chi-squared random variable with di degrees of freedom, where di need not equal 2. We explore the relationship between Lancaster's procedure and a weighted Lipták procedure (Koziol and Tuckwell, 1994) under which Pi is transformed to the standard normal scale. We investigate approximations to the null distribution of Lancaster's test statistic, chi-squared with d degrees of freedom. We find that the Cornish-Fisher (1960) expansions and the Lugannani-Rice (1980) saddlepoint approximations are quite accurate for non-integral values of d and for values of d as low as 20.
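Fisher's baseline procedure (the di = 2 case of Lancaster's generalization) can be sketched in a few lines. The closed-form survival function below is valid because the combined statistic has an even number of degrees of freedom (2k); this is an illustrative sketch, not the paper's Lancaster/Lipták machinery, and the function name is ours.

```python
import math

def fisher_combine(pvalues):
    """Fisher (1932): under H0, -2 * sum(log p_i) is chi-squared with
    2k degrees of freedom for k independent p-values (Lancaster's d_i = 2)."""
    k = len(pvalues)
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    # Survival function of chi-squared with even df 2k has a closed form:
    # P[X > x] = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = stat / 2.0
    combined_p = math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))
    return stat, combined_p

stat, p = fisher_combine([0.01, 0.04, 0.10])  # stat ~ 20.25, p ~ 0.0025
```

Lancaster's generalization would instead transform each Pi through the inverse chi-squared CDF with di degrees of freedom and sum, which requires a numerical quantile function not shown here.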

2.
When a trial involves an invasive laboratory test procedure or requires patients to commit to a restrictive test schedule, a large proportion of sampled patients may be lost through refusal to participate in the study. Incorporating this possible loss of patients into the sample size calculation is therefore important at the planning stage. In this paper, we generalize the sample size calculation procedure for the intraclass correlation by accounting for the random loss of patients at the beginning of a trial. We demonstrate that the simple ad hoc procedure, which inflates the sample size estimated in the absence of patient loss by the factor 1/po, where po is the retention probability for a randomly selected patient, is adequate when po is large (≥ 0.80). When po is small (i.e., a high refusal rate), however, this simple ad hoc procedure tends to underestimate the required sample size. Furthermore, we find that if the individual retention probability varies substantially among patients, the magnitude of this underestimation can be critical; the simple direct adjustment procedure should therefore be avoided in this situation.
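The ad hoc inflation rule discussed above is a one-liner; as the abstract notes, it is only adequate for large retention probabilities. A minimal sketch (the function name is ours):

```python
import math

def adjust_for_dropout(n_no_loss, p_o):
    """Inflate the no-loss sample size by 1/p_o, where p_o is the
    retention probability of a randomly selected patient.
    Adequate for large p_o (>= 0.80); tends to underestimate the
    required size when p_o is small or varies across patients."""
    if not 0 < p_o <= 1:
        raise ValueError("retention probability must be in (0, 1]")
    return math.ceil(n_no_loss / p_o)

adjust_for_dropout(100, 0.80)  # -> 125
```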

3.
Suppose it is desired to determine whether there is an association between any pair of p random variables. A common approach is to test H0 : R = I, where R is the usual population correlation matrix. A closely related approach is to test H0 : Rpb = I, where Rpb is the matrix of percentage bend correlations. Insofar as Type I errors are a concern, at a minimum any test of H0 should have a Type I error probability close to the nominal level when all pairs of random variables are independent. Currently, the Gupta-Rathie method is relatively successful at controlling the probability of a Type I error when testing H0 : R = I, but as illustrated in this paper, it can fail when sampling from nonnormal distributions. The main goal of this paper is to describe a new test of H0 : Rpb = I that continues to give reasonable control over the probability of a Type I error in the situations where the Gupta-Rathie method fails. Even under normality, the new method has advantages when the sample size is small relative to p. Moreover, when there is dependence but all correlations are equal to zero, the new method continues to give good control over the probability of a Type I error while the Gupta-Rathie method does not. The paper also reports simulation results on a bootstrap confidence interval for the percentage bend correlation.

4.
By a suitable transformation of the pairs of observations obtained in the successive periods of the trial, bioequivalence assessment in a standard comparative bioavailability study reduces to testing for equivalence of two continuous distributions from which unrelated samples are available. Let the two distribution functions be F(x) = P[X ≤ x] and G(y) = P[Y ≤ y], with (X, Y) an independent pair of real-valued random variables. An intuitively appealing way of putting the notion of equivalence of F and G into nonparametric terms is based on the distance of the functional P[X > Y] from the value it takes when F and G coincide. This leads to the problem of testing the null hypothesis H0 : P[X > Y] ≤ 1/2 − ε1 or P[X > Y] ≥ 1/2 + ε2 versus H1 : 1/2 − ε1 < P[X > Y] < 1/2 + ε2, with sufficiently small ε1, ε2 ∈ (0, 1/2). The testing procedure we derive for (H0, H1), which we propose to call the Mann-Whitney test for equivalence, consists of carrying out, in terms of the U-statistic estimator of P[X > Y], the uniformly most powerful level α test for an interval hypothesis about the mean of a Gaussian distribution with fixed variance. The test is shown to be asymptotically distribution-free with respect to the significance level. In addition, results of an extensive simulation study are presented which suggest that the new test controls the level even with sample sizes as small as 10. For normally distributed data, the loss in power relative to the optimal parametric procedure is found to be almost as small as in comparisons between the Mann-Whitney and the t-statistic in the conventional one- or two-sided setting, provided the power of the parametric test does not fall short of 80%.
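The construction can be approximated as follows. Note that this sketch replaces the paper's exact UMP interval test with a cruder two one-sided z-test (TOST-style), and uses the variance of the U-statistic that is exact only under F = G, so it is merely an asymptotic stand-in; all names are ours.

```python
import math
from itertools import product

def mw_equivalence(x, y, eps1=0.1, eps2=0.1, z_alpha=1.645):
    """Estimate pi = P[X > Y] by the Mann-Whitney U-statistic and test
    H1: 1/2 - eps1 < pi < 1/2 + eps2 via two one-sided z-tests.
    The standard error uses Var(U/(m*n)) = (m + n + 1)/(12*m*n), which is
    exact only under F = G -- a simplifying assumption of this sketch."""
    m, n = len(x), len(y)
    pi_hat = sum(xi > yj for xi, yj in product(x, y)) / (m * n)
    se = math.sqrt((m + n + 1) / (12 * m * n))
    z_lo = (pi_hat - (0.5 - eps1)) / se   # rejects pi <= 1/2 - eps1
    z_hi = ((0.5 + eps2) - pi_hat) / se   # rejects pi >= 1/2 + eps2
    return pi_hat, (z_lo > z_alpha and z_hi > z_alpha)

pi_hat, equivalent = mw_equivalence(list(range(100)), list(range(100)))
# pi_hat = 0.495; equivalence accepted at eps = 0.1
```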

5.
For two independent binomial proportions, Barnard (1947) introduced a method for constructing a non-asymptotic unconditional test by maximizing the probabilities over the 'classical' null hypothesis H0 = {(θ1, θ2) ∈ [0, 1]²: θ1 = θ2}. It is shown that this method is also useful when studying test problems for different null hypotheses such as, for example, shifted null hypotheses of the form H0 = {(θ1, θ2) ∈ [0, 1]²: θ2 ≤ θ1 ± Δ} for non-inferiority and one-sided superiority problems (including the classical null hypothesis with a one-sided alternative). We derive some results for the more general 'shifted' null hypotheses of the form H0 = {(θ1, θ2) ∈ [0, 1]²: θ2 ≤ g(θ1)}, where g is a non-decreasing curvilinear function of θ1. Two examples of such null hypotheses in the regulatory setting are given. It is shown that the usual asymptotic approximations by the normal distribution may be quite unreliable. Non-asymptotic unconditional tests (and the corresponding p-values) may therefore be an alternative, particularly because the effort required to compute non-asymptotic unconditional p-values for these more complex situations is no greater than in the classical situation. For 'classical' null hypotheses it is known that the number of possible p-values derived by the unconditional method is very large, albeit finite, and the same is true for the null hypotheses studied in this paper. In most of the situations investigated, Barnard's CSM test (1947), when adapted to the respective null space, again proves to be a very powerful test. A theorem is provided which, in addition to allowing fast algorithms to compute unconditional non-asymptotic p-values, fills a methodological gap in the calculation of exact unconditional p-values as implemented, for example, in StatXact 3 for Windows (1995).
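The core of the unconditional approach, maximizing a tail probability over the nuisance parameter, can be sketched for the classical one-sided null H0: θ1 = θ2. Two simplifications of this sketch: tables are ordered by the difference in sample proportions (Barnard's CSM ordering is more refined), and the maximization is over a grid rather than exact.

```python
from math import comb

def unconditional_p(x1, n1, x2, n2, grid=200):
    """One-sided unconditional p-value for H0: theta1 = theta2 (theta2
    larger under the alternative).  Tables at least as extreme as the
    observed one are those whose proportion difference is >= the observed
    difference; the p-value is the maximum of their total probability over
    the common nuisance parameter theta, scanned on a grid."""
    obs = x2 / n2 - x1 / n1
    p_max = 0.0
    for k in range(1, grid):
        th = k / grid
        total = 0.0
        for a in range(n1 + 1):
            pa = comb(n1, a) * th**a * (1 - th)**(n1 - a)
            for b in range(n2 + 1):
                if b / n2 - a / n1 >= obs - 1e-12:
                    total += pa * comb(n2, b) * th**b * (1 - th)**(n2 - b)
        p_max = max(p_max, total)
    return p_max

p = unconditional_p(0, 5, 4, 5)  # ~ 0.0107, maximized near theta = 0.5
```

For the shifted null hypotheses discussed above, the inner condition and the parameter space being scanned would change (e.g., scan (θ1, g(θ1)) along the boundary), but the maximize-then-report structure is the same.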

6.
Flocculating agents are used as auxiliaries to recover bacterial cells in downstream processes for polyhydroxyalkanoate production. However, little is known about Cupriavidus necator flocs. In this work, a new procedure for floc characterization through digital image analysis is presented and validated against the batch settling test. Average diameter, particle size distribution and morphological characteristics of the microbial aggregates were obtained from the flocculation/sedimentation of Cupriavidus necator DSM 545 cells using tannin as the flocculating agent. The experimental results demonstrated that the proposed method adequately determines the average floc diameter, with values around 150 μm, in agreement with the value obtained from the batch settling test. Nevertheless, a morphological characterization of Cupriavidus necator DSM 545 bioaggregates in terms of size distribution and regularity could only be performed by the image analysis procedure. The procedure allowed us to describe the regularity of bacterial flocs through the quantification of morphological parameters from Euclidean geometry [convexity (Conv) and form factor (FF)] and fractal geometry [surface fractal dimension (DBS)], which are important factors in the settling efficiency of aggregates.

7.
A statistical goodness-of-fit test, based on representing the sample observations by linked vectors, is developed. The direction and the length of the linked vectors are defined as functions of the expected values of the order statistics and the sample order statistics, respectively. The underlying method can be used to test distributional assumptions for any location-scale family. A test statistic Qn is introduced and some of its properties are studied. It is shown that the proposed test can be generalized to test whether two or more independent samples come from the same distribution. The test procedure provides a graphical method of identifying the true distribution when the null hypothesis is rejected.

8.
We consider the problem of comparing a set of p1 test treatments with a control treatment. This is to be accomplished in two stages as follows: In the first stage, N1 observations are allocated among the p1 treatments and the control, and the subset selection procedure of Gupta and Sobel (1958) is employed to eliminate “inferior” treatments. In the second stage, N2 observations are allocated among the (randomly) selected subset of p2(≤p1) treatments and the control, and joint confidence interval estimates of the treatment versus control differences are calculated using Dunnett's (1955) procedure. Here both N1 and N2 are assumed to be fixed in advance, and the so-called square root rule is used to allocate observations among the treatments and the control in each stage. Dunnett's procedure is applied using two different types of estimates of the treatment versus control mean differences: The unpooled estimates are based on only the data obtained in the second stage, while the pooled estimates are based on the data obtained in both stages. The procedure based on unpooled estimates uses the critical point from a p2-variate Student t-distribution, while that based on pooled estimates uses the critical point from a p1-variate Student t-distribution. The two procedures and a composite of the two are compared via Monte Carlo simulation. It is shown that the expected value of p2 determines which procedure yields shorter confidence intervals on the average. Extensions of the procedures to the case of unequal sample sizes are given. Applicability of the proposed two-stage procedures to a drug screening problem is discussed.
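The square-root rule mentioned above has a simple closed form under the common reading that the control receives √p times the per-treatment allocation: if each of the p treatments receives n observations, then n = N/(p + √p). A sketch under that assumption (ignoring rounding to integers; names are ours):

```python
import math

def sqrt_rule(N, p):
    """Split N observations among p test treatments and one control,
    giving the control sqrt(p) times the per-treatment allocation
    (the so-called square-root rule)."""
    n_treat = N / (p + math.sqrt(p))
    n_control = math.sqrt(p) * n_treat
    return n_treat, n_control

n_t, n_c = sqrt_rule(120, 4)  # each treatment gets 20, the control 40
```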

9.
An enrichment method for nitrogen fixing hydrogen bacteria is described. The procedure invariably resulted in the isolation of yellow-pigmented coryneform bacterial strains assigned to Corynebacterium autotrophicum. The procedure included a serial transfer in an ammonium-free mineral liquid medium under an atmosphere of 10% hydrogen, 5% oxygen, 10% carbon dioxide and 75% nitrogen, followed by a short alkali treatment and by streaking on nutrient broth-succinate agar. The ability to fix nitrogen was confirmed by the acetylene reduction test and by 15N2 incorporation.

10.
The sequential procedure for testing up to k upper outliers proposed by Kimber (1982) for the one-parameter exponential distribution is modified for the two-parameter exponential distribution. Further, null distributions of some test statistics for an upper outlier pair in a complete or censored sample from a two-parameter exponential distribution are given. Percentage points of the statistic T1 are tabulated.

11.
12.
J. Shimada and H. Yamakawa, Biopolymers 1988, 27(4): 675-682
The sedimentation coefficient sN of the DNA topoisomer with linking number N is evaluated as a function of N and chain length on the basis of a (circular) twisted wormlike chain, i.e., a special case of the helical wormlike chain. The evaluation is carried out by applying the Oseen-Burgers procedure of hydrodynamics to the cylinder model with the preaveraged Oseen tensor. The necessary mean reciprocal distance between two contour points is obtained by a Monte Carlo method. It is shown that sN increases as |ΔN| is increased from 0 in the range of small |ΔN|, where ΔN = N − N0, with N0 the number of helix turns in the linear DNA chain in the undeformed state. There is semiquantitative agreement between the Monte Carlo values and the experimental data obtained by Wang for sN.

13.
There are many situations in which grain distributions resulting from in situ hybridization of radioactively labeled probes to unique genes should be subjected to a statistical analysis. However, the problems posed by analysis of in situ hybridization data are not straightforward, and no completely satisfying method is currently available. We have developed a procedure in which the major and any number of minor site(s) of hybridization may be specifically located and the significance of each tested. This Zmax procedure first tests the overall distribution for departure from randomness and then identifies significantly overlabeled whole chromosomes (or chromosome arms or other large segments), a process that may be repeated to pinpoint significantly overlabeled regions within these chromosomes. We describe in detail the derivation of the Zmax statistic, present tables of significant Zmax levels, and show with examples how Zmax is used in tests of significance of in situ hybridization data.
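The first stage of such a procedure, scanning chromosomes for excess labeling, can be sketched as the maximum of per-chromosome binomial z-scores. The significance thresholds in the paper's Zmax tables (which correct for taking a maximum over chromosomes) are not reproduced here, and the function name is ours.

```python
import math

def zmax_scan(grains, lengths):
    """Under the null, a grain lands on chromosome j with probability
    proportional to its length.  Return the index and z-score of the
    most over-labeled chromosome (candidate hybridization site)."""
    total = sum(grains)
    L = sum(lengths)
    best_idx, best_z = -1, -math.inf
    for j, (g, l) in enumerate(zip(grains, lengths)):
        pi = l / L
        mu = total * pi
        sd = math.sqrt(total * pi * (1 - pi))
        z = (g - mu) / sd
        if z > best_z:
            best_idx, best_z = j, z
    return best_idx, best_z

idx, z = zmax_scan([30, 5, 5], [1.0, 1.0, 1.0])  # chromosome 0 stands out
```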

14.
Fluoren-9-carboxylic acid (FCA) acts not only as an auxin but also as a gibberellin antagonist. In the standard pea straight-growth test (S5 section) for auxin it stimulated elongation, with an optimum concentration of 10 mg/l. On the other hand, it inhibited elongation at 0.1 mg/l. This inhibitory effect was more marked when younger tissue (S1 section), which also responds to gibberellin, was used. Interaction of FCA and IAA in the S5 section showed that at higher concentrations of IAA there seemed to be a supraoptimal effect, indicating that FCA acted as an auxin. However, in the S1 section, the stimulating effect of GA3 was markedly inhibited by 0.1 mg/l FCA; 10 mg/l FCA was either additive or less than additive to GA3. In the cucumber hypocotyl test FCA itself was inactive up to 100 μg/plant, but it inhibited GA3-induced elongation. This inhibition was overcome by increasing the dosage of GA3. In the same material, IAA-induced elongation was not affected by FCA. These results indicate that whether FCA acts as an auxin or as a gibberellin antagonist depends on whether the tissue is sensitive to gibberellin and/or auxin.

15.
The kinetics of granulosa cell populations in two types of follicles in the ovaries of 28-day-old Bagg mice are investigated. The analysis includes estimates of the mean values and standard deviations of the transit times (TG1, TS, TG2 and TC), the doubling time TD, and the proliferative fraction p. First, the percentage of labelled mitoses curve (PLM curve) and the continuous labelling curve (CL curve) are estimated. Then a hypothesis concerning the cell kinetics of the granulosa cells in the two follicle types is set up. The normal distribution is chosen to simulate the probability density functions of the transit times. On the basis of the hypothesis, mathematical expressions for the PLM and CL curves are worked out. By fitting the calculated PLM curve to the experimental one it is possible to estimate the mean values and standard deviations of TG1, TS, and TG2. As a test of the hypothesis, the CL curve is calculated by means of the estimated parameter values and compared to the experimental one. The calculated PLM and CL curves are found to be in good agreement with the experimental data for both follicle types. It is concluded that the method is a useful procedure and that the choice of a normal distribution does not significantly limit the method in these investigations. Moreover, the hypothesis is plausible: the proliferative fraction is unity in the two follicle types, and there is no cell loss from the cell systems.

16.
In this paper we analyze a quantitative genetic character controlled by both major genes and polygenes. Assuming no epistatic effects, no linkage and no genetic-environmental interactions, we follow Tan and Chang (1972) to derive the probability distributions for segregating populations. The numbers of major genes and polygenes, and the additive and dominance effects of major genes and polygenes, are then estimated using the procedures developed in Tan and Chang (1972) and the Powell-Fletcher search procedure for maximum values. We consider the case involving data from P1, P2, F2, B1 (backcross to P1) and B2 (backcross to P2), as this type of experiment is common in practical applications. The analyses are applied to a simulated model generated using binomial, multinomial and normal variables, and to data from an experiment on kernel weight of sorghum provided to the authors by Professor George H. L. Liang of Kansas State University. The analysis of these data indicates clearly that the method derived in this paper is useful and desirable.

17.
The three-way nested ANOVA model yijk = μ + αi + Bij + γk + (αγ)ik + εijk, with α (treatment or group effects) and γ (time) both fixed effects and B (the individual effects) random and nested within α, is introduced and explored. The problems associated with the usual approach are explained. The alternative model yijk = fik + Bij + εijk is developed, and a method of evaluation via linear contrasts is recommended. The test statistic has the distribution of a convolution of F-distributions. Further, a method of investigating the assumptions of the model is offered, and a further generalization using path spaces (of dimension K) is developed. Here again the appropriate test statistic has the distribution of a convolution of F-distributions. This, combined with the method of linear contrasts, offers an elegant solution to the Behrens-Fisher problem.

18.
The two classical selection approaches for comparing experimental treatments with a control are combined to form an integrated approach. In this integrated approach, there is a preference zone (PZ) and an indifference zone (IZ), and the concept of a correct decision (CD) is defined differently in each of these zones. In the PZ, we are required to select the best treatment for a correct decision (CD1), but in the IZ, we define any selected subset to be correct (CD2) if it contains the best treatment among all the experimental treatments and the control treatment. We propose a single-stage procedure R to achieve the selection goals CD1 and CD2 simultaneously with certain probability requirements. It is shown that both the probability of a correct decision under PZ, P(CD1 | PZ), and the probability of a correct decision under IZ, P(CD2 | IZ), satisfy some monotonicity properties, and the least favorable configuration in the PZ and the worst configuration in the IZ are derived from these properties. We also derive formulas for the probabilities of correct decision and provide a brief table to illustrate the magnitude of the procedure parameters and the common sample sizes needed for various probability requirements and configurations.

19.
A simple, rapid bioluminescence test (BT) for the determination of lipid oxidation is described. The test utilizes an aldehyde-requiring dark mutant of Vibrio harveyi (M42) that emits light in the presence of long-chain (C8-C16) aliphatic aldehydes. The procedure consists of treating the oil or fat with Co2+ ion in an ethanolic medium at alkaline pH. This treatment facilitates the decomposition of the hydroperoxides into long-chain aldehydes, part of which is used by the bacteria to produce light. The test was evaluated with corn, soybean and safflower oils, and shows excellent correlation with the commonly used peroxide value assay.

20.
Partial pressure of CO2 (pCO2) and iron availability in seawater show corresponding changes due to biological and anthropogenic activities. The simultaneous change in these factors precludes an understanding of their independent effects on the ecophysiology of phytoplankton. In addition, there is a lack of data regarding the interactive effects of these factors on phytoplankton cellular stoichiometry, which is a key driving factor for the biogeochemical cycling of oceanic nutrients. Here, we investigated the effects of pCO2 and iron availability on the elemental composition (C, N, P, and Si) of the diatom Pseudo-nitzschia pseudodelicatissima (Hasle) Hasle in dilute batch cultures under four pCO2 (~200, ~380, ~600, and ~800 μatm) and five dissolved inorganic iron (Fe′; ~5, ~10, ~20, ~50, and ~100 pmol · L−1) conditions. Our experimental procedure successfully overcame the problems associated with simultaneous changes in pCO2 and Fe′ by independently manipulating carbonate chemistry and iron speciation, which allowed us to evaluate the individual effects of pCO2 and iron availability. We found that the C:N ratio decreased significantly only with an increase in Fe′, whereas the C:P ratio increased significantly only with an increase in pCO2. Both Si:C and Si:N ratios decreased with increasing pCO2 and Fe′. Our results indicate that changes in pCO2 and iron availability could influence the biogeochemical cycling of nutrients in future high-CO2 oceans and, similarly, during the time course of phytoplankton blooms. Moreover, pCO2 and iron availability may also have affected oceanic nutrient biogeochemistry in the past, as these conditions have changed markedly over the Earth's history.
