Similar Literature (20 records)
1.
The aim of this contribution is to give an overview of approaches to testing for non-inferiority of one out of two binomial distributions as compared to the other in settings involving independent samples (the paired-samples case is not considered here, but the major conclusions and recommendations can be shown to hold for both sampling schemes). In principle, there are infinitely many ways of defining (one-sided) equivalence in any multiparameter setting. In the binomial two-sample problem, the following three choices of a measure of dissimilarity between the underlying distributions are of major importance for real applications: the odds ratio (OR), the relative risk (RR), and the difference (DEL) of the two binomial parameters. It is shown that for all three ways of formulating the hypotheses of a non-inferiority problem concerning two binomial proportions, reasonable testing procedures providing exact control over the type-I error risk are available. As a particularly useful and versatile way of handling mathematically nonnatural parametrizations like RR and DEL, the approach through Bayesian posterior probabilities of hypotheses with respect to some non-informative reference prior has much to recommend it. In order to ensure that the corresponding testing procedure is valid in the classical, i.e. frequentist, sense, it suffices to use straightforward computational techniques yielding suitably corrected nominal significance levels. In view of the availability of testing procedures with satisfactory properties for all parametrizations of main practical interest, the discussion of the pros and cons of these methods has to focus on the question of which of the underlying measures of dissimilarity should be preferred on grounds of logic and intuition.
It is argued that the OR clearly merits preference with regard to these criteria as well, since the non-inferiority hypotheses defined in terms of the other parametric functions are bounded by lines that cross the boundaries of the parameter space. From this fact, we conclude that the exact Fisher-type test for one-sided equivalence provides the most reasonable approach to the confirmatory analysis of non-inferiority trials involving two independent samples of binary data. The marked conservatism of the nonrandomized version of this test can largely be removed by using a suitably increased nominal significance level (depending, in addition to the target level, on the sample sizes and the equivalence margin), or by replacing it with a Bayesian test for non-inferiority with respect to the odds ratio.
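A minimal sketch of the exact Fisher-type test for one-sided equivalence, assuming the conditional formulation in which the group-1 success count follows Fisher's noncentral hypergeometric distribution under H0: OR = margin. The function name and all counts are illustrative, not from the paper:

```python
from math import comb

def fisher_noninferiority_pvalue(x1, n1, x2, n2, or_margin):
    """Exact one-sided p-value P(X >= x1) for the group-1 success count,
    conditioning on both 2x2 margins, under H0: odds ratio = or_margin
    (Fisher's noncentral hypergeometric distribution)."""
    m = x1 + x2                              # fixed total number of responders
    lo, hi = max(0, m - n2), min(n1, m)      # support of X
    w = {x: comb(n1, x) * comb(n2, m - x) * or_margin ** x
         for x in range(lo, hi + 1)}
    total = sum(w.values())
    return sum(v for x, v in w.items() if x >= x1) / total

# e.g. 40/50 responders under test vs 42/50 under reference,
# non-inferiority margin OR0 = 0.5 (all numbers illustrative)
p = fisher_noninferiority_pvalue(40, 50, 42, 50, 0.5)
```

A small p-value supports non-inferiority; rejecting at a suitably increased nominal level, as the abstract suggests, reduces the conservatism of the nonrandomized version.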

2.
Inverse Adaptive Cluster Sampling
Consider a population in which the variable of interest tends to be at or near zero for many of the population units but a subgroup exhibits values distinctly different from zero. Such a population can be described as rare in the sense that the proportion of elements having nonzero values is very small. Obtaining an estimate of a population parameter such as the mean or total that is nonzero is difficult under classical fixed-sample-size designs, since there is a reasonable probability that a fixed sample size will yield all zeroes. We consider inverse sampling designs that use stopping rules based on the number of rare units observed in the sample. We look at two stopping rules in detail and derive unbiased estimators of the population total. The estimators do not rely on knowing what proportion of the population exhibits the rare trait but instead use an estimated value. Hence, the estimators are similar to those developed for poststratification sampling designs. We also incorporate adaptive cluster sampling into the sampling design to allow for the case where the rare elements tend to cluster within the population in some manner. The formulas for the variances of the estimators do not allow direct analytic comparison of the efficiency of the various designs and stopping rules, so we provide the results of a small simulation study to obtain some insight into the differences among the stopping rules and sampling approaches. The results indicate that a modified stopping rule that incorporates an adaptive sampling component and utilizes an initial random sample of fixed size is the best in the sense of having the smallest variance.
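As a rough illustration of an inverse sampling design with a stopping rule based on the number of rare units observed, the following sketch uses a naive expansion estimator in place of the paper's unbiased estimators; all names and numbers are illustrative:

```python
import random

def inverse_sample_total(population, r_stop, seed=1):
    """Draw units without replacement until r_stop rare (nonzero) units
    have been observed, then estimate the population total from the
    sampled fraction (naive expansion estimator, for illustration only)."""
    rng = random.Random(seed)
    order = list(range(len(population)))
    rng.shuffle(order)
    rare_seen, sample = 0, []
    for idx in order:
        sample.append(population[idx])
        if population[idx] != 0:
            rare_seen += 1
        if rare_seen == r_stop:
            break
    # scale the sample total by N / n
    return len(population) * sum(sample) / len(sample)

# rare population: 5 of 100 units are nonzero (true total = 35)
est = inverse_sample_total([0] * 95 + [7] * 5, r_stop=2)
```

Unlike a fixed sample size, the stopping rule guarantees the sample contains rare units, so the estimate is never zero for a population with rare nonzero values.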

3.
Standard errors for attributable risk for simple and complex sample designs
Graubard BI, Fears TR. Biometrics 2005;61(3):847-855
Adjusted attributable risk (AR) is the proportion of diseased individuals in a population that is due to an exposure. We consider estimates of adjusted AR based on odds ratios from logistic regression to adjust for confounding. Influence function methods used in survey sampling are applied to obtain simple and easily programmable expressions for estimating the variance of AR. These variance estimators can be applied to data from case-control, cross-sectional, and cohort studies with or without frequency or individual matching and for sample designs with subject samples that range from simple random samples to (sample) weighted multistage stratified cluster samples like those used in national household surveys. The variance estimation of AR is illustrated with: (i) a weighted stratified multistage clustered cross-sectional study of childhood asthma from the Third National Health and Nutrition Examination Survey (NHANES III), and (ii) a frequency-matched case-control study of melanoma skin cancer.
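For background on the quantity being estimated, the classical unadjusted population attributable risk (Levin's formula) can be sketched as follows; this is illustrative only and does not reproduce the paper's influence-function variance estimators:

```python
def levin_attributable_risk(p_exposed, relative_risk):
    """Levin's population attributable risk for a single binary exposure:
    the excess disease fraction attributable to the exposure."""
    excess = p_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# 30% of the population exposed, relative risk 2.0 (made-up numbers)
ar = levin_attributable_risk(0.3, 2.0)
```

With no excess risk (RR = 1) the attributable risk is zero, as it should be.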

4.
We consider the problem of testing for independence against the consistent superiority of one treatment over another when the response variable is binary and is compared across two treatments in each of several strata. Specifically, we consider the randomized clinical trial setting. A number of issues arise in this context. First, should tables be combined if there are small or zero margins? Second, should one assume a common odds ratio across strata? Third, if the odds ratios differ across strata, then how does the standard test (based on a common odds ratio) perform? Fourth, are there other analyses that are more appropriate for handling a situation in which the odds ratios may differ across strata? In addressing these issues we find that the frequently used Cochran–Mantel–Haenszel test may have a poor power profile, despite being optimal when the odds ratios are common. We develop novel tests that are analogous to the Smirnov, modified Smirnov, convex hull, and adaptive tests that have been proposed for ordered categorical data.
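The standard test referred to above is the Cochran–Mantel–Haenszel statistic; a minimal sketch without continuity correction (table values are made up):

```python
def cmh_statistic(tables):
    """Cochran-Mantel-Haenszel chi-square (1 df, no continuity correction)
    for a list of 2x2 tables [[a, b], [c, d]], one per stratum."""
    num = var = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        row1, col1 = a + b, a + c
        num += a - row1 * col1 / n                       # observed - expected
        var += row1 * (n - row1) * col1 * (n - col1) / (n ** 2 * (n - 1))
    return num ** 2 / var

stat = cmh_statistic([[[10, 10], [5, 15]]])
```

Adding an identical stratum doubles both the numerator contribution and the variance, doubling the statistic, which is how the test accumulates evidence across strata when the odds ratio is common.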

5.
Inverse sampling is considered to be a more appropriate sampling scheme than the usual binomial sampling scheme when subjects arrive sequentially, when the underlying response of interest is acute, and when maximum likelihood estimators of some epidemiologic indices are undefined. In this article, we study various statistics for testing non-unity rate ratios in case-control studies under inverse sampling. These include the Wald, unconditional score, likelihood ratio, and conditional score statistics. Three methods (the asymptotic, conditional exact, and mid-P methods) are adopted for P-value calculation. We evaluate the performance of different combinations of test statistics and P-value calculation methods in terms of their empirical sizes and powers via Monte Carlo simulation. In general, the asymptotic score and conditional score tests are preferable because their actual type I error rates are well controlled around the pre-chosen nominal level and their powers are comparatively the largest. The exact version of the Wald test is recommended if one wants to control the actual type I error rate at or below the pre-chosen nominal level. If larger power is expected and fluctuation of sizes around the pre-chosen nominal level is allowed, then the mid-P version of the Wald test is a desirable alternative. We illustrate the methodologies with a real example from a heart disease study.

6.
In attempting to improve the efficiency of McNemar's test statistic, we develop two test procedures that account for the information on both the discordant and concordant pairs for testing equality between two comparison groups in dichotomous data with matched pairs. Furthermore, we derive a test procedure based on one of the most commonly used interval estimators for the odds ratio. We compare these procedures with those using McNemar's test, McNemar's test with the continuity correction, and the exact test with respect to type I error and power in a variety of situations. We note that the test procedures using McNemar's test with the continuity correction and the exact test can be quite conservative and hence lose much efficiency, while the test procedure using McNemar's test can actually perform well even when the expected number of discordant pairs is small. We also find that the two test procedures, which incorporate the information on all matched pairs into hypothesis testing, may slightly improve the power of McNemar's test without essentially losing the precision of type I error. On the other hand, the test procedure derived from an interval estimator of the odds ratio with use of the logarithmic transformation may have type I error much larger than the nominal α-level when the expected number of discordant pairs is not large and is therefore not recommended for general use.
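For reference, McNemar's statistic and its continuity-corrected version depend only on the two discordant counts b and c; a sketch with a normal-based chi-square(1) p-value (the exact binomial test is not reproduced, and the counts are illustrative):

```python
from math import erf, sqrt

def mcnemar(b, c, corrected=False):
    """McNemar chi-square from the discordant counts b and c (b + c > 0);
    returns (statistic, two-sided p-value via the chi-square(1) tail)."""
    num = (abs(b - c) - 1 if corrected else abs(b - c)) ** 2
    stat = num / (b + c)
    # chi-square(1) survival function: P(Z^2 > stat) = 2 * (1 - Phi(sqrt(stat)))
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(sqrt(stat) / sqrt(2.0))))
    return stat, p

stat, p = mcnemar(20, 10)
```

Note how the concordant pairs drop out entirely, which is exactly the information the abstract's proposed procedures try to recover.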

7.
Organochlorine pollutants are potentially useful for identifying discrete populations of marine mammals that overlap in geographic distribution. However, many factors unrelated to geographical distribution may affect the chemical burden of individual animals or of entire population components even within a homogeneously distributed population. These factors include, among others, nutritional state, sex, age, trophic level, distance of habitat from mainland and pollution source, excretion, metabolism, and tissue composition. Sample storage and analytical methodology may also be an important source of variation. These, and any other factors, must be identified and their effect ascertained before attempting any comparison between populations. This paper critically examines the nature and magnitude of the effects of these factors on organochlorine tissue loads in marine mammals. Pollutant concentrations can be strongly biased if carefully designed sampling regimes are not followed, but they are affected only moderately by sample treatment after collection. Conversely, ratios between concentrations of compounds, such as the DDE/tDDT or the tDDT/PCB ratios, seem less dependent on sampling regime but more affected by storage, analytical procedures, and ecological variations such as distance from pollutant source or trophic level. Taking these effects into account, advice is provided about sampling and strategies for selection of variables that will improve the reliability of the comparisons between populations.

8.
Bootstrap confidence intervals for adaptive cluster sampling
Consider a collection of spatially clustered objects where the clusters are geographically rare. Of interest is estimation of the total number of objects on the site from a sample of plots of equal size. Under these spatial conditions, adaptive cluster sampling of plots is generally useful in improving efficiency in estimation over simple random sampling without replacement (SRSWOR). In adaptive cluster sampling, when a sampled plot meets some predefined condition, neighboring plots are added to the sample. When populations are rare and clustered, the usual unbiased estimators based on small samples are often highly skewed and discrete in distribution. Thus, confidence intervals based on asymptotic normal theory may not be appropriate. We investigated several nonparametric bootstrap methods for constructing confidence intervals under adaptive cluster sampling. To perform bootstrapping, we transformed the initial sample in order to include the information from the adaptive portion of the sample yet maintain a fixed sample size. In general, coverages of bootstrap percentile methods were closer to nominal coverage than the normal approximation.
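A generic nonparametric bootstrap percentile interval can be sketched as below; the paper's version additionally transforms the adaptive sample to keep a fixed sample size, which is not reproduced here, and the data are made up:

```python
import random

def bootstrap_percentile_ci(data, stat, n_boot=2000, alpha=0.05, seed=7):
    """Nonparametric bootstrap percentile CI for a statistic `stat`:
    resample with replacement, compute the statistic on each resample,
    and take the alpha/2 and 1 - alpha/2 empirical quantiles."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(data) for _ in data])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def mean(xs):
    return sum(xs) / len(xs)

# a rare, clustered-looking sample: mostly zeros with a few large values
lo, hi = bootstrap_percentile_ci([0] * 20 + [10] * 5, mean)
```

Because the interval is read off the empirical distribution of replicates, it can be asymmetric, which suits the skewed, discrete estimators described in the abstract better than a normal approximation.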

9.
Prevalence of disease and related factors among retired Air Force cadres in the Xi'an area
Objective: To survey the prevalence of chronic diseases and quality-of-life status among elderly Air Force personnel in the Xi'an area and to analyze the associated factors. Methods: Cluster sampling was used to select 310 elderly Air Force personnel in the Xi'an area, who received physical examinations and completed a comprehensive quality-of-life questionnaire. Results: The prevalence of chronic disease in the study population was 100%, and 91.29% had two or more diseases. The ten most prevalent chronic diseases, in descending order, were hypertension, coronary heart disease, hyperlipidemia, senile cataract, cerebrovascular disease, prostate disease, diabetes, chronic bronchitis, chronic gastrointestinal disease, and gallbladder and biliary tract disease. The odds ratios for cerebrovascular disease were 2.16 for abnormal diastolic blood pressure and 3.95 for abnormal white blood cell counts (P<0.05); the odds ratio for hyperlipidemia was 4.74 for abnormal triglycerides (P<0.001); the odds ratios for coronary heart disease were 1.85 and 1.64 for fair-to-poor self-rated health and mental health, respectively (P<0.001); the odds ratio for senile cataract was 1.48 for fair-to-poor physical fitness (P<0.05); and the odds ratios for cerebrovascular disease were 3.40, 2.22, and 2.92 for fair-to-poor daily living function, social interaction, and activity level, respectively (P<0.001). Conclusion: Targeted, focused early intervention on risk factors and closer collaboration between the preventive and clinical care systems are needed to control chronic disease in the elderly effectively.

10.
Trapping is a common sampling technique used to estimate fundamental population metrics of animal species such as abundance, survival and distribution. However, capture success for any trapping method can be heavily influenced by individuals' behavioural plasticity, which in turn affects the accuracy of any population estimates derived from the data. Funnel trapping is one of the most common methods for sampling aquatic vertebrates, although, apart from fish studies, almost nothing is known about the effects of behavioural plasticity on trapping success. We used a full factorial experiment to investigate the effects that two common environmental parameters (predator presence and vegetation density) have on the trapping success of tadpoles. Using odds ratios based on fitted model means, we estimated that the odds of tadpoles being captured in traps were 4.3 times higher when predators were absent compared to present and 2.1 times higher when vegetation density was high compared to low. The odds of tadpoles being detected in traps were also 2.9 times higher in predator-free environments. These results indicate that common environmental factors can trigger behavioural plasticity in tadpoles that biases trapping success. We issue a warning to researchers and surveyors that trapping biases may be commonplace when conducting surveys such as these, and urge caution in interpreting data without consideration of important environmental factors present in the study system. Left unconsidered, trapping biases in capture success have the potential to lead to incorrect interpretations of data sets, and misdirection of limited resources for managing species.
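The odds ratios above come from fitted model means; for raw 2×2 counts, the sample odds ratio and a Wald interval can be sketched as follows (all counts are made up, not from the study):

```python
from math import exp, sqrt

def odds_ratio(a, b, c, d):
    """Sample odds ratio for a 2x2 table [[a, b], [c, d]] (all cells > 0)
    with a Wald 95% CI computed on the log-odds scale."""
    or_hat = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_hat, (or_hat * exp(-1.96 * se), or_hat * exp(1.96 * se))

# e.g. captured/not-captured counts with predators absent vs present
or_hat, ci = odds_ratio(43, 10, 10, 43)
```

The interval is built on the log scale because the sampling distribution of the log odds ratio is closer to normal than that of the odds ratio itself.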

11.
Gillet EM, Gregorius HR. Biometrics 2000;56(3):801-807
In forest trees, classical techniques of studying modes of inheritance are usually not feasible due to the difficulty of performing controlled crosses. The limited information on inheritance extractable from readily available data, such as the large progenies collectable from single seed trees, must be compensated by the design of appropriately parameterized models. For this purpose, a system analytic approach is used to develop a new inferential framework for testing a single-locus codominant mode of inheritance of genetic traits using the inferred genotypes within progenies of single trees of inferred heterozygous genotype. Model assumptions are random gametic fusion between the local gamete pools and absence of postzygotic selection; ovule segregation distortion is allowed. The method yields estimates of the allele frequencies in both local gamete pools. Since tests of modes of inheritance must be tests of models rather than of parameters, the utility of the classical statistical testing procedures is limited, particularly concerning the qualification of a sampling method to attain a preassigned level of precision. Consistent application of this principle makes it possible to design qualified sampling methods prior to the actual experiment as well as to specify qualification levels for tests of completed experiments.

12.
In DNA library screening, blood testing, and monoclonal antibody generation, significant savings in the number of assays can be realized by employing group sampling. Practical considerations often limit the number of stages of group testing that can be performed. We address situations in which only two stages of testing are used. We define efficiency to be the expected number of positives isolated per assay performed and assume gold-standard tests with unit sensitivity and specificity. Although practical tests are never golden, polymerase chain reaction (PCR) methods provide procedures for screening recombinant libraries that are strongly selective yet retain high sensitivity even when samples are pooled. Also, results for gold-standard tests serve as bounds on the performance of practical testing procedures. First we derive formulas for the efficiency of certain extensions of the popular rows-and-columns technique. Then we derive an upper bound on the efficiency of any two-stage strategy that lies well below the classical upper bound for situations with no constraint on the number of stages. This establishes that a restriction to only two stages necessitates performing many more assays than efficient multistage procedures need. Next, we specialize the bound to cases in which each item belonging only to pools that tested positive in stage 1 must be tested individually in stage 2. The specialized bound for such positive procedures is tight because we show that an appropriate multidimensional extension of the rows-and-columns technique achieves it. We also show that two-stage positive procedures in which the stage-1 groups are selected at random perform suboptimally, thereby establishing that efficient tests must be structured carefully.
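For intuition about two-stage efficiency, the classical Dorfman positive procedure (random pools of size k, individual retests of every item in a positive pool) has a closed-form expected cost under the gold-standard-test assumption; the paper's rows-and-columns extensions and bounds refine this. A sketch, with illustrative numbers:

```python
def dorfman_tests_per_item(p, k):
    """Expected assays per item for two-stage (Dorfman) pooling with pool
    size k and prevalence p: one pooled test per k items, plus k individual
    retests whenever the pool is positive (probability 1 - (1-p)^k)."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def positives_per_assay(p, k):
    """Efficiency in the abstract's sense: expected positives isolated
    per assay performed (individual testing achieves p)."""
    return p / dorfman_tests_per_item(p, k)

tpi = dorfman_tests_per_item(0.01, 10)  # ~0.196 tests per item at p = 1%
```

At low prevalence the pooled scheme needs far fewer than one assay per item, which is the source of the savings the abstract quantifies and bounds.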

13.
McNemar's test is used to assess the difference between two different procedures (treatments) using independent matched-pair data. For matched-pair data collected in clusters, the tests proposed by Durkalski et al. and Obuchowski are popular and commonly used in practice, since these tests do not require distributional assumptions or assumptions on the structure of the within-cluster correlation of the data. Motivated by these tests, this note proposes a modified Obuchowski test and illustrates comparisons of the proposed test with the extant methods. An extensive Monte Carlo simulation study suggests that the proposed test performs well with respect to the nominal size and has higher power; Obuchowski's test is most conservative, and the performance of Durkalski's test lies between that of the modified Obuchowski test and the original Obuchowski test. These results form the basis for our recommendation that (i) for equal cluster sizes, the modified Obuchowski test is always preferred; (ii) for varying cluster sizes, Durkalski's test can be used for a small number of clusters (e.g. K < 50), whereas for a large number of clusters (e.g. K ≥ 50) the modified Obuchowski test is preferred. Finally, to illustrate practical application of the competing tests, two real collections of clustered matched-pair data are analyzed.

14.
The effects of the anesthetic dibucaine on the relaxation kinetics of the gel-liquid crystalline transition of dipalmitoylphosphatidylcholine (DC16PC) multilamellar vesicles have been investigated using volume-perturbation calorimetry. The temperature and pressure responses to a periodic volume perturbation were measured in real time. Data collected in the time domain were subsequently converted into and analyzed in the frequency domain using Fourier series representations of the perturbation and response functions. The Laplace transform of the classical Kolmogorov-Avrami kinetic relation was employed to describe the relaxation dynamics in the frequency domain. The relaxation time of anesthetic-lipid mixtures, as a function of the fractional degree of melting, appears to be qualitatively similar to that of pure lipid systems, with a pronounced maximum, tau max, observed at a temperature corresponding to greater than 75% melting. The tau max decreases by a factor of approximately 2 as the nominal anesthetic/lipid mole ratio increases from 0 to 0.013 and exhibits no further change as the nominal anesthetic/lipid mole ratio is increased. However, the fractional dimensionality of the relaxation process decreases monotonically from slightly less than two to approximately one as the anesthetic/lipid mole ratio increases from 0 to 0.027. At higher ratios, the dimensionality appears to be less than one. These results are interpreted in terms of the classical kinetic theory and related to those obtained from Monte Carlo simulations. Specifically, low concentrations of dibucaine appear to reduce the average cluster size and cause the fluctuating lipid clusters to become more ramified. At the highest concentration of dibucaine, where n < 1, the system must be kinetically heterogeneous.

15.
McMahan CS, Tebbs JM, Bilder CR. Biometrics 2012;68(3):793-804
Array-based group-testing algorithms for case identification are widely used in infectious disease testing, drug discovery, and genetics. In this article, we generalize previous statistical work in array testing to account for heterogeneity among individuals being tested. We first derive closed-form expressions for the expected number of tests (efficiency) and misclassification probabilities (sensitivity, specificity, predictive values) for two-dimensional array testing in a heterogeneous population. We then propose two "informative" array construction techniques which exploit population heterogeneity in ways that can substantially improve testing efficiency when compared to classical approaches that regard the population as homogeneous. Furthermore, a useful byproduct of our methodology is that misclassification probabilities can be estimated on a per-individual basis. We illustrate our new procedures using chlamydia and gonorrhea testing data collected in Nebraska as part of the Infertility Prevention Project.

16.
The problem of categorical data analysis in survey sampling arises because sample elements obtained under an imposed sampling design are not independent. In this article, the performance of modified χ2 statistics for testing independence of attributes is evaluated for small sample sizes with the help of log-linear models, with respect to the achieved level of significance at a fixed nominal level of 5%, using simulation. It is observed that the performance of these test statistics depends on the mean and coefficient of variation of the eigenvalues of the design-effect matrix. The first-order corrected statistic captures the effect of the sampling design to a great extent, but the performance of the second-order corrected statistic is much better. Further, these modified χ2 test statistics were applied to real survey data and their performance was evaluated with respect to the achieved level of significance.
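The first- and second-order corrections referred to above are, in the Rao–Scott style, simple functions of the design-effect eigenvalues; a sketch (the adjusted degrees of freedom for the second-order test are omitted, and the inputs are illustrative):

```python
def rao_scott_corrections(chi2, deff_eigenvalues):
    """First- and second-order Rao-Scott-style corrected chi-square
    statistics from the eigenvalues of the design-effect matrix:
    divide by the mean eigenvalue, then by (1 + CV^2)."""
    k = len(deff_eigenvalues)
    d_bar = sum(deff_eigenvalues) / k
    first = chi2 / d_bar
    cv2 = sum((d - d_bar) ** 2 for d in deff_eigenvalues) / (k * d_bar ** 2)
    second = first / (1.0 + cv2)
    return first, second

first, second = rao_scott_corrections(10.0, [2.0, 2.0])
```

When the eigenvalues are equal the two corrections coincide, which matches the observation that performance depends on both the mean and the coefficient of variation of the eigenvalues.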

17.
A method is described for estimating the number of insects entering each development stage from data obtained by regular sampling through one generation of an insect population. The method consists of two procedures: provisional estimates are first calculated on the assumption that all stages share a common mortality within a sampling interval; these estimates are then corrected under the assumption that mortality differs between stages but is constant within a stage. Testing its validity on two laboratory populations of the common cabbage butterfly, Pieris rapae crucivora, demonstrated the utility of the method.

18.
Zhang M, Tsiatis AA, Davidian M. Biometrics 2008;64(3):707-715
The primary goal of a randomized clinical trial is to make comparisons among two or more treatments. For example, in a two-arm trial with continuous response, the focus may be on the difference in treatment means; with more than two treatments, the comparison may be based on pairwise differences. With binary outcomes, pairwise odds ratios or log odds ratios may be used. In general, comparisons may be based on meaningful parameters in a relevant statistical model. Standard analyses for estimation and testing in this context typically are based on the data collected on response and treatment assignment only. In many trials, auxiliary baseline covariate information may also be available, and it is of interest to exploit these data to improve the efficiency of inferences. Taking a semiparametric theory perspective, we propose a broadly applicable approach to adjustment for auxiliary covariates to achieve more efficient estimators and tests for treatment parameters in the analysis of randomized clinical trials. Simulations and applications demonstrate the performance of the methods.

19.

Background

The group testing method has been proposed for the detection and estimation of genetically modified plants (adventitious presence of unwanted transgenic plants, AP). For binary response variables (presence or absence), group testing is efficient when the prevalence is low, so that estimation, detection, and sample size methods have been developed under the binomial model. However, when the event is rare (low prevalence <0.1), and testing occurs sequentially, inverse (negative) binomial pooled sampling may be preferred.

Methodology/Principal Findings

This research proposes three sample size procedures (two computational and one analytic) for estimating prevalence using group testing under inverse (negative) binomial sampling. These methods provide the required number of positive pools, given a pool size (k), for estimating the proportion of AP plants using the Dorfman model and inverse (negative) binomial sampling. We give real and simulated examples to show how to apply these methods and the proposed sample-size formula. The Monte Carlo method was used to study the coverage and level of assurance achieved by the proposed sample sizes. An R program to create other scenarios is given in Appendix S2.

Conclusions

The three methods ensure precision in the estimated proportion of AP because they guarantee that the width (W) of the confidence interval (CI) will be equal to, or narrower than, the desired width, with a specified probability. With the Monte Carlo study we found that the computational Wald procedure (method 2) produces the most precise sample size (with coverage and assurance levels very close to nominal values) and that the sample size based on the Clopper-Pearson CI (method 1) is conservative (overestimates the sample size); the analytic Wald sample size method we developed (method 3) sometimes underestimated the optimum number of pools.
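The Dorfman-model point estimate behind such sample-size calculations inverts the pool-level probability of a positive test; a sketch under the perfect-assay assumption (the negative-binomial stopping rule and the CI-width machinery are not reproduced, and the counts are illustrative):

```python
def prevalence_from_pools(r_positive, t_pools, k):
    """Estimate individual prevalence p from pooled tests of size k by
    inverting P(pool positive) = 1 - (1 - p)^k, using pi_hat = r/t."""
    pi_hat = r_positive / t_pools
    return 1.0 - (1.0 - pi_hat) ** (1.0 / k)

# 18,293 positive pools out of 100,000 pools of size 10
p_hat = prevalence_from_pools(18293, 100000, 10)
```

If the observed pool-positive fraction equals its expected value, the inversion recovers the true individual prevalence.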

20.
A pest management decision to initiate a control treatment depends upon an accurate estimate of mean pest density. Presence-absence sampling plans significantly reduce sampling efforts to make treatment decisions by using the proportion of infested leaves to estimate mean pest density in lieu of counting individual pests. The use of sequential hypothesis testing procedures can significantly reduce the number of samples required to make a treatment decision. Here we construct a mean-proportion relationship for Oligonychus perseae Tuttle, Baker, and Abatiello, a mite pest of avocados, from empirical data, and develop a sequential presence-absence sampling plan using Bartlett's sequential test procedure. Bartlett's test can accommodate pest population models that contain nuisance parameters that are not of primary interest. However, it requires that population measurements be independent, which may not be realistic because of spatial correlation of pest densities across trees within an orchard. We propose to mitigate the effect of spatial correlation in a sequential sampling procedure by using a tree-selection rule (i.e., maximin) that sequentially selects each newly sampled tree to be maximally spaced from all other previously sampled trees. Our proposed presence-absence sampling methodology applies Bartlett's test to a hypothesis test developed using an empirical mean-proportion relationship coupled with a spatial, statistical model of pest populations, with spatial correlation mitigated via the aforementioned tree-selection rule. We demonstrate the effectiveness of our proposed methodology over a range of parameter estimates appropriate for densities of O. perseae that would be observed in avocado orchards in California.
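The maximin tree-selection rule described above can be sketched in a few lines, here with squared Euclidean distance on illustrative tree coordinates:

```python
def maximin_next(sampled, candidates):
    """Maximin spacing rule: choose the candidate tree whose minimum
    (squared Euclidean) distance to all previously sampled trees is
    largest, spreading samples across the orchard."""
    def min_dist(pt):
        return min((pt[0] - s[0]) ** 2 + (pt[1] - s[1]) ** 2
                   for s in sampled)
    return max(candidates, key=min_dist)

# having sampled the tree at (0, 0), the most isolated candidate wins
nxt = maximin_next([(0, 0)], [(1, 0), (5, 5), (2, 2)])
```

Each newly selected tree is maximally spaced from all earlier selections, which weakens the spatial correlation between successive measurements fed to Bartlett's sequential test.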
