Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
A comparison of two modified Bonferroni procedures   (cited by 2: 0 self-citations, 2 by others)
HOMMEL, GERHARD. Biometrika, 1989, 76(3): 624-625

2.
An improved Bonferroni procedure for multiple tests of significance   (cited by 24: 0 self-citations, 24 by others)
SIMES, R. J. Biometrika, 1986, 73(3): 751-754
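Simes' procedure is simple enough to state in a few lines: order the p-values and reject the global null hypothesis if p_(i) ≤ iα/n for any i. A minimal sketch (the function names and example p-values are illustrative, not from the paper):

```python
def simes_test(pvalues, alpha=0.05):
    """Simes (1986) global test: reject the intersection null
    if any ordered p-value satisfies p_(i) <= i * alpha / n."""
    n = len(pvalues)
    return any(p <= (i + 1) * alpha / n
               for i, p in enumerate(sorted(pvalues)))

def bonferroni_test(pvalues, alpha=0.05):
    """Classical Bonferroni global test: reject if min p <= alpha / n."""
    return min(pvalues) <= alpha / len(pvalues)

pvals = [0.03, 0.031, 0.20]
simes_rejects = simes_test(pvals)            # 0.031 <= 2 * 0.05 / 3
bonferroni_rejects = bonferroni_test(pvals)  # 0.03 > 0.05 / 3, so no
```

On this example Simes rejects while Bonferroni does not, which is the sense in which the procedure is "improved": every Bonferroni rejection is also a Simes rejection, but not conversely.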

3.
Environmental management decisions are prone to expensive mistakes if they are triggered by hypothesis tests using the conventional Type I error rate (α) of 0.05. We derive optimal α‐levels for decision‐making by minimizing a cost function that specifies the overall cost of monitoring and management. For an economically valuable koala population, this analysis shows that a decision based on α = 0.05 carries an expected cost over $5 million greater than the optimal decision. For a species of such value, there is never any benefit in guarding against the spurious detection of declines, so management should proceed directly to recovery action. This result holds in most circumstances where the species' value substantially exceeds its recovery costs. For species of lower economic value, we show that the conventional α‐level of 0.05 rarely approximates the optimal decision‐making threshold. This analysis supports calls for reversing the statistical 'burden of proof' in environmental decision‐making when the cost of Type II errors is relatively high.
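The α-optimization idea can be illustrated with a toy cost function: for a one-sided normal test, total expected cost is P(Type I) × (cost of a false alarm) plus P(Type II) × (cost of a missed decline), minimized over the critical value. This is a hedged sketch, not the authors' model; the costs, effect size, and sample size are hypothetical:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def expected_cost(z, cost_false_alarm, cost_missed_decline,
                  effect=0.5, n=30):
    """Expected cost at critical value z of a one-sided normal test:
    alpha * (cost of acting on a spurious decline) plus
    beta * (cost of failing to detect a real decline)."""
    alpha = 1.0 - norm_cdf(z)                  # Type I error rate
    beta = norm_cdf(z - effect * sqrt(n))      # Type II error rate
    return alpha * cost_false_alarm + beta * cost_missed_decline

# With a missed decline 100x as costly as a false alarm, the
# cost-minimizing alpha lands far above the conventional 0.05.
zs = [i / 100.0 for i in range(-300, 301)]
best_z = min(zs, key=lambda z: expected_cost(z, 1.0, 100.0))
optimal_alpha = 1.0 - norm_cdf(best_z)
```

With these invented costs the optimal α comes out near 0.6, echoing the abstract's point that when Type II errors dominate the cost, guarding against Type I errors at α = 0.05 is far from optimal.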

4.
To optimize resources, randomized clinical trials with multiple arms can be an attractive option for simultaneously testing various treatment regimens in pharmaceutical drug development. The motivation for this work was the successful conduct and positive final outcome of a three‐arm randomized clinical trial primarily assessing whether obinutuzumab plus chlorambucil in patients with chronic lymphocytic lymphoma and coexisting conditions is superior to chlorambucil alone, based on a time‐to‐event endpoint. The inference strategy of this trial was based on a closed testing procedure. We compare this strategy to three potential alternatives for running a three‐arm clinical trial with a time‐to‐event endpoint. The primary goal is to quantify the differences between these strategies in terms of the time until the first analysis (and thus potential approval of a new drug), the number of required events, and power. Operational aspects of implementing the various strategies are discussed. In conclusion, using a closed testing procedure results in the shortest time to the first analysis with a minimal loss in power. Therefore, closed testing procedures should be part of the statistician's standard clinical trials toolbox when planning multiarm clinical trials.
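The closed testing principle behind the trial's inference strategy can be sketched generically: an elementary hypothesis is rejected only if every intersection hypothesis containing it is rejected by a valid local test. A sketch using Bonferroni local tests (which reduces to the Holm procedure; the trial's actual local tests are not specified here, so this is purely illustrative):

```python
from itertools import combinations

def closed_test(pvalues, alpha=0.05):
    """Closed testing principle with Bonferroni local tests: elementary
    hypothesis H_i is rejected iff every intersection hypothesis whose
    index set contains i passes its local test min p <= alpha/|subset|.
    Controls the familywise error rate at level alpha."""
    m = len(pvalues)

    def local_reject(subset):
        return min(pvalues[i] for i in subset) <= alpha / len(subset)

    decisions = []
    for i in range(m):
        subsets = [s for r in range(1, m + 1)
                   for s in combinations(range(m), r) if i in s]
        decisions.append(all(local_reject(s) for s in subsets))
    return decisions

# Three hypothetical arm-vs-control p-values: only the first
# elementary hypothesis survives all of its intersection tests.
decisions = closed_test([0.01, 0.04, 0.30])
```

For three arms there are only seven intersection hypotheses, so the exhaustive enumeration above is cheap; the familywise error control comes from the closure, not from the particular local test chosen.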

5.
Three-dimensional electron cryomicroscopy of randomly oriented single particles is a method that is suitable for the determination of three-dimensional structures of macromolecular complexes at molecular resolution. However, the electron-microscopical projection images are modulated by a contrast transfer function (CTF) that prevents the calculation of three-dimensional reconstructions of biological complexes at high resolution from uncorrected images. We describe here an automated method for the accurate determination and correction of the CTF parameters defocus, twofold astigmatism and amplitude-contrast proportion from single-particle images. At the same time, the method allows the frequency-dependent signal decrease (B factor) and the non-convoluted background signal to be estimated. The method involves the classification of the power spectra of single-particle images into groups with similar CTF parameters; this is done by multivariate statistical analysis (MSA) and hierarchically ascending classification (HAC). Averaging over several power spectra generates class averages with enhanced signal-to-noise ratios. The correct CTF parameters can be deduced from these class averages by applying an iterative correlation procedure with theoretical CTF functions; they are then used to correct the raw images. Furthermore, the method enables the tilt axis of the sample holder to be determined and allows the elimination of individual poor-quality images that show high drift or charging effects.
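The SNR-boosting classification step — grouping power spectra with similar CTF parameters and averaging within groups — can be sketched with a generic k-means standing in for the MSA/HAC pipeline the authors use. All data below are random placeholders, so the sketch shows only the mechanics of spectrum classification and class averaging:

```python
import numpy as np

rng = np.random.default_rng(0)

def power_spectra(images):
    """Centered power spectra of a stack of single-particle images."""
    return np.abs(np.fft.fftshift(np.fft.fft2(images), axes=(-2, -1))) ** 2

def kmeans(data, k, iters=30):
    """Tiny k-means, a stand-in for MSA + hierarchically ascending
    classification of the power spectra."""
    centers = data[rng.choice(len(data), k, replace=False)]
    labels = np.zeros(len(data), dtype=int)
    for _ in range(iters):
        dists = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = data[labels == j].mean(0)
    return labels

# Class averages of power spectra have a higher signal-to-noise ratio
# than individual spectra, which makes the CTF oscillations easier to
# fit against theoretical CTF curves.
images = rng.normal(size=(40, 32, 32))      # placeholder particle images
spectra = power_spectra(images).reshape(40, -1)
labels = kmeans(spectra, k=4)
class_averages = [spectra[labels == j].mean(0)
                  for j in range(4) if (labels == j).any()]
```

The subsequent steps in the paper (iterative correlation with theoretical CTF functions, B-factor and background estimation) operate on these class averages rather than on the noisy single-image spectra.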

6.
7.
Laser speckle contrast imaging (LSCI) is used in clinical research to dynamically image blood flow. One drawback is its susceptibility to movement artifacts. We demonstrate a new, simple method to correct motion artifacts in LSCI signals measured in awake mice with cranial windows during sensory stimulation. The principle is to identify a region in the image in which speckle contrast (SC) is independent of blood flow and only varies with animal movement, then to regress out this signal from the data. We show that (1) the regressed signal correlates well with mouse head movement, (2) the corrected signal correlates better with independently measured blood volume and (3) it has a (59 ± 6)% higher signal-to-noise ratio. Compared to three alternative correction methods, ours has the best performance. Regressing out flow-independent global variations in SC is a simple and accessible way to improve the quality of LSCI measurements.
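The correction principle — regress the flow-independent reference signal out of each measured time course — amounts to ordinary least squares followed by taking residuals. A sketch on simulated traces (the signal shapes and amplitudes are invented for illustration):

```python
import numpy as np

def regress_out(signal, reference):
    """Remove the component of `signal` explained by `reference`
    (plus an intercept) via least squares; keep the residual and
    add the original mean back."""
    X = np.column_stack([np.ones_like(reference), reference])
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return signal - X @ beta + signal.mean()

# Simulated speckle-contrast trace: a slow flow-related component
# contaminated by a fast movement artifact that also appears in a
# flow-independent region of the image.
t = np.linspace(0.0, 10.0, 500)
motion = np.sin(7.0 * t)              # flow-independent reference region
flow = 0.2 * np.sin(0.5 * t)          # true hemodynamic signal
contaminated = flow + 0.8 * motion
corrected = regress_out(contaminated, motion)
```

After regression the corrected trace tracks the underlying flow signal, and its variance drops to roughly that of the flow component alone, mirroring the SNR gain reported in the abstract.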

8.
Null hypothesis significance testing (NHST) is the dominant statistical approach in biology, although it has many, frequently unappreciated, problems. Most importantly, NHST does not provide us with two crucial pieces of information: (1) the magnitude of an effect of interest, and (2) the precision of the estimate of the magnitude of that effect. All biologists should be ultimately interested in biological importance, which may be assessed using the magnitude of an effect, but not its statistical significance. Therefore, we advocate presentation of measures of the magnitude of effects (i.e. effect size statistics) and their confidence intervals (CIs) in all biological journals. Combined use of an effect size and its CIs enables one to assess the relationships within data more effectively than the use of p values, regardless of statistical significance. In addition, routine presentation of effect sizes will encourage researchers to view their results in the context of previous research and facilitate the incorporation of results into future meta-analysis, which has been increasingly used as the standard method of quantitative review in biology. In this article, we extensively discuss two dimensionless (and thus standardised) classes of effect size statistics: d statistics (standardised mean difference) and r statistics (correlation coefficient), because these can be calculated from almost all study designs and also because their calculations are essential for meta-analysis. However, our focus on these standardised effect size statistics does not mean unstandardised effect size statistics (e.g. mean difference and regression coefficient) are less important. We provide potential solutions for four main technical problems researchers may encounter when calculating effect size and CIs: (1) when covariates exist, (2) when bias in estimating effect size is possible, (3) when data have non-normal error structure and/or variances, and (4) when data are non-independent. 
Although interpretations of effect sizes are often difficult, we provide some pointers to help researchers. This paper serves both as a beginner's instruction manual and a stimulus for changing statistical practice for the better in the biological sciences.
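For the d statistics discussed above, a minimal calculation with an approximate CI looks as follows. The normal-approximation variance formula var(d) ≈ (n1+n2)/(n1·n2) + d²/(2(n1+n2)) is one common choice, and the data are illustrative:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d_ci(x, y, z=1.96):
    """Cohen's d (pooled-SD standardised mean difference) with a
    normal-approximation confidence interval based on
    var(d) ~ (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))."""
    n1, n2 = len(x), len(y)
    s_pooled = sqrt(((n1 - 1) * stdev(x) ** 2 + (n2 - 1) * stdev(y) ** 2)
                    / (n1 + n2 - 2))
    d = (mean(x) - mean(y)) / s_pooled
    se = sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# A large d whose CI spans zero is reported as an imprecise estimate
# of a possibly important effect, not collapsed to "non-significant".
treatment = [5.1, 6.0, 5.5, 6.2, 5.8]
control = [5.0, 5.4, 5.2, 5.9, 5.3]
d, (lo, hi) = cohens_d_ci(treatment, control)
```

Here d ≈ 0.93 but the interval is wide, which is exactly the information a bare p value would hide: the effect may be large, and the estimate is imprecise.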

9.
Modelling dietary data, and especially 24-hr dietary recall (24HDR) data, is a challenge. Ignoring the inherent measurement error (ME) leads to biased effect estimates when the association between an exposure and an outcome is investigated. We propose an adapted simulation extrapolation (SIMEX) algorithm for modelling dietary exposures. For this purpose, we exploit the ME model of the NCI method, where we assume normally distributed errors of the reported intake on the Box-Cox transformed scale and unbiased recalls on the original scale. Following the SIMEX algorithm, remeasurements of the observed data with additional ME are generated in order to estimate the association between the level of ME and the resulting effect estimate. Subsequently, this association is extrapolated to the case of zero ME to obtain the corrected estimate. We show that the proposed method fulfils the key property of the SIMEX approach, namely that the MSE of the generated data converges to zero as the ME variance converges to zero. Furthermore, the method is applied to real 24HDR data from the I.Family study to correct the effects of salt and alcohol intake on blood pressure. In a simulation study, the method is compared with the NCI method, resulting in effect estimates with either smaller MSE or smaller bias in certain situations. In addition, we found our method to be more informative and easier to implement. Therefore, we conclude that the proposed method is useful for promoting the dissemination of ME correction methods in nutritional epidemiology.
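The generic SIMEX recipe the method adapts — simulate extra error at levels λ, model the trend of the estimate in λ, extrapolate back to λ = −1 — can be sketched for a plain linear regression with additive normal ME. This illustrates classical SIMEX, not the authors' adapted algorithm, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)

def slope(w, y):
    """Least-squares slope of y on w."""
    return np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# True exposure x, outcome y = 2x + noise; observed w adds
# measurement error of known variance (the 24HDR-style setting).
n, sigma_me = 5000, 1.0
x = rng.normal(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 0.5, n)
w = x + rng.normal(0.0, sigma_me, n)

# SIMEX step 1: re-measure with extra error at levels lam,
# averaging over repeated simulations at each level.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
estimates = []
for lam in lams:
    if lam == 0.0:
        estimates.append(slope(w, y))   # naive, attenuated estimate
        continue
    reps = [slope(w + rng.normal(0.0, np.sqrt(lam) * sigma_me, n), y)
            for _ in range(100)]
    estimates.append(np.mean(reps))

# SIMEX step 2: fit the trend in lam and extrapolate to lam = -1,
# the hypothetical case of zero measurement error.
coefs = np.polyfit(lams, estimates, 2)
beta_simex = np.polyval(coefs, -1.0)
```

The naive slope is attenuated toward zero by roughly 1/(1 + σ²_ME); the quadratic extrapolation recovers much, though not all, of that bias, which is the usual trade-off of the SIMEX approximation.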

10.
Spectral quality control is an important step in the analysis of infrared spectral data; however, it is often neglected in the scientific literature. A frequently used quality test, originally developed for infrared spectra of bacteria, is provided by the OPUS software from Bruker Optik GmbH. In this study, the OPUS quality test is applied to a large number of spectra of bacteria, yeasts and moulds and to hyperspectral images of microorganisms. It is shown that using strict thresholds for the parameters of the OPUS quality test leads to discarding too many spectra. A strategy for optimizing the parameter thresholds of the OPUS quality test is provided, and a novel approach to spectral quality testing based on extended multiplicative signal correction (EMSC) is suggested. For all the data sets considered in our study, the EMSC quality test outperforms every variant of the OPUS quality test considered.

11.
Lui, K. J.; Kelly, C. Biometrics, 2000, 56(1): 309-315
Lipsitz et al. (1998, Biometrics 54, 148-160) discussed testing the homogeneity of the risk difference for a series of 2 × 2 tables. They proposed and evaluated several weighted test statistics, including the commonly used weighted least squares test statistic. Here we suggest various important improvements on these test statistics. First, we propose using the one-sided analogues of the test procedures proposed by Lipsitz et al., because we should only reject the null hypothesis of homogeneity when the variation of the estimated risk differences between centers is large. Second, we generalize their study by redesigning the simulations to include the situations considered by Lipsitz et al. (1998) as special cases. Third, we consider a logarithmic transformation of the weighted least squares test statistic to improve the normal approximation of its sampling distribution. On the basis of Monte Carlo simulations, we note that, as long as the mean treatment group size per table is moderate or large (≥16), this simple test statistic, in conjunction with the commonly used adjustment procedure for sparse data, can be useful when the number of 2 × 2 tables is small or moderate (≤32). In these situations, in fact, we find that our proposed method generally outperforms all the statistics considered by Lipsitz et al. Finally, we include a general guideline about which test statistic should be used in a variety of situations.
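The weighted least squares homogeneity statistic that these refinements build on is short: with inverse-variance weights, Q = Σ wᵢ(d̂ᵢ − d̄)² is referred to a χ² distribution with k − 1 degrees of freedom. A sketch (the table counts are illustrative; the paper's log transformation, one-sided testing, and sparse-data adjustment are not shown):

```python
def risk_diff_homogeneity(tables):
    """Weighted least squares homogeneity statistic for risk
    differences across 2x2 tables, each given as
    (events_trt, n_trt, events_ctl, n_ctl). Refer the result to a
    chi-square distribution with len(tables) - 1 df."""
    diffs, weights = [], []
    for a, n1, c, n2 in tables:
        p1, p2 = a / n1, c / n2
        var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
        diffs.append(p1 - p2)
        weights.append(1.0 / var)           # inverse-variance weight
    d_bar = sum(w * d for w, d in zip(weights, diffs)) / sum(weights)
    return sum(w * (d - d_bar) ** 2 for w, d in zip(weights, diffs))

# Four centers with similar risk differences: Q stays well below the
# 5% chi-square critical value with 3 df (7.815), so homogeneity is
# not rejected.
tables = [(12, 40, 6, 40), (15, 50, 8, 50),
          (9, 30, 4, 30), (20, 60, 11, 60)]
q = risk_diff_homogeneity(tables)
```

Note the division by the estimated variance fails when an observed proportion is exactly 0 or 1, which is why sparse-data adjustments of the kind mentioned in the abstract are needed in practice.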

12.
13.
Numerous previous studies have shown that one-zero sampling in direct observation systematically overestimates behavioral durations. A post hoc correction procedure has been proposed and demonstrated to be capable of removing most of the systematic errors found in one-zero duration estimates. However, the remaining errors for some individual observation sessions can still be quite substantial in practical terms. In this paper, the condition under which the post hoc correction procedure will produce results with negligible systematic errors in one-zero duration estimates is identified.

14.
15.
We propose a multiple comparison procedure to identify the minimum effective dose level by sequentially comparing each dose level with the zero-dose level in a dose-finding test. If the minimum effective dose level can be found at an early stage of the sequential test, the dose-finding procedure can be terminated after only a few group observations up to that dose level. The procedure is therefore attractive from an economic point of view when obtaining observations is costly. For the procedure, we present an integral formula to determine the critical values satisfying a predefined familywise Type I error rate. Furthermore, we show how to determine the sample size required to guarantee the power of the test. Finally, we compare the power of the test and the required sample size for various configurations of the population means in simulation studies and apply our sequential procedure to the dose-response test in a case study.

16.
In vivo imaging of tissue/vasculature oxygen saturation levels is of prime interest in many clinical applications. To this end, the feasibility of combining two distinct and complementary imaging modalities is investigated: optoacoustics (OA) and near‐infrared optical tomography (NIROT), both operating noninvasively in reflection mode. Experiments were conducted on two optically heterogeneous phantoms mimicking tissue before and after the occurrence of a perturbation. OA imaging was used to resolve submillimetric vessel‐like optical absorbers at depths up to 25 mm, but with a spectral distortion in the OA signals. NIROT measurements were utilized to image perturbations in the background and to estimate the light fluence inside the phantoms at the wavelength pair (760 nm, 830 nm). This enabled the spectral correction of the vessel‐like absorbers' OA signals: the error in the ratio of the absorption coefficient at 830 nm to that at 760 nm was reduced from 60%‐150% to 10%‐20%. The results suggest that oxygen saturation (SO2) levels in arteries can be determined with <10% error and, furthermore, that relative changes in vessels' SO2 can be monitored with even better accuracy. The outcome relies on a proper identification of the OA signals emanating from the studied vessels.

17.
18.
Liu, Q.; Chi, G. Y. Biometrics, 2001, 57(1): 172-177
Proschan and Hunsberger (1995, Biometrics 51, 1315-1324) proposed a two-stage adaptive design that maintains the Type I error rate. For practical applications, a two-stage adaptive design is also required to achieve a desired statistical power while limiting the maximum overall sample size. In our proposal, a two-stage adaptive design is comprised of a main stage and an extension stage, where the main stage has sufficient power to reject the null under the anticipated effect size and the extension stage allows increasing the sample size in case the true effect size is smaller than anticipated. For statistical inference, methods for obtaining the overall adjusted p-value, point estimate and confidence intervals are developed. An exact two-stage test procedure is also outlined for robust inference.

19.
Bivariate line-fitting methods for allometry   (cited by 14: 0 self-citations, 14 by others)
Fitting a line to a bivariate dataset can be a deceptively complex problem, and there has been much debate on this issue in the literature. In this review, we describe for the practitioner the essential features of line-fitting methods for estimating the relationship between two variables: what methods are commonly used, which method should be used when, and how to make inferences from these lines to answer common research questions. A particularly important point for line-fitting in allometry is that usually, two sources of error are present (which we call measurement and equation error), and these have quite different implications for choice of line-fitting method. As a consequence, the approach in this review and the methods presented have subtle but important differences from previous reviews in the biology literature. Linear regression, major axis and standardised major axis are alternative methods that can be appropriate when there is no measurement error. When there is measurement error, this often needs to be estimated and used to adjust the variance terms in formulae for line-fitting. We also review line-fitting methods for phylogenetic analyses. Methods of inference are described for the line-fitting techniques discussed in this paper. The types of inference considered here are testing if the slope or elevation equals a given value, constructing confidence intervals for the slope or elevation, comparing several slopes or elevations, and testing for shift along the axis amongst several groups. In some cases several methods have been proposed in the literature. These are discussed and compared. In other cases there is little or no previous guidance available in the literature. Simulations were conducted to check whether the methods of inference proposed have the intended coverage probability or Type I error. We identified the methods of inference that perform well and recommend the techniques that should be adopted in future work.
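The three standard estimators compared in this review differ only in their slope formulas: ordinary least squares uses s_xy/s_xx, the standardised major axis uses sign(r)·s_y/s_x, and the major axis takes the first principal axis of the scatter. A sketch without any measurement-error adjustment (the data are illustrative):

```python
from math import sqrt
from statistics import mean

def line_fits(x, y):
    """Slopes of ordinary least squares (OLS), major axis (MA) and
    standardised major axis (SMA) fits to a bivariate sample."""
    mx, my = mean(x), mean(y)
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ols = sxy / sxx
    # MA slope: first principal axis of the covariance matrix.
    ma = (syy - sxx + sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    # SMA slope: ratio of standard deviations, signed by the correlation.
    sma = (1 if sxy > 0 else -1) * sqrt(syy / sxx)
    return ols, ma, sma

# With scatter in both variables, |OLS slope| <= |SMA slope| always,
# since the OLS slope equals r * sd(y)/sd(x) and |r| <= 1.
x = [1.0, 2.1, 2.9, 4.2, 5.1, 5.8]
y = [2.2, 3.9, 6.3, 8.1, 10.4, 11.8]
ols, ma, sma = line_fits(x, y)
```

The systematic ordering |OLS| ≤ |SMA| is one concrete reason the choice of line-fitting method matters for allometric exponents: OLS attenuates the slope whenever the predictor carries error.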

20.
Aim: To determine (1) the effectiveness of correction for gradient non-linearity and susceptibility effects on both QUASAR GRID3D and CIRS phantoms; and (2) the magnitude and location of regions of residual distortion before and after correction.
Background: Using magnetic resonance imaging (MRI) as a primary dataset for radiotherapy planning requires correction for geometrical distortion and non-uniform intensity.
Materials and Methods: Phantom study: MRI, computed tomography (CT) and cone beam CT images of QUASAR GRID3D and CIRS head phantoms were acquired. Patient study: ten patients were MRI-scanned for stereotactic radiosurgery treatment. Correction algorithm: two magnitude and one phase difference image were acquired to create a field map. A MATLAB program was used to calculate geometrical distortion in the frequency-encoding direction, and 3D interpolation was applied to resize it to match 3D T1-weighted magnetization-prepared rapid gradient-echo (MPRAGE) images. The MPRAGE images were warped according to the interpolated field map in the frequency-encoding direction. The corrected and uncorrected MRI images were fused, deformably registered, and a difference distortion map generated.
Results: Maximum deviation improvements: GRID3D, 0.27 mm in the y-direction, 0.07 mm in the z-direction, 0.23 mm in the x-direction; CIRS, 0.34 mm, 0.1 mm and 0.09 mm at 20-, 40- and 60-mm diameters from the isocenter. Patient data show corrections from 0.2 to 1.2 mm, depending on location. The most distorted areas are around air cavities, e.g. the sinuses.
Conclusions: The phantom data show the validity of our fast distortion correction algorithm. Patient-specific data are acquired in <2 min, and analyzed and available for planning in less than a minute.
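The warping step at the heart of such field-map corrections is a per-voxel displacement of Δf / (bandwidth per pixel) along the frequency-encoding axis. A 1D sketch of the resampling (the bandwidth, offset, and profile below are invented; the study's algorithm works on interpolated 3D field maps matched to the MPRAGE grid):

```python
import numpy as np

def unwarp_1d(distorted, fieldmap_hz, bw_per_pixel_hz):
    """Undo the frequency-encode displacement: a voxel at true
    position i appears at i + delta_f / bw in the distorted profile,
    so the corrected value at i is resampled from there."""
    n = len(distorted)
    shift = fieldmap_hz / bw_per_pixel_hz     # per-voxel shift in pixels
    return np.interp(np.arange(n) + shift, np.arange(n), distorted)

# A box profile displaced by a uniform 25 Hz off-resonance at
# 10 Hz/pixel readout bandwidth (a 2.5-pixel shift) is resampled
# back onto the true grid.
truth = np.zeros(64)
truth[20:40] = 1.0
distorted = np.interp(np.arange(64) - 2.5, np.arange(64), truth)
fieldmap = np.full(64, 25.0)                  # Hz, from the phase images
corrected = unwarp_1d(distorted, fieldmap, bw_per_pixel_hz=10.0)
```

Linear interpolation leaves small residuals at sharp edges (here at the box boundaries), which is one reason residual-distortion maps like the difference maps in this study remain informative even after correction.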


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)