Similar Articles
20 similar articles found.
1.
W W Piegorsch. Biometrics, 1990, 46(2): 309-316
Dichotomous response models are common in many experimental settings. Often, concomitant explanatory variables are recorded, and a generalized linear model, such as a logit model, is fit. In some cases, interest in specific model parameters is directed only at one-sided departures from some null effect. In these cases, procedures can be developed for testing the null effect against a one-sided alternative. These include Bonferroni-type adjustments of univariate Wald tests, and likelihood ratio tests that employ inequality-constrained multivariate theory. This paper examines such tests of significance. Monte Carlo evaluations are undertaken to examine the small-sample properties of the various procedures. The procedures are seen to perform fairly well, generally achieving their nominal sizes at total sample sizes near 100 experimental units. Extensions to the problem of one-sided tests against a control or standard are also considered.
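A hedged illustration of the Bonferroni-type adjustment of univariate Wald tests mentioned above: the sketch fits a logit model and applies one-sided, Bonferroni-adjusted Wald tests to its slope coefficients. The synthetic data, variable names, and the use of statsmodels are assumptions for illustration, not details taken from the paper.

```python
# Sketch: one-sided Wald tests (H0: beta_j = 0 vs H1: beta_j > 0) for logit-model slopes,
# with a Bonferroni adjustment over the coefficients of interest.
# Synthetic data; all names and values are illustrative assumptions.
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100                                     # total sample size in the range discussed above
X = rng.normal(size=(n, 2))                 # two concomitant explanatory variables
eta = 0.5 * X[:, 0] + 0.3 * X[:, 1]         # true (positive) effects
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
z = fit.params[1:] / fit.bse[1:]            # Wald z statistics for the two slopes
p_one_sided = norm.sf(z)                    # upper-tailed p-values
p_bonferroni = np.minimum(p_one_sided * len(z), 1.0)

for j, p in enumerate(p_bonferroni, start=1):
    print(f"beta_{j}: adjusted one-sided p = {p:.4f}")
```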

2.
The anticonvulsant potential of chemical substances can be identified with test procedures that act at various biological levels, ranging from subcellular elements to the normal or modified intact animal. All of these procedures modify either some minimal overt threshold electrochemical or neurochemical event or a suprathreshold manifestation such as seizure spread. This suggests that laboratory tests for the detection, quantification, and evaluation of antiepileptic drugs should be designed to identify substances that elevate seizure threshold and/or prevent seizure spread. The s.c. Metrazol (pentylenetetrazol) seizure threshold test and the supramaximal electroshock seizure test are commonly used to achieve this objective. Additional chemoshock tests may be used to delineate further the mechanisms of anticonvulsant action. Numerous variables, such as experimental animals, electroshock apparatus, parameters of electrical and chemical stimulus, and routes of drug administration, must be controlled to ensure accurate, reliable, and reproducible results. The in vivo procedures described are reliable and reproducible, and predict clinical utility of the drugs tested. New models for testing anticonvulsant activity are evaluated against clinically effective antiepileptic drugs originally identified by these same procedures.

3.
Diagnostic imaging tests and microbial infections
Despite significant advances in the understanding of its pathogenesis, infection remains a major cause of patient morbidity and mortality. While the presence of infection may be suggested by signs and symptoms, imaging tests are often used to localize or confirm its presence. There are two principal imaging test types: morphological and functional. Morphological tests include radiographs, computed tomography (CT), magnetic resonance imaging, and sonography. These procedures detect anatomic, or structural, alterations produced by microbial invasion and host response. Functional imaging tests reflect the physiological changes that are part of this process. Prototypical functional tests are radionuclide procedures such as bone, gallium, labelled leukocyte and fluorodeoxyglucose (FDG)-positron emission tomography (PET) imaging. In-line functional/morphological tomographic imaging systems, PET/CT and single photon emission computed tomography (SPECT)/CT, have revolutionized diagnostic imaging. These devices consist of a functional imaging device (PET or SPECT) joined together with a CT scanner. The patient undergoes both tests sequentially without leaving the examination table. Images from each study can be viewed separately and as fused images, providing precisely localized anatomic and functional information. It must be noted, however, that none of the current morphological or functional tests, either alone or in combination, are specific for infection, and the goal of finding such an imaging test remains elusive.

4.
The influences of sampling factors (such as age, health of laboratory animals, environmental conditions, and procedures) on the standardization of haematological tests in safety drug evaluations performed on animals are discussed. Problems concerning the formation and interpretation of haematology reference data are briefly analysed.

5.
A J Bailer, C J Portier. Biometrics, 1988, 44(2): 417-431
Statistical tests of carcinogenicity are shown to have varying degrees of robustness to the effects of mortality. Mortality induced by two different mechanisms is studied: mortality due to the tumor of interest, and mortality due to treatment independent of the tumor. The two most commonly used tests, the life-table test and the Cochran-Armitage linear trend test, are seen to be highly sensitive to increases in treatment lethality using small-sample simulations. Increases in tumor lethality are seen to affect the performance of commonly used prevalence tests such as logistic regression. A simple survival-adjusted quantal response test appears to be the most robust of all the procedures considered.
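A minimal sketch of the (unadjusted) Cochran-Armitage linear trend test named above, computed directly from dose-group tumor counts. The dose scores and counts are invented for illustration, and the survival adjustments studied in the paper are not implemented.

```python
# Sketch: unadjusted Cochran-Armitage linear trend test for tumor incidence across dose groups.
# Dose scores and counts below are illustrative assumptions.
import numpy as np
from scipy.stats import norm

dose = np.array([0.0, 1.0, 2.0, 4.0])   # dose-group scores (assumed)
n = np.array([50, 50, 50, 50])          # animals per group (assumed)
x = np.array([2, 4, 7, 12])             # tumor-bearing animals per group (assumed)

p_bar = x.sum() / n.sum()
t_stat = np.sum(dose * (x - n * p_bar))
var = p_bar * (1 - p_bar) * (np.sum(n * dose**2) - np.sum(n * dose)**2 / n.sum())
z = t_stat / np.sqrt(var)
print(f"z = {z:.3f}, one-sided p = {norm.sf(z):.4f}")
```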

6.
In DNA library screening, blood testing, and monoclonal antibody generation, significant savings in the number of assays can be realized by employing group sampling. Practical considerations often limit the number of stages of group testing that can be performed. We address situations in which only two stages of testing are used. We define efficiency to be the expected number of positives isolated per assay performed and assume gold-standard tests with unit sensitivity and specificity. Although practical tests never are golden, polymerase chain reaction (PCR) methods provide procedures for screening recombinant libraries that are strongly selective yet retain high sensitivity even when samples are pooled. Also, results for gold-standard tests serve as bounds on the performance of practical testing procedures. First we derive formulas for the efficiency of certain extensions of the popular rows-and-columns technique. Then we derive an upper bound on the efficiency of any two-stage strategy that lies well below the classical upper bound for situations with no constraint on the number of stages. This establishes that a restriction to only two stages necessitates performing many more assays than efficient multistage procedures need. Next, we specialize the bound to cases in which each item belonging only to pools that tested positive in stage 1 must be tested individually in stage 2. The specialized bound for such positive procedures is tight because we show that an appropriate multidimensional extension of the rows-and-columns technique achieves it. We also show that two-stage positive procedures in which the stage-1 groups are selected at random perform suboptimally, thereby establishing that efficient tests must be structured carefully.
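The efficiency measure defined here (expected positives isolated per assay) can be illustrated with the simplest two-stage positive procedure, in which items are pooled once and every member of a positive pool is retested individually. This is a sketch under the gold-standard assumption, with prevalence and pool sizes chosen arbitrarily; it is not the rows-and-columns extension analyzed in the paper.

```python
# Sketch: efficiency (expected positives isolated per assay) of the simplest two-stage
# "positive" procedure: pool k items in stage 1, then individually retest every member
# of a positive pool in stage 2. Assumes a gold-standard assay; the prevalence and pool
# sizes below are illustrative, and this is not the rows-and-columns extension of the paper.
def two_stage_efficiency(p: float, k: int) -> float:
    """Expected positives isolated per assay for pool size k and prevalence p."""
    assays_per_item = 1.0 / k + (1.0 - (1.0 - p) ** k)   # stage-1 share + retest probability
    return p / assays_per_item

prevalence = 0.02   # assumed fraction of positive clones
for k in (5, 10, 20, 40):
    print(f"pool size {k:2d}: efficiency = {two_stage_efficiency(prevalence, k):.3f}")
```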

7.
The confirmatory analysis of pre-specified multiple hypotheses has become common in pivotal clinical trials. In the recent past multiple test procedures have been developed that reflect the relative importance of different study objectives, such as fixed sequence, fallback, and gatekeeping procedures. In addition, graphical approaches have been proposed that facilitate the visualization and communication of Bonferroni-based closed test procedures for common multiple test problems, such as comparing several treatments with a control, assessing the benefit of a new drug for more than one endpoint, combined non-inferiority and superiority testing, or testing a treatment at different dose levels in an overall and a subpopulation. In this paper, we focus on extended graphical approaches by dissociating the underlying weighting strategy from the employed test procedure. This allows one to first derive suitable weighting strategies that reflect the given study objectives and subsequently apply appropriate test procedures, such as weighted Bonferroni tests, weighted parametric tests accounting for the correlation between the test statistics, or weighted Simes tests. We illustrate the extended graphical approaches with several examples. In addition, we describe briefly the gMCP package in R, which implements some of the methods described in this paper.
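A minimal sketch of the sequentially rejective weighted Bonferroni procedure that underlies such graphical approaches, written in Python rather than with the gMCP package in R; the example graph (weights and transition matrix) and p-values are assumptions for illustration.

```python
# Sketch: sequentially rejective weighted Bonferroni test defined by a graph
# (initial weights w and transition matrix G), the core of the graphical approach.
# The example graph and p-values are assumptions for illustration.
import numpy as np

def graphical_bonferroni(p, w, G, alpha=0.025):
    p = np.asarray(p, dtype=float)
    w = np.asarray(w, dtype=float).copy()
    G = np.asarray(G, dtype=float).copy()
    rejected = np.zeros(len(p), dtype=bool)
    while True:
        eligible = np.where(~rejected & (p <= alpha * w))[0]
        if eligible.size == 0:
            return rejected
        i = eligible[0]                           # reject any eligible hypothesis
        rejected[i] = True
        w_new, G_new = w.copy(), np.zeros_like(G)
        for j in range(len(p)):
            if rejected[j]:
                w_new[j] = 0.0
                continue
            w_new[j] = w[j] + w[i] * G[i, j]      # pass on the local level of the rejected H_i
            for k in range(len(p)):
                if k == j or rejected[k]:
                    continue
                denom = 1.0 - G[j, i] * G[i, j]
                G_new[j, k] = (G[j, k] + G[j, i] * G[i, k]) / denom if denom > 0 else 0.0
        w, G = w_new, G_new

# Fixed-sequence-like example: H1 (primary endpoint) passes its weight to H2 (secondary).
p = [0.01, 0.02]
w = [1.0, 0.0]
G = [[0.0, 1.0],
     [1.0, 0.0]]
print(graphical_bonferroni(p, w, G))              # expect both hypotheses to be rejected
```

Rejecting a hypothesis propagates its local significance level along the graph's edges, which is how fixed-sequence, fallback, and gatekeeping strategies can be encoded as special cases.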

8.
Routine preoperative tests such as the determination of bleeding time and coagulation time are unnecessary and are not recommended. Rulings which require routine preoperative tests result in the adoption of inferior and unreliable time-saving methods in the laboratory. If the clinical staff insists that laboratory procedures to predict hemorrhage be performed on every patient scheduled for operation, approved methods of performing the tests should be employed. Preoperative procedures should include a personal and a family history, a careful and complete physical examination and screening laboratory tests such as urinalysis, hematocrit, leukocyte count and smear examination, including estimation of the number of thrombocytes. Special hemorrhagic studies are indicated on selected patients. These selected patients include those who have a history of abnormal bleeding, those who consider themselves “easy bleeders” or who have apprehension concerning hemorrhage at the time of operation, and those who have physical signs of hemorrhage. Special hemorrhagic studies should also be performed on patients who have diseases that are known to be associated with vascular and coagulation abnormalities, infants who have not been subjected to tests of trauma and on patients from whom a reliable history cannot be obtained. Extra precaution should be taken if operation is to be performed in hospitals or clinics that do not have adequate blood banking facilities and if the operation to be performed is one in which difficulty in hemostasis is anticipated. The preoperative tests that are indicated on selected patients should include as a minimum: the thrombocyte count, determination of the bleeding time by the Ivy method, determination of the coagulation time by the multiple tube method and the observation of the clot. Where facilities are available, the hemorrhagic study should also include the plasma and serum prothrombin activity tests.

9.
Invasive tests to diagnose patients with gastrointestinal disease are rapidly being replaced by procedures which enable organ function to be assessed by monitoring the product of a metabolic reaction in readily available materials such as breath, blood, and urine. Examples of these approaches that will be assessed in this review include the hydrogen breath test for lactase deficiency, radioactive carbon dioxide breath measurements to test for fat digestion and absorption, and tests of pancreatic function based upon synthetic substrates from which fluorescein or para-aminobenzoic acid can be liberated by pancreas-specific enzymes. Significant advances have been made in improving the organ sensitivity of enzyme determinations. The determination of amylase isoenzymes has been less useful than the measurement of immunoreactive trypsin; this latter enzyme is greatly elevated in the blood of neonates with cystic fibrosis, whereas serum levels are greatly depressed in cystic fibrosis patients with pancreatic insufficiency as well as in most patients with steatorrhea due to chronic pancreatitis. Many of these tests are now becoming standard procedures in the investigation of infants with gastrointestinal disease.

10.
A central goal in designing clinical trials is to find the test that maximizes power (or equivalently minimizes required sample size) for finding a false null hypothesis subject to the constraint of type I error. When there is more than one test, such as in clinical trials with multiple endpoints, the issues of optimal design and optimal procedures become more complex. In this paper, we address the question of how such optimal tests should be defined and how they can be found. We review different notions of power and how they relate to study goals, and also consider the requirements of type I error control and the nature of the procedures. This leads us to an explicit optimization problem with objective and constraints that describe its specific desiderata. We present a complete solution for deriving optimal procedures for two hypotheses, which have desired monotonicity properties, and are computationally simple. For some of the optimization formulations this yields optimal procedures that are identical to existing procedures, such as Hommel's procedure or the procedure of Bittman et al. (2009), while for other cases it yields completely novel and more powerful procedures than existing ones. We demonstrate the nature of our novel procedures and their improved power extensively in a simulation and on the APEX study (Cohen et al., 2016).
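For intuition only: with exactly two hypotheses, Hommel's procedure (one of the existing procedures referenced above) reduces to a simple step-up rule. The sketch below covers only that two-hypothesis special case with made-up p-values, and does not reproduce the optimization framework of the paper.

```python
# Sketch: Hommel's procedure in the special case of exactly two hypotheses, where it
# reduces to a step-up rule: reject both if the larger p-value is <= alpha; otherwise
# reject the hypothesis with the smaller p-value if it is <= alpha / 2.
# The example p-values are assumed.
def hommel_two(p1: float, p2: float, alpha: float = 0.05):
    lo, hi = sorted((p1, p2))
    if hi <= alpha:
        return {"reject_H1": True, "reject_H2": True}
    if lo <= alpha / 2:
        return {"reject_H1": p1 == lo, "reject_H2": p2 == lo}
    return {"reject_H1": False, "reject_H2": False}

print(hommel_two(0.012, 0.20))   # here only the hypothesis with p = 0.012 is rejected
```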

11.
Routine preoperative tests such as the determination of bleeding time and coagulation time are unnecessary and are not recommended. Rulings which require routine preoperative tests result in the adoption of inferior and unreliable time-saving methods in the laboratory. If the clinical staff insists that laboratory procedures to predict hemorrhage be performed on every patient scheduled for operation, approved methods of performing the tests should be employed. Preoperative procedures should include a personal and a family history, a careful and complete physical examination and screening laboratory tests such as urinalysis, hematocrit, leukocyte count and smear examination, including estimation of the number of thrombocytes. Special hemorrhagic studies are indicated on selected patients. These selected patients include those who have a history of abnormal bleeding, those who consider themselves "easy bleeders" or who have apprehension concerning hemorrhage at the time of operation, and those who have physical signs of hemorrhage. Special hemorrhagic studies should also be performed on patients who have diseases that are known to be associated with vascular and coagulation abnormalities, infants who have not been subjected to tests of trauma and on patients from whom a reliable history cannot be obtained. Extra precaution should be taken if operation is to be performed in hospitals or clinics that do not have adequate blood banking facilities and if the operation to be performed is one in which difficulty in hemostasis is anticipated. The preoperative tests that are indicated on selected patients should include as a minimum: the thrombocyte count, determination of the bleeding time by the Ivy method, determination of the coagulation time by the multiple tube method and the observation of the clot. Where facilities are available, the hemorrhagic study should also include the plasma and serum prothrombin activity tests.

12.
Dallas MJ, Rao PV. Biometrics, 2000, 56(1): 154-159
We introduce two test procedures for comparing two survival distributions on the basis of randomly right-censored data consisting of both paired and unpaired observations. Our procedures are based on generalizations of a pooled rank test statistic previously proposed for uncensored data. One generalization adapts the Prentice-Wilcoxon score, while the other adapts the Akritas score. The use of these particular scoring systems in pooled rank tests with randomly right-censored paired data has been advocated by several researchers. Our test procedures utilize the permutation distributions of the test statistics based on a novel manner of permuting the scores. Permutation versions of tests for right-censored paired data and for two independent right-censored samples that use the proposed scoring systems are obtained as special cases of our test procedures. Simulation results show that our test procedures have high power for detecting scale and location shifts in exponential and log-logistic distributions for the survival times. We also demonstrate the advantages of our test procedures in terms of utilizing randomly occurring unpaired observations that are discarded in test procedures for paired data. The tests are applied to skin graft data previously reported elsewhere.
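Only to illustrate the permutation principle behind such tests, the sketch below runs a sign-flip permutation test on paired differences; it ignores censoring and the Prentice-Wilcoxon and Akritas scores used in the paper, and the paired data are simulated for illustration.

```python
# Sketch of the permutation principle only: a sign-flip permutation test on paired
# differences. It ignores censoring and the Prentice-Wilcoxon / Akritas scores used in
# the paper; the paired "survival times" below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
treated = rng.exponential(scale=1.5, size=15)   # assumed treatment-arm times
control = rng.exponential(scale=1.0, size=15)   # assumed control-arm times
d = treated - control                           # within-pair differences

observed = d.mean()
n_perm = 10_000
signs = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
null_means = (signs * d).mean(axis=1)           # sign-flip permutation distribution
p_value = (np.sum(np.abs(null_means) >= abs(observed)) + 1) / (n_perm + 1)
print(f"two-sided permutation p = {p_value:.4f}")
```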

13.
In the analysis of gene expression by microarrays there are usually few subjects, but high-dimensional data. By means of techniques such as the theory of spherical tests or suitable permutation tests, it is possible to sort the endpoints or to give weights to them according to specific criteria determined by the data while controlling the multiple type I error rate. The procedures developed so far are based on a sequential analysis of weighted p-values (corresponding to the endpoints), including the most extreme situation of weighting leading to a complete order of p-values. When the data for the endpoints have approximately equal variances, these procedures show good power properties. In this paper, we consider an alternative procedure, which is based on completely sorting the endpoints, but smoothed in the sense that some perturbations in the sequence of the p-values are allowed. The procedure is relatively easy to perform, but has high power under the same restrictions as for the weight-based procedures.

14.
In a 2 × 2 crossover bioavailability study, the sets of estimates of the pharmacokinetic parameters quite often have a symmetric covariance structure between the two treatments. For testing the equality of the intra-subject covariance matrices for the two treatments in such studies, we suggest in this paper some statistical tests. When the response vectors are bivariate, we propose an exact test. Since the statistical procedures depend on the assumption of a symmetric covariance structure between the two treatments, we put forth some statistical tests for this assumption. We then apply the discussed tests to real data from a crossover bioavailability trial.

15.
This paper discusses two-sample nonparametric comparison of survival functions when only interval-censored failure time data are available. The problem considered often occurs in, for example, biological and medical studies such as medical follow-up studies and clinical trials. For the problem, we present and study several nonparametric test procedures that include methods based on both absolute and squared survival differences as well as simple survival differences. The presented tests provide alternatives to existing methods, most of which are rank-based tests and not sensitive to nonproportional or nonmonotone alternatives. Simulation studies are performed to evaluate and compare the proposed methods with existing methods and suggest that the proposed tests work well for nonmonotone alternatives as well as monotone alternatives. An illustrative example is presented.

16.
The test statistics used until now in the CFA have been developed under the assumption of the overall hypothesis of total independence. Therefore, the multiple test procedures based on these statistics are really only different tests of the overall hypothesis. If one wants to test a specific cell hypothesis, one should assume only that this hypothesis is true, not the whole overall hypothesis. Such cell tests can then be used as elements of a multiple test procedure. In this paper it is shown that the usual test procedures can be very anticonservative (except in the two-dimensional and, for some procedures, the three-dimensional case), and corrected test procedures are developed. Furthermore, for the construction of multiple tests controlling the multiple level, modifications of Holm's (1979) procedure are proposed which lead to sharper results than his general procedure and can also be performed very easily.
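For reference, Holm's (1979) step-down procedure, which the proposed modifications aim to sharpen, can be sketched as follows; the cell-test p-values are assumed, and the CFA-specific corrections from the entry are not implemented.

```python
# Sketch: Holm's (1979) step-down procedure controlling the multiple level (FWER).
# The cell-test p-values below are assumed; the sharper CFA-specific modifications
# proposed in the entry are not implemented here.
import numpy as np

def holm(pvals, alpha=0.05):
    p = np.asarray(pvals, dtype=float)
    m = p.size
    reject = np.zeros(m, dtype=bool)
    for step, idx in enumerate(np.argsort(p)):
        if p[idx] <= alpha / (m - step):
            reject[idx] = True
        else:
            break                     # stop at the first non-rejection
    return reject

print(holm([0.001, 0.04, 0.03, 0.20]))   # only the smallest p-value is rejected here
```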

17.
Emphasis on accuracy in predicting the effect that anthropogenic stress has on natural ecosystems has increased. Although toxicity tests low in environmental realism, such as standardized single species procedures, have been useful in providing a certain degree of protection to human health and the environment, the accuracy of such tests for predicting the effects of anthropogenic activities on complex ecosystems is questionable. The use of indigenous communities of microorganisms to assess the hazard of toxicants in aquatic ecosystems has many advantages. Theoretical and practical aspects of microbial community tests are discussed, particularly in relation to widely cited problems in the use of multispecies test systems for predicting hazard. Further standardization of testing protocols using microbial colonization dynamics is advocated on the basis of previous studies, which have shown these parameters to be useful in assessing risk and impact of hazardous substances in aquatic ecosystems.

18.
In this paper we compare the properties of four different general approaches for testing the ratio of two Poisson rates. Asymptotically normal tests, tests based on approximate p-values, exact conditional tests, and a likelihood ratio test are considered. The properties and power performance of these tests are studied by a Monte Carlo simulation experiment. Sample size calculation formulae are given for each of the test procedures and their validities are studied. Some recommendations favoring the likelihood ratio and certain asymptotic tests are based on these simulation results. Finally, all of the test procedures are illustrated with two real-life medical examples.
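A minimal sketch of the exact conditional test for the ratio of two Poisson rates mentioned above: conditioning on the total count reduces the problem to a binomial test. The counts, exposure times, and null ratio below are assumptions for illustration.

```python
# Sketch: exact conditional test of H0: lambda1 / lambda2 = rho0 for two Poisson rates.
# Conditional on the total count, x1 ~ Binomial(x1 + x2, rho0*t1 / (rho0*t1 + t2)).
# Counts, exposure times, and the null ratio below are illustrative assumptions.
from scipy.stats import binomtest

x1, t1 = 18, 100.0   # events and exposure time in group 1 (assumed)
x2, t2 = 8, 120.0    # events and exposure time in group 2 (assumed)
rho0 = 1.0           # null value of the rate ratio lambda1 / lambda2

p_null = rho0 * t1 / (rho0 * t1 + t2)
result = binomtest(x1, n=x1 + x2, p=p_null, alternative="two-sided")
print(f"exact conditional p = {result.pvalue:.4f}")
```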

19.
The need to consider random effects in capture-recapture models, besides fixed effects such as those of environmental covariates, has been widely recognized in recent years. However, formal approaches require involved likelihood integrations, and conceptual and technical difficulties have slowed down the spread of capture-recapture mixed models among biologists. In this article, we evaluate simple procedures to test for the effect of an environmental covariate on parameters such as time-varying survival probabilities in the presence of a random effect corresponding to unexplained environmental variation. We show that the usual likelihood ratio test between fixed models is strongly biased, and tends to detect a covariate effect too often. Permutation and analysis of deviance tests are shown to behave properly and are recommended. Permutation tests are implemented in the latest version of program E-SURGE. Our approach also applies to generalized linear mixed models.
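To illustrate the recommended permutation approach in a generic way, the sketch below permutes the environmental covariate, refits the model, and compares the observed deviance reduction with its permutation distribution. An ordinary binomial GLM stands in for a capture-recapture mixed model here, and all data and parameter values are invented.

```python
# Sketch of the recommended permutation test in a generic setting: permute the yearly
# covariate, refit, and compare the observed deviance reduction with its permutation
# distribution. An ordinary binomial GLM stands in for a capture-recapture mixed model;
# all data and parameter values below are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_years = 20
covariate = rng.normal(size=n_years)              # e.g. a yearly climate index (assumed)
trials = np.full(n_years, 50)                     # marked animals released per year (assumed)
p_true = 1 / (1 + np.exp(-(0.2 + 0.4 * covariate)))
survivors = rng.binomial(trials, p_true)
y = np.column_stack([survivors, trials - survivors])

null_dev = sm.GLM(y, np.ones((n_years, 1)), family=sm.families.Binomial()).fit().deviance

def deviance_drop(cov):
    fit = sm.GLM(y, sm.add_constant(cov), family=sm.families.Binomial()).fit()
    return null_dev - fit.deviance

observed = deviance_drop(covariate)
perm = np.array([deviance_drop(rng.permutation(covariate)) for _ in range(999)])
p_value = (np.sum(perm >= observed) + 1) / (len(perm) + 1)
print(f"permutation p-value for the covariate effect = {p_value:.3f}")
```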

20.
In this paper, several different procedures for constructing confidence regions for the true evolutionary tree are evaluated in terms of both coverage and size, without considering model misspecification. The regions are constructed on the basis of hypothesis tests using six existing tests: Shimodaira-Hasegawa (SH), SOWH, star form of SOWH (SSOWH), approximately unbiased (AU), likelihood weight (LW), generalized least squares, plus two new tests proposed in this paper: single distribution nonparametric bootstrap (SDNB) and single distribution parametric bootstrap (SDPB). The procedures are evaluated on simulated trees with both small and large numbers of taxa. Overall, the SH, SSOWH, AU, and LW tests led to regions with higher coverage than the nominal level at the price of including large numbers of trees. Under the specified model, the SOWH test gives accurate coverage and relatively small regions. The SDNB and SDPB tests led to small regions with occasional undercoverage. These two procedures have a substantial computational advantage over the SOWH test. Finally, the cutoff levels for the SDNB test are shown to be more variable than those for the SDPB test.
