Similar Documents

20 similar documents found.
1.
Reference intervals are widely used in the interpretation of results of biochemical and physiological tests of patients. When there are multiple biochemical analytes measured from each subject, a multivariate reference region is needed. Because of their greater specificity against false positives, such reference regions are more desirable than separate univariate reference intervals that disregard the cross-correlations between variables. Traditionally, under multivariate normality, reference regions have been constructed as ellipsoidal regions. This approach suffers from a major drawback: it cannot detect component-wise extreme observations. In the present work, procedures are developed to construct rectangular reference regions in the multivariate normal setup. The construction is based on the criteria for tolerance intervals. The problems addressed include the computation of a rectangular tolerance region and simultaneous tolerance intervals. Also addressed is the computation of mixed reference intervals that include both two-sided and one-sided limits, simultaneously. A parametric bootstrap approach is used in the computations, and the accuracy of the proposed methodology is assessed using estimated coverage probabilities. The problem of sample size determination is also addressed, and the results are illustrated using examples that call for the computation of reference regions.
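The rectangular-region idea above can be sketched with a small parametric bootstrap. The function below is an illustrative stand-in, not the paper's exact algorithm: each bootstrap replicate re-estimates the centre and scales, measures the expansion factor k needed for the rectangle xbar_j ± k·s_j to capture the desired content, and the final k is a confidence-level quantile of those factors.

```python
import numpy as np

def rect_tolerance_factor(X, content=0.90, conf=0.95, B=200, M=2000, seed=0):
    """Parametric-bootstrap factor k such that the rectangle
    xbar_j +/- k * s_j is estimated to contain at least `content` of a
    multivariate normal population with confidence `conf`.
    Illustrative sketch only, not the paper's exact procedure."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    mu_hat = X.mean(axis=0)
    S_hat = np.cov(X, rowvar=False)
    ks = []
    for _ in range(B):
        # resample the estimation error of the centre and the scales
        Xb = rng.multivariate_normal(mu_hat, S_hat, size=n)
        m, sd = Xb.mean(axis=0), Xb.std(axis=0, ddof=1)
        # Monte Carlo draw standing in for the fitted "true" distribution
        Z = rng.multivariate_normal(mu_hat, S_hat, size=M)
        # smallest k whose rectangle around (m, sd) captures `content` of Z
        d = np.max(np.abs(Z - m) / sd, axis=1)
        ks.append(np.quantile(d, content))
    return float(np.quantile(ks, conf))
```

For three independent standard-normal analytes, k must exceed the univariate two-sided 90% factor (about 1.64), since the rectangle has to hold simultaneously in all coordinates.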

2.
Multivariate data analysis (MVDA) is a highly valuable and significantly underutilized resource in biomanufacturing. It offers the opportunity to enhance understanding and leverage useful information from complex high-dimensional data sets, recorded throughout all stages of therapeutic drug manufacture. To help standardize the application and promote this resource within the biopharmaceutical industry, this paper outlines a novel MVDA methodology describing the necessary steps for efficient and effective data analysis. The MVDA methodology is followed to solve two case studies: a "small data" and a "big data" challenge. In the "small data" example, a large-scale data set is compared to data from a scale-down model. This methodology enables a new quantitative metric for equivalence to be established by combining the two one-sided tests procedure with principal component analysis. In the "big data" example, this methodology enables accurate predictions of critical missing data essential to a cloning study performed in the ambr15 system. These predictions are generated by exploiting the underlying relationship between the off-line missing values and the on-line measurements through the generation of a partial least squares model. In summary, the proposed MVDA methodology highlights the importance of data pre-processing, restructuring, and visualization during data analytics to solve complex biopharmaceutical challenges.
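One way to combine two one-sided tests (TOST) with PCA, as this abstract describes, is to project both data sets onto a pooled principal component and run TOST on the scores. The sketch below is a hedged illustration of that idea only: the function name, the equivalence margin, and the use of PC1 alone are assumptions, not the paper's published metric.

```python
import numpy as np
from scipy import stats

def tost_on_pc1(A, B, margin, alpha=0.05):
    """TOST for equivalence of two data sets on their first pooled
    principal-component scores. Illustrative sketch; `margin` is an
    assumed, domain-chosen equivalence bound."""
    pooled = np.vstack([A, B])
    centre = pooled.mean(axis=0)
    # leading principal direction of the pooled, centred data
    _, _, Vt = np.linalg.svd(pooled - centre, full_matrices=False)
    sA, sB = (A - centre) @ Vt[0], (B - centre) @ Vt[0]
    nA, nB = len(sA), len(sB)
    d = sA.mean() - sB.mean()
    sp2 = ((nA - 1) * sA.var(ddof=1) + (nB - 1) * sB.var(ddof=1)) / (nA + nB - 2)
    se = np.sqrt(sp2 * (1 / nA + 1 / nB))
    df = nA + nB - 2
    p_lower = stats.t.sf((d + margin) / se, df)   # H0: d <= -margin
    p_upper = stats.t.cdf((d - margin) / se, df)  # H0: d >= +margin
    p = max(p_lower, p_upper)                     # reject both to conclude equivalence
    return p < alpha, float(p)
```

Two samples from the same distribution pass a generous margin; a shifted sample fails it.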

3.
A score-type test is proposed for testing the hypothesis of independent binary random variables against positive correlation in linear logistic models with sparse data and cluster-specific covariates. The test is developed for univariate and multivariate one-sided alternatives. The main advantage of using a score test is that it requires estimation of the model only under the null hypothesis, which in this case corresponds to the binomial maximum likelihood fit. The score-type test is developed from a class of estimating equations with block-diagonal structure in which the coefficients of the linear logistic model are estimated simultaneously with the correlation. The simplicity of the score test is illustrated in two particular examples.

4.
The total deviation index of Lin and Lin et al. is an intuitive approach for the assessment of agreement between two methods of measurement. It assumes that the differences of the paired measurements are a random sample from a normal distribution and works essentially by constructing a probability content tolerance interval for this distribution. We generalize this approach to the case when the differences may not have identical distributions, a common scenario in applications. In particular, we use the regression approach to model the mean and the variance of the differences as functions of the observed values of the average of the paired measurements, and describe two methods based on the asymptotic theory of maximum likelihood estimators for constructing a simultaneous probability content tolerance band. The first method uses the bootstrap to approximate the critical point, and the second method is an analytical approximation. Simulation shows that the first method works well for sample sizes as small as 30 and that the second method is preferable for large sample sizes. We also extend the methodology to the case when the mean function is modeled using penalized splines via a mixed model representation. Two real data applications are presented.
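In the identically distributed case this abstract starts from, the total deviation index is just the p-th quantile of |D| for D ~ N(mu, sigma^2), which can be computed by a one-dimensional root solve. The sketch below covers only that constant-distribution case; the paper's contribution is the generalization where mu and sigma depend on the average of the paired measurements.

```python
import numpy as np
from scipy import stats, optimize

def tdi(d, p=0.90):
    """Total deviation index: the p-th quantile of |D|, assuming the
    paired differences D are i.i.d. N(mu, sigma^2). Sketch of the
    constant-distribution case only."""
    mu, sigma = d.mean(), d.std(ddof=1)
    # P(|D| <= q) = Phi((q - mu)/sigma) - Phi((-q - mu)/sigma); solve for q
    g = lambda q: (stats.norm.cdf((q - mu) / sigma)
                   - stats.norm.cdf((-q - mu) / sigma) - p)
    return optimize.brentq(g, 0.0, abs(mu) + 10 * sigma)
```

For standard-normal differences the 90% TDI is about 1.645, i.e. 90% of paired differences fall within ±1.645.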

5.
A spatially explicit model was developed to study the relationships between the dynamics and spatial structure of forest stands. The objective was to test whether tree spatial structure can be used as an indicator of stand dynamics. The model simulates the growth, mortality and recruitment of trees in a multi-specific and uneven-aged stand. It includes deterministic and stochastic processes so that repeated simulations do not lead to the same stand but provide several possible results for a given dynamic (defined by a set of parameters). Second-order neighbourhood analyses were used to characterize the resulting spatial structures. They showed a high variability for a given set of parameters. Only the main trends in the spatial structure can be interpreted. Sensitivity analyses, concerning the influence of competition on spatial structure, showed that in heterogeneous stands confounding effects can hinder the interpretation of the spatial structure if all the trees are considered. The spatial structure of the canopy trees alone proved easier to interpret, as it is directly linked to post-recruitment competition. Inference on the dominant modality of competition (one-sided or two-sided) based on the spatial structure proved difficult.

6.
Recently, Brown, Hwang, and Munk (1998) proposed an unbiased test for the average equivalence problem which improves noticeably in power on the standard two one-sided tests procedure. Nevertheless, from a practical point of view there are some objections to the use of this test, mainly concerning the 'unusual' shape of its critical region. We show that every unbiased test has a critical region with such an 'unusual' shape. Therefore, we discuss three (biased) modifications of the unbiased test. We come to the conclusion that a suitable modification represents a good compromise between a most powerful test and a test with an appealing shape of its critical region. Figures containing the rejection regions are given so that these tests can be performed. Finally, we compare all tests in an example from neurophysiology. This shows that it is beneficial to use these improved tests instead of the two one-sided tests procedure.

7.
Diversity indices might be used to assess the impact of treatments on the relative abundance patterns in species communities. When several treatments are to be compared, simultaneous confidence intervals for the differences of diversity indices between treatments may be used. The simultaneous confidence interval methods described until now are either constructed or validated under the assumption of a multinomial distribution for the abundance counts. Motivated by four example data sets with background in agricultural and marine ecology, we focus on the situation when available replications show that the count data exhibit extra-multinomial variability. Based on simulated overdispersed count data, we compare previously proposed methods assuming a multinomial distribution, a method assuming a normal distribution for the replicated observations of the diversity indices, and three different bootstrap methods to construct simultaneous confidence intervals for multiple differences of Simpson and Shannon diversity indices. The focus of the simulation study is on comparisons to a control group. The severe failure of asymptotic multinomial methods in overdispersed settings is illustrated. Among the bootstrap methods, the widely known Westfall-Young method performs best for the Simpson index, while for the Shannon index, two methods based on stratified bootstrap and summed count data are preferable. The application of the methods is illustrated with an example.
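The indices compared in this abstract, and the simplest of the interval constructions it discusses, are easy to sketch. The percentile bootstrap below resamples individuals from the observed proportions (i.e. a plain multinomial resampler), which is exactly the assumption the paper warns breaks down under overdispersion; their preferred methods use the replicated samples instead.

```python
import numpy as np

def shannon(counts):
    """Shannon diversity H = -sum p_i log p_i."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def simpson(counts):
    """Simpson diversity 1 - sum p_i^2."""
    p = counts / counts.sum()
    return float(1.0 - (p ** 2).sum())

def boot_ci_diff(c1, c2, index=shannon, B=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the difference of a diversity index
    between two communities, under a multinomial resampler. Sketch only:
    it ignores extra-multinomial variability."""
    rng = np.random.default_rng(seed)
    n1, n2 = c1.sum(), c2.sum()
    diffs = [index(rng.multinomial(n1, c1 / n1)) -
             index(rng.multinomial(n2, c2 / n2)) for _ in range(B)]
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)
```

An even community has Shannon index log(S); comparing it with a strongly skewed community of the same richness gives an interval that excludes zero.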

8.
Recently, BHATTI (1993) considered efficient estimation of a random coefficient model based on survey data. The main objective of this paper is to construct a one-sided test for the equicorrelation coefficient in the presence of random coefficients, using an optimal testing procedure. The test statistic is a ratio of quadratic forms in normal variables which is most powerful and point optimal invariant.

9.
Lee and Spurrier (1995) present one-sided and two-sided confidence interval procedures for making successive comparisons between ordered treatments. Their procedures have important applications for problems where the treatments can be assumed to satisfy a simple ordering, such as for a sequence of increasing dose levels of a drug. The two-sided procedure provides both upper and lower bounds on the differences between successive treatments, whereas the one-sided procedure only provides lower bounds on these differences. However, the one-sided procedure allows sharper inferences regarding which treatments can be declared better than the preceding ones. In this paper we apply the results obtained in Hayter, Miwa, and Liu (2000) to develop a new procedure which combines the good aspects of both the one-sided and the two-sided procedures. This new procedure maintains the inferential sensitivity of the one-sided procedure while also providing both upper and lower bounds on the differences between successive treatments. Some new critical points are needed, which are tabulated for the balanced case where the sample sizes are all equal. The application of the new procedure is illustrated with an example.

10.
Thermal tolerance is an important factor influencing the distribution of ectotherms, but our understanding of the ability of species to evolve different thermal limits is limited. Based on univariate measures of adaptive capacity, it has recently been suggested that species may have limited evolutionary potential to extend their upper thermal limits under ramping temperature conditions that better reflect heat stress in nature. To test these findings more broadly, we used a paternal half-sibling breeding design to estimate the multivariate evolutionary potential for upper thermal limits in Drosophila simulans. We assessed heat tolerance using static (basal and hardened) and ramping assays. Our analyses revealed significant evolutionary potential for all three measures of heat tolerance. Additive genetic variances were significantly different from zero for all three traits. Our G matrix analysis revealed that all three traits would contribute to a response to selection for increased heat tolerance. Additive genetic covariances and correlations were significant between static basal and hardened heat-knockdown time, and marginally nonsignificant between static basal and ramping heat-knockdown time, indicating that direct and correlated responses to selection for increased upper thermal limits are possible. Thus, combinations of all three traits will contribute to the evolution of upper thermal limits in response to selection imposed by a warming climate. Reliance on univariate estimates of evolutionary potential may not provide accurate insight into the ability of organisms to evolve upper thermal limits in nature.

11.
We consider uniformly most powerful (UMP) as well as uniformly most powerful unbiased (UMPU) tests and their non-randomized versions for certain hypotheses concerning a binomial parameter. It will be shown that the power function of a UMP(U) test based on sample size n can coincide on the entire parameter space with the power function of the corresponding test based on sample size n + 1. A complete characterization of this paradox will be derived. Apart from some exceptional cases for two-sided tests and equivalence tests, the paradox appears if and only if the test based on sample size n is non-randomized.
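A closely related sample-size effect for non-randomized binomial tests is easy to verify exactly, and gives intuition for why such paradoxes hinge on non-randomization: when n grows by one, the discrete critical value can jump, so the extra observation can even reduce power. This is a numerical illustration of the phenomenon's source, not the paper's full characterization.

```python
from scipy import stats

def critical_value(n, p0=0.5, alpha=0.05):
    """Smallest c with P(Bin(n, p0) >= c) <= alpha: the rejection
    threshold of the non-randomized one-sided level-alpha test of
    H0: p <= p0."""
    for c in range(n + 2):
        if stats.binom.sf(c - 1, n, p0) <= alpha:
            return c

def power(n, p, p0=0.5, alpha=0.05):
    """Exact power of the non-randomized test at the alternative p."""
    return float(stats.binom.sf(critical_value(n, p0, alpha) - 1, n, p))
```

With p0 = 0.5 and alpha = 0.05, the test with n = 5 rejects only on 5 successes (size 1/32), while n = 6 forces rejection only on 6 successes (since P(X >= 5) = 7/64 > 0.05), so at p = 0.8 the power drops from 0.8^5 to 0.8^6.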

12.
A method for modeling the relationship of polychotomous health ratings with predictors such as area characteristics, the distance to a source of environmental contamination, or exposure to environmental pollutants is presented. The model combines elements of grouped regression and multilevel analysis. The statistical model describes the entire response distribution as a function of the predictors, so that any measure that summarizes this distribution can be calculated from the model. With the model, polychotomous health ratings can be used directly, and there is no need to dichotomize such variables a priori, which would lead to a loss of information. It is described how, according to the model, various measures describing the response distribution are related to the exposure, and confidence and tolerance intervals for these relationships are presented. Specific attention is given to the incorporation of random factors in the model. The application that serves as an example here concerns annoyance from transportation noise. Exposure-response relationships obtained with the described method of modeling are presented for aircraft, road traffic, and railway noise.

13.
In the development of structural equation models (SEMs), observed variables are usually assumed to be normally distributed. However, this assumption is likely to be violated in much practical research. As the non-normality of observed variables in an SEM can arise from non-normal latent variables, non-normal residuals, or both, semiparametric modeling with an unknown distribution of the latent variables or an unknown distribution of the residuals is needed. In this article, we find that an SEM becomes nonidentifiable when both the latent variable distribution and the residual distribution are unknown. Hence, it is impossible to estimate reliably both the latent variable distribution and the residual distribution without parametric assumptions on one or the other. We also find that the residuals in the measurement equation are more sensitive to the normality assumption than the latent variables, and that the negative impact on the estimation of parameters and distributions due to the non-normality of residuals is more serious. Therefore, when there is no prior knowledge about parametric distributions for either the latent variables or the residuals, we recommend making a parametric assumption on the latent variables and modeling the residuals nonparametrically. We propose a semiparametric Bayesian approach using the truncated Dirichlet process with a stick-breaking prior to tackle the non-normality of residuals in the measurement equation. Simulation studies and a real data analysis demonstrate our findings and reveal the empirical performance of the proposed methodology. A free WinBUGS code to perform the analysis is available in Supporting Information.

14.
This paper is motivated by the GH-2000 biomarker test, though the discussion is applicable to other diagnostic tests. The GH-2000 biomarker test has been developed as a powerful technique to detect growth hormone misuse by athletes, based on the GH-2000 score. Decision limits on the GH-2000 score have been developed and incorporated into the guidelines of the World Anti-Doping Agency (WADA). These decision limits are constructed, however, under the assumption that the GH-2000 score follows a normal distribution. As it is difficult to affirm the normality of a distribution based on a finite sample, nonparametric decision limits, readily available in the statistical literature, are viable alternatives. In this paper, we compare the normal distribution-based and nonparametric decision limits. We show that the decision limit based on the normal distribution may deviate significantly from the nominal confidence level or nominal FPR when the distribution of the GH-2000 score departs only slightly from the normal distribution. While a nonparametric decision limit does not assume any specific distribution of the GH-2000 score and always guarantees the nominal confidence level and FPR, it requires a much larger sample size than the normal distribution-based decision limit. Due to the stringent FPR of the GH-2000 biomarker test used by WADA, the sample sizes currently available are much too small, and it will take many years of testing to reach the minimum sample size required to use the nonparametric decision limits. Large-sample theory about the normal distribution-based and nonparametric decision limits is also developed in this paper to help in understanding their behaviour when the sample size is large.
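The sample-size point in this abstract can be made concrete with the classical distribution-free upper limit: the largest of n observations exceeds the beta-content quantile with confidence 1 - content^n. The code below is a sketch of that order-statistic argument; the specific FPR of 1 in 10,000 is an assumed figure for illustration, not a value stated by the paper or WADA.

```python
import math

def confidence_of_max(n, content):
    """Confidence that the largest of n observations exceeds the
    `content` quantile of the underlying distribution (distribution-free:
    P(max > q_content) = 1 - content^n)."""
    return 1.0 - content ** n

def min_n_for_max_limit(content, conf):
    """Smallest n for which the sample maximum is a valid
    distribution-free upper limit with the given content and confidence."""
    return math.ceil(math.log(1.0 - conf) / math.log(content))
```

Under the assumed FPR of 1/10,000 (content 0.9999) and 95% confidence, even the most economical nonparametric limit (the sample maximum) needs on the order of 30,000 observations, which illustrates why "many years of testing" would be required.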

15.
In risk assessment, it is often desired to make inferences on the low dose levels at which a specific benchmark risk is attained. Applications of simultaneous hyperbolic confidence bands for low-dose risk estimation with quantal data under different dose-response models (multistage, Abbott-adjusted Weibull, and Abbott-adjusted log-logistic models) have appeared in the literature. The use of simultaneous three-segment bands under the multistage model has also been proposed recently. In this article, we present explicit formulas for constructing asymptotic one-sided simultaneous hyperbolic and three-segment bands for the simple log-logistic regression model. We use the simultaneous construction to estimate upper hyperbolic and three-segment confidence bands on extra risk and to obtain lower limits on the benchmark dose by inverting the upper bands on risk under the Abbott-adjusted log-logistic model. Monte Carlo simulations evaluate the characteristics of the simultaneous limits. An example is given to illustrate the use of the proposed methods and to compare the two types of simultaneous limits at very low dose levels.

16.
Construction of simultaneous confidence sets for several effective doses currently relies on inverting the Scheffé type simultaneous confidence band, which is known to be conservative. We develop novel methodology to make the simultaneous coverage closer to its nominal level, for both two-sided and one-sided simultaneous confidence sets. Our approach is shown to be considerably less conservative than the current method, and is illustrated with an example on modeling the effect of smoking status and serum triglyceride level on the probability of the recurrence of a myocardial infarction.

17.
Organisms in all domains, Archaea, Bacteria, and Eukarya, will respond to climate change with differential vulnerabilities resulting in shifts in species distribution, coexistence, and interactions. The identification of unifying principles of organism functioning across all domains would facilitate a cause-and-effect understanding of such changes and their implications for ecosystem shifts. For example, the functional specialization of all organisms within limited temperature ranges leads us to ask for unifying functional reasons. Organisms also specialize in either anoxic or various oxygen ranges, with animals and plants depending on high oxygen levels. Here, we identify thermal ranges, heat limits of growth, and critically low (hypoxic) oxygen concentrations as proxies of tolerance in a meta-analysis of data available for marine organisms, with special reference to domain-specific limits. For an explanation of the patterns and differences observed, we define and quantify a proxy for organismic complexity across species from all domains. Rising complexity causes heat (and hypoxia) tolerances to decrease from Archaea to Bacteria to uni- and then multicellular Eukarya. Within and across domains, taxon-specific tolerance limits likely reflect the ultimate evolutionary limits of their species to acclimatization and adaptation. We hypothesize that rising taxon-specific complexities in structure and function constrain organisms to narrower environmental ranges. Low complexity, as in Archaea and some Bacteria, provides life options in extreme environments. In the warmest oceans, temperature maxima reach and will surpass the permanent limits to the existence of multicellular animals, plants and unicellular phytoplankters. Smaller, less complex unicellular Eukarya, Bacteria, and Archaea will thus benefit and predominate even more in a future, warmer, and hypoxic ocean.

18.
Automated variable selection procedures, such as backward elimination, are commonly employed to perform model selection in the context of multivariable regression. The stability of such procedures can be investigated using a bootstrap-based approach. The idea is to apply the variable selection procedure on a large number of bootstrap samples successively and to examine the obtained models, for instance, in terms of the inclusion of specific predictor variables. In this paper, we aim to investigate a particularly important problem affecting this method in the case of categorical predictor variables with different numbers of categories and to give recommendations on how to avoid it. For this purpose, we systematically assess the behavior of automated variable selection based on the likelihood ratio test using either bootstrap samples drawn with replacement or subsamples drawn without replacement from the original dataset. Our study consists of extensive simulations and a real data example from the NHANES study. Our main result is that if automated variable selection is conducted on bootstrap samples, variables with more categories are substantially favored over variables with fewer categories and over metric variables even if none of them have any effect. Importantly, variables with no effect and many categories may be (wrongly) preferred to variables with an effect but few categories. We suggest the use of subsamples instead of bootstrap samples to bypass these drawbacks.
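The category-count bias described here can be reproduced in miniature. The sketch below replaces the paper's full backward elimination and likelihood ratio test with a single one-way ANOVA F-test per variable (an assumed simplification): on bootstrap samples drawn with replacement, a pure-noise categorical predictor is "selected" far more often than the nominal 5%, and more so the more categories it has.

```python
import numpy as np
from scipy import stats

def mean_selection_freq(n_levels, n=100, R=20, B=100, alpha=0.05, seed=0):
    """Average frequency with which a pure-noise categorical predictor
    with `n_levels` categories attains ANOVA p < alpha on bootstrap
    samples drawn with replacement, averaged over R null data sets.
    Minimal sketch of the paper's simulation design."""
    rng = np.random.default_rng(seed)
    freqs = []
    for _ in range(R):                           # R independent null data sets
        y = rng.normal(size=n)                   # outcome: pure noise
        g = rng.integers(0, n_levels, size=n)    # unrelated categorical predictor
        hits = 0
        for _ in range(B):                       # B bootstrap samples each
            idx = rng.integers(0, n, size=n)     # draw WITH replacement
            yb, gb = y[idx], g[idx]
            groups = [yb[gb == lev] for lev in np.unique(gb)]
            if stats.f_oneway(*groups).pvalue < alpha:
                hits += 1
        freqs.append(hits / B)
    return float(np.mean(freqs))
```

The duplicated observations in a bootstrap sample make the test anti-conservative, and the inflation grows with the number of categories; subsampling without replacement avoids this.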

19.
In this work, magnetic graphene double-sided mesoporous nanocomposites (mag-graphene@mSiO2) were synthesized by coating a layer of mesoporous silica on each side of magnetic graphene. The surfactant (CTAB)-mediated sol-gel coating was performed using tetraethyl orthosilicate as the silica source. The as-made magnetic graphene double-sided mesoporous silica composites were treated with high-temperature calcination to remove the hydroxyl groups on the surface. The novel double-sided materials possess a high surface area (167.8 m2/g) and large pore volume (0.2 cm3/g). The highly open pore structure presents a uniform pore size (3.2 nm) and structural stability. The hydrophobic interior pore walls could ensure an efficient adsorption of target molecules through hydrophobic interactions. At the same time, the magnetic Fe3O4 particles on both sides of the materials could simplify the process of enrichment, which plays an important role in the treatment of complex biological samples. The magnetic graphene double-sided nanocomposites were successfully applied to size-selective and specific enrichment of peptides in standard peptide mixtures, protein digest solutions, and human urine samples. Finally, the novel material was applied to selective enrichment of endogenous peptides in mouse brain tissue. The enriched endogenous peptides were then analyzed by LC-MS/MS, and 409 endogenous peptides were detected and identified. The results demonstrate that the as-made mag-graphene@mSiO2 have powerful potential for peptidome research.

20.
We introduce a nearly automatic procedure to locate and count the quantum dots in images of kinesin motor assays. Our procedure employs an approximate likelihood estimator based on a two-component mixture model for the image data; the first component has a normal distribution, and the other component is distributed as a normal random variable plus an exponential random variable. The normal component has an unknown variance, which we model as a function of the mean. We use B-splines to estimate the variance function during a training run on a suitable image, and the estimate is used to process subsequent images. Parameter estimates are generated for each image along with estimates of standard errors, and the number of dots in the image is determined using an information criterion and likelihood ratio tests. Realistic simulations show that our procedure is robust and that it leads to accurate estimates, both of parameters and of standard errors.
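The mixture in this abstract has a convenient closed form: the "normal plus exponential" component is an exponentially modified Gaussian, available in SciPy as `exponnorm` (with shape K = 1/(sigma*rate)). The sketch below fits the mixture by maximum likelihood on simulated data; it fixes a constant variance, whereas the paper models the normal variance as a B-spline function of the mean, so this is an illustration of the likelihood only.

```python
import numpy as np
from scipy import stats, optimize

def neg_log_lik(theta, x):
    """Negative log-likelihood of the two-component mixture: background
    ~ N(mu, sigma^2); dot signal ~ N(mu, sigma^2) + Exp(rate), i.e. an
    exponentially modified Gaussian. Constant-variance sketch."""
    logit_w, mu, log_sigma, log_rate = theta
    w = 1.0 / (1.0 + np.exp(-logit_w))          # mixing weight kept in (0, 1)
    sigma, rate = np.exp(log_sigma), np.exp(log_rate)
    f_bg = stats.norm.pdf(x, mu, sigma)
    f_sig = stats.exponnorm.pdf(x, 1.0 / (sigma * rate), loc=mu, scale=sigma)
    return -np.sum(np.log((1.0 - w) * f_bg + w * f_sig + 1e-300))

# simulated pixel intensities: 70% background, 30% dot signal
rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0.0, 1.0, 700),
                    rng.normal(0.0, 1.0, 300) + rng.exponential(5.0, 300)])
start = np.array([0.0, 0.0, 0.0, np.log(0.2)])
fit = optimize.minimize(neg_log_lik, start, args=(x,), method="Nelder-Mead")
w_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
```

The fitted weight and location should land near the simulated truth (weight 0.3, mu 0); in the paper, such parameter estimates feed likelihood ratio tests that determine the number of dots.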


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)