Similar Articles
20 similar articles found (search time: 31 ms)
1.
Agresti A, Min Y. Biometrics 2005;61(2):515-523
This article investigates the performance, in a frequentist sense, of Bayesian confidence intervals (CIs) for the difference of proportions, relative risk, and odds ratio in 2 × 2 contingency tables. We consider beta priors, logit-normal priors, and related correlated priors for the two binomial parameters. The goal was to analyze whether certain settings for prior parameters tend to provide good coverage performance regardless of the true association parameter values. For the relative risk and odds ratio, we recommend tail intervals over highest posterior density (HPD) intervals, for invariance reasons. To protect against potentially very poor coverage probabilities when the effect is large, it is best to use a diffuse prior, and we recommend the Jeffreys prior. Otherwise, with relatively small samples, Bayesian CIs using more informative (even uniform) priors tend to have poorer performance than the frequentist CIs based on inverting score tests, which perform uniformly quite well for these parameters.
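A minimal sketch of the equal-tail ("tail") interval recommended above, under independent Jeffreys Beta(1/2, 1/2) priors, using Monte Carlo draws from the two beta posteriors; the table counts and the function name are hypothetical, not taken from the paper:

```python
import numpy as np

def jeffreys_tail_ci(y1, n1, y2, n2, level=0.95, draws=100_000, seed=0):
    """Equal-tail posterior interval for the odds ratio of a 2x2 table
    under independent Jeffreys Beta(1/2, 1/2) priors."""
    rng = np.random.default_rng(seed)
    # Posterior of each binomial parameter is Beta(y + 1/2, n - y + 1/2).
    p1 = rng.beta(y1 + 0.5, n1 - y1 + 0.5, size=draws)
    p2 = rng.beta(y2 + 0.5, n2 - y2 + 0.5, size=draws)
    odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2))
    a = (1 - level) / 2
    return np.quantile(odds_ratio, [a, 1 - a])

# Hypothetical 2x2 table: 12/40 successes vs. 5/40.
print(jeffreys_tail_ci(12, 40, 5, 40))
```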

2.
A cause-specific cumulative incidence function (CIF) is the probability of failure from a specific cause as a function of time. In randomized trials, a difference of cause-specific CIFs (treatment minus control) represents a treatment effect. Cause-specific CIF in each intervention arm can be estimated based on the usual non-parametric Aalen–Johansen estimator which generalizes the Kaplan–Meier estimator of CIF in the presence of competing risks. Under random censoring, asymptotically valid Wald-type confidence intervals (CIs) for a difference of cause-specific CIFs at a specific time point can be constructed using one of the published variance estimators. Unfortunately, these intervals can suffer from substantial under-coverage when the outcome of interest is a rare event, as may be the case for example in the analysis of uncommon adverse events. We propose two new approximate interval estimators for a difference of cause-specific CIFs estimated in the presence of competing risks and random censoring. Theoretical analysis and simulations indicate that the new interval estimators are superior to the Wald CIs in the sense of avoiding substantial under-coverage with rare events, while being equivalent to the Wald CIs asymptotically. In the absence of censoring, one of the two proposed interval estimators reduces to the well-known Agresti–Caffo CI for a difference of two binomial parameters. The new methods can be easily implemented with any software package producing point and variance estimates for the Aalen–Johansen estimator, as illustrated in a real data example.
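For reference, the Agresti–Caffo interval that the censoring-free case reduces to is simple to state: add one success and one failure to each sample, then apply the Wald formula. A sketch with hypothetical rare-event counts:

```python
import numpy as np
from scipy.stats import norm

def agresti_caffo_ci(y1, n1, y2, n2, level=0.95):
    """Agresti-Caffo interval for p1 - p2: add one success and one
    failure to each sample, then apply the Wald formula."""
    z = norm.ppf(1 - (1 - level) / 2)
    p1 = (y1 + 1) / (n1 + 2)
    p2 = (y2 + 1) / (n2 + 2)
    d = p1 - p2
    se = np.sqrt(p1 * (1 - p1) / (n1 + 2) + p2 * (1 - p2) / (n2 + 2))
    return d - z * se, d + z * se

# Hypothetical rare-event counts: 3/200 vs. 1/200.
print(agresti_caffo_ci(3, 200, 1, 200))
```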

3.

Background

Improving feed efficiency in fish is crucial at the economic, social and environmental levels with respect to developing a more sustainable aquaculture. The potential contribution of genetic improvement to this goal has been hampered by the lack of accurate basic information on the genetic parameters of feed efficiency in fish. We used video assessment of feed intake on individual fish reared in groups to estimate the genetic parameters of six growth traits, feed intake, feed conversion ratio (FCR) and residual feed intake in 40 pedigreed families of the GIFT strain of Nile tilapia, Oreochromis niloticus. Feed intake and growth were measured on juvenile fish (22.4 g mean body weight) during 13 consecutive meals, representing 7 days of measurements. We used these data to estimate the FCR response to different selection criteria and thereby assess the potential of genetics as a means of improving FCR in tilapia.

Results

Our results demonstrate genetic control of FCR in tilapia, with a heritability estimate of 0.32 ± 0.11. Estimates of response to selection showed that FCR could be efficiently improved by selective breeding. Due to low genetic correlations, selection for growth traits would not improve FCR. However, weight loss during fasting has a high genetic correlation with FCR (0.80 ± 0.25) and a moderate heritability (0.23), and could be an easy-to-measure and efficient criterion for improving FCR by selective breeding in tilapia.

Conclusion

At this age, FCR is genetically determined in Nile tilapia. A selective breeding program is therefore feasible and could help enable the development of more sustainable aquaculture production.
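As a worked illustration of the two traits studied here: FCR is feed consumed per unit of weight gained, and residual feed intake is observed intake minus the intake predicted by regressing intake on gain. The per-fish numbers below are hypothetical:

```python
import numpy as np

# Hypothetical per-fish measurements over the test period.
feed_intake = np.array([15.2, 18.1, 14.0, 20.3, 16.5])  # g consumed
weight_gain = np.array([7.9, 8.6, 6.5, 10.4, 8.1])      # g gained

# Feed conversion ratio: feed consumed per unit of weight gained.
fcr = feed_intake / weight_gain

# Residual feed intake: observed intake minus the intake predicted
# from a linear regression of intake on weight gain.
slope, intercept = np.polyfit(weight_gain, feed_intake, 1)
rfi = feed_intake - (intercept + slope * weight_gain)
print(fcr.round(2), rfi.round(2))
```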

4.
We propose a method to construct simultaneous confidence intervals for a parameter vector by inverting a series of randomization tests. The randomization tests are facilitated by an efficient multivariate Robbins–Monro procedure that takes the correlation information of all components into account. The estimation method does not require any distributional assumption about the population other than the existence of second moments. The resulting simultaneous confidence intervals are not necessarily symmetric about the point estimate of the parameter vector but possess the property of equal tails in all dimensions. In particular, we present the construction for the mean vector of one population and for the difference between the mean vectors of two populations. Extensive simulations are conducted for numerical comparison with four other methods. We illustrate the application of the proposed method by testing bioequivalence with multiple endpoints on real data.
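A one-dimensional sketch of the idea (the paper's procedure is multivariate): invert a sign-flip randomization test for a mean using a Robbins–Monro search for the upper confidence endpoint. The starting value and step-size constant are ad hoc choices for illustration, not the authors' tuning:

```python
import numpy as np

def rm_upper_limit(x, alpha=0.05, steps=20_000, seed=0):
    """Upper confidence limit for a mean, found by inverting a sign-flip
    randomization test with a Robbins-Monro search (1-D simplification)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    u = x.mean() + 2 * x.std(ddof=1) / np.sqrt(n)  # rough starting value
    c = 10 * x.std(ddof=1)                          # ad hoc step-size constant
    for k in range(1, steps + 1):
        t_obs = x.mean() - u
        signs = rng.choice([-1.0, 1.0], size=n)
        t_rand = np.mean(signs * (x - u))           # one randomization draw
        indicator = 1.0 if t_rand <= t_obs else 0.0
        # Stochastic approximation toward the point where the
        # randomization p-value equals alpha/2 (equal-tail endpoint).
        u += (c / k) * (indicator - alpha / 2)
    return u

x = np.random.default_rng(1).normal(10.0, 2.0, size=30)
print(rm_upper_limit(x))
```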

5.
We present algorithms for time-series gene expression analysis that permit the principled estimation of unobserved time points, clustering, and dataset alignment. Each expression profile is modeled as a cubic spline (piecewise polynomial) that is estimated from the observed data and every time point influences the overall smooth expression curve. We constrain the spline coefficients of genes in the same class to have similar expression patterns, while also allowing for gene specific parameters. We show that unobserved time points can be reconstructed using our method with 10-15% less error when compared to previous best methods. Our clustering algorithm operates directly on the continuous representations of gene expression profiles, and we demonstrate that this is particularly effective when applied to nonuniformly sampled data. Our continuous alignment algorithm also avoids difficulties encountered by discrete approaches. In particular, our method allows for control of the number of degrees of freedom of the warp through the specification of parameterized functions, which helps to avoid overfitting. We demonstrate that our algorithm produces stable low-error alignments on real expression data and further show a specific application to yeast knock-out data that produces biologically meaningful results.
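A minimal single-gene sketch of the spline representation: fit a cubic smoothing spline to a nonuniformly sampled profile and read off an unobserved time point. The paper additionally shares spline coefficients across genes in a class, which this sketch omits; the data are hypothetical:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical nonuniformly sampled expression profile (log ratios).
t = np.array([0, 1, 2, 4, 7, 11, 16])  # hours
y = np.array([0.1, 0.8, 1.5, 1.9, 1.2, 0.4, 0.0])

# Cubic smoothing spline; `s` trades fidelity against smoothness.
spline = UnivariateSpline(t, y, k=3, s=0.05)

# Reconstruct an unobserved time point from the continuous curve.
print(spline(9.0))
```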

6.
Clegg LX, Gail MH, Feuer EJ. Biometrics 2002;58(3):684-688
We propose a new Poisson method to estimate the variance for prevalence estimates obtained by the counting method described by Gail et al. (1999, Biometrics 55, 1137-1144) and to construct a confidence interval for the prevalence. We evaluate both the Poisson procedure and the procedure based on the bootstrap proposed by Gail et al. in simulated samples generated by resampling real data. These studies show that both variance estimators usually perform well and yield coverages of confidence intervals at nominal levels. When the number of disease survivors is very small, however, confidence intervals based on the Poisson method have supranominal coverage, whereas those based on the procedure of Gail et al. tend to have below-nominal coverage. For these reasons, we recommend the Poisson method, which also reduces the computational burden considerably.
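The abstract does not give the interval's exact form; for illustration, here is the standard exact (Garwood) confidence interval for a Poisson-distributed count, the usual chi-square construction for Poisson-based intervals, applied to a hypothetical survivor count:

```python
from scipy.stats import chi2

def poisson_exact_ci(k, level=0.95):
    """Exact (Garwood) confidence interval for a Poisson mean given an
    observed count k, via chi-square quantiles."""
    alpha = 1 - level
    lower = 0.0 if k == 0 else 0.5 * chi2.ppf(alpha / 2, 2 * k)
    upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (k + 1))
    return lower, upper

# Hypothetical small count of disease survivors.
print(poisson_exact_ci(4))
```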

7.
This paper proposes a procedure for testing and classifying data with multiple factors. A two-way analysis of covariance is used to classify the differences among batches as well as another factor, such as package type and/or product strength. In the test procedure, slopes and intercepts of the main effects are tested using a combination of simultaneous and sequential F-tests. Based on the test results, the data are classified into one of four groups, and for each group shelf life can be calculated accordingly. We examine whether the procedure provides satisfactory control of the Type I error probability and adequate power for detecting differences in degradation rates and intercepts at different nominal levels. The method is evaluated in a Monte Carlo simulation study, and the proposed procedure is compared with the current FDA procedure using real data.
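A sketch of one building block of such a procedure: testing whether batches share a common degradation slope via a sequential F-test on nested linear models. The stability data below are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical stability data: assay (%) over months for three batches.
df = pd.DataFrame({
    "month": [0, 3, 6, 9, 12] * 3,
    "assay": [100.1, 99.4, 98.8, 98.1, 97.5,
              100.3, 99.8, 99.1, 98.6, 98.0,
              99.9, 99.2, 98.3, 97.4, 96.8],
    "batch": ["A"] * 5 + ["B"] * 5 + ["C"] * 5,
})

# Sequential F-test: do batches share a common degradation slope?
full = smf.ols("assay ~ month * batch", data=df).fit()     # separate slopes
reduced = smf.ols("assay ~ month + batch", data=df).fit()  # common slope
print(anova_lm(reduced, full))  # a significant F suggests slopes differ
```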

8.
Cryostat microtome sections of birch wood degraded by white-rot fungi were examined by light microscopy after treatment with two stains: astra blue, which stains cellulose blue only in the absence of lignin, and safranin, which stains lignin regardless of whether cellulose is present. The method provides a simple and reliable screening procedure that distinguishes between fungi that cause decay by selectively removing lignin and those that degrade both cellulose and lignin simultaneously. Moreover, morphological characteristics specific to selective delignification were revealed.

9.
Qiu J, Hwang JT. Biometrics 2007;63(3):767-776
Simultaneous inference for a large number, N, of parameters is a challenge. In some situations, such as microarray experiments, researchers are only interested in making inference for the K parameters corresponding to the K most extreme estimates. Hence it seems important to construct simultaneous confidence intervals for these K parameters. The naïve simultaneous confidence intervals for the K means (applied directly without taking the selection into account) have low coverage probabilities. We take an empirical Bayes approach (or an approach based on the random effect model) to construct simultaneous confidence intervals with good coverage probabilities. For N = 10,000 and K = 100, typical for microarray data, our confidence intervals could be 77% shorter than the naïve K-dimensional simultaneous intervals.
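A small simulation illustrating why the naïve intervals fail after selection (this is not the authors' empirical Bayes construction): select the K largest of N estimates and check the per-parameter coverage of ordinary 95% intervals on the selected set:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N, K, level, reps = 10_000, 100, 0.95, 200
z = norm.ppf(1 - (1 - level) / 2)
covered = 0.0

for _ in range(reps):
    theta = rng.normal(0, 1, size=N)        # true means
    est = theta + rng.normal(0, 1, size=N)  # estimates with unit SE
    top = np.argsort(est)[-K:]              # select the K largest estimates
    hits = np.abs(est[top] - theta[top]) <= z
    covered += hits.mean()

# Coverage of naive per-parameter 95% intervals on the selected set:
print(covered / reps)  # typically well below 0.95 due to selection bias
```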

10.
We develop an inference method that uses approximate Bayesian computation (ABC) to simultaneously estimate mutational parameters and selective constraint on the basis of nucleotide divergence for protein-coding genes between pairs of species. Our simulations explicitly model CpG hypermutability and transition vs. transversion mutational biases along with negative and positive selection operating on synonymous and nonsynonymous sites. We evaluate the method by simulations in which true mean parameter values are known and show that it produces reasonably unbiased parameter estimates as long as sequences are not too short and sequence divergence is not too low. We show that the use of quadratic regression within ABC offers an improvement over linear regression, but that weighted regression has little impact on the efficiency of the procedure. We apply the method to estimate mutational and selective constraint parameters in data sets of protein-coding genes extracted from the genome sequences of primates, murids, and carnivores. Estimates of CpG hypermutability are substantially higher in primates than murids and carnivores. Nonsynonymous site selective constraint is substantially higher in murids and carnivores than primates, and autosomal nonsynonymous constraint is higher than X-chromosome constraint in all taxa. We detect significant selective constraint at synonymous sites in primates, carnivores, and murid rodents. Synonymous site selective constraint is weakest in murids, a surprising result, considering that murid effective population sizes are likely to be considerably higher than those of the other two taxa.
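A generic sketch of rejection ABC with a local-linear regression adjustment in the style of Beaumont et al.; the paper uses quadratic and weighted variants, which this simplification omits. The toy model (a Poisson rate with a uniform prior) is ours, chosen only to make the sketch self-contained:

```python
import numpy as np

def abc_regression(observed_stat, prior_draws, simulate, accept_frac=0.01, seed=0):
    """Rejection ABC with linear regression adjustment: keep the parameter
    draws whose simulated summary statistic is closest to the observed one,
    then adjust the kept draws toward the observed value."""
    rng = np.random.default_rng(seed)
    stats = np.array([simulate(p, rng) for p in prior_draws])
    dist = np.abs(stats - observed_stat)
    keep = dist.argsort()[: int(accept_frac * len(prior_draws))]
    theta, s = prior_draws[keep], stats[keep]
    # Local-linear adjustment: theta* = theta - b * (s - observed_stat)
    b = np.polyfit(s, theta, 1)[0]
    return theta - b * (s - observed_stat)

# Toy example: infer a Poisson rate from the sample mean of 50 counts.
rng = np.random.default_rng(1)
prior = rng.uniform(0.0, 10.0, size=100_000)
sim = lambda lam, r: r.poisson(lam, size=50).mean()
posterior = abc_regression(3.1, prior, sim)
print(posterior.mean(), np.quantile(posterior, [0.025, 0.975]))
```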

11.
If the variables in a MANOVA problem can be arranged in order of their importance, then J. Roy's (1958) step-down procedure may be more appropriate than the conventional invariant inference techniques. However, it may often be possible only to identify subsets such that variables within subsets are equally important while the subsets themselves are of unequal importance. In experimental situations, it is common to have a set of variables of primary interest and another of "add-on" variables. The step-down reasoning is extended to such cases, and a set of simultaneous confidence bounds, based upon the procedure that uses the largest-root criterion at each stage, is derived. The confidence bounds are on all linear functions of the means that do not involve nuisance parameters, and are therefore suitable for studying the configuration of means. This method yields shorter intervals for contrasts among the means of the variables of primary interest compared with the conventional intervals based upon the largest root. The method is illustrated using Barnard's (1935) data on skull characters.

12.
In this article, we construct an approximate EM algorithm to estimate the parameters of a nonlinear mixed-effects model. The iterative procedure can be viewed as an iterative method-of-moments procedure for estimating the variance components combined with iteratively reweighted least squares for estimating the fixed effects. It is therefore valid without normality assumptions on the random components. Computationally simple method-of-moments estimates of the model parameters are used as starting values for the iterative procedure. A simulation study was conducted to compare the performance of the proposed procedure with that of the procedure proposed by Lindstrom and Bates (1990) for some normal and nonnormal models.

13.
14.
Renwick A, Davison L, Spratt H, King JP, Kimmel M. Genetics 2001;159(2):737-747
We examine length distributions of approximately 6000 human dinucleotide microsatellite loci, representing chromosomes 1-22, from the GDB database. Under the stepwise mutation model, results from theory and simulation are compared with the empirical data. In both constant and expanding population scenarios, a simple single-step model with parameters chosen to account for the observed variance of microsatellite lengths produces results inconsistent with the observed heterozygosity and the dispersion of length skewness. Complicating the model by allowing a variable mutation rate accounts for the homozygosity, and introducing a small probability of a large mutation step accounts for the dispersion in skewnesses. We discuss these results in light of the long-term evolution of microsatellites.
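A compact forward Wright–Fisher sketch of the single-step stepwise mutation model for one locus, reporting length variance and heterozygosity; the population size, mutation rate, and starting length are illustrative, not taken from the paper:

```python
import numpy as np

def simulate_smm(pop_size=1000, generations=5000, mu=1e-3, seed=0):
    """Forward Wright-Fisher simulation of one microsatellite locus under
    the single-step stepwise mutation model."""
    rng = np.random.default_rng(seed)
    lengths = np.full(2 * pop_size, 20)  # repeat counts, diploid population
    for _ in range(generations):
        lengths = rng.choice(lengths, size=2 * pop_size)  # genetic drift
        mutants = rng.random(2 * pop_size) < mu
        lengths[mutants] += rng.choice([-1, 1], size=mutants.sum())
    _, counts = np.unique(lengths, return_counts=True)
    freqs = counts / counts.sum()
    heterozygosity = 1.0 - np.sum(freqs ** 2)
    return lengths.var(), heterozygosity

print(simulate_smm())
```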

15.
Giant axonal neuropathy skin fibroblasts, which are characterized by a selective and partial disorganization of vimentin filaments [1] exhibited, when compared with normal skin fibroblasts, less fibrin clot retractile (FCR) activity and spreading within the fibrin clot both during active growth and resting stage. Skin fibroblasts derived from patients affected with adenomatosis of the colon and rectum, which display a disorganized actin network [2], exhibited reduced FCR activity and spreading within the fibrin clot only during resting stage. FCR inhibition was also obtained by treating the cells with colcemid, cytochalasin B (CB) and dihydrocytochalasin B. The data suggest that FCR activity is under the control of different cytoskeletal structures. For the first time, a direct involvement of intermediate-sized filaments could be demonstrated in the interaction between fibroblasts and an organic substratum.

16.
We have developed a method using a bolus of [²H₅]glycerol to determine parameters of VLDL-triglyceride (VLDL-TG) turnover, and have compared the data to those obtained using a simultaneous bolus of [2-³H]glycerol in six young normolipidemic men. No measurable enrichment was found after 12 h for [²H₅]glycerol; therefore, we could only perform a monoexponential analysis of its data. No differences in fractional catabolic rate (FCR) were seen when comparing the multicompartmental modeling of the [2-³H]glycerol data (modeled over 48 h) with either the monoexponential analysis of the [2-³H]glycerol data or that of the [²H₅]glycerol data. The two monoexponential approaches were highly correlated (r = 0.96 for FCR); however, FCR was 18% higher with the [²H₅]glycerol than with the [2-³H]glycerol data (P < 0.003). In all six subjects, a 10-h infusion of [1-¹³C]acetate was started at the same time as the glycerol boluses were given. In two men we were able to reliably detect VLDL-TG fatty acid enrichment. The measurement of FCR in these two subjects using the mass isotopomer distribution analysis (MIDA) approach was in good agreement (within 10%) with the FCRs determined with the labeled-glycerol methods. In conclusion, our results show that the results obtained with the [²H₅]glycerol bolus were highly correlated with those obtained with the [2-³H]glycerol, but the FCRs were slightly higher with the former. We have also demonstrated that FCRs determined from monoexponential modeling were in good agreement with those determined from the multicompartmental modeling of the TG-glycerol data.
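A sketch of the monoexponential analysis described here: under a decay E(t) = E₀·exp(−FCR·t), the fractional catabolic rate is the negative slope of log enrichment against time. The enrichment values below are hypothetical:

```python
import numpy as np

# Hypothetical VLDL-TG glycerol enrichment (%) after a tracer bolus.
hours = np.array([1, 2, 3, 4, 6, 8, 10, 12])
enrichment = np.array([4.8, 3.6, 2.8, 2.2, 1.3, 0.8, 0.5, 0.3])

# Monoexponential decay: E(t) = E0 * exp(-FCR * t), so the fractional
# catabolic rate is the negative slope of log enrichment vs. time.
slope, _ = np.polyfit(hours, np.log(enrichment), 1)
fcr_per_hour = -slope
print(f"FCR = {fcr_per_hour:.3f} pools/h")
```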

17.
Empirical confidence intervals (CIs) for the estimated quantitative trait locus (QTL) location from selective and non-selective non-parametric bootstrap resampling methods were compared for a genome scan involving an Angus × Brahman reciprocal fullsib backcross population. Genetic maps, based on 357 microsatellite markers, were constructed for 29 chromosomes using CRI-MAP V2.4. Twelve growth, carcass composition and beef quality traits (n = 527-602) were analysed to detect QTLs utilizing (composite) interval mapping approaches. CIs were investigated for 28 likelihood ratio test statistic (LRT) profiles for the one-QTL-per-chromosome model. The CIs from the non-selective bootstrap method were largest (87.7 cM on average, or 79.2% coverage of the test chromosomes). The Selective II procedure produced the smallest CI size (42.3 cM on average). However, CI sizes from the Selective II procedure were more variable than those produced by the two-LOD-drop method. CI ranges from the Selective II procedure were also asymmetrical (relative to the most likely QTL position) due to the bias caused by the tendency for the estimated QTL position to be at a marker position in the bootstrap samples and due to monotonicity and asymmetry of the LRT curve in the original sample.

18.
Dynamic models of metabolism are instrumental for gaining insight and predicting possible outcomes of perturbations. Current approaches start from the selection of lumped enzyme kinetics and determine the parameters within a large parametric space. However, kinetic parameters are often unknown and obtaining these parameters requires detailed characterization of enzyme kinetics. In many cases, only steady-state fluxes are measured or estimated, but these data have not been utilized to construct dynamic models. Here, we extend the previously developed Ensemble Modeling methodology by allowing various kinetic rate expressions and employing a more efficient solution method for steady states. We show that anchoring the dynamic models to the same flux reduces the allowable parameter space significantly such that sampling of high dimensional kinetic parameters becomes meaningful. The methodology enables examination of the properties of the model's structure, including multiple steady states. Screening of models based on limited steady-state fluxes or metabolite profiles reduces the parameter space further and the remaining models become increasingly predictive. We use both succinate overproduction and central carbon metabolism in Escherichia coli as examples to demonstrate these results.

19.
Krafty RT, Gimotty PA, Holtz D, Coukos G, Guo W. Biometrics 2008;64(4):1023-1031
In this article we develop a nonparametric estimation procedure for the varying coefficient model when the within-subject covariance is unknown. Extending the idea of iteratively reweighted least squares to the functional setting, we iterate between estimating the coefficients conditional on the covariance and estimating the functional covariance conditional on the coefficients. Smoothing splines for correlated errors are used to estimate the functional coefficients, with smoothing parameters selected via generalized maximum likelihood. The covariance is estimated nonparametrically using a penalized estimator with smoothing parameters chosen via a Kullback-Leibler criterion. Empirical properties of the proposed method are demonstrated in simulations, and the method is applied to data collected from an ovarian tumor study in mice to analyze the effects of different chemotherapy treatments on the volumes of two classes of tumors.

20.
Kokoska SM. Biometrics 1987;43(3):525-534
This paper is concerned with the analysis of certain cancer chemoprevention experiments that involve Type I censoring. In experiments of this nature, two common response variables are the number of induced cancers and the rate at which they develop. In this study we assume that the number of induced tumors and their times to detection are described by the Poisson and gamma distributions, respectively. Using the method of maximum likelihood, we discuss a procedure for estimating the parameters characterizing these two distributions. We apply standard techniques in order to construct a confidence region and conduct a hypothesis test concerning the parameters of interest. We discuss a method for comparing the effects of two different treatments using the likelihood ratio principle. A technique for isolating group differences in terms of the mean number of promoted tumors and the mean time to detection is described. Using the techniques developed in this paper, we reanalyze an existing data set in the cancer chemoprevention literature and obtain contrasting results.
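A simplified sketch of the distributional assumptions, ignoring the Type I censoring that the paper's likelihood handles: maximum-likelihood fits of a Poisson mean for tumor counts and a gamma distribution for detection times, on hypothetical data:

```python
import numpy as np
from scipy.stats import gamma

# Hypothetical data: tumor counts per animal and times to detection (weeks).
tumor_counts = np.array([2, 0, 3, 1, 4, 2, 1, 0, 2, 3])
detection_times = np.array([8.1, 6.4, 9.7, 7.2, 11.3, 8.8, 10.1, 7.9])

# The MLE of the Poisson mean number of induced tumors is the sample mean.
lambda_hat = tumor_counts.mean()

# Gamma MLE for detection times (location fixed at zero).
shape_hat, _, scale_hat = gamma.fit(detection_times, floc=0)

# shape * scale is the fitted mean time to detection.
print(lambda_hat, shape_hat, scale_hat, shape_hat * scale_hat)
```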
