Similar Articles
 Found 20 similar articles (search time: 31 ms)
1.
There has been growing interest, when comparing an experimental treatment with an active control with respect to a binary outcome, in allowing the non-inferiority margin to depend on the unknown success rate in the control group. It does not seem universally recognized, however, that the statistical test should appropriately adjust for the uncertainty surrounding the non-inferiority margin. In this paper, we inspect a naive procedure that treats an "observed margin" as if it were fixed a priori, and explain why it might not be valid. We then derive a class of tests based on the delta method, including the Wald test and the score test, for a smooth margin. An alternative derivation is given for the asymptotic distribution of the likelihood ratio statistic, again for a smooth margin. We discuss the asymptotic behavior of these tests when applied to a piecewise smooth margin. A simple condition on the margin function is given which allows the likelihood ratio test to carry over to a piecewise smooth margin using the same critical value as for a smooth margin. Simulation experiments are conducted, under a smooth margin and a piecewise linear margin, to evaluate the finite-sample performance of the asymptotic tests studied.
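As an illustration of the delta-method construction described above, a minimal sketch of the Wald test for a margin that depends on the control rate. All counts and the 10%-of-control margin are hypothetical, and `wald_noninferiority`, `margin`, and `margin_deriv` are names invented here, not the paper's notation:

```python
import math

def wald_noninferiority(x_t, n_t, x_c, n_c, margin, margin_deriv):
    """One-sided Wald test of H0: p_t <= p_c - margin(p_c).
    The delta method inflates the control-arm variance by
    (1 - margin'(p_c))^2 to account for the uncertainty in the
    estimated (rather than fixed) margin."""
    p_t, p_c = x_t / n_t, x_c / n_c
    est = p_t - p_c + margin(p_c)
    var = (p_t * (1 - p_t) / n_t
           + (1 - margin_deriv(p_c)) ** 2 * p_c * (1 - p_c) / n_c)
    z = est / math.sqrt(var)
    return z, 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail normal p-value

# hypothetical data with a smooth margin delta(p) = 0.1 * p
z, p = wald_noninferiority(78, 100, 82, 100,
                           margin=lambda p: 0.1 * p,
                           margin_deriv=lambda p: 0.1)
```

A naive procedure would drop the `(1 - margin_deriv(p_c)) ** 2` factor, treating the observed margin as fixed.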

2.
For a non-inferiority trial without a placebo arm, the direct comparison between the test treatment and the selected positive control is in principle the only basis for statistical inference. Evaluating the test treatment relative to the non-existent placebo therefore presents extreme challenges and requires some kind of bridging from the past to the present without current placebo data. For such inference, based partly on an indirect bridging argument, the fixed margin method and the synthesis method are the two most widely discussed approaches in the recent literature. There are major differences between the two methods in their statistical inference paradigms. The fixed margin method employs the historical data that assess the performance of the active control versus a placebo to guide the selection of the non-inferiority margin; this guidance is not part of the ultimate statistical inference in the non-inferiority trial. In contrast, the synthesis method connects the historical data to the non-inferiority trial data to make broader inferences relating the test treatment to the non-existent current placebo. On the other hand, the type I error rate associated with the direct comparison between the test treatment and the active control cannot shed any light on the appropriateness of the indirect inference comparing the test treatment with the non-existent placebo. This work explores an approach for assessing the impact of potential bias due to violation of a key statistical assumption, to guide determination of the non-inferiority margin.

3.
Without a placebo arm, any non-inferiority inference involving assessment of the placebo effect under the active control trial setting is difficult. The statistical risk of falsely concluding non-inferiority cannot be evaluated unless the constancy assumption approximately holds: that the effect of the active control observed under the historical trial setting, where the control effect can be assessed, carries over to the non-inferiority trial setting. The constancy assumption cannot be checked, because the placebo arm is missing from the non-inferiority trial. Depending on how serious the violation of the assumption is thought to be, one may need to seek an alternative design strategy that includes a cushion for a very conservative non-inferiority analysis, or that shows superiority of the experimental treatment over the control. Determination of the non-inferiority margin depends on the objective the non-inferiority analysis is intended to achieve. The margin can be fixed or functionally defined. Between-trial differences always exist and need to be properly considered.

4.
Further evidence defining an active role for the chloride cell in teleost osmoregulation is presented in this study. Gill filaments from salt-water-adapted Fundulus heteroclitus were fixed in a solution of silver acetate-osmium tetroxide in an attempt to localize chloride at the level of the light and electron microscopes. Characteristically, a reaction product in the form of dense granules appeared concentrated and localized near the margin of the chloride cell apical cavity. Selected-area electron diffraction patterns obtained from the localized accumulations of reaction product match diffraction patterns from control (known) silver chloride preparations. It should be emphasized that since the reaction product is not concentrated in other regions of the branchial epithelium, these observations strongly support an electrolyte-regulating function for the chloride cell.
Supported by a Training Grant (2 G-707) to K. R. Porter from the United States Public Health Service.

5.
D. B. Fogel, G. B. Fogel, K. Ohkura. Bio Systems 2001, 61(2-3):155-162
Self-adaptation is a common method for learning online control parameters in an evolutionary algorithm. In one common implementation, each individual in the population is represented as a pair of vectors (x, sigma), where x is the candidate solution to an optimization problem scored in terms of f(x), and sigma is the so-called strategy parameter vector that influences how offspring will be created from the individual. Experimental evidence suggests that the elements of sigma can sometimes become too small to explore the given response surface adequately. The evolutionary search then stagnates, until the elements of sigma grow sufficiently large as a result of random variation. A potential solution to this deficiency associates multiple strategy parameter vectors with a single individual. A single strategy vector is active at any time and dictates how offspring will be generated. Experiments are conducted on four 10-dimensional benchmark functions where the number of strategy parameter vectors is varied over 1, 2, 3, 4, 5, 10, and 20. The results indicate advantages for using multiple strategy parameter vectors. Furthermore, the relationship between the mean best result after a fixed number of generations and the number of strategy parameter vectors can be determined reliably in each case.
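A minimal sketch of self-adaptation with multiple strategy parameter vectors, on the 10-dimensional sphere function. The (mu + lambda) selection, log-normal step-size update, and population settings are illustrative assumptions of this sketch, not the authors' exact configuration:

```python
import math
import random

def sphere(x):
    """Sphere benchmark: f(x) = sum of squares, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def evolve(k_sigmas, dim=10, pop=20, gens=200, seed=0):
    """Each individual is (x, list of k_sigmas strategy vectors); one vector,
    chosen at random, is active for each mutation and then overwrites a
    random slot, so stalled step sizes can be escaped."""
    rng = random.Random(seed)
    tau = 1.0 / math.sqrt(2.0 * dim)  # log-normal self-adaptation rate
    popn = [([rng.uniform(-5, 5) for _ in range(dim)],
             [[1.0] * dim for _ in range(k_sigmas)]) for _ in range(pop)]
    for _ in range(gens):
        offspring = []
        for x, sigmas in popn:
            active = rng.choice(sigmas)
            s = [si * math.exp(tau * rng.gauss(0, 1)) for si in active]
            child = [xi + si * rng.gauss(0, 1) for xi, si in zip(x, s)]
            new_sigmas = [list(v) for v in sigmas]
            new_sigmas[rng.randrange(k_sigmas)] = s
            offspring.append((child, new_sigmas))
        # (mu + lambda)-style truncation selection on sphere fitness
        popn = sorted(popn + offspring, key=lambda ind: sphere(ind[0]))[:pop]
    return sphere(popn[0][0])

best = evolve(k_sigmas=3)
```

Varying `k_sigmas` over 1, 2, 3, ... mirrors the paper's experimental design, though the benchmark outcome here is only illustrative.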

6.
Lipid droplets are the major organelle for intracellular storage of triglycerides and cholesterol esters. Various methods have been attempted for automated quantitation of fluorescently stained lipid droplets using either thresholding or watershed methods. We find that thresholding methods deal poorly with clusters of lipid droplets, whereas watershed methods require a smoothing step that must be optimized to remove image noise. We describe here a novel three-stage hybrid method for automated segmentation and quantitation of lipid droplets. In this method, objects are initially identified by thresholding. They are then tested for circularity to distinguish single lipid droplets from clusters. Clusters are subjected to a secondary watershed segmentation. We provide a characterization of this method in simulated images. Additionally, we apply this method to images of fixed cells containing stained lipid droplets and GFP-tagged proteins to provide a proof-of-principle that this method can be used for colocalization studies. The circularity measure can additionally prove useful for the identification of inappropriate segmentation in an automated way; for example, of non-cellular material. We will make the programs and source code available to the community under the GNU General Public License. We believe this technique will be of interest to cell biologists for light microscopic studies of lipid droplet biology.
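The threshold-then-triage stage of the hybrid scheme can be sketched as follows. The `roundness` proxy (object area over the area of the circle reaching its farthest pixel from the centroid) and the 0.8 cutoff are illustrative stand-ins for the paper's circularity measure, and the `label` routine is a small substitute for a library connected-components call:

```python
import numpy as np
from collections import deque

def label(mask):
    """4-connected component labelling (tiny stand-in for a library call)."""
    lab = np.zeros(mask.shape, dtype=int)
    n = 0
    for y, x in zip(*np.nonzero(mask)):
        if lab[y, x]:
            continue
        n += 1
        lab[y, x] = n
        queue = deque([(y, x)])
        while queue:
            cy, cx = queue.popleft()
            for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not lab[ny, nx]):
                    lab[ny, nx] = n
                    queue.append((ny, nx))
    return lab, n

def roundness(mask):
    """Area over the area of the circle reaching the farthest pixel from the
    centroid: ~1 for a single disc, markedly lower for elongated clusters."""
    ys, xs = np.nonzero(mask)
    r_max = np.sqrt((ys - ys.mean()) ** 2 + (xs - xs.mean()) ** 2).max()
    return mask.sum() / (np.pi * max(r_max, 0.5) ** 2)

def classify_objects(image, thresh, cutoff=0.8):
    """Stages 1-2 of the hybrid method: threshold, then triage each object;
    objects below the cutoff would go on to watershed splitting."""
    lab, n = label(image > thresh)
    return ['single' if roundness(lab == i) >= cutoff else 'cluster'
            for i in range(1, n + 1)]

# synthetic test image: one droplet and one two-droplet cluster
yy, xx = np.mgrid[0:60, 0:120]
img = (((yy - 30) ** 2 + (xx - 30) ** 2 <= 144) |
       ((yy - 30) ** 2 + (xx - 80) ** 2 <= 144) |
       ((yy - 30) ** 2 + (xx - 100) ** 2 <= 144)).astype(float)
kinds = classify_objects(img, 0.5)
```

In the full pipeline, only the objects flagged `'cluster'` would incur the cost of the secondary watershed step.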

7.
Lu CH, Huang SW, Lai YL, Lin CP, Shih CH, Huang CC, Hsu WL, Hwang JK. Proteins 2008, 72(2):625-634
Recently, we have developed a method (Shih et al., Proteins: Structure, Function, and Bioinformatics 2007;68:34-38) to compute correlations of fluctuations in proteins. This method, referred to as the protein fixed-point (PFP) model, is based on the positional vectors of atoms issuing from the fixed point, which is the point of least fluctuation in the protein. One corollary of this model is that atoms lying on the same shell centered at the fixed point will have the same thermal fluctuations. In practice, this model provides a convenient way to compute the average dynamical properties of proteins directly from their geometrical shapes, without the need for any mechanical model, and hence no trajectory integration or sophisticated matrix operations are needed. As a result, it is more efficient than molecular dynamics simulation or normal mode analysis. Though in the previous study the PFP model was successfully applied to a number of proteins of various folds, it was not clear to what extent the model can be applied. In this article, we have carried out a comprehensive analysis of the PFP model for a dataset comprising 972 high-resolution X-ray structures with pairwise sequence identity ≤ 25%, for most of which the predicted fluctuations correlate with the crystallographic B-factors with correlation coefficients ≥ 0.5. Our results show that the fixed-point model is indeed quite general and will be a useful tool for high-throughput analysis of the dynamical properties of proteins.
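A sketch of the shell idea: fluctuations are predicted from each atom's distance to the fixed point alone, with no mechanical model. Using the centroid as the fixed point is a simplifying assumption of this sketch (the actual PFP fixed point is the least-fluctuation point), and the "observed" fluctuations are synthetic:

```python
import numpy as np

def pfp_distances(coords, fixed_point=None):
    """Distance of each atom from the fixed point. Under the PFP model,
    atoms on the same shell share the same fluctuation, so any monotone
    function of this distance serves as a relative B-factor prediction.
    The centroid default is a stand-in for the true fixed point."""
    coords = np.asarray(coords, dtype=float)
    fp = coords.mean(axis=0) if fixed_point is None else np.asarray(fixed_point)
    return np.linalg.norm(coords - fp, axis=1)

# synthetic "structure" whose fluctuations grow with distance from the center
rng = np.random.default_rng(0)
coords = rng.normal(size=(200, 3))
pred = pfp_distances(coords)
obs = 0.5 + pred ** 2            # invented observed fluctuations
corr = float(np.corrcoef(pred, obs)[0, 1])
```

No trajectory integration or matrix diagonalization is needed, which is the source of the model's efficiency claim.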

8.
Computational biomechanics for human body modeling has generally been divided into two separate domains: finite element analysis and multibody dynamics. Combining the advantages of both domains is necessary when tissue stress and physical body motion are both of interest; however, methods for doing so are still being explored. The aim of this study is to implement a unique controlling strategy in a finite element model for simultaneously simulating musculoskeletal body dynamics and in vivo stress inside human tissues. A finite element lower limb model with 3D active muscles was selected for the implementation of the controlling strategy and was validated against in vivo human motion experiments. A feedback control strategy that couples a basic Proportion-Integration-Differentiation (PID) controller with generic activation signals, taken either from the Computed Muscle Control (CMC) method of the musculoskeletal model or from normalized EMG signals, was proposed and applied in the present model. The results show that the proposed controlling strategy correlates well with experimental data for normal gait in terms of joint kinematics, while the stress distribution in local lower limb tissue can also be tracked in real time alongside lower limb motion. In summary, the present work is a first step toward applying an active controlling strategy in a finite element model for concurrent simulation of both body dynamics and tissue stress. In the future, the method can be further developed for various fields of human biomechanical analysis, monitoring local stress and strain distributions while simultaneously simulating human locomotion.

9.
We test the success of Principal Components, Factor and Regression Analysis at recovering environmental signals using numerical experiments in which we control species environmental responses, the environmental conditions and the sampling scheme used for calibration. We use two general conditions: one in which sampling of a continental margin for benthic foraminiferal assemblages is done in a standard grid and the driving environmental variables are correlated with one another, and another in which sampling is done so that the environmental variables are uncorrelated. The first condition mimics many studies in the literature. We find that where the controlling environmental variables are correlated, Principal Components/Factor Analysis yields factors that reflect the common variance (correlation) of those variables. Since this common variance is largely a product of the sampling scheme, the factors extracted do not reliably represent true species ecological behavior. This behavior cannot be accurately diagnosed, and faulty interpretations may lead to substantial error when factor coefficients are used to reconstruct conditions in the past. When the sampling scheme is constructed so that the controlling environmental variables of the calibration data set are uncorrelated, the factor patterns reflect these variables more accurately. Species responses can then be more successfully interpreted from the Principal Components/Factor Analysis structure matrices. Additionally, regression analysis can successfully extract the independent environmental signals from the biotic data set. However, matrix closure is a confounding effect in all our numerical results, as it distorts species abundances and spatial distributions in the calibration data set.
Our results show clearly that knowledge of the controlling environmental variables, and of the correlations among these variables over a study area, is essential for the successful application of multivariate techniques to paleoenvironmental reconstruction.
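The core finding, that correlated environmental drivers collapse onto a single principal component while uncorrelated drivers split into separate ones, can be reproduced in a toy simulation. The species responses, sample size, and noise level below are invented for illustration:

```python
import numpy as np

def pc1_share(env_corr, rng, n=200):
    """Two environmental drivers with correlation env_corr; four 'species',
    each responding linearly to one driver (plus noise). Returns the share
    of variance on the first principal component of the species matrix."""
    cov = [[1.0, env_corr], [env_corr, 1.0]]
    env = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    # two species track driver 1, two track driver 2
    resp = np.array([[1.0, 0.0], [0.8, 0.0], [0.0, 1.0], [0.0, 0.9]])
    species = env @ resp.T + 0.1 * rng.standard_normal((n, 4))
    centered = species - species.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)   # PCA via SVD
    return s[0] ** 2 / (s ** 2).sum()

rng = np.random.default_rng(1)
share_corr = pc1_share(0.9, rng)    # grid-like sampling: drivers correlated
share_uncorr = pc1_share(0.0, rng)  # designed sampling: drivers uncorrelated
```

With correlated drivers, PC1 absorbs nearly all the variance and mixes both drivers; with uncorrelated drivers, the variance splits across two components that map onto the two drivers.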

10.
Heparin is a sulfated glycosaminoglycan (GAG), which contains N-acetylated or N-sulfated glucosamine (GlcN). Heparin, which is generally obtained from healthy porcine intestines, is widely used as an anticoagulant during dialysis and in treatments of thrombosis such as disseminated intravascular coagulation. Dermatan sulfate (DS) and chondroitin sulfate (CS), which are galactosamine (GalN)-containing GAGs, are major process-related impurities of heparin products. The varying DS and CS content among heparin products can be responsible for their differing anticoagulant activities. Therefore, a test to determine the concentrations of GalN-containing GAGs is essential to ensure the quality and safety of heparin products. In this study, we developed a method for determining the relative content of GalN from GalN-containing GAGs in heparin active pharmaceutical ingredients (APIs). The method validation and a collaborative study with heparin manufacturers and suppliers showed that our method has sufficient specificity, sensitivity, linearity, repeatability, reproducibility, and recovery to serve as the limiting test for GalN from GalN-containing GAGs. We believe that our method will be useful for ensuring the quality, efficacy, and safety of pharmaceutical heparins. On July 30, 2010, the GalN limiting test based on our method was adopted in the heparin sodium monograph of the Japanese Pharmacopoeia.

11.
Ma Y, Guo J, Shi NZ, Tang ML. Biometrics 2002, 58(4):917-927
In this article, a new non-model-based significance test for detecting a dose-response relationship, incorporating historical control data, is proposed. This non-model-based test is considered simpler from a regulatory perspective because it does not require validating any modeling assumptions. Moreover, our test is especially appropriate for studies in which the doses of the investigational chemical are labeled only as, e.g., low, medium, and high, or in which the dose labels do not suggest any obvious choice of dose scores. The test can also be easily adapted for detecting a general dose-response shape, such as an umbrella pattern. Simple adjustments are proposed for better control of the actual Type I error. Data sets from two carcinogenesis studies are used to illustrate our method. We also evaluate the performance of the proposed test and the well-known model-based Tarone trend test with respect to size and power.
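For contrast with the proposed non-model-based test, here is a sketch of the score-based style of trend test the article compares against, in Cochran-Armitage form. The dose scores and tumour counts are hypothetical; note that choosing the `scores` vector is exactly the step the non-model-based test avoids:

```python
import math

def cochran_armitage(events, totals, scores):
    """Score-based trend test: Z > 0 indicates response increasing with
    dose. Requires numeric dose scores, which is the assumption the
    article's non-model-based proposal dispenses with."""
    n_total = sum(totals)
    p_bar = sum(events) / n_total
    num = sum(d * (x - n * p_bar) for d, x, n in zip(scores, events, totals))
    var = p_bar * (1 - p_bar) * (
        sum(n * d * d for d, n in zip(scores, totals))
        - sum(n * d for d, n in zip(scores, totals)) ** 2 / n_total)
    z = num / math.sqrt(var)
    return z, 0.5 * math.erfc(z / math.sqrt(2))  # one-sided p-value

# hypothetical tumour counts in four dose groups scored 0..3
z, p = cochran_armitage([2, 5, 9, 14], [50, 50, 50, 50], [0, 1, 2, 3])
```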

12.
Nestedness analysis is a popular tool for inferring spatial species distributions, and therefore has management and conservation relevance. Ecologists frequently compute nestedness and subsequently use Spearman rank correlations to infer relationships between the observed nested ranks of sites and biogeographic and environmental variables. Using temporary pond microcrustaceans hatched from microcosms as a case study, this paper shows that the application of this method can be problematic. While the overall degree and significance of nestedness were statistically robust, the results obtained from randomly generated matrices in which the community structure of the original microcrustacean incidence matrix was maintained (fixed rows, fixed columns constraints) showed that rank correlations of observed nested patterns can be vulnerable to Type I error (detecting an effect when there is none). Using expected nestedness patterns derived from rarefied original matrices to control for sample size effects did not change this result. This problem may have arisen from a quantitative bias related to the disproportionate impact of the rank positions of individual ponds in the analysis. Future extensive simulation studies, involving different community structures, should help identify the general reliability of rank correlation results in nestedness analyses.

13.
In 1-year experiments, the final population density of nematodes is usually modeled as a function of initial density. Often, estimation of the parameters is precarious because nematode measurements, although laborious and expensive, are imprecise, and the range of initial densities may be small. The estimation procedure can be improved by using orthogonal regression with a parameter for initial density on each experimental unit. In multi-year experiments, parameters of a dynamic model can be estimated with optimization techniques like simulated annealing or with Bayesian methods such as Markov chain Monte Carlo (MCMC). With these algorithms, information from different experiments can be combined. In multi-year dynamic models, the stability of the steady states is an important issue: under chaotic dynamics, prediction of densities and associated economic loss would be possible only on a short timescale. In this study, a generic model was developed that describes population dynamics in crop rotations. Mathematical analysis showed that stable steady states do exist for this dynamic model. Using the Metropolis algorithm, the model was fitted to data from a multi-year experiment on Pratylenchus penetrans dynamics with treatments that varied between years. For three crops, parameters for a yield loss assessment model were available, and the gross margins of the six possible rotations comprising these three crops and a fallow year were compared at the steady state of nematode density. Sensitivity of mean gross margin to changes in the parameter estimates was investigated. We discuss the general applicability of the dynamic rotation model and the opportunities arising from combining the model with Bayesian calibration techniques for more efficient utilization and collection of data relevant to the economic evaluation of crop rotations.
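A minimal sketch of fitting a saturating initial-density/final-density model with the Metropolis algorithm. The model form, fixed error standard deviation on the log scale, flat priors, proposal scales, and noise-free synthetic data are all assumptions of this sketch, not the study's actual model or data:

```python
import math
import random

def model(p_init, a, m):
    """Final density from initial density: multiplication rate a with
    saturation parameter m."""
    return a * p_init / (1.0 + p_init / m)

def metropolis(data, n_iter=4000, seed=0):
    """Random-walk Metropolis for (a, m), Gaussian error of fixed sd 0.1
    on log density (the fixed sd and flat priors are sketch assumptions)."""
    rng = random.Random(seed)

    def loglik(a, m):
        if a <= 0 or m <= 0:
            return -math.inf
        return -sum((math.log(pf) - math.log(model(pi, a, m))) ** 2
                    for pi, pf in data) / (2 * 0.1 ** 2)

    a, m = 1.0, 100.0
    ll = loglik(a, m)
    samples = []
    for _ in range(n_iter):
        a_new, m_new = a + rng.gauss(0, 0.1), m + rng.gauss(0, 10.0)
        ll_new = loglik(a_new, m_new)
        if rng.random() < math.exp(min(0.0, ll_new - ll)):  # accept/reject
            a, m, ll = a_new, m_new, ll_new
        samples.append((a, m))
    return samples

# synthetic single-crop data with true a = 3, m = 200
data = [(pi, model(pi, 3.0, 200.0)) for pi in (10.0, 30.0, 100.0, 300.0, 1000.0)]
samples = metropolis(data)
a_hat = sum(s[0] for s in samples[2000:]) / len(samples[2000:])
```

In the multi-year setting, the same sampler would be run over a likelihood chaining this map across successive crops, which is what allows information from different experiments to be combined.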

14.
Hyperspectral imaging is a promising technique for resection margin assessment during cancer surgery. Only a specific amount of the tissue below the resection surface, the clinically defined margin width, should be assessed. Since the imaging depth of hyperspectral imaging varies with wavelength and tissue composition, this can have consequences for its clinical use as a margin assessment technique. In this study, a method was developed that allows hyperspectral analysis of resection margins in breast cancer. The method uses the spectral slope of the diffuse reflectance spectrum in wavelength regions where the imaging depth in tumor and in healthy tissue is equal. Tumor can thereby be discriminated from healthy breast tissue while imaging up to a depth similar to the required tumor-free margin width of 2 mm. Applying this method to hyperspectral images acquired during surgery would allow robust margin assessment of resected specimens. In this paper we focused on breast cancer, but the same approach can be applied to develop methods for other types of cancer.

15.
Pharmaceutical manufacturing processes consist of a series of stages (e.g., reaction, workup, isolation) to generate the active pharmaceutical ingredient (API). Outputs at intermediate stages (in-process controls) and the API need to be controlled within acceptance criteria to assure final drug product quality. In this paper, two methods based on tolerance intervals for deriving such acceptance criteria are evaluated. The first method is serial worst case (SWC), an industry risk-minimization strategy, wherein the input materials and process parameters of a stage are fixed at their worst-case settings to calculate the maximum level expected from the stage. This maximum output then becomes the input to the next stage, where process parameters are again fixed at their worst-case settings. The procedure is repeated serially through the final stage. The limits calculated using SWC can be artificially high and may not reflect actual process performance. The second method is variation transmission (VT) using an autoregressive model, wherein the variation transmitted up to a stage is estimated by accounting for the recursive structure of the errors at each stage. Computer simulations at varying extents of variation transmission and process stage variability were performed. For the scenarios tested, the VT method is demonstrated to maintain the simulated confidence level better and to estimate the true proportion parameter more precisely than SWC. Real data examples are also presented that corroborate the findings from the simulation. Overall, VT is recommended for setting acceptance criteria in a multi-staged pharmaceutical manufacturing process.
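The contrast between the two methods can be sketched for a linear carryover process y_k = rho * y_(k-1) + e_k. The AR(1) form, the carryover fraction rho, and the z multiplier below are illustrative assumptions, not the paper's specification:

```python
import math

def swc_limit(rho, sigma, stages, z=2.33):
    """Serial worst case: fix each stage's input at its upper limit, then
    add a z*sigma allowance for that stage's own variation."""
    limit = 0.0
    for _ in range(stages):
        limit = rho * limit + z * sigma
    return limit

def vt_limit(rho, sigma, stages, z=2.33):
    """Variation transmission: under AR(1)-style carryover, the variance
    reaching the last stage is sigma^2 * sum(rho^(2j)); the limit is set
    on the resulting standard deviation."""
    var = sigma ** 2 * sum(rho ** (2 * j) for j in range(stages))
    return z * math.sqrt(var)

# e.g. impurity carryover rho = 0.5 over 4 stages, per-stage sigma = 1
swc = swc_limit(0.5, 1.0, 4)
vt = vt_limit(0.5, 1.0, 4)
```

Because SWC adds worst-case allowances linearly while VT combines variances in quadrature, SWC always yields the wider (more artificially inflated) limit here.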

16.
We have generated embryonic stem (ES) cells and transgenic mice carrying a tau-tagged green fluorescent protein (GFP) transgene under the control of a powerful promoter active in all cell types including those of the central nervous system. GFP requires no substrate and can be detected in fixed or living cells, so is an attractive genetic marker. Tau-tagged GFP labels subcellular structures, including axons and the mitotic machinery, by binding the GFP to microtubules. This allows cell morphology to be visualized in exquisite detail. We test the application of cells derived from these mice in several types of cell-mixing experiments and demonstrate that the morphology of tau-GFP-expressing cells can be readily visualized after they have integrated into unlabeled host cells or tissues. We anticipate that these ES cells and transgenic mice will prove a novel and powerful tool for a wide variety of applications including the development of neural transplantation technologies in animal models and fundamental research into axon pathfinding mechanisms. A major advantage of the tau-GFP label is that it can be detected in living cells, and labeled cells and their processes can be identified and subjected to a variety of manipulations such as electrophysiological cell recording.

17.
K. K. Lan, J. M. Lachin. Biometrics 1990, 46(3):759-770
To control the Type I error probability in a group sequential procedure using the logrank test, it is important to know the information times (fractions) at the times of interim analyses conducted for purposes of data monitoring. For the logrank test, the information time at an interim analysis is the fraction of the total number of events to be accrued in the entire trial. In a maximum information trial design, the trial is concluded when a prespecified total number of events has been accrued. For such a design, therefore, the information time at each interim analysis is known. However, many trials are designed to accrue data over a fixed duration of follow-up on a specified number of patients. This is termed a maximum duration trial design. Under such a design, the total number of events to be accrued is unknown at the time of an interim analysis. For a maximum duration trial design, therefore, these information times need to be estimated. A common practice is to assume that a fixed fraction of information will be accrued between any two consecutive interim analyses, and then employ a Pocock or O'Brien-Fleming boundary. In this article, we describe an estimate of the information time based on the fraction of total patient exposure, which tends to be slightly negatively biased (i.e., conservative) if survival is exponentially distributed. We then present a numerical exploration of the robustness of this estimate when nonexponential survival applies. We also show that the Lan-DeMets (1983, Biometrika 70, 659-663) procedure for constructing group sequential boundaries with the desired level of Type I error control can be computed using the estimated information fraction, even though it may be biased. Finally, we discuss the implications of employing a biased estimate of study information for a group sequential procedure.
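A sketch of the exposure-based information fraction together with an O'Brien-Fleming-type alpha-spending function of the Lan-DeMets form. The patient-exposure numbers are hypothetical, and the critical value 1.959964 is hard-coded to avoid an inverse-CDF dependency:

```python
import math

Z_HALF_ALPHA = 1.959964  # Phi^{-1}(0.975), hard-coded for this sketch

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def obf_spending(t, z_half_alpha=Z_HALF_ALPHA):
    """O'Brien-Fleming-type spending function: cumulative two-sided Type I
    error spent by information fraction t; spends almost nothing early and
    the full alpha at t = 1."""
    return 2.0 * (1.0 - phi(z_half_alpha / math.sqrt(t)))

def exposure_information_fraction(current_exposure, planned_total_exposure):
    """Maximum-duration design: estimate t by the fraction of total patient
    exposure (slightly conservative under exponential survival, per the
    article)."""
    return current_exposure / planned_total_exposure

t = exposure_information_fraction(420.0, 840.0)  # hypothetical patient-years
spent = obf_spending(t)
```

Plugging the exposure-based estimate of t into the spending function is the computation the article shows remains valid even though the estimate is biased.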

18.
Noninferiority trials
Noninferiority trials are intended to show that the effect of a new treatment is not worse than that of an active control by more than a specified margin. These trials have a number of inherent weaknesses that superiority trials do not: no internal demonstration of assay sensitivity, no single conservative analysis approach, lack of protection from bias by blinding, and difficulty in specifying the noninferiority margin. Noninferiority trials may sometimes be necessary when a placebo group cannot be ethically included, but it should be recognized that the results of such trials are not as credible as those from a superiority trial.

19.
One method for demonstrating disease modification is a delayed-start design, consisting of a placebo-controlled period followed by a delayed-start period wherein all patients receive active treatment. To address methodological issues in previous delayed-start approaches, we propose a new method that is robust across conditions of drug effect, discontinuation rates, and missing data mechanisms. We propose a modeling approach and test procedure to test the hypothesis of noninferiority, comparing the treatment difference at the end of the delayed-start period with that at the end of the placebo-controlled period. We conducted simulations to identify the optimal noninferiority testing procedure, to ensure the method was robust across scenarios and assumptions, and to evaluate the appropriate modeling approach for analyzing the delayed-start period. We then applied this methodology to Phase 3 solanezumab clinical trial data for mild Alzheimer's disease patients. Simulation results showed that a testing procedure using a proportional noninferiority margin was robust for detecting disease-modifying effects, under conditions of high and moderate discontinuation, and with various missing data mechanisms. Using all data from all randomized patients in a single model over both the placebo-controlled and delayed-start study periods demonstrated good statistical performance. In the analysis of solanezumab data using this methodology, the noninferiority criterion was met, indicating that the treatment difference at the end of the placebo-controlled studies was preserved at the end of the delayed-start period within a pre-defined margin. The proposed noninferiority method for delayed-start analysis controls the Type I error rate well and addresses many challenges posed by previous approaches. Delayed-start studies employing the proposed analysis approach could be used to provide evidence of a disease-modifying effect. This method has been communicated to the FDA and has been successfully applied to actual clinical trial data from the Phase 3 clinical trials of solanezumab.
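A sketch of the proportional-margin noninferiority comparison: is the treatment difference at the end of the delayed-start period at least a fraction gamma of the difference at the end of the placebo-controlled period? The effect estimates, gamma = 0.5, and the independence of the two estimates are assumptions of this sketch; the paper fits one joint model over both periods, which induces a covariance term omitted here:

```python
import math

def delayed_start_ni(d1, se1, d2, se2, gamma=0.5):
    """Test H0: d2 <= gamma * d1 against H1: d2 > gamma * d1, where d1 is
    the placebo-period treatment difference and d2 the difference at the
    end of the delayed-start period. Estimates are treated as independent
    (a simplification of this sketch)."""
    z = (d2 - gamma * d1) / math.sqrt(se2 ** 2 + gamma ** 2 * se1 ** 2)
    return z, 0.5 * math.erfc(z / math.sqrt(2))  # one-sided p-value

# hypothetical effect estimates (points on a cognitive scale)
z, p = delayed_start_ni(d1=1.8, se1=0.5, d2=1.5, se2=0.6)
```

Rejecting H0 would indicate that at least the fraction gamma of the placebo-period separation persisted, the pattern expected of a disease-modifying rather than purely symptomatic effect.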

20.
MOTIVATION: Biochemical signaling pathways and genetic circuits often involve very small numbers of key signaling molecules. Computationally expensive stochastic methods are necessary to simulate such chemical situations. Single-molecule chemical events often co-exist with much larger numbers of signaling molecules for which mass-action kinetics is a reasonable approximation. Here, we describe an adaptive stochastic method that dynamically chooses between deterministic and stochastic calculations depending on molecular count and the propensity of forward reactions. The method uses a fixed timestep and has first-order accuracy. We compare the efficiency of this method with exact stochastic methods. RESULTS: We have implemented an adaptive stochastic-deterministic approximate simulation method for chemical kinetics. With an error margin of 5%, the method solves typical biologically constrained reaction schemes more rapidly than exact stochastic methods for reaction volumes greater than 1-10 μm³. We have developed a test suite of reaction cases to test the accuracy of mixed simulation methods. AVAILABILITY: Simulation software used in the paper is freely available from http://www.ncbs.res.in/kinetikit/download.html
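A minimal sketch of the count-based switching idea for a single first-order decay reaction. The switching threshold and the per-molecule Bernoulli update are illustrative choices of this sketch, not the paper's exact propensity-based criterion:

```python
import random

def simulate(n0, k, dt, steps, switch=100, seed=0):
    """First-order decay A -> 0 at rate k per molecule, fixed timestep dt.
    Above `switch` molecules: deterministic mass-action update. Below it:
    each molecule reacts with probability k*dt per step, giving a
    first-order-accurate stochastic update on integer counts."""
    rng = random.Random(seed)
    n = float(n0)
    for _ in range(steps):
        if n > switch:
            n -= k * n * dt                      # deterministic branch
        else:
            count = int(n)
            events = sum(rng.random() < k * dt for _ in range(count))
            n = float(count - events)            # stochastic branch
    return n

big = simulate(1000, 0.1, 0.01, 2000)    # stays deterministic: ~1000 * e^-2
small = simulate(50, 0.1, 0.01, 500)     # purely stochastic regime
```

A full implementation would make the switch per reaction channel based on both count and propensity, so deterministic pools and single-molecule events can co-exist in one model.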
